
Issue 19




Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

Issue 19 By admin Issue 19 of Free Software Magazine is out, and so are another 18 fantastic articles. Tony Mobily opens the magazine with his editorial on file formats. Andrew Min and Gary Richmond join forces to provide useful tips & tricks, while Robin Monks reviews some of the best free software media players. Howard Fosdick reviews Puppy Linux, Andrew Min talks about Pidgin, and Dirk Morris covers Untangle Gateway... and that's just the tip of the iceberg! Source URL: http://www.freesoftwaremagazine.com/issues/issue_019


Winning the OpenDocument vs. OpenXML war By Tony Mobily In August 2005 Peter Quinn, the now retired Chief Information Officer of Massachusetts, decided that OpenDocument was the best way to store documents with the guarantee that they could still be opened 10, 30, 50 years from now. For a state government, this is particularly important. He led Massachusetts toward OpenDocument and OpenOffice.org. The move, which sparked controversy and ferocious lobbying, is likely to end up in history books (and while we’re at it, I’ll mention that history books in particular ought to be accessible 50, 100, 1000 years from now!).

Quinn’s move created the snowball effect we have all witnessed over the past two years: when Massachusetts switched to OpenDocument, somebody at Microsoft realised that they had a real problem, and that something needed to be done right away. Having OpenXML (Microsoft’s format) approved by an official standards body quickly, very quickly, became Microsoft’s priority. To start with, it looked like the software giant had managed to fool everybody and have things its way. Things turned out to be a little more complicated than that. Right now, the future doesn’t look too bright for Microsoft, as the ISO fast-tracking of OpenXML is hitting problems on all fronts; money can open many doors, but it can’t buy you broad consensus on a bogus standard pushed through via sneaky practices. Microsoft keeps managing to defend an indefensible format day after day, but it’s proving to be an uphill battle. However, the real war has only just started. ISO allowing Microsoft to fast-track OpenXML is in itself a really bad sign: a sign of a standards body that is willing to be bent by lobbying and by the big muscles (or dollars?) of a company that has grown too big.
I know Microsoft, I have witnessed computer history unfold before me for decades, and I sadly believe that OpenXML will eventually manage to squeeze through the standardisation process and, well, become a competing standard (an expression that is absurd in itself, if you ask me). While the standardisation war is absolutely crucial, I firmly believe that the only way this battle can be won is by making sure that people use OpenDocument in their everyday lives. This sounds obvious, but well, it isn’t, and it isn’t an easy goal. Here are some things that can be done to fight the battle:

• Ask companies to make documents available in OpenDocument format. Bank forms, mortgage application forms, templates, etc.: write to every company you use, and ask them to provide ODF files as well as DOC/PDF ones.

• Lobby Microsoft so that it directly supports OpenDocument files. Right now, there are two options: the Sun ODF Plug-in for Microsoft Office and the OpenXML/ODF Translator Add-ins for Office. Neither of them is optimal. They both have problems and limitations, mostly because of Microsoft. For example, Sun’s plugin has problems that stem directly from Microsoft: every time you open an ODF document, you get: “This file needs to be opened by the ODF Text Document text converter, which may pose a security risk if the file you are opening is a malicious file. Choose Yes to open this file only if you are sure it is from a trusted source.” Also, you can’t open an OpenDocument file using the plugin with Office 2007, which has a bug that causes it to ignore… other input filters. These are obviously show-stoppers for the average user. I see these plugins as temporary solutions. Since OpenDocument is a standard, it should be included in Microsoft Office by default, even though this might take a great deal of lobbying and flexing of muscles.

• Convince more and more OEMs to provide OpenOffice.org pre-installed on their computers. Dell is already doing it with their Ubuntu machines. However, what OpenDocument really needs is for OpenOffice.org to be an available option (a free one!) when you get a new Windows computer. This is made hard by Microsoft trying to put a “test drive” version of Office on OEM computers. Again, I have seen some movement here, and I think it’s a very obtainable goal.

• Make people and small offices realise that Office doesn’t come with Windows, and that if they are using Office without paying for it, they are doing something illegal. Show them OpenOffice.org, a valid alternative. This seems obvious, but I am always amazed by the number of people who have never



seen or heard of OpenOffice.org.

• If you have a popular web site, only give away ODF files; if people complain, tell them to download and install OpenOffice.org. In my view, OpenOffice.org should become “the Firefox of the office suites”.

Basically, members of the free software community can and should do more than just watch the fight and rest on their laurels: the more people that fight, the more likely we are to win this battle, which is anything but over. We should all keep in mind that OpenDocument might still become, even in the long term, a “fringe format” that nobody actually uses. Microsoft’s monopoly on file formats, if that became the case, would create unimaginable damage.

Biography Tony Mobily: Tony is the founder and the Editor In Chief of Free Software Magazine.

Copyright information Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved. Source URL: http://www.freesoftwaremagazine.com/articles/editorial_019


Tips and Tricks By Andrew Min, Gary Richmond This is a collection of tips & tricks written by Gary Richmond and Andrew Min. In this article:

• How to get the best out of the history command in GNU/Linux (Gary)
• How to close down GNU/Linux safely after a system freeze with the SysRq key (Gary)
• How to find .debs (even if you think they don’t exist) (Andrew)
• How to kill processes (Andrew)

How to get the best out of the history command in GNU/Linux (Gary) Anybody who has used the command line extensively to navigate, understand and configure GNU/Linux will know that in the course of a few months’ work it is possible to build up an extensive history of used commands. This calls for some proactive management to get the best out of it. Here are some tips to make the most of the history command. Please note, from the outset, that command history is only saved in interactive shells and does not, therefore, work with shell scripts.

By default, Bash retains the last five hundred commands you entered. If you want to see them, just open ~/.bash_history and scroll through it, or simply type history on the command line and the terminal will list them. If you know there will be a lot of output, it makes sense to pipe it to less: history | less; with less, you can see the command history one screen at a time (by pressing the spacebar) or one line at a time (by pressing the down arrow). If you’d rather not bother with less, just specify the number of commands you want (if you are pretty sure it was a quite recent one): history 25, or history | tail to output the last ten commands.

Helpfully, if you want to rerun a command and can’t quite remember its format, the history facility allows you to step through the commands one at a time by pressing the up arrow (or Ctrl + P) repeatedly at your command prompt until you find the one you are looking for; you can also use the down arrow (or Ctrl + N) to go to the “next” command. When you find the command you want to run, hit the return key to run it. If you think you know the command but can’t quite recall it exactly, you can pipe history through grep with the first letter(s) of the command you are trying to find: history | grep -i pattern, where pattern is those first letters and -i makes the match case-insensitive.
Again, use less to step through the results, or combine the two: history | grep -i pattern | less. Another useful feature related to history is Ctrl + R. This will output a prompt called (reverse-i-search). All you then need to do is start typing the command, and it will complete it with the most recent matching command from the history file. If there is more than one relevant command in the history file and the one you need is not the most recent, just type a few more letters to narrow it down. Once you see the one you want, press the return key to run it, or press the right arrow key to edit it.
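Since history is a shell builtin that only works interactively, the filtering pipelines above can be rehearsed on a stand-in file; in this sketch the file name and its contents are made up purely for illustration:

```shell
# Build a small stand-in for ~/.bash_history (history itself is
# unavailable in a non-interactive script, so we grep a file instead).
printf 'ls -l\ncd /tmp\nSSH user@host\nhistory 25\n' > /tmp/sample_history

# Like: history | grep -i ssh   (case-insensitive match)
grep -i ssh /tmp/sample_history

# Like: history | tail          (the most recent entries)
tail -n 2 /tmp/sample_history
```

The same pipelines work verbatim on the real history output in an interactive shell.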


Stop repeating yourself Statistically, it is very likely that you have used the same command more than once, however small or large. If your history is a long list, why not simply skip the duplicate entries and speed things up a bit? Add the following to either your personal bash file (~/.bashrc) or to the global bash configuration file (/etc/bash.bashrc): export HISTCONTROL=ignoreboth

This change can be made in your favourite text editor; editing the global /etc/bash.bashrc requires root, while your personal ~/.bashrc does not. For it to take effect you must restart the bash shell: you can simply log out and log in again. If you now type env you should see that setting listed: bash will, from now on, skip the duplicate entries.
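A minimal sketch of the verification step just described, setting the variable in the current shell and confirming that it shows up in the environment:

```shell
# Skip duplicate and space-prefixed commands in the history
# (the same line you would append to ~/.bashrc).
export HISTCONTROL=ignoreboth

# Confirm the setting is visible, as env would show after a re-login.
env | grep '^HISTCONTROL='
```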

Cut history down to size If you are not a big user of the command line and want to make history slimmer, then set the size permanently in one of the two bash files listed above. It takes the form export HISTSIZE=X, where “X” is replaced with the number of commands you want to keep in your history (setting X to 100 will store 100 commands). If the value is set to zero, history is disabled. Keep in mind that you will need to restart the shell for the changes to take effect. These changes may help to make you more secure. As an added measure, it is possible to set up Bash to clear history upon exit by adding the following line to your ~/.bash_logout file: /usr/bin/clear_console. In some GNU/Linux distros this may already be the default. More radically, if you want to delete your history altogether just type history -c, but be aware that there is no way to retrieve that history once it is gone.

Security is a huge subject for computers and rightly fills many books. Apart from the obvious things such as setting a root password, running virus and rootkit scanners, keeping security patches up to date and using intrusion detection systems like Snort, a quick hack (amongst many) is to check the output of the history command. If you know that you did not disable the history, and yet it’s empty (and, therefore, ~/.bash_history is empty), then it may be that you have been hacked and the Bash history has been wiped. Run ls -l ~/.bash_history and the result (unhacked) should look like this (mine): -rw------- 1 richmondg users 7228 2007-07-23 23:22 /home/richmondg/.bash_history
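The check above can be scripted defensively so that it also reports a missing file (the path is bash’s default history file; adjust if HISTFILE points elsewhere):

```shell
# Report the history file's size and permissions, or say it is absent.
# Owner-only permissions (-rw-------) and a plausible size are what you
# expect on an untampered system.
if [ -f ~/.bash_history ]; then
  ls -l ~/.bash_history
else
  echo "no ~/.bash_history found"
fi
```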

If your history is unexpectedly empty, your computer may well have been compromised, and you should probably reinstall it.

Conclusion Well, there you have it. The history command is powerful and versatile. Like alias, it can save keystrokes, aid security and save time. It helps you to work smarter and reveals yet again the true power of the command line. So what are you waiting for? Fire up that terminal!

How to close down GNU/Linux safely after a system freeze with the SysRq key (Gary) Despite our jeering at Windows for its infamous system freezes and blue screens of death, there are and will be times when your computer just locks up: the cursor is frozen, and even invoking a console with Ctrl + Alt + [F2, F3, ...] to get out of the X session (usually running on F7) does not work. My fellow blogger, Andrew Min, has given excellent tips for dealing with stubborn processes and applications that just refuse to terminate. This tip may be of assistance to those whose entire system has frozen and who aren’t happy to just do a hard power off and trust to luck that data will not be corrupted. Fortunately GNU/Linux has journalled filesystems, so the chances of this happening are reduced and you will not suffer the indignity of being told that you performed an illegal operation, or have to drum your fingers waiting impatiently for scandisk to complete.



Like Ctrl + Alt + Delete, this tip is a three-fingered Vulcan neck pinch. It consists of Alt + SysRq plus one other key (of which there are thirteen to choose from!), and your choice determines what operation is performed. This has been described as a way to communicate directly with the kernel. It used to be that you had to enable this “magic key combo” when compiling a kernel, but this is no longer necessary. (If your computer does not have a SysRq key, look for the “Print Screen” key, usually abbreviated to Prt Scr.) Normally, after certain key combos you will see “OK” and “Done”. If your kernel is really locked up you might not see them at all. If the file /proc/sys/kernel/sysrq exists then you are okay (man proc should list it), and the feature is enabled if the file contains a “1”.
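The /proc check just described can be done from any terminal while the system is still healthy; a small sketch (Linux-specific, and assumes /proc is mounted):

```shell
# Report whether the magic SysRq interface exists and what it is set to.
# A non-zero value means (at least some) SysRq functions are enabled.
if [ -f /proc/sys/kernel/sysrq ]; then
  echo "sysrq setting: $(cat /proc/sys/kernel/sysrq)"
else
  echo "sysrq interface not present"
fi

# To enable it until the next reboot (needs root):
#   echo 1 > /proc/sys/kernel/sysrq
```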

Alt + SysRq + B If you’re not running any crucial, scheduled tasks or in the middle of composing a letter or an e-mail then this key combination may be the one to use. It will reboot the system immediately without bothering to sync or unmount disks.

Alt + SysRq + R If you cannot get to a terminal by using Ctrl + Alt + F2, then use this key combination (pressed together) to get your keyboard back: it turns off what is called keyboard raw mode, allowing keyboard input even after your X session has crashed or frozen. Now, try Ctrl + Alt + F2 again and you can close down from the terminal. If that fails, move on to the next option.

Alt + SysRq + S This key combo does just what it says on the tin: it (S)yncs all filesystems, reducing the possibility of losing any data and possibly obviating the need for the system to run fsck upon reboot.

Alt + SysRq + U As you might guess, this one tries to unmount disks and remount them as read only.

Alt + SysRq + O Not so obvious, but this will power off your machine without syncing or unmounting disks (unlike Alt + SysRq + B, it does not reboot; see above).

If in doubt use a mnemonic If you are not sure about the sequence to use or just can’t remember it, why not use a memory aid like: Raising Elephants Is So Utterly Boring (REISUB), Everything Is Superb (EISUB) or So Everything Is Unusual Boot (SEIUB). If even that is too much effort, just press Alt + SysRq + H, which will bring up a helpful list of the commands above.

How to find .debs (even if you think they don’t exist) (Andrew) One of the biggest strengths of Debian (and derivatives like Ubuntu) is support for the .deb package. After all, it provides a one-click method of easily installing programs. Best of all, these programs are automatically updated via the official Debian repositories. Unfortunately, the official repositories aren’t always the best. Some programs aren’t up to date (at the time of writing, the latest version of Thunderbird is 2.0, yet the latest version in the repositories is 1.5). Worse, some packages aren’t in the repositories at all (Glest is a good example). True, you could build the program from source, but there are a number of reasons why that is undesirable (finding the dependencies, having to



download the program again to uninstall it, not automatically adding itself to the menu, etc.). How do you find good Debian software?

GetDeb I first stumbled across GetDeb when looking for a Kompozer .deb. A kind Ubuntu Forums member pointed me towards the site, calling it the go-to place when the official repositories don’t have the program. Boy was he right. Not only have I used it for Kompozer, but I have used it for Glest, Pidgin, ActionCube, and many more programs. All of them are in tidy .deb packages for easy (un)installation.

Figure 1: GetDeb

Automatix There’s another option available. It’s called Automatix. It offers pre-compiled binaries of many popular programs and drivers, including Swiftfox, xdvdshrink, Nvidia drivers, and many more. However, there are two problems with it. First, it doesn’t distinguish between free (as in beer) and free (as in speech). Even worse, many users have reported problems with Automatix, occasionally creating problems only remedied by a live CD rescue. I personally have used it without trouble, but many people recommend NOT using it. If you’re still feeling adventurous, follow the instructions on the Automatix site on how to install it.

Figure 2: Automatix

Google Many people have created third-party .debs and just haven’t submitted them to GetDeb or the official repositories. So it makes sense to search for the packages online. But why Google (besides the fact that it is the king of search engines)? The main reason is that they have a special search site called Google Linux which only searches GNU/Linux-related sites. Go there, then do a search for [INSERTPROGRAMNAMEHERE]



debian package OR .deb OR binary, replacing [INSERTPROGRAMNAMEHERE] with the name of the program, e.g. kompozer or "thunderbird 2".

Figure 3: Searching for .debs on Google Linux

Converting RPMs to DEBs One of the biggest competitors to the .deb format is the .rpm package (used by Red Hat, Fedora, Mandriva, SuSE, ArkLinux, and many more). Luckily, there is a tool that will convert many (but not all) RPMs to DEBs. It is called Alien. Just install the alien package with apt, aptitude, or a package management tool like Synaptic. Then, open a terminal window, cd to the directory containing the package you wish to install and type alien [INSERTFILEHERE] --scripts -i, replacing [INSERTFILEHERE] with your RPM (e.g. amarok.rpm). The package will be converted and installed. If you use KDE, use Chad M’s RPM Installer for Konqueror or Dolphin, which lets users just right-click on an RPM and install it without having to remember Alien’s syntax.

Figure 4: An example Alien conversion

Last resort: making your own Sometimes, none of the above will work. Luckily, if the program is open source and uses make to compile and install, it might not be as bad as you think. All you need is two utilities called AutoApt and CheckInstall. What you do is download and install the auto-apt and checkinstall packages using apt, aptitude, or a package management tool like Synaptic. Then, open a terminal and cd to the location of the program you want to build from source. Type auto-apt run ./configure. This will (hopefully) download all the requirements for the program. To finish, type make and then sudo checkinstall to create and install a .deb. Obviously, Checkinstall won’t work with every single program, and AutoApt won’t find every single dependency. Still, they’re viable alternatives to using apt-cache search to search for every dependency, then compiling the program itself.



Feeling experimental? Then you should try AutoDeb. It’s an experimental bash script that combines a modified version of AutoApt and CheckInstall. Installation is a breeze: just download the binary file here, and make it executable (chmod +x ./autodeb). Then, you’re set! You don’t even need to unzip (or untar) the source archive, just type autodeb archive.tar.gz.

Figure 5: Using AutoDeb

Killing processes (Andrew) One of the things I hate about Windows is that there is no good way to kill frozen processes. Theoretically, you type Ctrl-Alt-Delete, wait for Task Manager to pop up, and kill the process. But in reality, the process doesn’t always die immediately (it usually takes multiple tries and a very long time). GNU/Linux users don’t have this problem. Here’s how to end processes using the terminal, a few GUIs, and even a first person shooter.

Killing processes in the terminal The terminal (also known as the command line) is the most powerful tool for virtually any job. So let’s look at how to kill processes with it. First, open your favorite terminal program (konsole for KDE, gnome-terminal for Gnome, and xterm are some good ones). Then, type ps -A (not ps -a). This gives you a listing of all the programs currently running and their PIDs. To kill a program, type kill PIDHERE, replacing PIDHERE with the PID of the program as shown by ps. See figure 1a for an example. Note that sometimes you will get error messages when trying to kill a program. If you do, you must kill the program as root (sudo kill PIDHERE, or su followed by kill PIDHERE, depending on your distribution).

Figure 1a: Killing Pidgin using ps and kill in xterm
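The whole ps-then-kill workflow can be rehearsed safely on a throwaway process; this sketch uses sleep as a stand-in for a frozen program:

```shell
# Start a disposable process to practise on.
sleep 300 &
target=$!                        # its PID; ps would show the same number

ps -A | grep sleep               # locate it in the process table

kill "$target"                   # send the default SIGTERM
wait "$target" 2>/dev/null || true

# kill -0 merely tests for existence; it fails once the process is gone.
if kill -0 "$target" 2>/dev/null; then
  echo "still running"
else
  echo "process $target is gone"
fi
```

If a process ignores SIGTERM, kill -9 PIDHERE (SIGKILL) is the forceful fallback.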



Often, searching through ps’s output is like searching for a semi-colon in a 5MB source file. Luckily, if you know the name of the program, it’s easy to find. Instead of plain ps -A, type ps -A | grep NAMEHERE, replacing NAMEHERE with the string you want to find. For example, if I typed ps -A | grep fire, all the processes with the string fire in them would be listed. You can also add the -i flag to grep (so you’d type ps -A | grep -i fire) to make the search case insensitive.

How to kill processes with GUIs Both KDE and Gnome provide their users with GUIs for killing processes. In KDE, run KSysGuard (many distributions also bind it to Ctrl-Esc). Just click on the “Process Table” tab, select the item, and click “Kill”. In Gnome, open System Monitor (aka Gnome System Monitor), select the “Processes” tab, select the item, and hit “End Process”. If you can’t kill the process, try running KSysGuard or Gnome System Monitor as root (see the previous section for more). If you just want to kill an inactive window without having to dig up a PID or run a bloated GUI, there’s an option called xkill. Just open a terminal, and type xkill. Click on a window, and it will be killed (right-click to cancel). KDE users can also type Ctrl-Alt-Esc to bring up xkill.

A bonus: killing processes and having fun If you really want to “kill” a process, then you need to try psDooM (figure 2a). Download (make sure it isn’t a patch, but a source or a binary) and install it, copy an IWAD (Freedoom has a few available) into wherever you installed psDooM (usually /usr/local/games/psdoom), and then run one of the executables located in the folder. Now, when you get mad at the world, all you have to do is open up a Microsoft product in WINE, and then shoot it.

Figure 2a: Literally killing processes in psDooM

Conclusion You now know how to kill a process with the terminal, kill a process with KSysGuard, Gnome System Monitor, or xkill, and even shoot a process! Next time a Microsoft user complains about his system freezing, all you have to do is grin and show him GNU/Linux.

Biography Andrew Min: Definition: Andrew Min (n): a non-denominational, Bible-believing, evangelical Christian. (n): a Kubuntu Linux lover (n): a hard core geek (n): a journalist for several online publications including Free Software Magazine, Full Circle Magazine, and Mashable.com Gary Richmond:



An aspiring wannabe geek whose background is a B.A. (Hons) and an M.Phil in seventeenth-century English, twenty-five years in local government, recently semi-retired to enjoy my ill-gotten gains.

Copyright information Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved. Source URL: http://www.freesoftwaremagazine.com/articles/issue_19_tips_and_tricks


Free as in free milk Microsoft's business practices in developing countries By David Jacovkis A first draft of this article had been sitting for months on my hard disk. I decided to finish it after reading that Microsoft will offer its operating system and office suite for $3 per machine in developing countries. That made me think of the way the giant software company “helps” these countries by giving away licenses for its proprietary software almost for free, and that in turn made me think of free milk. Let me tell you about it.

The Nestlé boycott In 1977 a boycott campaign was launched against Nestlé to protest its marketing of breast milk substitutes. To make a long story short, Nestlé’s commercial agents in developing countries gave free samples of infant formula to mothers shortly after they had given birth. They would shamelessly lie to them about the alleged advantages of the substitute product over breast milk, encouraging them not to breastfeed their babies. Since lactation is interrupted if the mother doesn’t breastfeed for several days, this forced a dependency on the substitute: when the mother ran out of free samples she found out that she couldn’t breastfeed her child any more, and had to buy more infant formula.

The use of breast milk substitutes in developing countries has been found directly or indirectly responsible for several health problems in infants. The water used to prepare the product is often contaminated in areas where the drinking water supply is deficient. Also, when the mother has to buy the product she will sometimes use less than the indicated dose to make it last longer, causing malnutrition in the infant. Besides, breast milk is the best nutritional source for newborn infants if the mother is healthy, and provides babies not only with all the necessary nutrients but also with antibodies that protect them from several illnesses. It also strengthens the bond between mother and child, and causes the release of hormones into the mother’s body that delay the return of the fertile periods, helping her space pregnancies.

Breast milk is the best nutritional source for newborn infants. (c) Nico Maessen, CC-by-nd 2.0. Thus, due to Nestlé’s marketing strategy, both the mother and the baby lost the multiple benefits of breastfeeding while the multinational company profited from their dependence on the substitute product. The boycott campaign finally led the World Health Organization to establish the International Code of Marketing of Breast-milk Substitutes, which forbids most marketing strategies for breast milk substitutes. The case was so clear that public opinion turned against the big corporation from the beginning, and, even though the issue arises again now and then, Nestlé has tried by all means to clean up its image.



This is very interesting indeed, I can hear you saying, but what does breast milk have to do with free software? Patience, I’m getting there!

Microsoft’s free milk For some years now, Microsoft has conducted an intensive marketing campaign in developing countries to make sure that its software is used in educational institutions. This includes negotiating license discounts with governments, providing training for educators and even giving their software away for free. And they claim to do it for the sake of future generations, who will benefit from the education of today’s students. That’s why, Microsoft says, they are giving their software away for free. Free as in free milk, for this strategy has many things in common with Nestlé giving free samples of breast milk substitutes. I’ll analyse the most evident ones.

Substituting a natural product Just like infant formula is a substitute for real breast milk, proprietary software substitutes what is natural for us: sharing knowledge to improve our lives. Sharing information is as natural as breathing for human beings. The history of art, science and technology is composed of incremental steps that build on previous knowledge. Even completely novel inventions and revolutionary theories are to some degree based on what was previously known. Newton saw further by standing on the shoulders of giants; personal computers exist thanks to hundreds of previous inventions, from the telegraph to the integrated circuit.

Proprietary software, the kind of software that Microsoft sells, is distributed in binary form. This format is conceived to be executed by computers, not to be read by humans. When you receive a program in binary form, all you can do with it is execute it on the appropriate type of hardware. In many cases, you must also accept a license that restricts the ways in which you can use the software, or the number of machines on which you can install it. On the other hand, free software, also known as open source software, is distributed as source code, the set of instructions written by the developer in a specific programming language like C or Java. This means that anyone familiar with the language can read it, learn from it and try to improve it. Besides, free software licenses allow the modification and redistribution of the code, so that everyone can contribute to its development and benefit from the result. Free software is free as in free speech, not as in free milk.

These features have led to a development model that is completely different from the traditional way in which companies develop software. Successful free software projects, like the Apache web server or the Linux kernel, are developed by a heterogeneous community of programmers.
Some of them work for companies that use the software or provide services, some of them are students working on a project, some of them are enthusiastic hackers who work on free software in their free time. There are no marketing departments, sales reports or productivity bonuses. Each community is a self-organised entity with its own rules, and they demonstrate every day that the software they produce is at least as good as proprietary software. It is plain to see which form of software development and distribution is more natural to us, and which can better promote the development of all nations. The adoption of free software, especially in education, is the only way to bridge the digital divide between developing nations and the areas where most of the software is actually produced. Only free software can provide us with the tools to access the information society without leaving future generations with the mortgage of a technological dependence on a private corporation.

Creating a dependence When the biggest software corporation in the world starts giving away its products, the motivations behind this strategy must be carefully examined. As an example, the Fresh Start for Donated Computers program provides old donated computers with the company’s proprietary operating system. If some Swiss bank donates a bunch of old computers to a school in Guatemala, Microsoft will provide the software for free. For a



few years, teachers and students will not have to pay to use the software, but when the institution receives new computers, or the licenses expire, what will they do? They are very likely to pay for the licenses of the software they have been trained to use, or they will just continue using the software without paying the license. Microsoft’s spokesmen have said on several occasions that the company prefers people using illegal copies of their software to not using it at all. Dependence on breast milk substitutes lasts until the lactation period is over, and it can cause great damage during that time. But the dependence of a group of people on a proprietary software system lasts as long as that platform exists, or until they have all been trained to use an alternative system. This network effect is deliberately reinforced by Microsoft with the use of closed file formats in popular applications, like office suites. Microsoft’s gifts are part of a plan to control emerging markets from the very beginning. Once those countries are dependent on the product, they become potential buyers of upgrades and new versions. According to Microsoft’s senior vice president for emerging markets, Orlando Ayala, “…for Microsoft this is an investment in the long term. These are the consumers of the future.” You can say that louder, but not clearer.

Aiming at the weakest

Microsoft's marketing strategy has been very aggressive in developing countries, where the need for external help in IT-related areas is greater, but also in less-favoured areas of the United States and Europe. And this gift is much cheaper for the software giant than infant formula samples are for Nestlé, since the marginal cost of a software product is negligible.

Bridging the digital divide. © Jason Hudson, CC-by-nc-sa 2.0.

Many of those who benefit from Microsoft's gifts have their first contact with computers at that time. If they receive no further information, they will never know that there are alternatives that could be much better for themselves and their communities. Among them, children and young students are the most attractive targets for Microsoft's campaigns, since they are, as we have seen, "the consumers of the future".

Conclusion

We have seen how two different corporations use free samples of their products to create a dependence in the most vulnerable areas of the world. Even though the similarities between the two marketing strategies are evident, Microsoft has earned the image of a company concerned with social causes, while Nestlé has been the target of a successful boycott campaign that forced it to change its marketing strategy.

This double standard is maintained by the lack of public awareness of the implications of proprietary software. Making these implications known, and promoting the use of free software in education, is a step towards a world where access to knowledge is not restricted to those who can afford it.

Biography

David Jacovkis has worked as a systems engineer, ICT consultant and editor of educational materials. Nowadays he collaborates with the Universitat Oberta de Catalunya and ISOC.nl on the SELF Project. He is co-founder of the Free Knowledge Institute [freeknowledge.eu]. His main interests are the ethical and philosophical implications of knowledge sharing, the technical and non-technical aspects of security in systems and networks, and writing about these issues for non-experts.

Copyright information This article is made available under the "Attribution-Sharealike" Creative Commons License 3.0 available from http://creativecommons.org/licenses/by-sa/3.0/. Source URL: http://www.freesoftwaremagazine.com/articles/free_as_in_free_milk


Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

No budget learning with free software
The Guus Kieft School
By Alan Berg

This article describes work in progress: applying Ubuntu Linux sensibly within an underfunded school that is part of a wider, well-thought-out alternative educational structure. I shall emphasise best practices and try not to dwell too much on the underlying technologies.

The school

Education is a fundamental pillar of civilization. Without learning structures, humanity will descend again into the dark ages, and superstition will rule. Supporting children in their learning activities is not an optional extra, but a responsibility of the global community that we all live within. Placing these sentiments in context: we all pay our taxes, and as part of those taxes we collectively pay for the educational system that supports our way of life and our future betterment. This system mostly works. The pressure for consistency of quality to some extent enforces uniformity within the school system; the tool of this enforcement is budgetary, with a layer of legal requirements. If you step away from the mainstream, funding becomes scarce. Methodologies outside the majority view can be dangerous for long-term survival. If a school sits at the edges of common practice, then receiving government funding takes a lot of patience, skill, paperwork and diplomacy. Enter the Guus Kieft School.

The Guus Kieft School sits snugly next to a park in Amstelveen (Holland). The quality of the air is excellent, and the motivation of the student children (ages 5-14) envelops you on entrance. Positive energy resonates within every nook and cranny of the old scouting building, and pleasant mosaics (figure 1) and markings scatter the grounds. My younger son is part of this outstanding enterprise; notice my pride in the statement.

Figure 1: Mosaic seen as one walks into the school grounds

The Guus Kieft School is based on a sociocratic organizational model similar to that of the Sudbury Valley School in Framingham, Massachusetts. The sociocratic approach is decision-making by consent, and it places the student to a great degree at the center of influence. The school's structure gives students free range to learn at their own pace and with their own sense of direction. For a mildly autocratic parent such as me, that can be scary, as you need an element of trust in the process. However, after a year of observing the incremental results in my younger son, I am delighted with this approach's structural effect: building self-confidence, an understanding of social context, and plain old knowledge. Despite the obvious quality that permeates the project, the school is fairly new and lacks significant funding. The school therefore still needs to gain government acceptance for long-term viability. As a consequence, there is currently a requirement for a low-budget, no-budget IT policy. Having worked as a teacher, course writer and developer of heavily used, scaled-up systems, it seemed natural for me to want to help define a two-year IT policy to bridge a perceived funding gap.

Consent driven contracts

It may very well be in the best interests of a lead miner to pollute the river water upstream of a village, but it is definitely not in the interests of the community as a whole. Consent-driven policy is very democratic in scope and on average causes the least damage to the local environment. I can say the same for children playing on the internet in school time. The power of the internet is very attractive. However, if a group forms around the computer area whose only activity is to shoot as many targets as possible, then the ebb and flow of learning of the group and the wider community is disrupted. Before launching new IT facilities, the students and mentors at the school need to agree on a definition of fair use of the computers. If you do not codify best practices in advance, you risk ending up with a select few monopolizing the internet connection for gaming. In an educational environment, a fast computer with a brilliant graphics card risks the concentration and focus of a whole subset of the school, primarily the male segment. Defining a personal contract, and allowing the best interests of the whole community to speak through sociocratic processes, enables broader advantage. Topics that each student and mentor need to agree on include how to share time, how much time the student spends on the computer, and for what purpose. Without such consent, the lack of school policy may inadvertently place young minds at risk.

Ubuntu has a positive role to play

Okay, the community has agreed on best practice. Now it's time for the IT policy to structurally support that governance. The school is small, and we already have one laptop connected to the internet. I intend to install two reference machines chock full of free software and show the parents and mentors how easy it is to customize for a particular student's needs. Better still, I will describe how to copy shortcuts from one machine to another, so that with a couple of command-line actions the student's home machine has the same software as at school. If a mentor wishes the student to focus in on a task, or if a student wishes to follow through at home, the learning path is not limited by technology.

Before buying the new, let us consider the old. With the help of a kind parent, the school has scored two Pentium 1.5 GHz machines. The memory and hard disks are limited: time to call in GNU/Linux in the shape of Ubuntu. At this point you may well be asking: why not Edubuntu or another of the many excellent flavors of GNU/Linux? The answer is that you can; there are many viable distributions. However, Ubuntu is a strong candidate. Ubuntu has good, solid market penetration, and I have no issues in recommending it as a desktop for home use, in dual-boot mode with Windows or stand-alone. A standard install includes all the software one has come to expect, like office suites, multimedia applications and games, and, just as importantly, a standard file structure and application installation process. I prefer APT over RPM for package management and find in practice that there are very few dependency issues. Finally, Ubuntu is well known, and the homepage does promise to continue releasing for a solid number of cycles to come.

The simpler the approach one takes, the less maintenance is required later.
I intend the flow of operation for sharing consistency between school and home computers to be similar to:

1. Install a given version of Ubuntu.
2. Install all the educational free software that you can via apt-get or a package manager.
3. Add a user account per student, and hence a clean desktop.
4. The mentor sits with the student and they agree which software is useful for a given period; the mentor places shortcuts on the student's desktop for each piece of software.
5. If the parents wish for a mirror on their home computer, the relevant mentor emails the desktop shortcuts with a list of apt-get commands. Notice that the standard environment enables this approach.
6. The parents download the shortcuts to the student's desktop at home and then run the enclosed apt-get commands to install the software.

Perhaps (at worst) I may periodically need to explain to the teachers and parents what apt-get is, or hold an install fest for Ubuntu, but this is not a significant cost as the parents are all so friendly.
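As a sketch of steps 5 and 6, the mentor-side bundling and the parent-side unpacking could look like the script below. The file names, the bundle layout and the choice of KPercentage are my own illustrative assumptions, not a fixed scheme, and the apt-get command is only echoed so the sketch runs without administrative rights.

```shell
#!/bin/sh
# Illustrative sketch of steps 5-6; all names and paths are hypothetical.

# --- mentor side: build the email attachment ---
mkdir -p bundle
printf '[Desktop Entry]\nType=Application\nName=KPercentage\nExec=kpercentage\n' \
    > bundle/kpercentage.desktop
printf '#!/bin/sh\necho sudo apt-get install -y kpercentage\n' \
    > bundle/software_to_install_1.sh
tar czf lesson_bundle.tar.gz bundle

# --- parent side: unpack the attachment and mirror the school desktop ---
mkdir -p home_desktop unpacked
tar xzf lesson_bundle.tar.gz -C unpacked
cp unpacked/bundle/*.desktop home_desktop/   # the shortcuts
sh unpacked/bundle/software_to_install_1.sh  # would run the apt-get commands
```

In real use, the echo would be dropped so the install command actually runs and prompts the parent for their password.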

Installing educational software

De-skilling IT is vital. Technology for the sake of whiz and bang is fun for a rainy Saturday afternoon, but in the school environment it only gets in the way and increases support costs. I do not intend to micromanage the IT infrastructure. If a recipe for action takes more than a couple of pages of A4, then the chances are that I will be spending unexpected quality time with misconfigured, insecure, virus-ridden software. Simple is good; complex we leave to those who maintain grid environments or are paid for this type of noise by the hour.

On a sacrificial computer with the current version of Ubuntu, select the Add/Remove option from the Applications menu, which is normally seen at the top left of the screen. Click on the Education icon (figure 2) and tick the Periodic table of elements. Submit via the Apply button.

Figure 2: Adding or removing educational software from the GUI

Within a few seconds you have functioning and worthy software (figure 3), which the student can later find under the Education menu item. Debian distributions sport a number of excellent periodic tables, including ones that have photographs of the individual elements. I suspect it is now only a question of time before the developers include multimedia chemistry lessons.


Figure 3: A wonderfully exuberant periodic table

Exploring for a few more seconds you will, with very little effort, discover Celestia: eye candy for the masses. My over-imaginative brain is feverish with the prospect of one-on-one astronomy lessons based on Celestia and Stellarium, pointing to energetic events in the simulated heavens and then later electronically controlling a telescope to observe those fuzzy planets. Okay, in reality, a $30 pair of binoculars viewing the moon or globular clusters.

Figure 4: Celestia eye candy

Or how about installing KPercentage (figure 5), which works (despite the K in its name) under the GNOME desktop as well as the KDE desktop? My younger son loves mathematics, and I have seen him spend hours on similar software, long after the boundary of my own patience would have been crossed.


Figure 5: Rigorously exercising the young mind with KPercentage

Many mathematicians have a strong musical aspect to their souls, which Solfege (figure 6) can help unlock. With Solfege, students can learn to differentiate notes and intervals, scales and theory.

Figure 6: If music be the food of life, play C minor

Moreover, the list goes on and on. For example, my elder son, admittedly at another school, is learning European geography, and KGeography (figure 7) reinforces his scores via good old-fashioned repetition.

Figure 7: Graphical learning of facts via KGeography

The Add/Remove option only displays a relatively limited number of packages. From the command line, searching for mathematical packages is as simple as typing the command:

sudo apt-cache search mathematics

Once you have found your target package, zoom in with the -f option, for example:

sudo apt-cache -f search junior-math

Package: junior-math
Priority: extra
Section: universe/misc
Installed-Size: 36
Maintainer: Ben Armstrong <synrg@sanctuary.nslug.ns.ca>
Architecture: all
Version: 1.3
Depends: bc, bsdgames, snowflake, xaos, xbase-clients
Filename: pool/universe/j/junior-math/junior-math_1.3_all.deb
Size: 1896
MD5sum: 2c746ec40c76ff43c304f63a23983d18
SHA1: 1dd73fe2eb221eba933a67fc8b234187f71b8bae
SHA256: 258c90fc6a24aaa875025b5e893cb8635abec52a6fe4061c1e5dfb4611f248ae
Description: Debian Jr. educational math
 This meta package will install educational math programs suitable for
 children. Some of the packages use mathematics that is well beyond the
 abilities of young children (e.g. fractals and cryptography), but we hope
 that by using them they gain an appreciation of the beauty of math from an
 early age. Other packages allow children to explore and learn math concepts
 in an engaging, interactive way. Some packages are more general, providing
 math activities as only one part of the package, e.g. bsdgames includes
 "arithmetic" in addition to other non-math games, and xbase-clients
 provides xcalc.
Bugs: mailto:ubuntu-users@lists.ubuntu.com
Origin: Ubuntu

To install the software type: sudo apt-get install junior-math

Sending an install script by email is straightforward: simply add the previous line to a text file, software_to_install_ref.sh. Send the file as an attachment to the interested parents and let the parent run the command:

sh ./software_to_install_ref.sh

The parent will then need to enter their administrative password, and the software will auto-magically be installed. Okay, so I am stating the obvious. However, being obvious does not make the approach less powerful. The fact that this methodology is intuitive actually adds power and value. At this point you may be wondering why ref is part of the file name. Well, "ref" is actually a placeholder. Every file name should have a unique identifier so that the teacher/parent can go back and find which files were installed and for what reason. A good ref would be something like a date or just an incremented number. The mentor should make notes, and perhaps a lesson plan or two, based on each ref. Removing software is just as straightforward:

sudo apt-get remove junior-math
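As a minimal sketch of this naming scheme (the ref value, file names and package list below are my own examples, not a fixed convention), a mentor could generate the paired install and remove scripts like this:

```shell
#!/bin/sh
# Sketch: generate paired install/remove scripts for one lesson "ref".
# REF and PACKAGES are illustrative values chosen by the mentor.
REF="2007-10-01"
PACKAGES="junior-math kpercentage"

INSTALL="software_to_install_${REF}.sh"
REMOVE="software_to_remove_${REF}.sh"

echo '#!/bin/sh' > "$INSTALL"
echo '#!/bin/sh' > "$REMOVE"
for p in $PACKAGES; do
    echo "sudo apt-get install -y $p" >> "$INSTALL"
    echo "sudo apt-get remove -y $p"  >> "$REMOVE"
done
chmod +x "$INSTALL" "$REMOVE"
```

The mentor then mails both files together; the parent runs the install script now and keeps the remove script for when the lesson period ends.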

Therefore, because of the simplicity of it all, the school should also send a remove script at the same time as the add script. This gives the parent/student fine-tuned control. Even though synchronized installation via email is a powerful learning enabler, we are still missing the synchronization of the shortcuts on the desktop that would allow the student to have a consistent set of environments.


Creating and sending shortcuts

Just because an idea is simple does not mean it lacks potential for creative impact. Placing shortcuts on a student's desktop at home and at school, in a synchronized and incremental way, is one of these so-called simple ideas. If you want to create a shortcut for already-installed software that appears in the GNOME menu, right-click on the software and an extra set of menu items appears (figure 8).

Figure 8: Right-clicking to create shortcuts sometimes does make your life simpler

Sometimes your favorite software does not exist in the list found in your favorite package manager or, worse still, it does exist in the list but not in the place you were expecting. xaos, as shown in figure 9, is a renderer of graphically detailed fractals that I have spent relaxed moments zooming through. Not thinking on my feet, I had assumed that xaos was educational and, being lazy, I simply added it from the command line:

sudo apt-get install xaos

It was only later that I realized that this same application lay under the graphics subsection of the menu.

Figure 9: Generating fractals with xaos

To add an application to the desktop that does not show within the menu, I normally take the easy route. Right-click on the top panel, where the menu resides and mentions Applications, Places and System, and you will first be given the option "+Add Panel". Selecting "+Add Panel" brings up the graphic shown in figure 10.


Figure 10: Adding a custom panel

Clicking on "Custom Application Launcher" triggers the Create Launcher dialog, as shown in figure 11.

Figure 11: The Create Launcher dialog

Fill in the details shown. Click on the "No Icon" button and choose an icon; in my case, that was (amazingly) the xaos icon. Finally, press OK. GNOME generates a new desktop icon in the top panel. Drag and drop the item to the main desktop. The new item is just plain text that GNOME interprets and then renders as an icon; the icon activates the given program when the user double-clicks it. To prove this point, type:

more ~/Desktop/xaos.desktop

and you will see a result similar to:

[Desktop Entry]
Version=1.0
Encoding=UTF-8
Name=xaos
Type=Application
Terminal=false
Name[en_US]=xaos
Exec=xaos
Comment[en_US]=Fractal generator
Icon[en_US]=xaos
Comment=Fractal generator
Icon=xaos
GenericName[en_US]=
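Since a launcher is just plain text, a mentor who prefers the command line could write one directly instead of going through the dialog. This is a sketch only: the entry mirrors the xaos listing above, and the ~/Desktop path assumes a default GNOME setup where desktop icons live in that directory.

```shell
#!/bin/sh
# Sketch: create the same xaos launcher from a script instead of the GUI.
# Assumes ~/Desktop is where GNOME shows desktop icons.
mkdir -p "$HOME/Desktop"
cat > "$HOME/Desktop/xaos.desktop" <<'EOF'
[Desktop Entry]
Version=1.0
Encoding=UTF-8
Type=Application
Terminal=false
Name=xaos
Comment=Fractal generator
Exec=xaos
Icon=xaos
EOF
```

A file written this way can be mailed to parents as-is; dropping it onto the home machine's desktop is all the "synchronization" required.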

If a student chooses twenty pieces of software, you can imagine sending the parent a 20-line apt-get install script with a corresponding 20-line apt-get remove script, with the necessary desktop items contained within a .gz or .zip file. Automation of this is trivial and more a matter of administrative housekeeping. Thus, Ubuntu or another standard environment enables selective learning exercises with just a couple of cheap and easy-to-describe tricks.

Conclusions

By agreeing on a standard GNU/Linux distribution and using the over-obvious trick of sending desktop links via email, you will find it possible to create a working school computing environment that is potentially synchronized with the clever learner's home environment. The package management developers have already done the hard work. Why don't we cheat on our IT exams by making the student's life easier and more consistent?

Acknowledgements

I would like to thank Eliane Gomperts for her outstanding efforts and for being the driving force in starting a new school built around mediated learning and, thus, motivation. Lawrence says "well done".

Biography

Alan Berg BSc, MSc, PGCE, has been a lead developer at the Central Computer Services at the University of Amsterdam for the last eight years. In his spare time, he writes computer articles. He has a degree, two masters and a teaching qualification. In previous incarnations, he was a technical writer, an Internet/Linux course writer, and a science teacher. He likes to get his hands dirty with the building and gluing of systems. He remains agile by playing computer games with his kids, who (sadly) consistently beat him physically, mentally and morally. You may contact him at reply.to.berg At chello.nl.

Copyright information This article is made available under the "Attribution-NonCommercial-Sharealike" Creative Commons License 3.0 available from http://creativecommons.org/licenses/by-nc-sa/3.0/. Source URL: http://www.freesoftwaremagazine.com/articles/no_budget_learning_with_free_software


Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

This puppy rocks!
Puppy Linux is fast yet full-featured
By Howard Fosdick

Fast, small, lightweight—and still a full-featured GNU/Linux: Puppy Linux combines a complete set of applications with great flexibility, yet it requires minimal hardware. This article introduces this increasingly popular GNU/Linux distribution.

Which GNU/Linux?

The GNU/Linux operating system comes in hundreds of flavors, or distributions. All are essentially different packagings of the same base software, assembled and adapted for different purposes. Among the features that distinguish the many distros are the user interface, bundled applications, tools, system requirements, and the methods for installing the basic system and additional applications. Each GNU/Linux distribution has its own personality and strengths. Puppy Linux offers a full-featured, high-performance system that doesn't require state-of-the-art hardware. Puppy aims at creating a distribution that:

• contains all the applications needed for daily use
• has good performance
• requires minimal system resources
• is highly reliable—"it just works"
• is easy to install and boot from any allowable media (hard disk, flash drives, USB devices, CDs, DVDs, CD/RWs, Zip disks, the network card, et al.)
• is easy to use

Read this list carefully and you'll notice that the third goal directly conflicts with the first two. How can a system offer all the applications most users need, perform well, and still run on low-end hardware? Puppy's solution to this paradox underlies its success. Puppy is not based on any other GNU/Linux distribution. It is not a "remastered" version of some other GNU/Linux. It was created, file by file, from scratch several years ago specifically to meet the goals above. And so it attains them. I will discuss Puppy in terms of its goals. Before I start, one note: there are several versions of Puppy, as well as a number of derivatives called Puplets. This discussion covers them all.

Applications

Puppy's primary goal is to include all the applications users normally require, be easy to use, and still perform well even on limited hardware. How can it do that? Part of the answer lies in its selection of applications. Puppy includes programs for every need, but it carefully picks those that are the most resource-efficient. These include everything from office applications to personal information management, from multimedia to web access, from networking to instant messaging. The sample Puppy screens in figures 1 and 2 show various apps being accessed.


Figure 1: Accessing Puppy's Multimedia Applications

Figure 2: Some of Puppy's Utilities

At every turn, Puppy chooses small, lightweight applications. For example, for office work the system includes the Abiword word processor, the Gnumeric spreadsheet, and GSview to display PDF and Postscript files. These applications meet the needs of most users, yet they are far more resource-efficient than their Microsoft Office and OpenOffice.org alternatives. Since they are file-format compatible with these competing applications, they make reasonable replacements. Here's another example of this principle at work. The default browser Puppy uses, called Dillo, runs in only 350 kilobytes. Contrast this to current versions of Internet Explorer or Firefox, which can easily consume many megabytes of memory. Yet for most users' needs Dillo works just fine. If for some reason you prefer another browser, you can easily add Firefox, SeaMonkey, Mozilla, Opera, Flock, or almost any other browser. Get the idea? The major difference between Puppy Linux and its derivatives is in the area of bundled applications. Various Puplets add specific applications like OpenOffice.org, Skype, Firefox, Apache, or many others. I sometimes configure PCs donated to charity, and I've found it easy to select a version or derivative of Puppy that bundles the required applications. Check out the apps included in the various Puplets here and here.

Adding applications

Beyond the included applications, a key difference among GNU/Linux distributions is how easy it is to add extra applications to the base system. Can applications be downloaded and installed automatically? Almost any GNU/Linux—including Puppy—allows you to download and compile applications from source code, but most users don't have the time or the expertise for this. A package manager makes installing additional applications infinitely easier.

Another important factor is how many easily-installed applications are available. A package manager is only as valuable as the apps it can install. A large pool of applications from which to select means greater value. Puppy's package manager is called PETget. Figure 3 shows its main interface panel. Simply select the apps you want to install and indicate whether you are installing them from the Puppy live CD or from the internet. The software does the rest.

Figure 3: Adding applications with PETget

PETget also installs many packages outside the official Puppy distribution. These packages are put together by the Puppy community and are often referred to as DotPups. The official Puppy live CD distribution includes over 500 packages. DotPups add another couple of hundred (see here and here). The result is that PETget easily installs any mainstream GNU/Linux application. You can create your own customized version of Puppy Linux using a tool called Puppy Unleashed. With it you create your own Puppy live CD (a bootable CD) with the applications you select from its 500 packages. This way you can quickly customize Puppy into your own version for your organization.

Performance

One secret to Puppy's performance is its careful selection of lean-but-mean applications. The other is that Puppy runs everything from memory. The operating system and applications reside in RAM and run from there. Memory access is orders of magnitude faster than disk access, so running everything from RAM coaxes reasonable performance even from underpowered computers. For example, this web page lists start-up times for Puppy running on a 433MHz PC. The PC has only 128MB of RAM and a 128MB compact flash card, and no hard drive (Puppy knows to minimize writes to the flash device to prolong its life). Most applications start in less than a second on this PC. Here are timings for some of the bigger, slower applications as listed on that web page:

Application                      First start (seconds)   Subsequent starts (seconds)
Mozilla SeaMonkey web browser    12 (once installed)     6
Inkscape graphics editor         10                      8
Abiword word processor           5                       5
Gnumeric spreadsheet             3                       3
Gxine media player               2                       2
Geany code editor/IDE            2                       2

Start-up times for applications

These statistics verify my experience with Puppy's responsiveness on a variety of old systems. I've installed different Puppy releases on about a dozen older machines, ranging down to Pentium IIs, and all performed well. They were also easy for clients to use and included all the required applications. For example, this article describes my experience installing Puppy on a discarded 550MHz Pentium III. That old machine runs Puppy applications about as fast as my 2.6GHz Celeron runs many Windows XP and Red Hat Enterprise Linux apps. It makes you think, doesn't it?

The complete Puppy download ranges from about 28MB up to 95MB, depending on the release. This is much less than GNU/Linux systems that are not optimized for low-end hardware, which typically require at least a 700MB CD. Puppy achieves its small footprint both through its selection of small, space-efficient applications and by compressing its files. Puppy's automatic compression and expansion of its files is transparent to the user. Puppy needs from 128MB to 320MB of RAM to run fully in memory, depending on the version. Puppy runs on computers with less memory, but is slower because disk access is then required. So even with only 128MB of memory you get a responsive system when using Puppy.

Flexibility

A good way to try Puppy and see if it meets your needs is simply to download the live CD software, burn it to a CD, and then boot it. You can try out the product, and it will not change anything on your system. When you burn the live CD software, be sure to direct your CD-burning software to create an "ISO disk", "disk image" or "bootable disk". (Options like "data disk", "audio disk" or "video disk" will not boot.) If you decide you like Puppy, you have many options for how to use it going forward. Continue to boot off the CD, and tell Puppy to save your session to a hard disk file. This saves your preferences across sessions. It even saves any new applications you installed during your session. Puppy saves the information in a file it writes to any existing Windows or GNU/Linux partition. (This includes Windows NTFS partitions.) Alternatively, you can install Puppy to a hard disk, USB device, flash drive, Zip drive, CD, DVD, CD/RW, or whatever else your machine will boot from. The Puppy Universal Installer makes this process simple. The option I especially like is called a frugal disk install. To do this, you just copy four files from the Puppy live CD to any existing disk partition. You can copy the files manually or let Puppy do it for you from a menu selection. Puppy's files consume about 600MB of disk space. Place these files in any Windows or GNU/Linux hard disk partition, and you can run Puppy from there. There is no need to tangle with disk partitioning or risk your previously installed software. For example, on some machines I placed the Puppy files in a Red Hat Linux partition. Then I added a few simple lines to Red Hat's boot loader file (the Linux GRUB utility) to include Puppy in the list of operating systems I can select from when the PC boots. I didn't have to repartition the hard drive to set this up, and Puppy boots much faster off the hard drive than it did off the old, slow CD drives.
Puppy is as flexible in booting and as generous in co-existence with other operating systems as anything you’ll find.
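As an illustration, a GRUB (legacy) stanza for such a frugal install might look like the following. The partition, the /puppy directory and the pmedia/psubdir values are assumptions for this sketch; the exact file names and kernel parameters depend on your hardware and Puppy version, so check Puppy's own boot documentation before editing menu.lst.

```
# Hypothetical menu.lst entry: Puppy files copied to /puppy on the first
# partition of the first disk; pmedia tells Puppy what it booted from and
# psubdir where to find its files.
title Puppy Linux (frugal install)
root (hd0,0)
kernel /puppy/vmlinuz pmedia=idehd psubdir=puppy
initrd /puppy/initrd.gz
```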

Should you adopt a Puppy?

Every GNU/Linux distribution presents its own advantages and unique personality. Puppy is superior as a full-featured yet lightweight product. It doesn't compete with the "large GNU/Linuxes" like Ubuntu, PCLinuxOS or Red Hat, but rather creates its own category of product. Neither is Puppy intended as a "business GNU/Linux" for IT customers who require support contracts, regular product upgrades, and a large development community. Puppy addresses the needs of millions of PC users who want an easy-to-use, reliable system with a good online community and support. Puppy brings new life to old PCs and makes them useful again. And it really flies on newer PCs that ship with bloated Windows systems! Puppy comes in several versions, so you can pick the one that meets your needs, or you can download standard Puppy and easily add or delete apps via the PETget package manager. You can even master your own customized version with Puppy Unleashed.

Puppy’s default user interface will make anyone familiar with Windows feel right at home. Anyone can start using the product with minimal training. Puppy features thorough online documentation, though it sometimes lags behind the fast-moving product. The keys to Puppy’s support are its two forums here and here. Having participated in numerous online communities, I’ve found the Puppy forums exceptional. This community is enthusiastic, knowledgeable, and helpful. Why not give Puppy a go? You can try it as a live CD without affecting any installed operating systems or their applications. Or install Puppy on that old PC you have in the basement collecting dust. You’ll be amazed at how useful it can become with the right software. Bring this little Puppy home and I guarantee it’ll put a smile on your face.

More For everything about Puppy Linux, including downloads, visit:
• www.Puppylinux.com
• www.Puppylinux.org
• www.Puppyos.com
This article was written with Puppy Linux and its tools: the OpenOffice.org Writer word processor, the Composer and Bluefish HTML editors, mtPaint for creating the figures, and the Opera, Firefox, and Dillo web browsers—all running on an old Pentium II laptop. My productivity was as good with this setup as it is on my expensive new Windows Vista machine at my office. For more on how to revitalize old PCs by installing Puppy and other small Linuxes, see this article.

Biography Howard Fosdick: Howard Fosdick is an independent DBA consultant who recently wrote the first book on free and open source Rexx scripting, The Rexx Programmer’s Reference (http://www.amazon.com/rexx). He frequently writes technical papers and presents at conferences. His primary interests are databases, operating systems, and scripting technologies.

Copyright information Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved. Source URL: http://www.freesoftwaremagazine.com/articles/puppy_linux_fast_yet_full_featured

Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

Stretching your instant messaging wings with Pidgin How to connect to virtually any instant messenger network using Pidgin By Andrew Min Today, everyone uses a different instant messenger. Your boss may use Lotus Sametime, your colleague AIM, your friend Google Talk, and your kid Yahoo! Messenger. However, these clients all take up hard drive space, RAM, and CPU time. In addition, many of them are proprietary and Windows-only (two big minuses for GNU/Linux users). Luckily, the free software world has an alternative that enables you to chat with users of all of these programs (and many more). It is called Pidgin. Note: This is part 2 of an instant messenger series. Part 1 deals with the history of instant messenger clients and protocols.

History of Pidgin Before I get started explaining how to use Pidgin, I had better spend a little time explaining what Pidgin actually is. According to the Pidgin site: Pidgin is a multi-protocol Instant Messaging client that allows you to use all of your IM accounts at once. To be more specific, it can connect to AIM, Bonjour, Gadu-Gadu, Google Talk, Groupwise, ICQ, IRC, MSN, QQ, SILC, SIMPLE, Sametime, XMPP (the core technology of Jabber), Yahoo!, and Zephyr. Originally, Pidgin started out as an AIM clone known as GAIM (an acronym for GTK+ AIM, later changed to simply “Gaim” to avoid naming disputes with AOL). Then, it started to add more protocol support by reverse-engineering proprietary protocols (and adding open protocols such as IRC and XMPP). Soon, it began to pick up steam. The well-known free software repository Sourceforge.net named it Project of the Month for November 1998. Adam Iser created a fork of Gaim known as Adium, which allowed Macintosh users to use Gaim without X11. John T. Haller created a portable “launcher” called Portable Gaim (later renamed Gaim Portable), which allowed users to take Gaim on their USB drive. In 2007, Sean Egan (the head of the project) announced that his team would rename the project “Pidgin” to avoid more legal disputes with AOL (the new name is a reference to the term “pidgin”, which describes communication between people who do not share a common language. It may also be a joke related to Gaim, since pigeons are game birds).

Installation On Windows, installing Pidgin is easy. Just download it from the Windows download page and run the installer. Note: The GTK+ package bundled with Pidgin on Windows can sometimes cause problems with the GIMP image editor. The solution is to uninstall both Pidgin and GIMP, install the latest GTK+ build from the official site, and then reinstall Pidgin and GIMP. Macintosh users aren’t encouraged to download Pidgin at all. Instead, the developers recommend Adium, an OS X fork. If you don’t like Adium, you can install an old 1.5 build of Gaim via fink. Pidgin offers repositories for Fedora Core, CentOS, and Red Hat Enterprise Linux (find them at the download page). In addition, several GNU/Linux distributions such as Gentoo offer Pidgin in their official repositories (search for pidgin in your package manager). Unfortunately, many GNU/Linux distributions don’t have it so easy. At the time of this writing, the latest version of Pidgin (2.0.1) wasn’t available in many official repositories (though the older Gaim 1.5 is mostly available). Debian offers an unstable version from their repository. Ubuntu Feisty users don’t have an official package (only users of the upcoming Gutsy 7.10 do, available here). The reason is that the metapackage ubuntu-desktop in Feisty and earlier versions requires the package gaim. If you don’t want to wait for Gutsy, you can mess around with some unofficial packages from GetDeb or ubuntu.pl, you can compile it from source, or you can just use the old Gaim 1.5 package pre-installed on Ubuntu (all the tutorials below will probably work with 1.5, but the screens will be different and some things will be renamed). Note that the ubuntu.pl Pidgin package conflicts with Gaim, causing Gaim to be removed and ubuntu-desktop along with it (though according to this comment or this one there are ways to keep ubuntu-desktop). The GetDeb package does not conflict with Gaim, so Gaim will stay installed (as will ubuntu-desktop). Just make sure you run Pidgin, not Gaim.
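Where a package is available, installation reduces to the usual package-manager commands. A hypothetical Debian/Ubuntu transcript (package names assumed; on Feisty and earlier the packaged client is gaim instead):

```
$ sudo apt-get update
$ sudo apt-get install pidgin
$ pidgin --version
```

Gentoo users would reach for emerge, and Fedora/RHEL users for yum, once the repository from the download page is configured.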

How to set up accounts Pidgin is a great program, but it isn’t all that easy to use. The average user can be easily scared away by the fact that it isn’t intuitive when it comes to setting up accounts. The following are instructions on how to set up accounts for eight of the most popular protocols.

Setting up an AIM account Like it or not, AIM is the most popular instant messenger on the planet (53 million users and counting). Unfortunately, AIM is full of bloatware, adware, and even spyware. That’s why many users opt for Pidgin instead. First, you’ll need an AIM or AOL screen name. If you don’t have one, register one here. Then, open Pidgin and go to Accounts→Add/Edit. In the Accounts window, click the Add button. Make sure the protocol is AIM and type in your AIM or AOL screen name (e.g. johnsmith). You can also add things such as mail notification, password saving (you must type your password in the password field and check password saving), buddy icons, the local alias (the name that will show up on the network, e.g. John Smith), and proxies (under the Advanced tab). To finish, click the save button.

Figure 1: AIM account set up

Setting up an ICQ account ICQ is the oldest instant messenger around. But like AIM, it’s bloated and full of ads. Luckily, Pidgin makes it easy to connect to your ICQ buddies. Before you do so, however, you’ll need an ICQ number (available here). Then, open Pidgin and click on Accounts→Add/Edit. In the Accounts window, click the Add button. Change the protocol to ICQ and type in your ICQ number (e.g. 000000000) for your screen name. You can also add things such as mail notification, password saving (you must type your password in the password field and check password saving), buddy icons, the local alias (the name that will show up on the network, e.g. John Smith), and proxies (under the Advanced tab). To finish, click the save button.


Figure 2: ICQ account set up

Setting up an MSN (Windows Live) account I don’t particularly like Microsoft. However, there are a lot of people who disagree, at least as far as the instant messaging market is concerned. Just look at Windows Live Messenger: it claims 27.2 million active users. I personally know lots of people who use it. Luckily, I don’t have to actually install a Microsoft product to communicate with them. I just need Pidgin and a Windows Live ID (if you don’t have one, you can sign up for one here). Open Pidgin and go to Accounts→Add/Edit. In the Accounts window, click the Add button. Change the protocol to MSN and type in your Windows Live ID email (e.g. johnsmith@hotmail.com) for your screen name. You can also add features such as mail notification, password saving (you must type your password in the password field and check password saving), buddy icons, the local alias (the name that will show up on the network, e.g. John Smith), and proxies (under the Advanced tab). To finish, click the save button.

Figure 3: MSN account set up

Setting up a Yahoo! Messenger account After Microsoft, Yahoo! is my least favorite company. But there are a lot of Yahoo! Messenger users that I know. Luckily, I don’t have to install Yahoo! Messenger to stay in touch with them. All I need is Pidgin and a Yahoo ID (available here). Open Pidgin and go to Accounts→Add/Edit. In the Accounts window, click the Add button. Change the protocol to Yahoo and type in your Yahoo ID (e.g. johnsmith) for your screen name. You can also add things such as mail notification, password saving (you must type your password in the password field and check password saving), buddy icons, the local alias (the name that will show up on the network, e.g. John Smith), proxies, Yahoo! Japan, and an option to automatically refuse conference and chat room invitations (the last four options are under the Advanced tab). To finish, click the save button.


Figure 4: Yahoo! account set up

Setting up an XMPP (Jabber) account XMPP (the core technology in Jabber) is probably my favorite protocol. It’s open and extremely popular, and Pidgin provides support for it (though many features, like transports, are disabled). Open Pidgin and go to Accounts→Add/Edit. In the Accounts window, click the Add button. Change the protocol to XMPP, type in your XMPP screen name (e.g. johnsmith; if you don’t have one, make one up), and type in your domain (e.g. johnsmith.com; if you don’t have one, visit here for a list of ones to choose from). You can also add things such as mail notification, password saving (you must type your password in the password field and check password saving), buddy icons, the local alias (the name that will show up on the network, e.g. John Smith), the resource, proxies, old SSL, plaintext authentication, authentication over unencrypted streams, and connect port and server (the last four under the Advanced tab). To finish, click the save button (if you already have a screen name) or the register button (if you don’t already have a screen name).

Figure 5: XMPP account set up

Setting up a Google Talk account for Gmail Google is one of my favorite companies. Part of the reason is that they use standards. Instant messaging is no exception. Their Google Talk server uses XMPP as its protocol. However, connecting to the Google Talk server from a third-party client isn’t as easy as connecting to most other XMPP servers. To connect, you’ll need a Google Account (get one here). Then, open Pidgin and go to Accounts→Add/Edit. In the Accounts window, click the Add button. Change the protocol to XMPP, type in your Google Accounts screen name (e.g. johnsmith, NOT johnsmith@gmail.com) for your screen name, and type in gmail.com (or googlemail.com for googlemail.com users) for your domain (if you’re signing in with a Google Account that’s not linked to any Google email service, enter gmail.com). Then, go to the Advanced tab, set Connect port to 5222 and Connect server to talk.google.com. You can also add things such as mail notification, password saving (you must type your password into the password field and check password saving), buddy icons, the local alias (the name that will show up on the network, e.g. John Smith), and proxies (under the Advanced tab). To finish, click the save button. If you use Google Apps for Your Domain, read this article for how to set up Pidgin with your server.

Figure 6: Google Talk account set up

Setting up an IRC account For those who like providing (or receiving) amateur tech support, Pidgin offers a way to connect to IRC chat rooms. Unfortunately, it isn’t as fully featured as real IRC clients. Still, it’s a viable option for those who don’t want to run multiple programs at the same time. To set it up, open Pidgin and go to Accounts→Add/Edit. In the Accounts window, click the Add button. Change the protocol to IRC and enter the server you are trying to connect to (e.g. irc.freenode.net). If you have a screen name, type it (e.g. johnsmith) in the screen name field. If not, you can just make up your own screen name for now. You can also save the password (you must type your password in the password field and check password saving), enter the local alias (the name that will show up on the network, e.g. John Smith), change the port, use SSL, change the authentication name and password, disconnect ghosts, set operator passwords, and use proxies (the last seven under the Advanced tab). Then, hit Save. If you haven’t done so already, you should probably register your username (otherwise, someone could steal it). Doing so varies from server to server. Here’s how to do it on Freenode, one of the biggest IRC networks. Click on Buddies→New Instant Message, choose the IRC account you want to register, and make the buddy you are messaging nickserv. The first message should be register your-password (replacing your-password with a password, e.g. thisisapassword). To send, hit the Enter key. If you want to add an email, type another message reading set hide email on, then set email your-email-address (replacing your-email-address with your email, e.g. johnsmith@johnsmith.com). To have Pidgin remember the password, go to the IRC account again (Accounts→Add/Edit, select the IRC account, and hit Modify), and type in your password.

Figure 7: IRC account set up


Setting up a QQ account Tencent QQ (also known as simply QQ) is one of the most popular instant messengers in China and South Africa. Like ICQ, QQ gives you a number, not a username. Unfortunately, to get the number you need to download the Windows-only QQ client (the official site in Chinese is here, the English version based out of South Africa is here). The problem is, QQ is labeled by some companies as having spyware (see McAfee’s report on qq.com). Once you have registered, Pidgin makes it easy to connect to QQ. Open Pidgin and go to Accounts→Add/Edit. In the Accounts window, click the Add button. Change the protocol to QQ and type in your QQ number (e.g. 000000000) for your screen name. You can also add things such as password saving (you must type your password in the password field and check password saving), buddy icons, proxies, logging in via TCP, logging in hidden (the last three under the Advanced tab), and the local alias (the name that will show up on the network, e.g. John Smith). To finish, click the save button.

Figure 8: QQ account set up

Basic usage Now that you’ve logged into your accounts, you’ll want to be able to message your friends. You should see all the buddies you have previously added from the other instant messengers you have used. To message, just double-click on a name, and it will pop up an instant message screen as shown in figure 9. You could also just do Buddies→New Instant Message, and type in the name of a buddy. The instant messaging window has quite a few formatting options. You can add bold, italic, and underline formatting, raise and lower font sizes, change the font, change the colors, insert links, insert images, and insert emoticons (also known as smileys). See figure 9 for some examples. To send the message, hit the Enter key.

Figure 9: Instant messaging (with myself)

In addition to making your messages look better (or worse), Pidgin has a handy find feature to search the conversation, a log viewer, a feature to save the conversation to an HTML file, file transfer (only on certain protocols), and much more. But what if you haven’t added a buddy yet? Does that mean you have to open up the official client, add the buddy, then go back to Pidgin? Of course not (though, if you really want to do that, you’re welcome to). All you do is go to Buddies→Add Buddy, fill in the required fields, choose which account to add the buddy to, and hit Add. Instant messaging with individual users might be fun, but group chats are even more fun—especially for IRC, whose main focus is group chats. It is now time to learn how to join chat rooms. It’s simple: just go to Buddies→Join a Chat, select the account that is going to connect to the chat room, and enter any additional information that is required. Then, hit Join. A window will pop up similar to the one in figure 10.

Figure 10: An IRC chat
Many users enjoy going to certain IRC chat rooms every day. For instance, I personally like hanging out in the #kubuntu channel at irc.freenode.net. I also like hanging out in certain AIM chat rooms with some of my friends. However, entering the room name over and over again can be a little tedious. Luckily, Pidgin offers a way to bookmark chats. Just go to Buddies→Add Chat, enter the information about the room, and hit Add. A new “buddy” should appear on your list. Double-clicking on it will take you to the chat room it represents. When you only have a few buddies, managing your buddy list is easier than starting a GNOME vs. KDE flame war. But as your list gets bigger (I currently have 100+), it becomes impossible to manage your buddies. The solution? Groups. Just click Buddies→Add Group, type in a name for the group, and click Add. You can then drag and drop buddies (or IRC chats) into the group. Starting to get spam from a particular person? There’s a great way to stop that: just use Pidgin’s block feature. If that person is on your buddy list, right-click on the person, and click Block. You can also start a conversation, then go to Conversation→Block in the messaging window. Sometimes, you need to know exactly when a person comes online. Pidgin provides a unique way to do this. It is called “buddy pounces” (no, they did not copy this from AIM’s “Buddy Alerts”. It was the other way around). Go to Tools→Buddy Pounces, and click Add. Type in the account you will be using and the name of the buddy. Then, choose when the pounce will happen. You could do it when the buddy signs on, signs off, sends a message, goes away, returns from away, becomes idle, is no longer idle, starts typing, pauses when typing, or stops typing. Finally, choose what you want to do. You could automatically send a message, pop up a notification, play a sound, or even execute a command (for example, open Firefox).
There are many uses for this: everything from annoying your friend to remotely controlling your computer (for example, if you receive a message from yourself saying “play bonjovi”, XMMS could start playing You Give Love A Bad Name).
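To make the remote-control idea concrete, the pounce’s “execute a command” field could hold something like the following (the --play flag is from XMMS 1.x; the file path is made up):

```
xmms --play ~/music/you_give_love_a_bad_name.mp3
```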


Extending Pidgin You can now chat with your friends, and use some of the more advanced features that make Pidgin so much better than competitors. But Pidgin is capable of so much more! The key to it is its extensibility.

Plugins One of the coolest things about Pidgin is that it has an API. Like the Mozilla programmers, the Pidgin developers have created a way for third party programmers to create plugins. And the community has responded by creating tons of plugins. Quite a few useful ones, like Timestamping, come pre-installed in Pidgin. To view them, go to Tools→Plugins. One of the most popular 3rd-party Pidgin plugins is called Guifications. It is a wonderful plugin that displays “toaster” popups to notify when a buddy comes online, offline, chats, or starts typing, similar to what AIM, Windows Live Messenger, and Yahoo! Messenger offer. Windows users have an installer, while GNU/Linux users have to compile from source (Ubuntu users have a pre-built, unofficial binary from GetDeb here). Don’t like the default Guifications theme? Third-party themes are available here (you extract them into ~/.purple/guifications/themes/). To run a theme, open the Plugins window, select Guifications, click Configure, click on the Themes tab, and check the theme(s) you want. Another popular plugin is Off-the-Record Messaging. This allows for encryption from prying eyes trying to tap into your conversations. OTR has installers for Fedora, Mandriva, and Windows (there is also an Ubuntu binary in the repositories called pidgin-otr. However, it requires Gutsy). Another popular addon is the Purple Plugin Pack. It’s not just one plugin. It is 42 plugins all wrapped into one package. The plugins range from useful plugins (like an IRC helper) to completely useless ones (like a dice roller). There is a Windows installer, or you can compile it from source (for Ubuntu users, there is an unofficial binary here).
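Installing a downloaded Guifications theme is just a matter of unpacking it into the right place. A minimal sketch, assuming a hypothetical archive named cool-theme.tar.gz in the current directory:

```shell
# Guifications looks for per-user themes in this directory.
THEMES="$HOME/.purple/guifications/themes"
mkdir -p "$THEMES"

# Unpack the downloaded archive (if it exists) into the theme directory.
if [ -f cool-theme.tar.gz ]; then
    tar xzf cool-theme.tar.gz -C "$THEMES"
fi

# List what is installed so far.
ls "$THEMES"
```

After that, enable the theme from the Guifications Configure dialog.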

Smiley themes Another cool feature of Pidgin is that it lets you change the default emoticon set (called smiley themes). Just open Pidgin, go to Tools→Preferences, and click on Smiley Themes. Then, drop the smiley set that you want to install into the smiley themes window. Select the theme you want to use, then close the window. My personal favorite is Crystal Emoticons, but there are many more available (a good place to look is Gnome-Look.org).

Figure 11: Crystal Emoticons for Pidgin

Skins One of my biggest disappointments with Pidgin is that there is no API for skinning it (that is, changing the way it looks). Luckily, that has not stopped several ambitious artists. At Gnome-Look.org, developers have hacked together several themes for Pidgin. The instructions will vary, but for most of them you extract the files into /usr/share/pixmaps/pidgin (overwriting the existing files) and then restart Pidgin. Some examples of Pidgin skins include Human (Ubuntu users will especially like this), Pidgin_neu, Pidgin_osX (perfect for integration with Baghira), and Pango_Pidgin.
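In transcript form, a system-wide skin install might look like this (run as root; pidgin-skin.tar.gz is a made-up archive name, and backing up the stock artwork first is strongly advised since the theme overwrites it):

```
$ sudo cp -a /usr/share/pixmaps/pidgin /usr/share/pixmaps/pidgin.orig
$ sudo tar xzf pidgin-skin.tar.gz -C /usr/share/pixmaps/pidgin
```

Restoring the original look is then just a matter of copying pidgin.orig back.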

What to look forward to in future versions Pidgin is a wonderful program with many features. And more are coming, including voice/video support, better MSN support, .NET plugins, MySpaceIM support, remote logging, virtual classrooms for teachers, and much more. To see all the planned features, check out the Pidgin trac.

Links • Pidgin homepage • Developer home • Developer blogs • Wikipedia article • Using Pidgin from a USB drive • Adium (Mac OS X fork) • PhoneGaim (Gaim + SIP calling) (source available at download page)

Biography Andrew Min: Definition: Andrew Min (n): a non-denominational, Bible-believing, evangelical Christian. (n): a Kubuntu Linux lover (n): a hard core geek (n): a journalist for several online publications including Free Software Magazine, Full Circle Magazine, and Mashable.com

Copyright information This article is made available under the "Attribution-Sharealike" Creative Commons License 3.0 available from http://creativecommons.org/licenses/by-sa/3.0/. Source URL: http://www.freesoftwaremagazine.com/articles/pidgin


Zonbu GNU/Linux computer Run silent, run green By Jeremy Turner The Zonbu is a new, environmentally-friendly, compact PC available from Zonbu. It includes some features that really make it stand out from other PCs. Last, but not least, it comes with GNU/Linux. In this article, I will share some highlights and thoughts from my experience with Zonbu. The Zonbu GNU/Linux computer is small, about the size of a book. It is virtually silent, since it has no fans or other moving parts. Zonbu really looks cool enough for the geek. It’s black and silver (with skins on the way), and has a bright blue power LED on the front. The rear of Zonbu includes several common connectors, making Zonbu a drop-in replacement for older computers. Best of all, it consumes only 10 watts of power.

Figure 1: The rear ports of the Zonbu GNU/Linux computer

The network experience The Zonbu GNU/Linux computer lets users connect with their data in a new way. Zonbu uses storage space on a remote server in place of a traditional large local storage device. As opposed to most traditional computers, which store programs and data on a local hard drive, Zonbu uses storage on its servers, syncing to a local cache to operate. Zonbu includes only a 4GB CompactFlash card with GNU/Linux pre-installed. The operating system takes up about 1.5 GB, while the remaining 2.5 GB serves as a local storage cache for the internet data. This does not mean that Zonbu must be connected to the internet at all times; when offline, however, only the local cache is accessible. The Zonbu operating system appears to be a customized version of Gentoo with the XFCE desktop environment, although it is difficult for even an experienced GNU/Linux user to tell. One huge benefit of storing all of your data securely online is that, by definition, your data is stored off-site. If a Zonbu box were to fail, the user could simply get a new unit and be working on their documents again. Zonbu provides several other means of storing documents. The simplest way of getting documents to the online storage space is to copy them into the /Documents folder, which kicks off the synchronization process. Other methods of getting data onto the online storage include a web interface and a Microsoft Windows client.

Although I couldn’t get an exact measurement, some uploads to the Zonbu server seemed painfully slow at home. While this is most likely a limitation of a pitiful upload connection speed, moving to another location with a faster connection did not seem to improve the experience. However, if you use the local cache, all of the file transfers happen in the background. If Zonbu is shut down before data synchronization is complete, it resumes on the next login.

The software experience The user experience really seems geared towards the non-geek. Three icons sit atop the grass-and-house desktop: the user’s documents, the trash bin, and “Neighborhood” (a link to SMB/CIFS workgroups, computers, and shares on the local network). Also, applications are not named by their official names (Firefox, Evolution, Banshee, GnuCash, etc.), but by their function (“Web Browser”, “Mail and Calendar”, “Music Library”, “Personal Finance Manager”, etc.). After launching a particular application, the normal title appears in the title bar (or wherever is appropriate). The bottom panel includes icons for a Start menu (an icon of a map and compass), Firefox, Evolution, a drawer for OpenOffice.org (Writer, Calc, and Impress), a drawer for searching (local computer, internet), a drawer for media (media player/MPlayer, and Banshee), a “show desktop” icon, the task list, system tray, process load indicator, and date/time applet. The default size of the bottom panel is a whopping 54 pixels. But that aside, the panel looks well organized and is intuitive to navigate.

Figure 2: Zonbu desktop with Thunar file manager
The Start menu follows the same philosophy as the rest of Zonbu. The entries are grouped and named in a very user-friendly fashion. There are icons for the user’s documents, Firefox, Evolution, local search, help, and shutdown. There are also submenus which group the rest of the applications. These submenus include Office (OpenOffice.org, Adobe Acrobat Reader 7, and GnuCash), Publishing (the GIMP, Scribus, and Nvu), Multimedia (Media Player, Banshee, F-Spot, GNOME Sound Recorder, and a recording volume level indicator), and Internet (Firefox, Evolution, Skype, GAIM, Azureus, and aMule). There is a large collection of games, including SuperTux, Frozen Bubble, Chess, Sudoku, Freeciv, Nibbles, Mahjongg, and Same GNOME. The rest of the Start menu includes an Accessories menu (Calc, Xfburn, Eye of GNOME, Mousepad, GNOME Screenshot, and File Roller) and a Control Panel menu (with options for changing desktop, keyboard, mouse, volume, networking, screen saver, and other settings). Zonbu users will find two options of the Control Panel of particular importance. The Storage option is a custom application which shows the status of the user’s online storage. This menu provides the status of both the internet storage space and the local cache. There is also a button which will show a log of the actions left to do, whether uploads or deletions. The synchronizing application doesn’t seem to be extremely efficient. For instance, it will list files as needing to be uploaded, and then later list them as needing to be deleted. In order to save some bandwidth, it would be nice if the application could detect files that really do not need to be uploaded. It was not apparent that it did so. There is a button that will open a window to show files that are currently being transferred in the background. In my testing, that was not functional.
I could open the log of files to be transferred and get an idea of what was happening, but I didn’t get a nice progress meter. Another item of interest in the Control Panel is a link to Zonbu’s website, and more specifically to the user’s account settings. From this web site, you can change your account settings, get support, view forums, and more. Another sub-menu on the Start menu is the System menu. This menu gives you options to open a terminal, run a command, open the task manager, or even reset the device to defaults.

Figure 3: Zonbu doesn’t run any Windows executables

Tinkering with the software For those geek types who like to tinker and investigate, there is also a way to get a root prompt and add additional packages from portage. Zonbu has published the patches it has applied to the software, so you can recompile and install your own versions. Additionally, the full Zonbu distribution is available as a download, so anyone can install it on an ordinary PC, using their own Amazon S3 storage account.
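Since the system is Gentoo-based, package management from that root prompt goes through portage’s emerge tool. A hypothetical session (the package chosen is purely illustrative, and syncing the tree assumes the Zonbu image ships with a usable portage configuration):

```
# emerge --sync
# emerge app-editors/vim
```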

The hardware experience Under the hood, Zonbu houses a VIA CX700 chipset and a VIA Esther processor at 1.2GHz. The front of Zonbu has a USB port, power button, blue power LED and green HDD activity LED. The blue power LED seems very bright; it produced a noticeable glowing circle approximately 12 inches in diameter against a wall several feet away. The rear of Zonbu contains a plethora of ports commonly found on all PCs. The only option for video is VGA, at a maximum resolution of 1400x1050. There are 5 USB ports, both 3.5mm mic and headphone jacks, RJ-45 Ethernet, keyboard and mouse PS/2 ports, a custom 5V power adapter jack, a CompactFlash slot, and a power switch. There is also a hole for a wireless antenna, which looks to be a possible add-on down the road. The sides of Zonbu are not flat but are shaped like a grill to help dissipate heat from the unit. I never noticed heat to be an issue. It did get quite warm to the touch, but that is expected since there are no fans inside to blow out hot air. Zonbu uses only 10 watts of power, so it could be a huge relief on your electricity bill as well.

The price

Zonbu sells for $99 with a 2-year storage plan contract. Storage plans run from $12.95 per month for 25GB of storage up to $19.95 per month for 100GB. If you would like to buy Zonbu with a month-to-month storage package, the one-time cost rises to $250. Zonbu carries a 3-year limited warranty, and also provides free recycling of the unit.

Who is Zonbu for?



Zonbu is a very good fit for someone who wants a simple, task-based computer system with included online backup. Geeks who have a primary computer will definitely enjoy this unit as a secondary computer for various applications around the house.

Who isn’t Zonbu for?

Video editing would not be practical with the CompactFlash card and slower processor. To expand the local storage, you will need to purchase an external hard drive. Also, the $250 price point (without a pre-paid service contract) comes very close to the price of a low-end OEM PC or even a decent refurbished PC. If you have a flaky high-speed broadband connection (dial-up users, forget about it), you will not be able to access all of your online files. Installing more applications requires gaining root using the console, which may be daunting for some users.

Final thoughts

Zonbu is a very simple system. It is geared toward users who don’t necessarily care what operating system they are running and just want a functional system. Zonbu delivers exactly that. The software is very straightforward to use. Zonbu is also very environmentally friendly: not only does it not require a lot of power to run, it also does not contribute to noise pollution.

Biography

Jeremy Turner: Jeremy Turner enjoys freelance writing when given the opportunity. He often plays system administrator, hardware technician, programmer, web designer, and all-around nice guy. You can contact him by visiting his web site (http://linuxwebguy.com/).

Copyright information This article is made available under the "Attribution-NonCommercial" Creative Commons License 3.0 available from http://creativecommons.org/licenses/by-nc/3.0/. Source URL: http://www.freesoftwaremagazine.com/articles/zonbu





Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

Free software media players
The good, the bad and the ugly
By Robin Monks

Last year, while running Ubuntu, I decided I wanted to watch a video, so I opened it up in the built-in Totem player. What happened next took me back to the dark era of codecs and computing. The XviD video I was watching became pixelated and fell out of sync; within a few minutes it was unwatchable. I dual-booted back into Windows XP, opened up my trusty MPUI and watched the video with the free software XviD codecs without any issues. The experience had left a bad taste in my mouth. Last month Tony put out a call for articles, and I suggested media players would be a good area to cover; he jumped on it. Which brings me here: sitting in front of my word processor in Windows, with Ubuntu running in VirtualBox, and a list of free software media players ready to go! I wanted to choose a broad range of players, so I checked around for what others felt were the best free software players. And the contestants are…

Totem

Figure 1: Screenshot of Totem

The Totem video player that ships with GNOME has advanced a lot in version 7.04 of the Ubuntu OS. It will now search for codecs (both free software and restricted, including FFmpeg) for formats it can’t natively play. This feature alone would have fixed my issues from my last Totem experience! Playing a video file with Totem (at least if you’re using a GNOME distro which has Totem configured, or have installed and configured Totem manually on your distro) is as easy as double-clicking a video file assigned to Totem. Totem also has an easy to use “Play Disk” option under the file menu that lets you play Audio CDs, VCDs, DVDs and—here’s the part I like—data disks with files on them that Totem understands or can find codecs for. Totem was the only player in the round-up with a disk play feature this easy to use.




The bad news is that, although I found it very good for the audio and video I tested, Totem has a GUI that I didn’t find very user friendly. Hiding the sidebar helped somewhat, but it still feels like it has the controls of an audio player and the display area of a video player—but without letting me hide just what I don’t need. An option to hide either the video area or the controls would go a long way toward alleviating this feeling of rigidness. All in all, I like Totem the best out of all the players reviewed here, and I can’t help but think the GNOME guys are working hard to keep Totem ahead of the pack. If you’re running a GNOME distro that doesn’t have Totem installed, or you just haven’t tried Totem yet, I urge you to give it a go!

Name: Totem
Maintainer(s): GNOME
License: GPL
Platforms: GNU/Linux, Solaris, BSD
Marks (out of 10): Installation 10, Vitality 10, Stability 10, Usability 7, Features 9, Overall 9

VLC

Figure 2: Screenshot of the VLC media player

VLC is probably one of the better-known players for GNU/Linux. It uses FFmpeg natively (unlike Totem, which needs to download a custom version of the library the first time it requires it). FFmpeg is a free software library for reading MPEG4, AVI, WMV and FLV videos (among others). Installation on my Ubuntu was as easy as entering “VLC” into the package manager. Depending on your distro you’ll need to get and install VLC differently. Overall, installation and setup was pretty painless. I did, however, find VLC’s video to look “washed-out” compared to Totem, and finer details didn’t pick up as well. I liked VLC’s controls more than Totem’s; the smaller size didn’t seem to get in the way, although I found its menus a bit less user-friendly than those of Totem. I really wish it were possible to hide all window components except the video and the seek bar, like you can with Media Player Classic (also free software under the GPL) on Windows.




Overall, I found VLC more advanced than Totem, while still having a more user-friendly face than MPlayer.

Figure 3: Quality difference between VLC and Totem, VLC at left

Name: VLC
Maintainer(s): VideoLAN
License: GPL
Platforms: Windows, Mac OS X, GNU/Linux, BeOS, BSD
Marks (out of 10): Installation 10, Vitality 10, Stability 9, Usability 4, Features 7, Overall 7.5

MPlayer

Figure 4: Screenshot of MPlayer

MPlayer was next on my list; once again, installation was painless as the Ubuntu package repositories already contained the MPlayer project. MPlayer was, however, by far the hardest program to configure that I’ve tried in quite some time. The website touts a “wide range of supported output drivers” as a main feature, but I found this to be the source of its weakness. It took a lot of trial and error to get the preferences set up correctly. After the correct video driver was set up, it




produced an image quality with no noticeable difference from Totem, but it couldn’t maintain a good frame rate. To be fair, this was running in a virtual machine emulating a PIII-class CPU with 512MB of RAM, so it might manage a better frame rate on a better box. A quick side note: MPlayer and VLC are both available for Windows as well, so if you’re looking for a free software player for your Windows computer, make sure to give them a try.

Name: MPlayer
Maintainer(s): MPlayer Project
License: GPL
Platforms: GNU/Linux, Windows, Mac OS X
Marks (out of 10): Installation 10, Vitality 9, Stability 7, Usability 4, Features 7, Overall 7

Conclusion

I had also tried out Democracy player (now Miro), but it wouldn’t work in my virtual machine, so, to my dismay, I was unable to review it here. Overall, there are some good choices when you’re looking for a good free software media player for your GNU/Linux box. And that’s not even counting the distributions dedicated just to being a home media center! If you’re interested in one of those, look no further than LinuxMCE, MythTV and Mythbuntu! What are you waiting for? Go watch some videos! See you next time.

Biography

Robin Monks: Robin Monks is a volunteer contributor to Mozilla (http://mozilla.org), Drupal (http://drupal.org), GMKing (http://gmking.org) and Free Software Magazine, and has been helping with free software development for over three years. He currently works as an independent contractor for CivicSpace LLC (http://civicspacelabs.org).

Copyright information This article is made available under the "Attribution-NonCommercial-Sharealike" Creative Commons License 3.0 available from http://creativecommons.org/licenses/by-nc-sa/3.0/. Source URL: http://www.freesoftwaremagazine.com/articles/media_players





Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

Snap happy with free software
Manage your photos with digiKam
By Ryan Cartwright

It’s been said that for a free software desktop to succeed it needs to address the needs of the average home user. Managing digital photographs is just one of those needs. Let’s see how one of the more popular free software photo management applications, digiKam, measures up.

Everyone is a photographer

Digital photography is now part and parcel of many home computer users’ lives, and they are demanding more from their software. While professionals are more at home using packages like the GIMP (okay, it’s more likely Photoshop, but we can dream), the home user generally wants something half as complex to use but with two-thirds the capabilities. The title on the digiKam website declares that it is “photo management for the masses”. That statement should tell you a lot about the aims of this accomplished application. It is written with photographers in mind, and not just the professional.

Installation

Installing digiKam is easily done through package managers, such as Synaptic, Yum, etc. Pretty much any GNU/Linux distribution which has packages for KDE available will include a digiKam package. BSD users should also have a package available. In most of the cases I have found, the package is simply called “digikam”. On my Debian system, I also installed the “digikamimageplugins” package, which provides many of the editing tools discussed later. Both of these have very good documentation packages, so I recommend them. If your package manager doesn’t install gphoto2 automatically, then you’ll need this package as well. Also, if it is available, I would recommend installing the Kipi plugins package for extra functionality (more on that later). Finally, for those who like to get their hands a little dirtier, source tarballs can be downloaded from the digiKam site. While digiKam is an official part of KDE, it can be run under other free desktops such as XFCE and GNOME (although for a GTK interface you might like to consider F-Spot). Of course, you may have to install some KDE libraries with it, but any package manager worthy of the name should take care of that. Naturally, KDE is the preferred desktop for running digiKam, the capital K in the name being a clue, and it is under that desktop that I use it. This article is based on v0.92, which at the time of this writing is the latest version.
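On a Debian-style system, the install described above comes down to a couple of commands. The sketch below just prints them rather than running them, so it is safe to paste anywhere; the package names are the ones mentioned in the text, and "kipi-plugins" is my assumption for how Debian names the Kipi package, so check your package manager's search first.

```shell
#!/bin/sh
# Print, rather than run, the install commands for a Debian-style system.
# Run them by hand (as root) once you've confirmed the package names.
cat <<'EOF'
su -c "apt-get install digikam digikamimageplugins gphoto2"
su -c "apt-get install kipi-plugins"
EOF
```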

Getting photos from your camera

Of course, before you do anything else you need to download your photos from your camera into digiKam. Using the tried and tested free software principle of not reinventing the wheel, digiKam employs gphoto2 to handle the image capture side of things. A nice touch is that digiKam does this within its own interface, meaning you are not forced to run one application then another just to manage your photos. This philosophy extends throughout digiKam, and the result is a seamless set of very powerful but easy to use features.




Whilst I won’t go into the details of connecting specific devices to your desktop, I will give an overview. If you have never connected your digital camera to your desktop, then I suggest you give it a try. Many recent GNU/Linux distributions have excellent support for USB devices, and often just plugging in a device will result in that device appearing on your desktop. digiKam will import images directly from a wide range of digital cameras, scanners and removable media. Even if yours is not directly listed, you may well find it will work. Many low- to mid-range digital cameras connect as a USB mass storage device in much the same way as USB flash keys. If your camera has removable media, such as Compact Flash, Smart Media or XD cards, you may find it more convenient to use a card reader. Media connected via one of these USB devices invariably appears as a removable media icon on your KDE desktop. Another benefit of this approach is that it won’t run down the batteries on your camera.

The import images wizard

digiKam integrates excellently into the KDE desktop. This means that right clicking on a camera or removable media desktop icon will give you an option to “Download images with digiKam”. Either this, or using the Camera menu within digiKam, will bring you to the Import images wizard (figure 1). Here you will find thumbnails of all the image files on the camera or media you’ve selected. The programmers have put considerable thought into making importing simple. You don’t need to know exactly where or how your camera stores its images: the wizard finds all the photos on the media. From there it is simply a case of selecting the photos you require, all of them or just some, and clicking Download. The final step is to select the album you want to import into, either a new one or an existing one.

Figure 1: The import images wizard

Sorting it all out

Albums will be a familiar concept to anyone who has used other photo management software. They provide a simple approach to managing photos, with an obvious link to the “good old days” of traditional photography. digiKam’s albums are filed in a hierarchical structure below what it calls “My albums”. This is a top-level folder which you set up via the first-run wizard when you first run digiKam. As a matter of fact, what digiKam calls an album is simply a folder within the underlying file system. I have laid a digiKam snapshot alongside a Konqueror window showing the underlying file system structure (figure 2).





Figure 2: Albums in digiKam and the underlying file system structure

While all this makes backing up your photo library very simple, it doesn’t mean you can add a new album by creating a new folder via Konqueror. In figure 2 you will see the digiKam database file (SQLite, if you are interested) which keeps track of the filing system you have set up in digiKam. If you delete a folder, or an image file for that matter, outside of digiKam, you will be warned about the missing items next time you start the program. Albums have a number of properties, including a date (which can match the oldest or newest photo), comments and a collection. Collections are how you group albums. Each album has an icon which is a thumbnail of the first photo within it. You can sort albums by date, and you can also change views so that images are grouped not by album but by date.

Tagging

Sometimes you will have photos that fit into two different albums. Similarly, you may have photos which are connected but need to remain in separate albums. As an example, I have albums for various holidays, each of which contains photos of my children. I want to keep the holiday snaps together, but it would be useful to be able to pull out all the photos of my daughter from across all the holidays. This is where tagging comes in. Tags are a simple way to link photos together while keeping them in separate albums. Each image can have multiple tags, as shown in figure 3. The tags assigned to each photo are in blue beneath it. You can see how the penguin photos are tagged as both “Birds” and “Penguins” and the parrot is simply tagged as “Bird”. As another example, my wife uses tags to identify photos from across various albums that she wants to get printed at the photo store. Once you have tagged photos, you can view them in a pseudo album using the tag view.

Figure 3: A typical digiKam window displaying an album.





Viewing and editing photos

Viewing a photo from an album view is done by clicking it. It will then be enlarged in the main pane. This is a fairly new feature; earlier versions of digiKam would open the image in an external window. It may not seem a major enhancement, but opening images in the same window is more intuitive and a great improvement in usability. Information about the photo (such as EXIF data, comments and tags) is available in a fly-out pane on the right. Clicking the enlarged image will return you to the album. To edit an image you are viewing, just click the Edit toolbar icon. This action does open it in an additional window. Not only are the expected tools like red-eye reduction and noise reduction present—and easy to use—there is also a wealth of other tools and methods provided to enhance your photos. To be honest, there are probably too many to cover in any great detail here, but I will try to briefly cover some of the more advanced things possible with digiKam. Old and damaged scanned images can be repaired with a single tool, and the auto-colour correction takes care of much of the hard work for you. The ubiquitous black and white conversion not only provides the traditional sepia and selenium conversions but allows you to mimic (to an extent) the tone of certain 35mm films, such as the Ilford Delta or AGFA Pan ranges, as shown in figure 4.

Figure 4: Converting an image to black and white

You can also add textures and borders with a range of parameters, such as bevels and artificial wooden frames. As you would expect, there are also resizing, scaling, shearing and perspective tools. Finally, in addition to the usual emboss and oil painting filters, you also have some of the more novel ones like infrared film and raindrops. All in all, there are some very powerful yet intuitive tools available.

Getting your photos out

I said earlier that digiKam ties in very well with KDE, and this is evident when it comes to printing. Individual images can be printed from the edit window using the normal KDE print dialogs. For printing a selection of photos or an entire album, you’ll need the print wizard, which is one of the Kipi plugins. This wizard helps you to print multiple images on one sheet in various arrangements before handing over to the KPrint dialogs.

Enhancing digiKam with Kipi plugins

digiKam is an excellent application, but with the addition of the Kipi plugins its usefulness increases by a significant factor. The KDE Image Plugin Interface (Kipi) is a framework which enables KDE imaging applications, including digiKam and ShowImg, to share plugins, which makes life easier. Again we come back to the idea of not reinventing the wheel for each application. For applications which support it, Kipi gives a range of additional




capabilities which might not be available had they been required to be coded separately for each one. As mentioned, the print wizard is one of these plugins, but you can also create HTML galleries locally or on remote servers (including Flickr) with a few clicks. Batch image processing enables you to convert or rename images in batches. You can also archive images or albums to CD, scan images in via Kooka, send them by email (resizing and compressing them before sending) and even create a calendar.

Conclusion

I work in the IT industry and computers are my hobby, so perhaps I am a little biased in my review of software aimed at end-users. My father-in-law is not like me; he is more like the typical end-user (perhaps a little braver than some, in that he is willing to try things out). Recently, I convinced him to switch to GNU/Linux and a KDE desktop. One of his requirements was a photo management application. I gave him digiKam. A week later he telephoned for some support on Thunderbird, and during the conversation he said that digiKam was the best and easiest to use photo application he had come across. While F-Spot lovers will no doubt argue, that is a sentiment I would echo, particularly when you add Kipi into the mix.

Bibliography

digiKam website—http://www.digikam.org
KDE—http://www.kde.org
Gphoto2—http://www.gphoto.org
Kipi (KDE Image Plugin Interface)—http://extragear.kde.org/apps/kipi/
F-Spot—Photo management for the GNOME desktop—http://f-spot.org/Main_Page

Biography

Ryan Cartwright: Ryan Cartwright is IT Manager for Contact a Family (http://www.cafamily.org.uk), a UK national charity for families with disabled children, where they make significant use of free software (http://www.cafamily.org.uk/oss). He is also a free software advocate, and you might find him on the GLLUG (http://gllug.org.uk) mailing list.

Copyright information This article is made available under the "Attribution-NonCommercial-Sharealike" Creative Commons License 3.0 available from http://creativecommons.org/licenses/by-nc-sa/3.0/. Source URL: http://www.freesoftwaremagazine.com/articles/manage_your_photos_with_digikam


Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

Create your own Live CD in 7 Steps
Revisor saves the day!
By Jonathan Roberts

Knoppix made live CDs popular—and with good reason too. Do you want to check whether a distribution works well with your hardware, or to show off the latest Compiz Fusion magic? Or maybe you have a presentation to give and you want to make sure you present it in the same environment you created it in? A live CD can help with all of these scenarios. However, until recently you had to read through some pretty dense documentation to make any customisations. Now Fedora 7 is out, and Revisor is here to help you create any kind of live system you can imagine, in 7 easy steps. In this article, I’m going to be creating a live CD with a custom package set. If you have slightly different goals in mind, follow along anyway, as the process for creating a live CD/DVD/USB image is virtually identical, as is the process for creating any kind of non-live installation media. I’ll also assume that you are running a working Fedora 7 system with access to the Fedora repositories (networked or locally).

Step 1—install Revisor

All of the software you need to create a live CD can be found in the Fedora repository, so you can install Revisor and its dependencies in one of two ways:

• run su -c "yum install revisor" from the command line
• open Applications→Add/Remove Programs, search for Revisor, mark it for installation and then click Apply.

Once the installation is finished, launch Revisor using its menu entry, found under Applications→System Tools→Revisor. You’ll be presented with the welcome screen, explaining a bit about the software and where it’s come from. Obviously, to get things started click Get Started!

Figure 1: Welcome to Revisor!

Step 2—Select your media



The next screen asks you to select what kind of media you would like to create. As I said before, I’m going to be creating a live CD, but in many situations this might not be appropriate; below is a brief summary of what you can expect from each type of media and possible reasons for using it.

Installation media types

• DVD Set—creates a DVD (or DVD set) that will boot into the Anaconda system installer. Gives the greatest flexibility for an installed system, as it has the most space to play with.
• CD Set—creates a CD (or CD set) that will boot into the Anaconda system installer. It’s useful if you don’t have a DVD drive available. Revisor will automatically split the image into multiple CD ISOs if needed, with disc one being bootable and the others holding packages.

Live media types

• Optical Live Media—creates either a live CD or DVD (depending on the amount of data to be included). Boots into the working desktop environment of your choice, with many customisations available through the use of a kickstart configuration file.
• USB Live Media—this option is not yet available in Revisor (it will be in the near future). If you want this functionality, it can be performed from the command line using the livecd-iso-to-disk script, which converts a live CD ISO image to work on a USB stick.

Select whichever option suits your circumstances and then click Forward.
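As a sketch of the USB route, livecd-iso-to-disk (part of Fedora's livecd-tools) takes the ISO and a target partition. The file name and device below are assumptions: double-check the device node with fdisk -l first, since pointing it at the wrong disk will destroy data. The script only prints the command; drop the echo to execute it.

```shell
#!/bin/sh
# Sketch: convert a live CD ISO for a USB stick with livecd-iso-to-disk.
# Both values below are assumptions -- adjust them for your system.
ISO="Fedora-7-Live-i386.iso"   # your Revisor-produced (or downloaded) image
USBDEV="/dev/sdb1"             # your USB stick's partition: VERIFY THIS FIRST
echo livecd-iso-to-disk "$ISO" "$USBDEV"   # drop 'echo' to run it for real
```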

Step 3—Revisor configuration

Each run of Revisor uses two configuration files: the first can be thought of as an index, pointing to a number of different repository configurations you can use; the second holds the information about the repositories, such as their URIs. By default, Revisor uses the file revisor.conf, found in /etc/revisor/, as the first configuration file, and points to one of a number of example files in /etc/revisor/conf.d/ for the repository information. By default these files give you a wide range of possibilities, including media based on Fedora 6, 7 or development, and targeted at the i386, PPC or x86_64 architectures (specified through the “Configuration section to use” drop-down menu). This is probably sufficient for most circumstances; however, if you want to use a local repository, or to add a third party repository not included with Fedora by default, you will have to edit a configuration file. The simplest way to do this is to edit one of the existing entries; for example, to use Revisor to create a Fedora 7 based system from a local repository, you would edit the file /etc/revisor/conf.d/revisor-f7-i386.conf, changing the line that starts with baseurl to point at the folder containing your local repository. The same is true for a remote, third party repository. If you do choose to edit one of these configuration files, be sure to hit the refresh button in Revisor so that the latest information is loaded. Once this is done, you can choose which repositories to enable using the check-boxes below, and specify the directory the ISO will be output to. Make sure you remember this! Once again, click Next.
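To make the baseurl edit concrete, here is a sketch of the kind of yum-style repository stanza you would adjust. The section name, keys and paths are illustrative assumptions; the real entries in /etc/revisor/conf.d/revisor-f7-i386.conf may be laid out differently, so edit what is actually there rather than pasting this in:

```
[fedora]
name=Fedora 7 - i386 (local mirror)
# point baseurl at your local copy instead of a remote mirror
baseurl=file:///srv/mirror/fedora/7/i386/os/
enabled=1
```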

Step 4—load kickstart data

A kickstart file allows you to specify a large number of options (including the package manifest), saving you from having to manually configure them every time you create a live CD. Revisor provides a sample kickstart file which should be sufficient for most uses: if you want to customise the details held in this file, you can select the appropriate check-box under the “Advanced Options” section of this screen. In this tutorial, I will




simply be customising the package selection, but if you want to experiment with more advanced options, go right ahead! Advanced users can create their own kickstart file using the graphical system-config-kickstart application, which is installed as a dependency of Revisor and found under the Applications→System Tools→Kickstart menu entry. Make sure you remember where you save this kickstart file, as you will need to point Revisor at it to make use of it. Once you are finished on this screen, click Next.
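For a feel of the kickstart format, here is a minimal hand-written fragment. It is a sketch rather than Revisor's sample file, and the group names are assumptions; use system-config-kickstart or the sample file for anything real:

```
# minimal kickstart sketch (assumed group/package names)
lang en_US.UTF-8
keyboard us
timezone US/Eastern

%packages
@base-x
@gnome-desktop
firefox
```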

Step 5—package selection

The next screen allows you to select packages for inclusion on the media, and uses the same interface as Fedora’s Pirut (Add/Remove Applications) program.

Figure 2: Package selection with Revisor

In the default view, categories are available on the left and sub-categories on the right; each sub-category contains a number of mandatory and optional packages, whose selection can be refined by clicking the Optional Packages button. To add or remove all optional packages in a sub-category, right click it and select the appropriate option. These options only become available once that sub-category is itself selected. You can also browse the available packages as a list, or search for an individual package, using the tabs at the top of the screen. Once you are satisfied, Revisor will resolve the dependencies of the selected packages, ensuring they have access to everything they need to work correctly. The amount of time this takes will vary depending on how many packages you selected.

Step 6—Go!

When Revisor has finished checking for dependencies, it will move on to the next screen, which gives you some final pieces of information about your media before proceeding: the number of packages selected, and the approximate size of the final image to be created. If you’re happy with this, click Forward; otherwise, return to the previous screen to further refine your package selection. The advanced options for this section have not yet been implemented. At this point you may want to go and make a cup of tea. It will take some time to grab all the packages from the repository—especially if it’s a remote repository—and compose them into your desired media. Keep




checking back from time to time, and once it’s done you have one final job left: burn the ISO.

Step 7—Burn the ISO

Remember where you told Revisor to output the ISO image to (hint: the default is /srv/revisor/)? Well, once it’s finished, take a peek in this folder and you should see your brand new ISO image. To burn this to a disc, simply right click it and select the option from the context menu to burn the image to a disc—it’s that simple! Again, depending on how large the image is and the speed of your drive, go and make another cup of tea, as it could take a while. Once it’s finished, your disc will be ejected and you’re ready to go!
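If you would rather burn from a terminal than the context menu, the usual tools are cdrecord or its fork wodim. This is a sketch: the ISO file name is illustrative, /dev/sr0 is an assumption for your burner's device node, and the command is printed rather than executed.

```shell
#!/bin/sh
# Sketch: burn the Revisor output from a terminal.
ISO="/srv/revisor/my-livecd.iso"   # illustrative name; use the file you found
echo wodim -v dev=/dev/sr0 "$ISO"  # drop 'echo' to actually burn the disc
```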

Conclusion

Easy, right? There are loads of other customisation options you might want to explore (beyond changing the packages included on the disc), but they’re best left for you to experiment with (hint: kickstart!). Keep an eye on this exciting project too, as there is talk of creating a web interface for remotely creating your own custom Fedora system, although for now it is very much just talk.

Biography

Jonathan Roberts: Currently a gap year student! I have a huge interest in Free Software which seems to keep growing. I run the Questions Please... podcast, which can be found at questionsplease.org. On an unrelated note, I'm reading theology at Exeter next year.

Copyright information This article is made available under the "Attribution-Sharealike" Creative Commons License 3.0 available from http://creativecommons.org/licenses/by-sa/3.0/. Source URL: http://www.freesoftwaremagazine.com/articles/create_your_own_live_cd_in_7_steps


Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

Using third party schemes to install applications, codecs and drivers in GNU/Linux Where there's an easy way, Phil will find it By Phil Thane A common criticism levelled at GNU/Linux and free software by proprietary software companies is that installing applications, drivers and media codecs is made difficult. Well, it isn’t. While there can be problems installing software in GNU/Linux, these difficulties also exist on other systems and are rarely show-stoppers. And, to be fair, many companies are still struggling to get their products working properly on Vista. A common issue with any operating system is that different versions of a program often need different drivers and supporting applications. Naturally, similar problems can occur with GNU/Linux, and the variety of distributions (distros) and rapid upgrade cycle can seem confusing to newcomers and outsiders, but a new crop of installers is making things really quite simple.

The problems Assuming you are running one of the mainstream distros such as Mandriva, Ubuntu, Fedora or SUSE, your GNU/Linux system will come prepackaged with a graphical desktop such as KDE or GNOME and a lot of applications. If you are the least bit adventurous, there will come a time when you will want to add more. In the early days of GNU/Linux the only way to install anything was to compile from source and manually move the compiled “binary” into the directory of your choice. Then various companies and collectives started to produce ready-compiled selections (known as packages) and distribute them (hence “distros”). To make life easier, package management systems were created to automate the installation. The most common are the Red Hat Package Manager (RPM), originally developed by Red Hat but also used by Mandriva and SuSE; Advanced Package Tool (APT), originally developed by Debian but also used by Ubuntu, Xandros and others; and pkgtool, developed by Slackware. Just using the same package management system doesn’t necessarily make distros’ packages compatible. An RPM package (.rpm) for OpenOffice.org as supplied by SuSE may not be the same as a .rpm supplied by Fedora or Mandriva. Each may have modified the software to take account of their unique file system or to fit in better with their “look and feel”. There is even conflict within individual distros—a SuSE 9.0 .rpm may not install correctly on SuSE 10.0. Debian based distros tend to be more consistent. Many .deb files meant for Debian will install and work on Ubuntu. Then, as if that wasn’t enough, there are “dependencies”. Because many applications have a great deal in common, it makes sense to put common features into “libraries”, where they can be accessed by different applications. Microsoft does this with .dll files, but the practice is much more pervasive in the free software world.
Unfortunately, making sure your PC has the correct library files to enable a specific application to run can be a problem. Providing you only use the applications supplied by your distro, either via .iso files (CD or DVD) or on-line via APT, then the dependencies are solved for you. Once you step outside of that comfort zone, tracking down the right versions of each library can be a pain, rightly known as “dependency hell”.
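You can see this library sharing at work on any GNU/Linux box: the ldd tool lists the shared libraries a given binary needs at run time (a quick illustration, not part of any installer; the exact paths printed vary by distro):

```shell
# Ask the dynamic linker which shared libraries a binary depends on;
# almost any dynamically linked program will pull in the C library (libc)
ldd /bin/ls
```

If one of those libraries were missing or the wrong version, the program would refuse to start, which is exactly the problem dependency-resolving installers set out to solve.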


The solutions If you have installed one of the mainstream GNU/Linux distros from a DVD or collection of CDs, then it is very unlikely that you actually installed more than a quarter of the apps that are on there. The first port of call when looking for an application you don’t have is your distro’s DVD or CD set. Each of these distros has an “Add new software” feature somewhere in the menu options.

Figure 1: Synaptic is an APT GUI used by many distros Some distros, such as Ubuntu, take a different approach. Instead of providing a DVD’s worth of applications, the basics come on a single CD (or CD .iso) and anything you want to add is done via APT, generally using a graphical “front end” such as Synaptic or Adept. APT has the happy knack of resolving dependencies on the fly; if you request an application that requires a library file you don’t already have, APT will add it to the selection. Applications for .deb based distros are stored in on-line repositories. The default setup on Ubuntu only connects to their “Officially supported” repositories, but both Synaptic and Adept have a menu option to add others (read the warnings first, though).

Alternatives There comes a time, though, when most GNU/Linux users want a piece of software not supported by their distro. It may be an application not considered sufficiently stable or too esoteric for inclusion, or a proprietary driver or codec, which the distro team won’t include in case it causes all manner of legal problems for them. Whatever it is, you need a way to install it, and there are several to look at.

Autopackage


Figure 2: Autopackage is almost the Install Shield for GNU/Linux Autopackage aims to do for GNU/Linux what “Install Shield” does for Windows. It uses a completely new package format, which includes a pointer to where required library files can be found. From a user’s perspective it just works, but in the background Autopackage checks dependencies and resolves them automatically. To install software with Autopackage, choose from the list of available autopackages and then click on the “Download File URL”. You will find dozens of packages available, but Autopackage’s most visible weakness is a relative lack of selection.

Figure 3: Make your Autopackage script executable The actual download is a small “script”, not the package itself. Save it somewhere convenient, then make it “executable” (In Konqueror right click, choose Properties→Permissions then check “Is executable”). Now double-click the file to start the installer and follow the on-screen instructions.
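If you prefer the terminal to Konqueror’s Properties dialog, the same steps are two commands. The sketch below uses a stand-in script (the file name and contents are made up for illustration; you would use the script you actually downloaded):

```shell
# Stand-in for a downloaded Autopackage script (made-up contents;
# normally you would have saved the real script from the site)
printf '#!/bin/sh\necho "installer would run here"\n' > demo.package

# The command-line equivalent of ticking "Is executable"
chmod +x demo.package

# Double-clicking in Konqueror is equivalent to running it directly
./demo.package
```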


Figure 4: Installation complete, the easy way Autopackage does automatically what a GNU/Linux expert would do manually—check dependencies and install all the right libraries to make an application work. It works on most distros, but it isn’t perfect and some packages don’t install on some distros. For example, Xara Xtreme would not install on Kubuntu Feisty running on my AMD64 machine. In any case, it doesn’t do any harm to try.

klik klik is close to install heaven, though in the normal sense it doesn’t “install” software at all. klik is a packaging tool that enables developers to create ready-to-run packages that can be launched from anywhere. You can install packages to your home directory without needing a root password. Put one on a portable drive or USB flash memory and you can run an app on any PC you visit. klik puts the responsibility for dependencies on the developers, not the users. As a humble user, I think this makes sense. The developer knows what it takes to make his or her program work and can make sure the compiled klik package includes everything.

Figure 5: Don’t install at all, just klik To install the klik client, use the command wget klik.atekon.de/client/install -O - | sh. If you’re unfamiliar with the GNU/Linux command line, all you need to do is copy and paste this command into a terminal and press Enter. The shortcut Alt + F2 will open a terminal or command dialog on most distros. Follow the instructions that pop up on your screen and you’ll be taken to the klik “store”. It’s that simple. (Ubuntu users, please note the instructions on the klik site about installing libstdc before using klik.)



klik has a wider range of packages than Autopackage and is simpler to use. The single-file installation means klik packages will not interfere with anything else on your system and can be safely removed by deleting that single file.

Automatix2 Automatix is a graphical package manager for the installation, uninstallation and configuration of commonly requested applications. Currently it supports variations on Ubuntu, Debian, and Mepis 6. Automatix has no qualms about offering free-as-in-beer, proprietary or even commercial packages such as CrossOver Linux, though it does start with a splash screen warning users in the US that some codecs they make available may infringe US laws. You have been warned.

Figure 6: Read the warning To install Automatix, browse Automatix’s very informative site, then click on Installation. There are a couple of options: downloading a .deb file to suit your system and installing it using your usual GUI, or using APT on the command line. Normally I’d recommend the graphical method, but trying it on a new PC with very little in the way of development packages installed threw up a host of dependency problems, and it turns out to be easier using APT. Follow the relevant links on the Automatix site, then follow the instructions step-by-step. Copy each command (highlight then Ctrl + C) and insert each into a terminal with Shift + Insert followed by Enter. The final command will throw up the same error message you would get using a graphical installer, but APT suggests a solution: the command sudo apt-get -f install which will compel APT to install all the supporting packages.

Figure 7: Choose and click in Automatix2 Once installed, you can run Automatix from the Kmenu and grab a tremendous range of codecs, drivers and applications very easily. If you are tempted to try out the proprietary Nvidia 3D drivers, do make a note of the

command line instruction for restoring the basic Xorg setup in case things go wrong: automatix-nvidia-restore.

Looking to the future: Click’n’Run (CNR) These three systems represent the cream of the crop of the present GNU/Linux package management field, outside of those schemes defaulted to by each distribution. If you find that it supports the particular package that interests you, any one of these options will be a simple and effective solution for software installation. However, there is one more competitor on the horizon.

Figure 8: CNR Coming Soon Back in January, Linspire announced that they would be making their CNR Warehouse system available to users of other versions of GNU/Linux in the second quarter of 2007. At the time of this writing, the quarter has yet to end and the CNR site still says “Coming Soon”. If it works properly, its “one-click delivery service” could revolutionise installing new software on GNU/Linux. Time will tell. Check the CNR site for the latest details.

Biography Phil Thane: Originally a Design & Technology teacher in England, then Support Manager at TechSoft UK Ltd in Wales, with a hobby of freelancing for educational and technical magazines. These days Phil is a freelance writer pretty well full-time. For links to publications see www.brynvilla.llangollen.co.uk (http://www.brynvilla.llangollen.co.uk).

Copyright information Source URL: http://www.freesoftwaremagazine.com/articles/installing_applications_codecs_and_drivers


Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

SSH beyond the command line File servers made easy with SSH By Nathan Sanders If you’re an experienced administrator, you’ve probably used SSH to remotely access a troublesome box or your personal computer. For those who don’t know: SSH is a great way to fiddle with a computer from miles away as if you were sitting at its keyboard, but it’s also just about the simplest and most secure way to configure your computer to let you access its files from anywhere. You can use SSH on nearly every operating system to transfer files to and from your computer over the internet or a LAN.

Is SSH for more than commands? SSH is traditionally used to give you remote access to a computer’s shell (command line terminal). Earlier protocols in this vein, such as telnet and rlogin, did not encrypt their traffic or take other security precautions that are necessary with untrusted networks like the internet. Depending upon the exact server, protocol, and configuration you use, SSH may be the most secure way to access a computer over a network. If you’re a typical user, however, you may never need to access the command line—or even graphical applications—on a remote computer. Even so, SSH will still be useful in sending your latest batch of photos home from your hotel, retrieving the latest version of a report left sitting on your desktop, or for any other situation requiring a file transfer. If you’d like to get more in depth, you can use it to load or edit a spreadsheet on another computer or keep documents synchronized between machines. If you’ve seen an experienced user work his magic with SSH, all of this may seem rather complicated. I assure you, though, that setting up an SSH server on GNU/Linux or another Unix-like operating system—and even Microsoft Windows—is as simple as installing any other software, and accessing your server from another machine running nearly any OS is even easier. Setting up an SSH server is as simple as installing any other software, and accessing your server from another machine is even easier

Installing the server The SSH protocol has been around for a while and several server packages have sprung up around it. The choice is a bit simpler than KDE vs GNOME, though, because almost everyone you meet will recommend OpenSSH. It’s tried, true, secure, free software developed by the OpenBSD project but made available for nearly every operating system under the sun. If you’re using a Unix-like system such as GNU/Linux, installation really is as simple as for any other software. If your distribution has a package management utility, OpenSSH is undoubtedly available from it. In Debian and Ubuntu, it’s listed as “openssh-server”. If you normally have to install software from source, you can get the code from the OpenSSH website. Unfortunately, things are a bit more complicated if you use Windows. You will need to use some software called Cygwin to emulate the GNU/Linux platform on your Windows box. If you are already familiar with Cygwin, you can install OpenSSH as a native package. Wait, though! If you don’t use Cygwin already—wouldn’t you know it—someone has gone and made the whole process just as easy. The OpenSSH for Windows project has combined OpenSSH with only the absolutely necessary components of Cygwin, rolled it all up into a ball, and released it as free software on

Sourceforge. Download and double click the “binary installer” as you would any other Windows setup package.

Some minor tweaking, if desired Below, I will explore some of the configuration options of OpenSSH. Odds are your distribution has your server set up more or less like this by default, so you may safely skip this section and refer back later on if you experience problems. OpenSSH for Windows users must also depart from the article here, as configuration on that platform works a little differently. You can refer to the README file, accessible from Start→All Programs→OpenSSH for Windows, for configuration instructions. In particular, make sure to set up user passwords using the simple instructions in the readme. For those continuing on, OpenSSH configuration can easily be done by editing a configuration file (never as scary as it sounds). On most systems, this file will be located at /etc/ssh/sshd_config. Open this file as the root user using your favorite text editor and you will find that your distribution has supplied a long template configuration file. OpenSSH has a broad range of configuration options that can have serious effects on the security of your system. I will trust that the default settings provided by your distribution are sensible and focus on changing the configuration to serve the specific requirements of file transfer.

Password authentication First, you can configure the SSH server to authenticate users with a password. You could debate whether a password is more secure than other methods of authentication, but it is very convenient when trying to access your machine from an arbitrary remote location where you might not have access to anything but your memory. Make sure the following line is set in your configuration file: PasswordAuthentication yes

Remember that any line in the configuration file beginning with # is a comment, so delete the hash mark if you want to activate the line.

Allowed users Next, you may want to ensure that you are the only user who can access your computer through SSH. By default, anyone with an account on your computer may be able to log in with OpenSSH. By specifying your user, you can control which files can be accessed remotely, as only the files to which your user has permissions will be accessible. Put the following line in your configuration file: AllowUsers [my user]

Now only [my user] will be able to log in, using his regular account password. You can do a lot with host and user restrictions with SSH, but this should be enough for our purposes.
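If you would rather script these two edits than open an editor, a couple of sed and echo lines will do it with GNU sed. The sketch below works on a scratch file so it is safe to try; for the real thing you would operate on /etc/ssh/sshd_config as root, after making a backup, and the user name "nathan" is just an example:

```shell
# Build a scratch file with the commented-out template line,
# standing in for /etc/ssh/sshd_config
printf '#PasswordAuthentication yes\n' > sshd_config.test

# Uncomment (or overwrite) the PasswordAuthentication setting
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' sshd_config.test

# Restrict logins to a single account ("nathan" is a placeholder)
echo 'AllowUsers nathan' >> sshd_config.test

# Verify both settings are in place
grep -E '^(PasswordAuthentication|AllowUsers)' sshd_config.test
```

The final grep should print the two active lines, confirming the edits took effect.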

Don’t forget to restart! You will need to restart the SSH server before these changes to the configuration file take effect. The following command (as root) will work on most Unix-like systems: /etc/init.d/ssh restart

The ssh in the above command may be replaced by, for instance, sshd, depending upon your distribution. You can use your shell’s tab-complete feature to guess at the right service name. If you’re having a really hard time entering this command, you could just restart your computer—but that shouldn’t be necessary. To access the server you have just configured, you will need to know the exact address of your system. If you don’t have a domain name configured, just use your IP address. Of course, depending upon your internet service provider, this address could change periodically and you may need to use a dynamic DNS service to get a stable address. If you are accessing the machine within a LAN, check your router’s configuration for the address of each machine. If you are connecting to the LAN from outside, make sure the standard SSH port (22) is being forwarded to the machine.

Down to business: trading files There are a lot of SSH clients out there available for nearly every platform imaginable. I’m going to take a wild guess and assume that the majority of you Free Software Magazine readers are using GNOME, KDE, or Windows at home or at work. Below, I’ll walk you through graphical clients that were made for each of these three platforms and a command-line client that will work almost everywhere else.

The old standby: scp OpenSSH is distributed with scp, a command line tool for transferring files between computers with very simple syntax. Unlike FTP, scp file transfers are tunneled through SSH, so they are encrypted and secure out of the box. The GUI programs discussed below are mostly front ends to scp and you should, by all means, use them if they are available. If you can’t get your hands on them, though, or if you just want a better understanding of how the file transfers operate, read on about scp. The syntax of a simple single file transfer with scp is very concise: scp [source file location] [target file location]

You can make either the source or target location remote and the other local. The syntax for the remote location should include the user name to authenticate as on that machine, and should be formatted as follows: [user]@[address]:[remote directory]

For example, examine the command used to transfer a file from the Documents directory of my local computer onto the desktop of my laptop over my LAN: scp /home/nathan/Documents/Document.txt sanders@192.168.0.4:/home/sanders/Desktop/

When executing this command, you will be prompted for the password of the user on the remote machine. If it is the first time you are accessing the machine, you may be asked to verify its RSA host key—it is safe to respond in the affirmative. The scp tool has many advanced features to limit bandwidth usage, specify particular SSH configurations, copy directories recursively, and more. For a concise listing of these features, consult the scp manual by entering the command man scp.

GNOME: using Nautilus with remote files You may not have realized it, but GNOME’s default file manager Nautilus ships with capabilities for accessing remote hosts via several different protocols, including SSH, through an easy-to-use dialog. Open the dialog in Nautilus from File→Connect to Server. Select SSH from the “Service Type” menu and fill in the Server field with the appropriate information. You will be prompted for a username and password. Once entered, a Nautilus window displaying the root directory of the remote machine will be

presented.

Figure 1: Nautilus’ remote server dialog is wonderfully simple. The only thing you need to enter here is the server address, but you can specify the other variables if you wish Once you have an open Nautilus window to your remote PC, you can play with its files just as you would local ones. You can even open multiple remote servers through multiple protocols and drag & drop files between them. You may find that certain remote files will not open correctly when clicked in Nautilus. This is because only certain GNOME applications support accessing remote servers. In problematic cases, you may simply copy the file to a local directory before attempting to open it. You can, of course, copy the changed file back over to the remote server when you are done. gedit is one application that does support remote servers, so if you click on a text file in Nautilus (and gedit is your default editor) you should again be prompted for the remote user name and password and then presented with the document. Changes made with gedit will be saved directly to the remote server.

KDE: learning how to fish KDE users may access remote servers in a manner similar to Nautilus in GNOME. In Konqueror, open Go→Network Folders to be taken to the remote:/ URL. From there, you can add a remote SSH server with the “Add a Network Folder” link. You will be prompted to choose a protocol, name the remote location, and enter details about the server. You may specify a specific directory on the remote server if you wish. The remote location will then be forever accessible as a shortcut from remote:/ and can be bookmarked or accessed from other KDE applications as normal. You will be prompted for a password each time you access the server.


Figure 2: Accessing remote directories in Konqueror takes a few more clicks than in Nautilus, but the process is very similar When you add a remote location this way, you are guided through setting up a link using KDE’s fish KIO-slave. KIO lets you use simple URLs to access remote directories in KDE applications. Each “slave” supports a different protocol—fish works with SSH. As with GNOME, some programs will refuse to access remote servers, but most KDE 3.5 applications will play nicely. You can skip the Add a Network Folder wizard and construct your fish URLs directly without too much difficulty. Use the following format: fish://[server address]:[port]/[remote directory]

You will be prompted for a username and password upon accessing the URL. Although this prompt is important for security, there may be times when it is inconvenient. For instance, you may want to configure an application to load a remote file upon launch, in which case you wouldn’t want to be bothered with a password prompt. Use the following format to avoid having to input anything interactively: fish://[user]:[password]@[server address]:[port]/[remote directory]

Of course, your password is now stored in that application’s configuration file. At the very least, you may want to make sure the file’s permissions are set to allow read access only to you. You can do a lot of fun things with network transparency and fish. Right click on a file in Konqueror and select Actions→Print to directly print files from remote machines without having to set up a print server. Some KDE media players will allow you to play music and videos directly from a remote server, but limitations in the KDE 3 KIO system mean you may not be able to seek through the files (fast forward or rewind). Experiment with your own favorite KDE applications to see the extent of what you can do with fish.
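Restricting that configuration file to your eyes only is a one-line chmod. The file name below is invented for the example; substitute the real configuration file of whichever application stores the URL:

```shell
# A made-up config file holding a fish:// URL with an embedded password
echo 'fish://nathan:secret@192.168.0.4:22/home/nathan' > appconfig

# Owner may read and write; group and others get nothing
chmod 600 appconfig

# Confirm: the mode should now read -rw-------
ls -l appconfig
```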

Windows: less integration, but just as much fun with WinSCP Microsoft has not embraced SSH to the extent that the major free software desktop environments have, but you can still perform typical file transferring tasks with SSH from a Windows box using third party clients. Several Windows scp clients exist, but I will focus on WinSCP, a popular, simple, and free tool. WinSCP 4.0 (currently in beta) includes support for FTP, but I will explore the 3.x series and its SSH functionality only. Download the standalone executable or installation file for WinSCP 3.8.2.


Figure 3: The scheme for logging into an SSH server from a GUI front end should be clear by now, and WinSCP calls for no exceptions WinSCP’s GUI can look like either a complicated Windows Explorer window or the old Norton Commander two-pane interface, depending upon your preference. You may also select Icon, List, and Details views as you would in Explorer. Either way, navigating within directories works as in other file managers and you can drag & drop files around as you please. WinSCP has some advanced features for synchronizing remote directories and managing remote server sessions that will not be addressed here. You may also execute commands on files on the remote server using the File→Custom Commands menu. WinSCP’s Preferences dialog is filled with dozens of settings to configure the way it uses the SSH protocol. While WinSCP is definitely less integrated than the GNOME and KDE solutions discussed, the developers have made a valiant effort to establish their application’s functionality within Explorer. Upon installation, or from the Preferences dialog, you may register WinSCP to handle SSH protocol addresses and add WinSCP to Explorer’s context menus. There are also options for integrating WinSCP with the PuTTY Windows SSH client.

Don’t stop there! You have just been introduced to a very limited set of features made possible by the SSH protocol. I think you will find that configuring an OpenSSH server for transferring files is easier than setting up an NFS, SAMBA, or FTP server, while achieving a level of security that only an encrypted SSH session can provide. If you find yourself in agreement and using SSH regularly, explore some of its other uses. With just a little bit more reading, you could learn how to execute commands on your remote server or tunnel arbitrary network tasks through the secure piping of SSH. The web is filled with tutorials for these common SSH tasks. Read Free Software Magazine issue 20 (the next issue!) to learn how to run GUI GNU/Linux applications remotely on a Microsoft Windows host.

Biography Nathan Sanders: Nathan Sanders is an experienced free-software user and frequent contributor to publications concerning open-source software.


Copyright information Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved. Source URL: http://www.freesoftwaremagazine.com/articles/ssh_beyond_the_command_line

Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

The "alias" command Alias: Speed Dial for your Shell By Gary Richmond You almost certainly have speed dial set up on your home, office and mobile phone. It saves time, ensures against a failing memory and allows you to work smarter. Devotees of the command line don’t have to be left out in the cold. One of the crown jewels of GNU/Linux is that every user, be he ne’er so base, has at his or her fingertips the kind of power of which even Caligula could not dream. Alright, I’m exaggerating—a little. GNU/Linux comes with many commands and you use them every time you open a console and interact with it through the shell. The Bash shell (often described as the great grandaddy of all shells or, less flatteringly, as “an historical wart on the Bourne shell”) comes as standard with virtually every version of GNU/Linux and there are others too: Fish, Korn and Zsh. Whether you are listing file contents, configuring your wireless card, copying, deleting or moving files or appending arguments to built-in commands you are utilizing those features. The built-in command I want to look at is alias. (If you want to be technical, alias has been defined as a “parameterless macro” and is not to be confused with IP aliases, a process for adding more than one IP address to a network interface.) It is a perfect example of a command that is simple yet useful and its use is restricted only by your knowledge of the Unix commands and the capacity of your imagination to exploit it. It is probably at this point that you might want to sit down and think about working faster by working smarter. In short it’s time to get out your pencil and paper and start making a list of all of those commands (and composites thereof) that you use most frequently and see if you can’t make them more compact. In the process you will not only work more efficiently; you will have increased your knowledge of the GNU/Linux commands and file system.

You have mail. You also have aliases Yes you do. You didn’t install them but they’re there. After you grabbed, burned and booted your chosen distro version they are on your hard drive; what they are and where they are is the question. So what exactly is an alias? Well, it has exactly the same meaning as it does in the everyday world. If the local bobby (or cop, for those in the US) were on your trail you might want to change your name in order to disguise your identity. In the shell world there is no such sinister criminal intent. The purpose is not to deceive but to substitute and shorten. Where are your default aliases? As ever, it depends on what version of GNU/Linux you are using. They could be in one of three places: .bashrc, .bash_aliases or .profile. Mine are in /etc/bashrc, which means, incidentally, that they are available globally. The default install was fairly spartan and included only the first four aliases. The rest are my own post-install additions (figure 1).


Figure 1: Location of aliases in .bashrc

Creating an alias So, when you type ls you are already using a built-in alias and your files in any given directory will be displayed, but you might find this a little underpowered. What if you want to list all your hidden files and file permissions? You could type ls -alp and that will do the trick, but by configuring it as an alias you will save time and typing if it is frequently used. To do this, give your alias a name. Clearly, the name needs to be contextual and meaningful to you and, above all, it should be short. If you decide on a long name you have defeated the whole purpose of creating an alias in the first place! If you type: alias long="ls -alp" your alias is created. (Note that you cannot have an equals sign in your alias name.) Two points, however: first, you can use single or double quotes when creating an alias (single quotes save you having to reach for the Shift key); second, you have just made an alias, but as you will see the task is not finished just yet.
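The whole round trip looks like this in a Bash session; with no arguments, alias lists every definition, and with a name it echoes just that one back:

```shell
# Create the shortcut: "long" now stands for a full, permission-showing listing
alias long='ls -alp'

# Ask the shell to show the definition back
alias long
```

Typing long now behaves exactly like typing ls -alp (in an interactive shell).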

Make your aliases stick

If I open up a console and make an alias like the one above, it will be available for immediate use. But if I log out or power down, when I restart and type that name the console will return the classic error message: command not found. Why is that? Simple. Aliases created in a shell will not survive a reboot, and an alias created in one shell will not work in another shell. Here is what I get when I type alias without arguments in my default Bash shell (see figure 2):

Figure 2: List of aliases in the default shell

If I now open a session of the Korn shell in the console (by typing ksh) and then attempt to list all my aliases, I get what is shown in figure 3.

Figure 3: Aliases in bash and korn

You can see the difference. However, what you are really interested in is aliases that persist across a reboot regardless of the shell you are using. For this you need to add them permanently to one of the three files listed above. You will need to do this as root (at least for a global file like /etc/bashrc), so it might be a good idea to back up those files just in case overzealous or careless keying drops you into a whole world of pain and hurt. To make your new aliases reload the easy way just type . .bashrc or . .bash_aliases if that is where they are on your system. Note that this is a dot (“.”) followed by a space, another dot and the file name. The perceptive amongst you will have worked out that hand-knitting aliases one at a time is a rather tedious business, and it might be a good idea to make a list of all the shortcuts you want and key them all in in one pass.
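A minimal sketch of the whole round trip, assuming your aliases live in ~/.bash_aliases (your distro may use ~/.bashrc or /etc/bashrc instead):

```shell
# Append a batch of aliases to the file in one pass:
cat >> ~/.bash_aliases <<'EOF'
alias long='ls -alp'
alias up='cd ..'
EOF

# Reload them in the current shell without logging out.
# The leading dot is the "source" builtin: dot, space, file name.
. ~/.bash_aliases
```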

Taking stock of your aliases

Having set all of this up, you take a holiday in the sun for three weeks to reward yourself for all your clever and hard work. Upon your return, you can’t quite remember what aliases you created. Jogging your memory is simplicity itself: just open up a console and type alias without options or arguments and it will display them all. If you need to know how each one is constructed, just add the alias name, as illustrated in figure 4.

Figure 4: All aliases listed and alias content

Firefox users will be familiar with installing useful extensions to make that browser more productive but, inevitably, there comes a time when you need to prune them to prevent bloat, and you realize that the killer extensions you installed six months ago are gathering dust. Aliases are no different. Review them and prune ruthlessly; but how? Well, appropriately, create an alias that shows which commands you have called up for service and how often they have been invoked. Here is one to do the job:

alias most='history | awk '\''{print $2}'\'' | awk '\''BEGIN{FS="|"}{print $1}'\'' | sort | uniq -c | sort -n | tail -n 20'

I called this alias “most”, short for most used commands, but you can choose something that is most meaningful to you. What it outputs is shown in figure 5.

Figure 5: Using an alias to identify frequent commands

After tail -n I inserted the number twenty which, as you will see in the screenshot (figure 5), listed the twenty most used commands. This provides a guide for deciding which ones are the best candidates for the alias treatment. If you don’t include a number the alias will output a default of ten. Of course, for power users of the command line that figure can be much larger. It’s your call.
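To see what the counting stages of the pipeline do, you can run them against a throwaway stand-in for your history. (The real alias reads the history builtin, whose lines carry a leading index number, which is why it prints $2 rather than $1.)

```shell
# A stand-in "history" file so the counting stages can be seen in isolation:
printf '%s\n' 'ls -alp' 'cd /etc' 'ls' 'ls /tmp' > sample_history

# Extract the command word, count duplicates, sort by frequency,
# and show the last (most frequent) two lines:
awk '{print $1}' sample_history | sort | uniq -c | sort -n | tail -n 2
```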

Amending and deleting aliases

Having reviewed the most frequently used commands and the aliases constructed to use them, you can decide which ones are for the chop. Removal is simplicity itself: if you wanted to remove that alias for most just type unalias most and you’re done. Well, not quite. You have only removed the alias for the duration of the current shell and login. If you have set it up in the appropriate configuration file it will reload itself when you reboot. To make its removal really permanent you must delete it from the configuration file using your favourite text editor (as root). If you are the uber-cautious type and you think there is the slightest chance you might ever reuse that lovingly-crafted alias which is three lines long, you can always err on the side of caution and disable it in the configuration file by commenting it out. Just put a “#” sign in front of the selected alias(es). If you ever decide to revive it/them, just delete the hash sign (again, both operations as root). Another way to delete an alias is simply to overwrite it with another—you can even amend it to add in features to suit new requirements. If you want to temporarily delete all of your aliases in the current session just type unalias -a. Although temporary aliases have their obvious disadvantages, they are very useful for experimenting and testing without any fear of making your machine unbootable. Experiment to your heart’s content until you get that alias working just as you want it before committing it permanently. For those of you for whom even the Medieval (sin of) sloth is too energetic and one key press is one too many, you can short circuit all of this by using a backslash before an alias. For example, \cp will run the underlying command without asking you if you really want to copy, thus temporarily overriding the alias cp='cp -i' that you set up. This works for aliases that shadow a real command (like cp), but not for new aliases like most, where there is no underlying command to fall back on.
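A sketch of the removal, overwrite and backslash techniques together; the alias names here are throwaway examples, and the shopt line is only needed because this runs as a script:

```shell
shopt -s expand_aliases     # script-only; interactive shells need no such step

alias demo='echo hello'
unalias demo                # gone, but only for the current session

alias ll='ls -l'
alias ll='ls -lh'           # redefining an alias simply overwrites the old one

# The backslash escape: with cp aliased to cp -i, a leading backslash
# bypasses the alias and runs the real cp directly.
echo data > sample.txt
alias cp='cp -i'
\cp sample.txt sample_copy.txt
```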

Alias and security

If there is one thing GNU/Linux users rightly boast about when criticizing Windows it is the superior security features of free software operating systems. This is not an empty boast, but perhaps it should be qualified by saying that, good as that security is, there is no such thing as one hundred per cent security, and what comes pre-installed can always be improved upon. You could of course turn off your computer, unplug it from the mains, encase it in concrete and bury it at the bottom of a nuclear blast-proof bunker. It would certainly be secure, but a bit short on usability. An alias, though humble enough, can add a little security to your daily work on the command line. cp, mv and rm are familiar

commands, and on many distros they come pre-aliased, but even the untrained eye can see the potential dangers lurking in their indiscriminate use. A moment’s carelessness with rm could delete a file not intended for deletion. Many an impulsive use of the rm command has led to tragic scenes of sobbing, inconsolable users slumped over their computers as hundreds or thousands of files are deleted to the accompanying symphonic composition called “hard disk grinding”; and remember, this command bypasses the trash/recycle bin. Getting the files back may be difficult. The simplest of aliases can give you an extra layer of protection, and all you need to do is type alias rm='rm -i', alias cp='cp -i' or alias mv='mv -i'. Any of these aliases will now be interactive and ask you if you really want to remove, copy or move files respectively.

Noclobber—supplementing alias

Good as interactive aliases are, you have not plugged all the gaps just yet. The > and >> operators allow you to redirect a command’s output to a file and to append a command’s output to a file respectively. Misuse of these operators can lead to disaster. The feature to preclude this possibility is the wonderfully named noclobber which, when set, will throw up an error message when you try to overwrite a file. You can set it temporarily by giving the command set -o noclobber. But, like setting up aliases, it will only become permanent once you have added it to a configuration file like .bashrc. To delete it, open the file with your chosen text editor and remove it—or, if you prefer, disable it until required again without having to re-key it by preceding its entry with the “#” sign (as root). To disable it on a per-session basis in your shell type set +o noclobber. Again, like the option for alias, a third alternative is available. Without either having to delete or disable noclobber you can override it by using >| which will allow you to overwrite a file whilst retaining the noclobber configuration. Power and flexibility—the essence of a good system.
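noclobber can be watched in action with a throwaway file:

```shell
set -o noclobber
echo first > guarded.txt

# A plain > now refuses to clobber the existing file:
echo second > guarded.txt 2>/dev/null || echo "overwrite blocked"

# The >| operator forces the overwrite while leaving noclobber set:
echo second >| guarded.txt

set +o noclobber            # back to normal for the rest of the session
```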

Tip(s): If you are using the tcsh shell, set noclobber and unset noclobber are the commands you will need, and >! will force an override. The tcsh shell allows you to embed command line arguments in an alias where bash does not: use !:1 for the first argument and !:2 for the second, but you must escape the ! in the alias definition (i.e. \!:1). Not unexpectedly, as with programming languages, certain words are reserved and cannot be used when devising aliases: while, done, do, then, until, else.

Turbo-charge your aliases. Turn on tracking. What is tracking? Well, aliases are frequently used as shorthand for full path names, and one particular facility allows you to automatically set the value of an alias to the full path name of the corresponding command. That’s a tracked alias. It speeds up execution because it eliminates the need for the shell to search the PATH variable for a full path name. Using set -h turns tracking on, thus the shell defines the value of the tracked alias. You should note that cd and pwd are not trackable (though aliasable). To remove all tracked aliases simply type alias -r. Appropriately enough, you can alias the alias command itself in Bash and Korn: alias a='alias' and alias a=alias respectively (though not in the C shell).

Nesting and pipes: No, it’s nothing to do with a briar-scented country ramble. You can make compound aliases by joining commands with a semi-colon after each one inside a single alias definition. It has been claimed that this is potentially dangerous because it can cause parsing of composite commands to do unexpected things; so far, I have not encountered any problems. Incidentally, you can create a single alias to launch two separate commands, or create two separate aliases simultaneously. Again, it’s your call. To give your aliases a little power tweak, why not use a pipe (|) to send the output of one alias as the input to another.
For example: alias dir="cd /etc; ls -alp | grep ^d; cd ~" creates an alias called dir which lists all the files in /etc, including hidden ones, in long format with file permissions, pipes the listing to grep to filter for lines starting with the letter ‘d’ (the directories) and finally returns you to your home directory.
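The compound alias can be tried directly; this sketch assumes a stock /etc directory, and the shopt line is script-only as before:

```shell
shopt -s expand_aliases     # script-only; interactive shells expand aliases by default

# Three commands chained with semi-colons inside one alias:
alias dir="cd /etc; ls -alp | grep ^d; cd ~"

# Running it lists /etc's subdirectories, then drops you back home:
dir
```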

Tip(s): If you were experimenting with and creating aliases and you forgot to make them permanent and/or logged off or powered down, all is not lost. You will doubtless be familiar with the history command, or with using the up arrow to scroll through that history. Using this you can locate the alias definition, and once you find it simply hit return and it will run for the duration of that session. This will only work in the shell in which you created that alias. You can then save it permanently as described above. (Security fanatics will argue that an enabled history is the route—or is that root?—of all evil, as a hacker could gain access to commands run as root. Just use set +o history to turn it off and set -o history to turn it back on.) Finally, in this tips section, something for those whose typing skills are abysmal and who would sooner sit through a Vogon poetry reading than improve their keyboard skills. Can’t spell something properly, or inclined to fumble at the keyboard? Alias your way out by creating several aliases for a tricky name like the browser Kazehakase if you launch it from the command line. Thus: alias Kaezhaksae='Kazehakase'. This is trivial, but it illustrates the point. It is also rank idleness and probably constitutes an abuse of the alias command. There is a better way to correct typos, and that is to enable a spell feature via the shopt builtin, which works for typos of the file path variety and also ties into the relationship between aliases and shell scripts.

Shop(t) until you drop

If you type help shopt in a console you will be told that it “toggles the values controlling optional behaviour”. Better still, just type it without options or arguments. What I get on Mepis is shown in figure 6:

Figure 6: Mepis shopt default

This is a complete listing of all shopt settings, on and off. To filter for an enabled list type shopt -s, and for a disabled list shopt -u. If you have been experimenting with this command and want to save the hassle of scrolling or searching through the list to see the status of one setting only, then type, for example, shopt cdspell. For the purposes of illustration the figure is somewhat truncated, but the entry we are interested in is cdspell, which is set to off; so if you type cd /ect (a typo for /etc) you will, of course, get an error message (figure 7).


Figure 7: Error message when shopt cdspell set to off

If, however, you type shopt -s cdspell this will set it to on (type shopt -s or shopt cdspell to confirm), and when you reproduce the original typing error it is automatically corrected and executed (figure 8).
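One wrinkle worth knowing: cdspell only operates in interactive shells, so a script has to spawn one with bash -i to demonstrate the correction. The stderr noise from forcing an interactive shell without a terminal is discarded here.

```shell
# Spawn an interactive shell (-i), turn cdspell on, mistype /etc as /ect;
# bash corrects the typo, prints the corrected path, and the cd succeeds:
bash -ic 'shopt -s cdspell; cd /ect; pwd' 2>/dev/null
```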

Figure 8: Command and file path running correctly with typing error, shopt cdspell set to on

The bonus is that this feature will work inside an alias you define when, for example, it includes a typing error. If cdspell is off the alias will fail (the cd falls through, so the listing is of the wrong directory) and output a standard error message, as shown in figure 9.

Figure 9: Error message running a mistyped alias, shopt cdspell off

When cdspell is on the alias works despite the typing mistake (alias e='cd /ect; ls -alp'), as shown in figure 10.


Figure 10: Alias ‘e’ running correctly with typing error, shopt cdspell set to on

Obviously, you are not going to mistype or misspell in an alias intentionally—but if you do, you can rest easy in the knowledge that with shopt -s cdspell on it is one less thing to edit and correct. Finally, although it is generally recommended to use a shell script for more powerful requirements, if you must use an alias inside a shell script shopt can be of assistance, provided expand_aliases is set to on. Why? Because aliases are only expanded when a line of a script is read, not when it is executed; a shell function, for instance, has its aliases expanded when the function is defined, not when it is run.
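A small demonstration of the difference, run in two child shells; the alias name ll is a throwaway example:

```shell
# Without expand_aliases, a script ignores alias definitions entirely:
without=$(bash -c 'alias ll="ls -l"; ll /tmp 2>/dev/null || echo ignored')

# With expand_aliases on, and the alias defined before the line that
# uses it is read, the alias expands as it would interactively:
with=$(bash -c '
shopt -s expand_aliases
alias ll="ls -l"
ll /tmp > /dev/null && echo expanded')

echo "$without / $with"
```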

Conclusion

The alias command is not the most powerful piece of digital ordnance in the GNU/Linux armoury, but I would still not wish to go into battle on the command line without it. You have a Formula One racing car in the driveway. Why only pop down to the end of the avenue to post a letter when you can take it out onto the racetrack and put your foot to the floor? See just what the command line can do. Alias saves typing, avoids spelling mistakes, assists poor memory, enhances security, cures the common cold (okay, I lied about that one), and helps you to explore and exploit the full power of UNIX commands and the file system; and if you are a citizen of the Republic of Paranoia you can always experiment on a live CD first without any fear of terminal (sorry) damage. It is frequently remarked by Windows naysayers that GNU/Linux has a (too steep) learning curve (as if this is not true of any worthwhile task or skill in life to be mastered), but mastery of aspects of the command line, including alias, is proof that some learning curves are worth the admission price. Can’t pay, won’t pay? Then you may be consigned to a permanent digital underclass. Just ask the average Windows user.

Biography

Gary Richmond: An aspiring wannabe geek whose background is a B.A. (Hons) and an M.Phil in seventeenth-century English, twenty-five years in local government, and recently semi-retired to enjoy my ill-gotten gains.

Copyright information Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved. Source URL: http://www.freesoftwaremagazine.com/articles/the_alias_command


Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

Create your online project site, start to finish, with Sakai

A flexible Collaboration Learning Environment

By Alan Berg

Sakai is an online Collaboration Learning Environment, CLE for short. Indiana University has proactively deployed it for 100,000 students, and over 120 other universities are involved with their own local deployments or test beds. Clearly, this well-received application is worth checking out and taking for a vigorous and thorough test run. This article is a practical introduction to Sakai, along with numerous screen grabs. The article’s purpose is to highlight the power and flexibility of this Java-based application, and to explain how simple the creation of ad hoc online communities can truly be. I hope this article will give newcomers a grasp of the power of collaboration and learning tools. If you are a course administrator, teacher, educational decision maker or student, then knowing that Sakai exists and what it can achieve allows you to compare your current infrastructure with a modern free software online learning enabler.

So what is Sakai?

Roughly speaking, a Sakai site has a set of tools available; it also has defined groups of users who are enabled to use some of those tools. The administrator is king and can make sites and users, and populate the sites with the users, giving each one a specific role. Students have one role, which allows them some privileges to use specific student-oriented tools, whereas instructors have a different role with far more power. For each site, the administrator activates a tailored set of tools out of a long list. The tool list is expanding with each release. One refinement to the comments I have just made: an administrator has universal power, and instructors tend to have site-wide administrative rights. Of course, this description is an over-simplification, but I will come back to it later.

When students log in, Sakai presents them with a list of the sites that they have joined at the top of their browser. Each site is for a particular project or course. A student enters a given site and can interact with fellow students or the instructors using the available tools. Normally only members of the site get to see information like test results. These defended areas allow students to interact naturally with groups of peers in a rather flexible and adaptive manner. To enhance the intrinsic value of the CLE, the architects of Sakai have been very clever: creating new tools is rather straightforward, and within the grasp of any Java developer who has previously written web applications. The success of the supporting infrastructure can easily be seen in the contribution section of the source code repository, where many, many extra tools exist. As a result, it’s easy with Sakai to customize sites to specific needs. One can imagine a K-12 class using chat, wikis and forums, and researchers using custom tools for grid interactions on top of the already available collaboration tools.

An aside

Sakai is released under an Apache-like educational license. Sakai also has a reasonable set of commercial partners that offer commercial support. For decision makers, this represents a highly important safety net and will definitely help acceptance.

The community has placed Quality Assurance in the middle of the development cycle and not as a provisional afterthought. Before each release, there is a vigorous smoking out of issues. The last (2.4) release tag involved 17 preplanned and minor iterations on a code base of more than one million lines of code, with around 90 active and consistently hard-working participants. At the most recent conference in Amsterdam, shadowy characters gave away a number of Sakaigers (cute but dangerous cuddly toys) as rewards to the QA’ers. My personal Sakaiger escaped quickly into the Amsterdam underground (figure 1) and I suspect it will soon become a member of the A-Team or get involved in serious adventures requiring many sharp teeth and lots of biting. How can such a cute animal be so dangerous?!

Figure 1: A brief spotting of a sharp-toothed Sakaiger in the Amsterdam underground. Notice the use of telephoto zooming for safety reasons

Since Sakai is free software, anybody can see its evolving code base without restrictions. Showing a high degree of confidence in the QA process, the communal infrastructure applies free software tools such as PMD and FindBugs to perform scheduled automatic code reviews on the Sakai source every night. The bug pattern observers generate generic reports that are open to the whole world. Imagine proprietary code being washed clean in such a way. Certainly, if taken seriously enough, the process can only help to push bug densities down and stability up!

Getting started

Sakai is a Java-based application that runs within a Tomcat server. As a result, the source code is pretty much OS independent and just relies on your having Java 1.5 installed on your machine. The demo is self-contained and has its own in-memory database. For initial learning the demo is great; however, when you want to deploy for a few hundred users you will need to connect to a more substantial database such as MySQL or Oracle (both are supported). The changeover requires only eight lines of configuration, plus installation of the database server with a clean database structure that supports the UTF-8 character set. Therefore, there is a clear path from demo to middle-sized production. For the sake of simplicity, I will assume that you have a GNU/Linux box, preferably Ubuntu Feisty 7.04 with 1GB of RAM or more, and Java 1.5 installed. At the time of writing, the following simple installation instructions are correct. Download the demo package and unpack it in your location of choice (for example, your home directory) using the following command: tar xvf sakai-demo_2-4-0.tar.gz

A directory sakai-demo should now appear. Verify that the environment variable JAVA_HOME exists and points to the top-level directory of your Java 1.5 instance. If the variable has not been set, then one way to arrange a permanent value is to add to your ~/.bashrc file a line similar to: export JAVA_HOME=/usr/lib/jvm/java-1.5.0-sun-1.5.0.11

where, of course, the value of JAVA_HOME is specific to your environment. The value will take effect after your next login. To start the demo, under the sakai-demo directory type: ./start-sakai.sh

You will now see a lot of Java-related logging gibberish flowing down the screen. Please do not worry: it’s just debugging information, hopefully stating that the world is perfect; be patient and wait the requisite minutes until you see text similar to: Jun 19, 2007 11:31:52 AM org.apache.catalina.startup.Catalina start INFO: Server startup in 117423 ms

We have minor glitches. At the time of writing, there were two small bugs in the demo, but I hope that, by the time you read this article, these trivial issues will have been resolved. The first was that you needed to change the permissions of start-sakai.sh and the scripts in the sakai-demo/bin directory to be executable; the second, that you had to make sure that sakai-demo/logs/catalina.out exists. Browsing http://localhost:8080 should bring you to the main page, as shown in figure 2.
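The two workarounds look like this. The layout below is a stand-in so the commands can be tried anywhere, and the bin script name is hypothetical; on a real install you would run the chmod and touch lines inside the unpacked sakai-demo directory:

```shell
# Stand-in layout (skip this on a real install, where the unpacked
# sakai-demo directory already exists; the bin script name is made up):
mkdir -p sakai-demo/bin sakai-demo/logs
touch sakai-demo/start-sakai.sh sakai-demo/bin/run.sh

# Workaround 1: make the startup scripts executable.
chmod +x sakai-demo/start-sakai.sh sakai-demo/bin/*.sh

# Workaround 2: make sure the log file exists before starting.
touch sakai-demo/logs/catalina.out
```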

Figure 2: Demo main page http://localhost:8080

The administrator account is admin and the password is admin. At the very minimum, we should change the password. Therefore, log in as admin, click on the Users link on the left hand side and you will find yourself in the user administration tool. Select the admin account and you will see figure 3. Change the password in the “Create New Password” and “Verify New Password” inputs to your obscure, unguessable value and then update the details. Hint: my password is quite rude if printed in the speech bubble of a cartoon character.

Figure 3: The admin user account

Sakai has a reputation for being initially more daunting than more specifically structured course management systems such as Moodle. At the risk of generating some heated comments in the feedback section of the Free

Software Magazine, I see obvious differences between a course management system (such as Moodle) and a collaboration learning environment (such as Sakai). Workflow tends to be more formalized within a CMS and is, by its very nature, more straightforward to understand and explain. In comparison to a CMS, a CLE has more convoluted flexibility. Unfortunately, the generalized nature of a CLE (and Sakai) makes the application more difficult to learn, as there are more options and approaches to site design and the flow therein. As Sakai evolves, it brings even more tools into its powerful learning domain; as this happens, interaction design and workflow skinning for various well-known situations will become crucial for adoption.

Project sites and course sites

At this point, you need to know the difference between a course site and a project site:

• A course site has been structured by the developers with a preset set of tools, roles and permissions. It comes with three roles: instructor, teaching assistant and student.

• A project site is more for ad hoc collaboration; it has only two roles: the maintainer and anyone who accesses the site.

Sakai boasts a fully functioning help system, which you can access by clicking on the “?” icon; the help tool should be your first port of call before asking questions in the forums. The help function describes the difference between a course site and a project site perfectly (figure 4).

Figure 4: Thank you help

My first site—a project website

The recipe for creating your first site is as follows: log in as admin and add two users; create a “project” site; choose the tools for the site and enroll the student and instructor. While creating a sample project site, I will keep the article as simple and adaptable as possible for K-12 teachers and researchers.

Creating users

Log in as admin and select the user tool from the “My Work Space” site. Add an instructor to the site as shown in figure 5. You should choose a password that nobody can guess.


Figure 5: The instructor’s account

Do the same for the student account; note that the roles of the users are only defined per site, when generating or editing the site, and thus cannot be seen here under “account details”. After creating the users and going back to the top of the tool via the refresh button next to the word “Users”, you should be able to view a screen similar to figure 6. Don’t worry about the rather strange internal IDs. Just keep in mind that the ID will always be unique, even if the local user ID is not; the system uses this ID for the smooth tracking of objects across clusters of servers in large deployments.

Figure 6: Top level view of the users tool

Please note that logging in as student or instructor delivers a default working site that contains basic application-wide features (see figure 7).

Figure 7: The default work site of a new user

The profile tool allows users to create a profile of themselves, including photos, addresses, etc. The user has the choice to make information public. I won’t show you my personal profile as a figure—suffice it to say that the profile involves a lot of Photoshop manipulation and lying about my ability to climb large buildings in funny costumes. The membership tool allows users to join or un-join sites that instructors have made public. In the current scenario, admin will create the site and add the student and instructor; therefore, this tool is not strictly required for this article. The student has the power to create schedules and view global announcements. The resources section enables the uploading of files via either a web form or WebDAV. Dragging and dropping from local folders to the server via WebDAV is a feature that students tend to love, but it does occasionally have glitches.

Creating the site

First, log in as admin and visit the “My Workspace” sites tool. Ignore all the links to sites that you do not recognize: the other sites are part of the default setup of the application (perhaps the GUI designers should hide the extras from the view of newbie administrators). Clicking on the link “New Site” will send you to a site edit form, as shown in figure 8. Add the details as shown in the figure, leaving any undefined elements at their defaults. Notice that, thanks to your actions, the application published the site: now anyone can join with the role named access. The system designers intended the access role to give students and project workers access to a site, but to limit their powers to modify it. The second possible choice, the maintain role, carries more powers and is designed for those in charge of the project, for example a teacher or instructor.

Figure 8: Creation of my first site

Setting the site’s permissions

Next, you should visit the Worksite Setup tool; select the tick box next to the site ID “cooking_101” and modify the site by clicking on the edit link (figure 9).

Figure 9: The worksite tool for the administrator

A quick aside: the instructor will be given the maintain role on the cookbook site, which will give him or her local site-wide administrative powers. When the instructor logs in, clicks on the site (in this case “basics of Cooking”) and then chooses the Worksite Setup tool, the instructor will see a list of the sites that he or she can edit (figure 10).

Figure 10: Screen grab of the instructor’s worksite tool; notice that the instructor is allowed to modify the cookbook_101 site

Click on edit for the given site in the worksite tool (as shown in figure 11); you will see many options: for example, the page order option allows you to change the order of the menu links on the left hand side.

Figure 11: Lots of options for site manipulation

It’s now time to give the instructor the maintain role, and to give the student the correct access rights; therefore, click on the “Add Participants” link. Fill in the details as shown in figure 12. Notice that there is one ID per line in the text area box; the IDs are not comma delimited (which I sometimes fill in without thinking). Another point to note is that I have chosen the “Assign each participant a role individually” option, which saves a little time later. Next, press the “Continue” button.


Figure 12: Adding multiple participants

Select the correct role per participant (figure 13) and then press continue. Next, you are offered an opportunity to send mail to the participants. Select “Don’t send …”, as the email will go nowhere in the demo anyway.

Figure 13: Selecting the correct role per participant

Don’t get excited yet: if you log in as a student and then visit the cookbook site, you will only see an unpublished site similar to figure 14. The reason for this is that the project site does not have tools associated with it yet.

Figure 14: A view of a typical unpublished site

Adding the tools as administrator or instructor is straightforward. After selecting the cookbook site as shown previously in figure 9, select the Edit Tools link (figure 15) and select the blogger, chat room, discussion,

podcasts, poll, resource, search, site info and wiki tools.

Figure 15: The edit tools dialog

Log out now as admin and log in as instructor. Enter the cookbook site and select the chat room tool. Notice that you can edit the permissions on the tool via a link at the top. Clicking on the link will display the permissions for instructor and students (maintain and access), figure 16. The permission names are intuitive enough, though they could be made clearer with a little tool tip; and as instructor, you have the right to change those permissions. Under most conditions it is hardly worth the effort to change these details unless you have a specifically idiosyncratic set of users; for example, the instructor may be a researcher who wishes other researchers to have the same powers for a given tool.

Figure 16: The instructor’s screen for modifying chat tool permissions

Again, I cannot help emphasizing the real power of a context-sensitive help tool. Clicking on help delivers genuinely usable information, as shown in figure 17. Note that the help screen warns, with honesty, that there are limitations to the chat tool; at least, however, you can have multiple rooms.


Figure 17: Help, we need some information

In conclusion, the recipe for creating a site is reasonably straightforward. Each new tool may have its own particular way of doing business and may require individual configuration from the instructor. However, instructors do have a lot of power over their own configuration destiny. I haven’t discussed the generation of course sites; as I said, they are similar to projects, but with a slightly more rigid and mildly more complex structure. That I will leave to your own explorations; if you get stuck, read the help text and visit the links mentioned later in this article.

sakai.properties

The system administrator performs advanced configuration predominantly via the central configuration file for Sakai, which sits in the Sakai directory and is thoughtfully (if not very originally) named sakai.properties. The property file defines global configuration, such as the location of important files and whether certain tools should be enabled. For example, as instructor, visit the Search tool. Notice that the naughty administrator has forgotten that the tool is not fully activated until sakai.properties is updated with the line search.enable=true (see figure 18), and the server is restarted.

Figure 18: An inactive search tool making a clear statement

Upgrading the built-in in-memory database to MySQL or Oracle is not brain surgery: it requires around eight lines to be changed in sakai.properties and a correctly set up database. Migrating the data, on the other hand, involves more complexity, and thus detailed recipes and significant testing.
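To give a feel for the kind of lines involved, here is a hypothetical excerpt from sakai.properties. Only search.enable=true comes from this article; the database property names are illustrative assumptions, so check the documented property names for your Sakai release:

```properties
# Enable the search tool (the line mentioned in this article)
search.enable=true

# Illustrative database settings. These names are assumptions, not
# taken from this article; around eight such lines change when you
# move from the in-memory database to MySQL or Oracle.
#vendor@org.sakaiproject.db.api.SqlService=mysql
#url@javax.sql.BaseDataSource=jdbc:mysql://127.0.0.1:3306/sakai
#username@javax.sql.BaseDataSource=sakaiuser
#password@javax.sql.BaseDataSource=secret
```

Remember that the server must be restarted before changes to this file take effect.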

Developing for Sakai

It’s very easy for a Java developer to build tools using a cleanly separated API. A basic tool is a Servlet plus an extra XML file that tells Sakai where to place the tool in the GUI. Once registered via the XML file, the new tool will just appear auto-magically in the tool selection dialog, as shown in figure 15.

The community recognized that adding tools to the core functionality can make the project more unstable. Therefore, there is a methodical process for achieving recognition for new tools. First, tools start life as contributions and stay in the contribution section of the source code management system. Anyone is free to patch and compile an instance; that sounds hard, but it is just a matter of dropping the tool into the right directory and compiling it. If the tool gains significant end-user adoption, the community makes it provisional, and the developers place the tool’s code in the main source code branch. However, the application hides a running instance from the end user when the server starts up. At this point, highly motivated workers spread throughout the world perform a fully loaded, double-barrel QA cycle on the tool. If the tool makes it, it will appear in Sakai’s full release and be available for mass use. The developers become famous and are snapped up as guests for MTV Unplugged sessions.
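To give a feel for the registration step, here is a hypothetical sketch of such an XML file. The element and attribute names are my assumptions based on the Servlet-plus-XML description above, so consult the Sakai developer documentation for the exact schema:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical Sakai tool registration; names are illustrative only -->
<registration>
  <tool id="sakai.cookbook.recipes"
        title="Recipe Browser"
        description="Browse the cookbook site's recipes">
    <!-- which site types may offer the tool in the Edit Tools dialog -->
    <category name="project" />
    <category name="course" />
  </tool>
</registration>
```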

What’s next

Sakai has deployment penetration, a healthy community and centralized QA. Commercial support has grown, in part, due to the Apache-like license. The range of tools brought into the core package is widening. However, reaching the next level of penetration requires some significant tweaking of the GUI and workflow and, to a smaller extent, of the underlying technologies. Personally, I believe that Sakai has already succeeded, and that it is now just a question of selling the idea of wider-scale deployment outside the Sakaigers’ traditional university base.

Sakai is also very well documented, and the community has a number of well-known online watering holes. These include:

• Sakaipedia
• Sakai home page
• Conference
• Dev Group mailing list archive
• PlanetSakai
• Bug tracking

Acknowledgements

I would like to acknowledge the rare few who have answered my dumb and dumber questions on the sakai-dev list, and especially Steven Githens for creating SASH, a command-line, Unix-like tool for the coding masses; awesome to the power of very large numbers indeed.

Biography

Alan Berg, BSc MSc PGCE, has been a lead developer at the Central Computer Services at the University of Amsterdam for the last eight years. In his spare time, he writes computer articles. He has a degree, two masters and a teaching qualification. In previous incarnations, he was a technical writer, an Internet/Linux course writer, and a science teacher. He likes to get his hands dirty with the building and gluing of systems. He remains agile by playing computer games with his kids, who (sadly) consistently beat him physically, mentally and morally. You may contact him at reply.to.berg At chello.nl.

Copyright information

This article is made available under the “Attribution-NonCommercial-Sharealike” Creative Commons License 3.0, available from http://creativecommons.org/licenses/by-nc-sa/3.0/.

Source URL: http://www.freesoftwaremagazine.com/articles/create_your_online_project_site_with_sakai


Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

Fast programming with Rexx
Ease of use and power can co-exist
By Howard Fosdick

Ever need to code quickly? You can code Rexx like water, yet it’s powerful. Here’s everything you need to start, illustrated through real-world programming examples.

Script for success

One magical aspect of the free software movement is the many programming languages available. Among them are the scripting languages: languages designed for highly productive programming, or “scripting”. Popular free scripting languages include Perl, Python, Tcl/Tk, Ruby, Bash, Korn, and others. There is quite a variety because each offers its own unique combination of strengths and characteristics; different languages meet different needs.

This article introduces one of the first scripting languages, one that continues to be popular world-wide. Rexx initially rose to prominence as the scripting language for IBM’s mainframe operating systems, the Amiga, OS/2, and other systems. Today, nine free Rexx interpreters cover virtually every platform, ranging from phones and handhelds, to laptops and PCs, all the way up to servers and mainframes. Of course, this includes operating systems like GNU/Linux, Windows, and Unix.

Rexx’s big draw is that it is powerful yet easy to use. Many languages make this claim, but few achieve it. There is a natural trade-off between power and ease of use: the more power designers cram into a language, the more complex that language becomes. Rexx employs specific techniques to pack power while retaining ease of use:

• Minimal syntax
• Few rules restricting the code (aka free formatting)
• Consistent, reliable behavior
• Common-sense defaults (for simplicity) that can easily be overridden (for power)
• A very small instruction set you can learn immediately, surrounded by larger, powerful function and object libraries you can learn over time
• Language extensibility through add-in function and class libraries; you use these external modules the same way you access built-in language features
• Modular, structured code
• Standardization

I’m a big believer in “easy” languages, as long as they still meet my need for power. A language that is easy to code means higher productivity. You can script quickly.
With modular design, I can write tons of small routines and evolve a large application in no time. Easy languages lead to fewer errors, and fewer errors result in more reliable code. Programs are quick to debug, and they are easier to maintain and upgrade as needs change. This is especially true with large programs. Ever had to maintain someone else’s code written in a syntax-driven language? It can be tough, especially for large or undocumented programs. Many complex languages are very powerful, but programs written in them are hard to enhance and maintain. Scripts written in “easy” languages like Rexx can be modified much more easily. To the degree that code grows, changes, and needs to be updated, programs written in easier languages have longer lives than those encoded in powerful but complex languages.

Easy languages yield big benefits, even for experienced developers. While some consider it macho to work in powerful, complex languages, the best developers are wildly productive in easier languages. They script as fast as they think. Plus, the companies they work for prefer the maintainability of those languages. Organizations want living code: code that can adapt and be altered. Complex code doesn’t retain value once its originator “leaves the building”.

Varieties of Rexx

There are nine free Rexx interpreters. They fall into two broad categories:

1. Procedural or “classic” Rexx
2. Object-oriented

The tables below list the Rexx interpreters, the platforms on which they run, and a few of their major benefits.

Free Rexx: classic interpreters

Regina             Nearly all platforms   Portable, standard, popular
Reginald           Windows                Excellent Windows integration
r4                 Windows                Includes many useful Windows tools
Rexx/imc           Linux, Unix, BSD       Industrial-strength, nice concise documentation plus tutorial
BRexx              Most platforms         Fastest Rexx, small footprint, lightweight
Rexx for Palm OS   Palm OS                Palm OS integration, text interface

Free Rexx: object-oriented interpreters

Open Object Rexx   Linux, Unix, Windows       Fully OOP, highly popular download on SourceForge
roo!               Windows                    Fully OOP, includes useful Windows tools
NetRexx            Any Java Virtual Machine   Non-standard; integrates with Java, provides an easy coding alternative

All the interpreters meet the Rexx standards (except for NetRexx, designed for Java integration). This means that your code is portable across platforms. Stick to the language standards (and limit the operating-system specific commands you issue from within the code), and your code runs anywhere. Your knowledge is transportable too, both across platforms and across the Rexx interpreters.

The first two object-oriented Rexx interpreters listed above are true supersets of the classic Rexx standards. They add complete object orientation to standard Rexx: objects, classes, messaging, single and multiple inheritance, data hiding, polymorphism, class libraries, and all the benefits of object-oriented programming. Since these interpreters meet the Rexx standards, classic Rexx programs run unchanged under object Rexx. This compatibility both carries older procedural programs into the object-oriented world and lets you code whether or not you program using objects. You can mix procedural and object-oriented code however it suits you.

NetRexx is unique. It is not standard Rexx but a “Rexx-like” language. Designed for Java integration, it runs on any Java Virtual Machine. It can use Java objects and can be used to create objects used by Java programs. It offers an easy way to code for the Java environment.

You can download any of the Rexx interpreters from the Rexx site. The web site also offers hundreds of free add-in tools, function and object libraries for Rexx, and sample code.

A sample program

Here’s a real-world example of how quick scripting can be useful. The other day I faced a dozen old Windows desktop computers running different versions of Compuserve software. (For those who remember the old dial-up days of a decade ago, Compuserve was a leading vendor for email and early internet access.) My task was to somehow retrieve the contacts in the “address books” stored in the Compuserve software.

I logged on to a few of the old machines and quickly determined that they ran ancient Compuserve software ranging from version 5 all the way back to version 2. These versions of Compuserve all saved their address books in a single file named “addrbook.dat”. The difficulty is that the information in this file is encoded in a proprietary format. View it with Windows Notepad and you can see the contact information, but it is difficult to read because it’s interspersed with tons of unprintable and miscellaneous characters. Since I didn’t know the Compuserve address book format, I could only guess at what the unprintable characters represent (length indicators? flags? field separators?). I found a free program on the web called ForMorph Message Converter that exports old Compuserve information, but I never decoded the file format of addrbook.dat files. If anyone knows it, please post it in the comments to this article.

I decided to write a quick Rexx program to dump out the address book information in readable format. I have used classic (procedural) Rexx in the solution. First, I wrote a quick script just to dump the file to the screen as printable characters. This meant translating unprintable characters into their hexadecimal (base 16) equivalents. Then I could see what was actually in the address book files. I could, for example, match the printed hex characters to the character map available in Windows (view it by selecting Start→Programs→Accessories→Character Map). This quickly showed me what the unprintable characters in the file were.
The listing below shows what the program output looks like. Each line represents one character from the input file: the character itself, with its hex equivalent to its immediate right. You can see that the letter “p” is “70” in hexadecimal, that the blank “ ” is “20” in hex, and that there are some mysterious characters with the hex value of “00” that do not show up on the screen at all. Only through this quick dump was I able to distinguish between hex blanks and the hex “00” characters in the file. This particular file was corrupted, but I was still able to view its contents. Now I’ve got a better idea of what’s in these address book files.

p 70
a 61
g 67
e 65
  20
N 4E
® AE
A 41
. 2E
® AE
  00
  00
N 4E

...etc...

Let’s take a look at the code that produced this quick “hex dump”:

/***************************************************************/
/* Display file contents to the screen as hex characters       */
/***************************************************************/
infile = 'addrbook.dat'            /* The input file           */

do while chars(infile) > 0         /* Read the input file,     */
   inbyte = charin(infile,,1)      /*  character by character  */
   say inbyte c2x(inbyte)          /* Display each character   */
end
In the code, the first line after the introductory comments sets a variable to the input file name. The do while loop applies the built-in chars function to the input file; its effect is to process the file while there are still characters to read. Inside the do while loop, the charin built-in function reads one character from the input file into the variable named inbyte. The c2x function converts this single character into its hexadecimal equivalent, while the say instruction displays it on the screen. In this way, the program processes the input file one character at a time, and displays each character on the screen with its hexadecimal equivalent.

You can see from this sample code snippet why Rexx is quick to code:

• Minimal syntax
• Minimal coding rules and no required formatting
• No mandatory program sections
• Rexx automatically opens and closes files
• Variables do not have to be declared prior to use

An important principle is evident here. The behaviors of the last two points make good defaults, but they can easily be overridden; this is a technique Rexx employs to get simplicity to co-exist with power. For example, need to override Rexx’s default file behaviors and manually open or close a file? No problem: Rexx provides the stream function for this purpose. Use it to specify any kind of file processing you like. Ditto for variables: you can declare them in advance of use, and even issue a command requiring that all variables be initialized before use (useful in large programs where you don’t want uninitialized variables, or typos that accidentally become variables). Rexx is ideal for quick scripting, yet it allows you to write powerful programs. This is vital, because there’s nothing more disappointing than using an easy programming language only to find yourself running out of power later.
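For readers coming to Rexx from one of the other scripting languages mentioned earlier, here is the same hex dump sketched in Python. This is my comparison, not part of the original article; only the file name matches the article’s example:

```python
# Hypothetical Python equivalent of the Rexx hex dump above,
# shown only for language comparison.
def hex_dump(path):
    """Return one 'char hex' line per byte of the file."""
    lines = []
    with open(path, "rb") as f:
        while True:
            byte = f.read(1)             # read one character at a time
            if not byte:                 # end of file
                break
            ch = byte.decode("latin-1")  # show the raw character as-is
            lines.append(ch + " " + byte.hex().upper())
    return lines
```

Note how much of what Rexx does implicitly (opening and closing the file, looping until the characters run out) must be spelled out here.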

An improved program

Now let’s expand the example program into something more useful. This version creates an easy-to-read version of the input file and displays it to the user through Windows Notepad. This script is a generic “file viewer”: it converts any binary input file that contains some text into a more readable form, then displays the text to the user. Figure 1 shows how the first part of the program output appears.

Figure 1: Viewing Program Output


Names and email addresses have been altered for privacy reasons, but you can see that the program makes the meaningful information easily readable from the original hard-to-read input file. In order to create a nice, viewable version of the input, the program masks out all the unprintable characters and miscellaneous punctuation. It converts them all to spaces. The script leaves unchanged only upper- and lower-case letters, the digits 0 through 9, and five meaningful punctuation characters: commas, underscores, colons, periods, and at signs (@). Here is the code:

/***************************************************************/
/* This displays the viewable information in an input file.    */
/*                                                             */
/* Write printable characters to a file, changing unprintable  */
/* characters and low-value punctuation into blanks (spaces).  */
/* Let the user view the file via Windows Notepad editor.      */
/*                                                             */
/***************************************************************/
infile  = 'addrbook.dat'           /* The input file           */
outfile = 'addrbooklist.txt'       /* The output file          */

erase outfile                      /* Erase previous output    */

do while chars(infile) > 0         /* Read the input file,     */
   inbyte = charin(infile,,1)      /*  character by character  */

   /* Convert input character-by-character to readable text    */
   if (inbyte >= '30'x & inbyte <= '5a'x) | ,
      (inbyte >= '61'x & inbyte <= '7a'x) | ,
      (inbyte = '2c'x) | (inbyte = '5f'x) | ,
      (inbyte = '2e'x)
   then rc = charout(outfile,inbyte)   /* Write useful chars       */
   else rc = charout(outfile,' ')      /*  to the output file,     */
                                       /*  convert others to blank */
end
rc = charout(outfile)              /* Close the output file    */

notepad outfile                    /* Let user view output     */

The heart of the program is the if statement in the middle of the script. This statement writes the printable characters to the output file while converting any non-printable or miscellaneous characters to blanks (spaces). The effect is a “cleaned up” file that is easily readable. You can see that the if statement looks for certain ranges of hexadecimal characters that it leaves unchanged. Hex characters are denoted by the x that follows each; for example, '5f'x represents the underscore character in hexadecimal. Rexx handles all kinds of strings: character, alphanumeric, numeric, binary, hexadecimal, fixed-length, variable-length, whatever. Rexx facilitates problem-solving through string processing.

The Rexx line continuation character is the comma (,), so the commas tie together the if statement across the several lines on which it’s coded. The logical and operation is denoted by the ampersand (&), while Rexx uses the vertical bar (|) to represent a logical or. In this program, I used a single, long if statement to determine which hex characters to keep, because I thought it clearer for those who don’t know Rexx. I could alternatively have used Rexx’s parsing, string translation, or regular expressions to perform the byte translations. (Regular expressions are not part of the base language, but are routinely supplied through several free add-in packages.)

The script uses the charout function to write individual characters to the output file. I coded charout on the right-hand side of an equals sign, so that its return code is assigned to the variable rc. In a more robust program, you would always check return codes for events such as writing to files. You can always check whether or how any Rexx function worked through its return value.
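As a cross-language aside (my sketch, not the author’s code), the same keep-or-blank filter can be expressed compactly in Python; the byte ranges mirror the hex values tested in the Rexx if statement above:

```python
# Hypothetical Python version of the article's character filter:
# keep bytes 0x30-0x5A and 0x61-0x7A, plus comma, underscore and
# period; turn every other byte into a blank.
KEEP = set(range(0x30, 0x5B)) | set(range(0x61, 0x7B)) | {0x2C, 0x5F, 0x2E}

def make_viewable(data: bytes) -> str:
    """Replace unprintable/miscellaneous bytes with spaces."""
    return "".join(chr(b) if b in KEEP else " " for b in data)
```

For example, make_viewable applied to a buffer containing control characters returns the same text with those characters blanked out.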


The last line in the program invokes Windows Notepad to allow the user to easily view the output file. You can see how easy it is to issue operating system commands from within a Rexx script: simply code them! Any statement Rexx does not understand, it sends to the operating system for execution. So it’s very easy to integrate operating system commands into Rexx programs, or to use Rexx as a driver for system automation. As this example demonstrates, building upon existing OS commands and facilities is highly code-efficient.

Another example of an OS command occurs near the top of the program, where the Windows erase command eliminates any output file from a previous run of the program. Rexx passes the erase command to Windows for execution after substituting in the proper value for the variable outfile. Rexx gives programmers full control over this substitution; combined with Rexx’s string manipulation capability, this gives you great power in issuing OS commands. You can programmatically create OS commands from within scripts, and easily parse and analyze their outputs.

Finishing the job

Scripting lends itself to quickly solving parts of a problem with short scripts. Tie the small scripts together and you have a complete solution. The piping facility in operating systems like GNU/Linux or Windows can link your short scripts together into a complete solution. Why write one big, complicated program? With an easy scripting language you don’t have to design all the logic for the complete solution before you start. Just start small and build from there.

That said, the final step in the solution is to take the output of the previous program as input to this new program, which then extracts and lists the names and email addresses. This was our initial goal. Here is sample output:

Abbitona, Kirk       kabbitona@aol.com
Acie, Bobbie         bobbie.acie@WayIndustries.com
Ackermann, Jennifer  jennifer.ackermann@amerilabs.com
Adams, Jason         jason@worldtravel.com
Adams, Sam           sadams44@killerapps.com
Adleman, Thomas      thomas.adleman@fixit.com
Here is the code that creates this list:

/****************************************************************/
/* Display the Names and Email Addresses from an old            */
/* Compuserve address book.                                     */
/****************************************************************/
string = linein('addrbooklist.txt')        /* Read the input file */

do while pos('INTERNET:', string) > 0      /* Process each entry  */

   /* Find start and end positions of the next email address     */
   pos_email_start = pos('INTERNET:', string)
   pos_email_end   = pos('.com', string) + 4

   /* Calculate the length of the email address and extract it   */
   length_email = pos_email_end - pos_email_start
   email = substr(string, pos_email_start, length_email)
   email = delstr(email,1,9)               /* Remove 'INTERNET:'  */

   /* Extract the first and last names as:  Last, First          */
   name_etc = substr(string, 1, pos_email_start-1)
   name_etc = reverse(name_etc)
   name     = word(name_etc,2) || ' ,' || word(name_etc,1)
   name     = reverse(name)
   name     = left(name,20)                /* Left-justify name   */

   say name email           /* Display the Name and Email Address */

   string = substr(string, pos_email_end)  /* Adjust the string   */
end

Here’s how this program works. The script reads the entire output file of the previous program in one statement by using the linein function. (Remember that there will be no end-of-line markers within the file, because the previous program converted them all to spaces.) The program places this input into the variable named string. Then the program processes each name and email address in the input string, one at a time, through its do while loop.

Each time through that loop, the script breaks out one entry’s first and last names and email address. It does this by scanning for the phrase “INTERNET:” that occurs prior to each email address, and also by looking for the “.com” string at the end of the email address. The code excludes the optional “comments” associated with some entries. Then the script shortens the string by eliminating the entry it just printed, and it goes back to the top of the do while loop to process the next address book entry.

The program uses several Rexx string manipulation functions to identify and isolate information from the input string:

pos      Returns the starting position of a search string within a string
substr   Returns a substring from within a string
delstr   Deletes part of a string
reverse  Reverses the characters in a string
word     Returns a blank-delimited word from within a string
left     Left-justifies a string

String functions in the example program

I won’t walk you through the entire example, as its comments are probably sufficient to enable you to follow it. One trick I’d like to mention, though, is the use of the reverse function. The three parts of each address book entry are: name, email address, and optional comments. After each time through the do while loop, the program identifies and extracts the next name and email address from the input string. If the entry includes optional comments, these are left in the input string when the script processes the next address book entry.
So, the script uses the reverse function to reverse-order the characters in the comments and name. Then, with the names occurring first in the input string, the script can easily extract this information using the word function (once to retrieve the last name and a second time to retrieve the first name). After the script obtains the first and last names, it uses the reverse function one last time to properly order the characters in the names.
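The reverse trick carries over to other languages, too. Here is a hypothetical Python sketch of the same idea (my illustration; the function name and input layout are assumptions). The text preceding the email address is reversed so that the reversed last and first names become the first two blank-delimited words, mirroring the word(name_etc,2) || ' ,' || word(name_etc,1) expression:

```python
# Hypothetical Python sketch of the reverse trick. The input is the
# text preceding 'INTERNET:', i.e. any leftover comments followed by
# the entry's 'First Last ' name.
def extract_name(prefix: str) -> str:
    rev = prefix[::-1]                     # reverse the whole prefix
    words = rev.split()                    # words[0] is the reversed last name,
                                           # words[1] the reversed first name
    rev_name = words[1] + " ," + words[0]  # mirrors word(2) || ' ,' || word(1)
    return rev_name[::-1]                  # reverse back: 'Last, First'
```

Any comments left over from the previous entry simply fall off the end of the reversed word list, just as in the Rexx version.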

To learn more

You’ve seen how easy it is to write short scripts to solve small problems, and then to tie them together to solve a larger problem. It would be simple to enhance these scripts to handle special cases in the input data (for example, email addresses that do not start with the string “INTERNET:” or that do not end in the suffix “.com”). Rexx is ideal for this kind of iterative programming, as are other easy-to-code scripting languages.

But is Rexx powerful? If you still need to be convinced of this, read the article in Dr. Dobb’s Journal showing how Rexx supports all kinds of data structures based on simple dot notation. In Rexx, dot notation can represent arrays, trees, structures, records, lists, and other data structures. So you create all kinds of data structures in the same, simple manner, like this: data_structure.1 or data_structure.variable_A.variable_B. Contrast this to syntax-driven languages, where each data structure adds its own unique syntax to the language. Power languages that add syntax for each different data structure typically end up with some pretty complex and unforgiving syntax. Both approaches do the same job. Which is easier? This is the Rexx philosophy: power through simplicity.


To learn more about Rexx, read this two-part language tutorial. Part I further describes Rexx and its features and varieties, while Part II is an easy language tutorial. All the interpreter download web sites also offer nice language tutorials. The simple examples in this article are hardly enough to give a feel for the language, much less showcase its features or its power. Download more sample code from the sources listed at this web site; scroll up on the same page and you’ll see where you can download all the Rexx interpreters discussed in this article. Hundreds of free tools are downloadable from there as well. I recommend three key web sites. For Rexx information of all kinds (including downloads), visit RexxInfo.org. Other excellent web sites include the Rexx Language Association and the Open Object Rexx project web site for object-oriented programming.

Biography

Howard Fosdick is an independent DBA consultant who recently wrote the first book on free and open source Rexx scripting, The Rexx Programmer’s Reference (http://www.amazon.com/rexx). He frequently writes technical papers and presents at conferences. His primary interests are databases, operating systems, and scripting technologies.

Copyright information

Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved.

Source URL: http://www.freesoftwaremagazine.com/articles/fast_programming_with_rexx


Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

Configure and use the Untangle Gateway
Facing the challenges of network administration, the right way
By Dirk Morris

Connecting a network to the modern-day internet can be challenging. Basic infrastructure, like routers, DHCP servers, and DNS servers, is required to get the network online. The network must also be protected with a firewall and intrusion prevention, and the desktops need protection from viruses and spyware. Next comes a spam and phish filter to stop the continual flood of junk email. Most are then forced to implement some sort of internet usage control, like web filtering, to control what users are doing on the network. And as network users begin to work from remote locations, a VPN is required to allow safe remote access. As each problem is tackled, each point solution must be researched, purchased, deployed, integrated and maintained. While enterprises with time and money can accomplish this, home and small business networks are left to fend for themselves.

The Untangle Gateway Platform is a free software network gateway solution that installs on off-the-shelf hardware and provides all the solutions necessary to get the network online and keep it safe and controlled. After installing the Untangle software on a server, the server needs to sit somewhere between the PCs and the internet: typically it is the firewall/gateway itself, or it sits behind the current firewall/gateway as a transparent bridge.

Installing Untangle

First, plug in the server you will be using as your Untangle Server. It will need two network cards: one connects to the outside connection (such as your DSL/cable modem) and one connects to the internal network switch. If you are installing the Untangle Server behind your current router/firewall, connect one port of the Untangle Server to the internal port on your router/firewall and the other port to your internal switch.

The first boot

After booting the Untangle Server for the first time, a setup wizard gets the Untangle Server and the network online in a basic configuration.

Figure 1: The Setup Wizard


After this, new applications (like Virus Blocker, Spam Blocker, and so on) can be downloaded from an online library directly into the Untangle Server. The Untangle library is filled with a multitude of best-of-breed free software applications that have been tailored specifically for small business and home networks. After you download applications from the online library, they automatically install, appear as “virtual appliances” in the “virtual rack”, and start processing network traffic. For networks with existing infrastructure, the Untangle solution complements existing point solutions: redundant applications in the Untangle solution can either provide an additional layer of protection or simply not be used.

Figure 2: Installation of an application

Interview with Bob Walters by Tony Mobily TM: Hello Bob. Please introduce yourself to our readers! Hi everyone. I’m Bob Walters and I joined Untangle as CEO in June of 2006. I started my career in the US Marine Corps as a fighter pilot flying jets on aircraft carriers. Since then, I’ve served in a variety of executive roles with several startups: • Vice President and General Manager, Informix Software • Vice President, Linuxcare • Vice President and General Manager, Securant • CEO, Teros (now Citrix) My last three companies were all substantial users of free software, and Linuxcare was a major player in the “Generation 1” class of commercial free software vendors. TM: What’s your view on free software in general? Simply put, free software is a better model for software development. The diverse user and developer communities that support successful free software projects produce elegant and thoroughly tested code that is impossible for any proprietary software vendor with a single frame of reference to match. High quality free software that is customizable without any vendor lock-in… what’s not to like about that? TM: Do you think free software is at odds with the current economic system? Why? No, free software is not at odds with the current economic system. According to IDC, global revenue from standalone free software was $1.8 billion in 2006. Sourcefire and JBoss represent successful commercial free software vendors, and there are numerous startups growing quickly with commercial free software business models. Commercial free software business models reduce the costs of customer acquisition and allow software vendors to vigorously focus on incorporating user feedback and building great products. There is plenty of room for companies to sell services, support, training and premium features around free software.
The fact that free software has been disruptive in so many areas is validation of just how well free software engineering and business models work within the current economic system.



TM: You run Untangle, which mainly provides free software. What’s your business model, and how is it going? Approximately 95% of Untangle software is free to use, study, modify and distribute under the GNU General Public License v2. This comprises the Untangle Platform, which virtualizes network applications; 12 of the 14 applications that run on it; and all the software and signature updates associated with those 12 applications. Untangle sells the Professional Package through our reseller and managed service provider partners. The Professional Package is a commercial add-on for organizations that want the convenience of live support, two additional applications and advanced management features. The reaction to Untangle’s free software announcement at the end of June has been fantastic! Our forums are buzzing and our downloads are through the roof. TM: What’s next? Will Untangle release more software? What about selling hardware appliances? Is that something you’re interested in? We intend to develop more network-based applications and continue to improve upon our platform. Our future development plans center around incorporating the community feedback that our recent free software announcement generated and delivering on our vision of making the best free software network applications as easy as possible for businesses to install, configure and manage. We don’t have any plans to offer an appliance, because we believe that proprietary hardware is an artificial form of vendor lock-in. Here at Untangle, our motto has become “software is better, open source is best”. However, we will continue to offer a prepared server, running on off-the-shelf hardware, for organizations that want the convenience of a pre-installed system that is ready to plug and play.

Securing the network All networks, from home networks to business networks, need to be secured from outside threats. With the Untangle solution, this is a simple process that can be accomplished quickly. To install new applications, simply select them in the “library” menu on the left side and click “download”. The following basic applications will secure your network from outside (and inside) threats:

Figure 3: The Virtual Rack

Virus Blocker is a great place to start. Virus Blocker scans the traffic entering and exiting the network for any viruses. No software is required on the end user desktops: all the scanning is done on the traffic as it transits through the Untangle Server. Virus Blocker provides a great solution for those without virus protection on the end user desktops or for those wanting an extra layer of virus protection. Virus Blocker is completely configured by default and requires no extra tinkering or configuration. Spyware Blocker stops spyware before it reaches the end user desktops, and can also help detect spyware already installed on machines. Spyware Blocker uses numerous techniques to stop spyware from entering your network and to stop users from visiting places which would infect them with spyware. This can save numerous hours of tracking down weird problems and slowness issues caused by malware and spyware. Spyware Blocker is also completely configured by default and requires no configuration.

Figure 4: Configuring Spyware Blocker

Intrusion Prevention uses thousands of signatures to scan traffic for attacks. Using snort signatures, Intrusion Prevention can detect, block, and log attacks. Intrusion Prevention is preconfigured with good defaults to maximize protection and minimize false positives. Attack Blocker is a heuristic-based intrusion prevention application which blocks attacks like floods and port scans based on reputation. Attack Blocker also requires no configuration. Router provides basic routing functionality, plus DHCP and DNS servers. It may already be installed depending on the selections made during the setup wizard. If there is already a router on the network, there is no need to install Router unless you are replacing the current firewall/router. Router makes the Untangle Server act like a router that by default serves DHCP and DNS with an internal IP of 192.168.1.1, much like a Linksys or Netgear router. Firewall is a simple rule-based firewall that blocks sessions based on protocol, IPs, or ports. This can provide an extra level of control to restrict the traffic entering or exiting the network. Again, if there is already a firewall on the network, this application can either complement the existing firewall or not be used. These applications are all easy to install and can secure your network within minutes.

For business networks Business networks usually face a few additional challenges. Having an email server brings an additional responsibility to protect all the users from the continuous spam, phish, and viruses they get via email. Fortunately, with the Untangle solution all that is required is downloading the appropriate applications. Spam Blocker uses all sorts of spam technologies, like blacklists, bayesian filters, signatures, and optical character recognition, to scan email and detect spam. After installing Spam Blocker, it scans SMTP, POP3, and IMAP traffic and can block spam as it enters your network. Most companies with their own email server will have the Untangle Server scan SMTP email as it enters the network on the way to the company email server. When Spam Blocker detects spam, it files the message in the user’s quarantine. Each user receives a Quarantine Digest, can wade through their personal spam quarantine if they choose, and can maintain their own pass list. Spam Blocker provides an easy spam solution that requires no maintenance. Similarly, Phish Blocker filters out phishing emails. Phishing emails are the fake emails purporting to come from PayPal, banks, eBay, etc., that try to trick users into volunteering their password or personal information. Phish Blocker, similar to Spam Blocker, scans email traffic using signatures looking for phish email, and also scans web traffic to prevent users from visiting sites attempting to steal information. Phish Blocker also requires no extra configuration and can potentially save an organization from a massive disaster.



Many businesses also have remote workers that may need access to internal network resources from home or when travelling. The Untangle library hosts two different applications to help remote users safely connect to the internal network. OpenVPN is a VPN solution that provides remote connectivity directly into the network over an SSL connection. Install OpenVPN, configure the application as a server, and go through the setup wizard. In the exported address step, add any addresses which users should be able to reach remotely. In many cases this is the entire internal subnet, but it can be limited to only a few servers for security reasons. After the wizard, new VPN client users can be created under the “clients” tab and prebuilt .exe files can be emailed to clients. Each client runs the prebuilt .exe file on their PC, which installs a client in the system tray that allows them to connect directly into the network, much like a long virtual ethernet cable. Remote Access Portal provides a clientless web-based portal to access internal network resources. For example, a remote user can log in to their web portal and remotely access their desktop using RDP or VNC, download files from network shares, and access internal websites (like Outlook Web Access). Remote Access Portal provides easy access and requires little configuration because there is no client installation. Users and administrators can create a dashboard of bookmarks to their internal network resources.
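For the curious, the prebuilt client installer carries a configuration along the general lines of a standard OpenVPN client file. The following is a hand-written sketch, not Untangle’s actual output; the server address, port, and certificate file names are placeholders:

```
client
dev tun
proto tcp-client
remote vpn.example.com 443
ca ca.crt
cert client.crt
key client.key
```

The point-and-click workflow simply spares the end user from ever seeing or editing directives like these.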

Controlling the network Many business networks and larger organizations (like schools and libraries) also need to control what activities users do on the network. This ranges from visiting inappropriate websites to wasting time on instant messaging, or using all the bandwidth downloading music on peer-to-peer networks. With the right applications, these issues are easily solved. Web Filter controls and monitors web usage on your network using a large database of known websites. It can be configured to block whole categories, such as porn, gambling, web mail and proxy sites, or specific sites like myspace.com, or even file types. This allows administrators to easily configure Web Filter to reflect the organization’s usage policy. Of course, exceptions can be made for executives and administrators. Web Filter can also be used with Untangle Reports purely to monitor web usage on the network. Protocol Control helps clamp down on those hard-to-block protocols. Protocol Control scans all traffic regardless of port and identifies protocols by signature, allowing tricky protocols like instant messaging to be detected and blocked. Peer-to-peer protocols, like BitTorrent, can also be detected and blocked, preventing a single user from using up most of the organization’s bandwidth.

Example deployment Assume there is already a Linksys or similar router in place. I may wish to keep my Linksys and deploy Untangle as a transparent bridge (this is an easy way to test without messing with the network). I would plug the external NIC on the Untangle Server into the internal port on the Linksys router, and the internal NIC on the Untangle Server into my main network switch. During the setup wizard, I would select “Transparent Bridge” and “DHCP” for the IP settings. I may need to swap the network cables after my Untangle Server decides which NIC is external and which is internal (there is a step to help you determine this). After the setup wizard, I can download the “Open Source Package”, which contains all the applications discussed above. Once all the applications have finished downloading, the installation is complete! Business users may also wish to configure OpenVPN at this point, or enforce the web usage policy with Web Filter.

Conclusion



The Untangle solution provides free and effective protection and control for business and home networks. The Untangle Gateway Platform can be downloaded at untangle.com. After downloading the ISO CD image, it can be burned onto a CD. Booting the server from the CD will install the Untangle Platform and convert the server into an Untangle Server.

Biography Dirk Morris

Copyright information Source URL: http://www.freesoftwaremagazine.com/articles/configure_and_use_the_untangle_gateway


Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

SPIP: Content management for publishers and writers A powerful tool to manage online publications By Dmitri Popov Content Management System (CMS) software nowadays comes in all shapes and colours, so you can afford to be picky and choose the one that fits your needs. And if you happen to be a writer or an editor of an online magazine, SPIP might be what you are looking for. While SPIP is not as well-known as, say, Joomla, it has a huge following in France, its country of origin. Unlike other CMS applications, which cater for a broad user base that needs to manage “content”, SPIP is designed for a more specific audience and purpose. First of all, SPIP defines content more precisely than just a clump of text and pictures, and it allows you to structure it more rigidly. Basically, SPIP takes the same approach to managing content as a magazine. As in most periodical publications, the key element in SPIP is the article. And, similar to a magazine, SPIP allows you to structure the content (i.e. articles) by specifying a hierarchy of categories, where any article must belong to exactly one category. In the same vein, SPIP treats its users as editors, and it offers them all the tools they need to collaborate on, manage, and publish articles and news stories.

Installing and configuring SPIP SPIP runs on the MySQL/PHP stack, and there are two ways to install the application on your server: using the spip_loader script or installing the software manually. Using the SPIP loader is by far the easiest way to install the software: it automates the entire installation procedure, requiring the user only to provide a few pieces of information. To install SPIP using the SPIP loader, download the spip_loader.php file, create the spip folder on your server, and copy the loader file into it. Point your browser to http://yourserver/spip/spip_loader.php, select the installation language, and press the Install button. Then enter your MySQL login info, create a new SPIP database, and enter your personal info, including a user name and a password. Once you’ve done that, SPIP is ready to go. If you prefer to have full control over the installation process, you might want to download the full SPIP package (available from the same download page as spip_loader) and install it manually. Although this is obviously a more laborious process, it is still straightforward enough even for the average user.
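The file-system side of the loader-based install can be sketched from a shell as follows. This is only an illustration: the web-root path is a placeholder, and spip_loader.php is assumed to have already been downloaded to the current directory.

```shell
# Place the SPIP loader under the web root; WEBROOT is a placeholder path.
WEBROOT="${WEBROOT:-$HOME/public_html}"
mkdir -p "$WEBROOT/spip"
# Copy the downloaded loader into place (skipped if it is not present yet),
# then open http://yourserver/spip/spip_loader.php in a browser.
[ -f spip_loader.php ] && cp spip_loader.php "$WEBROOT/spip/" || true
```

From there the loader takes over, fetching and unpacking the rest of SPIP itself.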

The configuration section of SPIP offers plenty of options to customize your website

Once SPIP is installed, point your browser to http://yourserver/spip. If everything works properly, you should see an empty SPIP website. Before you start populating it with articles and news, you should configure some of its settings. To do this, click on the Private area link at the bottom of the page, log in using the user name and password you specified during the installation, and click on the “Configuration” icon. The Configuration area contains three sections. In The site’s content section, you can change your site’s name and logo, set up syndication options, configure the keywords feature and enable the document attachments feature. The Interactivity section allows you to configure forum settings, turn automatic editor registration on and off, and set up automated mailing options. Finally, you can use the Advanced functions section to configure thumbnail generation and a spell checker, and enable SPIP’s collaborative features. These include the conflict resolution feature, which can be used to avoid situations where several editors work simultaneously on the same article, and the revision tracking feature, which allows users to track changes made to articles. As mentioned above, SPIP uses categories as a way to structure content, which means that you need to create at least one category before you can start adding articles. To do this, click on the Launchpad icon and then on the Create a section button. Give the section a name and description and press Save. You can also add an unlimited number of sub-sections, thus creating a more elaborate content hierarchy.

Getting started with SPIP Now you can start working on an article. In the Launchpad, click on the Write a new article button, and you’ll be presented with SPIP’s editing area. Similar to a wiki, SPIP uses special markup for text formatting. Here is a snippet of formatted text which gives you the gist of what SPIP’s markup looks like:

{{Ludwig van Beethoven}} was a German composer. He is generally regarded as one of the greatest composers in the history of music, and was a crucial figure in the transitional period between the [Classical->http://en.wikipedia.org/wiki/Classical_music_era] and [Romantic->http://en.wikipedia.org/wiki/Romantic_music] eras in Western classical music.

SPIP doesn’t support WYSIWYG editing, but you can use the formatting toolbar to quickly apply markup to the text. The toolbar itself is rather bare-bones, so make sure that you check the online help for more advanced formatting options such as tables and lists.

Working on an article in SPIP

Since SPIP is designed as an online publication system (in fact, SPIP stands for Système de Publication Pour l’Internet, which can be loosely translated as Publishing System for the Internet), it tries to emulate the workflow of a printed publication. This means—among other things—that the article isn’t published on the website as soon as it’s written: instead, it moves through different editing stages. For example, until you are finished with the article, you set its status to “editing in progress”. Once you’re done with it you change its status to “submitted for evaluation”. The article then appears on the Launchpad, so other editors can see it. The editors can also stay abreast of submitted articles by subscribing to the Launchpad RSS feed. Moreover, they can comment on the article by attaching messages to it. If the article is accepted, you can finally publish it on the website by setting its status to “published online”. And if the article gets dropped or rejected, you can change its status to “in the dustbin” or “rejected”.


The Calendar allows you to keep track of published articles, events, and announcements.

Another tool that is indispensable for collaborative publishing is the Calendar. On the face of it, the Calendar is a rather simple tool that allows you to keep tabs on your own and other editors’ schedules. However, it has a few clever tricks up its sleeve that make it more than just a mere calendar. First of all, when you set the status of an article to “published online”, it appears in the Calendar, which provides a handy mechanism for keeping track of published articles. Another of the Calendar’s clever features is its integration with SPIP’s messaging capabilities. SPIP supports three types of messages: memos, messages, and announcements. Memos are private messages that no-one can see but you. In most cases, you use them to add events to your calendar. Messages are, well, messages you can exchange with other editors. As soon as you send a message to a particular editor, she can see it in her Messages section. But here is the clever part: you can choose to display the message in the Calendar and you can even set its time. If you do so, the message appears as an event in both your own and the recipient’s calendars.

Use the RSS feeds and iCal files offered by SPIP to stay abreast of what’s happening on your website

With all this editorial activity, keeping tabs on what’s going on can be somewhat difficult, so SPIP offers a few tools to tackle this challenge. Click on the “Follow-up of the site’s activity” button, and you will be presented with an assortment of useful features. As you would expect, there are RSS feeds for different sections of the website. There are also static and dynamic iCal files, suitable for use with any iCal-compatible calendaring application. Finally, there is also a tiny JavaScript applet ready to be embedded into a web page. As you would expect, the Forum feature allows editors to discuss different matters, but SPIP adds an interesting twist to it. The forum consists of internal and public sections. The internal section contains messages posted on the internal forum as well as private messages exchanged between editors. The public section consists of comments posted by visitors to specific articles. This may sound a bit confusing, but it actually works pretty well in practice. Choose Forum→Manage forums, and you can switch between the public and internal sections as well as manage the individual posts and messages. There is a de rigueur RSS feed, which you can use to keep tabs on forum messages.


The plug-in power SPIP has enough features to satisfy even the most demanding users. But its functionality can be extended even further by using plug-ins. There are quite a few of them available in the SPIP-Contrib repository. Installing plug-ins is rather straightforward. Create a “plugins” directory in the root of your SPIP installation, download the plug-in you want, unzip it, and copy it to the created directory. Before you can use it, though, you have to activate it. To do this, you need to switch to the advanced interface in the SPIP administration area by clicking on the Interface button and choosing the complete interface option. This adds the Manage plugins entry to the configuration menu. Choose this entry, and you should see a list of all the installed plug-ins. Tick the ones you want to activate, press the Submit button, and you are done.
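The manual steps above amount to a couple of shell commands. In this sketch the SPIP root path and the archive name are placeholders; only the "plugins" directory name itself comes from SPIP's convention:

```shell
# Create the plugins directory in the SPIP root; SPIP_ROOT is a placeholder.
SPIP_ROOT="${SPIP_ROOT:-$HOME/public_html/spip}"
mkdir -p "$SPIP_ROOT/plugins"
# Each downloaded plug-in archive is then unpacked into it, e.g.:
#   unzip some_plugin.zip -d "$SPIP_ROOT/plugins/"
ls -d "$SPIP_ROOT/plugins"
```

After that, activation still happens in the administration interface, as described below.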

Activating plug-ins in SPIP is as easy as ticking the appropriate check boxes.

Of course, which plug-ins to install and activate depends largely on your needs, but a few of them are worth mentioning. A word of caution: the descriptions of most plug-ins are in French only, so you might need a decent French-English dictionary, or to run the pages through Google Translate. The Multimedia Player plug-in allows you to embed a Flash-based audio and video player that can play any files attached to the article. For example, you can attach an MP3 music file to the article and then embed it into the article as follows (docX refers to the file’s attachment name): <docX|player>. You can also choose between different player skins as well as define the player’s alignment: <docX|player|center|player=dewplayer>. Another nifty plug-in is Nuage, which means cloud in French. In a very Web 2.0 style, it allows you to add a keyword cloud to an article. To add a cloud to an article, simply insert the following code (X refers to the number of the particular keyword group): <nuageX>. The Thickbox plug-in adds even more Web 2.0 pizzazz to the articles. When the plug-in is activated, it turns the embedded images into snazzy photo galleries consisting of the attached graphics files.

Taking SPIP further This is just a tiny fraction of SPIP’s capabilities, and there is much more to SPIP than meets the eye. For example, SPIP features a powerful templating engine and its own templating language that allows you to take full control of your website’s appearance. To learn the ropes, you might want to take a look at a tutorial that explains how to create a basic template. For more advanced stuff, check SPIP’s page layout reference manual.

Biography Dmitri Popov: Dmitri Popov is a freelance writer whose articles have appeared in Russian, British, and Danish computer magazines. His articles cover open-source software, Linux, web applications, and other computer-related topics. He is also the author of the book Writer for Writers and Advanced Users (http://www.lulu.com/content/221513).


Copyright information Source URL: http://www.freesoftwaremagazine.com/articles/spip_content_management_for_publishers_and_writers


Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

Introduction to Firestarter Additional security through a simple interface By Ken Leyba Most modern GNU/Linux distributions are secure with their default minimal installs, whether desktop or server, and some distributions are designed specifically with security in mind. However, any GNU/Linux distribution that needs to make services available to other users or systems will need either enhanced or configurable security. There are other situations in which added security is beneficial; for example, a large environment, while secure to the outside world, would benefit from additional internal security measures.

Network design There are typically only a few types of networks in smaller environments. A single computer that communicates with the internet via a single cable modem or DSL line, or a single internet connection that is shared between multiple computers, are two examples (figure 1). Ideally, the internet connection is protected with a standalone firewall: either a firewall appliance or a dedicated GNU/Linux firewall such as IPCop. Due to cost, location or space concerns, the ideal is not always possible, and the firewall must instead run on a single workstation, or on a multi-purpose workstation that acts as a gateway for the other systems. In a larger environment with multiple operating systems, some insecure by default, a personal firewall enhances security, especially if a workstation contains sensitive information.

Figure 1: Two network types

iptables is a tool—included as a standard part of GNU/Linux distributions—which is used to configure GNU/Linux firewalls. iptables can be configured manually, or with firewall configuration tools like Shorewall, Firestarter and the various GUI front ends that are bundled with GNU/Linux distributions. These tools make configuring firewalls much simpler than the manual command line procedure, at the cost of some granularity—which is typically not needed for less complex configurations.
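To give a feel for what such front ends manage on your behalf, here is a hand-written sketch of a ruleset in iptables-save format (not the actual output of any of these tools; the trusted SSH source address is a placeholder):

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# let replies to outbound connections back in
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# allow SSH from a single trusted host (placeholder address)
-A INPUT -p tcp --dport 22 -s 192.168.1.10 -j ACCEPT
COMMIT
```

A GUI tool generates and maintains rules of this kind from a few clicks, which is exactly the complexity it hides.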

Firestarter According to the Firestarter web site, “Firestarter is an Open Source visual firewall program”. Primarily, Firestarter is a GUI front end for iptables that removes the complexity of setting up a simple firewall for workstations, laptops, and servers. Even though the web site indicates Firestarter could be used to configure a gateway or dedicated firewall, I would be hesitant to use a computer with a desktop environment in this manner. It would be preferable, and more secure, to use a firewall-geared distribution for a standalone firewall. Additional features of Firestarter are: a configuration wizard, a real-time event monitor, internet connection sharing configuration, DHCP server configuration, and inbound and outbound access policies.

Installation Software installation with most modern GNU/Linux distributions has become a nearly trivial task. As I still prefer the feedback of text-based installs and the ease of not having to navigate through too many menus, installation of Firestarter is straightforward from the command line. On an Ubuntu system, access the terminal application through the desktop menu system, Applications→Accessories→Terminal. At the terminal prompt type sudo apt-get install firestarter; at the password prompt, enter your password. Note that APT will suggest an additional package, dhcp3-server, which would be used on a gateway system to provide DHCP services as well as the firewall. A similarly simple installation on a Fedora system uses the yum package manager: as root, enter yum install firestarter. You can also install Firestarter from the GUI; in Ubuntu, for example, run System→Administration→Synaptic Package Manager and simply look for “Firestarter” in the search form. Keep in mind that the “Universe” repository needs to be enabled. Once the installation is complete, from the desktop menu select System→Administration→Firestarter. The first time Firestarter is started, the configuration wizard is run. Since the firewall runs as a privileged user, i.e. root, you will be prompted for your password. The configuration wizard takes you through a simple process to configure a basic firewall. You are first greeted with a welcome screen: click on the “Forward” button. The “Network Device Setup” dialog box displays the detected network devices and two check boxes (figure 2). The first check box is to start the firewall on dial-out; in other words, it will start the firewall while using the dial-up network connection. The second check box allows the system to receive an IP address through a DHCP server, for example through an ISP cable modem or DSL line, or the company DHCP server.
Select the internet side network device from the drop down box; if you have only a single network device, as in this example, use the default eth0 device and click on the “Forward” button.

Figure 2: Network Device Setup dialog

Configuration options The “Internet Connection Sharing” dialog box allows you to enable connection sharing, using the system as a gateway. If there is a second network device, it will be selected here as the local network side of the gateway. The checkbox in the dialog also allows you to enable a DHCP server on the local network. Since, in this example, there is only one network device, use the defaults and press the “Forward” button. The final dialog box, “Ready to start your firewall”, allows you to save the configuration and start the firewall; since this is what you want to do, click on the “Save” button (figure 3). This completes the initial configuration and the Firestarter Status Page displays (figure 4).


Figure 3: Starting the firewall

Figure 4: Firestarter Status Page

The first basic preference that should be set is the “Minimize to tray on window close” preference. This will display an icon in the system tray that indicates the status of the Firestarter firewall: running, stopped or locked. Locking the firewall disallows all incoming and outgoing network connections. To change the settings, in the Status Page menu select Edit→Preferences or click on the “Preferences” button. On the Interface section of the preferences dialog, enable the “Minimize to tray on window close” check box, then click on the “Accept” button.

Viewing events Possibly one of the nicest features of Firestarter is the ability to view real-time events via the Events Page. To view these events click on the “Events” tab on the Status Page (figure 5). By default, five (time, port, source, protocol and service) of eleven columns are displayed in the event view. The columns are customizable under the “Show Column” section of the “Events” menu item. Events are color coded by severity:

• gray for harmless events (e.g. broadcasts)
• black for regular connection attempts to a random port
• red for possible attempts to non-public services

In the figure, the red highlighted event is a probably harmless NFS event on the local network. There are also several gray SMB events from Windows workstations on the network. The number of events displayed can be reduced by setting the “Skipping redundant entries” and “Skip entries where the destination is not the firewall” preferences.


Figure 5: The Events Page

Allowing access

Access to the firewalled system can be allowed in two ways: through the Policy Page or via the Events Page. In figure 6, two types of events are displayed: SMB and SSH connection attempts. To allow secure shell connections from a particular host attempting the SSH connection, right-click on the event and select “Allow inbound service for source”. This creates a policy rule in Firestarter allowing SSH connections only from that source machine; this can be verified in the Policy Page (figure 7). Since SMB (the Windows file sharing service) uses several ports, it is easier to create a rule in the Policy Page to allow SMB access. In the Policy Page, click the “Allow service” section, then click on the “Add Rule” icon. In the “Add new inbound rule” dialog box (figure 8), select “Samba (SMB)” from the drop-down box and leave the default “Anyone” radio button selected; then click on the “Add” button to dismiss the dialog. Finally, click the “Apply Policy” button to immediately activate the rule.
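Behind the GUI, Firestarter translates these policies into iptables rules. A rough, hypothetical equivalent of the two policies above might look like the sketch below; the trusted host address is made up, and the script only prints the commands so they can be reviewed before being run as root:

```shell
#!/bin/sh
# Dry-run sketch: replace "echo iptables" with plain "iptables" (as root)
# to actually apply the rules. The address below is hypothetical.
IPT="echo iptables"
TRUSTED_HOST=192.168.0.50

# SSH (port 22) allowed only from the one source machine:
$IPT -A INPUT -p tcp -s "$TRUSTED_HOST" --dport 22 -j ACCEPT

# SMB spans several ports, which is why the GUI rule is more convenient:
$IPT -A INPUT -p udp --dport 137:138 -j ACCEPT   # NetBIOS name/datagram
$IPT -A INPUT -p tcp --dport 139 -j ACCEPT       # NetBIOS session
$IPT -A INPUT -p tcp --dport 445 -j ACCEPT       # SMB over TCP
```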

Figure 6: Events and adding access


Figure 7: Policy Page

Figure 8: Adding new inbound rules

The Policy Page also allows enabling full access from specific machines or subnets. This feature can be used, for example, to allow full access from a workstation that needs it for administration, or from a particular group of machines on the same subnet. It is more secure, however, to open only the needed services to specific machines rather than granting full access to groups of machines.
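In iptables terms, a full-access rule boils down to a single blanket ACCEPT, which illustrates why it is less safe than opening individual services. A hypothetical sketch for a whole subnet (the command is printed for review rather than applied):

```shell
#!/bin/sh
# Dry-run sketch: one blanket rule accepts everything from the subnet.
# Replace "echo iptables" with "iptables" (as root) to apply.
IPT="echo iptables"
$IPT -A INPUT -s 192.168.0.0/24 -j ACCEPT
```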

Configuring a gateway

Setting up the firewall as a gateway system requires a few preparation steps. The internet and local network devices must be identified and configured with the Network Configuration Tool or another preferred configuration method: for example, eth0 for the internet side configured with DHCP, and eth1 for the local network configured with a static IP address. The DHCP server service should also be installed (for example by running sudo apt-get install dhcp3-server or by using Synaptic) and started.
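On a Debian or Ubuntu system of that era, the two interfaces described above might be set up with an /etc/network/interfaces fragment along these lines (the static address is an assumption for the example):

```
# eth0 faces the internet and gets its address via DHCP
auto eth0
iface eth0 inet dhcp

# eth1 faces the local network with a static address
auto eth1
iface eth1 inet static
    address 192.168.0.1
    netmask 255.255.255.0
```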

The configuration wizard… again

Once the network devices are set and the DHCP server is running, the Configuration Wizard can be run. The wizard, once again, runs the first time Firestarter is started; it can also be run from the Status Page menu, Firewall→Run Wizard. Proceed through the wizard as in the previous example. At the “Internet connection sharing setup” dialog box, select the second network device (figure 9), eth1 in the example, then complete the wizard as before. DHCP will be configured later through the Firestarter preferences.


Figure 9: Configuration wizard connection sharing

DHCP is configured in the Preferences dialog box: under the Firewall section, select Network Settings (figure 10). Here the settings of the internet and local network devices can be specified. Select both the “Enable Internet connection sharing” and “Enable DHCP for local network” check boxes. Select the “Create new DHCP configuration” radio button and set the DHCP range this server will provide. A list of domain name servers can be set in the “Name server:” text box; either IP addresses or domain names, delimited by commas, are acceptable. There is also a special entry, dynamic, used when the firewall system gets its DNS server list from a DHCP server, i.e. one set by the ISP or by the company DHCP server.
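For comparison, the same settings expressed directly in a dhcpd.conf fragment would look roughly like this (the range and addresses are hypothetical; Firestarter writes the equivalent configuration for you):

```
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.100 192.168.0.199;        # addresses handed to clients
    option routers 192.168.0.1;               # the firewall is the gateway
    option domain-name-servers 192.168.0.1;   # or the ISP's name servers
}
```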

Figure 10: New DHCP configuration

Forwarding and blocking

In the gateway scenario, there is an additional feature available. Since all systems on the local network side of the firewall share the same public IP address through Network Address Translation (NAT), forwarding can direct services to specific machines. In the Policy Page, an additional section for forwarding services is displayed (figure 11). For example, a web server can be set up on the local network side of the firewall and all web-related network traffic redirected from the firewall to the web server.
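The forwarding that Firestarter sets up corresponds to a NAT rule on the external interface plus a matching FORWARD rule. A hypothetical sketch for a web server at 192.168.0.10 (the commands are printed for review, not applied):

```shell
#!/bin/sh
# Dry-run sketch: replace "echo iptables" with "iptables" (as root) to apply.
IPT="echo iptables"
WEB_SERVER=192.168.0.10   # hypothetical web server on the local network

# Rewrite incoming web traffic on the internet side to the internal server...
$IPT -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination "$WEB_SERVER"
# ...and let the rewritten packets pass through the firewall.
$IPT -A FORWARD -i eth0 -p tcp -d "$WEB_SERVER" --dport 80 -j ACCEPT
```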


Figure 11: Forwarding services

Another feature is blocking the local network’s access to outside services or hosts. This is accomplished in the Policy Page, under “Outbound traffic policy” (figure 12), where a list of denied hosts can be added. A content filter on a dedicated firewall or web proxy would accomplish this better, but this is a simple way to do basic host blocking.
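At the iptables level, each denied host becomes a DROP rule in the FORWARD chain, which handles traffic passing from the local network out to the internet. A sketch (the address is from the TEST-NET documentation range; the command is printed rather than applied):

```shell
#!/bin/sh
# Dry-run sketch: replace "echo iptables" with "iptables" (as root) to apply.
IPT="echo iptables"
# Drop anything from the local network headed to the blocked host:
$IPT -A FORWARD -d 203.0.113.7 -j DROP
```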

Figure 12: Blocking outside servers

Conclusion

Firestarter is an easy-to-use graphical interface for configuring a simple personal firewall or gateway system. The Firestarter web site has additional information and current documentation, though development seems to have come to a halt. Despite the status of development, the mailing list is still active and Firestarter is still included in the latest GNU/Linux distributions.

Biography

Ken Leyba: Ken has been working in the IT field since the early 80s, first as a hardware tech whose oscilloscope was always by his side, and currently as a system administrator. Supporting both Windows and Linux, Windows keeps him consistently busy while Linux keeps his job fun.

Copyright information

This article is made available under the “Attribution-NonCommercial” Creative Commons License 3.0 available from http://creativecommons.org/licenses/by-nc/3.0/. Source URL: http://www.freesoftwaremagazine.com/articles/introduction_to_firestarter
