Issue 10
Table of Contents

Issue 10
We can all finally install
What's free about free software?
    Computing and the American West
What is X?
    Discover the versatility and power of the X Window System
Browsers for Mac OS X
    Comparing FOSS browsers for Mac OS X
GRUB tips and tricks
    Spicing up a great utility for more IT fun
Jump to Debian GNU/Linux!
    A guide to why the Debian distro is a good choice
64 Studio
    Building a native 64-bit creative distribution
Convincing management to approve free software
    Tips to better advocacy
Free software liberates Venezuela
    The free software revolution comes to Venezuela
Towards a free matter economy (Part 4)
    Tools of the trade
A techno-revolutionary trip on the internet
    Reflections on the lessons from Dean for America
Issue 10

By Tony Mobily

In Free Software Magazine's 10th issue Eddy Macnaghten helps to make X a little less unknown and "MC" Brown browses the browsers for Mac OS X. On a more political note: David Sugar talks about how free software is freeing Venezuela and Tom Chance reveals how the internet is beginning to aid in political campaigning. And more...

Source URL: http://www.freesoftwaremagazine.com/issues/issue_010
We can all finally install

By Tony Mobily

I've seen a lot of new users—and even kids—using Linux comfortably. And everything goes fine—until they decide to install new applications. You see, on a Mac people can install an application by simply downloading it, copying it wherever they like, and double-clicking on it. In Windows, it's a matter of running an ugly installer, answering a few questions, and letting it copy a zillion files all over the place. In Linux... it depends.

Even though distributions such as Ubuntu do a terrific job in giving users an amazingly complete "base" system, and a reasonably intuitive way of installing new software (thanks to the Synaptic Package Manager), users always get lost when they try installing an "unplanned" piece of software (that is, one that was not pre-packaged and pre-compiled by the distribution's maintainers). Unfortunately, no matter how hard distributions try to add every single package to the list of "supported" applications, such a list can never be 100% complete. The problem is even more relevant if somebody wants to install a piece of non-free software, which will obviously never be supported by a distribution! Users also need to be "root" to install software, and that's not always ideal...

I feel it's only fair that I show my cards, and admit right away that I am a fan of "the Apple way of doing things": each application has its own directory, which contains everything needed by the program, and is seen by the user as a fancy icon (chosen by the application's developer). Uninstalling a program is as simple as dragging it into the rubbish bin! It sounds simple, and it is—for the user. Apple has been doing this for many years, and it definitely works.

However, things that make life easier for the user often add much more complexity to the system. In the case of Appdirs, there are several issues, and not all of them are purely technical:

• Appdirs need to be a joint effort: distributions and desktop environments (Gnome and KDE primarily) need to agree on what an "Appdir" is, how it works, and so on.
• Appdirs go against the Unix philosophy of putting each file in the right place—this is possibly why there is resistance to Appdirs in the Linux world.
• With Appdirs, the "automatic upgrade" process becomes tricky to say the least (however, it's definitely not impossible).
• The desktop environment must be able to "register" each Appdir (probably at its first execution), and must be able to associate a file type with a particular Appdir.
• Finally, existing applications (and there are a lot of them!) all need to be repackaged.

Some of these problems might never be solved completely; this is possibly why Linux has only taken some steps toward Appdirs, and no distribution or desktop environment has endorsed them fully. These problems have never had an easy solution. Until now.

Thanks to Klik (http://klik.atekon.de/), developed by Simon Peter, this situation has finally changed. Klik deals with most of the issues I mentioned above (if you are curious, check the FAQs (http://klik.atekon.de/wiki/index.php/User's_FAQ)); very importantly, it works well under Ubuntu, which happens to be the fastest growing distribution at the moment. Thanks to Klik, you can download your favourite applications, burn them onto a CD, and give them to your favourite and least experienced Linux user—even your grandmother!

Klik is another huge step towards Linux's desktop domination. Right now, thanks to Klik, there's one less excuse for not using Linux. Are there any left now?
Biography

Tony Mobily: Tony is the founder and the Editor In Chief of Free Software Magazine.
Copyright information

Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved.

Source URL: http://www.freesoftwaremagazine.com/articles/editorial_10
What's free about free software?

Computing and the American West

By John Locke

Computer history has some interesting parallels with the history of the American West. After the initial forays of Lewis and Clark and the first set of explorers, early settlers crossed the plains in covered wagons. But the West wasn't accessible to most Americans until the age of the railroads, when the Union Pacific Railroad put tracks across the continent and started running a regular passenger service.
Railroad history

To ride the trains, you needed to pay a fare to the railroad companies. These companies became huge monopolies, because they controlled the only way for the average person to cross the country. For a time the railroads were it, and as long as the tracks went where you wanted to go, the rails were the cheapest and best way to get there. And after all, the major cities all developed along rail routes, so where would you want to go that you couldn't get to by rail?
Driving the Golden Spike on the transcontinental railroad

Of course, with the advent of the automobile, that all changed, and today, while trains still exist and still go to a lot of the same places, they are a shadow of their former dominance of the transportation landscape, used by commuters in some cities, and by tourists. Most of us drive, because we have the freedom to go wherever we want, on our schedule. It still costs money to get there—we have to pay for the upkeep of our cars and fill them with gas. We've all paid for our roads through taxes, tolls, and other means. While there have been three big, dominant auto manufacturers in the US, none have had a monopoly on sales—we all like different things in our cars, and make different choices based on our likes and needs. And we all need to go through some sort of drivers training before we're safe on the roads, yet hundreds of millions of us take this for granted.
Railroaded today

Microsoft is the Union Pacific of the computer world. Windows provides the tracks. In the computer world, we're in the very early days of the automobile, say in the 1920s—the railroads are still dominant, and go nearly everywhere you might want to go. But you have to go on their schedule, and you have to pay a fare to get on board. While there are dirt roads all over the place, few are paved, the trains will get you to your destination faster, and you pretty much have to be a mechanic to keep that Model T running over any large distance. But the car is clearly the future, and we're starting to build the freeways now.

Free software powers those cars. While there are still a couple of cities you can't drive to yet, you can pretty much go anywhere, including places you now can't get to by train. Free software will get you nearly everywhere you can get with Windows. There are great free word processors, spreadsheet packages,
presentation programs, desktop publishing programs, astronomy tools, databases, everything that you might imaginably need to get where you need to go. But these programs aren't from Microsoft, or Adobe, or the other "standard" rail car manufacturers, which don't run on your average dirt road.

Once upon a time, only 15 years ago, there were several different word processing programs considered "standard". Anybody remember WordPerfect? It had a much bigger market share than Microsoft Word. Yet today, everybody expects Word documents and little else. At that point, having a different word processor was like having a rail car that fit a different set of rails—you couldn't just open a WordPerfect document in Word and expect it to look the same.
Freedom to drive

In the free software world, where programs need to communicate with each other, they use established, open standards. A new document format has emerged, called the Open Document format. Unlike Word documents, you already have a choice of several different programs that will read and write these documents without issues: OpenOffice.org, StarOffice, KWord and AbiWord, to name a few. The state of Massachusetts recently mandated that all their office documents be stored in this format, to prevent being locked into a single vendor. Naturally Microsoft is complaining—if they can't maintain their advantage of being the only vendor capable of flawlessly working with their own office format, how can they maintain their monopoly? They would inevitably get a whole slew of competitors—bad for them, but good for the rest of us. Do you really want to be forced into paying Microsoft time and time again to keep buying access to your own documents? The state of Massachusetts decided, at least for them, the answer was no.

You can free yourself from vendor lock-in, too, by going to OpenOffice.org (http://openoffice.org) and downloading the full office suite, entirely free. Version 2.0 is out, and it has a database to rival Access—the previous versions already provided excellent replacements for the other Office products. OpenOffice.org Writer is better than Word in several areas:

• Bullets and numbering
• Outline numbering
• Page templates
• Predictable page formatting
• Master and sub documents, for those book authors out there
• Drawings with connectors that stick to objects as you move them around, like Visio

Naturally, it does have some drawbacks, too: the Outline view isn't as nice, it is incompatible with any macros you've developed (and with Word macro viruses), envelope printing is confusing, and merging to a catalog list doesn't seem to exist. Are these features worth the extra $300 per seat in your business?
Land grants and land grabs

In the 19th century, the land grant railroad companies were granted rights to pick and choose a swath of land across the country to lay down their rails. Did they choose the best route for their rails? Not necessarily. They picked the most fertile land, the most valuable land, and planned their rail route so that they could get the most dollars for the land granted to them by the government. A great-great-great-great grandfather of mine was on a crew that helped move the rails to a more sensible route, after the land grants were complete.

Today, it's Microsoft doing the land grabs, trying to bundle as much functionality into their operating system as possible. First it was the web browser. Lately it's the media player. There have been dozens of other small companies unable to compete with the software giant, who have gone out of business—and as a result, we have fewer choices as a whole. When Microsoft asks, "Where do you want to go today?" think customer survey: they're trying to find out where they should build their rails so they can extract more fares from you in the future. Never mind the fact that you can probably drive there today with free software.
Free software is about giving you the freedom to drive wherever you want. It's not free of cost—you have to buy your PC (the car), you have to pay a mechanic periodically, you need to learn how to drive, and there are taxes, toll roads, and potholes all over the place. But when given the choice between a railroad and a car, most of us choose to drive, simply because we have the freedom to go where we want and when, not just because it's cheaper. In many cases, the railroad is cheaper, especially in these days of escalating fuel prices. But the more people who choose to drive free software, the more roads will get paved. And if you need a mechanic, a map, or a custom vehicle, that's what Freelock Computing, or dozens of other new service companies, can provide!
Biography

John Locke: John Locke is the author of the book Open Source Solutions for Small Business Problems. He provides technology strategy and free software implementations for small and growing businesses in the Pacific Northwest through his business, Freelock Computing (http://freelock.com/).
Copyright information

This article is made available under the "Attribution-Sharealike" Creative Commons License 2.5 available from http://creativecommons.org/licenses/by-sa/2.5/.

Source URL: http://www.freesoftwaremagazine.com/articles/computing_american_west
What is X?

Discover the versatility and power of the X Window System

By Edward Macnaghten

Everyone likes pretty pictures. The newsagent's stand is now crowded with glossy magazines, roadside advertisements glare out at you as you drive along the freeway, you see a wondrous mosaic as you look at all the packaging on supermarket shelves. Television long ago replaced the radio as standard home entertainment and the fact that you cannot judge a book by its cover doesn't prevent the vast majority of the human population from doing so. The same applies to computers now. The GUI (Graphical User Interface) or "windows functionality" has become part of the machine that everyone now takes for granted.
To sell something to the public it needs to look good

Any home or client system software provider, free or otherwise, that wants to be taken seriously has to provide a graphical user interface (GUI) as their prime method of user interaction. Microsoft include theirs as part of the main system, or kernel, as do many other system providers. Most GNU/Linux distributions include it in the default install. However, with GNU/Linux and other POSIX operating systems, the GUI infrastructure is not part of the kernel but a separate program with the mystical name of "The X Window System". It calls itself "X11 Release 6" and everyone else simply knows it as "X". There's a lot more to this X than meets the eye: it has attributes and powers that are not well known and can do things that other windowing systems cannot. What is this X? What amazing super-GUI powers does it have? This article attempts to tear off its mask and reveal all.
What is X?

A more appropriate question to "What is X?" is "What is X not?". X is not actually a GUI. The GUIs in GNU/Linux tend to be GNOME, or KDE or even MOTIF. Most GNU/Linux distributions are now delivered with a GUI configured as the default interface, and this can compare with any other computer system's GUI. And the end user, who processes words in documents, calculates sheets in spreads, mails e's in readers, browses the surf in webs and the such, need not know the technical ins and outs of it. However, there is a lot more that can be done with a small amount of know-how and a bit of command manipulation...

So then what exactly is X if not a GUI? X is an infrastructure that a GUI uses to do its stuff. For example, a GUI handles the buttons, text and combo boxes, windows etc., whereas X handles the low level drawing of fonts, lines and pictures on the screen, accepts keyboard and mouse inputs, and handles the inter-program communication of these. It can also handle network distribution of users and remote sessions.

One of the most confusing aspects of X is the naming of the parts, in that the terms "server" and "client" are the opposite of what many would expect. An X server is the screen and keyboard, what a lot of MS Windows people would automatically think of as the ultimate client. An X client is a program that opens and uses
windows, such as a browser, email client, word-processor and so on. To go into why this is so I will compare an X-server to a file server...

A file server is a machine on a network where files exist and other machines, or clients, can connect to it to open, read, write or manipulate files. Often, of course, the file server and the client are the same machine, but sometimes they aren't. What a file server is serving is data in files.
The concept of a file server tends to be well understood

An X-server is a machine on a network where a "windowing" program exists and other machines, or X-clients, can connect to it and create windows, write or display text, pictures and so on, into a window and can read any input the user makes to that window. The X-clients are often run on the same machine, but sometimes they aren't. What an X-server is serving is windows and your input.
The concept of an X-server is really no different to that of a file server

I will go through a detailed example of exactly what I mean. However, before commencing it is worth pointing out that what follows is not how X is usually run. I am simply demonstrating some individual components to show how the infrastructure hangs together. First though, to install X...

If you have a GNU/Linux system you almost certainly have X already installed. To run through the examples below it is best to switch the GUI off initially if it is on. To do this go to the main virtual console by pressing CTRL-ALT-F1, logging in as root and entering the command "init 3", then log out of root and log in as your user.

If you have a Windows system you can install the Cygwin version of X. Go to www.cygwin.com (http://www.cygwin.com), click on the "install now" icon and follow the defaults. When the list of packages comes up, ensure that "X11" is marked as "Default" install. When installed, double click on the Cygwin icon (or navigate to it through the "Start" menus) to get the "$" prompt.
The X server can be started manually at the command line; this can be done if X is not already running by simply entering:

X &
(X in capitals then an ampersand) at the dollar prompt. This will start an X-server in the background. On GNU/Linux this will be in a "virtual screen", on Windows this will be in its own MS-Window. A black or gray patterned screen will appear with a plain graphical "X" in the middle that can be moved around by the mouse. This as it stands is totally useless—yup, a complete waste of time. Although you can move that X about by playing with the mouse, pressing keys and clicking on mouse buttons do nothing and it doesn't even look pretty. This is like a formatted file server with no files in it: interesting from a geeky point of view, with great potential, but currently not much to see.
An X-server running by itself—as such quite boring

Note: on MS-Windows and Cygwin, if you would rather use the entire screen to gain the "proper" X experience then add the "-fullscreen" option to the "X" command. Pressing the MS-Windows key (the one with the logo on it) will get you back to MS-Windows.

After the X-server has been started go back to the console where you entered the command (CTRL-ALT-F1 on GNU/Linux, or select the Cygwin window on MS-Windows), and enter the new commands:

xauth add :0 . `mcookie`
Please note the backquotes, punctuation and spaces in the command. This gives you permission to open windows on the X-server and use it. I will go into this later. After that, enter:

xterm -display :0 &
This opens a terminal shell on the X-server. To see this look at the “X-server” screen (by pressing CTRL-ALT-F7 on GNU/Linux, or selecting the X-server window on MS-Windows) and there should be a terminal program running in the top left hand corner. By moving your mouse over it you can enter commands (such as “ls”) and see the result.
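To recap, the whole manual start-up sequence condensed into one sketch (it assumes display :0 is free, i.e. no other X-server is already running, and is run from the text console):

X &                         # start a bare X-server in the background on display :0
xauth add :0 . `mcookie`    # authorise yourself to use display :0 (note the lone dot and the backquotes)
xterm -display :0 &         # open a terminal emulator on the new display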
An X-server with an xterm X-client connected to it

In the above, the command "X" starts an X-server, which simply sits there looking not-so-pretty waiting for a client to connect to it. Xterm is an X-client that connects to the server, opens a window and runs a shell.

There are command line options to open windows of different sizes and at different positions, but all who I have talked to would agree that this is a very user unfriendly way to do things. What is needed is an X-client called a window manager. This provides a user-interactive way to manipulate windows in the manner we have become accustomed to. Cygwin X usually comes with WindowMaker installed, and it can be started by either entering wmaker & in the xterm window, or by entering wmaker -display :0 & in the Cygwin console. Some GNU/Linux distributions may have WindowMaker too, though it's not normally in the default installation; however "mwm" or "fvwm" often is, and can be started by entering mwm & (or fvwm &) in the xterm window or mwm -display :0 & (or fvwm -display :0 &) on the console. Should none of these work, then the somewhat old and antiquated "twm" (which is part of the base X package) is almost guaranteed to be installed.

Once a window manager is loaded then moving and resizing of windows is interactive, and launching new applications is easier. Try entering xclock & in the xterm window to find out the current time, or even xeyes & for an impression of the boss watching you.
An X-server running the WindowMaker window manager and some other X-clients—things are beginning to get interesting

Although the look and feel presented in the above is not that impressive, it epitomises the philosophy of X—it provides a versatile infrastructure giving you the choice to run whatever program best suits your needs. Far better look-and-feel suites can of course be, and are, run on X. However, instead of forcing you to use a predefined window manager as in MS-Windows, you have a choice from the many available in X systems. The easiest way to quit an X-server is to hit CTRL-ALT-BACKSPACE.
A schematic diagram of some X programs connecting to an X-server
What's in a display?

You would have noticed the "-display :0" arguments of the commands to open windows on the X-server. A "display" is the particular screen to connect to. By default, when X is started it will create a display called ":0", that is display zero on the local machine. However, both GNU/Linux and Cygwin under MS-Windows have the ability to run more than one X-server simultaneously.

To demonstrate, make sure you are not running any X servers on the machine, then from the console (CTRL-ALT-F1 in GNU/Linux, or the Cygwin window in MS-Windows) enter:

X :0 &
to start the first X-server. Then switch back to the console when loaded, and enter:

X :1 &
This will have started the second. Again, switch back to the console, then:

xauth add :0 . `mcookie`
if not already done, also:

xauth add :1 . `mcookie`
to grant yourself permissions to connect to the servers. Then enter the commands:

xterm -display :0 -bg yellow &
xterm -display :1 -bg purple &
If you go to the first X-server (CTRL-ALT-F7 on GNU/Linux, select the appropriate window in MS-Windows) you should see an xterm with a yellow background, and on the second X-server (CTRL-ALT-F8 on GNU/Linux) you should see one with a purple background. You will discover it is possible to run one window manager on one X-server, and another on the other. It is also possible to run the X-servers at different resolutions, and logged in as different users—though doing that is not described here. This is like having two sets of input/display devices (keyboard, mouse and monitor) connected to the machine.
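For reference, the whole two-server experiment can be condensed as follows (a sketch assuming no X-server is already running; the trailing ampersands keep the console free for the next command):

X :0 &                          # first X-server (CTRL-ALT-F7 on GNU/Linux)
X :1 &                          # second X-server (CTRL-ALT-F8 on GNU/Linux)
xauth add :0 . `mcookie`        # authorise yourself on each display
xauth add :1 . `mcookie`
xterm -display :0 -bg yellow &  # a yellow xterm on the first display
xterm -display :1 -bg purple &  # a purple xterm on the second display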
However, that's not all—there is a sequel! The display ":0" is in fact short for ":0.0". The "dot zero" is assumed unless another number is defined. This is the screen number of the display. To differentiate: a display in X consists of a keyboard, a pointy device (like a mouse) and one or more screens. Think multiple monitors here. The display ":0.0" is the first monitor, ":0.1" is the second and so on. This is rarely used as most PCs and workstations only have one screen connected to them. However, X allows for more—the multi-screen aspect of X can be demonstrated using the X program "Xnest", which is both an X-client and an X-server. It runs a virtual X-server in an X-window. To demonstrate this, from an "xterm" window (on a session that has a window manager running) run:

Xnest :3 -scrns 2 &
(Note the capital X.) This runs an "Xnest" client emulating an X-server with two screens using display number 3. Each screen is represented as a different X-window. Then from the "xterm" window:

xauth add :3 . `mcookie`
to give yourself permissions. Then:

xterm -display :3.0 -bg red &
xterm -display :3.1 -bg blue &
for two xterms on these (virtual) screens. Should you run “mwm” or “wmaker” on one of these, and “twm” on another you can also see some of the versatility of X. To quit these simply close the Xnest windows.
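Condensed again, the multi-screen Xnest experiment looks like this (run from an xterm inside a session that already has a window manager; display number 3 is an arbitrary choice):

Xnest :3 -scrns 2 &             # a nested X-server with two screens, on display :3
xauth add :3 . `mcookie`        # authorise yourself on the nested server
xterm -display :3.0 -bg red &   # an xterm on the first virtual screen
xterm -display :3.1 -bg blue &  # an xterm on the second virtual screen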
An Xnest session emulating an X-server with two screens

To save you entering the "-display" argument every time you enter a command, X-clients will use the display specified in the DISPLAY environment variable should no "-display" argument be provided. Therefore:

xterm -display :0 &
is the same as:

DISPLAY=:0
export DISPLAY
xterm &
When an X session is fired up, or multiple X-clients for that matter, it automatically places the display it is connected to in the DISPLAY variable thus reducing the requirement for the user to specify it within the X
session itself. There is a small caveat here of course: if you should change the DISPLAY variable in an xterm window it can mess up any X commands you enter from there subsequently. Should no "-display" argument be defined and if the DISPLAY variable is not set then the X-clients will use the display :0.0 by default.
X in the real world

No one runs X the way described in the above examples—it would be far too cumbersome. The usual ways are with the shellscript "startx", or with xdm, which really needs to be called by the POSIX init process (xdm is not available under MS-Windows/Cygwin). To start X from the console, simply enter "startx", or "xinit" if that does not work. This will start an X server on display :0 and run an xterm in it (though that is configurable and can do something different). To run an X-server on a display other than zero, the command startx -- :1, or even xinit -- :1, will start an X-server on display 1.

However, most X-servers are started using xdm, or one of its close cousins such as kdm and gdm, especially in the GNU/Linux world. Most distributions are set up to start xdm in run level 5, and not in run level 3. That means that entering the command "init 5" as root in a console will switch the X suite on, and entering "init 3" will switch it off. Most GNU/Linux distributions, including RedHat and SuSE, automatically boot up with X enabled, so X is started on bootup. Xdm, by the way, stands for "X Display Manager", and kdm and gdm are, for all intents and purposes, functionally identical. They just look prettier.
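As a quick reference, switching the graphical suite on and off, or starting sessions by hand, might look something like the sketch below on a distribution that uses the conventional run levels (the exact commands and run levels can differ between distributions):

init 5           # as root: switch on xdm/gdm/kdm and the graphical login
init 3           # as root: switch the graphical suite off again
startx           # from a console: start an X-server on display :0 with your own session
startx -- :1     # start a second session on display :1 (xinit -- :1 also works on some systems)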
A gdm (pretty cousin to xdm) graphical log in screen program, or greeter

Xdm (and kdm and gdm) present the user with an X-client which is a graphical log in screen called a "greeter". Once you have entered your user name and password, not only a window manager but an entire desktop environment is usually loaded, such as GNOME or KDE, that can compare favourably with the best that other systems such as Microsoft Windows can offer.

Cygwin X can be started with a "-multiwindow" argument. This is in fact the default when using "startx". This integrates MS-Windows' own window manager and uses it for all the X programs too, thus integrating the X programs into the MS-Windows display environment.
Distributed X

At the beginning of the article, I skimmed over the networking attributes of X. I'll now go through them in more detail. When an X-client connects to an X-server it does exactly that. There is nothing virtual about it. It connects using its own protocol designed for the purpose. When the X-server is on the same machine as the X-client the means by which it connects is through something called "Unix Sockets". This is one way that a program can communicate with another on the same machine. However, the X protocol can work over TCP/IP. In other words, you can run the X-server on one machine and the connecting X-clients on another.
There are both advantages and disadvantages to running the X-server on a separate machine to the X-client. An advantage is that the machine running the word-processor, for example, is not cluttered up with the bloat of the screen handling process, and will run in a smaller footprint which can make it faster. Also it is possible to have windows of programs running on several machines opened in front of you. A disadvantage is that screen refreshes can be slower as some network traffic is required for these.

It is difficult to show network features with one machine, so to demonstrate this you really need two machines, either with MS-Windows with Cygwin installed or Linux or a mixture. If you are using MS-XP with SP2 you need to place the path of the XWin.exe (C:\Cygwin\usr\bin\X11R6\XWin.exe by default) into the "exceptions" of the MS-Windows firewall. On the GNU/Linux machines you need to let through TCP ports 6000 through 6009 and UDP port 177. You will also need to know the IP address of the two machines. For the sake of this demonstration I'll assume one machine to have the IP address of 192.168.0.1, and the other 192.168.0.2.

On the first machine (192.168.0.1) start the X-server using "startx", then ensure a window manager is running. In the "xterm" window enter the command:

xhost +192.168.0.2
or whatever the IP address of the other machine is. This will permit connections from that machine. On the second machine, from a console, enter the command:

xterm -display 192.168.0.1:0 &
Then go back to the first machine, which is running the X server, and you should see a new xterm window. Further investigation should show that the shell in that xterm is running on the second machine. What's happening here is that an X-client on the second machine is connecting to the X-server on the first and opening a window there.
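To summarise the two-machine demonstration (192.168.0.1 owns the screen and runs the X-server, 192.168.0.2 runs the program; substitute your own addresses):

# On 192.168.0.1, inside the running X session:
xhost +192.168.0.2               # allow X connections from the other machine

# On 192.168.0.2, from a console:
xterm -display 192.168.0.1:0 &   # open an xterm on the first machine's display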
A schematic diagram of X programs running on a different machine to the X-server
X and security

Wherever you have a network protocol you have security issues, and X is no exception here. The fact of the matter is that the X protocol is not suitable for anything other than a private local area network situated behind a firewall. Should you wish to use X through the internet then you need to tunnel it through a program like "ssh" or similar, which will be described later.
LANs have security issues too. If these are ignored, one person on a LAN can open and close windows on another's machine willy-nilly, without permission. That wouldn't be good. Don't worry though, there are various ways you can protect machines and only permit allowed connections to an X-server.

The first is using the "xhost" program as described above. However, this isn't recommended as it tends to leave the X-server too open. I have only used it in the above example because it was the easiest way to demonstrate the networking attributes of X. A far better way is with the "xauth" program, which can use a number of ways to authenticate clients, the main one using "cookies". In this, for an X-client to connect to the X-server it needs to know a piece of data randomly generated, or accessed, by the server known as a cookie. It works somewhat like a session password. If this cookie is not transmitted by the X-client, the X-server will reject the connection. I won't go into the details of it here because starting X using xdm automatically sorts this out, and is the recommended way. Over the network, the X server, by default, listens on port 6000 for display :0, 6001 for display :1 and so on.
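Purely as an illustration of the cookie mechanism (xdm normally sets all of this up for you), the xauth program can show and copy cookies by hand; a sketch, where "otherhost" is a placeholder for another trusted machine reachable over ssh:

xauth list :0                                      # show the cookie(s) protecting display :0
xauth extract - :0 | ssh otherhost xauth merge -   # hand that cookie to your account on another machine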
Xdm in detail

The xdm program, and its gdm and kdm cousins, are small daemons that run in the background, usually started by the "init" program at boot up, and run all the time. They do things such as start the appropriate X-server(s) where necessary (xdm is not an X-server itself), run the graphical login screen programs, sort out the xauth cookies when people log in, start the appropriate programs for each desktop and so on. There is a comprehensive configuration file that controls what xdm does.

The first thing it usually does is to run the appropriate X-servers on the local machine. Usually this is just one, but it can be more. It then runs the graphical login screen, known as a "greeter", on the X-servers that it has been configured to do so. These are usually the X-servers that the xdm process starts. However, it can start it on other X-servers, such as workstations directly connected to the machine, or even not start it on any, including its own X-servers.

Xdm can be configured to accept special XDMCP requests through the network. These are special UDP packets that an X-server can transmit (usually on port 177) to request that a machine's xdm start a graphical login greeter process for it, permitting remote logins. On most distributions XDMCP can be enabled through a GUI from the desktop. On Red Hat Fedora Core this is done from the menu: Desktop -> System Settings -> Login Screen (root password required). Here, on the XDMCP tab, the check box titled "Enable XDMCP" should be enabled. All other settings should be unchanged. A similar configuration exercise would be needed for other distributions.

When an X server needs to "be connected" by another machine running xdm, the X-server needs to be started with the "-query" or "-broadcast" option. The "-query" option tells the X-server to send an XDMCP request to a specific machine, whereas "-broadcast" transmits one to all machines on the network and will use the first machine to reply. For example:

X -query 192.168.0.2
or:

X -broadcast
I use this in real life. When I am working on my MS-Windows machine and need to do things on my Linux box, I open a Cygwin session and enter:

X -fullscreen -broadcast
This presents me with my graphical login greeter for my GNU/Linux box.
Spoilt for choice

There are occasions when there is more than one machine running xdm on the network. For this scenario a setting can be tweaked so that xdm runs a "chooser" instead of the usual login greeter. In this case a list of machines is presented to the user. Upon the selection of one, the X server is restarted, connecting to the appropriate machine chosen and displaying that machine's login greeter.
Gdm's chooser program: selecting a machine restarts the X-server and displays the appropriate machine's greeter

This is useful for maintaining a large number of GNU/Linux or POSIX machines on a network. You only need one X-server on your desk (which can be MS-Windows and Cygwin), and you can run xdm on all the GNU/Linux machines you maintain. As xdm doesn't need to run its own X-server these can be headless, insofar as they need no video hardware or drivers. In short, you can have full graphical administration functionality on all the GNU/Linux machines that you maintain on the network from a single workstation, with a minimum of resources used on the GNU/Linux machines.
X through the internet

As mentioned before, the X protocol is not really suitable for the internet from a security point of view. Getting xdm or the authentication to work correctly from afar would be difficult. However, you can tunnel the protocol through ssh. It is worth noting that for this to work the "ssh" server sshd needs to be configured to permit X requests. This is done by setting the "X11Forwarding" option to "yes" in the /etc/ssh/sshd_config file.

In the following examples it is assumed you are trying to access a remote server called "centralbox.com", say the machine at your office, from a local user machine, perhaps from your home. To demonstrate running an X program using ssh, start an X-server using "startx" on the local user machine. Ensure a window manager is running, then in the xterm connect using ssh to the central machine using the command:

ssh -C -X myuser@centralbox.com
where "myuser" is your user name. When this connects you can enter X-client commands on the ssh server and the windows will open on the user's machine. As the Cygwin package also includes ssh, this can be done from an MS-Windows/Cygwin box as well as a Linux one.

It is worth taking some time out to explain how ssh does this. When an ssh client connects to the ssh server using the "-X" option, the ssh server on the central machine sets up a "virtual" X-server on a higher display number (usually starting at 10); it then creates its own authentication cookie for it. When an X-client (on the ssh server) opens up a window it connects to display 10, the pseudo ssh X-server validates the X-client
using its cookie, and encrypts and transmits the request to the ssh client on your user machine. This then authenticates the request on the real X-server using the real cookie and passes the X requests through to it for displaying. The "-C" argument is in fact optional: it compresses the data transferred over the network, decompressing it at the other end. This can improve performance, especially over a slow internet connection.

This means that all transmissions through the network are encrypted and compressed. Also, the ssh server is unaware of the real X-server's cookie, maintaining the X-server's security integrity but at the same time allowing the remote user's machine to use the GUI in an easy and transparent way.
A schematic diagram of X programs running through the network using SSH X forwarding

Should you wish to run your desktop through ssh rather than an individual program then you simply run "Xnest" through the tunnel. Beforehand you would have had to enable XDMCP, then from an xterm on the user's local machine run:

ssh -C -X myuser@centralbox.com
to connect to the central ssh server, forwarding X signals and compressing data. After authentication and connection enter:

Xnest :20 -query localhost
which will then start a "nested" X-server on display 20 of the central machine, displaying its output in a window on the local machine. The login greeter should then appear. If a different screen resolution is required the "-geometry" argument can be used to specify one:

Xnest :20 -query localhost -geometry 1280x1000
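Condensed, the remote-desktop-over-ssh recipe is (XDMCP must already be enabled on centralbox.com; display :20 and the geometry are arbitrary choices):

ssh -C -X myuser@centralbox.com                    # from an xterm in the local X session
# ...then, in that ssh session on the central machine:
Xnest :20 -query localhost -geometry 1280x1000     # a nested X-server showing the remote greeter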
To follow what is happening here may seem a bit complicated, so watch out for this next bit. First of all, I need to explain the Xnest program in detail. This is both an X-client and X-server. The best way to imagine it is as an X-client that creates a window on the session it was started from and, at the same time, an X-server that instead of displaying and accepting its goods on its own physical screen, keyboard and mouse, uses the window it created in the session from which it started. Xnest and ssh allow you to access your desktop remotely and securely.
Now for the meat of it. On the ssh server you are running an X-client, Xnest, which is connecting to the X-server on the ssh client through the network tunnel ssh provides. However, Xnest is also an X-server, which is running on the ssh server providing an environment where you can run your desktop remotely, and securely, through the network but as though you were on the local machine. Confused? Well, I did warn you. I hope the following diagram helps, but don't worry if you still don't get it, this is not essential knowledge.
A schematic diagram of an X desktop running through the network using SSH X forwarding and Xnest

If two or more people are likely to connect at a time in this way, then each needs to use a different display number. A way to do this is to simply add a number to the display number the ssh server gives you:

NDISPLAY=`echo $DISPLAY | cut -d: -f2 | cut -d. -f1`  # Get the main display no.
NDISPLAY=`expr $NDISPLAY + 30`                        # Add 30 to it
Xnest :$NDISPLAY -query localhost
If you use this I recommend placing it in a shellscript. The above should work if no more than thirty people are connected to the machine at any one time. You can also run X through a VPN such as IPsec or PPTP and connect using XDMCP as described above.
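Following the suggestion above, the snippet could live in a small shellscript on the central machine; a sketch (the file name remote-desktop.sh is just an example):

#!/bin/sh
# remote-desktop.sh - start a nested remote desktop on a per-user display number
NDISPLAY=`echo $DISPLAY | cut -d: -f2 | cut -d. -f1`   # display number given to us by ssh
NDISPLAY=`expr $NDISPLAY + 30`                         # shift it clear of the ssh-assigned range
exec Xnest :$NDISPLAY -query localhost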
Dynamic duo: X and VNC

Using the X protocol, even through ssh, can have a performance hit. The X protocol will transmit every line, area and character whether required or not. There's no cache-comparison or transmit-changes-only functionality. This means that if an area of the screen is redrawn three times between user inputs the X protocol will transmit that area three times, whereas it only really needs to transmit the last of the three. This can make running X over a slow network like a dial-up painful. VNC's RFB protocol helps to take care of this problem, and X interfaces have been written for it.

VNC is a cross-platform facility to access desktops remotely using a more conventional client/server mechanism. The VNC server is located on the machine where you would have the GUI and listens for connections from a VNC client. When a client connects, VNC transfers the entire desktop to the client where a user can work on it. The program used for VNC in a remote X-session environment is "Xvnc". This is both an X-server and a VNC server. Xvnc, when started, listens on a VNC TCP port number (5900 + display number) for any VNC clients to connect to it. It also listens on the display socket and TCP port like any other X-server, but instead of displaying the requests on a monitor or screen it caches them in memory to transmit to any VNC client that connects to it.

VNC's X-server—Xvnc—is included with most GNU/Linux distributions. If it isn't installed, it can normally be added easily through the distribution's package manager. The VNC client, on GNU/Linux, is called
"vncviewer". There's also a native MS-Windows version of this, so it's possible to log into GNU/Linux desktops from a remote Windows machine without installing the considerably large Cygwin and X Window packages. Should you need a VNC client for MS-Windows, one can be downloaded from www.realvnc.com (http://www.realvnc.com). And if the machine you are connecting from doesn't have a VNC client installed all is not lost, as there's a Java one that comes with the server. VNC is a network-efficient means of running the desktop remotely.

There are several ways the VNC server can be set up on a GNU/Linux box; I'll deal with two of them here. Both require the VNC server to be fired up manually. The first is using a wrapper called "vncserver". To prepare for this you need to set up a VNC password. This is achieved by logging into the GNU/Linux server and entering the command "vncpasswd" at the $ prompt. You will then be prompted to enter a password, which need not be the same as your login password (in fact it's better if it's not) but needs to be secure, i.e. a mixture of upper and lower case letters, numbers and symbols. The way to connect remotely this way is to log in using ssh, but without the "-X" option:

ssh myuser@centralbox.com
then when connected, enter the command on the central machine:

vncserver :1
This will start an Xvnc process and fire off a few X-clients running inside it (such as a window manager and so on). It will accept X-clients on display ":1" and VNC client connections on TCP port "5901". Should ":2" be used instead, it will accept X-clients on display ":2" and connections on port number "5902", and so on. If you leave the display number off the command, then an unused one is assigned and reported as the output of the command. You can control the size of the screen by using the "-geometry" argument. This works the same way as described for the "Xnest" program.

After the server has fired up, from the user's machine start up the VNC client. When asked for the VNC server enter "centralbox.com:1". You should then be prompted for the VNC password; upon successful entry you will be presented with a desktop. If you don't have a VNC client installed then you can use the Java one. To do this point a browser at "http://centralbox.com:5801" (5802 if display :2 was used and so on) and a Java client will be fired up. The precise X-client programs started here will depend on the shellscript ".vnc/xstartup" in the specific user's home directory on the central server. This can be tuned accordingly. When finished, in the original ssh screen enter:

vncserver -kill :1
which will stop the server.
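End to end, the first VNC method therefore looks like this (centralbox.com and display :1 are the examples used above; the geometry value is just an example):

vncpasswd                         # one-off, on the central machine: choose a VNC password
ssh myuser@centralbox.com         # each session: plain ssh, no -X needed
vncserver :1 -geometry 1024x768   # start Xvnc plus the clients listed in .vnc/xstartup
# ...connect a VNC client to centralbox.com:1, or a browser to http://centralbox.com:5801
vncserver -kill :1                # shut the server down when finished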
A schematic diagram of an X desktop running through the network using VNC

The second method is to use the XDMCP features with VNC. For this to work XDMCP needs to have been enabled on the central server as described above. To connect, access the central server from the user's local machine with ssh in the same way as above:

ssh myuser@centralbox.com
Then, when connected, enter the command:

Xvnc :1 -query localhost -securitytypes none -once
This will start Xvnc the same way as above except it will simply display the greeter in the X session. The "-geometry" argument works the same way here too, as do alternative display numbers. Connecting to this using the VNC (or Java) clients is no different, with the exception that no password is asked for.
Dynamic trio: X, VNC and SSH

Although the above works, there are security issues. Ports 590x and 580x need to be opened up if there is a firewall. Everything with the exception of the VNC password is transmitted over the internet unencrypted. A far better method is to tunnel the connections through our old friend ssh. To connect using this method, from the user's local machine:

ssh -C -L5901:localhost:5901 \
    -L5801:localhost:5801 \
    myuser@centralbox.com
Please note the "-X" argument is not used here. Then, when connected, on the central machine enter:

vncserver :1 -localhost
or:

Xvnc :1 -query localhost -securitytypes none -once -localhost
depending on how you want the X server to behave. Back at the local machine, the VNC server to connect to is "localhost:1" and the URL to point the browser at for the Java client is "http://localhost:5801". After finishing, don't forget to enter the command vncserver -kill :1 if the "vncserver" command was used.
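Putting the tunnelled variant together (again using display :1 and the example host centralbox.com):

# On the user's local machine: open the tunnels (note: no -X)
ssh -C -L5901:localhost:5901 -L5801:localhost:5801 myuser@centralbox.com

# On the central machine, inside that ssh session, either:
vncserver :1 -localhost
# or:
Xvnc :1 -query localhost -securitytypes none -once -localhost

# Back on the local machine, point the VNC client at localhost:1
# (or a browser at http://localhost:5801); afterwards, if vncserver was used:
vncserver -kill :1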
The above works for display :2 if the 5802 and 5902 ports are used instead, as before. VNC through ssh provides a secure, efficient, fast and versatile means of accessing the desktop remotely.

I will now go through the mechanics of this, but again be warned, it's complicated too! The ssh command creates two tunnels, one for TCP port 5901 and the other for 5801. The VNC clients (and possibly browser clients) then connect through these, encrypting the data, to form the connection. The attributes of the tunnel mean that the user clients need only connect to ports listening on the user's local machine that are created by the ssh client for this purpose. The data is then encrypted and transferred to the central server, where it is decrypted and regenerated by the ssh server process. A connection is established from that to the VNC server, which is now local to it. What? That, complicated? Naahh...
A schematic diagram of an X desktop running through the network using VNC and SSH

This has the best of a number of worlds. The data transfer is encrypted and compressed by ssh and the GUI windows are properly cached by VNC, creating a reasonably fast and very secure open platform remote desktop facility. However, should more than one user need to connect this way to the central server simultaneously, then each user should be given a unique display number which they would always use. For completeness I need to point out that there are other methods which can be used to fire up Xvnc, including starting it as a service. These are documented as part of the VNC distribution and I won't go into them here.
Conclusion

A popular misconception exists that X is only available for GNU/Linux and some UNIXes. The fact is that X was designed to be platform independent and is available for many other systems as well. This article mentions many times the free X suite that runs under Microsoft's Windows with the use of Cygwin. On top of that, this article demonstrates how an MS-Windows machine, as well as GNU/Linux ones, can be used as a terminal for a central GNU/Linux system both on a LAN and through the internet.

One project that makes good use of the versatility of X is the Linux Terminal Server Project (http://www.ltsp.org). This uses X features to create scenarios where many users are connected to a central GNU/Linux machine using thin clients, producing massive running and installation cost savings.

This article has only scratched the surface of the X infrastructure. Hopefully though, it has demonstrated X's concepts, proven its versatility and given a taste of its power. While the Microsoft Windows GUI is functional and intuitive, I now find it no more so than GNOME or KDE on X running on GNU/Linux. And when I need to do some remote accessing or use other machines as terminals, I find the Microsoft Windows infrastructure to be cumbersome, restrictive and clunky compared to the slickness of X. In short, if given a choice between Microsoft's own windowing environment and that of the free software community, GNU/Linux and POSIX, then you can give me an "X" every time.
Biography

Edward Macnaghten: Edward Macnaghten has been a professional programmer, analyst and consultant for in excess of 20 years. His experiences include manufacturing commercially based software for a number of industries in a variety of different technical environments in Europe, Asia and the USA. He is currently running an IT consultancy specialising in free software solutions based in Cambridge UK. He also maintains his own web site (http://eddy.edlsystems.com).
Copyright information

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is available at http://www.gnu.org/copyleft/fdl.html.

Source URL: http://www.freesoftwaremagazine.com/articles/what_is_x
Browsers for Mac OS X

Comparing FOSS browsers for Mac OS X

By Martin Brown

When Apple migrated the Mac operating system platform to Mac OS X, one of the key components was an underpinning based on the FreeBSD operating system. The use of an open source operating system as the core has in turn led to an increase in the use and availability of free and open source software (FOSS). It is now much easier to develop software for the OS X platform (development software is included, instead of being an expensive addition) and this makes it both easier for people to get involved and more likely that they will take part in open source community projects.

Despite the ease, there are still some areas of software development where the complexities of the application are too great. Web browsers are a classic example of this—although the principles of web browsing are quite simple, making all of the components of a typical web page (JavaScript, different image formats, plug-ins, CSS etc.) work effectively is quite difficult. Hence the need for community-based projects where many people can pool their experience.

As a knock-on effect, web browsing in the free software space is now based around two main camps: those based on the Mozilla codebase (Camino, Firefox, Mozilla) and those based on the KDE-sourced KHTML rendering engine (Safari and OmniWeb). Projects based on the former are completely free software, with both the rendering engine and interface being a free software project. The latter are not strictly free software, but the core rendering engine on which they rely is. It is only the interface which is not free software.

For the review, I'm going to be taking a closer look at the features and functionality of the three free software browsers—Mozilla, Firefox and Camino—and what differentiates the functionality of these three browsers from each other. I'll primarily be comparing speed and functionality, but I'll also use the opportunity to cover some of the issues that affect the browsers on individual sites. I'll then take a very brief look at the quasi-free software browsers, Safari and OmniWeb.
Mozilla Suite The Mozilla project was borne out of the Netscape Navigator project; Mozilla being a combination of the term Mosaic killer (which is what they hoped Netscape would be) and a reference to Godzilla. Mozilla was also the name of the Netscape mascot—appropriately enough usually represented by a humanoid-lizard, like Godzilla. The Mozilla Organization was created in 1998 to create the next generation of Netscape and was registered as a not-for-profit organization (the Mozilla Foundation) in 2003. Today, Mozilla is the name used to refer to the range of browsers produced by the Mozilla group, notably the main Mozilla Suite and the forks of Firefox and Camino. The Mozilla Suite is the name given to the software package that incorporates the functionality of a browser, email, newsgroup reader and HTML editor. The core of the Mozilla suite is the Gecko rendering engine which is also shared by Firefox and Camino. The Gecko engine is based on the HTML/XHTML, DHTML and other standards that make up the web environment and is specifically designed to render these in a consistent fashion across platforms. This means that you browse a page on Mac OS X and Windows within a Mozilla browser and get a consistent view. It also means that, in theory, different browsers that use the engine will have a consistent view. This isn’t quite the case for reasons that will become clearer as I look at the other browsers from the Mozilla stable. For example, let’s look at the homepage for MCslp.com (http://www.mcslp.com), my website, which is based on the WordPress blogging engine with a theme which is XHTML compliant. You can see how this looks in Figure 1.
MCslp.com in Mozilla

As the larger “internet suite”, the Mozilla browser is more geared toward users who want a consistent interface and environment for all of their browsing and internet needs. The interface, as shown in the figure, is based around the original Netscape browser. Shown is the Classic theme, but a standard installation also includes the Modern theme, which is a little more aesthetically pleasing. This retains the “toolbar” style of the original—including “handles” for moving the different toolbars around and changing their order and layout. You can also click on these handles to “collapse” a toolbar so that it takes up less space. What you don’t get in the standard format is a convenient search bar for Google or other sources; instead you get a button that takes you to a predetermined search page, and the location field doubles up as the source for search criteria. What you cannot do is customize the toolbar: you can enable and disable buttons, but not actually modify or add to the suite of buttons. Like Firefox, Mozilla supports both themes—which change the appearance of buttons and other aspects of the interface—and extensions, which provide additional functionality. For example, there are extensions that provide ad filtering, download management or additional information during browsing; the LiveHTTPHeaders extension can display the header information sent during an HTTP transaction—invaluable during development. Unlike Firefox there is no convenient extensions manager, and in fact Mozilla almost hides the existence of extension support from the application. In terms of simple internet browsing Mozilla is more than capable. It does sometimes feel slower than the alternatives, and there is often a discernible pause just before the page is displayed that can become a little frustrating. For simple text sites it isn’t a problem, but those with heavy graphics or complex tabular pages do demonstrate the problem. In terms of the quality of the display and parsing, I experience very few problems with Mozilla when laying out different pages and sites. The only times you experience any specific difficulty are either when the site is not standards compliant, in which case some interesting “default” choices are made about how to lay out components, or where the site has deliberately used Internet Explorer capabilities to provide functionality. For example, some menus and items that rely on JavaScript or ActiveX simply don’t work. I’ve also experienced some occasional issues when logging into sites through a simple password system. For example, my blog at Computerworld never loads properly after login. Again, I suspect this is more of an issue with the site code, rather than Mozilla, but it can be a frustrating experience.

Like Firefox, Mozilla supports both themes—which change the appearance of buttons and other aspects of the interface—and extensions, which provide additional functionality

Being a suite of internet applications, rather than just a bare browser, Mozilla has the benefit of including most of the tools you will need. I also love the way that you can read and reply to email directly from within Mozilla. This comes in particularly handy if you are frequently using sites that use email links; there is none of the delay sometimes experienced while the OS works out what to do with the link. The embedded IRC client is also very handy. However, once you start using all of the components at the same time, you can sometimes experience some performance problems.
Also, as a single monolithic application the memory usage can be prohibitive. Although memory management in OS X is efficient enough that you don’t often experience problems, the memory issue is almost certainly related to some of the performance issues I experienced,
especially while monitoring the load and memory usage of Mozilla while browsing various pages with all of the options enabled. I liked Mozilla, but I have to admit to preferring other applications for my email and chat requirements, and this is where Mozilla, as the “do all” application, starts to look like overkill if all you want to do is use the browser. A good example here of the limitations of the built-in system is the address book. Within Mozilla you have your own special address book section, but this doesn’t integrate with the OS X address book, and that means you can’t share instant messaging details with iChat and there is no provision for SMS messaging, an option built into the standard OS X Address Book application. Where Mozilla is most likely to be used is within an organization that wants to standardize on a single platform for browsing and email access, and is probably supporting an LDAP server for address book functionality. In fact, I deployed Netscape Communicator, which provides similar functionality, in this way several years ago.

Mozilla Suite
Name: Mozilla Suite
Maintainer(s): Mozilla
License: Mozilla Public License (OSI Approved)
Platforms: Linux, Unix, Mac OS X, Windows
MARKS (out of 10): Installation 10, Vitality 10, Stability 9, Usability 9, Features 10
Firefox If you removed the web browsing component from Mozilla and turned it into a separate application, you would essentially get the Firefox application. It is a bare-knuckle, browser-only application that combines the Gecko display engine on top of the Mozilla application base to produce a light-weight internet browser. Because it’s built on the Mozilla core you can add extensions (just as with Mozilla, although with a convenient extensions manager) and adjust the theme and display of the browser. Because Firefox is just a browser it is a direct competitor to the standard Safari browser supplied with OS X, and as a cross-platform solution it has obviously gained some popularity as a more secure alternative to Internet Explorer on Windows. Since Microsoft has dropped support for Internet Explorer under OS X, Firefox is often the main choice if the user is unhappy with Safari.

If you removed the web browsing component from Mozilla and turned it into a separate application, you would essentially get the Firefox application

Fortunately, Firefox has a lot of additional benefits that make it a popular alternative. The support for extensions enables the use of ad blocking and enhanced security functionality that Safari does not support. Many also find Firefox faster than Safari, although I have to admit to finding so little difference between the two that this wouldn’t be my reason for choosing Firefox. The ad blocking extensions though are invaluable, and a combination of AdBlock and the filter updater (which automatically reloads new filter sets for you) can eliminate nearly all ads to the point where only interstitials remain. As with Mozilla, the rendering of Firefox pages is generally excellent, all because of the Gecko engine. The same occasional, site-specific problems occur. Unlike Mozilla, Firefox is more flexible in its interface. Toolbars are customizable, and, as can be seen in Figure 2, you get a convenient search field that can be used to search a variety of locations simply by selecting an alternate engine from the embedded popup. Tabs are also slightly better supported, with switching between tabs noticeably faster (for me at least), and with convenient hot keys (Command-1, Command-2, etc.) for flipping between multiple tabs in a window.
MCslp.com in Firefox

Firefox is undoubtedly a very fast browser and is very quick and easy to use. With the extensions, particularly ad blocking, Firefox becomes an even faster alternative to the full Mozilla Suite product. You may not get the embedded mail/news reader, address book and IRC client, but you get the flexibility to use other clients that may contain richer features (I find Apple’s Mail best for email), and the reduced memory footprint is certainly an improvement if you have 512MB of RAM or less. If free software mail/news tools are important, then Thunderbird, also from Mozilla, is the project’s standalone email application.

Firefox
Name: Firefox
Maintainer(s): Mozilla
License: Mozilla Public License (OSI Approved)
Platforms: Linux, Unix, Mac OS X, Windows
MARKS (out of 10): Installation 10, Vitality 10, Stability 9, Usability 10, Features 10
Camino The Camino project is based on the same rendering engine as both Mozilla and Firefox, but unlike these two packages, the Camino interface is designed to be more Mac OS X like, rather than the architecture neutral interface offered by its brethren. The effect provides all of the power and speed of the Firefox browser—Camino is just a browser—but with the OS X look and feel. The architecture neutral format of both Mozilla and Firefox is obviously a benefit in a cross platform environment, but some prefer the look and feel of OS X. Most of the basic look and feel remains the same; key differences when browsing are the use of the OS X style popups, fields and buttons in the display, as you can see here in Figure 3. The most significant differences are in the dialog boxes. For example, the preference panel looks and works almost identically to the main OS X System Preferences application.
MCslp.com in Camino

The other main aim of the Camino project is to produce a very fast browser. Based on the same Gecko engine, the Camino project includes some improvements and uses the functionality of the OS X display engine to help improve the overall speed of displaying web pages. The result is quick—pages display very quickly, even the complex ones. There are some problems, well documented on the site, that can cause some interesting effects, but as a product still considered young and in pre-release form (the latest at time of writing was 0.8.4), Camino is still impressive. Although Camino includes a lot of integration with OS X—for example the bookmarks integrate with Rendezvous and the standard Address Book to include sites described in these repositories—Camino is not a typical Mozilla project. The extensions and themes supported by both Mozilla and Firefox are not supported. Although the themes limitation makes sense (since Camino is essentially permanently in the “OS X” theme), the lack of extension support means that we cannot block ads. Ironically, this often has the effect of severely slowing down the browsing experience. Most of the extensions are not supported because they rely on the XUL interface for configuration, which is not included in the Camino project. Although Camino is very quick and is the most OS X-friendly of the Mozilla applications, the occasional display problems can be a pain. Also, extension support, and the ad filtering it would bring, is something Camino could certainly take advantage of. If you browse an intranet or other controlled sites where adverts are not a problem then Camino is probably the best of the bunch, but if you need ad blocking you will need to consider Firefox.

Camino
Name: Camino
Maintainer(s): Mozilla
License: Mozilla Public License (OSI Approved)
Platforms: Linux, Unix, Mac OS X, Windows
MARKS (out of 10): Installation 10, Vitality 10, Stability 9, Usability 10, Features 8
Safari and OmniWeb When Apple launched Safari the OS X community was rightly amazed. It included a number of features that in today’s browsers we take for granted. Easy access to Google, for example, was functionality we didn’t have in Internet Explorer, and Firefox was not available at the time. Snapback functionality (which takes you back to a Google, or indeed any other, page after clicking through other choices) makes it easy to browse sites without losing your place. Finally, the integration with the operating system is absolute—in the latest versions,
you can even synchronize your bookmarks between Macs using the Apple .Mac service. Most interesting of all, Safari uses the KHTML rendering engine. KHTML is part of the Konqueror browser and the KDE project, a desktop environment commonly used under Linux. The KHTML engine is incorporated into what Apple calls the WebKit core, which provides HTML rendering for a variety of systems within OS X, including Safari, Sherlock and the built-in help system. The KHTML engine is a mature product and the rendering of HTML pages is based on the same open standards for HTML that drive the Mozilla project. There are some circumstances, just as with Mozilla, where you get odd rendering issues, but they are few and far between and again very site specific. The use of free software is perhaps not that surprising—Mac OS X is itself built on top of the free software Darwin project, which is in turn based on the FreeBSD operating system. Unfortunately, although KHTML is a free software project, Safari is not and therefore cannot strictly be included in this roundup review. In terms of functionality, Safari is probably closest to the Camino project: it supports browsing tabs and integrates well with the rest of OS X, but it does not do ad blocking as standard. There are, however, some products that support this (PithHelmet for example).
MCslp.com in Safari

Also based on the KHTML engine is the OmniWeb browser from the Omni Group. OmniWeb is a commercial product and is one of the richest of the browsers available for the platform. It supports a number of features designed to make it easier for people who do a lot of browsing to organize their environment. For example, as well as supporting text tabs, OmniWeb also supports icon tabs, which show a live thumbnail of the tabbed page. You can also create multiple workspaces—these are groups of windows (including all the tabs in each window) and the information in a workspace is retained, even across application restarts. This means that you can create individual workspaces for different projects or tasks; I have separate workspaces for shopping, business work, recreation and a “transient” workspace that I use to browse the pages that I’ll probably only ever view once.
MCslp sites in OmniWeb
OmniWeb also includes a built-in ad filter and, as an added security measure, you can configure security settings on a site-by-site basis. For example, you can enable JavaScript on your financial websites, while having a default setting where JavaScript is disabled. All of this additional security and functionality is built right into the browser, making it more convenient than Firefox or Mozilla, which support this functionality only through additional extensions. As a commercial product though, these features do, quite literally, come with a price. Still, OmniWeb remains my favourite browser.
Conclusion If what you want is a plain and simple web browser that is free software based then really your choice is limited to Firefox or Camino. Mozilla, while a very capable browser, is technically a complete internet suite and you will only get the best out of Mozilla if you are also happy to use the included email, news and IRC clients. Camino offers the best all round environment, from an OS X perspective, simply because it offers the best integration with the system and retains the look and feel of the operating system. However, Camino does not support the entire range of Mozilla extensions and this means that if you want to avoid adverts or improve the security of your browser you may want to consider the Firefox project instead. Firefox is fast, configurable and supports all of the extensions and themes you would want to use. It lacks integration with the operating system, storing authentication information in its own database instead of the keychain for example, and there is no syncing of bookmarks through .Mac or iSync. However, it remains the strongest choice in the free software market.
Biography Martin Brown: Martin “MC” Brown is a member of the documentation team at MySQL and a freelance writer. He has worked with Microsoft as a Subject Matter Expert (SME), is a featured blogger for ComputerWorld, a founding member of AnswerSquad.com and Technical Director of Foodware.net, and has written books on topics as diverse as Microsoft Certification, iMacs, and free software programming.
Copyright information This article is made available under the "Attribution-NonCommercial-NoDerivs" Creative Commons License 2.5 available from http://creativecommons.org/licenses/by-nc-nd/2.5/. Source URL: http://www.freesoftwaremagazine.com/articles/mac_osx_browsers
Published on Free Software Magazine (http://www.freesoftwaremagazine.com)
GRUB tips and tricks Spicing up a great utility for more IT fun By Jeremy Turner The GRand Unified Boot loader, or GRUB, has all but replaced the default boot loader on many GNU/Linux distributions. It includes some conveniences over LILO, the LInux LOader. One advantage is not having to remember to run /sbin/lilo every time you make a configuration change. It also can function as a boot loader for removable media such as floppies, CD-R/W and USB flash memory keys. It is short-sighted to view GRUB only as a boot loader to be installed on a hard drive of a GNU/Linux system. Combined with a few other utilities, GRUB can be a powerful and good-looking tool for your home, organization or workplace.
Introduction First, what exactly is GRUB? GRUB is a boot loader, which means it passes control of the boot process from the Power-On Self Test (POST) to the kernel of your GNU/Linux distribution. GRUB works in a modular, layered fashion so that any unneeded modules are not loaded. Not only does this reduce execution time, but it saves valuable resources when running from removable media. GRUB optionally loads its configuration file at run/boot time, so you don’t have to type in commands manually each time. However, the command-line option is still available in case there is an error in your configuration file. So why use GRUB when there are other options out there? The beauty of free software is that you have choices. Alternatives to GRUB include LILO, syslinux and isolinux. The benefit of GRUB is that it will work in many different types of boot devices, but you only need to learn one set of menu commands. In addition, GRUB can work on other forms of bootable storage, such as CD-R/W, USB flash memory keys, floppy disks, and even via a TFTP server with PXE ROM booting.
Installing GRUB on a USB flash memory key
Figure 1: you can run GRUB on a USB flash memory key!

I got the inspiration for this article after trying Damn Small Linux (DSL), which is a fully graphical Linux distribution weighing in at around 50 MB. After seeing an advertisement on their website for a USB flash memory drive with DSL installed, I figured I could probably learn how to set DSL up myself on my Lexar 256 MB JumpDrive. The DSL documentation pointed towards installing via syslinux and reconfiguring the cylinder/head/sector information of my JumpDrive, but I didn’t have any luck trying to get my USB flash memory key to boot successfully. Finally, I tried using GRUB and I was up and running with DSL in no time! First, I recommend creating a directory structure to organize your boot-related files, and to keep them separate from any other files you’d like to keep on the USB flash memory key. You could create two partitions, but I couldn’t get both partitions to load correctly when I inserted the key back into Windows. On my USB flash memory key, I created a root folder named boot to hold all the data necessary for USB booting (see figure 2). Under the boot folder, I created a directory named grub for GRUB-related files, a directory named images for initial ramdisk (initrd), floppy, or disk images, and finally kernels to hold all the kernels. You may want to organize
your boot folder differently, but make sure that you change the corresponding paths and directory names in GRUB’s menu.lst file. The menu.lst file that I use can be found in Sidebar 1.
Figure 2: this is the file structure on my USB Memory Key

Sidebar 1: Contents of menu.lst

default=0
timeout=10
root=(hd0,0)
splashimage=/boot/grub/debsplash.xpm.gz

title DSL 1.2 (2.4.26) 1024x768 (save to RAM)
kernel /boot/kernels/dsl-linux24 ramdisk_size=100000 init=/etc/init lang=us apm=power-off vga=791 toram nomce noapic quiet knoppix_dir=images knoppix_name=dsl
initrd=/boot/images/dsl-minirt24.gz

title Debian Sarge Installer
kernel /boot/kernels/di-vmlinuz initrd=/boot/images/di-initrd.gz ramdisk_size=10240 root=/dev/rd/0 devfs=mount,dall rw
initrd /boot/images/di-initrd.gz

title HP nx5000 F0.d BIOS Upgrade
kernel /boot/kernels/memdisk
initrd /boot/images/hpnx5000f0d.img

title Memtest86+ (1.60)
kernel /boot/kernels/memdisk
initrd /boot/images/memtestp.bin

Next, you’ll need to copy some of GRUB’s stage files, including stage1, stage2, and fat_stage1_5, and put them into the boot/grub directory on the USB flash memory key. These will allow GRUB to boot into GNU/Linux and other operating systems. After the files are copied over, it’s time to install GRUB to the Master Boot Record (MBR) of the USB flash memory key.
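Before moving on to the MBR step, the directory creation and stage-file copying just described might look like the following rough sketch (the mount point /media/usb is an assumption, and the location of the stage files varies by distribution; they are often under /lib/grub/i386-pc/ or /usr/lib/grub/i386-pc/, or already present in /boot/grub on a system that uses GRUB):

# Create the directory layout on the mounted USB flash memory key
mkdir -p /media/usb/boot/grub /media/usb/boot/images /media/usb/boot/kernels
# Copy the GRUB stage files onto the key (adjust the source path for your system)
cp /usr/lib/grub/i386-pc/stage1 /usr/lib/grub/i386-pc/stage2 \
   /usr/lib/grub/i386-pc/fat_stage1_5 /media/usb/boot/grub/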
Luckily, it’s the same process as installing to a hard drive:

# grub
grub> find /boot/grub/stage1
 (hd0,1)
 (hd2,0)

On my system, hd0 is /dev/hda and hd2 happens to be /dev/sda. Just to make sure, we can use bash-like tab completion to look through a filesystem:

grub> find (hd2,0)/boot/im<TAB>
grub> find (hd2,0)/boot/images/

Since the /boot directory on /dev/hda doesn’t have an images directory, I know that (hd2) is the hard drive that I want to install GRUB on:

grub> root (hd2,0)
 Filesystem is type fat, partition type 0xb
grub> setup (hd2)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/fat_stage1_5" exists... yes
 Running "embed /boot/grub/fat_stage1_5 (hd2)"... 15 sectors are embedded.
succeeded
 Running "install /boot/grub/stage1 (hd2) (hd2)1+15 p (hd2,0)/boot/grub/stage2 /boot/grub/menu.lst"... succeeded
Done.
grub> quit
Great! Now we have GRUB in the USB flash memory key’s MBR. Now, we have to put some files on the memory key to boot into and create a menu.lst file!
GRUB with disk images One cool trick is to use GRUB and memdisk to boot floppy disk images. Using the memdisk kernel from the syslinux package (http://syslinux.zytor.com/memdisk.php), you can load disk images and execute them in a non-emulated environment. How might this be useful? Let’s say you have an organization with several different models of desktops and laptops. You could create a CD-R/W or a bootable USB flash memory key with all of the different BIOS upgrades or hardware tests. Rather than carry around a book of floppies, you can simply copy the floppy image and boot from the CD-R/W or USB flash memory key. Using this method, you can also add Memtest86+’s floppy image to your bootable CD-R/W or USB flash memory key and have it at your disposal. Here is an example of a menu.lst snippet using memdisk to boot into Memtest86+:

title MemTest86+ Ver 1.60
kernel /boot/kernels/memdisk
initrd /boot/images/memtestp.bin
There is nothing special about the filenames. The only important thing is that the path and name referenced matches with the actual files. Check out Sidebar 1 for more examples of disk images.
GRUB with DSL
Figure 3: DSL is a 50 MB fully-graphical live GNU/Linux distribution

So how can you boot a full GNU/Linux desktop off a USB flash memory key with GRUB? First, download the DSL ISO9660 image, and either burn it to a CD, or mount it via loopback:

# mkdir dsl-test
# mount -t iso9660 -o loop dsl-image.iso dsl-test
Next, copy the KNOPPIX file, kernel, and initial ramdisk:

# cp dsl-test/KNOPPIX/KNOPPIX /media/usb/boot/images/dsl
# cp dsl-test/boot/isolinux/linux24 /media/usb/boot/kernels/dsl-linux24
# cp dsl-test/boot/isolinux/minirt24.gz /media/usb/boot/images/dsl-minirt24.gz
# sync
# umount dsl-test && rmdir dsl-test
This assumes that your USB flash memory key is mounted at /media/usb. Next, edit the /media/usb/boot/grub/menu.lst file and make sure it looks like the entry in Sidebar 1. You might have noticed that the root line at the top of the menu.lst file says (hd0,0) even though we used (hd2,0) earlier. When you boot from the USB flash memory key, the key itself becomes hd0, even before the primary master hard drive. Once you have the menu.lst file edited, go ahead and reboot. Make sure that your BIOS is set to USB-HDD, USB-ZIP, or USB-FLOPPY. You might need to experiment to see which one works. Once you get to the menu, select the option for DSL. If you get a GRUB error and are unable to successfully boot into a kernel, press the ‘c’ key to open a GRUB prompt. You can try commands like find to help locate files to boot from.
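If you do end up at that prompt, a manual boot follows the same pattern as a menu.lst entry. As a rough sketch using the paths from Sidebar 1 (with a trimmed-down set of the kernel options), it might look like this:

grub> root (hd0,0)
grub> kernel /boot/kernels/dsl-linux24 ramdisk_size=100000 lang=us toram knoppix_dir=images knoppix_name=dsl
grub> initrd /boot/images/dsl-minirt24.gz
grub> boot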
GRUB splash images Another cool function with GRUB is putting a splash image on the boot menu screen. By default, GRUB will make the menu screen a plain black-and-white menu. There are menu options to change the black and white colors, but why stop there? Grab your favorite picture, or head to one of the URLs listed below which have splash images created for you. If you are creating your own, it will need to be in XPM format, with a maximum color palette of 14 colors, and 640x480 resolution. The GIMP can help transform your graphic to these specifications. As an alternative, you can use the ImageMagick suite of programs. The application convert can help with this conversion process. You can run it as follows:

$ convert -resize 640x480 -colors 14 mycoolpicture.jpg mybootsplash.xpm
$ gzip mybootsplash.xpm
In this case, the file mycoolpicture.jpg will be resized to 640x480, reduced to 14 colors, and saved in the XPM graphical format. The second step compresses the XPM file using the gzip compression method. GRUB can display the gzipped-xpm splash images well. If you don’t want to create your own splash image, check out some of the following web sites which have them available for download:
• GNU GRUB Public Splashimage Archive (http://ruslug.rutgers.edu/~mcgrof/grub-images/images)
• GRUB Splash (http://vision.featia.net/linux/grubsplash)

Once you have the XPM or gzipped-xpm file, you need to add a line at the top of GRUB’s menu.lst to instruct GRUB to load the specified boot splash image. The line should read as follows:

splashimage=/boot/grub/debsplash.xpm.gz
In my case, I’m using the Debian boot splash image as my background.
GRUB on CD-R/W media I’ve also mentioned several times that GRUB can be used for a bootable CD-R/W disk. When generating the ISO image, you will use some special settings of mkisofs. The command to build a bootable GRUB CD-R/W looks like:

$ mkdir -p iso/boot/grub
$ cp stage2_eltorito iso/boot/grub
$ mkisofs -R -b boot/grub/stage2_eltorito -no-emul-boot \
    -boot-load-size 4 -boot-info-table -o grub.iso iso
The key is to copy the stage2_eltorito file into the /boot/grub directory of the CD image tree, and run mkisofs with the options specified above. As mentioned earlier, you can also burn a menu.lst file along with kernels and disk images and put them all on the CD. In the menu.lst, you will need to use (cd) as the device, rather than (hd0). Splash images work well, too.
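For instance, a minimal menu.lst for the CD might look like the following sketch (the file layout mirrors the one used earlier in this article, and the splash image line is optional):

splashimage=(cd)/boot/grub/debsplash.xpm.gz

title Memtest86+ (1.60)
root (cd)
kernel /boot/kernels/memdisk
initrd /boot/images/memtestp.bin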
Conclusion Congratulations! Now you have a bootable USB flash memory key, with DSL and memtest86+, and even a nice boot splash image. You even have the knowledge to add extra images, like your favorite BIOS update disk image, and others. For more information, check out some of the links in the resources section below.
Resources
• Grub Homepage (http://www.gnu.org/software/grub)
• Grub Manual (http://www.gnu.org/software/grub/manual/grub.html)
• Memdisk (http://syslinux.zytor.com/memdisk.php)
• Syslinux (http://syslinux.zytor.com)
• Memtest86+ (http://www.memtest.org/#downiso)
• GNU GRUB Public Splashimage Archive (http://ruslug.rutgers.edu/~mcgrof/grub-images/images)
• GRUB Splash (http://vision.featia.net/linux/grubsplash)
Biography Jeremy Turner: Jeremy Turner enjoys freelance writing when given the opportunity. He often plays system administrator, hardware technician, programmer, web designer, and all-around nice guy. You can contact him by visiting his web site (http://linuxwebguy.com/).
Copyright information This article is made available under the "Attribution-NonCommercial-NoDerivs" Creative Commons License 2.5 available from http://creativecommons.org/licenses/by-nc-nd/2.5/. Source URL: http://www.freesoftwaremagazine.com/articles/grub_intro
Published on Free Software Magazine (http://www.freesoftwaremagazine.com)
Jump to Debian GNU/Linux! A guide to why the Debian distro is a good choice By Arturo Fernánde... There are hundreds of GNU/Linux distributions around, each with its strengths and weaknesses. One that stands out from the masses is Debian. It is the only major distribution not developed (or even backed) by commercial vendors, but by a group of volunteers around the world. Its main features are robustness, great software package management, a huge software collection consisting of more than 15,000 pre-compiled packages ready to install and run, and a transparent and always helpful support system based on mailing lists and a bug tracking system. But there is something else that makes Debian special: no other distribution has seen as many offspring distributions as Debian has. Among them you’ll find customized Linux distributions for regional markets like LinEx (a government-driven project in Spain), or the shooting star Ubuntu (developed by a commercial vendor). The reason for this popularity is obvious: the Debian distro is not only free, but boasts flexibility and transparency. If you use another Linux distribution and you are interested in changing, after reading this article you may well want to install Debian.
Introduction Debian GNU/Linux is a universal operating system. You can install and run it not only on Intel- and AMD-based 32- and 64-bit PC systems, but also on different computer architectures like Compaq’s and Digital’s Alpha systems, ARM, Motorola 680x0 processors (m68k), SGI’s big-endian MIPS systems and Digital’s DECstations, Sun’s SPARC and UltraSPARC systems, the PowerPC (using IBM and Motorola processors), IBM S/390 mainframe systems and Hewlett Packard’s PA-RISC machines (hppa). The Debian project doesn’t only produce a free (as in freedom, not only of charge) distribution, but is itself a strong supporter of free software. While many people spend hours discussing the differences between “free software” and “open source software”, Debian explicitly explains its position in two documents: “The Debian Free Software Guidelines” (DFSG), which defines what constitutes free software according to the Debian project; and the “Social Contract” with the free software community, which positions the project itself and defines its links to the outside world:
• Debian will remain 100% free
• The project will give back to the free software community
• The project will not hide problems
• It prioritizes its users and free software
• It describes how to deal with work that does not meet its free software standards

You can read this document at Debian Social Contract (http://www.debian.org/social_contract).

The reason for this popularity is obvious: The Debian distro is not only free, but boasts flexibility and transparency

Differing from other Linux distributions, you’ll find not just one Debian version at a time, but three different releases: “stable”, “testing” and “unstable”. These are three different distributions. Each one ships with its own software packages which may or may not stem from the same original source-code version. For example: Debian “stable” at the time of writing shipped with a gimp package tagged 2.2.6, while “testing” at the same time included version 2.2.7. The “unstable” release uses the most recent software versions. There is, however, only one official release: “stable”. Debian recommends it for production environments. The “testing” distribution contains packages that haven’t been accepted for the “stable” release yet, but after extensive testing will eventually move over. The “unstable” tree is the Debian developers’ working ground. At times this distribution can show problems like broken dependencies. Nevertheless, this distribution is usually completely functional, since quality assurance (QA) is a task the Debian project takes seriously.
Packages in “unstable”, however, simply have not been tested in depth. If you wish to run a system equipped with the latest software versions, “testing” is a good bet, but if you need a robust server you should choose “stable”. Each of these distributions has a codename which (apart from “unstable”) changes with every release. The codename of the most recent “stable” distribution is “Sarge”, also known as Debian GNU/Linux 3.1. It was released on June 6th, 2005. The current “testing” distribution is nicknamed “Etch”, and “unstable” always remains “Sid”. All codenames are taken from Pixar’s movie “Toy Story”, since a Debian project leader worked for this company. Obsolete releases are “Woody” (3.0), “Potato” (2.2), “Slink” (2.1) and “Hamm” (2.0). Each distribution groups its packages by their software licenses:
• Main: includes all software compatible with the DFSG, for example the GNOME web browser “epiphany”
• Contrib: here you’ll find free software that depends on non-free (according to Debian) software to run. “ant” (a Java development tool) is an example of this
• Non-Free: consists of software with a DFSG-incompatible license like “doom-wad-shareware”, a package that includes shareware game files for the 3D game DOOM
/etc/apt/sources.list configuration

The codename of the most recent “stable” distribution is “Sarge”, also known as Debian GNU/Linux 3.1
Advantages and disadvantages on the technical level Debian’s different approach becomes visible not only formally but also in technical details. It ships with a unique and robust package management system, centering around the APT tools and the “dpkg” utility, that Debian developers and users are especially proud of. It is the best way to install software quickly and easily on your machine—even as a newbie you’ll appreciate and love it. The package management system uses “dependencies” between packages to ensure correct software installation. Pre-compiled packages are distributed in a specific archive format with “.deb” file extension. While utilities like “dpkg” and “apt-get” are pure command-line tools, Debian also provides a set of package management front-ends to choose from like “dselect”, “aptitude” and “synaptic”. Most tools access software repositories via FTP or HTTP, provided the user wishes so. You type a single command, and Debian will download, install and configure the software for you. You don’t need to worry about where the software resides—as long as the configuration file /etc/apt/sources.list contains the repository’s proper URL. Each repository entry looks like the following line: deb http://www.debian.org/debian sarge main contrib non-free
This means that the tools will download and install software belonging to the “Sarge” distribution from the main Debian web site. To install the GNU Image Manipulation Program (GIMP), for example, the “root” user would type the following command:
# apt-get install gimp
The “apt-get” program will download the GIMP package and all packages it depends on (i.e. all software needed to run the GIMP). Debian does not hide errors and bugs. Users can report bugs using the bug tracking system, and the Debian developers can quickly access them by web or e-mail. Bugs are accessible to everybody because of the importance Debian places on Quality Assurance. “The Debian Policy” is a specification for the standards of quality used by Debian. On the other hand, Debian also has some disadvantages when compared with other distributions: its hardware auto-detection still lags behind the equivalent functionality in SuSE or Mandriva, and the installation process doesn’t make it easy for beginners.
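As a minimal sketch of the apt workflow described above (the package names are only illustrative), a typical session as “root” might look like this:

# Refresh the package lists from the repositories listed in /etc/apt/sources.list
apt-get update
# Look for packages matching a keyword
apt-cache search image manipulation
# Show a package's description and dependencies before installing it
apt-cache show gimp
# Download and install the package along with everything it depends on
apt-get install gimp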
Installing Debian GNU/Linux To install Debian you need at least a little patience. To give you a headstart, a short summary of the Debian “Sarge” installation process for a desktop machine follows. The official Debian installation page (http://www.debian.org/releases/stable/installmanual.en.html) provides more detailed information. First you have to choose an install method:
• Network: you can install Debian via the internet or using a partition that one of the other machines in your LAN provides by means of NFS (Network File System)
• CDs or DVDs: books, magazines and independent software vendors will provide you with Debian installation media. You can also download the appropriate images from the Debian web site and burn them yourself

Booting from the installation DVD is the easiest way for newbies. Check your BIOS to ensure your system will do so! Before you start, make sure you have as much information about your hardware as possible, because during the install process the installation program will request the information. “Sarge” allows you to use the 2.4.17 or the 2.6.8 kernel. To choose the latest Linux version, type “linux26” at the “lilo” prompt when booting from the install media and press “Enter”. The Debian installer program asks you about your language, country and keyboard layout. If possible it configures your network and loads appropriate kernel modules for the hardware it auto-detects.

“Sarge” allows you to use the 2.4.17 or the 2.6.8 kernel

Now it’s time to partition your disk. Be careful. After you have selected the installation partition, the next step is to install the base system. To be able to boot your system you need to install a boot loader like GRUB or LILO. When done, reboot your machine. Now Debian starts some post-boot configuration routines: you’ll be asked to:
• Configure your time zone
• Choose a name for your machine
• Set up users and passwords. During this stage you must choose the password for the “root” user
• Create an ordinary user for your daily work
• Set up PPP or PPPOE for dial-up connections with the internet
• Configure the APT system in charge of the package management
• Install some software packages. To do this you first have to choose the relevant software. On a desktop machine you’ll probably want the office-suite OpenOffice, the Mozilla Firefox web browser, the e-mail client Kmail, the instant messenger client Gaim, the image manipulation program GIMP, Totem (a video player) and a desktop environment like GNOME or KDE
• Configure your Mail Transport Agent. This is optional but useful, since internal system notification depends on e-mail

After these steps you’ll be presented with the login prompt. Your system is now ready to use.
Debian official web site
International example Back in 1998, the government of Extremadura, a small region of Spain, launched an educational project for technological literacy. The success of this educational project depended totally on the chosen software, and access to source code was a very important issue. The government developed a new Linux distribution based on Debian GNU/Linux 2.0 called “Linex” (Linux + Extremadura).
Linex web site

Currently, Linex has an installation base of over 10,000 computers, mainly in government offices and schools. Another region of Spain (Andalucía) followed the example and made its own distribution, called “GuadaLinex”, based on Linex.
GuadaLinex web site

Countries like China, Italy and Brazil have been studying the success of the project.
Conclusion Debian GNU/Linux stands apart from other distros both in its technical details and in its philosophy. Freedom is a very important concept for the Debian Project, and it is a non-commercial distro. Due to its robust, flexible and highly configurable nature, some governments have chosen Debian upon which to base the development of their own Linux distro, and the project has been a success. If you use other Linux distributions, I recommend you give Debian a try. Debian is not only for geeks—everybody can install and use it. Don’t doubt it, jump to Debian!

Due to its robust, flexible and highly configurable nature, some governments have chosen Debian upon which to base the development of their own Linux distro and the project has been a success
Acknowledgements The author would like to thank Patricia Jung for her comments and grammatical revision.
Notes and resources
• The Debian web site (http://www.debian.org)
• The Linex distro web site (http://www.linex.org)
• The GuadaLinex distro web site (http://www.guadalinex.org)
Biography Arturo Fernánde...: Arturo is a software engineer specializing in web development and a freelance author for various Linux magazines. He has worked with Debian GNU/Linux since 2000.
Copyright information Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is available at http://www.gnu.org/copyleft/fdl.html.
Source URL: http://www.freesoftwaremagazine.com/articles/jump_to_debian
Published on Free Software Magazine (http://www.freesoftwaremagazine.com)
64 Studio Building a native 64-bit creative distribution By Daniel James Creative computer applications are a niche, and a relatively small one at that. Even brand-leading proprietary software companies like Steinberg, the developers of the long-established Cubase music sequencer, have been recently bought out. Consolidation in the creative application market has seen Adobe buy Syntrillium, who created Cool Edit, Avid buy Digidesign and Apple buy Logic—and there are plenty of other examples. What this means is that a handful of multinational companies could now effectively monopolise the gateway to creative expression, at least as far as computers are concerned. This might not be an issue if it were not for the wide proliferation of powerful, general purpose computer hardware in the first world. In addition, the internet, and by extension the personal computer, are now the principal channel for distribution of creative works in many fields. Proprietary tools on the creative desktop mean proprietary formats will dominate the internet, now that it is no longer a purely textual medium. The landscape today for “industry standard” creative software on the proprietary platforms looks a bit like this:
Not much choice here...

In case you’ve heard Apple described as the creative alternative or competitor to Microsoft, we should not forget that back in 1997, before the launch of OS X, Microsoft actually invested $150 million in Apple. That’s not something a company usually does with a competitor. More recently, Adobe and Macromedia have announced a partnership, although it’s not yet clear what form that partnership will take. What is clear is that there is less real choice in “industry standard” creative software than ever. The retail cost of just the software listed here for a single computer runs into thousands of euros. The high cost of entry to this field helps maintain the artificial distinction between consumers and producers. Now of course the software companies who make these programs would say that they represent great value for money, and that educational institutions can benefit from generous discounts. On the first point, it would be fair to mention that while mass access to personal computers has brought the cost of hardware down, the creative applications haven’t followed suit. Quark Xpress was really expensive in 1995 and it still costs a lot today.

A handful of multinational companies could now effectively monopolise the gateway to creative expression

To answer the second point, I believe proprietary software in education, even with discounts, represents a hidden subsidy from the state to the manufacturers of this software. If we look at one Adobe educational pack on offer in the UK for example, it appears very reasonably priced at £9.99 per seat. But a school or college needs to buy 200 seats to get that price, i.e. pay nearly two thousand pounds, in order to run feature-limited “Elements” versions of just two applications. What this means is that taxpayers around the world are sending millions of dollars every year to fund proprietary software development, and yet many schools and colleges still can’t afford to put a full set of creative tools in front of each student.
Even if the software were practically free, consider the value to a company like Adobe that every graduate of the creative disciplines around the world leaves college with training in their software—which the company doesn’t contribute to directly at all. Even the potential funding benefits of corporate taxation for education are offset by the fact that most, if not all, multinational software companies use offshore accounting. It’s certainly the case that artists in a number of disciplines are using GNU/Linux and other free software—free in both the economic sense and the political sense—to realise their ambitions. It would also be fair to say that this software is only just starting to penetrate the consciousness of the mainstream. But how can we make it practical for the average computer user?
The 64-bit question Since any software project takes a while to get to a mature stage, when I launched a start-up earlier this year, I decided to concentrate on the kind of desktop systems which I believe will be common among creative users in the future. We’ve had native 64-bit Linux on the Alpha and the Itanium for years, but these architectures never reached the mainstream desktop—and I don’t think they ever will. SGI now has an Itanium 2-based GNU/Linux desktop product aimed at the creative market, but it costs US$20,000 per machine. Compared to Windows or any other operating system, GNU/Linux clearly has a head start on x86_64, and you can choose from a number of natively compiled desktop distributions for the platform. Unfortunately for the creative user, all of these are aimed at the general purpose audience. It’s impossible to be all things to all people, and what’s good for the so-called consumer is rarely right for the content creator.

GNU/Linux clearly has a head start on x86_64

For example, typical distributions use Arts or ESD to share the sound card between applications, while many GNU/Linux musicians would want to use JACK—admittedly more complex, but far more powerful. (I was asked recently what is so difficult about JACK that it isn’t found as the primary sound server in any mainstream GNU/Linux distribution. I don’t think it is difficult to use, but for the time being it still requires a patched kernel, and some knowledge of sample rates and buffers. Many users just want to be able to throw audio at any sample rate to the soundcard, and couldn’t care less about real-time priority.) In addition, the creative user’s default selection of applications would be very different to that of—for example—a sysadmin. Even gigantic distributions like Debian don’t package all of the specialist tools needed for media creation, and the integration between packages is often less than perfect. So the goal of 64 Studio Ltd. is to create a native x86_64 distribution with a selected set of creative tools and as much integration between them as possible.
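To give a concrete sense of the sample rates and buffers mentioned above, here is a hedged sketch of a typical JACK invocation (the device name and the numbers are illustrative, not 64 Studio defaults):

# Start JACK with realtime scheduling on the first ALSA device,
# at a 48 kHz sample rate with two periods of 128 frames each
# (roughly 5 ms of buffering)
jackd -R -d alsa -d hw:0 -r 48000 -p 128 -n 2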
The 64 Studio default desktop is a slimmed-down Gnome install
Why Debian? Most of the packages in 64 Studio come from the unofficial Pure 64 port of Debian testing, with some from Ubuntu, some from DeMuDi and some custom built. A more obvious choice might be Red Hat, given that
many of the high end (which is to say expensive) proprietary tools used in Hollywood studios and elsewhere are sold as binary-only Red Hat packages. However, the split between Red Hat Enterprise and Fedora Core presents serious problems for any derived distribution. You could rebuild Red Hat Enterprise from source as long as you removed all Red Hat trademarks, but that’s a lot of extra work—and you’d have to follow Red Hat’s agenda for its distribution, which you couldn’t have any input to. On the other hand, you could build a distribution on top of Fedora Core. It’s broadly Red Hat compatible, and there are the beginnings of a community process taking place—although it’s still far more centrally controlled than genuine grass-roots distributions. The key problem with this approach is that Fedora Core is not designed or built to actually be used. I can say this with some confidence because I was able to ask Michael Tiemann, former Red Hat CTO and now vice president of open source, this question myself. Fedora Core remains a technology preview for Red Hat Enterprise, and the Fedora Project has absolutely no commitment to stability or usability. If Red Hat wants to try a major update to see what breaks, it can.

The work of the Debian Pure 64 port team is of a very high quality

Debian does have a commitment to stability, and a bona-fide community process. There are other reasons for favouring Debian over Red Hat: apt-get is just better than rpm when it comes to upgrades, and on the creative desktop we’ll be upgrading continuously. The work of the Debian Pure 64 port team is of a very high quality, not to mention that of all the many Debian package maintainers. I recognise that whatever packages we put into 64 Studio, users will want some of the packages that we haven’t included—so being able to use thousands of binaries straight from the Pure 64 port without modification would be a major advantage. Because we’re sticking very closely to Debian with the 64 Studio design, it’s our intention that users will be able to install any application from Pure 64 simply by enabling an additional apt source. This will include most of the well-known applications with the exception of OpenOffice.org, which just won’t build natively on x86_64 yet. In fact, 64 Studio is not so much a distribution based on Debian as a Debian remix. 64 Studio maintainer Free Ekanayaka is a Debian Developer, so we hope to contribute our improvements back directly—where they are Debian Free Software Guidelines compliant. However, we do benefit from the flexibility of not being an official part of Debian. For example, the Debian project has decided that they do not want to package binary audio interface firmware, which is required to be loaded by the driver for the interface to work. That’s fair enough, and I understand the reasons for their decision, but it’s a major pain if you own that kind of interface, because it won’t work out of the box.
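As a purely illustrative sketch of what “enabling an additional apt source” involves (the repository URL below is a placeholder, not the project’s actual archive address), it is a one-line addition followed by a refresh:

# Hypothetical extra line in /etc/apt/sources.list
deb http://example.org/debian-amd64/ testing main
# Refresh the package lists so packages from the new source become installable
apt-get update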
The alpha releases There are a number of challenges we still have to face. The first is following the rapid pace of kernel development. We are currently using Linux 2.6.13 with Ingo Molnar’s real-time preemption code and a few other patches. Not so long ago, these patches didn’t build on x86_64 at all, and as far as I know, we are the only native 64-bit distribution using them. The first indications are that this combination works really well for audio with full preemption enabled, the most aggressive setting. For the time being we are using the realtime-lsm framework to give real-time priorities to non-root users, because we know it works. We may switch to rlimits in the future, as this has been merged into the mainline kernel now.
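For comparison, the rlimits approach mentioned here is usually configured through PAM. A hedged sketch (the group name and values are common choices, not necessarily 64 Studio’s settings) would be two lines in /etc/security/limits.conf:

# Let members of the audio group request realtime scheduling and lock memory
@audio - rtprio 99
@audio - memlock unlimited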
JACK running with realtime priority on a native x86_64 desktop

Another challenge is the issue of support for proprietary formats within free software. At the level of encoding and decoding, we think the best solution we’ve seen is the Fluendo plugin collection for GStreamer, which as far as we can tell meets the requirements of free software licences regarding linking, and also the legal requirements of the patent holders. It’s simply not sustainable to expect users to locate and download libraries of dubious legal status, and install these by themselves. Apart from any ethical problems, it’s impossible to support users properly in that situation. Downloading these libraries is certainly out of the question for any institutional user, such as a college.

It’s almost a mantra that “everyone mixes in ProTools”

At the level of project interchange, for example moving a complex project from Ardour to ProTools, there does seem to be a move among proprietary applications towards support for AAF, the Advanced Authoring Format. Free software must support this kind of high-level project compatibility format, otherwise it doesn’t stand a chance of gaining a significant user base. When I talk to people in the music industry, it’s almost a mantra that “everyone mixes in ProTools”. I’m not aware of any free software audio application that supports ProTools format import or export, but at least with AAF we have the chance of finding a middle way. 64 Studio version 0.3.0 alpha is currently available for download (http://www.64studio.com/) as an .iso image. Changes from stock Debian include X.org instead of XFree86, the custom kernel package, and a base selection of packages including the Gimp, Inkscape, Blender, Ardour, Jamin and Kino. Version 0.4.0 came out at the end of September with more packages and enhancements, and the distribution is seamlessly upgradeable from a 0.3.0 install with apt-get, of course. We’d be more than pleased to hear your test reports and suggestions for the distribution—you can help us make free software the creative desktop of choice.
Biography Daniel James: Daniel James was one of the founders of LinuxUser & Developer Magazine (http://www.linuxuser.co.uk/), and the original director of the linuxaudio.org consortium.
Copyright information Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved. Source URL: http://www.freesoftwaremagazine.com/articles/c64_studio_project
Published on Free Software Magazine (http://www.freesoftwaremagazine.com)
Convincing management to approve free software Tips to better advocacy By Maria Winslow The grassroots efforts of system administrators have brought Linux and other free software into the mainstream. To be an effective advocate for free software at work, you need to speak the language of management and convince them from their point of view. This article discusses how to present your case, why your audience makes all the difference, how to hook them with proof of cost savings, and reveals two secret weapons for your quest to promote free software. This article explains why bashing Microsoft won’t help you in your case, which migration recommendations will seem the most practical and feasible to management, and how to present those recommendations in terms that management will respond to. It talks about how your goals are different from those of management, and how to adjust your approach accordingly. Finally, it demystifies return on investment (ROI) and shows how you can put together simple calculations to back up your case.
The golden rule of advocacy

The most important thing to remember when you are making a case for free software is to be sensitive to your audience. Who are you talking to? How technical are they? What is most important to them? You must address their concerns. Most people consider it a courtesy if you tell them that you're researching different free software solutions and you'd like to take their concerns into account as you investigate the options.
Tip 1: don't bash Microsoft

Criticizing Microsoft can be counter-productive in your advocacy for free software, for three important reasons.

The first and best reason is that you really don't have to. Jupiter Research conducted a survey in 2003 of IT professionals in the small- to mid-size business market, asking what free software they were using and why. The number one reason for migrating to Linux wasn't cost savings, security, or reliability. Perhaps surprisingly, the primary reason stated for migrating to free software was to get away from Microsoft. Therefore, you don't have to say anything negative; your audience may be thinking it anyway.

The second reason involves a little pop psychology. Consider this: when you denigrate Microsoft, you insult people whose only experience is working with Microsoft. Your audience, or at least a percentage of their reports, may be in this camp, so be careful not to insult the base of their professional knowledge.

The third reason is that you will seem more credible and professional if you stick to the positive and resist the urge to use a derogatory nickname.
Tip 2: be practical

Focus on recommendations that will seem the most practical and feasible to your audience. Your campaign should begin on the edges of the enterprise, with systems that are less critical, and with migration recommendations that are less likely to cause unacceptable disruption. Remember to consider management's perception of disruption as you weigh potential candidates. For example, e-mail systems can be a difficult place to start unless you have a compelling reason to promote a migration in this area; management will naturally shy away from migrations perceived to carry a high risk of disruption.
Basic internet infrastructure is usually the best place to start. Web servers, DNS servers, web content, spam/virus filters, and similar services are commonly trusted to Linux and free software. Sometimes it's possible to get approval for a Linux deployment on reused hardware, so don't overlook this possibility, especially for file servers; it's easier for management to say yes when the capital outlay is low. In your advocacy efforts, be sure to stress to management that free software adoption doesn't have to be radical; just a few systems in non-critical roles can add up to savings now and increased staff experience for future deployments.
Tip 3: lose the tech-speak

Sometimes it might seem that management doesn't take your technical advice seriously. Understand that your goals are different from management's goals. You want to work with the best technology, advance your career by keeping up with new technologies as they develop, and do your job efficiently using tools that make it easier. You're technology-centric. Management wants to hold down the fort, make sure nothing goes terribly wrong, try to keep people from complaining, and do it all within the budget. They are not technology-centric. Note that the actual technology doesn't make it into the list of most important goals—it's just a means.

The significance is that when you focus on the technology, management thinks you don't understand the big picture. To management, it's about making things work well enough to keep everyone reasonably happy within the constraints of a budget. If your pitch to management focuses primarily on the superiority of the technology, you're only giving them a small part of the story from their point of view. Even worse, if you argue about the technology with a colleague in front of management, you're likely to get even more pushback. For example, you may recommend dumping the mail server for Sendmail on Linux, while your colleague advocates Postfix because it's more secure. When you enter a discussion of the technical merits of each solution, your boss may tune out the tech-speak while perceiving that either solution could turn out to be the wrong choice. By focusing on the technical, and especially by debating technical points, you and your colleague have introduced more risk (or at least the perception of risk) into the equation. So lay off the tech-speak and focus on your two secret weapons instead.
Secret weapon 1: case studies

Management loves case studies. They are anecdotal, easy and quick to read, and easy to identify with. Most importantly, case studies feel like proof to management. They usually reflect their peers—IT directors, mainly—and an endorsement from a peer goes a long way. So start googling.

As you search for case studies to support your recommendations, follow two basic rules:

• Avoid case studies provided by vendors, as they usually sound too much like marketing materials to be truly credible. Your best source is always going to be a magazine, because the analysis will be independent.

• Choose case studies that are as similar to your scenario as possible, to help management recognize how your free software recommendation would work in your situation as well as it did in the case study. Consider whether the case study is reflective of your industry segment, or whether the basic IT landscape is familiar.

To get started, look at a few of the case studies (http://www.windows-linux.com/articles) I presented in my book The Practical Manager's Guide to Open Source (http://windows-linux.com/practicalOpenSource).
The Practical Manager's Guide to Open Source
Secret weapon 2: show them the money

The single most effective way to influence any decision to deploy free software is to prepare a cost justification. When you hand your boss a spreadsheet showing the cost savings gained, they will consider your migration recommendation more seriously. Fortunately, this doesn't have to be complicated.

Return on investment (ROI)

Return on investment (ROI), also called the cost-benefit ratio, is the benefit gained divided by the cost of acquiring the investment. You can think of ROI as "for every dollar spent, I get back X percent".

Benefit = the total amount of savings (proprietary price - free software price)

ROI = benefit / cost

To calculate the ROI of a particular migration, you need a few cost figures. Whenever possible, use historical budgetary data for the best accuracy. Not everyone pays the same for software, so if you use the actual prices your organization pays, your calculation will be more credible. You need to find out two figures: the cost of the status quo replacement system and the cost of deploying your free software alternative. Calculate the cost of the status quo system as everything your organization will pay if you don't go with free software. The cost of the free software deployment should not be $0. It's neither believable nor true. Be sure to include support contracts from a third party and outsourcing costs for deployment or staff training, even if you pay nothing for the actual software.

Why ROI and not TCO?

TCO stands for Total Cost of Ownership, which measures the cost of a system over its entire life. So why should you calculate ROI and not TCO? There is one simple reason to use ROI: it is easy to calculate now. ROI involves the costs and benefits of acquiring a system, which are known at the time of deployment. By its very nature, TCO involves guessing, simply because you can't know the total cost of a system until the end of its life cycle. TCO also doesn't show the benefit gained by the organization, just the total cost over the life of the system. The TCO of paper and pencil is lower than that of desktop computers, yet clearly most organizations use desktop computers.

An ROI example

Take as an example a Windows to Linux file server migration. You're making the recommendation because the server, accessed by 100 users, is now five years old, and you're coming up on a scheduled hardware/software upgrade. This is the perfect time to make a migration recommendation.
To calculate the ROI of this recommendation, you need to know:

1. What is the cost to upgrade the hardware and software if we remain with the status quo?
2. What is the cost to migrate to Linux and Samba?

Because eWeek Labs found Linux/Samba to be two-and-a-half times faster at file serving than Windows 2003 Server, you're also recommending that you reuse the existing hardware. Once you have the pricing, you can find the benefit to the organization of migrating (status quo cost - free software cost). Then divide the benefit by the cost to arrive at the ROI.

Status quo upgrade:

• Windows 2003 Server, Enterprise Edition, includes 25 client access licenses ($3999)
• 75 additional CALs ($2448.75)
• New server hardware ($2000)

Total cost of a status quo upgrade: $8447.75

Linux replacement:

• Debian GNU/Linux with Samba ($0)
• Initial installation support contract with local system integrator ($1500)
• Reuse server hardware ($0)

Total cost of a Linux/Samba replacement: $1500

Total benefit: $8447.75 - $1500 = $6947.75

ROI = total benefit ($6947.75) / cost of free software deployment ($1500) = 463%

This example shows that a simple migration can bring a very healthy ROI, which will help convince management that free software is a good idea for your IT environment. Use this example as a base for your own calculations, and be sure to get the most accurate cost information possible.
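The arithmetic above is easy to rerun with your own figures. Here is a minimal sketch in Python of the same ROI calculation; the line items and prices are simply the assumptions from the example, so substitute your organization's actual budgetary numbers before showing anyone the result.

    # ROI sketch using the example figures above (replace with your own data).
    status_quo = {
        "Windows 2003 Server, Enterprise Edition (25 CALs included)": 3999.00,
        "75 additional CALs": 2448.75,
        "New server hardware": 2000.00,
    }
    free_software = {
        "Debian GNU/Linux with Samba": 0.00,
        "Installation support contract with local system integrator": 1500.00,
        "Reused server hardware": 0.00,
    }

    status_quo_cost = sum(status_quo.values())
    free_cost = sum(free_software.values())
    benefit = status_quo_cost - free_cost   # savings gained by migrating
    roi = benefit / free_cost               # benefit per dollar actually spent

    print("Status quo upgrade: $%.2f" % status_quo_cost)
    print("Free software deployment: $%.2f" % free_cost)
    print("Benefit: $%.2f" % benefit)
    print("ROI: %.0f%%" % (roi * 100))

Run as-is, it prints a benefit of $6947.75 and an ROI of 463%, matching the worked example.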
Making your case

Remember, the golden rule of advocacy is to know your audience and be prepared to address their concerns. Technology matters less to management than it does to you, so make an effort to tone down the bombardment of information and give them just the highlights. Recommend migrations that will seem the most feasible and least risky to management. Be professional and positive in your recommendations. Search for case studies that are relevant to your situation—they are one of your secret weapons. Additionally, it is worth taking the time to put together a simple ROI calculation. Using the example in this article as a base, you can impress management with your well-rounded view of free software in the enterprise and meet with better success in your internal advocacy.
Biography

Maria Winslow: As an Open Source Practice Leader with Virtuas, Winslow assists clients in understanding the technical and budgetary impact free software will have on their computing environments. Her recent book, The Practical Manager's Guide to Open Source, guides IT directors and system administrators through the process of finding practical uses for free software that will integrate seamlessly into existing infrastructures, as well as understanding the costs and savings. She is a frequent speaker and author on the topic of free software. She can be reached via the Practical Open Source website (http://windows-linux.com/about.html).
Copyright information This article is made available under the "Attribution-Sharealike" Creative Commons License 2.5 available from http://creativecommons.org/licenses/by-sa/2.5/.
Source URL: http://www.freesoftwaremagazine.com/articles/convince_management
Free software liberates Venezuela

The free software revolution comes to Venezuela

By David Sugar

The third International Forum on Free Knowledge brought many groups and individuals interested in the development of free software worldwide to the city of Maracaibo. One reason Venezuela chose to host this event is that, starting in January 2006, its new free software law, directive 3.390, comes into effect, mandating that all government agencies migrate to free software over a two-year period. I was invited to speak about Telephonia Libre: the use of free software in telecommunications.
Map of Venezuela

While I am invited to speak at many events and conferences worldwide, most often I decline them immediately because they're not open to the general public. This is one of the reasons I rarely speak in the U.S.—virtually all U.S. scientific conferences in my field are run for profit and are organized by groups who charge fees so high that the general public is discouraged from participating, or they are organized for the benefit of commercial vendors who are trying to market themselves to potential customers. Science fiction conventions would actually be closer to my choice of U.S. venue, although I don't seem to get invited to those.

I accepted this invitation for several reasons: first, it was open and free to the general public; second, it was Juan Carlos Gentile who personally asked me to attend; and finally, I have always been immensely curious about Venezuela. While there, I had the extremely lucky chance to speak with directors in many of the organizations charged with carrying out Chavez's vision of a "Bolivarian Revolution".

While my travel had been planned a number of weeks in advance, as with all travel I have experienced in Latin America, this turned out to run on a different concept of time. I didn't hear back at all from Venezuela until the weekend before departure, which is actually not that remarkable. By Monday the 21st of November, I knew I'd arrive in Maracaibo the next day, and return on the 29th. That much was confirmed to me by Ambar Rodriguez, who works for Conatel, the state telephone regulatory agency. I had a chance to speak with Ambar over the weekend, but I didn't know which airport I would be departing from, or even what airline I'd be flying, until Monday morning.

To understand the blissful attitude I had taken, you have to understand this: I recall one time I was staying with a family in São Paulo, where we were scheduled to take a flight to Porto Alegre. The airport was across town, and our departure time was about half an hour away when we finally wandered out to the car. We didn't even travel in much of a hurry. Yet, somehow, in the twisted and bizarre time warp that is Brazil, we arrived on time for our flight anyway, and I never quite figured that out either. Time often has a very different meaning in Latin America.

Many of the events and presentations at the forum were, much like mine, of a rather technical nature. My presentation caused some difficulty for the translator I was given, who had no experience or understanding of
the specialized technical terms I was using. This was only corrected near the end, when a different person came forward to translate my speech. Some presentations were from groups who were using free software in some social setting. The event was heavily attended, in particular by technical directors from many key parts of the Venezuelan government, because of their migration plans for 2006.

I eventually met up with Jeff Zucker from Perl Mongers, who traveled by bus from Caracas, and the well-known international free software activist Juan Carlos Gentile, who drove all the way from Caracas along the same roads with Marko, who is also from Italy. While it is said to take ten hours to drive from Caracas to Maracaibo, as he and Marko are Italian, naturally I expected they would arrive in only five. These three, and Ana Isabel Delgato from the Debian Venezuela group, were my primary "translation team" whenever I spoke with others who didn't speak English.
The People's Ministry of Economics

Venezuela is blessed with not one, but two economic ministries. There is the old ministry of economics, which deals with the traditional capitalist economy. It is worth noting that capitalism continues in Venezuela and will likely continue to do so for some time. While lands are at times redistributed to landless laborers, for the most part existing industries and businesses are left alone, and left to the old ministry of economics. Instead, they have a different idea of how to transform society here, and this brings me to the second ministry.

The Ministerio Para La Economia Popular, or roughly, the People's Economic Ministry (for simplicity referred to simply as Minep), is tasked with transforming Venezuela with a new economy. While the ministry does a number of important things, I believe the most interesting is to train and educate ordinary Venezuelans who volunteer in how to run a worker co-operative. This is done by providing co-ops the tools, financing, and practical training they need to operate their new enterprises. My interest in this aspect of Minep came in part from their interest in providing VOIP services along with the computers they are offering to their worker-managed co-ops. This was a rather specific technical issue, and one they were very interested in discussing with me.

Many of these worker co-ops are very small startups that typically have 10 people or fewer. Minep offers training and support, as well as financing, to allow co-ops to purchase computing systems for their business needs. These systems are now offered entirely with free software, starting with the Debian GNU/Linux operating system, along with Open Office for general business use and web hosting under Apache. Co-ops that go through the Minep program also have the ability to host web sites with their own content, usually featuring the products or services a given co-op wishes to offer. Co-ops are also trained in the use of the free software they receive and in how to maintain their own IT infrastructure.

The Minep co-op training program was piloted in 2004, with some 3000 worker-managed co-ops formed. During this year (2005) they have formed over 45,000 such co-ops nationwide, and are expecting to train over 700,000 Venezuelans in how to form and be part of a new economy. This suggests to me that perhaps 40% of those that go through the Minep program eventually do form a commercial enterprise.

The use of free software, and the offering of computer systems for business use as part of the co-op program, is actually relatively new. I believe, if I understood correctly, that the full free software training program is a six-month course, and so is rather comprehensive. This year (2005), they've only trained people from at most a few thousand of the co-ops in the use of free software, through the initial pilot program. In 2006, however, that program and free software training should be available to all who are interested.
The Ministry of Intellectual Prosperity

SAPI, the independent service for Propiedad Intelectual, is the ministry that used to define Venezuela's so-called "Intellectual Property" laws. I understand SAPI also at one time concerned itself with the issue of what was called "piracy". I would have thought, however, that controlling murderous gangs of anarcho-capitalist "gentlemen of fortune" who raid ships would be the job of the navy, or perhaps the interior ministry.

"Intellectual Property"
The term "intellectual property" itself is of course a newspeak propaganda word that didn't even exist 20 years ago. First, the topics it covers range from copyrights, patents, trade secrets and trademarks to a variety of other things, all of which are in reality very different and unrelated. Second, it's based on the premise that you can give something intangible to someone else, and yet control it and decide what other people do with it, as if it or they (and even the ideas they may have about it) were your physical property. Intellectual property amounts in part to thought control through legal fiction. Some even say it amounts to intellectual slavery.

The consequence of treating ideas and thoughts as if they were tangible property is the very destruction of science and education and the elimination of individual rights and freedoms. Science is built in part upon the idea that new knowledge is created by incrementally improving existing ideas. Education is based on the idea that one can learn from existing things and then use that knowledge to create new works. The idea behind "Intellectual Property" interferes with both. It is barbarism, and could well lead to a new "dark ages", where only a privileged few are allowed to learn, under the exclusive control of greedy intellectual monopolies.

Since "Intellectual Property" involves exclusive licensing, when public universities engage in it and license their discoveries exclusively to others, the public is made to fund research that only benefits a small number of people. Even worse, those companies which receive such an exclusive grant can then use it to sell back to society the fruits of what society already paid for. This can be thought of as paying for something twice. It could also be thought of as public welfare for private corporations, or more simply: exploitation.

I had the good fortune to meet the current director general of SAPI, Eduardo Samán, while I was in Maracaibo. He has very different ideas about the purpose of SAPI. He is a well-known internationalist, and had been a key person in establishing the program for promoting a developing nations agenda within WIPO. Rather than creating new intellectual restrictions, Samán proposes that the mission of SAPI should instead become that of promoting "Intellectual Prosperity", by creating laws and services that promote the ability to share knowledge as the common heritage of all mankind.

Assuming that private corporate interests in the developed world today do succeed in the great program of owning what people are allowed to think, it is very possible that places like Venezuela will become the new leading nations in science and technology.
Hugo Chavez
How oil fuels the Bolivarian Revolution

Maracaibo is also the heartland of the oil industry and of the state-run oil company, PDVSA. Oil companies are traditionally conservative in nature. PDVSA, however, is a study in contrasts: it is both the primary wealth-producing institution in the country and the strongest source of support for President Hugo Chavez's revolutionary changes.
I met a number of PDVSA oil workers, who seemed well represented among the ranks of PDVSA management. I also had the chance to talk over lunch with one of their directors, Socorro Hernendez, as well as Jose Luis Rey, whose renown is both as a skilled hacker and as a financial genius who helped rebuild the financial trading systems that were sabotaged in 2003. Today, the state-run oil company is a major backer of the free software movement (software libre) in Venezuela and was a major sponsor of the 3rd International Forum on Free Knowledge, which is what brought me to Maracaibo. Every question related to the use of free software in Venezuela, and to how the Bolivarian revolution started, seems to come back to PDVSA and the worker lockout of 2002.

A little history...

Before the lockout, the administration of the state oil company was strongly connected to the wealthy elite of Venezuela. Many of the wealthiest people in Venezuela had been getting much richer thanks to the oil company, in part through contracts and corruption, not unlike what has been happening here in the U.S. with politically connected companies like Halliburton. President Hugo Chavez was originally elected on a platform of using the oil wealth to pay for education and health programs for the country's poor, rather than simply making the country's wealthy even wealthier. Many of Venezuela's wealthier citizens, used to having money from the state oil company, would not tolerate this, and so they decided President Hugo Chavez had to go at any cost, even if it meant sabotaging their own nation to do it. So they tried to shut down the oil company in December of 2002, by locking out the workers, holding the oil resources of the nation hostage, and keeping the entire IT infrastructure under their control. If the data and systems present then had been destroyed, it would have been years before another drop of oil could have been produced.

Out of 4800 managers, about 200 chose to stay behind, and together, with the help of many retired former managers who were less corrupt than the ones who left, the workers tried to save the oil company. But the biggest challenge was the computer infrastructure. Management of IT was at the time contracted to SAIC (Science Applications International Corp), which has well-known political and business connections to Cheney's office, the U.S. DOD, and the CIA. At first, when the Venezuelan army was called out to secure the oil facilities during the lockout, the SAIC staff made videos of the troops securing the facilities in an attempt to claim the facilities were under attack, and tried to persuade the U.S. congress to give Bush war powers to seize the oil fields. When this scheme failed, the SAIC workers fled the country, but they changed all the passwords and kept remote control of all of PDVSA's computer servers. They chose not to destroy the data on them because they thought they'd be back in a few months, once the government of President Chavez finally capitulated.

Much of PDVSA's infrastructure ran on Microsoft Windows-based servers and used proprietary database software such as Microsoft SQL. The IT managers didn't expect a bunch of oil workers to be capable of thwarting their plans. Those same oil workers, working together with local computer hackers, were able to secure control of vital computer servers, and in doing so saved the oil infrastructure.
The Venezuelan revolution is perhaps the first revolution in history saved by computer hackers, and this is one of the reasons the government is so very strong on promoting the use of free software, particularly in public administration. The Venezuelan government wishes never again to have vital infrastructure held hostage or sabotaged by agents of foreign nations. This cannot be accomplished with proprietary software whose source is kept secret, such as Microsoft Windows with its infamous backdoor NSA key. Even proprietary software from a trustworthy source has to be suspected of possible tampering, and so must be rejected, not just by Venezuela, but by any nation that wishes to protect and maintain its sovereignty against sabotage.
Back to the present...

Everyone I met from PDVSA appears completely committed, at all levels, to the basic idea of converting Venezuela's oil resources into long-term and self-sustaining wealth for the nation as a whole. This is done in part through the development of a new economy, as planned for through Minep. Capturing this wealth is viewed as an urgent matter because, even though Venezuela possesses one of the largest known reserves of oil, they expect world oil production to begin declining and see this wealth as very temporary. Socorro Hernendez said PDVSA believes that nobody will "burn" oil (for example, in automobiles) in as little as 20 years. He also said they believe that, while oil will remain important to the multitude of other industries in which it is used, the price will settle to $5 a barrel, so now is not only the best, but also the last, chance to create something useful from this wealth.
Conatel Telecenter
Conatel and Conclusions

I flew from Maracaibo to Caracas on November 26th. Even in Venezuela's revolutionary republic, customs officials are still customs officials, and airports are still like airports everywhere. Given the lack of the revolutionary posters, pictures of Chavez, and military checkpoints promised by the state department, what is worth noting is the rather ordinary way society and most institutions operate in Venezuela.

One interesting program is run by Conatel, Venezuela's telecom regulatory agency, which now deploys telecenters into communities around the country. Conatel regulates telephone and broadcast services in a manner akin to the FCC in the United States. In this instance, however, Conatel also runs a community telecenter project to bring computing and telephone resources directly to communities across the nation. There are similar programs running in various Latin American nations today. I actually saw the model Venezuelan telecenter at the Conatel building while I was in Caracas.

A typical community telecenter comes with up to a dozen PC workstations and a server. Connectivity is offered through a telecom carrier for both internet data and voice. These systems use free software throughout, and each telecenter has a staff of two people. One is trained to manage the computers and resources of the telecenter, to teach people how to use them, and to maintain the equipment. The second person is someone trained in the social needs of the given community: for a telecenter deployed in an agricultural town, the second person would likely be someone educated in agriculture; in a mining town, it would likely be a miner.

Each telecenter desktop PC runs Debian GNU/Linux, and includes software for internet browsing, Open Office for routine work, and a camera along with GNOME Meeting for voice
and video conferencing. The telecenters also have VOIP telephones that are made in China and run embedded Linux. Many carriers in Venezuela offer direct H.323 connectivity for VOIP and presumably, like Deutsche Telekom, use GNU GateKeeper to form their mesh network. The client workstations use GNOME Meeting, which is an H.323 client, and even the telephone instruments use H.323. No doubt it would bring a tear to the eyes of Craig Southern, who heads the OpenH323 project, to know that there is a complete end-to-end national H.323 network in Venezuela, running the OpenH323 project stack, from the national carrier down to the individual telephone instruments.

I believe telecenters are, or will be, the public libraries of the new millennium. Unfortunately, most existing libraries elsewhere in the world today, while they often include computers, don't understand how those computers should be used. For example, many libraries in the U.S. have computers, but they are really only used for web browsing, and they come "attached" with nutty politicians more deeply concerned that library patrons might read about sex than about the laws that require library content to be filtered for that reason.

All these things began with the oil worker lockout. Rather than bringing down the government of Hugo Chavez by working together with foreign interests to directly sabotage the country's most vital industry, the wealthy elite of Venezuela radicalized the oil workers in a way that no other action could. The workers of PDVSA are now fully committed to creating the new economy, and will remain so regardless of who is in power. When the rich of Venezuela ponder who it was that made Venezuela become a revolutionary nation, they shouldn't look at President Chavez, who may not have even been thinking of this at the time, and certainly had no means to accomplish it if he had; instead, they should look in the mirror.
Biography

David Sugar: David Sugar is an active maintainer for a number of packages that are part of the GNU project (http://www.gnu.org), including GNU Bayonne (http://wiki.gnutelephony.org). He has served as the voluntary chairman of the FSF's DotGNU (http://www.dotgnu.org/) steering committee, as a founder and CTO for Open Source Telecomm Corporation, and currently owns and operates Tycho Softworks (http://www.tychosoft.com).
Copyright information Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved. Source URL: http://www.freesoftwaremagazine.com/articles/professional_services_venezuela
Towards a free matter economy (Part 4)

Tools of the trade

By Terry Hancock

"A good scientist is a person with original ideas. A good engineer is a person who makes a design that works with as few original ideas as possible. There are no prima donnas in engineering."—Freeman Dyson

Imagine where free software would be today if it weren't for the GNU C Compiler! Just as free software depends heavily on free compilers, so does free design rely on having free computer aided design and authoring tools.[1] Before gcc was created, when free software had to be written with proprietary compilers, the software development community was limited to the very small number of people who could afford to purchase such tools—either because they were professional programmers or because they were very dedicated amateurs. The idea of "bazaar style" development hadn't yet been conceived, but that was just as well, since such small bazaar sizes would lead to a breakdown of the bazaar development strategy[2]. No doubt some centrally controlled "cathedral" projects (such as the GNU project) would have continued, but the overall effect would be an extreme chill compared to the hotbed of innovation that free software currently represents.

Lacking the kind of professional-quality design authoring tools that are found in the commercial engineering workplace, the free design community is in just that situation today. Of course, it can be (and has been) argued that engineering is a specialized discipline and that therefore only a small technocratic elite can participate in the process—hence bazaar size might always be too small to be effective. But twenty years ago, this was the conventional wisdom about software development, too!
By users for users

A compiler is a program for writing programs, so the users of the tool are also those qualified to create it. With design tools, we're not so lucky: it's a fairly rare engineer who has the programming expertise to develop proper engineering design software. This is a problem, because bazaar development works best when applications are developed by the people who need them. In order to get a CAD/CAM system started, it will probably be necessary to begin with a centrally organized solution—a cathedral development model—working as quickly as possible towards a solution that relies heavily on scripting that can be done by the typical end user of engineering software.

This is the sort of approach that has proven itself in many projects in the multimedia and desktop application sphere: customization and scripting facilities have been the free software solution for desktop environments like KDE [3], vector graphics applications like Skencil [4], 3D animation and modelling applications like Blender [5], and particularly in game engines. All of these are situations where the same problem applies: the typical user is not a particularly skilled programmer, so a simplified programming environment is needed to make a more graded slope from user to developer. This effectively increases the bazaar size for the parts of the program that can be scripted.

User interfaces matter the most to the serious user and are the hardest for the programmer (who is not particularly skilled in the application domain) to predict. A user interface with a scripting engine gives a lot of leverage over the overall program, and the availability of easily-embeddable high-level interpreters such as Python makes such a facility easy to provide. Thus, the "by users for users" philosophy of free software design can be stretched to suit real world applications, where skillsets vary between users and developers.
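As a concrete illustration of that "graded slope from user to developer", here is a minimal sketch of the kind of scripting hook such an application might expose. Everything in it is hypothetical: the register_command decorator, the app object, and its selection() and chamfer() methods are invented for this example rather than taken from any real CAD package. The point is only that an end user can add a command in a few lines of the embedded language without touching the application's core.

    # Hypothetical scripting hook: the host application keeps a registry of
    # user-defined commands and hands each one a small, stable API object.
    COMMANDS = {}

    def register_command(name):
        """Decorator used by user scripts to add a new menu command."""
        def wrap(func):
            COMMANDS[name] = func
            return func
        return wrap

    # --- this part would live in an ordinary user's script file ---
    @register_command("Chamfer selected edges")
    def chamfer_selected(app, distance=2.0):
        # 'app' is the small API surface the host exposes to scripts.
        for edge in app.selection():
            app.chamfer(edge, distance)

    # --- and the host application simply looks up and dispatches ---
    def run_command(name, app, **kwargs):
        COMMANDS[name](app, **kwargs)

The design point is that user scripts only see the stable, simplified API object, so the application's internals can evolve without breaking the scripts written by its engineer-users.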
Computer aided design, simulation, and manufacturing

Although there are many other areas of design software which are useful and available, and many others for which no free solution yet exists, the most pressing and obvious need is for a general purpose 2D/3D mechanical Computer Aided Design and Computer Aided Manufacturing (CAD/CAM) system.
Figure 1: QCAD GUI. QCAD has an excellent and intuitive palette-based GUI that allows quick access to drawing constraint features, and is easy to learn

Specialized cases of CAD have already been covered by free software offerings—such as xcircuit [6] for drawing and capturing electronic schematics and pcb [7] for designing printed circuit boards—and general purpose 2D CAD drawing applications such as QCAD [8] are moderately well-developed (although they still fall short of proprietary competitors). The GNU EDA project [9] is making progress in the direction of integrated circuit design, and it should not be surprising that these "highly ephemeralized" technologies are among the first addressed.

Getting into the harder technologies, such as aerospace, mechanical engineering or robotics, requires much more sophisticated 3D CAD tools than those that are currently available. The modern manufacturing industry relies extensively on these tools to produce the kinds of complex technologies that we are accustomed to, and we won't have much chance of being competitive in the free design world until we're able to use tools that are at least as good.

Friendlier interfaces

The interfaces in modern versions of proprietary CAD systems such as AutoCAD [10] have hardly changed at all in 20 years. They have traditionally relied on the high expressive power and low development cost of linguistically-oriented command line interfaces, leaving their visual-tactile graphical interfaces to stagnate. End user oriented free software artistic 3D tools like Blender, however, have seen much greater innovation, and they have demonstrated that a well-designed graphical interface for power users can greatly increase productivity, even for very complex tasks.
Figure 2: Blender GUI. Blender's interface is cursed by new users, but greatly loved by power users. It does seem a bit daunting at first, but I quickly became attached to it. It compresses an enormous number of options into a small space with an intuitive and highly internally-consistent design, using color-coded widgets, icons, and words as needed. If not actually used in a 3D CAD system, I think it should at least be emulated

Collaborative design

In order to maintain a sufficient size for a free-design bazaar to establish itself and grow, CAD systems which are adequate to the task must be created. They must be free-licensed, so that all comers to the community can participate. They must use free, standardized drawing storage formats, and they must be friendly to version control and any other necessities of online collaboration. Furthermore, they must permit a form of markup that allows communications about drawings to occur electronically. To my knowledge, this has never been fully achieved even with proprietary software, although I am aware of an abandoned attempt at the Jet Propulsion Laboratory called Supernova, based on "Lambda MOO" technology[11]. However, these network effects are precisely the sort of problems that free software is better positioned to solve! CVS, Subversion, and most forms of internet interpersonal communication are the types of software that started out free.

Integration with Narya Bazaar

In the Narya Bazaar system (see parts 1 (http://www.freesoftwaremagazine.com/free_issues/issue_07/free_matter_economy), 2 (http://www.freesoftwaremagazine.com/free_issues/issue_08/free_matter_economy_2) and 3 (http://www.freesoftwaremagazine.com/free_issues/issue_09/free_matter_economy_3) of this series), there's a vital role played by specification objects, which must define the work to be done adequately to form a contract between donors, projects, and vendors. Whatever CAD system we create must be up to the task of defining these specification objects adequately for such agreements. That will require a commercial-quality CAD system, and it will need to support a range of meta-data from tolerances and materials to the relationships between components in large assemblies.

Furthermore, as the bazaar community grows, parts from many different vendors will have to be integrated. This has long been a problem in commercial design applications (consider what happens when space station components from RK Energia in Russia need to be mated with components from Lockheed Martin in the USA), and good CAD systems constitute the solution. In the bazaar environment, virtually all components will be "outsourced", so it will be necessary to use good engineering communications processes to ensure that components interact properly.
Breaking the proprietary grip on the CAD market

There is a kind of circular logic that keeps people locked into the systems they have—they are expensive and difficult to use because they have a very small user base, and the user base is small precisely because they are expensive and difficult to use! The providers of CAD software have effective monopoly power over a very limited market, defined by the restriction to the commercial industry that can pay the high licensing costs (typically $3000 per seat! [12]). Since the proprietary industry mostly develops proprietary technology, there hasn't been much progress in making CAD systems fully collaborative, nor in making them integrate with communications channels to allow the virtualization of such standard engineering rituals as design reviews and mark-ups.
Figure 3: AutoCAD GUI. Although AutoCAD certainly has a GUI, many users find themselves relying on the command line to accomplish most 3D modelling tasks, perhaps reflecting AutoCAD's origin as a 2D CAD system (Image credit Wikipedia)

These systems should be "low hanging fruit", easy to pick off, but for one very unpleasant reality: 3D CAD systems are intrinsically very complex pieces of software that take a lot of work to write. Ultimately, however, they exhibit all of the criteria for a free software success story: they are complex, have rich network effects, and are just the type of high buy-in software that users expect to pay maintenance and support contracts for.

Unlike the desktop computing market, the manufacturing market is very much aware of the importance of standardization and the dangers of lock-in. Lock-in to single-source suppliers is a long-standing trap in the manufacturing industry, and managers already know the importance of avoiding it, which is why the value proposition of free-licensed software will be easier to present to them. On the other hand, there is still a credibility gap in proving that better (or even adequate) software can be maintained using free software development models, since conventional wisdom in the engineering community understandably favors engineered solutions (though there is a recent "rapid prototyping" trend, which could be comparable).

Special domain design tools

Although I believe 3D mechanical CAD is the most important design authoring challenge to be met, there are a lot of specialized domains for which free software design applications already exist. For software and systems design, there is GNU's dia program, which makes several kinds of specialized diagrammatic drawings easy to create. What can't be drawn with dia can be represented with more general vector graphics programs such as Inkscape or Skencil, and presented with KPresenter or other free software slide presentation packages. Electronic schematics can be drawn easily with more than one program, including, for example, xcircuit and oregano.
The GNU EDA project primarily addresses integrated circuit design, and there is pcb for designing printed circuit boards. The biggest complaint I have about these applications is that they have very inconsistent interfaces. I can imagine a project to refactor these programs to a common GUI standard, but each is well-established in a small design community of its own, so it isn't clear how popular such a project would be. Shop drawings for mechanical projects can be created in 2D using programs like QCAD or PythonCAD, and indeed, this is how things were once done in the manufacturing industry, although it's easy to see that relying on this leaves us decades behind the commercial proprietary design world.
Pieces of the puzzle

We have to consider the individual components that are available to piece together a CAD system out of major elements that already exist in the free software community. To do this, I'm going to assume a "Model-View-Controller" (MVC) model of development [13], in which steps are taken to keep these major components separate from each other, although there are some reasons why we might want to relax this constraint. Here I'm going to assume Python as an integration language, to simplify some of the choices, and also because Python is a language that is pretty popular and fairly easy to use for engineers, as opposed to career programmers. For a project to succeed in the free software world, it's important that the implementation language be accessible to a sufficient quantity of the program's users to form a healthy development bazaar. [14]

Model (standard representation of CAD data and file formats)

CAD drawings are complex structured data, which is intrinsically object-oriented. Therefore it makes sense that some form of object database be used to hold drawing data, but the design of the schema is pretty complex. There isn't one obvious way to represent a 3D object mathematically; there are many different systems, optimized for different applications. Fortunately, only a relatively small number of the nearly infinite possibilities are actually in use, and so the representation of CAD data remains solvable. But it's a big enough problem to be recognized by the commercial manufacturing industry, and to call for some form of standardization.

Luckily for us, this has happened in the form of the Standard for the Exchange of Product model data (STEP) standard, also known as ISO 10303 and ANSI PDES [15]. The most important contribution of STEP is not the data representation, but the classification and standardization of the object-oriented data models for representing CAD data. There are so many representations in use, segmented across so many industrial classifications, that collecting all of that into one standard is a monumental task, and the STEP standard schemas are indeed a monster as a result. However, this means we can regard the STEP standard as a fairly complete assay of what manufacturing users need in a CAD system, which provides a blueprint not just for the file formats that we will use, but for the internal data representations, and even for the GUI controls that must be provided to manipulate that data. Even though STEP is intended as an exchange standard, it also provides a plan for developing the software itself.

STEP is not as open as we would like it to be. There don't appear to be any patent encumbrances, which is important, but the "source" documents from ISO are mostly copyrighted and available only for a fee [16], which you usually must pay to your national standards organization (ANSI in the USA, for example), and at least at present, the fees are too high for individual developers to pay. Since a free software development project relies on the cooperation of unaffiliated parties to operate, there's no easy way to spread the cost of
these documents over a development team, as there would be in a single company, so this practice is effectively protectionist towards proprietary software vendors. This is not particularly popular with the manufacturing industry (who are of course the customers of those vendors, and would stand to benefit considerably from a free-licensed CAD solution), nor apparently with the ISO standards developers with whom I corresponded in preparing this article. [17]

So, if you look closely enough, you will find that a free-licensed open-source standard is there, in the form of the Express schema listings for STEP, which are available freely (they are explicitly copyright-disclaimed by the ISO, which means they are essentially subject to public domain rules [18]). If you regard the ISO "source" documents as documentation, which is really what they are, and the Express listings as the true source, then STEP can at least unofficially be used as a free format.

There are already two important sites dedicated to this mapping of the STEP standard into a free format: the NIST's Step Modularization project [19], now hosted at Sourceforge, which provides an Express-to-XML translator, and the Engineering Exchange for Free (EXFF) site [20], which contains a number of useful resources for STEP. Combined with the free Express listings defining STEP, it's therefore possible to map the standard first from Express to XML, and then from XML to a variety of object-oriented programming languages, using existing XML-to-object mapping libraries available for popular free software programming languages such as C, C++, and Python. Clearly, the wide range of alternatives in this mapping process means that there will be some issues with assuring compatibility, but it's a definite start at liberating the STEP standard.
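To make the Express-to-XML-to-objects idea a little more concrete, here is a rough sketch of the last step in Python, using only the standard library. The element and attribute names below are invented for illustration; a real mapping would be generated from the STEP Express schemas rather than written by hand.

    # Sketch: loading an XML-serialized, STEP-style part description into
    # Python objects. The schema here is invented, not taken from ISO 10303.
    import xml.etree.ElementTree as ET

    SAMPLE = """
    <assembly name="bracket">
      <part id="p1" material="aluminium">
        <cylinder radius="5.0" height="20.0"/>
      </part>
      <part id="p2" material="steel">
        <block x="40.0" y="20.0" z="5.0"/>
      </part>
    </assembly>
    """

    class Part:
        def __init__(self, ident, material, solids):
            self.ident = ident
            self.material = material
            self.solids = solids      # list of (shape name, attribute dict)

    def load_parts(xml_text):
        root = ET.fromstring(xml_text)
        return [Part(p.get("id"), p.get("material"),
                     [(s.tag, dict(s.attrib)) for s in p])
                for p in root.findall("part")]

    for part in load_parts(SAMPLE):
        print(part.ident, part.material, part.solids)

In a real toolchain, the Part class (and hundreds of others) would be generated automatically from the Express schemas, which is exactly the kind of work the Express-to-XML translator is meant to make possible.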
Figure 4: Liberating STEP. The STEP standard could be "liberated" using the mapping tools from the Step Modularization project to convert Express schemas to an XML format. Once converted to XML, there's a wide range of object-mapping tools for different languages, including Python. It won't be a unique mapping, as it will also depend on the translation tools used, so it will probably be desirable to standardize on a toolchain after testing the alternatives, and provide free documentation for the XML-based derived standard

There's also a library called OpenCascade [21] based on the SDAI C++ implementation of STEP, but despite the name, this package is "non-free". Free software developers who want to base their work on an SDAI implementation would do better to go back to the original NIST C++ implementation [22], which is "copyright free" (produced by the US government). Less expensive electronic forms of the complete ISO "source documents" may become available in the future, but there is no plan to make them freely distributable, as far as I have been able to ascertain. Working from the freely available documents to create a "liberated" CAD standard is probably the best strategy.

View (3D rendering)

At the most basic level, of course, are fast 3D rendering libraries and hardware standards such as OpenGL, but these are primarily for interfacing with (or emulating) hardware, and so are really more primitive than we want. Fortunately, many 3D rendering libraries have been built on top of these standards, primarily driven by game and computer animation applications.
Some of the more prominent free-licensed 3D rendering libraries include Crystalspace [23] (written in C++, mainly to support massively-multiplayer online games), Soya 3D [24] (written in Python and Pyrex, also mainly for games), and Blender [25] (written in C, mainly for commercial computer animation applications). All are under suitable free licenses (GPL). Crystalspace and Blender are both primarily frameworks which embed a Python interpreter, while Soya 3D is a Python module library. The latter is better from the point of view of embedding the viewer as a major component, especially if the integration language will be Python. Blender also has embeddable builds: a game engine and a browser plugin, which might be investigated as options for a more pure "view" component. In fact, though, the "impurity" of Blender as both "view" and "controller" may be an advantage, since it already provides excellent control for viewing or browsing a 3D model at various detail levels. This is a large part of what one would hope to gain by using Blender, so it's worth considering abandoning the MVC model for this more complete system.

Controller (graphical user interfaces)

Given that the concept here is for a fully internet-connected collaborative CAD system appropriate for free-licensed design projects, a very attractive idea is to integrate it with cross-platform browser technology. This would mean using Mozilla's XPFE system, and therefore XUL for the GUI environment. The biggest drawback to this has hitherto been the limitation to Javascript as a GUI scripting language, which is less than ideal for a project of this type. By the time you read this, however, XUL will likely support Python bindings, so this problem is being resolved [26]. This approach would have the further advantage of easing integration with other out-of-channel communications such as email, web forums, and chat systems, in addition to the formal markup and review process that must be intrinsic to a collaborative CAD system.

The design of the GUI, both as a skin style and as a guide to organization, would do very well to follow Blender's lead. Although Blender's non-standard GUI is often criticized by newcomers, it is much loved by the people who use it daily. It is highly optimized for efficient use in 3D modelling applications by people who use it a lot. That makes it an excellent choice for a 3D CAD GUI as well. Although using the Blender interface itself (rather than emulating it with XUL, for example) is an attractive option, it has some technical obstacles: Blender isn't really factored into an MVC model, and although there's some developer interest in this, it isn't a priority for Blender developers, who are focused on the core tasks of artistic 3D modelling and animation. Other GUI options include the usual suspects, such as PyGTK, Qt, and wx.
Design concepts

There are obviously many, many different ways in which these components could be assembled to create a functioning, network-collaborative 3D CAD/CAM system. Using Python as an integration language (as I have chosen to do for the server side on the Narya project), one could use ZODB [27] as an object model, deriving the schema from the Express schemas for STEP via the Step Mod package. Then one could, for example, create a Mozilla-based design using the new Python bindings for XUL and integrate it with a custom build of Blender based on the game engine or plugin versions.
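To give a flavor of what the design-model side of that might look like, here is a minimal sketch using ZODB's persistent-object API. The Fastener and Assembly classes are invented stand-ins for whatever classes an Express-to-Python mapping would actually generate from the STEP schemas, so treat this as an illustration of the storage approach rather than a real schema.

    # Sketch: a toy "design model" held in ZODB (invented classes, not STEP).
    import persistent
    import transaction
    from persistent.list import PersistentList
    from ZODB.FileStorage import FileStorage
    from ZODB.DB import DB

    class Fastener(persistent.Persistent):
        def __init__(self, designation, length_mm, tolerance_mm):
            self.designation = designation
            self.length_mm = length_mm
            self.tolerance_mm = tolerance_mm

    class Assembly(persistent.Persistent):
        def __init__(self, name):
            self.name = name
            self.components = PersistentList()

    # Open (or create) the drawing database and commit a small assembly.
    db = DB(FileStorage("design.fs"))
    conn = db.open()
    root = conn.root()
    bracket = Assembly("mounting bracket")
    bracket.components.append(Fastener("M6 x 1", 25.0, 0.1))
    root["bracket"] = bracket
    transaction.commit()
    db.close()

The attraction of an object database here is that the drawing data stays in the same object-oriented shape that the STEP data models describe, rather than being flattened into relational tables.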
Figure 5: Elements of an ideally MVC-designed CAD system.

The purist approach to the CAD system would be to create separate model, view, and controller components (in a really ideal case, each would be pluggable, allowing for a variety of separate but compatible tools to coexist, but I'll be happy with just one of each). This approach is the soundest for development from scratch (or even nearly from scratch).

Or one could abandon pure MVC design, and build the system entirely within the Blender Python API. Since Blender's internal model lacks features of the STEP model, and there is not likely to be a round-trip guaranteed set of transformations between the two, it would probably be necessary to maintain two parallel models: a "representation model" within Blender, used for visualization, and a "design model" within the CAD database, which is the true "original". A transformation would be provided to render the design model into the representation model for display. Tweaks to the Blender source code (in C) would probably be needed to provide CAD objects with callbacks that could be handled by Python code acting on the design model; otherwise, the representation model could drift away from the design model whenever non-CAD operations were performed on it, causing a synchronization problem and a lot of user confusion (a rough sketch of this arrangement appears below).

Too many cooks

In researching this article, I found nearly a dozen different initiatives to create free software CAD/CAM systems, started by different people and ranging widely in success. None of them was on the scale of the need expressed in this article, but they make a clear case that such software is wanted and that there are people willing to work on it. Also, few of the projects referenced the others, which suggests a lack of communication and organization. I know that I don't really have the resources to pursue a CAD programming project myself (unless through some form of commercial collaboration), but what I certainly can do is provide a resource wiki to keep track of other people's projects. Please have a look (http://client.narya.net/Wiki/CadSystems), and help me update the site with any additional projects that you know about. Maybe if the information is collected in one place, a solution will present itself.

Sometimes you need a cathedral after all

Sophisticated 3D graphics and design modelling software is no picnic to write. This is one of those big, complex, highly-interdependent problems at which "cathedral" engineering seems to excel, and on which "bazaar" development frequently stumbles. The many half-finished projects in this area attest to that. It will be necessary to create a successful prototype of the core of such a system before it can be built on and extended by a bazaar community, and that prototype will require a lot of "up front design", contrary to the new conventional wisdom of "extreme programming".

Machines in general are complex, tightly-coupled systems, so it shouldn't come as a surprise that the mathematical models of mechanical systems are also complex and tightly-coupled, nor that the CAD/CAM software systems that manipulate those models need to be.
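Returning to the two-model arrangement mentioned above, the following sketch illustrates the general shape of the idea: an authoritative design object that pushes changes out to a viewer-side representation through callbacks. All of the classes here are hypothetical stand-ins; a real implementation would build actual Blender mesh data in tessellate() and register the callbacks through Blender's own event handling rather than a plain Python list.

# Hypothetical sketch of the design-model / representation-model split.
class DesignPart(object):
    """Authoritative CAD object, as it would live in the design database."""
    def __init__(self, name, dimensions):
        self.name = name
        self.dimensions = dimensions
        self._observers = []            # representation-side callbacks

    def subscribe(self, callback):
        self._observers.append(callback)

    def set_dimensions(self, dimensions):
        self.dimensions = dimensions
        for callback in self._observers:
            callback(self)              # push the change to every representation

class BlenderRepresentation(object):
    """Stand-in for the mesh a viewer such as Blender would actually draw."""
    def __init__(self, part):
        self.mesh = self.tessellate(part)
        part.subscribe(self.refresh)

    def tessellate(self, part):
        # A real system would rebuild viewer mesh data from the exact CAD
        # geometry here (the one-way design -> representation transform).
        return "mesh(%s, %r)" % (part.name, part.dimensions)

    def refresh(self, part):
        self.mesh = self.tessellate(part)

part = DesignPart('flange', dimensions=(120.0, 8.0))
view = BlenderRepresentation(part)
part.set_dimensions((120.0, 10.0))      # a design change propagates to the view
print view.mesh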
Fortunately, there is strong reason to believe a free software CAD system could be developed commercially. CAD systems are high-buy-in software, not unlike web servers, compiler and library toolkits, or data reduction and analysis systems. They are primarily used within organizations which rely heavily on them working without a hitch. This means there is a substantial motivation for those organizations to pay for maintenance and support contracts, and therefore a Cygnus Solutions [28] or Zope Corporation [29] business model (free-license the software, but charge for support services) would seem to be viable.

Free software tools and free data exchange formats are in the interest of the majority of the manufacturing industry, as demonstrated by initiatives such as EXFF. We can therefore expect cooperation and support from the end users of CAD once a serious effort is made to implement it, even though we'll also see competition from existing vendors of proprietary CAD systems, such as AutoDesk, and free software will still have to prove itself to a new set of users.

The real challenge is getting started. Cygnus started out supporting GNU software that already existed and had been developed for quite different reasons, while Zope Corporation (then called Digital Creations) originally developed their product on a proprietary basis. It was only after these software products already existed and were relatively mature that these businesses made the decision to make support contracts their primary source of income. I'm not aware of an example where such a complex piece of software was developed by a company planning to release it for free and support it commercially.
Planning for the future

Although free design is already being done in limited domains, the availability of better design authoring tools to a wider range of potential developers is needed to spur an explosion of interest in bazaar-model development of hardware. Nowhere is this gap more severe than in 3D mechanical CAD/CAM, which is essential for the kinds of aerospace and mechanical engineering that will be called for as space pioneers attempt to settle and build using the materials they find in their new environment. If we want to follow a free development model, with all the advantages it implies, we'll need this kind of software to be created; and there is reason to believe that developing it can be profitable as well as widely useful, although it will take a fair amount of entrepreneurial vision to make that happen.
Figure 6: CAD visualization. Model of hardware for the NASA "return to the moon" initiative. Visions of space frontier technology have relied extensively on CAD and 3D models, to the point that people expect to see such images before they take a technological idea seriously. It is far too much of a disadvantage for free designs to either reject 3D CAD or use proprietary 3D CAD systems. Blender can already create scenes like this, but without the technical data model to back it up. (Image credit: NASA/John Frassanito and Associates)
This is an area where a free software CAD system could not only match the proprietary competition, but leave it in the dust, simply by leveraging the many years of experience in building collaborative, internet-based systems that are the legacy of free software developers worldwide. A true free software, general purpose, collaborative 3D mechanical CAD/CAM system won't simply be as good as the proprietary alternatives; it will likely be better than anything else on the market.
Notes and resources

[1] Richard Stallman. The GNU Project (http://www.gnu.org/gnu/thegnuproject.html).
[2] Forrest J. Cavalier, III. Some Implications of Bazaar Size (http://www.mibsoftware.com/bazdev/), 1998.
[3] KDE Documentation (http://docs.kde.org/).
[4] Skencil Development Guide (http://www.nongnu.org/skencil/Doc/devguide.html).
[5] Blender Documentation (http://www.blender.org/modules/documentation/).
[6] xcircuit (http://opencircuitdesign.com/xcircuit/).
[7] pcb (http://pcb.sourceforge.net/).
[8] QCAD (http://www.ribbonsoft.com/qcad.html).
[9] GNU EDA (http://www.geda.seul.org/).
[10] AutoCAD (http://en.wikipedia.org/wiki/AutoCAD).
[11] Michael Brundage. Network Places (http://www.qbrundage.com/np/index.html).
[12] Price estimated from CDW Catalog (http://www.cdw.com), August 2005.
[13] Model View Controller definition (http://en.wikipedia.org/wiki/Model_view_controller).
[14] Terry Hancock. Praise for Python (http://blog.freesoftwaremagazine.com/users/t.hancock/2005/11/11/praise_for_python), 2005.
[15] STEP / ISO 10303 (http://www.tc184-sc4.org/SC4%5FOpen/SC4%5FWork%5FProducts%5FDocuments/STEP_(10303)).
[16] ISO Restricted Documents Notice (http://www.tc184-sc4.org/Notices/Authorized%5FUsers%2DPadlocked%5FDocuments/).
[17] Howard Mason, Chair, ISO TC 184/SC4. Private communication.
[18] Express Listings (http://www.tc184-sc4.org/SC4%5FOpen/SC4%5FWork%5FProducts%5FDocuments/STEP%5F%2810303%29/) (search each block for "schema").
[19] STEP Modularization Project (http://stepmod.sourceforge.net/).
[20] EXFF (http://exff.org/).
[21] OpenCascade (http://www.opencascade.org/) (non-free).
[22] NIST SDAI C++ Library (http://www.mel.nist.gov/msidstaff/sauder/SCL.htm) (free-licensed).
[23] Crystalspace (http://www.crystalspace3d.org/).
[24] Soya 3D (http://gna.org/projects/soya).
[25] Blender (http://www.blender.org).
[26] Brendan Eich. Python for XUL scripting (http://weblogs.mozillazine.org/roadmap/archives/008865.html), 2005.
[27] Using ZODB Standalone (http://www.zope.org/Wikis/ZODB/FrontPage).
[28] Cygnus Solutions (http://en.wikipedia.org/wiki/Cygnus_Solutions).
[29] Steve Litt. Zope Corporation/Digital Creations (http://www.troubleshooters.com/tpromag/199906/_digcreate.htm), 1999.
Biography

Terry Hancock is co-owner and technical officer of Anansi Spaceworks (http://www.anansispaceworks.com/), dedicated to the application of free software methods to the development of space.
Copyright information

This article is made available under the "Attribution-Sharealike" Creative Commons License 2.5 available from http://creativecommons.org/licenses/by-sa/2.5/.

Source URL: http://www.freesoftwaremagazine.com/articles/free_matter_economy_4
A techno-revolutionary trip on the internet

Reflections on the lessons from Dean for America

By Tom Chance

When I think about American presidential elections, three things come to mind: money, corporate power and disenfranchisement. One of the big political stories of our time is the decline of party politics, especially among the young. But another story is that of the internet revitalising democracy, empowering and connecting citizens in a new, vibrant space. Often utopian, theoretical and romanticised, this vision of the future was made real in Howard Dean's recent race for the Democratic presidential candidacy in America. With a volunteer network numbering in the hundreds of thousands mobilising over the internet, Dean went from no-hoper to pole position in a matter of months. The campaign manager credited with much of this success, Joe Trippi, wrote a book recounting the experience and his lessons for America under the provocative title The revolution will not be televised: democracy, the internet and the overthrow of everything.

Trippi's big idea is that the internet is going to totally change American politics, in part returning it to a golden era of localised engagement while also propelling it into a bright new future of decentralised participation. In this article, I reflect on some of the wider issues he tackles and try to understand the implications for representative democracies and their established organisations. Because of the enormity of the subject, I will simply step past the many activities happening outside the mainstream.

First, I should recount some of the facts of the campaign. It started with very little money, a tiny campaign team and a candidate with no real chance of winning. When hired as the new campaign manager on the strength of his decades of experience, Trippi immediately began to make changes. His first was to leverage a web site called Meetup, a social networking site where people registered their location and an interest in meeting about some subject—in this case Howard Dean. The web site would suggest a time and location, and so people began to meet and discuss ways to help the Dean campaign. The more Trippi's team ceded control to volunteers on the internet, the more successful the campaign became, to the point where Dean was the front runner for a short period of time. He was eventually brought down, ironically, by the mainstream media.
Breaking down the broadcast media

To understand Trippi's book, and many other arguments to do with the internet, you need to understand the difference between broadcast and multicast media. Television, radio, newspapers and other "traditional" media broadcast information at the consumer. Any opportunity to feed your ideas back to the media, and to have others hear your views, tends to be trivial: a "points of view" programme, a "letters to the editor" page, or live comments sent in from mobile phones. The internet, Trippi suggests, is fundamentally different because it is multicast, which is to say that information comes from many outlets and you are free to add your own. Blogs covering politics, for example, necessarily provide a far wider range of views and generally allow you to comment directly on articles, start debates, or even start up your own web site or blog to counter opinions you disagree with.

Trippi suggests that the broadcast media have subverted democracy and that the internet will "overthrow everything". That's a little hyperbolic. Rather, as the centralised corporate broadcast media have taken an increasingly central role in democratic discourse—telling us what the political parties think, deciding what constitutes news, and so on—that discourse has been subverted. We may still discuss politics around the dinner table, at the pub or at the workplace, but our relationship with politicians is marked by our consuming their ideas and the media's interpretation of them. We are, in a sense, no longer citizens unless we get involved with certain mainstream political organisations, which often provide little scope for genuine participation and grassroots influence.
Are we in a dictatorial democracy? "America", by ?(c)LoVe(c)?. Released under the Creative Commons Attribution license.

Worse still, because politicians and their parties tend to get only short media spots with which to communicate their message, the message becomes simplified. In the US elections, political adverts tend to be no longer than thirty seconds. How can a candidate possibly convey a complex programme of policies and opinions in that time frame? They can't, and so instead they choose to emote, to whip up fear or anger, and to vilify the opposition. As with many other aspects of politics, the power to do this depends upon money: good adverts take a lot of funding, and the "opposition research folders" needed for attack ads take additional time and expertise. So parties and campaigns become ever more geared towards winning the allegiance of rich donors, leaving the people behind.

Trippi appeals to this dystopian vision to emphasise the power of his "people's campaign", but it is (perhaps intentionally) misleading. People simply aren't this apathetic and apolitical; their political interests just tend to manifest themselves outside of the mainstream, where the media lens fails to reach. The broadcast message is that local politics, such as a campaign to save a well-loved building, is fundamentally different to national politics, and that both are declining. In reality, broadcast media turn citizens into consumers, while the internet can help reassert their role in the democratic process.

Into this valley of death strode Trippi and his cohort of internet entrepreneurs. While his writing is a little dramatic, Trippi's team was truly innovative. Though the Dean for America campaign still ran the traditional media ads, they saw an opportunity to use the internet to break this lock, both for the good of democracy and for their campaign. The big idea is that web sites are cheap to set up and can be opened up in any number of ways to encourage participation. Trippi's team blogged about their work, read feedback and responded to it; they encouraged activists to set up local meetings via the innovative social networking web site, Meetup; they showcased others' work, and promoted efforts that went far beyond their own designs. They turned the logic of politics on its head, from a one-to-many relationship in which a politician leads the masses into a many-to-many network of citizens engaging in issues and choosing a politician to represent and lead this discourse.

Of course, the internet is still open to the abuse of the broadcast media because it can replicate the broadcast methodology. Consider that when Xerox first designed graphical user interfaces, they made them resemble a secretary's office, with folders, documents, a waste bin and so on. Rather than working on an innovative interface that would change the way we work, they chose to adapt new technology to old methodologies. Similarly, most politicians use web sites simply to broadcast their message, replicating the methodologies of television campaigning on the internet. Indeed, Dean's campaign team was reluctant even to link to the Meetup web site from their own until Trippi convinced them otherwise; when they did, the numbers on Meetup shot up. The obvious point here is that it is not the technology alone, the internet and web sites, that will revitalise democracy. It was their decision to cede control that introduced a new dynamic to the campaign, one that made it ever more democratic and open.
As they became increasingly dependent upon participation for momentum, they were forced to spend more time and effort working in that area. From the moment the Meetup link went live on the official web site,
Trippi says, the people took over the campaign. First thousands, then tens of thousands, then hundreds of thousands of people were registering to meet up; over 180 campus groups were started; millions of dollars were raised not through pandering to the interests of the rich but by appealing to the civic spirit of Dean's supporters. At its height "the discussion was the campaign", and the hundreds of thousands of volunteers were dwarfing the brainpower and efforts of the thirty-odd official campaign team.
Transforming the internet and the dinosaurs

One of Dean's slogans was "you have the power", reflecting his belief in a new kind of politics. Like many disillusioned with bureaucratic modern political parties, the Dean campaign wanted to get away from transactional politics, where politicians negotiate deals, to transformational politics, where politicians empower citizens to work together towards a common cause. The internet provided them with the tools to mobilise on a national scale, where previously grassroots politics had generally happened far more locally. The status of a presidential campaign, and the fact that Dean's message was so different from the standard Democratic party line, saw people flocking to his cause. By radically decentralising control, ceding it to the people, and running on a passionate and controversial platform, Dean mobilised one of the most apathetic demographics in American society: young people.

Young people were a big surprise. Called "deanie babies", they pushed the grassroots campaign forward more than any other demographic, with some even travelling vast distances across the country to sleep on the office floor and work 17-hour days. Trippi hadn't seen anything like it since Bobby Kennedy in the 1960s. While some like to romanticise the internet as an underground undermining the dinosaurs of politics, what was significant about Dean's campaign is that it was relevant to the kids. A mainstream organisation tapped into an apparently apathetic demographic and found that the kids wanted to be engaged, but that they were alienated by middle-aged baby boomers who had obviously forgotten their youth. So obsessed with their "apathetic youth" frame were these politicians and media moguls that they began to ridicule the deanie babies as extremist left-wing vegans—an unfair characterisation by Trippi's account—when they should have been praising them for their involvement.

Civic involvement spread beyond the confines of Dean's bid for the Democratic nomination. Bloggers and Meetup aficionados organised "Dean Corps", which Trippi describes as "a sort of low-impact, weekend Peace Corps... they got together in neighbourhoods to clean up riverbanks, to read to children, and to collect food for homeless people". It's not as though Trippi's team used the internet to invent the notion of civic community participation; they didn't even come up with the idea of combining the two. Given the tools to organise, and the profile to attract enough attention, people took it upon themselves to organise and participate in all kinds of community activities that would have the likes of Margaret "there's no such thing as society" Thatcher running for cover.
Will demonstrations become a thing of the past thanks to the internet? Photo by dustpuppy. Released under the Creative Commons Attribution license.

Dean's campaign was, in American campaigning jargon, "insurgent", because he started with apparently no hope of success. His team took his party and his opponents, lumbering dinosaurs in comparison, by surprise. The campaign may well be the first ripple in a series of waves that wash away levels of bureaucracy and
corruption in representative democracies. At least, that's what Trippi would have us believe. At the very least, we can place this series of events in a wider context to see how change is happening. For this we can look across the Atlantic to Britain. Dean for America, the British Broadcasting Corporation and Yahoo! are signs that the mainstream may adapt to the internet, rather than forcing the internet to adapt to their old logic.

The British Broadcasting Corporation (BBC), one of the largest and most respected organisations in its field, and also one of the oldest in broadcasting, is embracing open technology and content like no other comparable organisation. They have been opening up their web sites with RSS, XML and APIs; they are starting to release their archives under a Creative Commons-based license; they have even been developing their own video compression and transmission technology in a series of new free software projects, rather than doing it all in-house and "protecting" their work with patents and proprietary software licenses. Their strategy is opening the corporation up not only technologically but also socially, encouraging civic participation in the public institution, whether through their developers' network, their web-based collaborative projects or the imperative to remix implicit in their Creative Archive.

But you would be wrong to assume that some top executives at the BBC have decided and decreed that the BBC must embrace the digital era. Rather, small groups of visionaries are working towards that vision in their departments, and the cumulative effect is that the lumbering dinosaur of public service broadcasting is in fact evolving faster than its commercial competitors. Those visionaries made some ripples that have turned into a wave, just as Dean's team may have done in politics.

The core message of Trippi's book could almost read like a guide book for corporations wanting to adopt the free software community's approach. Free software hackers used free software as the basis for the Dean campaign web site. Slowly these interests are coming together—free and open technology, participatory politics and the mainstream—to change our world.

Trippi's seven rules for internet activism:

1. Be first
2. Keep it moving
3. Use an authentic voice
4. Tell the truth
5. Build a community
6. Cede control
7. Believe again
Are we heading towards an anarchistic utopia?

If you're like me, you will have put down Trippi's book thinking: "the internet is the first technology that truly gives people full access to knowledge... we can accomplish anything... [I] have the power!" That's a selection of the claims Trippi makes on the last page. Part of the problem with his book, and with many writers in the field, is that he is an unabashed technophile. Because his purpose in the book is to sell his Big Idea and to enthuse us with his success, he doesn't spend much time looking at where the internet might be bad for democracy, nor at the extent to which it is the internet specifically, and not other factors, that led to his success. Of course he's not so stupid as to suggest that it was the internet alone—his emphasis on his team's strategy makes that clear—but he is still uncritical in his appraisal.

This is not new. According to Douglas Kellner: "...film, for instance, was celebrated by some of its early theorists as providing new documentary depiction of reality, even redemption of reality, generating a challenging art form and novel modes of mass education and entertainment. But film was also demonized from the beginning for promoting sexual promiscuity, juvenile delinquency and crime, violence, and copious other forms of immorality" (Douglas Kellner, New Technologies and Alienation: Some Critical Reflections, http://www.gseis.ucla.edu/faculty/kellner).

Trippi is rightly critical of the mainstream broadcast media for the mediocrity of much of its content and for the very nature of its transmission. But while television can be uncritical, biased and unengaging, so can the internet. Many blogs are every bit as bad as the worst of the broadcast media, and an active minority can gain
disproportionate power on the internet just as easily as through the broadcast media, given the right infrastructure. Almost 2500 years ago, Plato warned of the powerful speaker, the dictatorship of orators, who can whip up a frenzy amongst communities. Would Trippi have been so rapt if an ultra-conservative candidate had done the same, bringing hundreds of thousands of people forward to campaign against everything he thinks is important, or would he have engaged more critically with the issues he has raised?
Hopefully internet campaigning will make balloons unnecessary. Photo by Tom Chance. Released under the Creative Commons Attribution license.

So let's summarise some of these issues quickly. The internet can foster a sense of community, of civic participation and of belonging to a mainstream political party. It provides a significant source of fundraising, though generally only for candidates in the main political parties. But television still reaches more voters, it broke Dean's candidacy, and those who benefit from the power imbalances it creates are unlikely simply to give up their position when the internet becomes the main mode of communication.

Of course the internet, both in terms of the technology and of the social and political practices associated with it, is evolving. What will this future look like? We can find some answers by looking at the cutting edge in mainstream politics today. Already, some politicians in the UK use blogs that allow comments, and actually respond to the comments posted. The people behind the Dean campaign software have been working on CivicSpace, "an integrated and extensible platform of online organising tools for all manner and size of organisations". Though no political party in any election that I am aware of has replicated Dean's successful use of the internet, it must surely be only a matter of time. A promising development in Europe is Greensnet (http://www.greensnet.org), which will try to link together all European green organisations, parties and individuals in a grassroots activism network using software similar to CivicSpace.

To answer my own question, the future is very unlikely to be an anarchistic utopia any time soon. So long as mainstream organisations adapt to the internet, the political spaces on the internet will probably adapt to their presence; if the parties can really change the way they interact with people online, then people previously lost to the world of mainstream politics will reintegrate the parties into their lives. Challenges from outside these norms, from organisations and collectives using the internet for radical politics, will continue to shape those political spaces, but so long as the dinosaurs evolve they won't replace them. The challenge for all of us who are both technically minded and politically aware is to ensure that the internet continues to be a tool for empowerment rather than a tool for control, and to push the mainstream in the same direction that Trippi took Dean's wing of the Democratic party. We won't see a techno-revolution, but we might just form a revolt that will change politics forever.
Biography

Tom Chance is a philosophy student, free software advocate and writer. He is the Project Lead of Remix Reading (http://www.remixreading.org/), the UK's first localised Creative Commons project. You can contact him via his web site (http://tom.acrewoods.net/).
Copyright information

This article is made available under the "Attribution-Sharealike" Creative Commons License 2.5 available from http://creativecommons.org/licenses/by-sa/2.5/.

Source URL: http://www.freesoftwaremagazine.com/articles/revolution_not_televised