http://www.freesoftwaremagazine.com/node/1208/issue_pdf


Issue 9




Table of Contents

Issue 9
Patents Kill
Interview with Patrick Luby
    Tony interviews Patrick Luby, the person behind OpenOffice for Macintosh
I read the news today, oh boy
    Reading RSS
Mozilla: a development platform under the hood of your browser
    Should Java programmers migrate to it?
Introduction to Zope
    Part 1: Python
Code signing systems
    How to manage digital certificates, Software Publishing Certificates and private keys for code signing
Free, open or proprietary?
    Philosophical differences in software licensing
Does free software make sense for your enterprise?
    Finding free software at your office is like finding a Republican in San Francisco
The will to code
    Nietzsche and free software
How to get people to work for free
    Attracting volunteers to your free software project
Towards a free matter economy (Part 3)
    Designing the Narya Bazaar
What is code?
    A conversation with Deleuze, Guattari and code






Issue 9
By Tony Mobily

In Issue 9 of Free Software Magazine, Saqib Ali gives the public a lesson in private key management and David Horton shows us ways to attract volunteers for free software projects. There’s also an intro to RSS news feeds by John Locke, and much, much more.

Source URL: http://www.freesoftwaremagazine.com/issues/issue_009

Published on Free Software Magazine (http://www.freesoftwaremagazine.com)





Patents Kill
By Tony Mobily

On the third of September 2005, I was diagnosed with cancer—testicular cancer. The pain started during a party (Dave Guard, our Senior Editor, was there as well). In just one night, I went through a sudden and unexpected change: from being a young healthy person, full of life, enjoying hanging out with his friends, to the ER of Fremantle Hospital, being told that I might have cancer and needed to be operated on immediately. I am fine now. I’ve just been told that the tumour seems to have gone. On the 14th of November I will have the final answer, which will determine whether I will need to go through the ill-famed chemotherapy. In any case, my prognosis is encouraging; I promise you won’t get rid of your favourite Editor In Chief that easily! People say that cancer is a life-changing disease, regardless of what its outcome is. I can confirm it, without a shadow of a doubt. Cancer changes you deeply; it makes you realise that we are here, in this world, only for a short ride—a ride that might stop at any moment and without warning. For people with cancer, CT scans are a life-saver. They tell you if your lymph nodes are too big or if they are changing, and they make accurate diagnosis possible. The same applies to the PET scanning technique, which promises to be the new generation of full-body scanning. At the moment, there is only one PET scanner in Western Australia (a rather rich state). The cost of such a machine is insane (I have no other word for it); at the hospital, they are already thinking about upgrading it because, only three years after the purchase, it’s already obsolete. Aside from requiring an injection of radioactive material into my body, a PET scan would confirm for sure whether the lymph node near my kidney (the big suspect in my case) has been attacked by the tumour, or whether it’s just simply large. The problem is that 20 people every day need a PET scan, while the hospital can only complete 13 scans a day.
The government says that it cannot afford another PET scanner, and I am not considered a high-risk patient. For the diagnosis, I will have to trust the good old tumour markers and CT scans. Why? Because software and medical patents make PET scanners ridiculously expensive (and also because Philip Davies, from the Department of Health and Ageing in Australia, has decided that Australia needs to take its time (http://www.mja.com.au/public/issues/180_12_210604/dav10271_fm.html) before adopting PET scans. Fortunately though, there have been some interesting responses (http://www.mja.com.au/public/issues/181_09_011104/letters_011104-4.html) to his decision, which might speed up this process). First of all, I have to admit that my research wasn’t very thorough. In fact, I stopped researching halfway through, because I started to feel sick from what I was finding out (and because right now, for me, avoiding stress is an absolute priority). Also, please be aware that I am biased: I am very wary of medical patents, and I consider software patents to be a ridiculous idea. So, I find software patents applied to medical equipment particularly disturbing. Searching for “PET AND SCAN AND ALGORITHM” at the US patent office returns 869 (yes, eight hundred and sixty-nine) granted patents. A skilled body imaging technician I interviewed confirmed that when a new imaging technique comes out, a new gold rush starts—where gold is represented by patents. He also confirmed that these new machines become affordable only after a few years (normally, around seven), when the patents related to those machines start expiring. Apparently, the same thing happened with the MRI. In ten years, when the PET gold rush is over, PET scans will be as common as CT scans are today.
To me, it’s absurd that governments allow pharmaceutical patents that last more than 7 years—especially if the same governments find themselves, because of those patents, in the position of not being able to afford the



medical equipment used to keep their citizens healthy. It’s absurd that a scanning technique turns into a gold rush, rather than an attempt to help people with illnesses to improve their health. It’s absurd that one third of the patents around PET are on software techniques which improve the representation of the information collected by the scanner. If the world made sense, the world’s governments wouldn’t allow medical patents which last more than two years, and would only allow pharmaceutical companies to charge very reasonable rates to other companies willing to use patented methods. They wouldn’t allow the enforcement of patents against third world countries (which is, incidentally, exactly what the United States government is allowing right now with its “Free” Trade Agreements, which are a nice way to rip off all those third world countries). They wouldn’t allow software patents, which often look like bad jokes (one-click shopping, anyone?). Why not? Because patents—especially patents related to medical research—can kill. They turn legitimate life-saving research into another way to make a quick buck; they make this world—the only one we have—less liveable, especially for those people who aren’t lucky enough to be rich and healthy. Ironically, patents were invented for the opposite goal: to guarantee that everybody could make use of everybody else’s inventions, paying a little share to the original inventor. Software patents turn from financially expensive jokes into a life-threatening stranglehold when they relate to medical equipment. If I sound too radical, imagine yourself (or a person you love) not having access to a vital technology because a pharmaceutical company director’s wife “needs” to go shopping in a more expensive four wheel drive, or “needs” to go on holiday on a bigger boat. It’s a disturbing thought; unfortunately, it’s reality.
We must say “no” to software patents, and (even more importantly) set definite limits to current patent systems; this is especially true for medical research, because in some cases patents can kill, and we, the smartest species on the planet, ought to know better.

Biography
Tony Mobily: Tony is the founder and the Editor In Chief of Free Software Magazine.

Copyright information
Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved.

Source URL: http://www.freesoftwaremagazine.com/articles/editorial_09



Interview with Patrick Luby
Tony interviews Patrick Luby, the person behind OpenOffice for Macintosh
By Tony Mobily

Patrick Luby wrote the software layer which allows OpenOffice to run on Macintosh computers without running an X server. This way, OpenOffice also looks like a native application. Since OpenOffice is one of the most important free software projects out there, the importance of his work cannot be overestimated. Patrick agreed to answer a few questions for Free Software Magazine.

TM: Patrick, first of all: please tell us a little bit about yourself. What do you do? What’s your programming background?

PL: I run my own software development consulting company called Planamesa Software. I have spent nearly a decade working as a software developer on a variety of commercial and open source projects, including OpenOffice.org and Apache Tomcat, using the C, C++ and Java programming languages on a variety of operating systems such as Linux, Mac OS X, Solaris and Windows.

TM: When did you first decide to get involved with OpenOffice?

PL: Back in 2000 and 2001, I worked at Sun in the OpenOffice.org group. I was the lead engineer of a small team that was trying to port OOo to Mac OS X. Although we made significant progress, Mac OS X was still in the developer preview stage so progress was extremely slow and difficult. It wasn’t until late 2003, when I had been working in Sun’s J2EE group, that I began to have renewed interest in porting OOo to Mac OS X. By that time, Mac OS X was much more stable and Ed Peterlin and Dan Williams had successfully implemented a working version of OOo 1.0.x using X11.

TM: At the beginning, the plan was to have NeoOffice/C and NeoOffice/J (with the interface written in C and in Java respectively). Now, NeoOffice/C seems to be dead. Can you clarify this for us?

PL: Well, actually, at the beginning Ed Peterlin and Dan Williams were working on a Cocoa-based version of OpenOffice.org that had native Aqua widgets, which they called NeoOffice.
While Ed and Dan were able to put together some amazing code, they found that Cocoa and OpenOffice.org were just not meshing well and they were spending most of their time tracking down crashes in the Cocoa APIs. Around this time, I had been dabbling with using Java as a shorter path to getting a stable native version of OpenOffice.org. I was impressed with the work that Ed and Dan were doing and I thought that Java might make a nice interim version that would keep users’ interest in OpenOffice.org alive while they worked on NeoOffice. So Ed and Dan invited me to join the NeoOffice.org project and we came up with the name NeoOffice/J for my code. While this name worked well, many people refer to NeoOffice/J as NeoOffice so, to avoid confusion, we now refer to the old NeoOffice product as NeoOffice/C. While no work is being done on NeoOffice/C, the work was extremely valuable as it served as the proving ground for Ed and Dan’s integration of native widgets into OpenOffice.org. Their work, which was the basis for the native widget support in Red Hat and Ximian’s custom versions of OpenOffice.org, will definitely help us when we add Aqua widgets to NeoOffice/J.

TM: Right now, NeoOffice/J on Mac is an amazing achievement, but its interface is still not quite there yet. For example, the widgets and the file dialog box are still non-native. Do you think OS X users will eventually have, thanks to your efforts, a version of OpenOffice that uses Apple’s widgets? And... well, I have to ask you this: when do you think it will happen?

PL: Now that a stable release of Neo/J is out, implementing Aqua widgets and dialogs is getting much closer. We need to upgrade to Java 1.4 first since Apple will not support Java 1.3.1 on the new Intel machines. However, after we move to Java 1.4, our goal is to add native Aqua widgets and dialogs.



TM: What’s your relationship with the OpenOffice developers like? Are you on good terms? Did your project take a while to gain acceptance?

PL: I think that we have a good relationship. I am regularly talking with OOo staff at Sun and Collab.net. While I have heard rumors that Neo/J is separate from OOo due to some conflict, this is far from the truth. The primary reason that Neo/J is separate is that it benefits both OOo and us. Since we are always several months behind OOo’s official releases, doing our development outside of the OOo development process allows us to make changes without breaking the officially supported OOo platforms. Then, once our changes are stable, we donate back any changes that are common to the X11 version of OpenOffice.org.

TM: What do you think about the fact that most of OpenOffice’s developers are Sun employees? Do you think this was behind the decision to drop the “official” Aqua port?

PL: This does not bother me at all. An application the size of OpenOffice.org requires a lot of highly skilled developers, and my belief is that if Sun or some other company didn’t fund all of those developers, OpenOffice.org development would be extremely slow.

TM: Do you think the NeoOffice/J team should get a good sponsorship from a company (Sun?) so that the development can be sped up?

PL: We are always open to that and I am constantly pursuing outside funding. While a few companies have always been supportive of our efforts, funding has not materialized. So, instead of spending too much energy on looking for a big sponsor, we have worked on building our community. This, in turn, has led to a continual stream of small donations. These donations, while minuscule to a company the size of Sun or Apple, have had a huge impact on NeoOffice/J, as I am now able to use them to reduce the amount of consulting work that I must do and use that time to work on NeoOffice/J.
It really has amazed me how much of an impact small donations can make. I would guess that many donors may wonder where their $10 donation goes. But collectively, these donations have translated into real improvements to NeoOffice/J.

TM: If you were to rewrite OpenOffice from scratch, what would you change? Would you write something that is completely interface- and OS-independent? Do you think this is what the OpenOffice team should have done in the first place?

PL: I can’t really provide an answer to this question. In my opinion, rewriting OOo from scratch is such a huge task that it would require many people over several years to get a first working release out. This would be very costly, so I don’t really consider it an option for anyone other than a large company with tens of millions of dollars to burn.

TM: I have an iBook 900Mhz, and I just can’t run NeoOffice/J on my laptop. It takes about 30 seconds to start, and it’s quite sluggish after that. Do you think this is just a matter of waiting for faster CPUs? Or do you think that something can be done to improve the performance?

PL: The latest Neo/J 1.1 release patch improves the program’s performance a lot. In general, what we have found is that memory, not CPU speed, is what makes the big difference in NeoOffice/J performance. Unfortunately, since we are running Java and the huge amount of OpenOffice.org code at the same time, I can’t deny that NeoOffice/J is definitely a memory hog, and adding more memory to a machine will make a sizable difference in speed. NeoOffice/J will work with 256 MB of memory, but 512 MB is closer to optimal. While part of the slow startup time is due to OpenOffice.org, part of it is caused by the NeoOffice/J code. I have noticed that Sun’s engineers have made OpenOffice.org 2.0 start much faster than OpenOffice.org 1.1. Hopefully, when we eventually move NeoOffice/J to the OpenOffice.org 2.0 codebase, we will see improvement in overall performance.

Thanks for talking with us!




Biography
Tony Mobily: Tony is the founder and the Editor In Chief of Free Software Magazine.

Copyright information
Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved.

Source URL: http://www.freesoftwaremagazine.com/articles/patrick_luby_interview





I read the news today, oh boy
Reading RSS

By John Locke

I spent several years of my childhood in a remote corner of bush Alaska. When thinking about those times, I remember one village in particular: Point Lay (http://maps.google.com/maps?q=point+lay,+alaska&ll=69.754601,-163.020287&spn=0.101109,0.149217&t=k&hl=en), mid-way between Point Hope (http://maps.google.com/maps?q=point+hope,+alaska&ll=68.341484,-166.733322&spn=0.193411,0.298433&t=k&hl=) and Barrow (http://maps.google.com/maps?q=barrow,+alaska&ll=71.286163,-156.738510&spn=0.386822,0.596867&t=k&hl=en). In Point Lay, in the late 1970s, we got our news twice a week from people and mail arriving on our regular mail planes. Every Tuesday and Friday, depending on the weather, news from the outside world would arrive, filtered by the people who happened to be on the airplane or the magazines we were subscribed to. We didn’t have television, or good radio reception. Aside from the delivery vehicle, receiving news this way was much like living in rural America a hundred years ago—second- or third-hand, heavily filtered. Meanwhile, much of the rest of the country got their news from Walter Cronkite, or their local newspaper. In the newspaper, a growing number of stories came from syndication services, the Associated Press and Reuters being the most prominent. Most comics and many other feature stories came from various syndication services, and for freelance writers, being a syndicated columnist, such as Dave Barry, is one way to make meager newspaper sales add up to a nice income. In the print world, newspapers and magazines pay for every story or comic published. But becoming a syndicated content producer was still a challenge—you had to persuade each newspaper to pick up your content. The newspapers were the publishers, as were the television networks. All content, all news, was filtered through the mass media. And there were only three nationwide commercial television networks.
Instead of hearing about what the average traveller on a bush airplane thought was important, most of America would only hear what Walter Cronkite or the various newspaper publishers thought was important.

Everyone’s a publisher

Times have changed. Aside from making it easy to publish content, wikis, blogs, and other content management systems often include an automatic way to syndicate that content. It’s called Really Simple Syndication, or RSS for us acronym-loving technophiles. The web made it possible for anybody to be a publisher. RSS makes it easy to see when there are new stories at your favorite web sites. Whether you realize it or not, you probably already use RSS news feeds. If your home page has headlines that update every day, it’s probably using RSS. When you visit a site that shows snippets of content from other web sites, it’s using RSS to get that content. If you use one of these content management systems to publish content, you too can become a news source. There are several implications here:

• Anyone with a computer and an internet connection can subscribe to news feeds from anywhere in the world
• There’s a news feed for every interest under the sun
• Anyone with something to say can find a place online to say it
• Big media companies no longer have a monopoly on the news.
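For the curious, an RSS feed is nothing more than a small XML file listing a site’s recent stories. The sketch below shows a minimal, invented RSS 2.0 feed and how a program reads it, using only the Python standard library; the feed titles and URLs are made up for illustration.

```python
# Parse a minimal RSS 2.0 feed with the Python standard library.
# The feed content is invented for illustration.
import xml.etree.ElementTree as ET

feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>http://example.org/</link>
    <description>A hypothetical news feed</description>
    <item>
      <title>First post</title>
      <link>http://example.org/first-post</link>
      <pubDate>Mon, 03 Oct 2005 09:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(feed)
channel = root.find("channel")
print("Feed:", channel.findtext("title"))
for item in channel.findall("item"):
    # Each <item> is one story: an aggregator shows its title and links to it.
    print("-", item.findtext("title"), "->", item.findtext("link"))
```

A feed reader does essentially this on a timer: fetch the XML, compare the items against what it saw last time, and flag anything new.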



Blogs have made RSS popular and, along with two other important features, are creating a new generation of citizen reporters. These two other features are trackbacks and ping services.

TrackBacks: Cross-site comments

If everyone’s a publisher, how do you judge the quality, accuracy, or fairness of what you read? You obviously can’t believe everything you read online. We used to be able to trust mass media to hold to an ethical standard of journalism, but anybody who believes that’s true today hasn’t compared Fox cable news assertions with the facts. Now, with millions of blogs contributing to the content mix, it’s absolutely certain you’re going to find biased, unfair, poorly-thought-out arguments. Given such a chaotic mix, how can you trust anything anyone has to say? The answer is, you can’t, without knowing the back story. Who is writing the story, and what are their biases? Everyone has bias. The blogging world was built upon that assumption, unlike traditional media with its claimed objectivity. Because nobody trusts anybody online automatically, the blogging world has developed a simple but powerful way of building credibility: comments and trackbacks. Built into all blogging software is a way for anybody who cares to leave a comment about each story. While the blog owner can censor comments (and, thanks to the various types of spam that can appear on blogs, it’s necessary), you can make some judgement about the quality of the content by the quantity and nature of the comments. Trackbacks are a form of dialog between blogs. Trackbacks appear in the comments for a particular story, and link to a blog entry by another author, on another web site. By using trackbacks, bloggers can conduct extended dialogs, and you can trace the entire conversation. Comments and trackbacks are crucial elements for judging the quality of content. If a story sparks a long, heated debate, you can infer that it’s controversial. If all of the comments support the story, perhaps the story is right on target—or perhaps the blogger has censored comments from people who disagree. If there aren’t many comments, it becomes much harder to judge.
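Under the hood, a trackback is just an HTTP POST of four form-encoded fields (the entry’s title, its URL, a short excerpt, and the blog’s name) to the target post’s trackback URL. The sketch below builds such a request body with the Python standard library; the endpoint URL and field values are hypothetical, and nothing is actually sent over the network.

```python
# Build (but do not send) the form-encoded body of a TrackBack ping.
# All URLs and values here are hypothetical examples.
from urllib.parse import urlencode

trackback_url = "http://example.org/blog/trackback/42"  # hypothetical endpoint
body = urlencode({
    "title": "My response to your story",
    "url": "http://myblog.example.com/2005/10/response",  # where my entry lives
    "excerpt": "I disagree with your third point, because...",
    "blog_name": "My Hypothetical Blog",
})
print(body)
```

The receiving blog stores these fields, shows them among its comments, and links back to the sender’s entry — which is how the cross-site conversation described above gets stitched together.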

Pinging update services

Another key feature of the blogging world is ping services. The word ping originally comes from sonar, which sends out a “ping” sound through water so that the sonar operators can detect objects by hearing the sound come back. In the computer world, you send a ping out to various things to see if they’re there. In the blogging world, your blog software sends out a ping to various services to let them know you’ve written a new entry. By pinging various aggregator sites, you essentially announce to the world that you have a new entry. Search engines find you quicker. Your entry appears in content aggregators. If you refer to other blogs in your story, they get notified about your story. This feature alone makes blogging more powerful than the other content management systems out there.
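Mechanically, a blog ping is a tiny XML-RPC call, conventionally named weblogUpdates.ping, carrying just the blog’s name and URL. The sketch below builds the XML-RPC payload with the Python standard library without sending it anywhere; the blog name and URL are hypothetical.

```python
# Build (but do not send) the XML-RPC payload of a weblogUpdates.ping call.
# The blog name and URL are hypothetical.
from xmlrpc.client import dumps

payload = dumps(("My Hypothetical Blog", "http://myblog.example.com/"),
                methodname="weblogUpdates.ping")
print(payload)
# A real client would POST this payload to a ping service's endpoint
# (http://rpc.weblogs.com/RPC2 was the classic one), and the service would
# answer with an XML-RPC response saying whether the ping was accepted.
```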

Straight from the source

So why read what amounts, at best, to a whole bunch of amateur journalism? Generally because you’re bypassing the “professional” journalists and sometimes getting stories directly from the people making the news. Several CEOs of companies are now blogging. Activists blog. Politicians are starting to blog. And people in extraordinary circumstances blog every day. The prisoner abuses at Abu Ghraib reached the mainstream media because a soldier posted pictures on his web site. Would we have ever heard about these abuses without the internet? Perhaps not. Executives at major technology companies are starting to blog—companies like HP, Sun, and Microsoft. Mark Cuban, the owner of the Dallas Mavericks basketball team, has a blog. Porn stars and Washington interns have blogs. Darth Vader (http://darthside.blogspot.com/) even has a blog—or had one, until his fateful meeting with Luke.



If you want to find out what’s going on in any area, a Google search will likely return a lot of results written on various blogs. Blogs rank highly on Google results because they are updated regularly—but more importantly, because so many of them link to each other through the various ping update services. If you want to search just blog entries for a topic, try Technorati (http://www.technorati.com/).

How to read RSS

RSS reading is being built into more and more applications, web sites, and systems. As I already mentioned, you’re probably already using RSS, whether you know it or not. But I find reading news feeds in the Firefox browser to be a great way to dip your finger into the virtual information torrent that is the modern media.

Figure 1: The Sage extension in Firefox provides an easy way to read news feeds.

Firefox has a built-in feature for accessing RSS feeds, called Live Bookmarks. I don’t use Live Bookmarks myself—I think it takes more than a headline to determine if I’m interested in reading a story. There are several bookmark extensions you can install to read news feeds. My favorite is Sage. To install Sage:

1. Open Firefox.
2. On the Tools menu, click Extensions.
3. Click the Get More Extensions link.
4. Follow the News Reading category link.
5. Find the Sage extension. At this writing, it’s on page 2 of the category listing. Click the Install icon.
6. A pop-up window should appear, asking whether you want to install this extension. If it does not, Firefox may be configured to block software installations—you should see a light-yellow strip at the top of the browser window, where you can click and change the settings to allow software installations from this site.
7. Click the Install Now button when it becomes available, and restart Firefox.

You now have the Sage extension installed. Using it is easy—you’ll find it on the Tools menu at Tools -> Sage. Sage uses a folder of bookmarks to track its RSS feeds. You can designate any bookmark folder as the Sage folder by clicking the little Options button and choosing Settings. You’ll see the BBC front page and Yahoo! Sports as news feeds to get you started.

Hints for using Sage

Here are some tips and tricks for using Sage.

Use the refresh icon to highlight feeds with new items



In the Sage window, click the refresh icon. This checks each RSS feed to see if it has changed. If it has, the news feed title changes to bold text.

Figure 2: Customize the Firefox toolbar to make Sage easier to open.

Make the Sage sidebar available on your toolbar

Right-click in the Firefox window, somewhere near the very top menu. Then you’ll be able to click the Customize item as shown in Figure 2. Not all that many people are aware that you can do this, but it can make a big difference. Drag the Sage icon up to the toolbar somewhere for easy access. While you’re here, you may want to drag the printer icon up there too.

Open individual stories in tabs

Tabbed browsing is one of the best features in Firefox, and makes reading news feeds a two-step process: first scan the news items for stories you want to read in depth, opening them in background tabs; then read the stories. If you have a wheel mouse, click on the headline with the wheel to open it in a background tab. If you don’t have a wheel mouse or a middle button, you have to right-click the link to open it in a new tab.

Figure 3: Use the Discover Feeds button to find the RSS feeds on a web page.

Add feeds from the page you are browsing

When you visit a web site with an RSS feed, sometimes you’ll see a little orange icon in the browser status bar. But not always. Some RSS feeds appear as various icons on the page, with labels such as XML, RSS, Atom, or Feed. Sometimes they’re just links. Sage can find all of the feeds linked on whatever page you’re viewing when you click the Discover Feeds button, as shown in Figure 3.




Choosing news feeds

As mentioned earlier, you can find news, opinions, and insights for all sorts of topics through RSS news feeds. So, where do you start? I suggest starting with the periodicals that you already read. Most newspapers publish RSS feeds of their headlines, and often different feeds for different sections of their paper. Many magazines publish news feeds, too. But these are just the start. Thanks to the internet, traditional mass media no longer holds a monopoly on information channels. You can go straight to the source of news for your interests. Or choose any other channel or filter to get your information. As journalistic standards have evaporated, we all must learn to be more critical and discerning about where we learn what we know—this is a prime lesson the internet has to teach. But the internet also gives us the tools to easily find contrasting points of view and hear all sides of an argument before coming to a conclusion. I have quite a mix of news feeds in Sage: traditional news media, other organizations, and individual blogs which I’ve come to respect. Here’s a sampling:

• New York Times (http://www.nytimes.com/) Technology section (and several other sections).
• Christian Science Monitor (http://csmonitor.com/) Work/Money section (and other sections).
• Wired News (http://wired.com/), news about technology and culture.
• Slashdot (http://slashdot.org/), “News for Nerds, stuff that matters”. Slashdot is basically a big conversation board with topics that point to stories all over the internet.
• Linux Today (http://linuxtoday.com/), another news aggregator that points to stories published elsewhere about Linux and free software.
• Seth Godin’s blog (http://sethgodin.typepad.com/seths_blog/). Seth is the author of a marketing book who has lots of ideas about how the internet impacts traditional marketing.
• Joi Ito’s blog (http://joi.ito.com/). Joi is an investor in Japan who travels all over the world, participates in various standards bodies such as ICANN, and has been involved in many internet startups and ideas.
• The Long Tail blog (http://longtail.typepad.com/the_long_tail/), by Chris Anderson. Anderson is the editor of Wired Magazine, and he writes about how the sheer availability of online shelf space is leading to entirely different market behavior.
• Freakonomics blog (http://www.freakonomics.com/blog.php), by the authors of the new economics book called, what else, Freakonomics.

Biography

John Locke is the author of the book Open Source Solutions for Small Business Problems. He provides technology strategy and free software implementations for small and growing businesses in the Pacific Northwest through his business, Freelock Computing (http://freelock.com/).

Copyright information

This article is made available under the “Attribution-Sharealike” Creative Commons License 2.5, available from http://creativecommons.org/licenses/by-sa/2.5/. Source URL: http://www.freesoftwaremagazine.com/articles/rss_feeds

Published on Free Software Magazine (http://www.freesoftwaremagazine.com)



Mozilla: a development platform under the hood of your browser

Should Java programmers migrate to it?

By Davide Carboni

This article compares two development platforms: Java and Mozilla. The object of this comparison is not to establish which one is best, but rather to measure the maturity, the advantages, and the disadvantages of Mozilla as a platform from the point of view of a Java programmer (as I am). Such a comparison is not a speculative exercise; it is the result of technology scouting I have carried out over the last few months, with the objective of finding more effective tools, languages, and patterns for the development of distributed, pervasive, and location-aware internet applications. The article briefly introduces Java and Mozilla, and then points to similarities and differences. A detailed analysis of some important programming domains, such as GUI, multitasking, remote applications, community process, and development tools, is presented together with a comparison of the functionality provided by the respective APIs.

A short introduction to Java

Java is a multi-platform object-oriented language. In other words, you can use the constructs and patterns of the object-oriented world to write programs deployable on different operating systems without changing a line of code and without recompiling. Although this is quite common for interpreted languages, Java code is not interpreted. Java compilers build binary code called bytecode, which is loaded and executed by a virtual machine, regardless of which operating system is running on the “real” machine. Thus, bytecode compiled under Windows can be loaded and executed by a virtual machine running on Unix. The Java virtual machine hides the underlying system-related details and is provided with a unified API, common to all platforms. This implies that only the functionality common to all platforms can be exploited by Java programs. The idea of “virtual machines” is not a new one; older languages such as Smalltalk introduced this solution years before Java, but as often occurs, the best and the first are not automatically the most successful. If the portability of bytecode is one of the key features of Java, another important feature is that at the beginning of its life Java was considered the “language for the web”. This was due to applets—programs that can be downloaded and executed inside a web page. Since its birth, both of these features of Java have revealed their limits. On one hand, bytecode portability assumes that the footprint of different machines is equivalent: loading bytecode compiled for a desktop PC into a PDA or a cellular phone is in general not possible. On the other hand, Java applets have gained less success and diffusion than competing technologies such as Macromedia Flash. Nevertheless, Java has gained huge popularity among research teams and in proprietary software. Among the most notable applications are the tools for Java programming written in Java.
Moreover, to extend the adoption of Java on small-footprint devices, different “editions” of the Java platform were created: J2SE is the Java 2 Standard Edition virtual machine for desktop computing and personal system applications; J2EE is the Enterprise Edition, which comprises server-side components such as Servlets and Enterprise JavaBeans; and the J2ME profile aims at small devices such as mobile phones and embedded systems. For a complete classification of Java virtual machines and APIs please refer to Java’s web site [1]. In the domain of web applications, even though Java applets lost the war against Flash on the client side, things evolved favourably for Java technology on the server side. J2EE provides a robust framework for secure and scalable applications, competing with the ASP and PHP technologies. One of the success points of Java is its technology pervasiveness. Java programmers “make themselves at home” in a vast range of domains: they can write stand-alone programs, application plug-ins, applets for web pages, web applications either in the form of servlets (Java code which embeds HTML tags) or Java Server Pages (HTML which embeds Java expressions), and Enterprise JavaBeans which build the application logic in multi-tiered architectures. Furthermore, Java scales the OOP paradigm to distributed systems by means of RMI (Remote Method Invocation) and Jini (a framework which makes deep use of code mobility to build dynamic collections of distributed services). In the domain of small-footprint devices, Java has its JVM tailored for cell phones, PDAs, and smart cards.


A short introduction to Mozilla

Mozilla is not a language. It is a well-known free software suite of internet clients. However, looking under the hood, you can discover that Mozilla is not only a browser and a messenger: it is a platform with a complete component-based architecture. You can develop new stand-alone programs, add-ons for the browser, and code that can be loaded from remote hosts.

The Mozilla project has quietly become a key building block in the open source infrastructure. (Tim O’Reilly [2])

The most remarkable elements of the Mozilla platform are its component architecture, called XPCOM (which recalls Microsoft COM), and the Gecko layout engine, which can render both HTML and the XUL mark-up language—an XML language for the definition of rich (as opposed to poor HTML forms) graphical user interfaces. The entire GUI of Mozilla is written in XUL and rendered by the HTML/XUL Gecko rendering engine. The XUL approach has produced two main results: Firefox and Thunderbird. Aside from these two software masterpieces there are hundreds of small third-party add-ons, which cover practically any requirement of end users.

Firefox nudged IE below the 90 percent mark for the first time since the height of the browser wars in the 1990s. (Ina Fried and Paul Festa [3])

It is worth noting that Microsoft is likely to adopt a XUL-like solution for its Longhorn operating system. The Microsoft solution consists of an XML language called XAML for the definition of graphical user interfaces, integrated with C# scripting [4]. Unlike Java, Mozilla does not define a single programming language. According to its philosophy, Mozilla exploits the best (or the worst, depending on the point of view) of existing languages and adds its own dialects. XUL is not a programming language but rather a GUI description language; events in the GUI are handled by Javascript code which can connect to the XPCOM architecture using the XPConnect bridge.
New XPCOM components can be developed in both C++ and Javascript, and their interfaces are defined by means of XPIDL, the Mozilla dialect of IDL (Interface Description Language). Data sources and configuration files are written in RDF. Finally, although XUL is used to define the structure of the user interface, the final visual appearance is defined using CSS. Mozilla implements W3C standards and its architecture reflects this familiarity, providing a running environment and a framework in which processing HTML, RDF, and CSS and connecting to HTTP servers is almost a “primitive” functionality. On the other hand, Mozilla is mainly an internet-client platform and it does not pervade other domains. So the only possible comparison is with the Java Standard Edition in the domain of desktop applications.

Similarities

Classes are a central concept in Java. Any Java project is a set of classes (plus other files such as properties and resources), and even the entry point of a standalone program is a class containing a method called main():

>java -cp $CLASSPATH bar.foo.MyClass

where MyClass must implement a method called main(). Similarly, Mozilla can be launched and instructed to load a special kind of XML document using a protocol called “chrome”. The usage is:

>mozilla -chrome chrome://chatzilla/content/chatzilla.xul

The result is the opening of an application called ChatZilla without opening the Navigator window. One of the characteristics of Java is the portability and mobility of bytecode. This allows the deployment of Java applications in the form of applets: applications that are downloaded in the context of a web page and that run using a portion of the web page as their display. The receiving browser must be equipped with an internal or external JVM to execute the bytecode. In a similar way, you can write a XUL application for Mozilla and publish it on a web server. A Mozilla browser can download the XUL file from the network and execute its code. Remarkable examples of remote XUL are the Mozilla Amazon Browser [5] and some online games [6].
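The Java entry-point convention described earlier can be sketched in a few lines (an illustrative example: the class name matches the command shown above, but the bar.foo package declaration is omitted for brevity and the message is invented):

```java
// A minimal standalone Java program: the JVM invokes
// public static void main(String[]) on the class named on the command line.
public class MyClass {
    public static void main(String[] args) {
        System.out.println(greeting());
    }

    // Factored into a method only so the behavior is easy to check.
    static String greeting() {
        return "Hello from MyClass";
    }
}
```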

The community

The community of Java programmers is huge: there are hundreds of web sites, forums, newsgroups and mailing lists. Java is a successful programming language and Java technologies are governed by the Java Community Process, which is open to experts outside Sun. Nevertheless, a large number of hackers and programmers see FLOSS (Free Libre Open Source Software) as the ultimate model for an open, dynamic, fair, and effective software development and distribution process. One such group, the Open Source Initiative, claims that Sun retains control of Java, and OSI’s popular founder, Eric Raymond, asked Sun to “let Java go” [7].

Sun’s terms are so restrictive that Linux distributions cannot even include Java binaries for use as a browser plugin, let alone as a standalone development tool. (Eric Raymond [7])

For its part, Sun advocates its “Sun Community Source License” [8] and claims that the pure open-source model has some disadvantages:

• There is no clear control over compatibility issues and there may, therefore, be fragmentation.
• There may be no responsible organization. Bugs introduced by one organization may be too difficult for another organization using the code to fix, and of too low a priority for the author to fix in a timely manner.
• Progress can be chaotic and have no direction.
• There are limited financial incentives for improvements and innovations, leading commercial developers to use the proprietary model.

The reaction of the FLOSS community to the strict control of Java is the development of new, free software Java platforms such as Kaffe and GNU Classpath. Mozilla is the result of opening Netscape to the public, giving free access to the source code in 1998.
The first attempt to write a license for the free software community was a failure: the first draft of such a license was published to Usenet and received a lot of negative feedback:

One section of the license acted as a lightning rod, catching most of the flames: the portion of the NPL that granted Netscape special rights to use code covered by the NPL in other Netscape products without those products falling under the NPL. (Jim Hamerly, Tom Paquin and Susan Walton [9])

Currently, Mozilla is distributed under the MPL/GPL/LGPL triple license, allowing use of the files under the terms of any one of the Mozilla Public License, version 1.1 (MPL), the GNU General Public License, version 2 or later (GPL), or the GNU Lesser General Public License, version 2.1 or later (LGPL). The community was born and raised around the site mozilla.org.

Graphical user interfaces

Java has basically two different GUI toolkits: AWT and Swing. AWT is a bridge between the native widgets of the underlying operating system and the Java library. The advantage of AWT is its good
performance, while its shortcoming is mainly poverty in terms of features and native components. Swing, on the other hand, doesn’t rely upon native components; instead every widget is rendered by the JVM inside a canvas on the screen. The Swing API is far larger than AWT’s, and it provides a fully object-oriented, model-view-controller architecture, but compared to AWT its performance is poor. Between AWT and Swing there is the SWT library. It provides a rich set of functionality and controls with good performance, because low-level rendering is performed via C/C++ libraries. The Mozilla approach is completely different: instead of defining widgets as components of a library in an object-oriented programming language, Mozilla programmers can define a GUI as an XML document.

Mozilla’s philosophy of using “the right tool for the right job” is manifested most prominently in the design of the user interface [10].

So, GUIs are represented by a tree, and the DOM interface can be used to manage the items in the tree. The XML language used is called XUL, and it goes beyond simple HTML forms, giving the developer a full set of rich graphical components. Under the hood of Mozilla there is a remarkable piece of software called Gecko, which can perform both the rendering of HTML pages and the rendering of XUL user interfaces.

Javascript, considered by many to be the best scripting language ever designed is ideal for specifying the behavior of your Interface widgets. Where speed is the foremost consideration, we provide C++ libraries with multi-language interfaces for comprehensive, performant access to networking, filesystem, content, rendering, and much more [10].

In fact, there are many browsers based on Gecko. Some of them, like Mozilla Navigator and Firefox, use Gecko to render both GUIs and web pages, while others, like Camino, use Gecko only for web pages and render the GUI with the Cocoa GUI toolkit.
Thus, the Mozilla Navigator is itself a XUL document that can be loaded in the browser. In fact, if one opens the following URL with Mozilla:

chrome://navigator/content

the result is quite amazing: an instance of a browser rendered inside another browser (figure 1).

Figure 1: The Gecko HTML engine is also able to render XUL. The XUL resource loaded into the Mozilla window is another instance of the Mozilla Navigator.
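To make the XUL approach concrete, here is a minimal XUL document of the kind Gecko can render (an illustrative sketch: the window, script, and button are invented for this example; only the namespace URI is the standard one):

```xml
<?xml version="1.0"?>
<!-- hello.xul: a tiny XUL window with one button.
     The oncommand handler is plain Javascript, as described above. -->
<window title="Hello XUL"
        xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <script type="application/x-javascript">
    function greet() {
      alert("Hello from XUL");
    }
  </script>
  <button label="Greet" oncommand="greet();"/>
</window>
```

The GUI is pure markup; behavior lives in the script element, exactly the split between XUL and Javascript described in the text.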

Multithreading

Another interesting characteristic of Java is its support for concurrent programming. As Java programs are executed in a virtual machine, multi-threaded programming is in theory possible even on those systems where
multi-tasking and multi-threading are not supported by the kernel. In practice, today’s operating systems are all provided with a kernel able to schedule multiple threads and multiple processes. Another remarkable feature is that Java provides a single API for all platforms, and thus a single construct for concurrent programming, which makes use of the object primitive functions wait and notify and of the keyword synchronized. This construct is available from Java 1.0 to Java 1.4 and can be considered a simplified form of the “monitor” construct. The model for concurrent programming has been improved since Java 1.5 to address some issues that are explained in [11]. Mozilla also allows concurrent programming, but only by writing C/C++ code can you exploit the multi-threading API. In fact, an application written exclusively in XUL and Javascript is executed in a single-threaded running environment, so a blocking operation performed by your code causes the GUI to block. Javascript provides alternative tools like timers (pieces of code executed at a given time or with a given period), but in my opinion these are not as expressive as monitors and synchronized code in Java.
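The monitor construct mentioned above can be sketched with a classic one-slot mailbox (an illustrative example, not from the article; it uses only synchronized methods plus wait and notifyAll, i.e. the pre-1.5 model):

```java
// A one-slot mailbox: put() blocks while the slot is full,
// take() blocks while it is empty. Each synchronized method
// acquires the object's monitor; wait() releases it and sleeps.
public class Mailbox {
    private String message;
    private boolean full = false;

    public synchronized void put(String m) throws InterruptedException {
        while (full) {
            wait();          // wait until a reader empties the slot
        }
        message = m;
        full = true;
        notifyAll();         // wake any thread blocked in take()
    }

    public synchronized String take() throws InterruptedException {
        while (!full) {
            wait();          // wait until a writer fills the slot
        }
        full = false;
        notifyAll();         // wake any thread blocked in put()
        return message;
    }
}
```

A writer thread calling put("ping") and a reader calling take() hand the message over safely, with no explicit locks beyond the monitor.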

Portability

Portability is one of the key factors in both platforms. Java ensures the portability of the bytecode because the JVM provides a layer which isolates the code from the underlying machine. Mozilla applications, on the other hand, are not homogeneous: they can contain XUL, Javascript, HTML, CSS, and C/C++. The main issue is the portability of the C/C++ components. While binary portability is not ensured, Mozilla nevertheless provides an API called the Netscape Portable Runtime, which builds an abstraction layer between the C/C++ code and the underlying machine. Using this layer, the source code can easily be compiled on different operating systems. Portability is an important value, but it prevents the full exploitation of the underlying system, as code is portable only if it exploits those physical features which are common to all platforms. This is also a problem in mobile computing, where Java provided APIs for Bluetooth and “mobile messaging” only when these features became common to many devices, causing Java applications to be released late in comparison with their C/C++ counterparts. The limited integration with the hosting system also affects the full exploitation of desktop managers like Windows and Gnome/KDE.

Java API and XPCOM

Both platforms provide a rich set of APIs. The Java Development Kit provides the following categories of functionality:

• GUI
• Images
• Input/Output on file/network/device
• Language reflection
• Multithreading
• Network Datagram, Socket and ServerSocket
• Remote Method Invocation
• Security
• Text manipulation
• Sound
• XML parsing
• Abstract database connections

On the other hand, Mozilla provides a language-neutral API. Components can be written and accessed in different languages, while interfaces are defined with a common Interface Description Language called XPIDL (see figure 2).


Access to low-level functionality is provided by a special element of the architecture called XPConnect, which acts as the “glue” between XUL/Javascript in the front end and the implementation of components, whose interfaces are defined in XPIDL. The XPCOM API provides the following categories of functionality:

• Access to web platform components like the address book, bookmarks, etc.
• Multithreading
• Collections, sets, dictionaries
• LDAP
• Mail
• Network
• RDF and XML
• SOAP, XML-RPC and WSDL

SOAP and web services are directly supported in the Mozilla API, while in the Java Standard Edition they require loading additional libraries, or alternatively come bundled with J2EE. On the other hand, Java provides Remote Method Invocation (RMI), which allows the building of distributed systems with object mobility and remote classloading.
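The RMI programming model can be sketched with a remote interface (an illustrative fragment; Greeter is an invented name, and a real deployment would also need an implementation class and a registry):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// A remote service is declared as an interface extending Remote.
// Every method must declare RemoteException, because any call
// may fail in transit; clients invoke it through a generated stub.
public interface Greeter extends Remote {
    String greet(String name) throws RemoteException;
}
```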

Development tools

The luxuriant choice of tools for Java is probably the reason why Java programmers don’t migrate to other development platforms. The quantity and quality of tools is impressive: code completion, refactoring tools, automatic generation of code portions, hyperlinking in source files, GUI visual editors, debuggers, test suites, build management, and automatic documentation. Moreover, integrated environments such as IntelliJ IDEA, Eclipse, and others provide a single access point to all of the mentioned tools (figure 3), with quick access to CVS repositories.


Figure 3: Eclipse is a powerful IDE with a lot of tools integrated in a single environment.

So, the first question for a Java programmer willing to experiment with Mozilla is “where is the IDE?”. Probably the main obstacle to having an IDE for Mozilla is the very heterogeneous nature of its languages, but in my opinion the Mozilla Foundation should focus in this direction to attract more developers. Although an IDE for Mozilla is still missing, there are some interesting tools. Among others:

• xpcshell: a command-line Javascript shell. It allows the testing of simple Javascript code snippets, and also the loading and testing of any kind of XPCOM component. Unfortunately, the running environment of xpcshell is not the same as that of a script in a XUL document, so scripts that are expected to run in a GUI cannot be tested in xpcshell.
• jsshell [12]: an interesting service which runs as a socket server in the Mozilla browser. It allows a TCP client to connect to the browser and prompts for Javascript commands. Differently from xpcshell, jsshell has the browser as its environment, and it is very useful for testing extensions.
• venkman [13]: a debugger for Javascript. It is installed as a tool in Mozilla and is widely considered the best debugger for Javascript code.

Conclusion

The following table summarizes the comparison:

Comparison between Java and Mozilla

                Mozilla    Java
Desktop         Good       Good
Mobile          Poor       Excellent
Server          Absent     Good
Web             Excellent  Good
License         Excellent  Poor
Community       Good       Good
Tools           Good       Excellent
Documentation   Good       Excellent
Portability     Good       Good
Multithreading  Poor       Good

Assigning the scores 0 to Absent, 1 to Poor, 2 to Good, and 3 to Excellent, Java beats Mozilla 22 to 18. Nevertheless, the choice depends on the features you consider “really important” for your application. Java is a language, and on its platform you can build complete systems using Java for the mobile client, desktop client, and server-side components. Certainly, this is great for programmers, who can use a single
development environment to code, test, debug, and deploy their application with impressive productivity. Nevertheless, one characteristic of the internet is its heterogeneity, and even though Java provides code mobility, this feature seems to have gained scarce popularity on the internet and is limited to a few niches of the market and academia. So Java bytecode has not become the lingua franca; that role has probably been taken by XML. Mozilla is open to inter-operability and is a language-neutral platform, as components can be written in any language (though to be honest they are consistently written in C++ and Javascript). Mozilla applications are often the integration of elements written in XUL, Javascript, RDF, HTML, CSS, and C++. Maybe the right question is “when should a programmer use Mozilla instead of Java?”. The ability to handle web content, GUIs, and multimedia inside Gecko is of course very interesting. Attempts to build an HTML layout engine in Java have never yielded good results: Sun’s HotJava has been discontinued, and the Netscape attempt to port the Navigator to Java also failed. Currently, there is the Jazilla project, which has the objective of building a Java version of Mozilla, but at the moment the results are quite poor. As a last resort, it is possible to integrate Java and Mozilla: either embedding Gecko in a Java application or embedding Java applets in XUL documents. There is an ongoing project called blackconnect [14] which aims at the development of XPCOM in Java, but at the moment the project doesn’t seem very active. Looking ahead, the internet is evolving from a web of documents to a web of services, and traditional HTML will become just one of the many ways to access the net. In any case, Mozilla is ready to provide rich user interfaces for satisfactory interaction with web services.

Notes and resources

[1] Java’s web site (http://java.sun.com/) Last visited 10/04/2005.
[2] Tim O’Reilly. Mozilla.org Unleashes Mozilla 1.0 (http://www.internetnews.com/xSP/article.php/1299381) Last visited 15/04/2005.
[3] Ina Fried and Paul Festa. Reversal: Next IE divorced from new Windows (www.news.com) February 15, 2005.
[4] Inside XAML (http://www.ondotnet.com/pub/a/dotnet/2004/01/19/longhorn.html) Last visited 15/04/2005.
[5] MAB Mozilla Amazon Browser (http://mab.mozdev.org) Last visited 15/04/2005.
[6] mozdev.org - games (http://games.mozdev.org/) Last visited 15/04/2005.
[7] Eric Raymond. Let Java go: an open letter to Scott McNealy, CEO of Sun (http://www.catb.org/~esr/writings/let-java-go.html) Last visited 15/04/2005.
[8] Richard P. Gabriel and William N. Joy. Sun Community Source License Principles (http://www.sun.com/981208/scsl/principles.html) Last visited 15/04/2005.
[9] Jim Hamerly, Tom Paquin and Susan Walton. “Freeing the Source: The Story of Mozilla”. Published in “Open Sources: Voices from the Open Source Revolution”. O’Reilly. ISBN 1-56592-582-3.
[10] The Mozilla Application Framework in Detail (http://www.mozilla.org/why/framework-details.html) Last visited 15/04/2005.
[11] Qusay H. Mahmoud. Concurrent programming with J2SE 5.0 (http://java.sun.com/developer/technicalArticles/J2SE/concurrency/) Last visited 15/04/2005.
[12] Croczilla.com (http://www.croczilla.com/jssh) Last visited 15/04/2005.
[13] Venkman Javascript Debugger (http://www.hacksrus.com/~ginda/venkman/) Last visited 15/04/2005.

[14] Java-to-XPCOM Bridge (http://java.mozdev.org/blackconnect/) Last visited 15/04/2005.

Biography

Davide Carboni holds a PhD in Computer Science. He is currently employed as a senior software engineer at the Center for Advanced Studies, Research and Development in Sardinia (CRS4). His research interests are in the field of peer-to-peer systems, distributed computing, web applications, and agile software engineering. He runs his blog at http://powerjibe.blogspot.com.

Copyright information

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is available at http://www.gnu.org/copyleft/fdl.html. Source URL: http://www.freesoftwaremagazine.com/articles/from_java_to_mozilla




Introduction to Zope

Part 1: Python

By Kirk Strauser

Zope is a web application server, similar in concept to proprietary products like ColdFusion. However, it is free software, available under the GPL-compatible Zope Public License, which is very similar to the BSD License. Zope was designed with the specific goal of creating a powerful, secure framework for the development of robust web-based services with a minimum of effort. However, Zope’s biggest distinguishing characteristic is how closely it models the language it is written in: Python. In fact, many of its features are directly derived from its underlying Python structure. Because of that, it’s difficult to truly understand or appreciate Zope without a basic knowledge of Python. This article, the first in a two-part series, is intended as a high-level introduction to the language. Next month’s instalment will build upon this by demonstrating practical examples of Zope code.

Language features

Although Python has been in use since the early 1990s, it has only become relatively popular in the last few years. Many programmers view it as the spiritual successor to Perl. That is, it’s an expressive, interpreted language that’s equally at home in small system scripts or much larger applications. However, it has the deserved reputation of usually being easier to read and maintain than the equivalent Perl code. Python also sports an excellent object-oriented approach that’s much cleaner and more integral to the overall design than Perl’s. Perhaps most important, though, is the core development team’s belief in doing things the right way. Python was designed from the beginning with an emphasis on practical elegance—it strives to allow programmers to easily express their ideas in intuitive ways.

Significant whitespace

The first thing that everyone notices about Python is its use of significant whitespace. Rather than marking blocks of code with keywords such as “begin” and “end”, or curly brackets a la C, Python sets them apart with indentation. Frankly, a lot of programmers hate the idea when they first see it. If you’re one of them, don’t be discouraged; the feeling passes quickly. It enforces the style guidelines that most good programmers would be following anyway, and soon becomes quite natural. Python is flexible regarding the use of spaces versus tabs, as long as you consistently use the same kind and amount of whitespace to indent. Furthermore, almost all programming editors have Python modes that handle the details for you. The standard comparison of formatting between C and Python is the “factorial” function. In C, that could be written as:

int factorial(int i)
{
    if(i == 1) {
        return 1;
    } else {
        return i * factorial(i - 1);
    }
}

(or in one of many other common styles). A Python programmer would probably write something extremely similar to:

def factorial(i):
    if i == 1:
        return 1
    else:
        return i * factorial(i - 1)

Except for the missing curly brackets, the formatting is almost identical between the two.

Interactive development

Python includes an interactive shell where you can experiment and test new code. Running the python command without any arguments will result in something like:

Python 2.3.5 (#1, Apr 27 2005, 08:55:40)
>>>

At this point, you can enter Python commands directly to see their effect. If you’re working on a large project, you can load specific parts of it for manual testing without affecting other modules. It’s equally handy for verifying that short functions will work as expected before embedding them into a larger body of code. It’s difficult to convey exactly how convenient this is, and how efficient the code-experiment-code cycle can be. Finally, the interactive prompt is an excellent place to explore objects, and the data and functions inside them. Typing dir(someobject) will return the list of objects referenced by someobject, and most of the functions in Python’s core libraries contain a __doc__ attribute with usage information:

>>> dir(str)
[lots of stuff, ..., 'translate', 'upper', 'zfill']
>>> print str.upper.__doc__
S.upper() -> string

Return a copy of the string S converted to uppercase.

Tiny core language

Python 2.3.5, the version recommended for use with the latest production release of Zope, has just 29 reserved words. Perl has quite a few more: 206 as of version 5.6.8. PHP tips the scales with an incredible 3972 commands and functions in the base language (although many can be added and removed at compilation time). The practical upshot is that any experienced programmer should be able to memorize the entire language in an evening. This simplicity does not reflect a lack of power, though. Although most of the familiar commands are similar to their counterparts in other languages, several are significantly more flexible. The for command, as an example, will cheerfully iterate across a set of numbers, a list of strings, or the keys of a dictionary object.

Python keywords

The whole language is built upon a short list of words: and, del, for, is, raise, assert, elif, from, lambda, return, break, else, global, not, try, class, except, if, or, while, continue, exec, import, pass, yield, def, finally, in, and print. If you’ve ever written a program, you probably already have an accurate idea of what most of them do.
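The flexible for statement described above can be seen directly in a short sketch (shown here in Python 3 syntax for convenience; the article itself targets Python 2.3):

```python
# One `for` statement iterates over numbers, strings, and dictionary keys alike.
collected = []

for i in range(3):             # a sequence of numbers
    collected.append(i)

for name in ['spam', 'eggs']:  # a list of strings
    collected.append(name)

ages = {'ada': 36}
for key in ages:               # the keys of a dictionary
    collected.append(key)

print(collected)               # [0, 1, 2, 'spam', 'eggs', 'ada']
```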

Strong dynamic typing

Python is dynamically typed, which means that it performs its type checks during program execution (as opposed to C, which checks types at compile time). It is also strongly typed, meaning that it won’t convert data from one type to another unless you explicitly ask it to (as opposed to Perl). The language makes great use of this flexibility by passing parameters to functions by reference instead of by value. The net effect is that you can pass almost any object to a function, and if the operations in the function make sense for that type of object, then the function will work as expected. For example, the following code defines a function that will add any two compatible values together:

26

Part 1: Python


    >>> def add(a, b):
    ...     return a + b
    ...
    >>> add(1, 2)              # Two integers
    3
    >>> add([1,2,3], [4,5,6])  # Two lists
    [1, 2, 3, 4, 5, 6]
    >>> add('1', 2)            # A string and an integer
    TypeError: cannot concatenate 'str' and 'int' objects

In practice, this allows you to write generic code that can operate on any number of data types without additional modification.

Garbage collection

Never malloc() or free() memory again. Python automatically allocates space to store your data structures and frees it when you’re finished with them. This has numerous large advantages. First, it frees programmers from the low-level details that waste their time. By allowing you to concentrate on algorithms and design instead of pedantic record keeping, it gives you the freedom to spend your time where it’s most useful. Second, it eliminates an entire class of efficiency and security errors. You don’t have to worry about the buffer overruns or memory leaks that C programmers must carefully avoid. Finally, it’s fast. While experts could possibly write optimized memory management routines for themselves, Python is much better at the task than the vast majority of average users. Garbage collected memory management has quite a few other non-obvious benefits, with complex datatypes being near the top of the list. To compose a list of objects of various types, you simply create them and put them together into such a list:

    >>> a = 'a short string'
    >>> b = [1, 2, 3]
    >>> c = [a, b]
    >>> c
    ['a short string', [1, 2, 3]]

Whenever the program’s execution moves past the scope where these objects are defined, they simply vanish. Compare this with other languages that would require you to track an object’s existence by hand and decide whether (a) you’re truly finished using that object, or (b) another object still references it. This isn’t the same as saying that programmers never have to consider memory usage—bad design is still bad design—but the penalties for not doing so are far smaller.

Object oriented and functional programming

In Python, almost everything is an object. A module is an object that contains definitions of other objects. Classes are objects that contain functions and variables. Functions themselves are objects. Since values are passed to functions by reference, this means that you can pass functions just as easily as integers, strings, or any other objects. In the example below, I define three simple functions that perform an operation on a number and return the result. Then, I define another function, which takes a number and a function to call with that number, and execute it with some sample values:

    >>> def plusone(a):
    ...     return a + 1
    ...
    >>> def plustwo(a):
    ...     return a + 2
    ...
    >>> def timesthree(a):
    ...     return a * 3
    ...
    >>> def math(number, operation):
    ...     return operation(number)
    ...
    >>> math(1, plusone)
    2
    >>> math(2, plustwo)
    4
    >>> math(3, timesthree)
    9

This simple pattern is used widely in Python. For example, the list.sort function can take an optional function that compares two values in a list and orders them appropriately. Various GUI toolkits work by registering functions as event handlers so that they’re executed when the respective events occur. Functions can even be stored in other data structures, such as dictionaries, and retrieved as needed.
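Both ideas can be sketched briefly (the function names are illustrative, not from the article; note that the key argument to sorting arrived after Python 2.3, which instead took a comparison function):

```python
def plusone(a):
    return a + 1

def timesthree(a):
    return a * 3

# Functions stored in a dictionary form a simple dispatch table.
operations = {'inc': plusone, 'triple': timesthree}
print(operations['triple'](7))   # 21

# sorted() can also take a function: `key` selects the value to sort by.
words = ['pear', 'fig', 'banana']
print(sorted(words, key=len))    # ['fig', 'pear', 'banana']
```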

Pervasive namespaces

As mentioned above, imported modules are just another kind of object. This means that rather than bringing the functions and variables from a module into the current namespace (as C does), they remain within their named object:

    >>> import time
    >>> dir(time)
    [a lot of stuff, ..., 'asctime', 'clock', ...]
    >>> clock
    NameError: name 'clock' is not defined

Thanks to this feature, you don’t have to worry about conflicting names from unrelated modules. Experienced programmers should immediately appreciate the organizational advantages this brings. Novices will like the fact that they’re not immediately faced with an overwhelming number of functions.
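The contrast is easy to demonstrate: a module’s names stay attached to the module object unless you explicitly pull one into the current namespace (a small sketch using the standard math module):

```python
import math

print(math.sqrt(16))   # 4.0 -- the name lives inside the module object

from math import sqrt  # explicitly bring one name into this namespace
print(sqrt(16))        # 4.0
```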

Conclusion

Python is one of a new generation of cross-platform programming languages. It’s simple enough that new programmers can immediately start using it, yet equipped with the tools that make experts rejoice. It freely mixes imperative, object oriented, and functional programming so that you can choose the approach most appropriate for the task at hand. It’s used by companies such as Google and websites like Wikipedia, and is quickly becoming a common choice for new application development. I haven’t forgotten about Zope. However, the features that have made it a powerful and popular application server originate in Python, and to truly “get” Zope, you must have a passing understanding of Python. In next month’s column, I’ll explore the ties between the two and demonstrate Zope’s power by implementing several practical web application components.

Notes and resources

• Python Keywords (http://www.python.org/doc/2.3.5/ref/keywords.html)
• Perl Functions (http://perldoc.perl.org/index-functions-by-cat.html)
• PHP Quick Reference (http://www.php.net/quickref.php)
• PHP 'Reserved' Words (http://us2.php.net/manual/en/reserved.php)

Biography

Kirk Strauser (mailto:kirk@strauser.com) has a BSc in Computer Science from Missouri State University. He works as a network application developer for The Day Companies, and runs a small consulting firm (http://www.strausergroup.com/) that specializes in network monitoring and email filtering for a wide array of clients. He has released several programs under free software licenses, and is active on several free software support mailing lists and community websites.


Copyright information

This article is made available under the "Attribution-Sharealike" Creative Commons License 2.5 available from http://creativecommons.org/licenses/by-sa/2.5/.

Source URL: http://www.freesoftwaremagazine.com/articles/intro_zope_1

Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

Code signing systems

How to manage digital certificates, Software Publishing Certificates and private keys for code signing

By Saqib Ali

This article looks at the management of the private key for the Software Publishing Certificate (SPC). SPCs are used to digitally sign binaries produced by software development vendors. Digitally signing executables proves the identity of the software vendor and guarantees that the code has not been altered or corrupted since it was created and signed. Signing the code requires access to the SPC and the Private Key (PVK) associated with the SPC.

Background

In cryptography, key management includes the secure generation, distribution, and storage of keys, and appropriate key management is critical to the secure use of every crypto-system without exception. Once a key is randomly generated, it must remain secret to avoid misuse (such as impersonation). In actual practice, key management is the most difficult aspect of cryptography, for it involves system policy, user training, organizational and departmental interactions, and coordination between end users. Most attacks on public-key systems will probably be aimed at the key management level, rather than at the cryptographic algorithm itself.

Windows XP SP2 will produce this warning message for unsigned binaries. The user cannot verify the authenticity of the code.

Many of these concerns are not limited to cryptographic engineering and lie outside a strictly cryptographic domain. The responsibility for proper key management falls on the upper management of an organization of any size. Users must be able to store their private keys securely, so that no intruder can obtain them, yet the keys must be readily accessible for legitimate use. There are many solutions available for the proper management of private keys owned by single individuals. Verisign, Entrust, RSA, and Microsoft’s Active Directory all provide a good mechanism for managing keys owned by individuals. However, the management of private keys owned by groups or organizations is an issue that lacks proper tools and guidelines. Examples of these types of private keys include:

• The private key for SSL certificates owned by groups or organizations
• The private key for the SPC owned by a software development vendor
• The private key for the root CA (certification authority)

This article looks at the management of the private key for the SPC.

For signed binaries the source (vendor name) is displayed. The user knows that the code is authentic, and has not been tampered with during transmission.

A digital signature informs the user:

1. Of the true identity of the publisher.
2. Of a place to find out more about the binaries.

Code signing digital certificates can be purchased from Verisign (http://www.verisign.com/products-services/security-services/code-signing/index.html).
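The sign/verify round trip behind an SPC can be illustrated with a toy, textbook-RSA sketch. This is NOT the real Authenticode/SPC machinery (which uses X.509 certificates and padded RSA via dedicated signing tools); the tiny primes and names here are purely illustrative, and the snippet needs Python 3.8+ for the modular-inverse form of pow():

```python
import hashlib

# Toy RSA key: p and q are far too small for real use.
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: the secret to protect

code = b"example binary contents"
# Hash the code, then reduce the digest into the toy key's range.
digest = int.from_bytes(hashlib.sha256(code).digest(), 'big') % n

signature = pow(digest, d, n)          # vendor signs with the private key
assert pow(signature, e, n) == digest  # anyone can verify with the public key
```

The security of the whole scheme reduces to keeping d secret, which is exactly the key management problem this article discusses.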

So what’s the problem?

In any software development firm, more than one person needs to be able to sign the latest build and release it for download. However, giving multiple developers access to the private key kills the security of the key. Every time a secret key is shared, the confidentiality provided by that key gets halved. Some questions that come to mind:

1. What are the best practices for managing Code Signing Digital IDs and private keys?
2. What can be done to secure the private key for code signing?
3. Who should have possession of the private key? Multiple people or just the project manager?
4. What key escrow (recovery) techniques can be used if the private key holder is not available?
5. Who should be allowed to digitally sign the build?

If one person is responsible for signing all binaries, that person becomes a single point of failure. So it is recommended to give several people the ability to sign builds. However, this needs to be done in a way that several developers DO NOT end up with the private key for the SPC.

Code signing system: option one

One option is to use a code signing system (let’s call it CSS1) which uses a secret sharing scheme (Shamir’s, Blakley’s, or a trivial one). A secret sharing scheme allows you to distribute a secret (such as a private key) over some number of people such that a specified number of them (the threshold) must work together to recover it. For more information on secret sharing, have a look at this Wikipedia entry (http://en.wikipedia.org/wiki/Secret_sharing). In its simplest form, this option would involve giving out “parts” of the key to each developer, and CSS1 would have the knowledge of how to reconstruct the private key. If at least three developers are able to provide their parts of the whole key, CSS1 would reconstruct the key and sign the binary. The developers never get to see the whole private key. This way, you can avoid a single person being able to sign, while at the same time making sure that no single person is critical for the signing. To further secure the system, Shamir’s or Blakley’s scheme can be used.

Secret sharing is a good start, but it doesn’t get you all the way there. Suppose it’s time to sign a new build: some parties reconstruct the secret key, and poof, all of a sudden CSS1 knows the secret key, and you are back to a single point (or even multiple points) of security failure. Now you are dependent on the security implemented in CSS1 to safeguard the privacy of the key. An attacker might be able to use the window of time between the key being reconstructed and the key being destroyed to retrieve the key from CSS1.
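The threshold idea behind CSS1 can be sketched with a minimal Shamir-style secret sharing implementation over a prime field (illustrative only; a real deployment would use an audited library rather than reimplementing the math):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than the demo secret

def split_secret(secret, threshold, num_shares):
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def recover_secret(points):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split_secret(123456789, threshold=3, num_shares=5)
assert recover_secret(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert recover_secret(shares[2:]) == 123456789
```

Any subset of shares at least as large as the threshold reconstructs the secret; fewer shares reveal essentially nothing about it.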

Code signing system: option two

Another low-tech option is to use an isolated computer dedicated to code signing. Let’s call this system CSS2. CSS2 should have no hard drive or network connection of any type. The operating system and the code signing tools, including the PVK and SPC, should reside on a read-only DVD. The following are security considerations:

1. The operating system contained on the DVD should be very minimal. It should restrict any file copying or displaying functionality. It should also restrict any access to the console. This way an attacker cannot copy the PVK and the SPC from the DVD.
2. The DVD should be encrypted, and the decryption key should be embedded into CSS2. This way any copies of the DVD will be useless outside CSS2.
3. The DVD should be placed in a safe box which requires three or more keys to open it.

At the time of signing, at least three developers must get together to perform the code signing. They must use their keys to open the safe box, retrieve the DVD, boot the computer using the DVD, sign their code, and place the DVD back into the safe. This system will prevent any attacker from gaining access to the PVK file. Even if an attacker gets hold of a copy of the DVD, it will be useless since it is encrypted. Also, since there is no file copy or display mechanism on CSS2, the PVK file cannot be copied. Inevitably, there is still the potential for collusion: three or more developers with malicious intent can work together to create and sign malicious software.

Conclusion

Code signing provides a very good security tool for verifying the authenticity of a released binary, and also for guaranteeing that the code was not tampered with during transmission. However, management of the certificate and private key for code signing is critical. If these private keys and certificates are misused, both the customer and the software vendor can be adversely affected. The customer might end up with malicious software on their computer, while the vendor may lose money and reputation. If proper key management techniques are utilized, such misuse can be avoided.

Biography

Saqib Ali is a Snr. Systems Administrator and Technology Evangelist at Seagate Technology. He also manages a free software web based application (http://validate.sf.net/) that allows online conversion of DocBook XML to HTML or PDF. Saqib is also an active contributor to The Linux Documentation Project.

Copyright information

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is available at http://www.gnu.org/copyleft/fdl.html.

Source URL: http://www.freesoftwaremagazine.com/articles/private_key_management

Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

Free, open or proprietary?

Philosophical differences in software licensing

By Tom Chance

Software is a tool, a compilation of code that directs computer hardware, a program that empowers people to work more productively. Before Richard Stallman founded the GNU Project, many outside of hacker communities would have reasonably asked: why on earth is the ethics of software distribution philosophically interesting? But by formalising hacker conventions, Stallman kickstarted a revolution in the industry that now raises profound questions about areas of philosophical interest, most notably property. However, the precise differences between Stallman’s conception of “free software”, the term “open source” and the alternative, “proprietary”, are often confused. This article seeks to disentangle the issues and present a clear analysis of each approach to software licensing.

Before I jump in, I ought to make clear a few ground rules. First, by “philosophy” I mean more than “thinking about something”. By my use of the word, it would be nonsense to talk of “a philosophy of software engineering” when you really mean “an approach to software engineering”. I want to go beyond the general beliefs and attitudes of each approach, beyond their techniques. It is true that, in terms of their techniques, free software and open source mean the same thing, and so identical philosophical issues will arise from their techniques. But the development methodologies (open sharing of code, many eyeballs finding and fixing bugs, etc.) were never an important part of Stallman’s free software philosophy. So instead of looking at their techniques, I will analyse their orientation (or goal), their logic (why they adopt their particular orientation and techniques) and the limits of the space in which they apply. Free and open source approaches to software development may be identical, but their philosophies are radically different.

Proprietary software

The orientation of proprietary software is to create good software. It’s that simple. Its techniques, from a philosophical point of view, are similarly banal, involving various development methodologies and the application of copyright both to protect the software from outside interference and to protect the financial interests of the authors. The logic behind this technique is, its proponents tell us, in the spirit of copyright: to reward the authors, and to promote future creativity. However, since proprietary software may be released for free (freeware), the reward isn’t necessary. Given that both the free and open source approaches also allow for rewards, we have to discount this as being philosophically distinct to the proprietary approach (though it is an open question for economists). Rather, the distinctive quality of proprietary software is that the source code is closed, making creation and modification the exclusive preserve of those to whom the owner gives access.

“data_cloud”, by Campbell Orme. Released under the Creative Commons Attribution license



This is closely related to the logic of copyright, which isn’t so clear. Traditionally property—including physical property—is defended in one of two ways: by reference to the inalienable right of the owner, or by reference to the benefits to society. In both cases, the crux of the argument is the justification of exclusion, such that the owner can exclude the public from using the property as he or she chooses. For example, the English philosopher John Locke suggested that we have an inalienable right to own that which we have worked on. His argument was “not that the existence of private property serves the public good, but rather that rights of private property are among the rights that men bring with them into political society and for whose protection political society is set-up” (Waldon, 1998: 137). Locke went on to explain how we can come to own physical objects such as land that were previously in the commons. By mixing our labour with that which nobody owns, we come to own that part of the commons (Locke, 2003). In the context of software, by mixing my work with ideas that I discover, I come to own the result; the commons can be thought of as containing undiscovered ideas, and ideas that are given by their creators to the commons, such as programming languages and techniques.

Of course in this context, Locke’s argument faces a problem: in mixing my own ideas with those of others, I am not taking their ideas away in the same way that I might remove land from the commons by cultivating it and consequently claiming it as my own. This is because ideas aren’t rivalrous or scarce, meaning that an infinite number of people could use an idea without reducing its utility. So there is no need for me to extend ownership over an idea if I can gain the same utility from it in the commons.
Furthermore, because software builds upon the commons and upon ideas from other software, it is difficult to say in what sense you created a piece of software. If I write a classic “hello world” script in Python, should I be able to own that nugget of information held on my hard drive and exclude others from it? Should I be able to own the idea itself, excluding others from writing “hello world” scripts in Python? With large programs like OpenOffice.org it is slightly more clear that there is a significant amount of innovation and labour in the code, but the problem remains. The impracticality of this technique suggests the necessity for a different logic, that of copyright as a bargain for the public good. According to this argument, the public benefits from the creation of software, but authors can develop software better if they can control access to the source code, whether for financial reasons or some other, and so they can dictate the terms upon which the software is used and distributed. The latter point, however, doesn’t apply to freeware, where the author employs the techniques of proprietary software without seeking financial reward and without restricting the public’s rights of redistribution. To explain this, one must explore one other aspect of the logic of proprietary software, that of producer and consumer. The user passively consumes the software, and though it may well enable creativity, the software itself is an unchangeable commodity. This isn’t true of any other kind of property; all physical objects are limited in their mutability only by the technical expertise of the owner. Therefore, uniquely, the relationship between producer and consumer in this context allows no opportunities for productive community, be it hobbyists’ clubs or businesses that would modify the software, except where the author steps outside of the proprietary norm and gives special access to the source code to particular individuals or groups.

Open source software

The orientation of open source software is described by the Open Source Initiative as producing good software. The definition of open source software is given in relation to proprietary software, comparing the techniques in terms of development methodologies and copyright licensing terms. It is the techniques that set the two approaches apart, not least because open source software rejects the main premise of proprietary software licensing—that it is better to restrict access to the source code. The logic for this difference, according to the OSI, is that “when programmers can read, redistribute, and modify the source code for a piece of software, the software evolves. People improve it, people adapt it, people fix bugs. And this can happen at a speed that, if one is used to the slow pace of conventional software development, seems astonishing.” (OSI, 2004a)



Open source shares the philosophical orientation of the proprietary approach, but rejects its techniques. This is achieved by subverting the traditional licensing of copyrighted works, specifically granting the right to use, modify and distribute the software to all who would take the opportunity. The logic behind this subversion is that it should result in better software. The open source approach therefore subscribes to the same philosophical justification of property in the software context as the proprietary approach, namely that being able to own the software serves the public good. If this seems paradoxical—that society’s ability to modify the software depends upon the author(s) owning the software—it is because an author could release software into the public domain without the source code, and be under no compulsion to provide it upon request, in effect releasing proprietary software. The technique of open source is subversive because it abuses the technique of proprietary software to renounce its logic. Open source advocates rely on “economic self-interest arguments” without recourse to “moral crusades” and “ideological tub-thumping” (OSI, 2004d). In other words, open source as an approach explicitly avoids making itself philosophically distinct from proprietary software and any other intellectual property regime. Eric Raymond, a leading open source advocate, even tries to fit open source into Locke’s approach to property. Locke suggested that property rights, based upon mixing one’s labour with some part of the commons, only hold if the object of ownership is plentiful and promotes the public good. So Raymond says that, if we open the source code and forbid restrictions upon use, modification and distribution, we will increase the yield of useful work produced, and thus further the public good better than if we followed the proprietary approach.
Restricting access to the program and its source code unreasonably abridges our access to a potentially infinite resource.

data_cloud_002, by Campbell Orme. Released under the Creative Commons Attribution license

On these limited terms there can be no philosophical difference between the two approaches; both are based upon a particular application of copyright being the best way to produce good software. Even in the Open Source Definition, provisions such as “no discrimination against persons or groups”, which seem at odds with the logic and orientation of proprietary software, are explained in relation to their capacity “to get the maximum benefit from the process”, where a benefit is defined as the production of more good software (OSI, 2004b). On community versus the producer-consumer model, open source advocates are a little more confusing. On the one hand, they claim that they are “promoting the Open Source Definition for the good of the community” (OSI, 2004a) and on the other hand they claim to promote the definition on “pragmatic, business-case grounds” (OSI, 2004c). As with the open source approach to property, this is because the community is recognised as the basis of the approach’s pragmatic advantage over the proprietary approach. This has the interesting consequence that communities are only important if they contribute to the software, meaning that end-users who provide no input (whether it be code, documentation, money, etc.) are unimportant. The Open Source Initiative maintains three central advocacy documents: one for hackers, one for customers, and one for businesses (both those producing and consuming software) (OSI 2004e; OSI 2004f; OSI 2004g). Their approach maintains the producer-consumer relationship, because the limits of its space encompass only those that can contribute to the development process. Non-paying customers aren’t stopped from moving into the “producer space” in the way that proprietary licenses stop them, and they’re not restricted in their use of the



product, but neither are they afforded any more importance than customers of proprietary software. To summarise, the open source “philosophy” is philosophically similar to the proprietary approach, because they both emphasise techniques that produce more high quality software. Their logic is subtly different, their techniques radically so, and the limits of the space in which the open source approach operates are slightly wider. But their orientations, and therefore their overall approaches to questions of property and community, are identical.

Free software

The orientation of free software is to create good software that provides certain socially useful freedoms. It is defined in terms of “liberty not price”, a frame of reference entirely absent from both the proprietary and open source approaches. And crucially it is defined as an ethical orientation, not a pragmatic orientation (Stallman, 1992, 1994). According to the Free Software Foundation, the orientation is related to four kinds of freedom (FSF, 2004a):

• The freedom to run the program, for any purpose (freedom 0).
• The freedom to study how the program works, and adapt it to your needs (freedom 1). Access to the source code is a precondition for this.
• The freedom to redistribute copies so you can help your neighbour (freedom 2).
• The freedom to improve the program, and release your improvements to the public, so that the whole community benefits (freedom 3). Access to the source code is a precondition for this.

Free software advocates reject the orientation and logic of both the proprietary and open source approaches. The free software approach achieves this orientation with exactly the same techniques as the open source approach: a range of software development methodologies based upon the free redistribution of the source code, made possible by the subversion of copyright through licensing. As Eric Raymond says, “the software, the technology, the developers, and even the licenses are essentially the same. The only thing that differs is the attitude” (Engel, 2004). But the logic that connects these techniques to the “freedom orientation” is quite different to the open source approach. To begin with, the free software approach renounces the concept of software ownership. Software ownership is unethical, the leading figure of the free software movement, Richard Stallman, often declares.
In contrast to Raymond’s omission of natural property rights in his attempt to lend a Lockean justification to property, Stallman explicitly rejects this notion, though without reference to Locke or any other natural right theorists. Rather, in typical American style, he uses the position of the US constitution, which describes the copyright bargain, as precedent for his position. Property rights, he asserts, require a social justification and in the case of software there can be no such justification, therefore ownership of property is unethical (Stallman, 1992). Though, as with open source, this logic concerns itself with socially justified property rights, Stallman judges the social justification upon grounds of the impact on the freedom of society rather than on the quality or quantity of software produced. He proceeds on a basis of comparative harm, asking if the harms resulting from the restrictions advocated by the proprietary approach outweigh the harms resulting from the freedoms advocated by the free software approach (Stallman, 1992), and advocates a kind of rule utilitarianism—a philosophical doctrine that creates ethical rules based upon maximising utility—that says: the (social) utility of always sharing software under a free license outweighs any harms, thus it is an ethical duty to always do so. This ethical bias is also present in the free approach to the producer-consumer conflict. Stallman says that you “deserve to be able to cooperate openly and freely with other people who use software”, and he encourages “the spirit of voluntary cooperation” (Stallman, 1994). One can apply these and other related ideas to any information-based work, where the hacker mantra that “information should be free” can overcome unethical restrictions. Quite how far this goes is unclear. 
One could advocate a limited form of the free software approach and say that the space in which it holds extends only to the community of producers, as with the open source approach. Or one could extend the “spirit of cooperation” to empower consumers to engage with producers in a way that can’t be characterised as mere consumption.


Philosophical differences in software licensing


Communities who customise or localise their software, like KDE-Farsi and GNOME-Bengali, are good examples of how this might take place, as are communities and cooperatives that are set up to manage the spread of free software, such as in Venezuela and Brazil (Chance, 2004). Users, who with proprietary software would have simply consumed the software, are able to use it in a community context, and in so doing develop new communities and strengthen existing ones not based around software development, nor even necessarily software use (it may be an ancillary concern for the community). These activities are distinct from the normal productive cycle that the open source approach endorses because they may often only benefit communities that are separate from the wider “open source community”; to put it crudely, the free software approach advocates universal empowerment and liberation, whilst the open source approach endorses the good of the community in terms of software production.

“fields_and_fields” by Campbell Orme, released under the Creative Commons Attribution license

In conclusion then, the free software approach is philosophically distinctive because, in contrast to both the proprietary and open source approaches, it is based on an ethical claim about the absolute importance of social utility, and about the relative social utility of different legal and development techniques. The approach rejects both natural rights and social bargain arguments for property in the software context, and subverts copyright law to create a global commons of software. Notions of community and cooperation are also central to the approach: within the development community, amongst users, and between the two.

The two faces of the same animal?

Both the open source and free software approaches share the same techniques. Raymond was quite right when he said that the difference lay in their attitudes. The fact that most projects share contributors who hold either the free software or open source “philosophy” lends weight to the idea that the two approaches are just different faces of the same beast. From the perspective of an open source or free software advocate, the two approaches may seem identical. But philosophically speaking, they’re quite different. Whether or not you accept that conclusion depends on where your interests lie. Looking at it from the framework of either approach, it would seem that the important thing is to develop more software and, if you’re a freedom person, to do so in a way that doesn’t abridge our freedoms. From either perspective, their shared techniques achieve the goal admirably; the installation of GNU/Linux I’m using to write this article demonstrates as much. However, both approaches have attracted the attention of many a thinker, from philosophy to economics, politics and law. For Marxists, the free software approach represents a critical challenge to property regimes; for some economists, both approaches represent an experiment in gift economies; for many a political theorist, they both present opportunities for democracy, freedom, you name it. It’s safe to say that because the term “open source” was coined to capitalise on free software’s techniques in the business world, any thinker that leans upon the open source approach will be fairly content with less radical changes within the space they study. Management theorists, for example, can be content with applying the open development methodologies to their own previously hierarchical theories. Despite differences in orientation, many free software advocates are just as reformist.
But some advance more radical critiques of contemporary approaches to property, community and the producer-consumer relationship, buoyed by the ethical basis of Stallman’s position. At the start of this article I admonished people for talking about their “software philosophy” when they actually meant their non-philosophical thinking about software. It should now be clear what the difference is, and why people confuse the two when looking at software production and licensing. Both approaches want to differentiate themselves from the proprietary approach; open source advocates refer to the set of techniques they advocate as the open source “philosophy”, and free software advocates refer to the ethical orientation and logic they advocate as their “philosophy”. The word “philosophy” is being used in a different sense each time, masking the approaches’ actual philosophical similarities and differences. This is harmless semantics, a quibble from a philosopher, but underlying it are a range of questions that have been the bread and butter of philosophy for millennia. Long may they continue to plague our minds.

Notes and resources

T. Chance (2004). In defense of free software, community, and cooperation (http://tom.acrewoods.net/writing/free-sw-community)
Creative Commons (date unknown). Frequently Asked Questions (http://creativecommons.org/faq)
A. Engel (2004). Free as in Freedom - Part Two: New Linux (http://www.pressaction.com/news/weblog/full_article/engel12122004), Press Action web site
FSF (2004a). The Free Software Definition (http://www.fsf.org/philosophy/free-sw.html)
J. Locke (2003). “Second Treatise of Government”, in “Locke: Two Treatises of Government”, ed. P. Laslett
OSI (2004a). Open Source Initiative, web site (http://www.opensource.org)
OSI (2004b). The Open Source Definition (http://www.opensource.org/docs/definition.php)
OSI (2004c). History of the OSI (http://www.opensource.org/docs/history.php)
OSI (2004d). Frequently Asked Questions (http://www.opensource.org/advocacy/faq.php)
OSI (2004e). Open Source Case for Business (http://www.opensource.org/advocacy/case_for_business.php)
OSI (2004f). The Open Source Case for Customers (http://www.opensource.org/advocacy/case_for_customers.php)
OSI (2004g). The Open Source Case for Hackers (http://www.opensource.org/advocacy/case_for_hackers.php)
E. Raymond (1999). “The Cathedral & The Bazaar: Musings On Linux and Open Source by an Accidental Revolutionary”
R. Stallman (2001). Free software: Freedom and Cooperation (http://www.fsf.org/events/rms-nyu-2001-transcript.txt)
R. Stallman (1992). Why Software Should Be Free (http://www.fsf.org/philosophy/shouldbefree.html)
R. Stallman (1994). Why Software Should Not Have Owners (http://www.fsf.org/philosophy/why-free.html)
J. Waldron (1988). “The Right to Private Property”

Biography

Tom Chance is a philosophy student, free software advocate and writer. He is the Project Lead of Remix Reading (http://www.remixreading.org/), the UK’s first localised Creative Commons project. You can contact him via his web site (http://tom.acrewoods.net/).


Copyright information This article is made available under the "Attribution-NonCommercial" Creative Commons License 2.5 available from http://creativecommons.org/licenses/by-nc/2.5/. Source URL: http://www.freesoftwaremagazine.com/articles/philosophical_diff_fs

Published on Free Software Magazine (http://www.freesoftwaremagazine.com)



Does free software make sense for your enterprise?

Finding free software at your office is like finding a Republican in San Francisco

By Tom Jackiewicz

“Dude, I can, like, totally do that way cheaper with Linux and stuff.” These were the words of a bearded geek running Linux on his digital watch. As he proceeded to cut and patch alpha code into the Linux kernel running on the production database system, the manager watched in admiration. Six months later, long after the young hacker decided to move into a commune in the Santa Cruz hills, something broke. Was it really “way” cheaper?

Nostalgia and first impressions

I remember the first time I actually opened up a Sun server after years of using them remotely. Coming from a background of PC-based hardware (machines that were somehow deemed servers), it was a memorable moment. Some obsessive-compulsive engineer with an eye for detail had obviously spent countless hours making sure that the 2-inch gap between a hard drive and an interface port was filled with (gasp!) a 2-inch cable. Each wire was color-coded, the appropriate length, and carefully held down with ties and widgets to ensure that they never got in the way. Each server I opened was configured the exact same way, and each component matched—down to the screws (which were custom fitted and just right). This was a far cry from the 9-foot two-device IDE cable I had pulled out of the junk drawer to add a random hard drive held in place by duct tape. As the halo above the server lit up the room, I was suddenly struck with a justification for the hefty price tag on these magical machines. Quality.

The SPARCstation 20, as beautiful as a rose

The same quality control that went into the server packaging went into the hardware. Sure, I couldn’t connect my Colorado tape drive, digital camera, or latest toy to the back of these servers as I could to the PC, or use a really cheap hard drive created by some no-name vendor, but all the supported hardware that I actually did connect was controlled by a perfect driver and was carefully integrated into the operating system. It all just worked. Magically. If it was on the support list, you knew that some team of detail-oriented engineers had taken care to make sure that there were no flaws. Someone had to pay for all of this, hence the hefty price tags, but I knew that most of the servers I was running my mission-critical applications on were deployed when I was in diapers. Baby diapers, not the diapers that people who remember this type of quality are wearing today—or the ones I’ll be wearing when these servers finally crash.



The alternatives to Sun Microsystems, IBM, Solaris, AIX, HP, HP/UX and all of the other commercial software running on proprietary hardware were untested. Linux was in its infancy, focused on desktops, and attempted to support everything you plugged into it. It was a time of instant gratification and first-to-market thinking that was necessary for it to gain acceptance in the desktop world. If a digital camera worked in Windows, the Linux community had better jump on providing support for it in their world. This led to terrible drivers, kernels with goto statements, and functions with comments like “Uhm, we’ll get to this later. Return 0 for now”. This led to instability when these systems were used as servers. The BSD community, while providing more stable software, wasn’t seen as sexy and didn’t gain as much acceptance. FreeBSD, NetBSD and OpenBSD were “theoretical” operating systems that were coded well but weren’t supported enough to provide integration with some of the more common components in use in current infrastructures. Additionally, more focus within the BSD community was spent on ensuring software was functional (and up to standards) than pretty—which led to a lack of usable interfaces. This gap seemed to propel commercial software more and more. Commercial packages were well programmed, and the vendors had enough resources to cover usability and stability. When the engineers, system administrators, programmers, and jack-of-all-trades geeks move on and finally become managers (or are forced into the role), they remember these days. And they associate Sun Microsystems, IBM, and other giants with this type of quality. To them, the quality they first saw and admired is still around today. Decisions on what to use are made by these new managers.

Who runs this thing?

Use of free software and alternatives to expensive proprietary hardware went crazy during the early days of the new AOL—err, internet. Instead of the goal being stability and an infinite life, new companies were satisfied with getting something out quickly and, upon gaining venture capital or selling out, they could “move up”. The investment required for the commercial side was just too much for a fledgling company. But as we all know, a temporary solution usually becomes a permanent part of your infrastructure. Even worse, a temporary solution in place until more funding is available will outlast the company CEO. Adding to the equation, the open source and free software movements were growing and the quality of the software was definitely increasing, providing a reasonable solution that wouldn’t instantly crash. Unfortunately, the problem facing management was responsibility and support. If there was a problem with software written by a 16 year old kid in Finland, who was responsible? If an administrator walked in and deployed a quick solution to fix your immediate needs, who would support his undocumented idea when he left? Employees leave, and they are no longer expected to grow up with the company as they once were (especially during the years of high-school-age CEOs). This creates a need for a redundant array of inexpensive administrators, and the prerequisite for meeting that need is an infrastructure that is supportable by many. Your free software based system running alpha drivers for that new array to save you 500 bucks? Gone, you’re the only one who can run it. Your sendmail infrastructure optimized to take advantage of inferior hardware using custom interfaces for account management? Gone, welcome the appliances.
The need to have software and hardware maintainable by anyone with a base level of experience has replaced making the most of the hardware and software in your infrastructure. If the configurations aren’t default, the performance improvement that your take on things might deliver just won’t be cost effective. Look at it like blackmail or extortion. You walk into a company, “save” them hundreds of thousands of dollars on software, get them locked into your infrastructure, and then demand a huge raise just to keep it going because no one else can. By the time they’ve moved their operations onto your infrastructure, they can’t easily go back. Ironically, the same can be said for commercial software—even products based on open standards. That one extra little feature that Microsoft has in their implementation of a product over the free software version will lock your company into using Microsoft products for all eternity. All the while, your company will feel that it could easily move away and use any vendor because, hey, it’s an open standard!




Consistency in project life cycles

Ok, so I used a buzzword, but my head isn’t standing up into a point. This matters. There are processes in place for development of software, quality assurance testing, and validation that happen before software reaches the customer. In the commercial realm, there are people paid to do some of the most boring tasks just to make sure they get it right. While 30 QA engineers aren’t necessarily going to be as good as a public user community, they are consistent in their operations and try to make sure that nothing slips through the cracks. The user community will often test a few things here and there but won’t go through the tedious tasks of making sure some of the most basic operations work with each subminor revision of code. If something is missed, the assumption is that someone else will find it and any potential problems will be taken care of. These things are boring. But someone has to do them. From programming to QA to the other areas of software, each has a defined process that needs to be followed. The same is true for deployment within your infrastructure. What would you do to bring Postfix, which is amazing software, into the mix within your environment? Most people would take some of the old email infrastructure and validate a few things here and there (to make sure no users requiring the delivery of mail are missed). An old legacy host doesn’t speak well with your email gateways? Uh oh, you overlooked that. Important emails being inappropriately tagged as spam? My bad, sorry. These mistakes happen, and because these little things were overlooked in your haste to show off Postfix, someone is going to look bad. Try deploying any commercial package (especially with a support contract). All of the little caveats you run into will surely be documented by the vendor. A list of supported products will also be given, so you know which integration efforts will require a bit of extra work or should remain pointed to an interim solution.
And if all else fails…

Who’s to blame?

We’re a society of lawsuits and passing the buck. Slip and fall while running the wrong way on an escalator drunk? The first thing someone will say is that there wasn’t a sign telling you not to do that. Sue someone for your own stupidity. Fall behind on a project and lose your bonus because of a bug in software from a vendor? The threat of tarnishing their reputation lets you strong-arm them into giving you anything you want. Big or small, there’s someone on the other end of the software with a livelihood.

Have a problem? Blame sendmail.com, the commercial side

The only person to blame when free software fails is the person who deployed the software. But the person in charge of your organization can’t just say “Oh, that crazy Ted! He did it again!” and get away with it. Heads roll when something bad happens. If the bad thing is free software, the heads are internal. If the bad thing can be blamed on a vendor, then more people within your organization are safe from the axe. Companies all like it when there’s someone outside of the organization that can be blamed for a failure.



Sun and other large vendors have teams of highly trained engineers ready to parachute into your company at a moment’s notice. All are trained consistently, all read from the same handbook, and all can come in and give you 2080 hours of work in a weekend just to get you up and running. Try doing that by bringing in the same people “trained” on free software. Each has their own idea on how to do things, there are no consistent sets of manuals, and, for the most part, they all come from different companies. There isn’t a single source of gurus who know the same Linux implementation and who can run out to help you when you’re in a bind. At the same time, a list of truly supported packages will be there. If you try to integrate a commercial package with something that isn’t supported, there are no guarantees. There’s no “It should work” here. These certifications by the vendor are often done after extensive testing on both sides—the package being deployed and the package being integrated. They often leave you with an all-commercial environment, as no one is going to have the time or money to make sure that something is going to work with the latest free software version of something. These things set free software back a bit but make your management rest a little easier at night. After all, if Microsoft certifies something will work, it will, right?

If you can’t beat ’em…

Check out sendmail.org, the original free version of the software

The open source and free software communities saw some of the same issues that I just ranted about from my proprietary soap box. Their response? Sendmail, a piece of software so established that it has been the victim of a love-to-hate mentality, never had much competition in the commercial realm for much of its life. Eric Allman (father of sendmail) could have chosen to sit quiet and not do anything to further sendmail. Instead, he chose to create a commercial version of the software, offer support, and create an alternative to commercial email packages that were up and coming but not yet a threat. The end result was the same sendmail everyone was already running, with the added aspect of accountability and a helping hand. This ended up being a good move because, when the dot com boom happened, the former telemarketer turned “engineer” didn’t understand the new set of simpler m4 configuration macros used within sendmail, let alone anything that required more than 3 mouse clicks and a “Duh!”
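For readers who never met them, those m4 macros reduce a sendmail configuration to a handful of declarations. The sketch below is purely illustrative: the include path and the smart-host name are placeholders (they vary by distribution and site), not values taken from this article.

```m4
divert(-1)
dnl Minimal illustrative sendmail.mc. The cf.m4 include path and the
dnl SMART_HOST value are placeholders; adjust them for your system.
divert(0)dnl
include(`/usr/share/sendmail-cf/m4/cf.m4')dnl
OSTYPE(`linux')dnl
dnl Relay all outbound mail through a single gateway:
define(`SMART_HOST', `mail.example.com')dnl
MAILER(`local')dnl
MAILER(`smtp')dnl
```

Running this file through m4 (typically `m4 sendmail.mc > sendmail.cf`) expands these few lines into the notoriously long sendmail.cf that the macros were invented to spare administrators from writing by hand.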



Check out apache.org, home of quality projects near you

Apache, like Sendmail, has always had a stable following. It lived through commercial pressure (though one can always question its legitimacy) from Netscape Commerce and Enterprise Servers, iPlanet, IIS, and a slew of poor quality commercial versions of their software. A foundation was started and an entire community grew to support each other and their relevant projects. While there wasn’t necessarily someone on the other end of a phone line, there was an established group that your managers would have a hard time convincing you would magically disappear. Sendmail and Apache moved away from the unorganized efforts seen in the free software community when it was smaller and less significant. This gave them credibility and made some of their software a viable replacement for commercial products.

What can be done?

What can you do about all of this, if anything? Don’t be the bearded geek I mentioned at the beginning of the article; put in some extra effort to make sure that what you deploy is supportable. Baby steps. You won’t win anyone over right away, but when you deploy a free replacement for software, document the process, live by standards, and make sure that the life cycle of your project involves a significant amount of quality assurance, integration and interoperability testing. Put in the extra time to see what sorts of processes exist within your company and what will make everyone comfortable. Take the extra effort required to follow these procedures. Not only will you earn respect, you might learn something, turn up a bug, and let management view your free software deployment in a better light. Don’t just “do it” and expect everything to work magically with your 3am software install. The community as a whole should also spend time making sure that its software meets the requirements set by other vendors for interoperability. Software from Sun and Microsoft has a strict set of guidelines to meet before these vendors will list a product as fully supported. While expensive, going through the process of certifying free software as fully interoperable with commercial packages will give a big boost to the movement. One problem area that people fall into is the latest-and-greatest upgrade cycle. The stability of a system goes down if you keep upgrading your kernel, libraries, and applications to the latest and greatest revision. There’s really no reason to fix something that isn’t broken. In many environments, constantly upgrading libraries will cause components to fail. It will also cause custom code written by programmers in your company to act differently. The lack of backwards compatibility in some packages just doesn’t work in a production environment.
Sure, these problems exist in the commercial realm as well (and the hiccups are even worse), but those systems aren’t upgraded as often. A patch upgrade every 6 months or a year (with minor security updates in between) isn’t going to create as big a problem as a weekly kernel upgrade just for the sake of a kernel upgrade. It might be boring to sit around and wait while your office servers fall a few revisions behind your desktop at home, but it’s worth it. Schedule significant system upgrades after testing in development environments (you know what those are, right?) for a while. Create integration environments so that developers and application groups can make sure their code is going to work with your proposed update. Stay behind the curve a little bit and let others find the bugs in the new code before you’re running it on a system that must be up 99.9999% of the time. Discipline, process, and doing a few extra tedious tasks will give everyone a better impression of some of the solutions you’re proposing. Maybe, just maybe, quality software will catch up with the vendors. Maybe the smiles on their faces won’t be so big and their thinning bank accounts will make them realize that we should all work together to create better code and worry more about things that matter—not just the bottom line.

Notes and resources

Jackiewicz, Tom (2004). “Deploying OpenLDAP”, Apress

Biography

Tom Jackiewicz is currently responsible for global LDAP and email architecture at a Fortune 100 company. Over the past 12 years, he has worked on the email and LDAP capabilities of the Palm VII, helped architect many large-scale ISPs servicing millions of active email users, and audited security for many Fortune 500 companies. Jackiewicz has held management, engineering, and consulting positions at Applied Materials, Motorola, and Winstar GoodNet. He has also published articles on network security and monitoring, IT infrastructure, Solaris, Linux, DNS, LDAP, and LDAP security. He lives in San Francisco’s Mission neighborhood, where he relies on public transportation and a bicycle to get himself to the office, fashionably late. He is the author of Deploying OpenLDAP, published by Apress in November 2004.

Copyright information Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved. Source URL: http://www.freesoftwaremagazine.com/articles/does_fs_make_sense

Published on Free Software Magazine (http://www.freesoftwaremagazine.com)



The will to code

Nietzsche and free software

By David Berry, Lee Evans

“To refrain from injury, from violence, from exploitation, and put one’s will on a par with that of others: this may result in a certain rough sense in good conduct among individuals when the necessary conditions are given (namely, the actual similarity of the individuals in amount of force and degree of worth, and their co-relation within one organisation). As soon, however, as one wished to take this principle more generally, and if possible even as the fundamental principle of society, it would immediately disclose what it really is—namely, a Will to the denial of life, a principle of dissolution and decay”—Nietzsche, Beyond Good and Evil, §259

Free software has been described by theorists such as Benkler (2002) as commons-based peer-production. It is hailed for the revolutionary potential inherent in its oft-described decentred, non-hierarchical and egalitarian (dis)organisation (e.g. Moglen 1999; Hardt & Negri 2004). However, in this paper we intend to see whether reading Nietzsche offers an alternative insight into the workings of free software projects. In particular, an insight that starts from a different point to that of an egalitarian theory and points, instead, to explanations that may cohere around a coding aristocracy. Does an analysis that focuses on the will to power (or perhaps more accurately the will to code) provide any explanatory value in understanding the extremely complex interactions and processes involved in software development within copyleft groups? How might reading Nietzsche help us to question the morality instantiated in such software and associated cultural projects? This short article is a preliminary sketch of how we feel a reading of the practices of the free software movements could be usefully understood through Nietzsche.
In Beyond Good and Evil, On the Genealogy of Morals and elsewhere, Nietzsche examines the origins of “conventional” morality, claiming that prevailing ascriptions of the labels “good” and “evil” are the secularised legacy of Judeo-Christian “ressentiment”. Ideals of compassion and neighbourliness, originating in the “slave” mentality of the oppressed and marginalised Jewry of antiquity, have, through the rise of Christianity, come to exert a pernicious sway over European morality and politics. Reflecting upon the 19th century European milieu, he argued that the democratic-egalitarian impulse is not intrinsically “good” at all, but rather the product of an extended historical process of contest between aristocracy and slaves, rulers and ruled. But this genealogical analysis was not the endpoint of Nietzsche’s investigation. His work can be understood as an extended commentary upon, and dialogue with, this democratic impulse, in which its core premise—that of the possibility and desirability of the drawing of moral and political equivalences between human beings—is subjected to normative (re)evaluation. Possibility, because in the concept of “will to power” he claimed that humans were fundamentally competitive rather than compassionate; desirability, because he forcefully claimed that a moral complex which elevates equality to its central ethical core was fundamentally deleterious to the health of the community.


“Will to Code 1”—art by Trine Bj?ann Andreassen

The claim that the democratic egalitarian impulse is immoral makes for difficult reading, particularly in an age notable for its proselytising of choice, freedom and liberty. But in the spirit of “untimely meditation”—to think outside or against the times—it raises some pertinent questions about the form and consequences of the morality instantiated in contemporary contestations over intellectual property regimes. The aristocratic moment in Nietzsche’s philosophy, where the majority exist to facilitate the pursuit of Beauty, Truth and Legacy by a select group of ubermensch, is redolent of a hierarchical social form to which few would subscribe today. And yet, insofar as he sought to rethink the legitimating narratives of his day in such a way that the contestation of authority became problematic for the “health” of the community rather than its salvation, we argue that it provides an important corrective to uncritical, unreflexive assumptions that the morality inscribed in the free software movement is “good”. Indeed, reading Nietzsche calls on us to (re)consider how to understand and evaluate the moral claims of the free software movement and its contributors in toto. So, for example, insofar as this movement accentuates the democratic-egalitarian impulse, do its members not inadvertently contribute to the ongoing enervation of the res publica in which they are located? Or, conversely, might they be understood as a code aristocracy which, in undertaking a “copyfight”, instantiates a process of self-overcoming through which the res publica is revitalised? And what moral judgement might we ourselves pass on them as a result?

The morality of free software—a code aristocracy?

Before passing moral judgement, then, a moral assessment of the free software projects and contributions to them is required. This assessment has two dimensions: first, does the free software/open-source movement’s elite group of individuals, such as Richard Stallman, Eric Raymond, Linus Torvalds, Alan Cox, Bruce Perens, Tim O’Reilly, Brian Behlendorf, Eben Moglen et al., amount to a Nietzschean coding aristocracy; and second, does the will to power represented by Stallman et al. signify the refraction of a novel moral complex through the social whole in which they are embedded, or are they merely (re)articulating more widely held and understood concepts of what counts as good and evil? What, then, is the morality instantiated in the free software movement by its contributors—the desire to “level” or the desire to lead? In the first case it is clear that there is indeed an argument to be made for the existence of an upper tier of programmers, self-selected, their authority legitimated by claims to “hacker” status. These hackers are often extremely productive and active in their coding activities, sometimes even having the title “benevolent dictator” bestowed upon them (Linus Torvalds being a notable example). They also feel free to proclaim the morals and ethics of the communities they nominally claim to represent, and sometimes take extremely controversial positions and actions (e.g. the Torvalds BitKeeper debacle). Much research is underway in a number of disciplines to understand the free software and open-source movements, but the empirical studies undertaken so far point towards projects with a large number of developers in which a much smaller core cadre of programmers undertakes the majority of the work.
When it comes to discussing difficult issues, decisions and future directions, it is those with “reputational” weight who can carry a particular position (notwithstanding, of course, the dangers of “forking” and the consequent need to keep some semblance of consensus—or perhaps, more pessimistically, hegemony).


Nietzsche and free software


Additionally, nobody can ignore the proclamations of individuals like Richard Stallman and Eric Raymond (whose controversial and widely differing views on the ethics of these software communities we cannot go into here; see for example Berry 2004). Suffice to say that the two movements (i.e. free software and open-source) are important “nodal points” around which discussions are often polarised. Here we concentrate particularly on the arguments made by those who support the position of the free software movement, as we believe that they can and should be separated from the more individualistic, rational-choice theory presented by the open-source community. Additionally, their explicitly moral and ethical claims allow us to examine their arguments within the framework we have discussed. We intend to return to the question of the open-source counter-claims in a later article. Secondly, although a Kantian notion of a categorical imperative seems to underlie the philosophical foundations of the position advocated by Richard Stallman (i.e. what is ethical for the individual must be generalisable to the community of coders), the language utilised by the Free Software Foundation (FSF), and Stallman in particular, grounds the benefits and importance of free software to society in an original reading of the republican values of the US constitution. Separating “free as in free speech” (i.e. libre) from “free as in free beer” (i.e. gratis), he argues forcefully against the dangers threatened through the ownership and control of knowledge. He advocates a voluntaristic project that can counter the damaging constriction of human knowledge through corporate or governmental control (i.e. the right to access code, tinker, and use and reuse ideas and concepts).
He is also remarkably active internationally, giving Zarathustra-like warnings of the dangers of the coming intellectual dark ages in presentations to governments, corporations and “civil society” organisations. A lone voice in the wilderness for many years, Stallman has had the last laugh, as his warnings regarding the enclosure of, and restrictions placed on, knowledge through intellectual property law (e.g. patents and copyright) have come to pass. Yet, during this time, although to a large degree distanced from the wider community, he continued to (almost single-handedly) develop the most important tools necessary to build a philosophy and an operating system that remained outside of the private ownership of individuals (e.g. GNU). Indeed, it could be argued that the Free Software Foundation, which controls the development, is more akin to Res Universitatis than Res Privatae (i.e. it remains outside of private property as normally understood, due to both its non-profit status and the ingenious General Public License). However, in a cruel twist of fate it was left to a young Finnish student, Linus Torvalds, to write the essential core kernel, to name it “Linux”, and thus complete the system. Perhaps more surprisingly, Torvalds also demonstrated a political naivety and a lack of appreciation of the underlying ethical and political project that made his work possible in the first place. It could even be argued that Torvalds’ apolitical, technocratic mentality has aided Stallman’s critics and the open-source movement’s project of de-politicising free software, rather than confirming Stallman’s prescient forecasts.
Nonetheless, Stallman’s GNU/Linux project has paid off in a global debate with truly unforeseen consequences (witness, for example, the spectacle of a music industry finding itself for the first time on the wrong side of the argument against “the system”, appearing less a radical/progressive force in tune with youth culture and more as corporate suits allied with the conservative hierarchy, fighting file-sharing and peer-to-peer networks). The consequences of this project are gradually revealing themselves, moving from technical questions over software to the (always implicit but now increasingly evident) concerns with morality: sharing or profit; our “right” to information against the private ownership of knowledge.

“Will to Code 2”—art by Trine Bjøann Andreassen


Without regard for persons? Or, the res publica vs human beings

In turning to Nietzsche we tread a familiar path in contemporary political thought. Such is the scope of his works that his texts have provided a rich seam for thinkers during the past four decades or so. In fact, there has been no time since his death when he has not been a feature of the political terrain. And yet for all this attention to Nietzsche, the normative core of his political diagnoses is all too often elided, particularly where he has been mobilised to refine various schemata—democracy, feminism and socialism—to which he was implacably opposed. To acknowledge the legitimacy of the method is one thing—his work is a resource to be played with. But we argue that to invoke Nietzsche it is necessary to recognise and engage with his emphatically anti-democratic injunctions. We are not advocating Nietzsche’s binary social distinction: our intention is not to recalibrate the aristocratic moment. But we are intrigued by the possibility of invoking his untimely challenge to the conviction that human beings can be the subject of moral evaluation qua human beings. That we might, in Nietzsche’s words, be able to undertake some form of “revaluation of values”. In this vein we suggest that it is not origins on which moral evaluation should be based, but consequences. In an era in which social democracy’s pact with the market demands that citizens’ rights be balanced by “responsibilities”, and political philosophy continues its Sisyphean struggle to resolve the unresolvable—to proclaim the ethos of community while retaining that lonely figure of the modern sovereign individual as its real ethical core—we wonder whether this revaluation might include reconsideration of the yardstick by which we judge moral agents. And to extend this line of thought, it might be possible to envisage a moral schema in which evaluation of a citizen is accomplished in terms of the service they perform to the community.
In other words, that people be judged in terms of actions, and that actions be judged in terms not of their service to human beings qua human beings but to the social whole. In the free software world that hackers inhabit, participants believe themselves to live in a meritocracy, where only the best programmers rise through the ranks to decide the rules of the game for others. But even here there are stark differences in how the contributions hackers make to a community might be judged: witness, for example, the different ethical standpoints of the free software versus the open-source movement (e.g. community-based ethics against a form of selfish utility maximisation). It is also instructive to see how technological tools are developed by the hackers to discuss technical issues, but also, inevitably, politics, economics and social issues (slashdot.org is a good example). Yet key to a Nietzschean assessment of the morality of the free software movement is the establishment of a meta-morality that enables us to view its claims not oppositionally but historically: to provide a basis for moving beyond evaluation of which is the “most good” to think anew about what is “good” in the first place. If the defeat of old values creates nihilism, the task confronting us is precisely not to place faith in our agency, to think that we can “build” our way out of the moral impasse (as might be implied by the moral topology of contemporary resistance/struggle). The subversion of the old values by their own call to truth does not mean that we now exist in a moral vacuum into which we can add our own progressive morality (borne of countering authority, in this case in the form of IPRs).
No, reading Nietzsche compels us to pause and consider anew the moral topography in which we are located and to which we all contribute. The task is not to innovate values through our agency, but to think how we may contribute to a revaluation of values through that agency—how we may help recalibrate the hierarchy of values. Not to make a new morality, but to refashion the existing one. Nietzsche, then, calls upon us to question whether, in this age of utterly unreflective indulgence of the democratic impulse, we might not serve ourselves and our community better by pausing to think what we are doing.

Notes and resources

Benkler, Y. (2002). Coase’s Penguin, or Linux and the Nature of the Firm. The Yale Law Journal, 112.
Berry, D. M. (2004). The Contestation of Code: A Preliminary Investigation into the Discourse of the Free/Libre and Open Source Movement. Critical Discourse Studies, 1.1.
Bull, M. (2000). Where is the Anti-Nietzsche?, New Left Review, 3.

Hardt, M., & Negri, A. (2004). Multitude: War and Democracy in the Age of Empire. New York: The Penguin Press.
Moglen, E. (1999). Anarchism Triumphant: Free Software and the Death of Copyright. Retrieved 01/03/2003, from http://www.firstmonday.org/issues/issue4_8/moglen/index.html
Nietzsche, F. (1997). Beyond Good and Evil. Mineola: Dover Publications.
Nietzsche, F. (1998). On the Genealogy of Morals: A Polemic. Oxford: Oxford University Press.

Biography

David Berry: David Berry is a researcher at the University of Sussex, UK and a member of the research collective The Libre Society (http://www.libresociety.org/). He writes on issues surrounding intellectual property, immaterial labour, politics, free software and copyleft.

Lee Evans: Lee Evans is a doctoral student at the University of Sussex. He is currently working on theories of civil society and civic republicanism in International Relations.

Copyright information This article is made available under the "Attribution-Sharealike" Creative Commons License 2.5 available from http://creativecommons.org/licenses/by-sa/2.5/. Source URL: http://www.freesoftwaremagazine.com/articles/moral_claims

Published on Free Software Magazine (http://www.freesoftwaremagazine.com)

How to get people to work for free

Attracting volunteers to your free software project

By David Horton

As time marches on and our lives become more complicated, it seems we have less and less time to devote to that free software project we started back in our idealistic youth. Rather than abandoning a good project due to lack of time, consider seeking out the assistance of other members of the free software community. With a few simple steps you can make it easy to find volunteers to help you complete your project.

A roadmap to finding volunteers

You need to start with a solid understanding of your own project before you can expect other people to help you with it. Have you thought about where you want your project to be one year from now? Think about it. Now write it down. Once you know the direction you want your project to go, you can start communicating the big picture to other people. As more people begin to understand your project’s ultimate destination it’s easier for some of them to become interested in helping you get it there. You may start receiving emails asking, “How can I help?” and when people offer to help you need to come up with a better response than, “I don’t know, what can you do?” Be prepared to reply with specific tasks that can be worked on and completed in a reasonable amount of time. You should also make sure you communicate the benefits the volunteers can expect to get from contributing to your project. It’s probably not money, but there are things of value that people can gain as free software volunteers. If this seems like a lot of information to digest, don’t worry, this article will cover each of these topics in greater detail. By the time you finish reading you should have some pretty good ideas of how you can make your free software project more attractive to all of that untapped volunteer talent out there.

Communicating your project’s vision

You probably understand your project’s vision better than anyone else; after all, it’s your project and you designed it. But what about everyone else? Can the average free software user look at your project and think, “I know what this project is about and where it wants to be a year from now”? Chances are you’ve gotten so wrapped up in writing code and releasing patches that you forgot about communicating your project’s big-picture view. If people don’t know where the project is going they won’t know how to help it get there. Your project needs a vision. Now, if you are saying to yourself “I’m a coder, not a management guru”, and wondering how to tackle this vision stuff, don’t worry. Start by looking at some of the popular free software projects on the internet to get some ideas. Most of them will have an “about” section on their web site that communicates the big-picture view in a mission statement. Take OpenOffice.org as an example. Their mission is “To create, as a community, the leading international office suite that will run on all major platforms and provide access to all functionality and data through open-component based APIs and an XML-based file format”. That one sentence sums up the goals of the entire project. Your project may not be as monumental as OpenOffice.org, but you can still have a mission statement. Keep it short and to the point, and remember that you’re not describing the state of your project as it is today, but rather where it is going to be when all the work is finished. Say, for example, that you are working on a killer free-software recipe management system. Your project already has a nice-looking web browser interface and a really powerful database back-end. But it would be a lot better if it could read recipe files from other, proprietary recipe management software packages.
The vision for this project might be summed up as “To build a powerful, free, web-based recipe management system that is able to import files from the popular proprietary recipe management programs”. Now, that wasn’t too difficult, was it?

Identifying goals and tasks

Creating a vision for your project is similar to deciding where to go on vacation. You might know that you want to end up on a sunny beach with a cool drink in your hand, but you still have to figure out how you’re going to get there. Do you fly or drive? If you drive, where will you stop for lunch? Do you need to book a hotel? Free software projects have similar questions that need to be addressed. To answer these questions you need to set some goals.

Start by breaking your project into its major components. For example, if your project’s vision is “to build a free, web-based recipe management system that is able to import files from the popular proprietary recipe management programs”, you could set goals as follows:

• Create an easy-to-understand user interface with HTML/PHP
• Build an efficient database back-end
• Write code to import recipe files from other programs

If you’ve already put some work into the project there may only be a few items that need attention. These items can be identified and recorded as specific tasks. Suppose that you are happy with the look and feel of the browser interface for your recipe manager, but it’s marked up with HTML 3.2 and really should be updated to XHTML. So the only thing preventing you from completing the goal of creating an easy-to-understand user interface is the fact that your HTML is outdated. Congratulations, you have just identified a task. Record this task and continue looking at the other goals to identify more tasks. Another goal you’ve stated is to be able to import recipe files from other programs.
This can comprise several tasks, such as the following:

• Write code to convert the Meal Master file format into the native file format
• Write code to convert the AccuChef file format into the native file format
• Write code to convert RecipeBook-XML into the native file format

Continue the process of examining goals and picking out tasks until you have identified all of the significant tasks for the project.
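The import tasks above split naturally into independent volunteer jobs if each converter plugs into a common registry. The sketch below is purely illustrative for the article's hypothetical recipe project: the format names, the "native" dict layout, and the stub parser are all assumptions, not real Meal Master handling.

```python
# Hypothetical sketch for the article's recipe-manager example: a small
# converter registry so each import task can be tackled independently by a
# different volunteer. Formats and the "native" dict layout are invented.
CONVERTERS = {}

def converter(fmt):
    """Register a parser function for one foreign recipe format."""
    def register(func):
        CONVERTERS[fmt] = func
        return func
    return register

@converter("mealmaster")
def from_mealmaster(text):
    # A real implementation would parse Meal Master's field layout; this
    # stub just treats the first line as the recipe title.
    lines = text.splitlines()
    return {"title": lines[0], "body": "\n".join(lines[1:])}

def import_recipe(fmt, text):
    """Dispatch raw recipe text to the converter registered for its format."""
    if fmt not in CONVERTERS:
        raise ValueError("no converter for format: %s" % fmt)
    return CONVERTERS[fmt](text)
```

A volunteer who signs up for the AccuChef task would only need to add one more decorated function; nothing else in the system changes.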

Creating job descriptions

Now that you have identified a number of tasks that need to be completed, it’s time to find someone to help you work on them. Think about each task for a moment. Is it a one-time thing or is it on-going? How long will it take to complete the task? Can it be accomplished by one person or will it take several? Answering these questions will help form the basis of the job description for a person who would perform the task. Posting these job descriptions on your project’s web site will help you recruit the right people to get the task finished. Several major free software projects have a “tasks” or “to-do” section on their web site that can be used to get an idea of how to structure a job description. The format is largely a matter of preference, but be sure to include basic who, what, where, when, why and how information in the job description. Who is the type of person to successfully complete this task? What exactly are they working on? Where do they submit the finished work? When does it need to be finished? How should they go about working on this? Including this information ensures that the volunteer working on your project knows what is expected to get the task finished. Take the example of updating the HTML 3.2 mark-up to XHTML. You could do this yourself, because you consider yourself to be an HTML guru, but unfortunately you just don’t have the time. So you need to find another HTML guru to help you. Congratulations, you’ve just identified the “who” portion of your job description. You need “an XHTML Guru”. Continuing on with the what, where, and how, you might end up with the following simple job description:

XHTML Guru needed to update HTML 3.2 mark-up for a web-based recipe management system. Approximately 300 lines of mark-up need updating. Use of the vi editor is a must. Please contact
admin@free-recipe-project.com if interested.

Now, if you are paying attention, you may have noticed that the question of “why?” was skipped. That’s because “why?” is often the hardest question to answer. There are only twenty-four hours in a day and most people seem to need about twenty-five. If time is so scarce, why should people give it to you for free? Think about that when you answer the “why?” question.

Enticing others to join your vision

There are many reasons that people volunteer to work on free software projects. Some people like the challenge or want to build their skills, others feel obligated to give something back to the community and some simply want to see their name associated with a great free software project. When asking people to volunteer their time to your project, make sure they know that they will get something back in return for their efforts. It is easier to describe concrete benefits, so concentrate on them rather than abstract ideas like giving something back to the community. Make the benefits of volunteering part of the job descriptions. Take another look at the job description that was created for the XHTML Guru and think about the “why?” question. Why would someone want to volunteer to update HTML 3.2 to XHTML? It doesn’t sound particularly glamorous. But what about someone who is a student enrolled in a web development class? This type of volunteer job might sound like a good opportunity for a class project or as a resume builder. Thinking about things from the volunteer’s perspective can help you make your job descriptions more attractive. Take a look at the XHTML Guru job description with this new information added.

Looking to gain some experience as a web designer? The Free Recipe Project is looking for an XHTML Guru to update HTML 3.2 mark-up for a web-based recipe management system. Approximately 300 lines of mark-up need updating. Use of the vi editor is a must. Please contact admin@free-recipe-project.com if interested.

Putting it into practice

Now that you’ve had a crash course in recruiting volunteers, take some time to apply this information to your own free software project. Take a break from coding and think about the big picture for a while. Where do you want your project to be a year from now? What are the major pieces of the project? What specific tasks need to be addressed so these major pieces can be completed? Who has the skills that are required to get these tasks done? How will you show appreciation to the people who help you? Now record this information where everyone can see it. If your project has a web site, use it as a volunteer recruiting and recognition tool. Add your project’s vision to the top of the main web page. Create a “help wanted” page to list job descriptions for tasks that need to be finished. And make sure you create a “thank you” page recognizing all of the people who have volunteered their time to help you advance the project.

Biography

David Horton: David Horton got started with GNU/Linux in 1996 when he needed a way to share a single dial-up internet connection with his college room-mates. He found the solution he needed with an early version of Slackware and a copy of the PPP-HOWTO from The Linux Documentation Project. Ten years later he is older and wiser and still hooked on GNU/Linux. Many of Dave's interests and hobbies can be explored on his website (http://www.happy-monkey.net).

Copyright information This article is made available under the "Attribution-Sharealike" Creative Commons License 2.5 available from http://creativecommons.org/licenses/by-sa/2.5/.

Source URL: http://www.freesoftwaremagazine.com/articles/recruting_people

Towards a free matter economy (Part 3)

Designing the Narya Bazaar

By Terry Hancock

“Space is open to us now; and our eagerness to share its meaning is not governed by the efforts of others. We go into space because whatever mankind must undertake, free men must fully share.”—John F. Kennedy

The beginning of this series presented the motivations behind creating a free-licensed design marketplace for material products. Now, I hope to detail the design concept of a specific package: “Narya Bazaar”[1] is to be a web e-commerce application designed for a free-licensed economy. It will need to have many features in common with other e-commerce systems (shopping carts, credit card payments, and so on), but here I want to explain the unique part of the design, which is the “Bargain Protocol” that links our three principal actors: projects, donors and vendors.

Actors

Projects in the Narya system are more-or-less as they are in the GForge[2] (or SourceForge) free software project incubator. They may be run by single individuals or groups of people. Groups might be affiliated only through common interest, or be co-workers funded by a commercial institution, though the former is much more likely. There are particular roles within the project that are common to other project systems, such as the “project leader”, but Narya Bazaar introduces another particular role, which I call the “quartermaster” (QM). Like a real quartermaster, the project QM is in charge of the project’s material stores, and is trusted by project members with this role. Furthermore, in order to interact with the Bazaar system, the QM must have a physical mailing address and account information on file at the site. Since this represents some loss of privacy, it’s understandable that projects will not want all of their members to have to agree to these terms, but at least one member needs to if they want to take advantage of Bazaar’s provision of materials and services. Donors are the wonderful people who have money to spend on seeing that projects of interest to them succeed. Without them, of course, there would be no point in designing Bazaar at all. We do not treat them as a magic source of funding here, however, but as customers who expect to see value for the money they contribute. This value comes in the form of newly available technologies that they can use, and in the manufactured incarnations of those technologies as provided by vendors. Finally, vendors are the commercial providers of the materials and services that are needed by projects to complete their development work, and who must be paid using funds that come from donors.
These are fairly well-understood commercial entities, although they may take any commercial form, from a self-employed consultant to a large contract manufacturer or commodity supplier. In some cases, the “vendor” in the Bazaar system will actually be a reseller of services procured by external means. Vendors sell to donors in essentially two modes: the first is direct sales, in which the “donor” is more properly called simply the “customer”; the second is providing services to projects that the donor is supporting. In a successful free-matter economy, the former kind of sale will dominate in sales figures, although the latter will probably be the main form of sale at the beginning, and may always be the largest in number of unique transactions. However, since the former case is handled by standard e-commerce solutions, I will not expand further on it. It is the latter mode that most needs explanation here.

Project needs

In the previous two articles, I have outlined the fears and needs of the donors and vendors by way of outlining the requirements of this protocol. The one remaining class of actors is probably the most obvious one—the project developers. Fortunately, the fears and resulting needs that project developers have from the Bazaar
system are pretty straightforward:

Funding
• Fears: that the project will be impossible to complete because of materials costs.
• Wants: a fundraising system that allows interested parties to help them with costs.

This is the whole point of the Bazaar system.

Services
• Fears: the project will require work that participants are unable to do for themselves.
• Wants: a way to purchase necessary services.

In short, we need to have vendors available to provide the services. Given the difficulty of sourcing, ordering, and receiving high-tech components in prototype quantities, and the difficulty of finding testing services which are normally only marketed to commercial organizations, this would be a problem even if money were no object.

Commercialization
• Fears: commercial needs will overwhelm project participants and steal control of project goals.
• Wants: to control the project without the threat of “takeover” by donors or vendors.

Project developers have chosen the free-design route for a reason. Any design which tends to destroy the free market of ideas surrounding free-licensed development must be avoided. Although Bazaar will undoubtedly include a “tip-barrel” much as SourceForge now does, the focus will be on “provision” funding rather than “grant” funding for this reason (projects wishing to operate on a direct-payment basis can always function as vendors in the Bazaar system, of course). Combined with the needs outlined in parts 1 and 2, this gives us the requirements that drive the design of the Narya Bazaar Bargain Protocol.

CMS and design tools

Most project developer needs are not for the funding system described here, but for the content creation and management systems that will be needed to collaborate with developers on a global (or even interplanetary) scale. Fortunately, this is one area where there is much support from the free software world already.
In the simplest case we could simply use the GForge application (the free branch of the SourceForge web application), or we could extend any of a wide variety of CMS systems. For Narya[3] (which is being written in Python[4]) I have chosen to develop a content system based on Zope[5], which provides very useful structural abstractions for the job. I’m hedging my bet, though, by providing “application views” that allow me to window other services inside the Narya framework. This would, for example, allow running GForge, at least as a transitional measure. Unfortunately, there are some gaping holes in the available free design tools for serious hardware design. I hope to cover the most important ones in the next installment of this series.
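The "application view" idea can be sketched in plain Python. To be clear, this is not Narya or Zope code: the class names, the path-dispatch convention, and the HTML wrapping below are all invented here to show the shape of the mechanism (an external service mounted at a path, with its output framed by the host system).

```python
# Illustrative sketch only: an "application view" windows an external web
# service inside a host framework by delegating a sub-path to a handler
# and wrapping the result in the host's own chrome.
class AppView:
    """Wraps an external application so it can be framed by the host."""
    def __init__(self, name, handler):
        self.name = name          # mount point, e.g. "gforge"
        self.handler = handler    # callable: sub_path -> HTML fragment

    def render(self, sub_path):
        # The host framework would add navigation, headers, etc. around this.
        return "<div class='app-view'>%s</div>" % self.handler(sub_path)

class Framework:
    def __init__(self):
        self.views = {}

    def mount(self, view):
        self.views[view.name] = view

    def dispatch(self, path):
        # "/gforge/tracker" -> view "gforge", sub-path "tracker"
        name, _, rest = path.strip("/").partition("/")
        return self.views[name].render(rest)

site = Framework()
# A transitional GForge view would fetch the real page; here a stub stands in.
site.mount(AppView("gforge", lambda p: "external page: /%s" % p))
```

The point of the indirection is that GForge (or any other service) can later be replaced by a native component without changing the URLs users see.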

Resolving trust faults

Most of the needs outlined for these three actors are types of assurances that need to be made to resolve the natural trust faults of the bargain arrangement, as diagrammed in Figure 1. Note that faults exist not only between the classes of parties involved in the bargain, but also among the individual actors within each group. Indeed, these include some of the most serious problems to be solved.


Designing the Narya Bazaar


The “free-rider” problem is the classic “tragedy of the commons” problem, in which donors fear that other donors will not “ante up” to support the projects they do, resulting in wasted funds. On the other hand, if donors have some assurance that their contributions will leverage or control other people’s contributions, they will be more likely to spend (since there is an amplification of utility involved). Naturally, such control of others’ funds must not exceed the willingness of those others to participate. There are proven methods of solving these problems, such as “matching funds” and “spending caps”. In fact, there is an electronic protocol called the “Rational Street Performer Protocol” (RSPP)[6] which seeks to maximize funding from such sources, using conditional pledges. These pledges outline the basic facts about a particular contribution: an amount the donor is willing to spend unconditionally, a fraction indicating how much leverage they want to insist on, and finally a cap on how much they can afford to spend.
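As a rough illustration of how such conditional pledges might be summed (the fixed-point iteration and all names here are my own sketch, not Harrison’s actual protocol):

```python
from dataclasses import dataclass

@dataclass
class Pledge:
    unconditional: float  # amount the donor will pay regardless of others
    fraction: float       # extra paid per unit contributed by other donors
    cap: float            # the most this donor will pay in total

def total_raised(pledges, iterations=100):
    """Sum conditional pledges by fixed-point iteration: each donor
    pays their unconditional amount plus their fraction of what the
    other donors pay, clipped at their personal cap."""
    amounts = [p.unconditional for p in pledges]
    for _ in range(iterations):
        total = sum(amounts)
        amounts = [
            min(p.cap, p.unconditional + p.fraction * (total - a))
            for p, a in zip(pledges, amounts)
        ]
    return sum(amounts)
```

A pledge of “10 units, plus half of whatever everyone else pays, up to 100” thus amplifies, but never exceeds, the donor’s stated limits.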

Figure 1: Trust Faults. In order to ensure fair dealing between the participants, the Bazaar software must provide a solution for the trust faults between the three principal classes of actors, as well as among the individuals in each group.

The internal problems among vendors are the same old problems as in all commercial free markets. Essentially, the playing field must be kept level enough, and individual players have to be kept from monopolizing the field through winner-take-all tactics. Furthermore, there must be penalties for rule-breaking, to keep dishonest businesses out and to protect the honest ones (as well as to keep them honest). In an electronic marketplace, anonymity and distance heighten the threat of fly-by-night operators, so a means of automating the word-of-mouth knowledge that protects smaller communities is needed. Fortunately, good examples of this kind of strategy can be found in pre-existing applications, including moderated forums and eBay’s rating system[7].

Projects have problems similar to vendors’ in terms of ensuring fair competition, even if money is not involved. Fair assessment of projects on an objective basis, and fair sharing of site resources, are necessary to avoid conflict and to avoid losing projects from the system.

The most complex trust fault in the system, however, arises from the problem of paying for goods to be manufactured. It takes irrecoverable time and energy to manufacture a good or provide a service. Payment in advance would solve this problem for vendors, but donors would then have to rely on vendors to deliver the requested product or service at a service-quality level appropriate for the money paid. Market forces will tend to push vendors to cut corners, and shoddy work may result if no direct feedback on service quality is provided.
Again, however, there is a standard solution to this type of trust fault: an outside party, trusted by both sides, can be chosen to hold the funds in escrow while the transaction is being completed. This provides assurance to the vendor that they will get paid if they deliver, and assurance to the donor that payment will not be made if quality standards are not met. This introduces a new actor: the “Quality Assurance Authority” (QA).





Specifications

Relying on quality assurance for payment does, of course, increase the burden on the vendor to ensure that there are objective QA tests that they can pass. This in turn requires the project to provide unambiguous specifications and tests. Hopefully, developers familiar with “test-driven development” will find this a tolerable requirement. It also, of course, makes sense in terms of the project’s own reliability needs.

Figure 2: Specifications. Project participants will create many design documents in the process of coming up with a design that needs to be prototyped. From these, the project quartermaster must construct a specification that formally defines each of the steps or items that are to be provided to the project by the vendor, as well as defining rules which will affect the bargaining process and other options.

In order to clarify the process of creating these specifications, and to show that it is feasible to provide automated means of doing so, it’s necessary to segment the possible types of specifications that I anticipate. Naturally, all of these categories are really “services”, but there are policy and delivery issues unique to each:

• A general service spec requests that a certain service, such as professional (certified) engineering review or design-to-requirements work, be done for the project. Other examples include simulation, computation services, and programming or documentation. The product falls under the project license as a work-for-hire and will be subsumed into the project.

• A standard service spec requests services which can be quoted automatically, such as notary services or engineering review of standard types of designs. Prices might be flat-rate or based on easy-to-measure factors such as “number of plan pages”. Terms are otherwise the same as for a general service.

• A manufacture to plan spec requests that a project design be built to the specified plans and tolerances. The vendor assumes no responsibility for functionality, only that the manufacturing was done as specified. Since the design is the project’s, free licensing is guaranteed and prior art precludes patentability (assuming it was not previously patented).

• A manufacture to requirements spec requests that a product be made to meet certain functional requirements, but the implementation is left up to the vendor. In this case, the licensing is controlled by the vendor, within whatever vendor licensing policy is in effect on the site.

• A standard manufacture spec, like the standard service spec, can be quoted automatically, but is otherwise similar to plan or requirements manufacturing. Examples include electric motors, gears, printed circuit boards, etc. The vendor warrants performance because the design uses standard practice. Licensing is generally a non-issue, because either the product is too standardized to be proprietary (e.g. a spur gear), or the proprietary aspects of the design are hidden behind standard interfaces (e.g. an electric motor).

• A standard supply spec is for a standardized part that is probably already stocked, such as discrete electronics, standard sized pulleys or belts, or even a whole personal computer. Note that in later generations of the Bazaar, a free-licensed design may be offered by vendors as a standard part in this way.

Each type of specification has particular special behaviors in the system, from either the vendor or the project perspective. For example, prices on supply specs are likely to be fixed in bulk, and “standard” specs will




probably allow vendors to provide scripts and standard specification content objects—the scripts would inspect the content object for the necessary information and generate a quote automatically.

Fuzzy MRP

Manufacturing resource planning (MRP) is the process by which companies try to control the costs and timescales associated with manufacturing parts which are made of parts which themselves must be manufactured or purchased, and so on. If properly done, MRP allows for “just in time” manufacturing and greatly reduced needs for warehousing space.

MRP for a free-licensed matter economy, however, introduces some special challenges, since the agents producing each component of a product are not internally controlled by any one organization. Instead, the application of market forces will determine estimated production timescales and costs. In all probability, this will actually be more efficient than centralized manufacturing in the average case, but the worst case cannot be so easily constrained. Fortunately, the Bazaar’s bargain transaction database should provide some basis for making statistical predictions about the time to deliver particular products and services, and allow estimates to be made for new ones based on that data. Since times and costs will be tracked, predictions can reasonably be made in this way. Adapting MRP[8] systems to handle the intrinsic fuzziness of bazaar-type marketplaces will probably be an interesting challenge.

Finally, of course, the higher levels of involvement of general services allow for involvement as extreme as writing pieces of software or designing major system components on a for-pay basis, to be incorporated into a free design.
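The kind of statistical prediction suggested here could start very simply: summarize delivery times for past bargains of the same type. A minimal sketch (the record layout is my own invention, not the Narya schema):

```python
import statistics

def estimate_delivery(history, spec_type):
    """Estimate delivery time for one spec type from past bargains.

    `history` is a list of (spec_type, days_to_deliver) tuples drawn
    from the bargain transaction database.  Returns (mean, worst_case),
    where worst_case is the mean plus two standard deviations, or None
    if there is not enough data to predict.
    """
    days = [d for t, d in history if t == spec_type]
    if len(days) < 2:
        return None
    mean = statistics.mean(days)
    spread = statistics.stdev(days)
    return mean, mean + 2 * spread
```

A real system would have to cope with skewed distributions and sparse data, which is where the “fuzziness” comes in.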
This is a likely place to find a “project acting as vendor” in the system—Bazaar will place no restrictions on users playing multiple roles, although when acting as a vendor one must follow vendor rules, and so on. In the interest of avoiding scams, such multiple-role cases will likely be flagged so that it is clear to all participants what is happening.

Transactional specification

In addition to the above segmentation of specification types, it must also be noted that specifications will usually be multiple, with bargains requiring all to succeed in order for any to proceed (that is, they are “transactional”). There are two major use cases for this:

• Build and test—the cases where the QA can verify delivery by simple visual inspection are probably rare. In order to provide a meaningful assurance of quality, he will need test data results. So, for example, a critical welding job might need to be sent to a lab for X-rays, while a less critical one might simply need to be inspected by a trained welder. The QM must carefully determine what tests will be done, as they will be the main assurance of quality.

• Shopping list—for many jobs, there may be a need to collect a long shopping list of components. Clearly, if they can’t all be purchased, buying just a few is a waste of resources. So, all of the components for one job might be combined into one transactional specification.

Placing a specification up for bid will be a task for the project QM, although she can obviously be assisted by other members of the project to clarify finer points. The specification will be created out of existing project content objects, designed to be adequate to the purpose.
In addition to the individual specifications, the QM will have further options available to her. The most important is probably the choice of QA authority to administrate the bargain. Clearly, it must be someone that




both potential vendors and project members trust. Letting the QM or the prospective vendors themselves act as QA is also possible (and probably cheaper), but it clearly carries certain risks. Another factor is the length of time to allow the bargain to continue—all bargains will have a finite time limit—and whether the bargain should terminate as soon as a solution is found, or wait to see if a better deal is offered before the deadline runs out (is speed or a lower price more important?). Other parameters include how fast delivery is to be made, what forms of shipping are allowed, and so on.

Vendor constraints

One objection to this type of system is that it makes price the only factor in determining who gets a contract. This could result in poor quality standards, even with QA inspection. Since a system for rating vendors is envisioned, however, it seems reasonable to allow projects or donors to apply constraints on whom they are willing to do business with. Vendors below a certain rating might be rejected due to questions about their quality of service or sales practices, allowing a “reputation game” to exist among vendors.

There are other reasons for restricting vendors, too, such as shipping costs or customs problems. It may be desirable to ensure that a product will only have to be shipped within the QM’s home country, for example, or even that it is available from a supplier close enough for the QM to pick it up in person. Or, the supplier may have to provide particular shipping options suitable to the QM.

Such a constraint system will need to be used by the QM when putting the specification up for bid.
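Such a constraint system amounts to filtering the vendor pool before bids are considered. A hypothetical sketch (the field names are illustrative only):

```python
def eligible_vendors(vendors, min_rating=None, allowed_countries=None):
    """Filter the vendor pool by the QM's constraints.

    `vendors` is a list of dicts with (illustrative) keys 'name',
    'rating', and 'country'.  Constraints left as None are not applied.
    """
    result = []
    for v in vendors:
        if min_rating is not None and v["rating"] < min_rating:
            continue  # reputation below the QM's threshold
        if allowed_countries is not None and v["country"] not in allowed_countries:
            continue  # shipping or customs constraint
        result.append(v)
    return result
```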

Bargaining process

Once the specification has been created to the QM’s satisfaction, the bargain can proceed to bid. The strategy is a kind of “reverse auction”, in which vendors bid on each sub-specification (and possibly more than one, if spec constraints allow it). As soon as at least one bid exists for each part of the spec, a “bargain cost” is established. After that, subsequent bids may lower, but not raise, the bargain cost.
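The “bargain cost” rule can be stated compactly: it exists only once every sub-specification has at least one bid, and it is the sum of the lowest bid on each part. A sketch of that calculation:

```python
def bargain_cost(bids):
    """Current bargain cost from vendor bids.

    `bids` maps each sub-specification name to the list of bid amounts
    received so far.  A cost exists only once every part has at least
    one bid; it is then the sum of the lowest bid on each part, so a
    new (lower) bid can reduce it but never raise it.
    """
    if any(not offers for offers in bids.values()):
        return None  # no complete solution yet
    return sum(min(offers) for offers in bids.values())
```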

Figure 3: Bargain. The bargaining process is essentially just a matter of weighing the funds that can be raised from donors against the sum of the minimum bid costs from vendors.

Meanwhile, RSPP pledging is allowed to start as well, so that donors can pledge support for the bargain. These pledges are summed following a modified version of the original RSPP fund-raising algorithm. Pledges, once made, are not withdrawable until the bargain ends, so the available money to meet the bargain cost can go up, but not down.

Both bidding and pledging continue at least until the available funds match the bargain cost. Calculations may be complicated a bit by constraints placed by donors on vendor qualifications, since if a vendor is questionable to one donor, we have to consider both the solution without that donor and the solution without that vendor. Essentially, however, this is a simple comparison of two sums (see Figure 3).




If the QM has opted for “close on solution”, the bargain will end early if a solution is found. Otherwise, the bargain will persist until the prescribed deadline, waiting for the possibility that another vendor will underbid one of the current vendors in order to capture the contract, possibly reducing the cost.

Quality assurance and delivery

The entire bargain protocol is summarized in Figure 4, which also shows the final steps taken once the bargain succeeds, as well as the flow of cash through the system. As soon as pledges have been made, an estimate of the maximum pledge amount is locked in each donor’s account to avoid potential overdrafts. If and when the bargain succeeds, the actual amount of each pledge is computed (using a modified version of the RSPP algorithm that avoids overshooting the bargain cost), and moved into a site escrow account reserved for this bargain. Any excess “pledged” money is returned to the donor’s account so they can use it elsewhere.
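The “avoid overshooting” step might be illustrated with a simple pro-rata settlement (a deliberate simplification of my own, not the actual modified RSPP computation):

```python
def settle_pledges(max_amounts, cost):
    """Split the bargain cost among donors without overshooting it.

    `max_amounts` are the per-donor maximums locked when pledges were
    made (assumed to sum to at least `cost`).  Each donor is charged a
    pro-rata share of the cost, never more than their maximum, and the
    unused part of each lock is refunded.  Returns (charged, refunded).
    """
    total = sum(max_amounts)
    charged = [cost * m / total for m in max_amounts]
    refunded = [m - c for m, c in zip(max_amounts, charged)]
    return charged, refunded
```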

Figure 4: Whole bargaining system. (1) The QM creates a specification from project content objects which are usable for that purpose. (2) This becomes the bargain object. (3) Vendors are free to bid as soon as the bargain object is created. (4) Donors can transfer funds into their Bazaar accounts at any time. (5) Once they have funds to spend, donors can contribute to bargains that interest them. (6) If and when a solution is found, the bargain succeeds, and funds are transferred to escrow (any excess returns to the donors’ accounts). (7) The vendor provides the proposed service. (8) The QA inspects the vendor’s product to make sure it meets the specification. If so, the product is passed on to the project quartermaster, and the funds are released to the vendors’ accounts. (9) Vendors can collect their earnings at any time.

At this point, notices are sent to all parties that the money has been escrowed to pay for the contracted services. The vendor is cleared to begin work on delivering the order. As each vendor completes their work, the QA is notified and the project proceeds to the next vendor (if sequenced processing is required; otherwise all components can be processed in parallel), possibly requiring a shipping step. Finally, when all stages are complete, the QA receives all material and documentation results. Assuming that QA standards are met (e.g. tests confirm that the work is acceptable), the QA releases the escrow, allowing all of the vendors to collect their payments. At the same time, the QA ships all product materials to the QM. At this point, the QA collects a small transaction fee from the sale to compensate them for their service (this fee is included in the bargain-cost computation and is advertised by the QA to the QM who chose him). Collection of QA transaction fees is one means by which a Narya Bazaar site owner might make revenue.

What if it didn’t work?
One of the other options for the QM is to constrain what sort of remedies are available if a product or service fails QA testing. In the worst case, the service is rejected, the bidding cycle must start again, and the vendor absorbs a loss. This is bad for the vendor, but is clearly the right behavior for critical components. In the most lenient case, the QA may notify the QM of the problem, allowing the QM to accept the product anyway. At the same time, the vendor would be given the same information, with the option to rework the product or service to try to bring it up to spec.




Clearly, there are other possibilities which can be resolved among the QA, QM, and vendor before giving up completely, but if all else fails, the bargain is rejected and must start again. In that case, funds are released from escrow back into the donors’ accounts. As a special exception, it might be possible to allow the bargain to revert to “open” again if its time has not yet expired (though this can only happen in the “close on solution” case). A similar situation occurs if no solution is found before the bargain’s closing date. In that case, funds simply revert to the donors, and it’s up to the project to revise their specification to see if they can make it more attractive and try again.

Expediting the bargaining process

Given that many common classes of specification can be bid on automatically, it should be possible for the bargaining system to approach the simplicity and speed of a standard “shopping” or “requisitions” experience for some transactions, while retaining the flexibility to deal with more complex, manually quoted contracts. At least, this will be so if adequate suppliers are available and the investment is made in automated bidding scripts. Competition would occur through the proxy of the automated agents created by vendors. Many strategies are possible for maximizing the utility of this system, and it promises lower overall prices than can be expected from a standard “e-store” system. For common cases, the risks may be low enough that the reduced cost and greater convenience of allowing the vendor or project QM to act as QA will be desired, eliminating QA costs and double shipping.
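An automated bidding script of the kind described for “standard” specs could be as small as this (the fee structure and field name are hypothetical, not part of the Narya design):

```python
def auto_quote(spec, base_fee=40.0, price_per_page=12.50):
    """Hypothetical automated bidding script for a 'standard service'
    spec: inspect the content object for an easy-to-measure factor
    (here, a page count) and generate a quote.  The 'plan_pages' field
    and the fee structure are illustrative only.
    """
    pages = spec.get("plan_pages")
    if pages is None:
        return None  # not quotable automatically; requires a manual bid
    return base_fee + price_per_page * pages
```

A vendor would register such a script against a spec type; the Bazaar would run it to place the vendor’s bid without human intervention.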

The shape of things to come

The future economy will deal with many realities that contrast with today’s close-knit, ship-to-anywhere world, especially as we move out into space. Present models for colonizing Mars, for example, rely heavily on concepts of In Situ Resource Utilization (ISRU), or “living off the land” as Dr. Robert Zubrin called it in The Case for Mars[9] (Figure 5), in order to avoid impossibly high expense. At the same time, there is rising interest in commercial space development and dissatisfaction with today’s government-subsidized programs, which do seem to be very stagnant. But how does ISRU work in a free-market economy? The very nature of ISRU requires that products be manufactured on site, with manufacturing localized to the end user (possibly done by the end user). This is in stark contrast to our e-commerce, mail-order economy, where manufacturing localization is regarded as irrelevant, and it forces the question: what will be sold by free-market entrepreneurs in this coming interplanetary era?

Figure 5: ISRU on Mars. Current plans for colonizing Mars rely heavily on drawing from natural resources on site, like this atmospheric processing station which, in the Mars-Society-inspired NASA “Design Reference Mission”[10], would produce fuel for the return journey by extracting gases from the Martian atmosphere. Such techniques are better suited for development by free-licensed design markets than by existing proprietary ones, because they do not readily provide a material “product” to be sold to consumers.




Clearly the delivery will not be products so much as the information required to make them, and this puts the matter economy on a par with the information economy. Indeed, nearly all meaningful trade will be in information, while true material production is localized to the person who needs it, external to the trade economy. This places us in the position of deciding whether we want the hardware design economy to follow the model of the proprietary software giants or that of the free-licensed software world. Which rules are we more willing to live by: those of intellectual property, or of intellectual freedom? If we choose freedom, we’re going to have to build the tools to make it work, and create a culture of people who know how to use those tools.

Notes and resources

[1] Narya Bazaar (http://bazaar.narya.net)
[2] GForge (http://gforge.org/projects/gforge/)
[3] Narya Project (http://narya.net)
[4] Python (http://www.python.org)
[5] Zope (http://www.zope.org)
[6] Paul Harrison, The Rational Street Performer Protocol (http://www.logarithmic.net/pfh/rspp)
[7] eBay (http://ebay.com)
[8] ERP5 MRP (http://www.erp5.org/)
[9] Robert Zubrin, The Case for Mars, 1996.
[10] Mars Society, “NASA Design Reference Mission” (http://www.marssociety.org/interactive/art/nasa_charts.asp)

Biography

Terry Hancock is co-owner and technical officer of Anansi Spaceworks (http://www.anansispaceworks.com/), dedicated to the application of free software methods to the development of space.

Copyright information

This article is made available under the “Attribution-Sharealike” Creative Commons License 2.5, available from http://creativecommons.org/licenses/by-sa/2.5/.

Source URL: http://www.freesoftwaremagazine.com/articles/free_matter_economy_3

Published on Free Software Magazine (http://www.freesoftwaremagazine.com)








What is code? A conversation with Deleuze, Guattari and code

By David Berry, Jo Pawlik

The two of us wrote this article together. Since each of us was several, there was already quite a crowd. We have made use of everything that came within range, what was closest as well as farthest away. We have been aided, inspired, multiplied [1].

JP: Code is described as many things: it is a cultural logic, a machinic operation or a process that is unfolding. It is becoming today’s hegemonic metaphor, inspiring quasi-semiotic investigations within cultural and artistic practice (e.g. The Matrix). No-one leaves before it has set its mark on them...

DB: Yes, it has become a narrative, a genre, a structural feature of contemporary society, an architecture for our technologically controlled societies (e.g. Lessig) and a tool of technocracy, of capitalism and law (Ellul/Winner/Feenberg). It is both metaphor and reality; it serves as a translation between different discourses and spheres: DNA code, computer code, code as law, cultural code, aristocratic code, encrypted code (Latour).

JP: Like the code to nourish you? Have to feed it something too.

DB: Perhaps. I agree that code appears to be a defining discourse of our postmodernity. It offers both explanation and saviour: for example, the state as machine, running a faulty form of code that can be rewritten and re-executed. The constitution as microcode, law as code. Humanity as objects at the mercy of an inhuman code.

JP: True, and it gathers together a disturbing discourse of the elect. Code as intellectual heights, an aristocratic elect who can free information and have the wisdom to transform society without the politics, without nations and without politicians. Code becomes the lived and the desired. Both a black box and a glass box. Hard and unyielding and simultaneously soft and malleable.
DB: Code seems to follow information into a displaced subjectivity, perhaps a new and startling subject of history that is merely a reflection of the biases, norms and values of the coding elite. More concerning, perhaps: code as the walls and doors of the prisons and workhouses of the 21st century. Condemned to make the amende honorable before the church of capital.

JP: So, we ask: what is code? Not expecting to find answers, but rather to raise questions. To survey and map realms that are yet to come (AO: 5). The key for us lies in code’s connectivity; it is a semiotic chain, rhizomatic (rather like a non-hierarchical network of nodes), and hence our map must allow for it to be interconnected from anything to anything. In this investigation, which we know might sometimes be hard to follow, our method imitates that outlined by Deleuze & Guattari in Anti-Oedipus (2004). It will analyse by decentering onto other dimensions and other registers (AO: 8). We hope that you will view this article as a “little machine” (AO: 4), itself something to be read slowly, or fast, so that you can take from it whatever comes your way. It does not ask the question of where code stops and society starts; rather, it forms a tracing of the code-society or the society-code.

DB: Dystopian and utopian, both can cling like Pincher Martin to code. Code has its own apocalyptic fictions: crashes and bugs, Y2K and corruption. It is a fiction that is becoming a literary fiction (Kermode). We wish to stop it becoming a myth, by questioning code and asking it uncomfortable questions. But by our questioning we do not wish to be considered experts or legislators; rather, we want to ask again who are the “Gods” of the information age (Heidegger). By drawing code out and stretching it, we hope to make code less mysterious, less an “unconcealment that is concealed” (Heidegger).
JP: Perhaps to ask code and coders to think again about the way in which they see the world, to move from objects to things, and to practice code as poetry (poiesis). Rather than code as ordering the world, fixing and overcoding: code as a craft, “bringing-forth” through a showing or revealing that is not about turning the world into resources to be assembled and reassembled forever.




DB: And let us not forget the debt that code owes to war and government. It has a bloody history, formed from the special projects of the cold war, a technological race that got mixed up with the counter-culture but still fights battles on our behalf. He laid aside his sabre. And with a smile he took my hand.

Deleuze

Code as concept

DB: A stab in the dark. To start neither at the beginning nor the end, but in the middle: code is pure concept instantiated into the languages of machines. Coding is the art of forming, inventing and fabricating structures based on these languages. Structures that constrain use as well as free it. The coder is the friend of the code, the potentiality of the code, not merely forming, inventing and fabricating code but also desiring. The electric hymn book that Happolati invented. With electric letters that shine in the dark?

JP: And what of those non-coders who use code, or rather are used by code, instead of forming it? Code can enable, but it can also repress. Deleuze believes that we live in a society of control and that code is part “of the numerical language of control”, requiring of us passwords, user names, and the completion of form fields to either grant or deny access to information, goods and services (1992).

DB: Yes, code becomes the unavoidable boundary around which no detour exists in order to participate fully in modern life. It is ubiquitous. Formatted by code, harmonised with the language of machines, our life history, tastes, preferences and personal details become profiles, mailing lists, data and ultimately markets. Societies of control regulate their population by ensuring their knowing and unknowing participation in the marketplace through enforced compatibility with code. Watch over this code!... Let me see some code!

JP: But there is no simple code. Code is production, and as such is a machine. Every piece of code has components and is defined by them. It is a multiplicity, although not every multiplicity is code. No code is a single component, because even the first piece of code draws on others. Neither is there code possessing all components, as this would be chaos. Every piece of code has a regular contour defined by the sum of its components. The code is whole because it totalises the components, but it remains a fragmentary whole.
DB: Code arborescent. Plato’s building agile, object-oriented and postmodern codes under the spreading chestnut tree.

JP: But computers are not the only machines that use code. Deleuze believes that everything is a machine, or, to be more precise, every machine is a machine of a machine. By this he means that every machine is connected to another by a flow—whether this flow is air, information, water, desire etc.—which it interrupts, uses, converts and then connects with another machine.

DB: I agree that human beings are nothing more than an assemblage of several machines linked to other machines, though centuries’ worth of history have us duped into thinking otherwise.




JP: But does every machine have a code built into it which determines the nature of its relations with other machines and their outputs? How else would we know whether to swallow air, suffocate on food or drink sound waves? There is even a social machine, whose task it is to code the flows that circulate within it: to apportion wealth, to organise production and to record the particular constellation of linked-up flows that define its mode of being.

DB: Up to this point, code verges towards the deterministic or the programmatic, dependent upon some form of Ur-coder who might be synonymous with God, with the Despot, with Nature, depending on to whom you attribute the first and last words.

JP: But Deleuze delimits a way of scrambling the codes, of flouting the key, which enables a different kind of de/en-coding to take place and frees us from a pre-determined input-output, a=b matrix. Enter Desire. Enter Creativity. Enter the Schizo. Enter capitalism? You show them you have something that is really profitable, and then there will be no limits to the recognition of your ability.

Code as Schizo

DB: Deleuze & Guattari warned us that the Schizo ethic was not a revolutionary one, but a way of surviving under capitalism by producing fresh desires within the structural limits of capitalism. Where will the revolution come from?

JP: It will be a decoded flow, a “deterritorialised flow that runs too far and cuts too sharply”. D & G hold that art and science have a revolutionary potential. Code, like art and science, causes increasingly decoded and deterritorialised flows to circulate in the socius. To become more complicated, more saturated. A few steps away a policeman is observing me; he stands in the middle of the street and doesn’t pay attention to anything else.

DB: But code is bifurcated between a conceptual and a functional schema, an “all-encompassing wisdom [=code]”. Concepts and functions appear as two types of multiplicities or varieties whose natures are different. Using the Deleuzean concept of the Demon, which indicates, in philosophy as well as science, not something that exceeds our possibilities but a common kind of these necessary intercessors as respective “subjects” of enunciation: the philosophical friend, the rival, the idiot, the overman are no less demons than Maxwell’s demon or than Einstein’s or Heisenberg’s observers (WIP: 129). Our eyes meet as I lift my head; maybe he had been standing there for quite a while just watching me.

JP: Do you know what time it is?

HE: Time? Simple Time?... Great time, mad time, quite bedevilled time, in which the fun waxes fast and furious, with heaven-high leaping and springing—and again, of course, a bit miserable, very miserable indeed, I not only admit that, I even emphasise it, with pride, for it is sitting and fit, such is artist-way and artist-nature.





Code and sense perception

DB: In code the role of the partial coder is to perceive and to experience, although these perceptions and affections might not be those of the coder, in the currently accepted sense, but belong to the code. Does code interpellate the coder, or only the user? Ideal partial observers are the perceptions or sensory affections of code itself, manifested in functions and “functives”, the code crystallised affect.

JP: Maybe the function in code determines a state of affairs, thing or body that actualises the virtual on a plane of reference and in a system of co-ordinates, a dimensional classification; the concept in code expresses an event that gives consistency to the virtual on a plane of immanence and in an ordered form.

DB: Well, in each case the respective fields of coding find themselves marked out by very different entities that nonetheless exhibit a certain analogy in their task: a problem. Is this a world-directed perspective—code as an action facing the world?

JP: Does that not consist in failing to answer a question? In adapting, in co-adapting, with a higher taste as problematic faculty, are corresponding elements in the process being determined? Do we not replicate the chains of equivalence, allowing the code to code, so to speak, how we might understand it?

DB: Coders are writers, and every writer is a sellout. But an honest joy / Does itself destroy / For a harlot coy.

JP: We might ask ourselves the following question: is the software coder a scientist? A philosopher? Or an artist? Or a schizophrenic?

AL: For me the only code is that which places an explosive device in its package, fabricating a counterfeit currency. Which in part the knowing children sang to me.

Dr. K: This man is mad. There has been for a long time no doubt of it, and it is most regrettable that in our circle the profession of alienist is not represented. I, as a numismatist, feel myself entirely incompetent in this situation.
DB: For Deleuze, the ascription of these titles exceeds determining whether the tools of the trade in question are microscopes and test-tubes, cafés and cigarettes, or easels and oil-paints. Rather, they identify the kind of thinking that each group practices. Latour claimed that if you gave him a laboratory he could move the world. Maybe prosopopoeia is part of the answer; he should ask code what it thinks.

JP: But not just the kind of thinking: also the kind of problems which this thought presupposes, and the nature of the solutions that it can provide. To ask under which category the coder clicks her mouse is to question whether she is creating concepts, as opposed to dealing in functives like a scientist, or generating percepts and affects like an artist.

DB: If you’re actually going to love technology, you have to give up sentimental slop, novels sprinkled with rose water. All these stories of efficient, profitable, optimal, functional technologies.

JP: Who said I wanted to love technology?

DB: The philosopher loves the concept. The artist, the affect. Do the coders love the code?

JP: If we say that code is a concept, summoning into being or releasing free software as an event, the coder is cast first and foremost as a philosopher. The coder, as philosopher, could neither love nor covet her code prior to its arrival. It must take her by surprise. For the philosopher, or more specifically the conceptual personae through whom concepts come to pass and are given voice (Deleuze does not strictly believe in the creativity of an individual ego), Deleuze reserves a privileged role in the modern world, which is so woefully lacking in creation and in resistance to the present. He writes: “The creation of concepts in itself calls for a future form,
for a new earth and people that do not yet exist” (1994, 108). Deleuze would hope this future form would be recognisable by virtue of its dislocation from the present.

DB: If the software coder really is a philosopher, what kind of a future is free software summoning, and who are the new people who might later exist?

JP: Thanks to computers, we now know that there are only differences of degree between matter and texts. In fact, ever since a literary happy few started talking about “textual machines” in connection with novels, it has been perfectly natural for machines to become texts written by novelists who are as brilliant as they are anonymous (Latour). But then is there no longer any difference between humans and nonhumans?

DB: No, but there is no difference between the spirit of machines and their matter, either; they are souls through and through (Latour).

JP: But don’t the stories tell us that machines are purported to be pure, separated from the messy world of the real? Their internal world floating in a platonic sphere, eternal and perfect. Is the basis of their functioning deep within the casing numbers ticking over numbers, overflowing logic registers and memory addresses?

DB: I agree. Logic is often considered the base of code. Logic is reductionist not accidentally but essentially and necessarily; it wants to turn concepts into functions. In becoming propositional, the conceptual idea of code loses all the characteristics it possessed as a concept: its endoconsistency and its exoconsistency. This is because a regime of independence has replaced that of inseparability; the code has enframed the concept.


Code as science

DB: Do you think a real hatred inspires logic’s rivalry with, or its will to supplant, the concept? Deleuze thought “it kills the concept twice over”.

JP: The concept is reborn not because it is a scientific function and not because it is a logical proposition: it does not belong to a discursive system and it does not have a reference. The concept shows itself and does nothing but show itself. Concepts are really monsters that are reborn from their fragments.

DB: But how does this relate to the code, and more specifically to free software and free culture? Can we say that this is that summoning? Can the code save us?

JP: Free software knows only relations of movement and rest, of speed and slowness, between unformed, or relatively unformed, elements, molecules or particles borne away by fluxes. It knows nothing of subjects but rather singularities called events or haecceities. Free software is a machine, but a machine that has no beginning and no end. It is always in the middle, between things. Free software is where things pick up speed, a transversal movement that undermines its banks and accelerates in the middle. But that is not to say that capital does not attempt to recode it, reterritorialising its flows within the circuits of capital.




DB: A project or a person is here definable only by movements and rests, speeds and slownesses (longitude) and by affects, intensities (latitude). There are no more forms, but cinematic relations between unformed elements; there are no more subjects, but dynamic individuations without subjects, which constitute collective assemblages. Nothing develops, but things arrive late or in advance, and enter into some assemblage according to their compositions of speed. Nothing becomes subjective, but haecceities take shape according to the compositions of non-subjective powers and effects. Maps of speeds and intensities (e.g. SourceForge).

JP: We have all already encountered this business of speeds and slownesses: their common quality is to grow from the middle, to be always in-between; they have a common imperceptible, like the vast slowness of massive Japanese wrestlers, and all of a sudden, a decisive gesture so swift that we didn’t see it.

DB: Good code, bad code. Deleuze asks: “For what do private property, wealth, commodities, and classes signify?” and answers: “The breakdown of codes” (AO, 218). Capitalism is a generalised decoding of flows. It has decoded the worker in favour of abstract labour; it has decoded the family, as a means of consumption, in favour of interchangeable, faceless consumers; and it has decoded wealth in favour of abstract, speculative, merchant capital. In the face of this, it is difficult to know whether we have too much code or too little, and what the criteria might be by which we could make qualitative distinctions between one type of code and another, such as code as concept and code as commodity.

JP: We could suggest that the schizophrenic code (i.e. schizophrenic coding as a radical politics of desire) could seek to de-normalise and de-individualise through a multiplicity of new, radical collective arrangements against power. Perhaps a radical hermeneutics of code: code as locality and place, a dwelling.

DB: Not all code is a dwelling.
Bank systems, facial recognition packages, military defence equipment and governmental monitoring software are code but not a dwelling. Even so, this code is in the domain of dwelling. That domain extends over this code and yet is not limited to the dwelling place. The bank clerk is at home on the bank network but does not have shelter there; the working woman is at home on the code but does not have a dwelling place there; the chief engineer is at home in the programming environment but does not dwell there. This code enframes her. She inhabits them and yet does not dwell in them.

Code as art

JP: You are right to distinguish between code as “challenging-forth” (Heidegger) and code that is a “bringing-forth”. The code that is reterritorialised is code that is proprietary and instrumental, and has itself become a form of “standing-reserve”.

DB: So how are we to know when code is a “bringing-forth”? How will we know if it is a tool for conviviality? How will we distinguish between the paranoiac and the schizophrenic?

JP: We know that the friend or lover of code, as claimant, does not lack rivals. If each citizen lays claim to something, then we need to judge the validity of claims. The coder lays claim to the code, and so do the corporation and the lawyer, who all say, “I am the friend of code”. First it was the computer scientists who exclaimed, “This is our concern, we are the scientists!”. Then it was the turn of the lawyers, the journalists and the state, chanting “Code must be domesticated and nationalised!” Finally the most shameful moment came when companies seized control of the code themselves: “We are the friends of code, we put it in our computers, and we sell it to anyone”. The only code is functional and the only concepts are products to be sold. But even now we see the lawyers agreeing with the corporations: we must control the code, we must regulate the code, the code must be paranoiac.

DB: This is perhaps the vision offered by William Gibson’s Neuromancer, a dystopian realisation of the unchecked power of multinational corporations which, despite the efforts of outlaw subcultures, monopolise code. Through their creation of AI entities, code becomes autonomous; it exceeds human control. If indeed it makes sense to retain the term human, which Gibson pejoratively substitutes with “meat”. The new human-machinic interfaces engendered by software and technological development demand the jettisoning of received categories of existence as they invent uncanny new ones.




JP: This is the possibility of code. The code as a war machine. Nomadic thought. The code as outsider art, the gay science, code as desiring-production, making connections, ever new connections.

DB: Code can be formed into networks of singularities, into machines of struggle. As Capital deterritorialises code, there is the potential through machines to reterritorialise. Through transformative constitutive action and network sociality—in other words, the multitude—code can be deterritorialising; it is multiplicity and becoming, it is an event. Code is becoming nomadic.

JP: This nomadic code upsets and exceeds the criteria of representational transparency. According to Jean Baudrillard, the omnipresence of code in the West—DNA, binary, digital—enables the production of copies for which there are no originals. Unsecured and cut adrift from the “reality” which representation has for centuries prided itself on mirroring, we are now in the age of simulation. The depiction of code presents several difficulties for writers who, in seeking to negotiate the new technological landscape, must somehow bend the representational medium of language and the linear process of reading to accommodate the proliferating ontological and spatio-temporal relations that code affords.

DB: This tension is as palpable in Gibson’s efforts to render cyberspace in prose (he coined the term in Neuromancer) as it is on the book cover, where the flat 2D picture struggles to convey the multi-dimensional possibilities of the matrix. The aesthetics of simulation, the poetics of cyberspace and of hyperreality are, we might say, still under construction.

JP: Perhaps code precludes artistic production as we know it. Until the artist creates code and dispenses with representational media altogether, is it possible that her work will contribute only impoverished, obsolete versions of the age of simulation?

DB: Artists have responded to “code” as both form and content.
As form, we might also think of code as “genre”, the parodying of which has become a staple of the postmodern canon. Films such as the “Scream” series, “The Simpsons” or “Austin Powers” flaunt and then subvert the generic codes upon which the production and interpretation of meaning depend. More drastically, Paul Auster sets his “New York Trilogy” in an epistemological dystopia in which the world does not yield to rational comprehension, as the genre of detective fiction traditionally demands. If clues are totally indistinguishable from (co)incidental detail, how can the detective guarantee a resolution? How can order be restored? As Auster emphasises, generic codes and aesthetic form underwrite ideological assumptions and can be described as the products of specific social relations.

JP: And what of code as content? Like “The Matrix”. Here is a film which has latched onto the concept of code, and also its discussion in contemporary philosophy, almost smugly displaying its dexterity in handling both.

DB: Or “I ♥ Huckabees”, with its unfolding of a kind of existential code that underlies human reality. Are our interpretations shifting to an almost instrumental understanding of code as a form of weak structuralism? Philosophy as mere code, to be written, edited and improved, turned into myth so that our societies can run smoothly.

JP: The hacker stands starkly here. If code can be hacked, then perhaps we should drop a monkey-wrench in the machine, or sugar in the petrol tank of code? Can the philosopher be a model for the hacker, or the hacker for the philosopher? Or perhaps the hacker, with the concentration on smooth, efficient hacks, might not be the best model. Perhaps the cracker is a better model for the philosophy of the future.
Submerged, unpredictable and radically decentred. Outlaw and outlawed.

DB: Perhaps. But then perhaps we must also be careful of the fictions that we both read and write, and keep the radical potentialities of code and philosophy free. Wet with fever and fatigue, we can now look toward the shore and say goodbye to where the windows shone so brightly.




Notes

[1] We were, in fact, at least four, and we think you can guess who the others were.

Notes and resources

Deleuze, G. (1990). Postscript on the Societies of Control. L’autre Journal, Nr. 1.
Deleuze, G. (2004). Foucault. London: Continuum.
Deleuze, G., & Guattari, F. (1994). What is Philosophy? London: Verso.
Deleuze, G., & Guattari, F. (2004). Anti-Oedipus: Capitalism and Schizophrenia. London: Continuum.
Deleuze, G., & Guattari, F. (2003). A Thousand Plateaus: Capitalism and Schizophrenia. London: Continuum.

Biography

David Berry: David Berry is a researcher at the University of Sussex, UK and a member of the research collective The Libre Society (http://www.libresociety.org/). He writes on issues surrounding intellectual property, immaterial labour, politics, free software and copyleft.

Jo Pawlik: Jo Pawlik is a doctoral student at the University of Sussex researching the interaction between the American counterculture and French poststructuralism, focusing in particular on the deployment and political purchase of the concepts of madness and schizophrenia.

Copyright information

This article is made available under the “Attribution-Sharealike” Creative Commons License 2.5, available from http://creativecommons.org/licenses/by-sa/2.5/.

Source URL: http://www.freesoftwaremagazine.com/articles/what_is_code
