Issue 20
Published on Free Software Magazine (http://www.freesoftwaremagazine.com)
By admin

Issue 20 of Free Software Magazine has hit the virtual newsstand! Mauro Bieg talks about DRM, while our tips&tricks hosts, Gary and Andrew, uncover more GNU/Linux secrets. Andrew Min then tells you how to have the best-looking desktop with Compiz Fusion. Scott Carpenter and Gary Richmond talk about Nautilus and Konqueror in their respective articles, and Solveig Haugland comes back to FSM talking about OpenOffice.org... and these are just some of the articles in the User Space section! Hackers can rejoice reading about David Welton's Hecl (yes, he created it!), and learning how to install a UPS (Ken Leyba), get rid of the command line (Andrew Min), manage MySQL (Alan Berg), run a free software project (John Calcote) and much more. This issue is a real feast. Now it's up to you to enjoy it: it's free as in freedom.

Source URL: http://www.freesoftwaremagazine.com/issues/issue_020
So, why, why do people and companies develop free software?

By Tony Mobily

More and more people are discovering free software. Many people only do so after weeks, or even months, of using it. I wonder, for example, how many Firefox users actually know how free Firefox really is—many of them realise that you can get it for free, but find it hard to believe that anybody can modify it and even redistribute it legally. When the discovery is made, the first instinct is to ask: why do they do it? Programming is hard work. Even though most (if not all) programmers are driven by their higher-than-normal IQs and their amazing passion for solving problems, it’s still hard to understand why so many of them would donate so much of their time to creating something that they can’t really show off to anybody but their colleagues or geek friends.

The first myth is that free software programmers are all starving. Many people don’t realise that a lot of free software programmers are actually paid to do their work. They are definitely lucky: they might be employed by a big company like Red Hat, which has never disappointed in terms of licensing and patch submissions. Or, they might work as contractors on specialised modules, on the basis that their code will be available to others; this happens a lot with the CMS Drupal, which we use for Free Software Magazine. On the other hand, for every (more or less) paid free software programmer there are many more who aren’t. They do it because they either need or want something that doesn’t exist (or does exist, but they need it done in a different way), or because they just love programming and being part of a fantastic, enormous and ever-growing community.

Paid or unpaid, company or private programmers, the question remains: why do they do it? The answer, as amazing as it sounds, is “convenience”. It’s better, and more importantly cheaper, to develop free software. A good example is Red Hat, which created Red Hat Enterprise Linux (RHEL).
RHEL is based on thousands of pieces of free software, as well as extra packages that are developed internally. Unlike many of their less successful competitors, everything—even the custom software they’ve written—is released under the GPL (or another license which is ultimately based around the idea of being able to share the code). By releasing everything under the GPL, they basically get thousands and thousands of beta testers who test their code and send patches back to make sure that things get fixed. (For those who aren’t developers: a “patch” is a modification to an existing program, made in order to fix problems or extend functionality.) If Red Hat didn’t release the code, they would have to spend enormous amounts of money to do what they do—and it wouldn’t be half as good.

What about CentOS, the Red Hat Enterprise Linux clone which uses Red Hat’s source packages and doesn’t require you to have a support contract with Red Hat in order to use it? I am sure CentOS “costs” Red Hat decent amounts of money in terms of lost revenue; however, I also know that it actually helps Red Hat’s sales (I, personally, know of two different companies that started out with CentOS and “upgraded” to RHEL), and creates an army of system administrators who are used to CentOS and are going to pick Red Hat Enterprise Linux when their company wants a supported operating system. It’s a bit like paying for advertising, really.

I talked about patches… why would all those people send patches back to Red Hat? Because it’s better to do so. Take Apache, for example. If your company runs Apache on its servers, you, of course, need it to work right. Now, if it doesn’t and you find a bug, you can report the bug to the Apache developers. However, the bug might be one that will only affect a small minority of users; this might mean that it will have a very low priority for the developers.
If it’s important enough to you, you might decide to try and fix it yourself or, perhaps, pay somebody else to fix it.
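If you do decide to fix it yourself, the whole cycle can be sketched with the standard diff and patch tools. (The file names and the “bug” below are invented for illustration; a real Apache fix would of course touch source code, not a config file.)

```shell
# Work in a scratch directory with a made-up config file.
mkdir -p /tmp/patch-demo && cd /tmp/patch-demo
printf 'Listen 80\nServerName old.example.org\n' > httpd.conf.orig
cp httpd.conf.orig httpd.conf

# Fix the "bug" locally...
sed -i 's/old.example.org/www.example.org/' httpd.conf

# ...and capture the fix as a unified diff: this is the patch you would
# submit upstream, or keep and re-apply after every upgrade.
# (diff exits 1 when the files differ, hence the || true.)
diff -u httpd.conf.orig httpd.conf > servername.patch || true

# An "upgrade" ships the stock file again; re-apply the saved patch:
cp httpd.conf.orig httpd.conf
patch httpd.conf < servername.patch
grep ServerName httpd.conf   # prints: ServerName www.example.org
```

Once the fix is merged upstream, the re-apply step disappears entirely—which is exactly the convenience argument being made here.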
If you send your patch back to the Apache developers, you will know that the bug will be fixed in every new release of Apache, so you won’t have to keep fixing it every time you upgrade to a new version. Not only that, but everybody else who uses Apache will benefit too. Your patch will be checked over by amazing developers, improved, discussed and improved some more. On the other hand, if you decide that you want to be greedy, and you don’t submit the patch, you will have to re-apply it every time a new version of Apache comes out—and hope that your patch still works each time. You will also have to settle for a patch that hasn’t been peer-reviewed and, therefore, could (err… will), itself, be buggy. This is also true when you apply patches that would give your company a so-called “competitive advantage”: you might decide to improve Apache so that it’s vastly better than the “stock” version in some specific (and critical, to you) ways. However, you have the same problem: you will have to hope that whatever you change will keep on working over time with each version, and you will have to invest real money in developing and testing the patch(es).

I recently developed a karma module for Drupal. We wanted it for Free Software Magazine; therefore, I can say that I “got paid” to write it. Now, the module has been used on many other sites and is reviewed and improved by many other developers. On top of that, I also get recognition for having written a very powerful karma module for Drupal.

It might take the poetry away from free software, when you say that people and companies write it because it “suits them”. But, it may well be the case. Does anybody mind? I don’t, as long as software stays free—as in freedom.
Biography

Tony Mobily: Tony is the founder and the Editor In Chief of Free Software Magazine
Copyright information

Source URL: http://www.freesoftwaremagazine.com/articles/editorial_20
Information technology, 'piracy' and DRM

The copyright war and its implications

By Mauro Bieg

Over at Sphere of Networks, I published a text that tries to give a simple overview of the workings of information production in the age of the internet, covering everything from free software to free culture. This article is a slightly modified version of a chapter of this text. I will show how peer-to-peer file-sharing networks work and how Big Media tries to prevent this sharing by means of random lawsuits and by using DRM. What does this copyright war mean for consumers and for our culture as a whole?
Figure 1: Computers continue to get smaller, cheaper and more powerful, and wireless internet is already available at many places. [shapeshift, CC-by-nc-sa 2.0]
Information technology

Since the 1970s, computers have become ever faster, smaller and cheaper. This led to the availability of personal computers in most homes in the more economically developed countries. By 1995, the public started to realize the potential of the internet. Through the more recent introduction of broadband internet, most home computers are constantly connected to the internet at several times the speed of only a few years ago. Individuals have gained the power to manipulate huge amounts of data ever faster, and (more importantly) to share the data they produce with others over the internet.

Now it is possible for everyone to manipulate, for example, hours of homemade video and enhance it with some special effects. On the other hand, it has also become easy to mix and manipulate the works of others and share them with all the world, as demonstrated on sites like The Trailer Mash: here lots of people present new creative works they created—remixes of official movie trailers, rearranged to tell other stories, using past cultural production to create something new. What only some lucky entrepreneurs like Walt Disney could do at the beginning of the 20th century, everybody can do today.

Unfortunately, at the moment this is mostly illegal because of lengthy and restrictive copyright law. You can neither copy nor modify any work without the originator’s explicit permission. It isn’t as though those derivative works were hurting sales of official movies, or as though nobody would listen to classical Mozart anymore (himself long dead) because his music was used and newly interpreted by a DJ. Nonetheless, copyright law has not been altered to let everyone make use of the new technologies. On the contrary, under the pressure of Hollywood and the Big Four record labels that dominate over 80% of the music market [1], copyright law has
been tightened to prevent people from using these technologies. Under the pretext of protecting established artists’ revenues, new artists are prevented from rising. But the truth behind all this is that the record industry fears for its existence—rightly so.
File sharing

Today, inexpensive home computers and the internet are superior to the distribution channels of the record industry, with its CD manufacturing plants and many shops. Digital technology enables infinite copying of music and movies without any loss in quality. Shawn Fanning, a then 17-year-old student, released Napster in 1999. It was the first peer-to-peer file-sharing system to gain widespread popularity for sharing music. In peer-to-peer (P2P) networks, the data isn’t stored on a central server and accessed by clients (which is the case with web pages); instead, many peers, usually ordinary home computers, share their data with one another. Soon this technology was adopted and improved; after Napster was sued by the record industry and ultimately shut down, new networks emerged which were even more decentralized. Every user downloading information is at the same time making that very same information available to other participants.
Figure 2: A network with a central, expensive server from which the clients are downloading (left). A peer-to-peer network where every node is both a client and a server, downloading and uploading (right). [Wikimedia Commons]

The most popular networks as of today are eDonkey2000 (with client programs such as eMule or MLDonkey), FastTrack (with clients such as Kazaa or Grokster) and Gnutella (with clients such as LimeWire or Gnucleus) [2]. While these networks are searchable through a client program, in the BitTorrent P2P network data is found through websites like The Pirate Bay, where a small .torrent file is downloaded and then opened in a BitTorrent client such as Azureus or BitTornado, where the actual download takes place [3]. Most of this software is free software, but some is also proprietary.

These services are heavily used. Millions of users all over the world share thousands of titles and even rare songs they wouldn’t find in stores. The major music labels don’t like that and have come up with the term “piracy”, equating people who share music with one another with bandits who attack other people’s ships. The record industry argues that every downloaded song accounts for a loss in CD sales, which ultimately hurts artists. However, it should be kept clearly in mind that not every song downloaded would have been bought. Also, through sharing samples, people are exposed to new music and might come to buy CDs they otherwise would never have known of. And downloading old material that isn’t available in stores anymore surely doesn’t hurt artists. It is also a fact that under the current model of music distribution, the average artist gets something between 5 and 14 percent of the CD sales revenue [4]. The rest trickles away in the business that is the record industry.
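Returning to the mechanics for a moment: the reason the swarm downloading described above can be trusted at all is checksumming. A .torrent file carries a SHA-1 hash for every piece of the data, and each peer re-hashes what it receives, discarding pieces that don’t match. A toy version of that check, using the ordinary sha1sum tool (file names and contents invented):

```shell
mkdir -p /tmp/swarm-demo && cd /tmp/swarm-demo
printf 'pretend this is one piece of a shared file' > piece0

# The publisher records the expected hash (in BitTorrent, the
# .torrent file plays this role for every piece).
sha1sum piece0 > piece0.sha1

# A downloading peer re-checks the copy it received:
sha1sum -c piece0.sha1   # prints: piece0: OK
```

A piece that was corrupted in transit, or deliberately forged by a malicious peer, would fail this check and simply be fetched again from someone else.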
But now a new distribution model becomes feasible. Lots of ordinary people, connected through the internet, outperform the record industry and make it essentially obsolete, in a time when high-quality recording equipment to supplement home computers becomes ever cheaper. It simply isn’t necessary to buy physical records anymore. Especially for unknown artists, the internet represents a very attractive marketing ground; with services like Last.fm or Pandora, it has become very easy to discover new music. Artists who release their music can earn money by performing and going on tour (like artists always did before recording technology was invented). Alternative payment systems have also been proposed: mechanisms like an easy way to donate small amounts of money to musicians over the internet, or to let every person downloading pay a small monthly fee which is distributed to the artists based on their popularity. This could for example be done by bundling a voluntary fee with the broadband bill (“Broadband, unlimited legal downloading included!”) [5]. With these distribution and compensation methods, artists would most certainly be far better off than now, but the major labels aren’t willing to adapt yet. Instead they are tilting at windmills, with all means available to them.
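As a back-of-the-envelope sketch of the popularity-based proposal, here is how a flat monthly fee could be split pro rata over play counts. (The artists, the play counts and the $5 fee are all invented; a real scheme would aggregate counts across all subscribers.)

```shell
# One subscriber's play counts for the month, as "artist plays" pairs.
cat > /tmp/plays.txt <<'EOF'
alice 30
bob 20
carol 50
EOF

# Split a flat $5 fee in proportion to plays.
LC_ALL=C awk '{plays[$1] = $2; total += $2}
     END {for (a in plays) printf "%s gets $%.2f\n", a, 5 * plays[a] / total}' \
    /tmp/plays.txt | sort | tee /tmp/payouts.txt
```

With 100 plays in total, alice receives $1.50, bob $1.00 and carol $2.50—every dollar of the fee reaches an artist, with no distribution business in between.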
Figure 3: The record industry has always been cautious about new technology—here a 1980s campaign logo against home taping on cassettes
Lawsuits

As there is no single entity responsible for the operation of peer-to-peer file-sharing networks, there is nobody in particular the music industry can sue. That’s why the RIAA (Recording Industry Association of America) turned to randomly suing people who have allegedly participated in file sharing for copyright infringement, in the hope of deterrence. To find people in file-sharing networks, they rely on tracing computers’ IP addresses. But it is often very difficult to find out who a specific IP address belongs to, and impossible to tell with certainty. That’s why the RIAA has already sued a 66-year-old grandmother for downloading gangster rap; families without a computer and even dead people have also been sued [6]. The RIAA’s tactic is to intimidate defendants and force them into settlements out of court under the threat of high legal fees. But recently, victims of such random lawsuits began fighting back and countersued the RIAA for malicious prosecution [7]. Still, it continues to be an uphill battle, and non-profit organizations like the Electronic Frontier Foundation (EFF), which fight for digital rights and provide individuals with legal defense, have only limited resources compared to the large legal departments of the major record labels. To preserve its last-century business model, the record industry has actually turned to suing its own customers, something that’s only possible because a few companies hold a monopoly on about 90% of the music produced [8].
Copyright in a digital world

As video files are larger, downloading movies or TV series isn’t as common as sharing music yet, but it is only a matter of time before Hollywood finds itself in the same situation the record industry is in now. Social practices like going to the movies will remain popular in addition to watching films at home. As digital technology allows for ever cheaper production of videos and music, movies will eventually be produced at lower budgets than what is common in Hollywood today, but there will probably be more, smaller films,
oriented towards more specific audiences, rather than the homogeneous monster productions we are seeing today.

Copyright law was always meant to regulate copying. However, in the past this was something only competing businesses, like other book publishers, could do. But today, everyone can copy a file with a simple mouse-click. Thus the scope of this law has changed dramatically over time: from regulating anticompetitive business practices to restricting consumers. Keeping up these same rules in a digital world does nothing but label a large portion of citizens as criminals—for no obvious reason. Additionally, lots of creative works, which wouldn’t have been possible without inexpensive computers and the internet, are prevented from being published legally.

Today’s copyright law just doesn’t make sense when applied to digital technology. For example, it could even be argued that looking at a website is copyright infringement: in order to display a webpage on the screen, the computer has to download it and make a local copy of the data stored on the server. As this is an unauthorized copy, it is actually illegal. Not only has the scope of copyright changed dramatically, but copyright terms have also gone up like crazy. Copyright law is there to give authors the exclusive right to copy their works for a limited period of time. After that, copyright expires and the work passes into the public domain. Then everybody can make whatever use of it they want, without restrictions. This term was originally 14 years after the publication of the work. Since then, however, it has been regularly increased. Through heavy lobbying from the record industry and Hollywood, the U.S. Congress has extended copyright terms eleven times since 1962.
Today, in the United States, copyright persists for 70 years after the author’s death; for corporate works it’s 120 years after creation or 95 years after publication, whichever expires first.
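The corporate-work rule is easy to misread, so a worked example may help (the dates are invented): for a work created in 1928 and first published in 1930, the term ends at the earlier of 1928 + 120 and 1930 + 95.

```shell
created=1928
published=1930

# Corporate term: 120 years from creation or 95 from publication,
# whichever expires first.
expiry=$(( created + 120 < published + 95 ? created + 120 : published + 95 ))
echo "protected through $expiry"   # prints: protected through 2025
```

Nearly a century of protection, against the original 14 years.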
DRM

Technology can also be used against people. Under the pretext of fighting “piracy”, the major entertainment companies have come up with ever stronger copy protections, all of them having something in common—they have been cracked very quickly [9].
Figure 4: A parody of the image above, this one campaigning against DRM. [Wikimedia Commons]

In the ’80s, Hollywood was crying that home video recording would kill film production. It didn’t, although you usually could easily copy VHS video tapes. On DVDs, copy prevention was already present from day one. Now the industry is pushing new formats to protect HD video (high definition, resulting in a sharper picture) that are meant to replace DVDs. A standard hasn’t been reached yet, and two different optical disc technologies are now fighting for dominance: Blu-ray and HD-DVD. Both implement new and stronger copy preventions which force the consumer to buy not only new players, but also new displays (so that Hollywood can even control the signal between the player and the display) [10]. Equipment has to be certified to be able to play back these media (as is already the case with DVDs). Free software solutions are thus excluded from the system right from the start—in order to play DVDs on a GNU/Linux computer the copy prevention has to be cracked, which is done very quickly nowadays. However, even these new technologies,
implemented in both Blu-ray and HD-DVD, have already been cracked by numerous methods, even before the discs get to consumers’ homes.

In the music market, the standard audio CD provides digital music of satisfactory quality to most listeners. CDs have been around for a long time, and stem from a time when copy prevention wasn’t common yet. There have been several attempts to implement copy prevention later on, resulting in audio CDs that some players couldn’t play, which led to consumer frustration. With the rise of Apple’s iTunes Store, the music industry has slowly started to realize the possibilities of distributing music through the internet. Now people can buy songs and download them right away. The prices are comparable to CDs. Although virtually all distribution costs go away for the labels, artists don’t receive higher payments for the songs sold on the iTunes Store [11].
Figure 5: The iPod made carrying your whole music collection in your pocket mainstream. Although its monopoly on playing DRM-crippled media from Apple’s own iTunes Store is criticized, the iPod is no doubt one of the devices spurring the digital revolution. [Wikimedia Commons]

Music doesn’t just reside on a CD anymore: it is sold through the internet and transferred to portable music players like the iPod. The music industry cannot prevent its customers from copying it; as a result, it developed technologies to limit access to music. For these new technologies, the umbrella term DRM is used. Originally standing for Digital Rights Management, opponents like the Free Software Foundation refer to it as Digital Restrictions Management. With DRM, the data, for example music or video, is digitally encrypted and can only be played back by specific devices or software. “Unauthorized copies” can be prevented, or content can even be set to expire after a specific period of time. Songs purchased on Apple’s market-leading iTunes Store, for example, bear the following (compared to others, still relatively lax) restrictions: while a track can be copied onto up to five different computers, playback is only possible with Apple’s iTunes software and on no other portable music player than Apple’s iPod. Music isn’t bought anymore; it is rather just rented for limited use.

It is also possible to sustainably sell DRM-free music over the internet (which can then be played back by any device, including the iPod): stores like eMusic, which currently has 250,000 subscribers, do just that. However, the major record labels refuse to sell their songs without DRM, leading eMusic and the like to specialize in independent music. As history has shown us, it is impossible to come up with a copy prevention or DRM system that is unbreakable.
As long as the music and movie industries try to restrict access to the media they sell, they’ll be caught up in a cat-and-mouse game. Whenever a new DRM scheme sees the light of day, it will eventually be cracked by the many skilled people collaborating over the internet, and DRM-free copies will be available on peer-to-peer networks. However, it might one day become too difficult for a large enough part of the population to free the media they paid for. Consumers might just accept that they have only limited control over their legally bought music collection. That’s probably the goal of the entertainment industry: not bringing “piracy” down, because they know that’s impossible, but controlling consumers to maximize revenue [12]. Then, because you can’t copy it and put it on another device, the entertainment industry will be able to
sell you the same song or video several times: once for your computer, maybe a second time for your portable music or video player, then for playback in your living room, and one more time as a ringtone for your cell phone.

To ensure that it’s difficult enough for most consumers, and above all illegal, to crack the DRM on their media, additional laws were written. In the USA, the Digital Millennium Copyright Act (DMCA) was enacted in 1998; in the European Union, the EU Copyright Directive (EUCD) of 2001 is similar to the DMCA in many ways. Now it is illegal to circumvent DRM or other access control technologies, even when copying the content would be permitted under plain copyright law, for example under the terms of fair use (e.g. for citation, for educational purposes, or for using a tiny extract of the work non-commercially). Under the DMCA, even the production or spread of circumvention technologies was criminalized.
Figure 6: Still frame of Neil Armstrong stepping out onto the moon—a historical document. But much of our current cultural production is owned by a few, and the public is prevented from accessing it freely. [Wikipedia]
Implications

With DRM, and laws criminalizing its circumvention, in place, our culture gets locked down even more. It becomes technically increasingly difficult (and simply illegal) to use many of our cultural products, like pop music or election campaign footage, to create something new. We increasingly live in a permission culture, where new creators have to ask the powerful, or creators from the past, for permission, rather than in a free culture that would uphold the individual freedom to create. At the same time, while digital technology would allow us, for the first time in history, to build a library accessible to everyone and larger than the Library of Alexandria, we run the risk of forgetting history as past culture is locked down by law and DRM. While in theory you could still sometimes make legal use of such material under the terms of fair use, and fight for your right to do so in court, this is no longer possible if the law is interpreted by your computer rather than a judge. If the footage of the moon landing had been broadcast with DRM in place, you couldn’t reuse one second of the clip, regardless of whether it was legal under fair use—because your DRM-crippled computer would prevent you from doing so.
Conclusion

We live in a time when digital technology enables individuals to produce and distribute creative works more easily and cheaply, to rearrange old footage to tell new stories, and to cite from the past, effectively creating and shaping more of their own culture. These practices are increasingly hindered by the old industry, which is threatened with losing its monopoly on producing the lion’s share of our cultural production. Taken together, the changes we have seen recently in law, in technology and in the concentration of the media market lead to a devastating conclusion:

There has never been a time in history when more of our “culture” was as “owned” as it is now. And yet there has never been a time when the concentration of power to control the uses of culture has been as unquestioningly accepted as it is now. (Lawrence Lessig [13])
Bibliography

[1] Music market, Wikipedia
[2] Client programs for various peer-to-peer networks that are free software: eMule, MLDonkey, Gnucleus (Windows only) or LimeWire
[3] Some free software BitTorrent clients: Azureus, BitTornado or Transmission
[4] The reasons to get rid of the major record labels, Downhill Battle
[5] Various proposals summed up: Making P2P Pay Artists, Electronic Frontier Foundation (EFF); and an outlined proposal: A Better Way Forward: Voluntary Collective Licensing of Music File Sharing, EFF
[6] Grandmother piracy lawsuit dropped, BBC News; RIAA sues computer-less family, Ars Technica; “I sue dead people…”, Ars Technica
[7] Exonerated defendant sues RIAA for malicious prosecution, Ars Technica
[8] Concentration of media ownership, Wikipedia
[9] Hacking Digital Rights Management, Ars Technica
[10] HDCP: beta testing DRM on the public?, Ars Technica; Why you should boycott Blu-ray and HD-DVD, Blu-Ray Sucks
[11] Artists sue Sony for iTunes royalties, Macworld; iTunes and Digital Downloads: An Analysis (continued), Future of Music Coalition
[12] Privately, Hollywood admits DRM isn’t about piracy, Ars Technica
[13] Lawrence Lessig: Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity, 2004, p. 28
Biography

Mauro Bieg: Mauro Bieg is currently a student in Switzerland. As he is still young, his only work worth mentioning is said text about the workings of information production in the age of the internet, covering everything from free software to free culture. The text is now part of an open wiki that tries to get some people together to create a place where our collective knowledge on these topics is collected and easily accessible. Feel free to come over! www.SphereOfNetworks.org (http://www.sphereofnetworks.org)
Copyright information

Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved.

Source URL: http://www.freesoftwaremagazine.com/articles/information_technology_and_piracy
Tips and Tricks

By Gary Richmond, Andrew Min

This is a collection of tips&tricks written by Andrew Min and Gary Richmond. In this article:

• How to create a GNU/Linux live USB stick with SLAX (Andrew)—see below
• How to use Quake-style terminals on GNU/Linux (Andrew)
• How to take screenshots with Scrot (Gary)
• How to back up your Master Boot Record (MBR) (Gary)
How to create a GNU/Linux live USB stick with SLAX (Andrew) One of the biggest things holding back GNU/Linux adoption is the fact that most people haven’t tried GNU/Linux. That’s where SLAX comes into play.
Introduction

What is SLAX, you ask? SLAX is a KDE-based GNU/Linux distribution that’s meant to run as a live GNU/Linux operating system. In other words, it runs entirely from its boot medium and leaves no trace of itself on the computer once you turn the computer off. And best of all, it includes tons of useful software: KDE, NTFS-3G, tons of wireless tools, CUPS, games, graphics tools, multimedia apps, development tools, the KOffice suite, and much more. This makes it perfect for showing off the power of GNU/Linux to your friend without messing up their settings.
Putting it on a USB drive

Although SLAX isn’t meant to be installed as a regular operating system, it is possible to put it on an external device such as a USB drive and carry around a live GNU/Linux system. There are three advantages to using a USB drive over a more traditional CD: it’s faster, it lets you add files and programs on the fly, and it (usually) has more storage. There are several ways to do this, but the easiest way is to use the make_disk script. First, format the drive using a program like QtParted or GParted (use FAT32 on Windows and ext2 on GNU/Linux). Then, download the standard SLAX ISO and extract it with a program like ISO Master. If you are using a GNU/Linux system, cd to the directory that the ISO was extracted to, and run ./make_disk.sh /dev/sda1, replacing /dev/sda1 with your USB drive’s partition (you can find it by running fdisk -l as root). If you’re using Windows, run make_disk.bat E:, replacing E: with the drive name. Now, reboot the computer and change the boot order of your machine (see this for a howto or refer to your computer’s manual) to put the USB or removable drive at the top. Note: dfego notes that this won’t work with SLAX 6. Use this method instead.
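Because pointing make_disk at the wrong device can wipe a disk, it can be worth rehearsing the invocation against a harmless stand-in first. In the sketch below everything is invented except the ./make_disk.sh /dev/sda1 calling convention described above; the stub only echoes what the real script would act on:

```shell
# A scratch directory standing in for the extracted SLAX ISO.
mkdir -p /tmp/slax-demo && cd /tmp/slax-demo

# Do-nothing stand-in for the real installer script.
cat > make_disk.sh <<'EOF'
#!/bin/sh
echo "would install SLAX boot files to $1"
EOF
chmod +x make_disk.sh

# The real invocation looks the same, with your USB partition as the argument:
./make_disk.sh /dev/sda1   # prints: would install SLAX boot files to /dev/sda1
```

Once the printed target matches the device fdisk -l showed for your USB stick, run the same command in the real extracted ISO directory.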
Adding programs

Although the default version of SLAX comes with a ton of tools, it’s always nice to be able to add and remove programs. That’s why SLAX has support for modules, available at the SLAX Modules page. There, you can download programs like OpenOffice.org, The Battle For Wesnoth computer game, the Firefox web browser, and even desktop environments like Xfce or Gnome. After downloading the program, put it in the modules
How to create a GNU/Linux live USB stick with SLAX (Andrew)
13
Issue 20 folder on the USB stick. When you next boot into SLAX, look for the program under your K Menu. If you need to make your own program, use a tool like rpm2mo or deb2mo to convert an existing package to the SLAX format.
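Dropping a downloaded module into place takes only a couple of commands. A hedged sketch of my own: the module filename and the stick’s mount point in the example are assumptions and will differ on your system.

```shell
# add_module: copy a downloaded SLAX module into the modules folder
# on the stick. Arguments: module file, stick mount point.
add_module() {
    module="$1"
    usb="$2"
    # Refuse quietly broken copies: the module must actually exist.
    [ -f "$module" ] || { echo "No such module: $module" >&2; return 1; }
    mkdir -p "$usb/modules"
    cp "$module" "$usb/modules/"
}

# Example (paths are illustrative): add_module ~/firefox.mo /mnt/usbstick
```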
Conclusion SLAX is a great operating system. And we haven’t even scratched the surface of what it can do. You can create your own SLAX, build your own modules, and even install SLAX to a hard drive. And best of all, you now have a GNU/Linux system to show off to your friends. • Homepage • Modules • Forums • Alpha 6.0 Release
How to use Quake-style terminals on GNU/Linux (Andrew) We know all about how powerful the GNU/Linux terminal is. However, it’s a pain to have to fire up a terminal emulator like Konsole or gnome-terminal, wait a few seconds for it to load, and then keep Alt-Tabbing to it. Wouldn’t it be easier to have a terminal that automatically hides and shows itself at the click of a button? Today, I’m going to look at three different terminal emulators that do just that.
What the heck is a Quake-style terminal? Quake is a wildly popular first-person shooter created by id Software. In the game, there is a console, accessible by hitting the ~ key, which is used to edit settings and variables, show logs, and enter commands and cheats (for more, read the Quake-style Console article at Wikipedia). Quake isn’t the only program with this functionality: Doom, Half-Life, Dark Engine, Lithtech, and several other games and game engines use similar consoles.
Figure 1: The console in NightFall (a mod for Half-Life 2)
Kuake: Quake-style terminal for KDE A long time ago in an IDE far far away… OK, so it wasn’t that long ago (unless January 2003 is really “long ago”) and it wasn’t that far away. Anyway, not so long ago in an IDE not so far away, Kuake was born. Martin Galpin had the bright idea of creating a Quake-style front-end to Konsole. The idea was that you’d hit a hotkey (at the time, Ctrl-K) and Konsole would slide down from the top of the screen. You could resize it, realign it, and much more. When it arrived, it achieved great success. (Unfortunately, development seems to have frozen after the release of an unstable 0.3 version in March 2004.)
Even though Kuake hasn’t been updated recently, you can still install it. The site offers a tar.gz source archive; a Debian package is available at deb ftp://ftp.berlios.de/pub/kazit/debs; Ubuntu has a package called kuake in the Universe repository; and many other distributions offer packages too. The hotkey is Alt-~ (available after you launch Kuake). • Homepage • Screenshots
Figure 2: Kuake
YaKuake: Yet Another Kuake KDE terminal emulator Meanwhile, in another part of the galaxy, a French programmer named Francois Chazal was working on a forked version of Kuake known as YaKuake (Yet Another Kuake KDE terminal emulator). YaKuake added several features including inline tab renaming, better Xinerama support, and skins. Like its predecessor, its popularity skyrocketed, reaching over 25,000 downloads and earning a 5-star rating from the famous software repository Softpedia. YaKuake offers a tar.bz2 source archive, and many distributions offer it as a package in their repositories. The hotkey for launching YaKuake (after it is running) is F12 by default, but you can change it to whatever you want (I like Kuake’s default Alt-~ myself). • Homepage • Screenshots
Figure 3: YaKuake
Tilda: Quake terminal for Gnome KDE users weren’t the only ones having fun with Quake-style terminals. In December 2004, Tristan Sloughter (aka kungfooguru) released Tilda (so named because “tilde”, the ~ symbol and often the hotkey for Quake-style terminals, was already taken), a GTK+ Quake-style terminal emulator. Like Kuake and YaKuake, it took off, reaching 12,000+ downloads in 3 years. Tilda provides a tar.gz source archive, but many distributions provide packages. Once you install it, run tilda -C to configure it, then run Tilda with the command tilda. Other options are available via tilda -h. Tabs are available too: the access keys are Ctrl-Shift-T (new tab), Ctrl-Shift-PageUp (next tab), Ctrl-Shift-PageDown (previous tab) and Alt-# (go to tabs 1 through 10). • Homepage • Screenshots
Figure 4: Tilda
How to take screenshots with Scrot (Gary) Screenshots. Where would the internet be without them? They are ubiquitous, and when you are researching that latest piece of cool software or the latest ISO of your favourite GNU/Linux distro they are an opportunity to preview the eye candy. There are many ways to make those screenshots, and most KDE and Gnome users will be familiar with the GUI tools bundled with them: Ksnapshot for KDE and Take Screenshot for Gnome. They are good at what they do. However, sometimes you just need to take screenshots quick and dirty without the overhead (especially if you are using a lightweight window manager on a relatively low-spec machine). If that’s the case for you, you can use “Scrot”.
Welcome to Scrot Scrot (SCReenshOT) will probably not come pre-installed with your distro; so, as ever, it is a case of a quick visit to the software repositories. If that draws a blank you should be able to download a source tarball or a pre-compiled binary from the official site. If not, you can always get it at Klik, which should install it for you across a wide range of distros; just follow the instructions on the website to enable the Konqueror and Firefox browsers to use it. This is not the place to launch into a detailed comparison of available screenshot tools, graphical or command line. Suffice it to say, by the end of this article I hope to have demonstrated the power and utility of Scrot which, despite being a command-line tool, offers the user an excellent screenshot tool with power options to suit most requirements. If you want to see all the options Scrot supports, just type man scrot in a terminal.
Ways of running Scrot Scrot is a command-line tool (written in C and using the imlib2 library); so obviously you will be running it in a console within your X session. I find that, in order to clear the way for the screenshot quickly and to avoid switching between the mouse and the keyboard, it is useful to have Yakuake installed. It is a Quake-style terminal emulator (see the previous article); you can either use your package manager to install it (if available) or get Yakuake here. Once installed and running, the terminal screen can be pulled down and retracted very quickly by toggling the F12 key. This is a very useful speed tweak after you have issued a Scrot command. However, there is an even better way to launch a Scrot command: just press Alt+F2, type in scrot, hit run and you’re done. If you want to take it one step further and avoid the hassle of even opening the run dialogue, right-click on an empty space on the taskbar, select Add Applet to Panel, scroll down in the GUI to Run Command, click on it and add it to the panel. This adds the ability to type commands directly into the Gnome panel, so you can type a Scrot command without ever needing to open a console in an X session or call up Alt+F2 again. The added bonus of taking a few seconds out to set this up is that it persists across reboots and will always be there to launch any programme without resort to the Start menu or Alt+F2. Inevitably you will want to take more than simple screenshots, especially where it involves demonstrating menus, submenus and tabs. As with graphical tools like Ksnapshot you will need to incorporate a delay while you set up the screenshot. If it involves a lot of navigation through a thicket of menus/tabs, it is a good idea to do a dry run, roughly time how long it takes to set up that screenshot, and then add on an extra five seconds to allow for sloppy mouse actions.
Once you have done this you are ready to craft a Scrot command.
Don’t delay—or rather, do Having done everything in the two preceding paragraphs, go ahead and set up a command. Here is one which will take a screenshot in the JPEG format after a delay (to allow you time to set it all up) of, say, 5 seconds: scrot -d 5 desktop.jpeg
We have liftoff! NASA aren’t the only ones who can launch with a countdown. If you like all the bells and whistles, you can run a neat little countdown facility by adding a simple parameter to that command. Just type: scrot -d 15 -c desktop.png
and you can experience the dubious pleasure of watching Scrot flaunting its numeracy skills. The length of the delay you set will depend on the simplicity or complexity of the screenshot you are setting up, and you will of course give the file a contextually appropriate name. Scrot is not too strict as regards syntax. The last command could also have been typed as: scrot desktop.png -d 15 -c
and it works equally well. If you wish to specify a window or part of a screen (use your mouse to draw out a selected area) then just append -s thus: scrot desktop.png -s
and then use your cursor to draw out the area for your screenshot. A number of useful points here: by default, Scrot automatically saves screenshots to the current directory you are in (usually the home directory) so if you wish to save to a different one, cd to it first before executing a command. Like ImageMagick, another useful
command-line tool for taking screenshots, Scrot supports many formats including, of course, the ubiquitous PNG and JPEG formats.
Scrot’s other tricks Scrot can do all of the above but it has a few more tricks up its sleeve. If you want to create thumbnails for a web gallery, or to save space by way of compression, you can always manipulate the screenshot later using a program like Gimp; but Scrot can incorporate that in one line: simply add the -t option followed by the thumbnail’s size as a percentage of the original. If you want to include the window manager border too, append -b. You can set the quality (size and compression) with -q followed by a number between one and one hundred (seventy-five is the default).
OK, say cheese everyone… We all like to find clever and cool ways to do things, and if you have acquired any reasonable command-line skills then you won’t need to be a genius to think of ways to combine that knowledge with Scrot’s power. Once a screenshot has been taken, you might want to do some editing—to change format, compress, crop, resize etc. Normally, you will right click on the saved image and select the graphical tool of choice to do the job, or just open the graphics package separately and navigate to the relevant directory; however, a little command-line magic can do that for you too. When you want to run multiple commands you can join them together with the double ampersand, which executes the following command only if the previous one succeeds. In this case you can append the name of your chosen graphical package for editing the screenshot Scrot has just taken. So, let’s put all those options together in one big line and run it: scrot -d 5 -q 95 -t 30 screenshot.jpeg -b -s && gwenview screenshot.jpeg
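That chained command can be wrapped in a tiny function so you don’t have to memorise it. A sketch of my own, not part of Scrot: the options are taken straight from the line above, while the SCROT and VIEWER variables are my additions (they default to scrot and gwenview, but can be overridden, which also makes the function easy to exercise on a machine without X).

```shell
# snapedit: take a delayed screenshot, then open it for editing --
# but only if the capture succeeded (that is what && gives us).
# SCROT and VIEWER default to the tools from the article; set the
# variables beforehand to substitute your own.
snapedit() {
    file="${1:-screenshot.jpeg}"
    "${SCROT:-scrot}" -d 5 -q 95 -t 30 "$file" && "${VIEWER:-gwenview}" "$file"
}

# Example: snapedit mydesktop.jpeg
```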
Done! Yes, it’s a bit of a mouthful, but once you’ve memorised it you’ll find Scrot a very powerful and useful piece of software, and doubtless readers can think of many commands that can be used in conjunction with Scrot to extend its utility. The only features it seems to lack are the ability to convert image formats and to take multiple screenshots. For those you will have to use ImageMagick—which, fortunately, comes pre-installed with most GNU/Linux distros.
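The missing multi-shot feature can also be approximated with a plain shell loop around Scrot. A sketch of my own, not a Scrot feature: it numbers the files the way ImageMagick’s snap does, and the SCROT variable (my addition) defaults to scrot but can be overridden.

```shell
# multishot: poor man's substitute for Scrot's missing multi-shot
# feature. Takes COUNT screenshots, each after DELAY seconds, named
# NAME-0.png, NAME-1.png, ... like ImageMagick's -snaps output.
multishot() {
    name="${1:-snapit}"
    count="${2:-4}"
    delay="${3:-20}"
    i=0
    while [ "$i" -lt "$count" ]; do
        # scrot's own -d option supplies the pause between shots.
        "${SCROT:-scrot}" -d "$delay" "${name}-${i}.png" || return 1
        i=$((i + 1))
    done
}

# Example: multishot snapit 4 20   # four shots, 20 seconds apart
```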
Snap it the ImageMagick way Although this article has been about Scrot, I can’t resist finishing off with a very brief howto on taking multiple screenshots in one command-line pass with ImageMagick. This is one feature that would make Scrot complete, and it is called snap. Append this parameter to the basic import command and you can take the number of screenshots you specify: import -delay 20 -snaps 4 snapit.jpeg
Prepare the screens you want to capture, minimize them all, type that command and then use Alt+Tab to toggle through them one at a time, clicking on each one. (You can type into the Run command box in the Panel as per Scrot.) Go to the directory in which you saved them and you will see the screenshots. They will all have the same name (in this instance “snapit”) and they will be numbered from 0 through to 3—four in total. Neat. Obviously, you can specify the number of snaps and of course, like Scrot, you can specify the format, amongst other things.
A disclaimer For all of you who have the welfare of open-source software close to your hearts I wish to assure you that no GUI graphics packages were harmed in the making of this article.
How to back up your Master Boot Record (MBR) (Gary) Backup, like security, is a well-worn mantra in the world of GNU/Linux—and even the most battle-hardened, street-wise user has, for whatever reason, only thought about regular backups after disaster has already struck. It is an all too familiar story. System administrators, by the very nature of their work, will have that imperative carved on their headstones. For them it is a way of life. Desktop users, being responsible only for themselves, can afford to be a little more louche about such things: if it all goes a bit “arms in the air” there is no one to reproach but themselves. You should back up many things: the files in your home directory, the configuration files in /etc (and there are many excellent graphical tools to do the job). But one of the simplest and best things you can do is to back up your master boot record (MBR). It’s one thing to experience lost or corrupted files; it’s quite another not to be able to boot your computer at all. What follows may just get you out of a fix.
Windows manners It is highly unlikely that you came to GNU/Linux as a computing virgin. You almost certainly, like me, came via Windows and therefore either installed over it or decided to attempt to dual boot. Like Bill and Steve, Windows is a bit short on computing etiquette: if you installed GNU/Linux first on a blank hard drive and then followed up with an installation of some version of Windows, you will have made a painful discovery. Windows will, without so much as a by-your-leave, stamp all over your GNU/Linux boot sector with great big hobnail boots. The first lesson is to install Windows first. However, you don’t need to be dual booting with Windows to court disaster. Dual booting with several versions of GNU/Linux can lead to boot problems too. At best, only one version will boot; at worst, none will, and you may find yourself googling furiously to understand terse and cryptic GRUB (GRand Unified Bootloader) error messages. Sometimes, boot sectors (including partition tables) can just get corrupted for no discernible reason at all. Whatever the reason, you need to prepare for all eventualities, as GRUB does not make a copy of the MBR during installation.
Backing up The MBR, as I will refer to it hereafter, is a 512-byte segment on the very first sector of your hard drive, composed of three parts: 1) the boot code, which is 446 bytes long; 2) the partition table, which is 64 bytes long; and 3) the boot code signature, which is 2 bytes long. These numbers are VERY important. Any careless or impulsive typing of these numbers could well render your machine unbootable or the partition table unreadable. The sight of a grown man crying is not pretty. You have been warned! The core of the backup command is dd, which will be familiar to every system administrator, especially those who intend to clone an entire hard disk. To see all the options, type man dd. As we want to back up only the first 512 bytes, we need to append some arguments to it. Here is the full command you need (and remember to run it as the root user, via su, or sudo for Ubuntu users): dd if=/dev/hda of=/home/richmondg/mbr_backup bs=512 count=1
Obviously, you will need to substitute the drive on which your boot sector resides and also use your own username. Now let’s see just what we did there: dd stands for disk dump, if means input file, of means output file, bs means block size (here, 512 bytes) and count=1 tells the command to copy just one such block. It makes sense to save this out to some removable device, usually a USB stick, in which case amend the file path to suit, so that /home/richmondg/mbr_backup reads, say, /media/usbstick/mbr_backup, or just copy the original backup to the external device.
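Because a slip with dd can destroy a disk, it is worth rehearsing the command on a throwaway file first. This rehearsal is my own addition, not from the article: the image file stands in for /dev/hda, so exactly the same bs and count arithmetic applies without any risk to your hardware.

```shell
# Rehearse the MBR backup on a disposable 1 MiB image instead of a
# real disk. disk.img plays the role of /dev/hda.
dd if=/dev/zero of=disk.img bs=1024 count=1024 2>/dev/null

# Plant a recognisable marker where real boot code would live;
# conv=notrunc overwrites in place without shortening the image.
printf 'BOOTCODE' | dd of=disk.img conv=notrunc 2>/dev/null

# The backup itself: exactly one 512-byte block from the very start.
dd if=disk.img of=mbr_backup bs=512 count=1 2>/dev/null
```

mbr_backup now holds exactly 512 bytes beginning with the fake boot code; once the mechanics feel familiar, swap disk.img for your real drive (as root) to do it for real.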
Or, only copy the first 446 bytes. Why? This could be a useful tip. If you change 512 to 446 in the above command you will save only the boot code, not the partition table. Why would you want to do that? The reason is that if you use 512 bytes and
subsequently amend your partitions for any reason and then restore the MBR, it will be out of sync. So, if you have made any partition changes since your original MBR backup, make sure you update that backup.
Restoring the MBR Not surprisingly, in order to restore the MBR it is only necessary to reverse that original command which saved it. If you managed to hose the MBR you will not be able to boot up, so use a live CD to access your hard drive and read the backup off any removable media such as a USB stick. Here is the command: dd if=/media/usbstick/mbr_backup of=/dev/hda bs=512 count=1
Again, amend the input path to wherever you saved the MBR backup, and run the command as root. If you wish to kill the MBR altogether, including the partition table, then you can overwrite it with a series of zeros: dd if=/dev/zero of=/dev/hda bs=512 count=1
If you want to kill the boot code but leave the partition table intact, then simply change 512 to 446. There are, of course, alternatives which involve using your install CD in rescue mode to reinstall GRUB, which will have the same effect (with the added advantage of not overwriting the partition table), but that is another topic in its own right. In the meantime, using the dd command with arguments will help familiarise you with other GNU/Linux commands and the file structure. Mastery of the command line is a learning curve, but one that can repay huge dividends when things go wrong.
Biography Gary Richmond: An aspiring wannabe geek whose background is a B.A. (hons) and an M.Phil in seventeenth-century English, twenty-five years in local government, and recently semi-retired to enjoy my ill-gotten gains. Andrew Min: Definition: Andrew Min (n): a non-denominational, Bible-believing, evangelical Christian; (n): a Kubuntu Linux lover; (n): a hard-core geek; (n): a journalist for several online publications including Free Software Magazine, Full Circle Magazine, and Mashable.com
Copyright information Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved. Source URL: http://www.freesoftwaremagazine.com/articles/issue_20_tips_and_tricks
Published on Free Software Magazine (http://www.freesoftwaremagazine.com)
Konqueror: doing it all from one interface Making the most of KDE's crown jewel By Gary Richmond When Julius Caesar said, as reported by Suetonius and Plutarch, Veni, Vidi, Vici (I came, I saw, I conquered) he was, depending on your historical interpretation, either referring to the Roman victory at the Battle of Zela or giving a two-fingered salute to the Patrician Senate of Rome. Every schoolboy and girl who has had to endure the exquisite tortures of Latin will know that famous phrase. Press the fast-forward button to the present and those words might not be out of place on the lips of the good people who developed Konqueror, the all-in-one browser and file manager, best described as a universal document viewer. Depending only on kdebase, it comes as a standard part of the KDE desktop, and most GNU/Linux users have browsed or managed files with it at some point. Precisely because it comes pre-installed we tend to take it for granted; besides, in the period since it was launched in 2000, many additions and alternatives have become available, notably the immensely popular Firefox. Before I start a flame war between KDE and GNOME users and supporters of various file managers and browsers, let me just say that this is not a comparative exercise on their competing features. The purpose of this article is to acquaint the reader with features of Konqueror of which they may not be aware, and to demonstrate that, with a few exceptions, it is possible to utilize and configure it in such a way that you can spend an entire session without leaving its confines. Of course, this can tend to make it look like a jack-of-all-trades and master of none, so if you need a particular tool GNU/Linux has an embarrassment of riches for the power user too. If many of these features are already known to you, hopefully the remainder will not be, and you can use them to enhance your browsing experience.
What follows is just my own personal selection of tips, tricks and features which I find most useful. Inevitably, you will have your own favourites (web search shortcuts, mouse gestures, remote files) and wonder why I left them out—so, sorry in advance.
Go smarter, go faster Buried treasure is the most fun, so let’s get that spade out and start digging for a few Konqueror gems. One of the first, best and simplest things you can do is to get an easy speed boost. You can get Konqueror to load faster by tweaking the settings: click on Settings, then Configure Konqueror, and scroll down to the last option, Performance. You can minimize memory usage and “preload” a specified number of instances of the browser. Experiment with these settings until you get a combination that best suits your needs and your machine’s memory and processor capacity. If you would rather not tinker with those settings and you are using Konqueror from within the GNOME desktop, a quick and dirty alternative is to fire up a small KDE application and minimize it; that way, launching Konqueror will require a little less heavy lifting. You could also start Konqueror with a blank page by typing about:blank in an Alt + F2 KDE dialogue box, or with about:konqueror, a default page which will allow you to choose where you go from there. (Later you will see how to incorporate these views into your Konqueror profiles.) If you want these start-up options on the Desktop, right click on an empty space there, select Create New and then Link to Location (URL), type a suitable name for the “profile” in the prompt and, in the box directly below, type one of those commands. A final way to open Konqueror quickly, and just where you want it, is to exploit the fact that the location label is actually draggable. Navigate to the web page you want, drag the favicon (to the left of the address) to the Desktop and select Link Here, and a Desktop shortcut will be created. Better still, if you wish to create a series of desktop web page shortcuts, select Bookmarks, choose Edit Bookmarks and drag the links
you want onto the Desktop for as many bookmark icons as you desire. Additionally, this method will not prompt you to copy or link to the desktop: it will create a Desktop icon automatically. The same applies to dragging to the panel.
Browse it, file it One feature that makes Konqueror unique—and I do mean unique—is that while other web browsers have many excellent, and indeed unique, features (especially Flock, Firefox and Opera), Konqueror stands alone by virtue of its ability to surf the web and navigate your computer’s file system from the same interface. Yes, I know, Firefox has an extension which allows you to split the screen and view two separate websites—but that is “websites” and not “a website and a personal file” as in figure 1.
Figure 1: Konqueror in browser and file mode In this example, you effectively have a two-pane FTP client: just drag the FTP link in the right-hand browser screen to the file browser screen and Konqueror will prompt you to move/copy the ISO (figure 2).
Figure 2: Konqueror in “FTP mode” If you have not set up a specific folder for such downloads you don’t need to spend time searching for it. (If you have KGet set up to intercept all download requests to bespoke folders this will override that setting.) If you prefer you can open a terminal within Konqueror and drag, drop and download a file there (see more of this below.) To get this result, select Settings from the drop-down menu and choose Split Screen Vertically. However, for faster action with the keyboard simply do any of the following to open and close split screen configurations: • Ctrl + Shift + L—splits the screen vertically • Ctrl + Shift + T—splits the screen horizontally • Ctrl + Shift + R—closes the active split screen(s)
Copying and/or transferring files is made much easier by the use of split screens, as it makes drag and drop even simpler. This includes external media too: CDs, DVDs, USB drives and USB sticks. You cannot make a catapult out of Konqueror—but you can utilize its “rubber band” method to select contiguous files and then pick an action from the right-click menu. Just use your mouse to draw an area around the files you want (figure 3).
Figure 3: Selecting contiguous files with the “rubber band” method If you need to do some command-line work, Konqueror can save you the bother of opening up a separate console. If you select the Settings drop-down menu you will see an entry to “show terminal emulator” (or add the icon for it via the Settings drop-down menu, under Configure Toolbars, and drag it onto the toolbar): this will open an instance of a terminal inside Konqueror. For a GNU/Linux newbie, this is a useful way to follow the system file structure, because clicking on a folder will not only take you to that item in graphical mode, the change will be followed on the command line too. Once you have taken the shortcut to the desired directory you can execute commands on a particular file in the built-in terminal (figure 4).
Figure 4: Konqueror in file and terminal mode
Bespoke file management and browsing Another great feature of Konqueror will save time and ease navigation: the View Filter. You can access it from the Tools drop-down menu and also add the icon to the toolbar by selecting and adding it from Settings and then Configure Toolbars. If you select the file type you downloaded, then all files of that type will appear. Like all good menus, View Filter is contextual. You will not see it when Konqueror is in web browser mode. A boon for the terminally disorganised! View Filter also works incrementally; that is, if after your first filtering exercise you then select another file type, it is added to the selection. You need to select Reset to restore the file manager default.
If you are browsing a web page, View Filter is denied to you for obvious reasons; however, the view modes on offer in browser mode can give the user a useful learning tool. For example, when writing articles for Free Software Magazine I frequently forget some of the allowed XML tags (or don’t actually know one at all)—just ask the editor! Split screen to the rescue. Open the web article of your choice, split the screen (thus replicating the page), click on the new screen to make it active (a green “LED” will light up at the bottom of the browser), then select the View menu and change the View Mode to Embedded Advanced Text Editor. The XML tagging of the article will now appear and you can make a line-by-line comparison to see and understand how the article has been constructed, tag-wise (figure 5).
Figure 5: Split screen showing HTML page and as embedded text editor This is a very useful aide-memoire and learning tool. Yes, View→Page Source in Firefox will do the same, but in two separate windows, while Opera will open another tab (Ctrl + U in the 9.5 alpha version). However, these do not facilitate easy line-by-line comparison. Incidentally, if you want the duplicate split screen to follow the navigation of the original, just click on the small, empty square at the bottom of the browser; a small chain-link symbol will appear. I must admit, I can’t think of a use for this feature though. Can you? If you want mouse-free browser navigation on hypertext links, hit Ctrl and the links will be tagged with a small beige square with letters; select the corresponding letter on the keyboard to follow that link. Your proverbial mileage may vary with this one: if you go to the Distrowatch site, a site thick with closely-spaced links, navigation by keyboard letters tagged to those links may be problematic. Like any self-respecting browser, Konqueror of course lets you set your homepage in its configuration files, but unlike Firefox you cannot set multiple homepages to open up in separate tabs. Typing the following in Firefox does the trick: http://www.google.com/linux | http://www.distrowatch.com | http://www.bbc.co.uk. This will not work in Konqueror, but it is nothing if not versatile: it supports loading profiles (as do Firefox and Opera) to get round that problem—amongst others. Incidentally, I have noted that Konqueror’s profiles menu can vary from distro to distro. If you are using Ubuntu you will find some of the features missing. You can restore them—just point your browser here. Open Konqueror. Depending on how many homepages you want as your default, split the screen and resize as desired; in turn, make each screen active and browse to a web page, repeating this for all screens until you have set them all up.
Then click on Settings and select Save View Profile “Web Browsing”, choose a relevant and suitable name for the profile and check the box save urls in profile. That’s it. Depending on the size of your screen you might want to increase your real estate by hitting the F11 key. If you have a really big screen or are viewing on a large, high resolution TV screen a combination of these settings will give you plenty of viewing space which you might need if you want to display a screen as shown in figure 6.
Figure 6: Konqueror on steroids: every view you could want If space is at a premium you can set up a profile with tabs instead and call the profile, say, “tabbed homepages” as opposed to “splitscreen homepages”. The same can be done for Konqueror profiles in file mode too. An added bonus in file view mode is that you can set the background colour to distinguish different multiple split screens—figure 7 illustrates the possibilities.
Figure 7: Konqueror split screen with bespoke colours
Put it on the tab(s) In the dark, prehistoric days of Internet Explorer the concept of tabbed browsing was as exotic as the concept of secure computing at Microsoft, but Konqueror, like Firefox, Opera, Galeon, Epiphany and others, has been offering users the indispensable advantages of tabbed browsing for years. Konqueror’s tabbed browsing integrates well with profiles, but it can do even more than that, with a little effort. The long way round the houses is to click on the drop-down file menu and select New Tab, and right click on a tab to select Close Tab. There are other ways: middle click on a link in browser mode to open a new tab, or middle click on a directory or specific file in file mode for the same effect. If you prefer, you can drag and drop a web link onto a blank space to the right of the last tab. Frustratingly, Konqueror does not close a tab on a middle click by default, but you can enable this very useful feature. Konqueror’s configuration files are all in plain text, easy to read and edit. Open your favourite text editor, navigate to /home/yourusername/.kde/share/config/konquerorrc, scroll down to the section titled FMSettings and change the line MouseMiddleClickClosesTab to true, as per figure 8.
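If you would rather not open an editor at all, the same one-line change can be made with sed. A hedged sketch, assuming the konquerorrc layout described above (an FMSettings section containing a MouseMiddleClickClosesTab key); it works on a demo stand-in here, and you should always keep a safety copy before touching a real config file.

```shell
# Demo konquerorrc stand-in -- in real life RC would be
# /home/yourusername/.kde/share/config/konquerorrc
RC="konquerorrc.demo"
printf '[FMSettings]\nMouseMiddleClickClosesTab=false\nTabPosition=Top\n' > "$RC"

cp "$RC" "$RC.bak"    # safety copy before editing any config

# Flip the key to true, whatever its current value.
sed -i 's/^MouseMiddleClickClosesTab=.*/MouseMiddleClickClosesTab=true/' "$RC"
```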
Figure 8: Konqueror config files
A close button is a useful feature in tabbed browsing. Opera, Firefox and Flock have it. Konqueror can have it too. Just head over to the KDE wiki and get a useful little GUI for tweaking the hidden configuration settings, or grab it here. To run it, type tweak into an Alt + F2 run command box, or type the following kioslave into the Konqueror location bar: settings:/Components/ and the Tweak icon applet will be there for you (figures 9 and 10).
Figure 9: Konqueror with close button on tabs
Figure 10: Tweak GUI with tabs
If you want to make working with tabs easier, hover your mouse over any tab and scroll the mouse wheel back and forward to switch between tabs; if you want to re-arrange the order of the tabs, drag and drop a tab elsewhere along the line of tabs.
Konqueror has another neat trick up its sleeve. There may be times when you would like it to open up a file view in a particular mode every time, and not just on a per-session basis—be it icon view, file view, detailed file view, etc. Again, open up /home/yourusername/.kde/share/config/konquerorrc and edit the line highlighted in figure 11, setting the value to true in the MainViewSettings section.
Figure 11: Konqueror setting for per folder view
Once you make this change, select the file view you want from the View drop-down menu (select View Mode) for a particular directory, then close Konqueror and reopen it: that will be the default directory view until you change it. The gimlet-eyed amongst you will have spotted that this change can be “merged” with profile configurations to make your ideal file browser settings. A final tip on tabs. If you prefer your Konqueror tabs at the bottom, just edit /home/yourusername/.kde/share/config/konquerorrc again and, in the FMSettings section, change TabPosition=Top to TabPosition=Bottom.
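For the scripting-minded, the two konquerorrc tweaks just described boil down to rewriting two keys in the FMSettings section. The sketch below performs the same edits on an in-memory sample of the file; it is illustrative only (the real file lives at /home/yourusername/.kde/share/config/konquerorrc, and the key names are taken from this article):

```python
import re

# A minimal stand-in for the FMSettings section of konquerorrc:
sample = """[FMSettings]
MouseMiddleClickClosesTab=false
TabPosition=Top
"""

def set_key(text, key, value):
    # Rewrite a "key=value" line in place, as you would in a text editor.
    return re.sub(r"(?m)^%s=.*$" % re.escape(key),
                  "%s=%s" % (key, value), text)

edited = set_key(sample, "MouseMiddleClickClosesTab", "true")
edited = set_key(edited, "TabPosition", "Bottom")
print(edited)
```

Writing the result back to the real file is deliberately left out; back up your configuration before editing it by hand or by script.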
Freedom through slavery
Although that sounds like a cross between a piece of Orwellian Newspeak and some Gnomic Blakean wisdom, it brings us to one of the most useful and powerful features of Konqueror: kioslaves. KIO is short for KDE Input/Output, the framework that gives Konqueror seamless network transparency. Kioslaves take the form, to cite just a few of the many instances, of help:/, man:/ and info:/. You can find a list of the supported kioslaves under the Protocols section of KInfoCenter, and it’s pretty large. If you dislike the command line and the manpage display in a terminal, fire up Konqueror and type man:/ in the location bar followed by the manpage you want, and you will get a nice, clean HTML page. Writing an article about Amarok and don’t want to open that application just to confirm some small detail? Well, just type help:/amarok instead and Konqueror will launch the application handbook. This is neat and handy, but kioslaves can do much more powerful things.
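To make the pattern concrete, here are a few of the kioslave URIs used in this article, exactly as you would type them into the Konqueror location bar (grep is just an example manpage):

```
man:/grep        (a manpage rendered as a clean HTML page)
help:/amarok     (the Amarok application handbook)
settings:/       (browse KDE settings modules)
audiocd:/        (a virtual view of an inserted music CD)
```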
Slave to the rhythm and something on the side(bar)
Everyone will have their own favourites and my selection is a purely personal one based on my preferences and usage—you will surely have your own. Here is a brief selection of mine that I think demonstrates the great power and utility of kioslaves’ transparency. GNU/Linux has many fine ripping and encoding utilities for music files (including the latest K3B release, which now includes a ripping facility) and I use them. However, audiocd:/ is a kioslave that can do much from a single Konqueror interface when you want it quick and dirty. If you insert a music CD, open Konqueror and type audiocd:/ in the location bar, you will be presented with the view shown in figure 12.
Figure 12: Audio kioslave in action
What you have is the original WAV files on the CD listed (and if you are connected to the internet when you do this, the song titles will be displayed courtesy of CDDB; otherwise they will simply be listed as track 1, track 2, etc.). Konqueror also creates virtual files in all the music codecs your distro supports natively, including any proprietary ones you have downloaded and installed. Ripping and encoding all of the selected tracks is simply a matter of copying/dragging and dropping the selected codec to a directory on your hard drive. Konqueror will do the rest. What it can’t do, without your direct and prior intervention, is configure the parameters. So, in the spirit of this article, open another tab, type settings:/Sound/ in the location bar and select Audio CDs, and you can tweak settings (bitrates, ID3 tags) for MP3 and Ogg Vorbis. For command-line devotees with lots of hard disk space to spare, you can drag the WAV file on to a terminal in Konqueror and the kfmclient (a Konqueror script) will offer you several options. cp is the one you want. This will copy the file to the default directory. If you want to copy it elsewhere you will have to cd to it first, then copy. Konqueror is not finished with your music files just yet. If you are not storing the files on your hard drive or on an external USB drive, you might want to burn the track(s) to a CD. Well, if you have Konqburn (formerly Kio-burn) installed, you can burn with Konqueror too. Just drag the selected track(s) to the CD icon in the sidebar and proceed. Unfortunately, I can’t give you a screenshot for this one on my system (dependency problems)—if you want it, you can see and get it here and follow the links. If you want to sample the music you just ripped and encoded with Konqueror, there is no need to fire up a heavy-duty media player (unless you need or want advanced features). Just click on the media player button in the Konqueror sidebar.
(If it’s not there, add it by right-clicking in a blank space in the sidebar, selecting Add New and, er, adding it. Incidentally, you can run more than one sidebar feature: just right-click on a blank space of the sidebar and check Multiple Views.) Click on the Media Player icon button in the sidebar, drag the track onto it and you’re grooving (figure 13). (For something more substantial try the Amarok button in the sidebar.)
Figure 13: An MP3 dragged onto the Konqueror sidebar
Another kioslave which allows you to do stuff without leaving the confines of Konqueror is Kio-sysinfo. You will need the apt slave installed for this to work, but as you can see, the results are pretty impressive (figure 14).
Figure 14: The sysinfo kioslave gives you lots of actionable info
If you are running an apt-based distro there is an actionable button for apt, linked to the repository, which allows you to do a local and online search and download; all the buttons under Common Folders and Disk Information are also actionable. You can download pre-compiled binaries of Kio-sysinfo for the most popular distros. If you don’t like the relatively high search overheads of Beagle and its ilk, and/or your machine is low spec, the locate:/ slave could be a useful alternative. It presents the results in files and folders, the latter in non-default colours—and it’s very fast. There are many more kioslaves you can run. I have doubtlessly left out your favourite—especially the remote protocols like fish, ssh, webdav, smb, vnc, etc. I stopped counting the number of IO slaves listed in KInfoCenter at seventy; however many or few you use, especially in combination with the sidebar, you can get a lot of serious mileage out of a well-configured Konqueror interface.
Glad to be of service!
The prescient, smug, existentially-stressed lifts designed by the Sirius Cybernetics Corporation which so annoyed Marvin the Paranoid Android might have been impressed by the designers of Konqueror and its tight integration with KDE. Both are anxious to please and give good service. The happy vertical people carriers did it one way (bar the occasional sulking in basements as a protest) and Konqueror does it with Service Menus. These are very similar in many respects to Firefox extensions; installing your own personal selection will round off the features which make it possible to spend an entire session in Konqueror without straying from its interface. Of course, all file browsers and web browsers have their right-click context menus, but additional service menus will turbo-charge Konqueror. The best place to cruise for them is the KDE Apps website. Select “service menus” and you presently have access to two hundred and fifty-one of them. Again, my selection reflects my preferences and requirements.
KIM
As I like to add screenshots to my articles and sometimes need to do some editing on them without invoking a major graphics package, I find the KIM (KDE Image Menu) service menu very useful. Amongst the things contained in the Action Menu are the following: resize, convert, rotate, compress, add text, add borders. Undo functions would be nice, but nevertheless this adds some very quick and useful functions to Konqueror when you just need some “post production” editing on those screenshots you took from the command line (figure 15).
Figure 15: KIM menu options
Audiokonvert
This service menu supplements Konqueror’s ability to rip and encode a music CD. For those times when you have downloaded a music file and it is not in a format you can play, or not in the format you want, Konqueror will allow you to convert audio files to and from five different formats (figure 16).
Figure 16: Audiokonvert options
Once you have selected your conversion option a progress window will open, as shown in figure 17.
Figure 17: Audiokonvert output
Despite the error message, the file converted without problems. If you are converting to the format for your iPod (M4A) then you can transfer it to the player via the iPod kioslave from the Konqueror location bar.
One for Debian users
Usually, you open your package manager (Synaptic, Kynaptic, Adept, etc.) to install a binary, or use apt-get on the command line. If you are doing rudimentary stuff with a package you can do it in Konqueror with the Debian Menu, a service menu that allows you to install, uninstall and query package information (figure 18).
Figure 18: Debian service menu
These are just a few of my favourites. You will have yours. You have two hundred and fifty from which to choose. Go get ’em! And if you can’t find what you’re looking for, then take a quick tutorial to learn how to make your own.
PDF toolkit
This last one is a gem. It is a service menu with a great and powerful range of options, and you can get it at the KDE-Look site. As you can see from figure 19, it offers many options for manipulating PDF files.
Figure 19: PDF options
Conclusion
Konqueror as a universal document viewer has proved worthy of the name; with the addition of service menus and kioslaves and a little inbuilt configuration, Konqueror can sustain you for a whole session, as long as your needs are not those of a power user. Yes, it passes the Acid2 test for web compliance, but it does not work well with every website (which browser does?); it does work with GMail, but only in the poor man’s HTML version (you can always try Change Browser Identification in the Tools menu and set it to Firefox), and it does not have Firefox-style extensions. Nevertheless, I hope this article has demonstrated Konqueror’s power and its versatility to do so much more than the others from within its own interface.
Biography
Gary Richmond: An aspiring wannabe geek whose background is a B.A. (hons) and an M.Phil in seventeenth-century English, twenty-five years in local government, recently semi-retired to enjoy my ill-gotten gains.
Copyright information Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved. Source URL: http://www.freesoftwaremagazine.com/articles/konqueror_the_browser_file_manager_you_didnt_know
Published on Free Software Magazine (http://www.freesoftwaremagazine.com)
Computer role-playing games for GNU/Linux
A look at what's out there
By Matt Barton
Of all the various types of computer games out there, my favorite is the computer role-playing game, or CRPG for short. Almost everyone has heard of classic CRPGs like Ultima, Baldur’s Gate, and Fallout, but what about free software CRPGs? In this article, I take a peek at what’s out there.
Introduction
Do you ever get tired of listening to gamers who insist that all the best games are for consoles or Windows, so why bother with GNU/Linux? Do you have colleagues who maintain that GNU/Linux is suitable only for serious work, and that games are frivolous and unimportant? I can’t tell you how many times I’ve heard people go on about how expensive games are to produce, and how they just couldn’t possibly work under a GNU license. Even Richard Stallman once told me that games shouldn’t be treated the same way as other programs, and that he had no problem with separately licensing the creative material (story, characters, graphics, music) and the engine it ran on: “A game scenario can be considered art/fiction rather than software. So it is okay to split the game into engine and scenario, then treat the engine as software and the scenario as art/fiction”. Many free software advocates agree with this view, but others wonder whether we should really view writing fiction any differently than writing code. After all, everyone knows that fiction isn’t always read purely for pleasure, but rather to learn something or experience a new perspective. Stallman himself wrote a great short story called The Right to Read, which you should read if you haven’t already. If that doesn’t prove the point, I don’t know what will! But wait, you say, games don’t have anything to teach. They’re just simple diversions; just amusements to while away the idle hours. This may be true for some people—I’m thinking of those whose idea of “gaming” means booting up solitaire, Tetris, or Mahjong and literally killing time. There are many of these simplistic games available under the GNU license, and several are highly polished and as good as, if not better than, many of the proprietary “casual games” being sold in stores. I have nothing against this kind of gaming, but it’s not what people like me are interested in.
My two favorite genres of computer game are graphical adventure games (GAGs) and computer role-playing games (CRPGs). These games provide a much more intense experience—they’re addictive and really get you emotionally invested in the outcome. For this article, I’ve chosen to focus on free software CRPGs currently available for GNU/Linux. Sadly, there aren’t many worthwhile GNU-licensed CRPGs available, and it’s clear that there is a great deal of work left to be done. Nevertheless, a few ambitious developers are moving forward, and I want to tell you about a few of the more interesting projects.
Figure 1: NetHack is a cult classic CRPG and is quite fun, even with character set graphics
Roguelikes
By far the most popular CRPGs for GNU/Linux are “roguelikes”. Put quite simply, a “roguelike” is one of the many games that follow in the footsteps of a very popular UNIX classic called Rogue, which was itself based on older and lesser-known CRPGs for mainframes and the PLATO learning system. There are a few things you should know about Rogue. For one thing, it doesn’t have the kind of graphics you see in most videogames. Instead, it uses Ken Arnold’s “curses” library to make a sort of graphical interface using a terminal’s character set (i.e., the different symbols you can make with your keyboard or with special codes). To put it simply, instead of a graphical image of a wall, you’ll see rows of pound signs and slashes. The main character is represented by the @ symbol, and monsters are usually represented by the first letter of their name (Z for zombie). This may sound primitive, but remember that the alternative was to have no graphics at all and rely purely on textual descriptions (think of games like Zork and Colossal Cave). One very nice thing about Rogue’s “graphics” is that you don’t need to make a map on your own—the computer does it for you! Besides the innovative “graphics”, Rogue also offers some pretty compelling gameplay. It’s a very intuitive game that’s easy to learn, but hard to master. The basic mission is to descend to the 26th level of a dungeon, fetch the Amulet of Yendor, and get back to the top. Of course, achieving this goal will mean fighting plenty of monsters along the way. Thankfully, relatively lightweight monsters roam the top—the deeper you go, the bigger and badder the monsters. Fortunately, your character will learn how to fight better as he gains experience, and he’ll find better weapons and magical items to help vanquish his foes. Another nice touch is that the dungeons are randomized, so that it’s virtually a new game every time you sit down to play it.
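As a toy illustration of the character-set “graphics” described above, the following Python sketch draws a randomly populated room the way a roguelike terminal would: # for walls, @ for the adventurer and Z for a zombie. This is a sketch of the idea only, not code from Rogue itself (which uses the curses library rather than plain printing):

```python
import random

random.seed(1)      # randomized dungeons: vary the seed for a new layout
W, H = 12, 5
grid = [["#"] * W for _ in range(H)]
for y in range(1, H - 1):           # carve out a rectangular room
    for x in range(1, W - 1):
        grid[y][x] = "."

def place(ch):
    # Drop a symbol on a random unoccupied floor tile.
    while True:
        x, y = random.randrange(1, W - 1), random.randrange(1, H - 1)
        if grid[y][x] == ".":
            grid[y][x] = ch
            return

place("@")   # the main character
place("Z")   # a monster, shown as the first letter of its name (zombie)
print("\n".join("".join(row) for row in grid))
```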
Many people really enjoyed, and continue to enjoy, the original Rogue, but of course hackers wanted to find ways to explore further possibilities. A series of forks developed, and eventually enough of these games were available to warrant coming up with a name for the genre—“roguelike” seemed to fit.
Nethack and Angband
There are many Roguelikes, but two of the most popular are NetHack and Angband. NetHack is famous for being one of the first games to be developed using the internet (that’s where it gets its name). It’s actually based on an older Roguelike named “Hack”. Hack added some neat features to the mix, such as pets that would follow the character and help him out, and shops where you could buy items (rather than just finding stuff lying around). NetHack adds even more features, making it a very sophisticated and well-loved Roguelike. An even more feature-rich version of the game is Slash’em, which stands for “Super Lotsa Added Stuff Hack Extended Magic.” Angband is based on the writings of J.R.R. Tolkien. Instead of fetching an amulet, the mission is to gain enough power to take on the evil Morgoth, Lord of Darkness. Both NetHack and Angband were, and still are, quite popular.
Figure 2: Angband is similar to NetHack, but with subtly different gameplay
Roguelikes are a lot of fun, and if you’ve never played one, I would strongly suggest checking out NetHack. There is a bit of a learning curve with the keyboard commands and interpreting the symbols on the screen, but the effort pays off. These games really get at the essence of what makes so many CRPGs fun to play: exploring dungeons, fighting monsters, and building up a powerful character. Though you can play these games for long periods of time, I think of them more as “quick fixes”. You don’t have to sit through boring stories or waste time reading pre-scripted dialog from stereotypical characters. Who needs cut-scenes anyway? Just load up a roguelike and start hacking and slashing without all the frills. If you die, so what? It’s easy enough to start over with a new character, and the randomized dungeons keep you from getting bored.
Other Roguelikes
There are many other modern roguelikes that might tickle your fancy, such as the NES-themed games available at the Villa of Darkness, where you can find roguelikes based on Castlevania, Metroid, and Zelda. Then there’s Diablo Roguelike, an interesting game indeed, since many critics argue that Diablo is itself a modern roguelike. You might also want to try Ivan, a roguelike that uses SDL for its tile-based graphics. It’s definitely worth checking out. If you’re looking for something with an older pedigree, try Moria or Ancient Domains of Mystery. There are many, many roguelikes still in active development all over the net—too many to list in a brief article! Check out RogueBasin for a big directory of them.
Roguelikes with advanced graphics
However, most of us are using personal computers with a great deal of expensive graphics technology. There’s an instinctual need to play games that feature more than just simple character graphics (ASCII, ANSI, and so on). Most of us also have sound cards and speakers, so why not add sound effects and music? Why not make a roguelike as audio-visually impressive as any CRPG currently on the market? There are two answers to these questions. The first is that graphics and sound merely distract from these games. People who hold this view argue that you should use your imagination to make that Z look like a terrifying, rotting zombie; hear his awful moaning in your mind. Try to visualize the corridors you are walking through, feel the damp and the chill; feel the heat and smell the smoke from the torch burning brightly in your hand. After all, they might say, this is how it works in real Dungeons & Dragons. The dungeon master tells you what a room looks like, and you’re supposed to picture it in your mind; she doesn’t just turn on a TV or hand you a photograph. This is a good answer for many fans of roguelikes; people who need graphics and sound are just lacking imagination. The second answer is that there’s really no reason why we can’t add graphics and sound. Some people may prefer the old style, and there’s nothing wrong with that, but others enjoy graphics and sound and want them in the game. Many of the games mentioned above do have simple graphical interfaces available, though with the same gameplay. However, let me just say now that none of your Windows or console buddies are likely to be impressed with even the best of these efforts. The interfaces still rely on a lengthy list of keyboard commands or hard-to-navigate menus, and the graphics make the 1997 game Diablo look next-gen.
Iso-Angband and NetHack: Falcon’s Eye
To learn more about the development of some of the more ambitious efforts, I spoke with Jaakko Tapani Peltonen and Hansjoerg “Hajo” Malthaner, developers of NetHack: Falcon’s Eye and Iso-Angband, respectively. Hajo’s game is graphical, but pays homage to the original; for instance, your character is shown as a floating @ symbol. It’s a charming game, but somewhat difficult to control owing to the isometric view (it takes some getting used to). Unfortunately, the project is now defunct, though you can still download the game. Hajo just didn’t feel that people were interested enough in his project to warrant the time he was putting into it.
Figure 3: Iso-Angband is an isometric game with an interesting tileset
What was the problem? According to Hajo, “The big problem was acceptance. Not technical issues; these were solvable—but acceptance was low. Some people were almost openly hostile towards the idea of a graphical frontend”. Hajo also cites some “political” problems with the Angband community, who didn’t seem to be too keen on Hajo’s bold vision. I was a bit down after hearing from Hajo; I liked his game and was sad to hear that it was no longer being developed. Fortunately, Jaakko’s Falcon’s Eye project seems to be going much better. Jaakko feels that there has been great interest in his project: “In more concrete terms, there have been over 178,000 downloads on Sourceforge.net of the Windows version alone (over 230,000 of all packages in total)”. Furthermore, some distros are including the game. He’s also received over 1,000 emails about the game! Jaakko feels that audio-visuals are important to attract new people to the game: “non-graphical roguelikes may seem disappointing compared to commercial games, despite the complexity of the underlying gameplay”. Jaakko also feels that roguelikes are easier for aspiring developers to build than other types of CRPGs. This “do-it-yourself aspect” gives the game an additional boost at a time when commercial CRPGs cost millions of dollars and need armies of professionals to complete.
Figure 4: NetHack: Falcon’s Eye has better graphics, but no animation
I played Falcon’s Eye for quite a while, and had a great deal of fun with it. The team has even created a nice introductory sequence, and the music is very pleasant. Overall, I was very impressed with the game, though I must admit it is not perfect. Perhaps the biggest problem is that the characters aren’t shown walking around the dungeon; instead, they simply zap to the next location on the grid. Animation is severely lacking, giving the game an unpolished feel. I also think the interface is difficult and should be much more intuitive. Perhaps they should look at the radial-style interfaces of some of the newer commercial CRPGs, or build a tutorial into the game that would familiarize new players with the basics. These tutorials are a “given” in almost every commercial CRPG. As with Iso-Angband, I also had some difficulty moving around, though one nice touch here is that you can just use the mouse to click on where you want to go. In short, the game has promise, but it still has room for development. Jaakko told me that the project has been forked, and a newer version of the game called Vulture’s Claw is floating around the net. However, I was unable to find and download a version of it.
Lost Labyrinth and Adonthell: Waste’s Edge
I also tried Lost Labyrinth and Adonthell: Waste’s Edge, two games that reminded me of the Japanese-style RPGs found on the NES and SNES game consoles. These games are all based on the old Ultima games, particularly Ultima III. To put it bluntly, you get a top-down view, cute, munchkin-like characters, and lots of dialog with the people you encounter. Lost Labyrinth maintains the turn-based style of a roguelike, but Adonthell is set in real-time, with fluid motion. Both have slick interfaces and are fairly easy to learn. Although I liked both of them, Adonthell seems to have a richer story and more interesting characters. Unfortunately, I’m not a big fan of this type of CRPG; the cartoony graphics put me off.
Figure 5: Lost Labyrinth reminds me of a console role-playing game
Conclusion
In short, I failed to find a GNU-licensed CRPG that really impressed me. The best of the lot are the many roguelikes, which I love to play, but their “quick and dirty” style of gameplay is definitely no substitute for the long, drawn-out campaigns you get with commercial CRPGs like Neverwinter Nights and Diablo. I hate to say it, but maybe GNU developers should look beyond Rogue for inspiration—it’s “been done”, in my opinion. What I think would be a brilliant idea would be for GNU game developers to go back and look at some of the earlier CRPGs, and also some of the tabletop, pen-and-paper role-playing games that never made it into computerized form. I’m thinking here of the many spy (Mercenaries, Spies, and Private Eyes) and WWII (Top Military System) role-playing games, as well as the science fiction game Traveller and Steve Jackson’s The Fantasy Trip and GURPS. All of these sources would be great for inspiring a new CRPG that would get us away from the rather clichéd system of Dungeons & Dragons. Don’t get me wrong; I love TSR and D&D, but there are other models out there, and they have just as much (if not even better) potential to make awesome new CRPGs.
Figure 6: Adonthell: Waste’s Edge is focused on story and characters
I would also want developers to play some of the more innovative of the older CRPGs, such as Planescape: Torment and Ultima IV. These games really explored new strategies for gameplay, and put the focus on things other than combat. There’s also the Fallout series and the earlier Wasteland, games set in post-apocalyptic times which are radically different from the average swords-and-sorcery games. I’d also recommend the console game Chrono Trigger, which is my favorite console RPG. It features a very good story and interesting characters, and the style of gameplay shouldn’t be hard to adapt. It seems to me that the first priority should be building a suitable engine, and then following a Neverwinter Nights model to encourage users to build their own campaigns. This way, you get the programmers (building the engine), artists (tile sets and character models), musicians, and, of course, storytellers all doing what they do best.
Biography
Matt Barton: Matt Barton is an English professor at St. Cloud State University in Minnesota. He is an advocate of free software, wikis, and the Creative Commons. He also studies and writes about videogames and computing history. Matt also has blogs at Armchair Arcade (http://armchairarcade.com/neo/blog/4), Gameology (http://www.gameology.org/), and Kairosnews (http://kairosnews.org/user/195).
Copyright information This article is made available under the "Attribution-NoDerivs" Creative Commons License 3.0 available from http://creativecommons.org/licenses/by-nd/3.0/. Source URL: http://www.freesoftwaremagazine.com/articles/role_playing_games_gnu_linux
Published on Free Software Magazine (http://www.freesoftwaremagazine.com)
Creating a book template with Writer
A nifty Writer template for your next book
By Dmitri Popov
While Writer allows you to create an advanced book template that consists of a master document and a number of subdocuments, there are situations where using a simpler, one-file template makes more sense. The main advantage of a one-file book template is that it helps you to work around two major problems in Writer. First of all, there is a bug (1 and 2) that makes it rather difficult to manage figure numbering. Moreover, the current implementation of the cross-reference feature makes managing cross-references between sub-documents quite cumbersome. Besides that, you might find it easier to work on a single file, where you don’t have to keep tabs on all the sub-documents. It’s also easier to troubleshoot if something goes wrong with the book layout. Moreover, creating a one-file template requires far fewer steps, which saves you time. So if a one-file book template best fits your needs, read on. The following description assumes that you are familiar with OpenOffice.org and that you know how to work with paragraph and page styles. First of all, you have to outline the overall structure of the book. For this example, I will be creating a book template consisting of the following parts:
• Front matter, including a copyright page, table of contents (TOC), and a preface
• Book body consisting of several chapters
• Alphabetical index
Creating paragraph styles
To set up the book template, you need to create custom paragraph, character, and page styles, as well as specify outline numbering for use with the TOC. Which paragraph and character styles you need to create is completely up to you. However, as a minimum, you have to create the following paragraph styles:
• Text body for use in the book body
• Headings for chapter titles and subtitles
• Footer and header for use with the book’s headers and footers
Depending on the contents of the book, you might need to specify additional styles. For example, if your book is going to include numbered and bulleted lists, you need to create paragraph styles for them, too. The same goes for figure captions, tips, boxouts, etc. To create a new paragraph style, click on the Paragraph Styles button in the Stylist window, right-click somewhere in the window, and select New from the context menu. (If you don’t see the Stylist, press F11 or choose Format→Styles and Formatting.) Then use the available options to set up the paragraph style. While most of the available options are self-explanatory, there are a few settings that deserve a closer examination. The Organizer tab allows you to specify the Next Style and Linked with options. As the name suggests, you can use the Next Style option to select the style that follows the current style in the book.
Figure 1: The Organizer tab allows you to specify the Next Style and Linked with options
For example, in figure 1, the BOOK_Heading style is followed by the BOOK_Text_body style. In practice, this means that after you’ve entered a heading in the BOOK_Heading style and pressed Enter, Writer automatically switches to the BOOK_Text_body paragraph style. This is a nifty trick that can save you a lot of time. The Linked with option allows you to select an existing style that you want to base the new style on. For example, when creating the BOOK_Tip_body style, you might want to link it with BOOK_Text_body. The linked BOOK_Tip_body style automatically inherits all the properties of the BOOK_Text_body style, so you don’t have to specify all the settings from scratch. More importantly, if you later make changes to the BOOK_Text_body style, they will automatically propagate to the linked styles, which saves considerable effort.
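The behaviour of linked styles can be pictured with a small Python analogy (the style names follow this article; this models the inheritance idea only and is not OpenOffice.org code): a linked style stores only its own overrides and falls back to its base style, so later changes to the base show through automatically.

```python
from collections import ChainMap

# The base style holds the shared settings:
book_text_body = {"font": "Times", "size": 11, "indent": 0}

# BOOK_Tip_body is "linked with" BOOK_Text_body: it starts with no
# overrides of its own and inherits everything from the base.
book_tip_body = ChainMap({}, book_text_body)

print(book_tip_body["font"])          # inherited from the base style
book_text_body["font"] = "Georgia"    # change the base...
print(book_tip_body["font"])          # ...and the linked style follows

book_tip_body.maps[0]["indent"] = 12  # a local override
print(book_text_body["indent"])       # the base style is unaffected
```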
Figure 2: Specifying the Text Flow option

If you want each chapter in your book to start on a new (usually right) page, you have to specify the Text Flow options. To do this, click on the Text Flow tab in the Paragraph Style dialog window. In the Breaks section, tick the Insert check box, select Page from the Type list, and select Before from the Position list. Tick the With Page Style check box and select BOOK_First_page (you have to create this page style first, as described later) from the list. Make sure that Page number is set to 0, and click OK to save the settings and close the window.

TIP: While you can modify Writer’s default styles for use in your book template (for example, Text body), it’s a good idea to create custom styles from scratch, so you can easily display them on their own by selecting the Custom Styles option from the drop-down list at the bottom of the Stylist. This may seem like a minor thing, but it saves you a lot of time in the long run. Also, to make it easier to identify the custom paragraph, character, and page styles, you might want to add a prefix to their names, for example, BOOK_Text_body, BOOK_Monospaced, BOOK_First_page, etc. (figure 3).
Figure 3: Custom styles in the Stylist
Creating page styles

The next step is to create the required page styles. Any book, even the simplest one, consists of several parts, and each of them requires its own page style. Based on your overall book structure, you can make a list of the necessary page styles that looks something like this:
• Title page—no numbering, no footer/header.
• Copyright page—no page numbering, no footer/header.
• Table of Contents (TOC)—page numbering starts with i, no header, book title and page number in the footer.
• Preface—page numbering continues from TOC, no header, page number in the footer.
• Chapters—page numbering starts with 1, book title in the header, page number and chapter’s title in the footer.
• Alphabetical index—two-column page, page numbering continues from the last appendix, no header/footer.
There is another important thing you must take into consideration. Unlike conventional documents, a book is printed on both sides of the page, and the pages are then bound. This means that you have to create separate page styles for left and right pages that mirror each other, and you have to make the inside margins (also known as gutters) of each page style wider in order to accommodate the binding. You therefore have to create a set of three page styles for the book chapters: BOOK_First_page, BOOK_Left_page and BOOK_Right_page. To create the BOOK_First_page style, click on the Page Styles button in the Stylist window, right-click somewhere in the window, and choose New. This opens the Page Style window, where you can specify the BOOK_First_page style’s settings, similar to those in figure 4. Note that the Page Layout is set to Only right, which ensures that all chapters in the book always start on a right page. Note also that the style’s left margin is wider than the right one: this is done to emulate the gutter.
Figure 4: Specifying the BOOK_First_page style
The BOOK_Left_page style is identical to BOOK_First_page except for two things: its Page Layout option is set to Only left, and its right margin emulates the gutter. In other words, the BOOK_Left_page style mirrors BOOK_First_page. Finally, the BOOK_Right_page style is similar to BOOK_First_page: its Page Layout option is also set to Only right, and the gutter is on the left. This raises the question: why do you need two separate right-oriented page styles at all? The answer is simple. Usually, the first page of a chapter has a different layout: it may not have a header or footer, and it can use a completely different layout altogether. Now you have to link all three page styles:
• Right-click on the BOOK_First_page style, choose Modify, and select BOOK_Left_page from the Next Style list under the Organizer tab. Click OK.
• Right-click on the BOOK_Left_page style, choose Modify, and select BOOK_Right_page from the Next Style list. Click OK.
• Right-click on the BOOK_Right_page style, choose Modify, and select BOOK_Left_page from the Next Style list. Click OK, and you are done.
Linking the page styles allows Writer to automatically apply the correct page style.
Using pictures in chapters

If you plan to use pictures in the book, there are a couple of additional things to take care of. First of all, you have to adjust the Graphics style (in the Frame Styles section of the Stylist). If you plan to use captions with the pictures, you either have to adjust the existing caption style (for example, Illustration in the Paragraph Styles section of the Stylist) or create a new one. To configure Writer to add captions automatically, follow these steps:
• Choose Tools→Options→OpenOffice.org Writer→AutoCaption.
• Tick the OpenOffice.org Writer Picture check box, and specify the Caption options.
• Click OK to save the settings and close the window.
Specifying outline numbering

In a nutshell, outline numbering is a hierarchy of different paragraph styles required, among other things, for generating a table of contents. To specify outline numbering, choose Tools→Outline Numbering, select 1 from the Level list, and select BOOK_Heading from the Paragraph Style list. In a similar manner, you can specify paragraph styles for the other levels, as shown in figure 5.
Figure 5: Specifying outline numbering
Adding a table of contents and alphabetical index

Using the defined outline numbering, adding a TOC to the book is rather easy. Place the cursor where you want the TOC to appear, choose Insert→Indexes and Tables→Indexes and Tables, and select Table of Contents from the Type drop-down list under the Index/Table tab. Tick the Outline check box in the Create from section. This forces Writer to use the specified outline numbering when generating the TOC. To add styles that are not specified in the outline numbering, such as First chapter title, tick the Additional Styles check box, select the style, and press the >> button to move it one step forward. Press OK when you are done to close the window and generate the TOC.
Figure 6: Creating a table of contents

Adding the alphabetical index is equally simple. Place the cursor where you want to insert the alphabetical index, choose Insert→Indexes and Tables→Indexes and Tables, and select Alphabetical Index from the Type drop-down list under the Index/Table tab. Configure other options, then press OK to insert the alphabetical index.
Final word

There are many ways to skin a cat, and the approach described here is by no means the only way to create a book template. However, since I used this template for my book, Writer for Writers and Advanced Users, I can promise you that it works. If you find a better way of doing things, please drop a note to dmpop at openoffice.org.
Biography

Dmitri Popov is a freelance writer whose articles have appeared in Russian, British, and Danish computer magazines. His articles cover open-source software, Linux, web applications, and other computer-related topics. He is also the author of the book Writer for Writers and Advanced Users (http://www.lulu.com/content/221513).
Copyright information

This article is made available under the "Attribution-NonCommercial-NoDerivs" Creative Commons License 3.0, available from http://creativecommons.org/licenses/by-nc-nd/3.0/.

Source URL: http://www.freesoftwaremagazine.com/articles/create_book_template_with_writer
Published on Free Software Magazine (http://www.freesoftwaremagazine.com)
Kopete: the KDE instant messenger
How to connect to virtually any instant messenger network using Kopete
By Andrew Min

Today, everyone uses a different instant messenger. Your boss may use Lotus Sametime, your colleague AIM, your friend Google Talk, and your kid Yahoo! Messenger. However, running a separate client for each network takes up hard drive space, RAM, and CPU cycles. In addition, many of these clients are proprietary and Windows-only (two big minuses for GNU/Linux users). Luckily, the free software world has several alternatives that enable users to chat with users of all of these programs (and many more). For KDE users, the answer is Kopete.

Note: This is part 3 of an instant messenger series. Part 1 deals with the history of instant messenger clients and protocols. Part 2 talks about the Pidgin instant messenger for GNOME.
History of Kopete

If ICQ hadn’t blocked Licq around Christmas 2001, Kopete probably wouldn’t have been born. At that time, ICQ had changed its protocol, causing the popular ICQ clone Licq to stop functioning. Since he didn’t want to wait for Licq to fix the problem, Duncan Mac-Vicar Prett began coding a KDE ICQ client. Several weeks later, Prett changed the focus of Kopete to a multi-protocol client and added support for the AIM and MSN protocols. Four months later, emoticons and IRC support were added, followed by Jabber support and an improved MSN plugin. By 2005, the metacontact, a better IRC plugin, and a Yahoo! plugin had been added. In March of that year, Kopete became part of the official KDE 3.4 release.
Installation

Kopete is available for virtually all GNU/Linux distributions. If your operating system uses KDE 3.4 or higher, Kopete is probably installed already. If not, it will most likely be available through the default package manager. If you need to build it from source, read the build tips and install tips on the Kopete site. Kopete is also available for Macintosh OS X 10.4 (Tiger) if you use fink. Read more at the Kopete fink page. Eventually, Kopete may become available for Macintosh and Windows users without fink (probably when KDE 4 comes out with support for these platforms). In the meantime, use the fink version or the native Mac app Adium.
How to set up accounts

Once you have installed Kopete, you want to use it to chat with your friends. Here’s a step-by-step guide on how to configure it.
Setting up an AIM account

All you need for Kopete to connect to the most popular IM network in the US is an AIM account (get one here). Once that’s settled, open the accounts window by going to Settings→Configure. Under the Accounts section, click the New… button to add an account. Select AIM, and continue. Enter your screen name, password (optional), and other preferences (you may need to be connected in order to edit these). Then, hit Next. At the last screen, hit Finish. All of your AIM contacts should show up.
Figure 1: AIM account set up
Setting up an ICQ account

Kopete started out as an ICQ client, so of course it has an option to connect to ICQ. All you need is an ICQ # (available here). First, open the accounts window by going to Settings→Configure. Under the Accounts section, click the New… button to add an account. Select ICQ, and continue. Enter your ICQ #, password (optional), and other preferences (you may need to be connected in order to edit these). Then, hit Next. At the last screen, hit Finish. All of your ICQ contacts should show up.
Figure 2: ICQ account set up
Setting up an MSN (Windows Live) account

We probably all know a lot of people who use the hated Windows Live Messenger (formerly known as MSN Messenger). Luckily, you don’t have to be running Microsoft software to chat with them. Kopete makes it easy to connect to your Windows Live account (available here). First, open the accounts window by going to Settings→Configure. Under the Accounts section, click the New… button to add an account. Select MSN Messenger, and continue. Enter your MSN Passport ID, password (optional), and other preferences (you may need to be connected in order to edit these). Then, hit Next. At the last screen, hit Finish. All of your Windows Live contacts should show up.
Figure 3: MSN account set up
Setting up a Yahoo! Messenger account

Like Yahoo! Messenger, but reluctant to download the 1.x version for GNU/Linux? If you have a Yahoo ID (available here), Kopete will connect for you. First, open the accounts window by going to Settings→Configure. Under the Accounts section, click the New… button to add an account. Select Yahoo, and continue. Enter your screen name, password (optional), and other preferences (you may need to be connected in order to edit these). Then, hit Next. At the last screen, hit Finish. All of your Yahoo! Messenger contacts should show up.
Figure 4: Yahoo! account set up
Setting up a Jabber account

Jabber is the only mainline protocol that is free software. And of course, Kopete lets you connect to your account easily (if you don’t have one, Kopete will help you register one). First, open the accounts window by going to Settings→Configure. Under the Accounts section, click the New… button to add an account. Select Jabber, and continue. If you don’t have a Jabber screen name, choose Register New Account, and hit the Choose… button to find a server (if you don’t like or trust any of them, KDETalk is the official KDE server). Enter your ID and password, and hit Register. Then, close the window and enter any additional preferences. If you’ve already registered, enter your screen name, password (optional), and other preferences (you may need to be connected in order to edit these). Then, hit Next. At the last screen, hit Finish. All of your Jabber contacts should show up.
Figure 5: Jabber account set up
Setting up a Google Talk account

Google’s a great company. They make great products, and they use standards. Unfortunately, they don’t always make it easy for third-party software, and Google Talk is a good example. Connecting to it in Kopete isn’t intuitive, though it is doable. First, open the accounts window by going to Settings→Configure. Under the Accounts section, click the New… button to add an account. Select Jabber, and continue. Your ID is your email address (e.g. andrewmin@gmail.com, or andrew@min.com if you use Google Apps). Go to the Connection tab; check Use protocol encryption (SSL), Allow plain-text password authentication, and Override default server information; enter talk.google.com as the server; and put 5223 as the server port. Change any additional preferences you want. Then, hit Next. At the last screen, hit Finish. All of your Google Talk contacts should show up. Read more at the Google Talk Help Center.
Figure 6: Google Talk account set up
Setting up an IRC account

IRC is especially popular among the tech support geeks, and Kopete makes it easy to connect to it. Open the accounts window by going to Settings→Configure. Under the Accounts section, click the New… button to add an account. Select IRC and add your nickname, username, and password (if you have one). Then, go to the Connection tab and choose your network (or add one if it isn’t listed).
Figure 7: IRC account set up

If you don’t already have an IRC account, it’s easy to get one. Just make up a nickname and connect with it. Then follow your server’s instructions on setting up a new account.
Setting up a WinPopup account

WinPopup is an old LAN messaging service that was never included in WinNT-based OSes. However, many people still use it on 98 or ME. To set up Kopete to connect to WinPopup, you need Samba installed. Then, open the accounts window by going to Settings→Configure. Under the Accounts section, click the New… button to add an account. Select WinPopup, and continue. Enter your hostname, and choose Install Into Samba. Then, hit Next. At the last screen, hit Finish. All of your WinPopup contacts should show up.
Figure 8: WinPopup account set up
Basic usage

To start using Kopete, just double-click on a contact. A conversation window will pop up. Type in your text and hit Enter. Additionally, you can add all sorts of formatting (see figure 9).
Figure 9: Instant messaging (with myself). Working hard, researching IM clients!
Chats

Group chats are a ton of fun (especially if you use IRC, which is exclusively for chats). They’re a great way to get tech support, hang out with friends, or do some collaboration. To set one up, right-click on your account icon in the bottom right corner and click Join Groupchat…. Enter all the necessary information and hit Join. Note: this may not work with some protocols, such as Google Talk.
Metacontact

The metacontact is one of Kopete’s most distinctive features. Basically, it’s one contact for multiple accounts. For example, I have a friend named Tim who has a Jabber account, an AIM account, and a Yahoo! Messenger account. Wouldn’t it be easier to just put all three into one contact? That’s what the metacontact does. Just right-click on a buddy (say the Jabber account tim@tim.com), select the buddy’s account at the bottom of the drop-down list, and click Change Meta Contact…. Select Create a new metacontact for this contact. Rename the metacontact to something more memorable than tim@tim.com (e.g. Tim), and then add the rest of your contacts. You will then be prompted whether to delete the old contact or not.
Global identity

It’s a real pain when you need to update your profile information for AIM, ICQ, MSN, Yahoo!, Jabber, and IRC one at a time. Why can’t you just do it all at once? That’s what the developers thought when they added Global Identity to Kopete. Just go to Settings→Configure, select Identity, and check Enable global identity. You can then change your name, photo, and address book info (using KAddressBook).
Extending Kopete

Kopete throws in a lot of features (like most KDE apps, which have a reputation for including everything except the kitchen sink). However, there is always something left out. That’s why Kopete has support for extensibility. You can create protocol plugins, add new emoticons, or skin Kopete with themes.
Emoticons

Emoticons are an integral part of chatting. You can’t have an IM conversation without a :-), :-(, or even a ;-). However, the default Kopete emoticons aren’t my favorites. Luckily, Kopete lets you change them with ease. Just go to Settings→Configure, choose Appearance, and go to the Emoticons tab. You can choose from a list of pre-installed emoticons or install your own from KDE-Look’s Emoticons section.
Chat skins

I love the default Kopete chat interface, but it can get old after a while. To change it, go to Settings→Configure, choose Appearance, and go to the Chat Window tab. You now have a list of styles to choose from (Clean, Clear, Gaim, Hacker, Konqi, Kopete, or Retropete). You can also install new ones by hitting Get New… or by going to the KDE-Look.org Kopete Style section.
What to look forward to in future versions

Kopete has a lot of things coming soon. Full voice and video support will be added (including Google Talk support, currently not available in the default install), along with better protocol support and more cool features.
Links
• Kopete homepage
• Handbook
• Screenshots
• Roadmap
• Lead developer’s blog
• Wikipedia article
Biography

Andrew Min: Definition: Andrew Min (n): a non-denominational, Bible-believing, evangelical Christian. (n): a Kubuntu Linux lover (n): a hard core geek (n): a journalist for several online publications including Free Software Magazine, Full Circle Magazine, and Mashable.com
Copyright information

This article is made available under the "Attribution-Sharealike" Creative Commons License 3.0, available from http://creativecommons.org/licenses/by-sa/3.0/.

Source URL: http://www.freesoftwaremagazine.com/articles/kopete_kde_instant_messenger
Creating a free CD or DVD database and labels in OpenOffice.org Base
Going beyond the box of index cards to track and label your media
By Solveig Haugland

If you’re serious about music or DVDs, at some point you cross the threshold of having more than you can keep track of easily. The box full of index cards has served its purpose; it’s time to move on to storing information about your CDs and DVDs in a database. This might seem like more of a pain than you can stand. What’s the point of building a database when you can just type it all into a spreadsheet, for instance? Well, a spreadsheet is a good start, but with a database you get a lot more features, including easily printing custom labels for all those legal backups you’ve made. You could also print out a record of all your movies or music, if you keep notes on them such as summaries, who you’ve loaned them to, and anything else you track. Putting together a database and creating the forms to print what you want takes more than a couple of steps, but it’s not that difficult. Even better, it’s free, since you can use the free software office suite OpenOffice.org to do everything. You can download and install OpenOffice.org from their website. Once you’ve got the program installed, you’ll complete these steps:
1. Create a database: There are several ways to do this, but I’ll show you the quickest way.
2. Create or download the document you want to print, from a source such as WorldLabel, and connect the document to the database.
Creating a database

If you’re a database god and enjoy spending time deep in a database, well, you can do anything you want. But if you want the most results for the least trouble, here’s what I recommend: get your data into a spreadsheet, then create a database file that can read the spreadsheet. This is simple, and if you don’t need advanced database features, it’s all you need. This process creates an OpenOffice.org database file that points to your spreadsheet of information. The spreadsheet doesn’t have the power to do mail merges, but the database file pointing to it does. It acts as a middleman saying “The data’s over there, with these fields; go get it”.
Getting data into a spreadsheet

You either have your data in a spreadsheet already, or you can get it into a spreadsheet pretty easily. You’ll want to set it up to look something like this: field labels across the top, and each piece of data kept separate.
The problem of blank [Address2] lines

Some people live in houses; some in apartments. Some work in enormous campuses with mailstops or buildings in the address; others receive their work mail at a post office box.
Everyone knows this, of course. So why is this worth mentioning? Because it affects how you do mail merges when you send mail to these people. You’ll need all the relevant information for their addresses to print out on labels, or in the header of form letters. You want them to look like figure 1.
Figure 1: Data setup

If your data is in .csv files, you can open those files in a spreadsheet.
1) In OpenOffice.org choose File→Open.
2) In the File Type list of the Open window, select Text CSV (click in that list and type T four times), as shown in figure 2.

Figure 2: Selecting the Text CSV file format

3) Select the CSV file and click Open. In the window that appears (figure 3), verify that the settings are correct for the data, then click OK.
Figure 3: Verifying settings

4) Save the file as a spreadsheet, in spreadsheet format, as shown in figure 4.
Figure 4: Saving the imported data in spreadsheet format
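The Text CSV format imported in the steps above is nothing more than plain text with the field labels on the first line and one record per line. As a minimal illustration (the file name and sample data are hypothetical, not from the article):

```shell
# Build a minimal Text CSV file: field labels across the top,
# one record per line (hypothetical sample data)
cat > media.csv <<'EOF'
Title,Artist,Year
Abbey Road,The Beatles,1969
Kind of Blue,Miles Davis,1959
EOF

# Show the field labels, then count the data records
head -n 1 media.csv
tail -n +2 media.csv | awk 'END { print NR " records" }'
```

Any spreadsheet or text editor can produce a file like this; the import dialog shown in figure 3 is just Calc asking how those fields are delimited before splitting them into columns.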
Creating the database file that points to the spreadsheet

Now you’re ready to create the database file that reads the information out of your spreadsheet. It’s very simple.
1. Choose File→New→Database.
2. In the first window, select Existing Data Source, and Spreadsheet type, as shown in figure 5. Click Next.
Figure 5: Specifying Spreadsheet as the database type
3. In the next window, point to the full path of the spreadsheet you want to use, as in figure 6. Click Next.
Figure 6: Pointing to your spreadsheet

4. Leave all the checkboxes marked, as in figure 7. You don’t need to edit the database, but the first time, at least, it’s good to take a look at what the main database window looks like. Click Finish.

Figure 7: Finishing the database creation

5. Name the database, as shown in figure 8. This name will show up when you do mail merges, and it’s the name and location you’ll look for when you want to do things with the database like create queries or reports. Click Save.
Figure 8: Naming and saving the database
6. You’ll see the database, as shown in figure 9. Click the Tables icon at the left, and select a table name. Each sheet containing data in your spreadsheet will become a table. Now, on the right side, instead of None, select Document. You’ll see the data in the spreadsheet.
Figure 9: Viewing the new database and its table(s) in the database editing window

You don’t have to do anything else—you’re ready to pull information into your documents.
Updating the data

When you have more data, just add it to the spreadsheet. When you want more tables, just add the data to another sheet in the spreadsheet.
Creating a query to restrict data

Let’s say you want to print labels for all the DVDs in your collection, or only for DVDs released between 1998 and 2002. You’d create a query for the restricted set, so that you could print labels based on either that query or the whole database.
1. Open the .odb database file you created.
2. Click the Queries icon at the left side.
3. Click the option to use Design view. See figure 10.
Figure 10: Creating a query in Design view

4. In the window that appears (figure 11), click Add to add the table you want to base the query on.
Figure 11: Selecting the table to base the query on

5. Double-click each field that you want to add to the query. Then add the limiting information in the Criterion field; an example is shown in figure 12.

Figure 12: Creating the query

6. Click the Run Query icon, with the green checkmark, to see the results shown in figure 13.

Figure 13: Running the query and seeing the results

7. Click the Save Query icon and name the query. See figure 14.
Figure 14: Saving and naming the query

8. Close the window.
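If it helps to see the query’s logic outside the GUI: the Criterion entered in the steps above is just a range test on one field. Here is a rough command-line analogue of the 1998–2002 restriction, run against a hypothetical CSV export of the table (file name and data invented for illustration):

```shell
# Hypothetical export of the DVD table: Title,Year
cat > dvds.csv <<'EOF'
Title,Year
The Matrix,1999
Casablanca,1942
Amelie,2001
EOF

# Keep the header plus rows whose Year falls inside the range,
# mimicking a Criterion of "between 1998 and 2002"
awk -F, 'NR == 1 || ($2 >= 1998 && $2 <= 2002)' dvds.csv
```

The query saved in Base does exactly this kind of filtering, except that the result stays live: add a record to the spreadsheet and the query picks it up the next time you run it.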
Printing CD or DVD labels from your database

Let’s say that you’ve created a lot of backups of your movies, or you’re a musician with a burgeoning music career, or you’ve created a number of mix CDs. For whatever reason, you need to print out a lot of labels. The easiest way to do this is to create a mail merge that sucks the information out of your database and prints a label for each of your movies or CDs.
1. You need to print, so you need a document. You can download a CD template such as the 4.5” CD-ROM WL-OL1200 template from WorldLabel, or use one that you already have that works. The spacing is important, so you’ll probably make more trouble for yourself than it’s worth by creating your own label template from scratch. I’ve shown the template here with the borders on (it’s created as a table) so you can see what you’re working with more easily. See figure 15.
Figure 15: Template for CD or DVD case label with borders showing

2. To pull the data from your database, press F4 or choose View→Data Sources. Expand the database you want to use and select the table or query to use. See figure 16.
Figure 16: Selecting the query or table to do labels for

3. Click on the name of the field (not the data) that you want to use and drag it into the label. When you move your mouse over the field, you’ll see the full path of the database, table, and field. Drag in all the fields that you want to use. See figure 17.

Figure 17: Dragging in the fields to print on the labels

4. At this point, most of the choices are up to you. Add any other text, like labels for the title, actors, etc. You can put in graphics and do colorful formatting; anything you want. Use table formatting tools to align the text at the top or bottom, or just press Return a few times to get it where you want it. Add shading to the table or to the background of the document to get the color you want. Apply everything you do in the first label to the second label as well (the same formatting, etc.). See figure 18.
Figure 18: Formatting the fields in the labels
5. Copy the fields from the first label to the second.
6. You now need to insert the trigger that will make the next record in the database print—otherwise the next movie’s information won’t print until the next page.
a) In the second label area, click to the left of the first field, in this case Movie. See figure 19.
Figure 19: Preparing to insert an extra logic field

b) Choose Insert→Fields→Other, go to the Database tab, select the Next Record field, select the database and table you’re using, and click Insert. See figure 20.

Figure 20: Inserting the field that will trigger the next record

c) The Next Record field will be inserted, but it doesn’t take up any space, so it won’t be very obvious. You only need to insert the Next Record field once per record, not in front of each field. See figure 21.
Figure 21: The inserted Next Record field

d) If you have three or more label areas in the template, copy all the fields from the second label area to each additional area. Now when you print, you’ll get a new record for each label.
7. Choose File→Print and click Yes to print a form letter. Don’t mark the checkbox; you want this window to appear each time. See figure 22.

Figure 22: Choosing to print a form letter, also known as a mail merge

8. In the print window, choose to print to a printer and click OK. See figure 23.
Figure 23: Selecting the output for the mail merge
9. In the Print window, select the printer you want, and be sure it’s loaded with labels. Click OK. See figure 24.

Figure 24: The printed labels

10. One label will be printed for each movie (or other record) in your database.
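Under the hood, the merge simply walks the data source and fills in the same label fields once per record; the Next Record field from step 6 is what advances the cursor from one record to the next. A rough sketch of that loop (the Movie/Year fields and file name are hypothetical examples):

```shell
# Hypothetical data source: one movie per record
cat > movies.csv <<'EOF'
Movie,Year
The Matrix,1999
Amelie,2001
EOF

# Emit one filled-in "label" per record, skipping the header row;
# this mirrors what the mail merge does for each label area
awk -F, 'NR > 1 { printf "%s (%s)\n", $1, $2 }' movies.csv
```

In the Writer document the same thing happens visually: each label area is a template, and each pass through a record stamps out one copy.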
Conclusion

As with organizing anything (your spices, your sock drawer, or your office), setting up a database and labels takes a little time. But down the line you’ll save time and effort, and you’ll be able to keep track of who you’ve loaned your DVDs to.
Biography

Solveig Haugland has worked as an instructor, course developer, author and technical writer in the high-tech industry for 16 years, for employers including Microsoft Great Plains, Sun Microsystems, and BEA. Currently, Solveig is a StarOffice and OpenOffice.org instructor, author, and freelance technical writer. She is also co-author, with Floyd Jones, of three books: StarOffice 5.2 Companion, StarOffice 6.0 Office Suite Companion and OpenOffice.org 1.0 Resource Kit, published by Prentice Hall PTR. Her fourth book, the OpenOffice.org 2.0 Guidebook, is available from Amazon.com, from Cafepress, and directly from Solveig. For tips on working in OpenOffice.org or StarOffice, visit Solveig’s blog: http://openoffice.blogs.com.
Copyright information

Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved.

Source URL: http://www.freesoftwaremagazine.com/articles/creating_free_cd_dvd_database_labels_openoffice_org_base
From the driver to the window manager: how to install Compiz Fusion for Ubuntu, Kubuntu, and Xubuntu
The step-by-step guide to installing ATI/NVIDIA, Xgl/AIGLX, and Compiz Fusion
By Andrew Min

The 3D world just got a lot brighter with the birth of Compiz Fusion, a powerful compositing window manager for GNU/Linux operating systems. Originally there was a single project, Compiz, which later split into Compiz itself and an unstable, unofficial fork known as Beryl. Now, the two projects have been reunited into one amazing compositing window manager. In a nutshell, it adds effects to your desktop like wobbly windows (the windows actually wobble when you move them), a cool virtual desktop manager in the form of a cube, and much more. For proof of how cool it is, just do a Google Video/YouTube search for “compiz fusion”. Unfortunately, Compiz Fusion has little or no documentation. The little that exists is meant for hardcore geeks who are expected to know what obscure and unintuitive commands like “git” are. It also doesn’t explain how to install a composite manager or a video card driver (both of which are required for Compiz Fusion to function properly). Worse still, much of the documentation available will only work for one type of video card (NVIDIA tutorials won’t work with ATI cards, and vice versa). And worst of all, virtually all of the tutorials out there are for Ubuntu and won’t work for Kubuntu or Xubuntu users. Therefore, this guide was created as a sort of all-in-one guide for all users of the major Ubuntu distributions and the major video cards.

Warning: Most, if not all, of this software (including Compiz Fusion itself) is alpha. It should work, but there is a chance it will not. Therefore, it should not be used on production machines. You have been warned.
ATI card owners Getting the driver The first thing to do is to get the video card driver. There are two modern ATI drivers available to Ubuntu users: the unofficial free software Radeon driver and ATI’s official (and proprietary) fglrx driver. fglrx supports Radeon 9000 and newer cards, including the X series (e.g. Radeon X3000), though it may work with other ATI cards as well. Unfortunately, Radeon is extremely slow when running Compiz Fusion (enough to make it unusable), so we’ve got to go with the proprietary fglrx. First, update your system using your favorite package manager. Next, make sure the packages linux-restricted-modules-generic and restricted-manager are installed. Then go to System→Administration→Restricted Drivers Manager (Settings→Restricted Drivers Manager in Kubuntu), or run restricted-manager as root (sudo restricted-manager in your favorite terminal). After entering your password, you will see an option for the ATI accelerated graphics driver. Check that it is enabled. For more information, read the Ubuntu Wiki page, BinaryDriverHowto/ATI.
Figure 1: Restricted manager for ATI
Getting the X server The newest version of X.Org includes AIGLX, which provides the GLX rendering capabilities required by Compiz Fusion. Unfortunately, AIGLX requires the Radeon driver, which is too slow to run Compiz Fusion. Therefore, we need to use a new X server called Xgl. First, install the xserver-xgl package from the universe repository. Next, create a text file (as root) at /usr/bin/startxgl.sh. What goes in the text file depends on which desktop environment you will use. Ubuntu (GNOME) users will enter this:

#!/bin/sh
Xgl :1 -fullscreen -ac -accel xv:pbuffer -accel glx:pbuffer &
DISPLAY=:1 dbus-launch --exit-with-session gnome-session
Kubuntu (KDE) users should enter:

#!/bin/sh
Xgl :1 -fullscreen -ac -accel xv:pbuffer -accel glx:pbuffer &
DISPLAY=:1 exec startkde
And finally, Xubuntu (Xfce) users should enter:

#!/bin/sh
Xgl :1 -fullscreen -ac -accel xv:pbuffer -accel glx:pbuffer &
DISPLAY=:1 exec xfce4-session
Note: DBUS is required for the GNOME login. Save and close the file, then make it executable (run sudo chmod +x /usr/bin/startxgl.sh in your favorite terminal). Now, create a new file (again as root) called /usr/share/xsessions/xgl.desktop. In it, put the following:

[Desktop Entry]
Encoding=UTF-8
Name=Xgl
Comment=Start an Xgl Session
Exec=/usr/bin/startxgl.sh
Icon=
Type=Application
Save, and log out of your session. At the login manager, choose Xgl as the session type. You’re done! Now, skip the next section to go to Getting Compiz Fusion.
NVIDIA card owners Getting the driver As with ATI cards, NVIDIA owners have several driver options. The most popular is the free software nv driver, but as with ATI, the best Compiz Fusion performance seems to come from the proprietary NVIDIA driver (aptly named nvidia). First, update your system. Next, make sure the packages linux-restricted-modules-generic and restricted-manager are installed. Then go to System→Administration→Restricted Drivers Manager (Settings→Restricted Drivers Manager in Kubuntu), or run restricted-manager as root (sudo restricted-manager in your favorite terminal). After entering your password, enable the NVIDIA option. You should now be using the NVIDIA driver. More information is available at the Ubuntu Wiki page, BinaryDriverHowto/Nvidia.
Figure 2: Restricted manager for NVIDIA
Getting the X server NVIDIA users are a lot luckier than ATI users: they can choose between Xgl and AIGLX for their server. The nice thing about AIGLX is that it is built into X.Org 7.1, so you can enable it without installing anything; there is no separate session to log into, and all you need to do is edit a few config files. Xgl is less stable and requires installing the xserver-xgl package (and, for GNOME users, the DBUS package), but needs less configuration.
AIGLX To use AIGLX, open up /etc/X11/xorg.conf in a text editor. Make sure that under Section “Module” you have the following:

Load "dri"
Load "dbe"
Load "glx"
Also, under Section “Device” you should have:

Option "XAANoOffscreenPixmaps"
You may need to add this to the device section:

Option "AddARGBGLXVisuals" "True"
Lastly, make sure the following is enabled (probably at the end of the file):
Section "DRI"
    Mode 0666
EndSection

Section "Extensions"
    Option "Composite" "Enable"
EndSection
You should now be set.
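Before logging out to restart X, it can save a round trip to confirm that all of those entries actually made it into the file. The following is a minimal sketch of such a check; it runs against a throwaway sample config here so it is safe to try anywhere, and you would point `conf` at the real /etc/X11/xorg.conf instead.

```shell
# Sketch: confirm an xorg.conf contains the AIGLX-related entries listed
# above. A throwaway sample stands in for /etc/X11/xorg.conf; point
# "conf" at the real file to check your actual configuration.
conf=$(mktemp)
cat > "$conf" <<'EOF'
Section "Module"
    Load "dri"
    Load "dbe"
    Load "glx"
EndSection
Section "Extensions"
    Option "Composite" "Enable"
EndSection
EOF

missing=0
for entry in 'Load "glx"' 'Load "dri"' 'Option "Composite" "Enable"'; do
    grep -q "$entry" "$conf" || { echo "missing: $entry"; missing=1; }
done
if [ "$missing" -eq 0 ]; then
    echo "config looks ready"
fi
rm -f "$conf"
```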
Xgl Don’t feel like editing all those configuration options? Like to stay on the bleeding edge? Xgl is your answer. First, install the xserver-xgl package from the universe repository. Next, create a text file (as root) at /usr/bin/startxgl.sh. What goes in the text file depends on which desktop environment you will use. Ubuntu (GNOME) users will enter this:

#!/bin/sh
Xgl :1 -fullscreen -ac -accel xv:fbo -accel glx:pbuffer &
DISPLAY=:1 dbus-launch --exit-with-session gnome-session
Kubuntu (KDE) users should enter:

#!/bin/sh
Xgl :1 -fullscreen -ac -accel xv:fbo -accel glx:pbuffer &
DISPLAY=:1 exec startkde
And finally, Xubuntu (Xfce) users should enter:

#!/bin/sh
Xgl :1 -fullscreen -ac -accel xv:fbo -accel glx:pbuffer &
DISPLAY=:1 exec xfce4-session
Note: DBUS is required for the GNOME login. Save and close the file, then make it executable (run sudo chmod +x /usr/bin/startxgl.sh in your favorite terminal). Now, create a new file (again as root) called /usr/share/xsessions/xgl.desktop. In it, put the following:

[Desktop Entry]
Encoding=UTF-8
Name=Xgl
Comment=Start an Xgl Session
Exec=/usr/bin/startxgl.sh
Icon=
Type=Application
Save, and log out of your session. At the login manager, choose Xgl as the session type. You’re done! Now it’s time to get Compiz Fusion.
Getting Compiz Fusion Installing prerequisites First, make sure the packages compiz-core and desktop-effects are uninstalled (this may also remove the ubuntu-desktop metapackage, which is harmless). Next, add new repositories to the file /etc/apt/sources.list:
# Treviño’s Ubuntu feisty EyeCandy Repository (GPG key: 81836EBF - DD800CD9)
# Many eyecandy 3D apps like Beryl, Compiz Fusion and kiba-dock
# snapshots built using latest available (working) sources from git/svn/cvs
deb http://download.tuxfamily.org/3v1deb feisty eyecandy
deb-src http://download.tuxfamily.org/3v1deb feisty eyecandy
(64-bit users should use deb http://download.tuxfamily.org/3v1deb feisty eyecandy-amd64 and deb-src http://download.tuxfamily.org/3v1deb feisty eyecandy-amd64 instead). You’ll also need to add the GPG key. To do this, run the following commands in your favorite terminal:

gpg --keyserver subkeys.pgp.net --recv-keys 81836EBF
gpg --export --armor 81836EBF | sudo apt-key add -
Now, update your system.
Actually installing the dang thing It’s finally time to install Compiz Fusion! Ubuntu (GNOME) and Xubuntu (Xfce) users should install the following packages:
compiz compiz-gnome compizconfig-settings-manager compiz-fusion-plugins-extra compiz-fusion-plu
Kubuntu (KDE) users should install:
compiz compiz-kde compizconfig-settings-manager compiz-fusion-plugins-extra compiz-fusion-plugi
Make sure you are in the Xgl session (or that AIGLX is enabled). Now, it is the moment of truth! Run the following:

compiz --replace
If the windows flicker, lose their title bars, and then reappear, you’ve got Compiz Fusion running. To double-check, move a window around. If it acts differently than normal, you’re running Compiz Fusion! To configure Compiz Fusion, run ccsm or System→Preferences→CompizConfig Settings Manager (Kubuntu users should find it under Settings→CompizConfig Settings Manager).
Figure 3: Compiz Fusion in GNOME
Figure 4: Wobbly Windows while playing Klondike
Figure 5: Maximizing KPat
Figure 6: The magical Aladdin effect
Figure 7: Literally playing with Emerald themes To make Compiz Fusion run automatically when you log in, add the command compiz --replace to your startup programs (the Gentoo Wiki has a good article on how to do this). Do you like the Emerald window decorations that ship with Beryl? Compiz Fusion users can use them! Make sure the package emerald-themes is installed (it will also pull in Beryl, so don’t be surprised if it’s a hefty package). Then, run compiz --replace -c emerald & instead of compiz --replace.
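Besides the Gentoo Wiki route, one common way to autostart a command on a freedesktop-compliant desktop is a .desktop entry in ~/.config/autostart. Here is a hedged sketch of that approach (the entry file name is my choice, not from the article); it writes into a throwaway directory so it can be tried safely.

```shell
# Sketch: a freedesktop autostart entry that runs "compiz --replace" at
# login. We write into a temporary directory here; in real use the
# target directory would be ~/.config/autostart.
autostart_dir=$(mktemp -d)     # stand-in for ~/.config/autostart
entry="$autostart_dir/compiz.desktop"

cat > "$entry" <<'EOF'
[Desktop Entry]
Type=Application
Name=Compiz Fusion
Exec=compiz --replace
EOF

echo "wrote $entry"
```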
Resources

• Current Compiz Fusion homepage
• Old Compiz homepage
• Compiz Fusion Blog
• Compiz Fusion Forums
• Ubuntu Wiki article on Compiz Fusion
Biography Andrew Min: Definition: Andrew Min (n): a non-denominational, Bible-believing, evangelical Christian; (n): a Kubuntu Linux lover; (n): a hard core geek; (n): a journalist for several online publications including Free Software Magazine, Full Circle Magazine, and Mashable.com
Copyright information This article is made available under the "Attribution-Sharealike" Creative Commons License 3.0 available from http://creativecommons.org/licenses/by-sa/3.0/. Source URL: http://www.freesoftwaremagazine.com/articles/installing_compiz_fusion
Managing and configuring downloads with KGet The easy, friendly way to improve downloads with Konqueror By Gary Richmond Downloading—no matter what operating system you are using—is ubiquitous. If you’ve been on the internet you will have downloaded something at some point: PDFs, pictures, ISOs, movies, music files, streaming videos, to name a few. This article will take a detailed look at KGet, a very versatile GUI download manager for the KDE desktop which is easy to use and has plenty of easily configurable options. It isn’t perfect (the upcoming KDE4 may rectify that), but we’ll go with what we’ve got and put it through its paces. KGet isn’t the only option available. GNU/Linux is blessed with any number of downloading facilities, many of them on the command line: Aria2, Curl, Wget, Axel, Axel-Kapt (GUI). A quick visit to their man pages (using the man command from your terminal) will reveal their versatility, and you can use them according to your specific needs for a particular download. However, KGet has the advantage of being integrated with KDE and being very user-friendly.
Getting started I have not carried out a detailed audit of which distros include KGet in their default installation, but if yours does not, it is usually just a case of a quick visit to your package manager’s software repositories to rectify the omission. If you wish to see a useful comparison, by feature and operating system, of various download managers, Wikipedia has a good tabular comparison. The first thing you will see when you run KGet for the first time is that it offers the option to integrate itself into Konqueror. Personally, I think that choosing to integrate is best but, if you choose not to, your decision is not set in stone. Open KGet, click on Settings→Configure KGet and you will be presented with a six-tab screen. Click on the Advanced tab and check the box for Use KGet as Download Manager for Konqueror, and all downloads will be intercepted (figure 1).
Figure 1: Options to integrate KGet with Konqueror If you prefer, you can add a KGet icon to the toolbar by selecting the Settings drop-down menu and clicking on Configure Toolbars. From there, scroll down, choose Disable KGet as Konqueror Download Manager and, using the arrows, move it from Available Actions to Current Actions. The icon is, of course, in toggle mode, so repeatedly clicking on it will disable or enable integration. If you are sticking with your original configuration you can still “opt in” KGet by right-clicking a link and selecting the option to download with it from the actions menu. The enable/disable function is also available from the right-click menu on the KGet icon in the panel.
If you have decided to integrate KGet with Konqueror, two things will happen. First, an icon will be added to the Konqueror toolbar; second, when you download a file, KGet will intercept the request and add itself to the panel until you decide to quit the application. When it comes to actually downloading a file, that blue icon in the Konqueror toolbar will be very useful; if you click on it, you can do one of two things:

1. You can detach it as a dockable item, place it anywhere on the desktop and simply drag a downloadable link (including any file download links in your Bookmarks) onto it; KGet will automatically fire up and start to download.
2. If you wish to confirm and monitor progress, just double-click on the big blue icon. It is quite a large icon though, so if you think it a bit obtrusive just right-click on it and select Hide Drop Target.
Further configuration Given the number of file types you will download over a period of time, it makes sense to want a facility which can automatically save file types to predetermined directories. KGet can do that for you too; so, before you initiate an orgy of downloading, set it up to do all the hard work for you, using Konqueror’s View Filter.
Setting up default download folders First of all, you need to create suitably-named download folders for KGet to use. If you don’t, you will get an error message when you try to add a file type and specify the location. (It would be useful if KGet created those directories automatically.) Open Settings→Configure KGet and choose the Folder tab. In the Extensions dialogue box, key in the file type—wildcards are permitted—and then add the default folder directly or by browsing for it. Click Add, Apply and then OK your way out. Do this for all the file types and default folders you’d like to set up and you’re ready to roll. Note that when you start downloading, KGet will still prompt you to save in the pre-selected folder. I would prefer that this all happened seamlessly, but at least this way you have the option to refuse the default and save elsewhere—to another folder on the hard drive, or to an external USB drive or stick. Additionally, if you wish to bypass the default folders you have set up, you can hold down the Shift key and left-click on the link: downloads will then go to your home directory, or wherever else you decide to save them. You can even set a default download folder for given file types to an external device by specifying the file path to it. Just remember to have the device plugged in! At the end of all this you should see something like that shown in figure 2.
Figure 2: Setting up download folders by file type
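To make the extension-to-folder idea concrete, here is an illustrative shell sketch of the same kind of mapping KGet’s Folder tab sets up (KGet does this internally; the folder names and extensions below are invented for the example):

```shell
# Illustration of KGet-style extension-to-folder routing as a shell
# "case" statement. Folder names are made up for the example.
dest_for() {
    case "$1" in
        *.iso)        echo "isos" ;;
        *.jpg|*.png)  echo "pictures" ;;
        *.pdf)        echo "documents" ;;
        *)            echo "downloads" ;;   # fallback: no rule matched
    esac
}

dest_for fedora-7.iso    # prints "isos"
dest_for holiday.jpg     # prints "pictures"
```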
Scheduling downloads Whether your internet connection is dialup or broadband, KGet’s scheduling facility is well worth having. In its current incarnation KGet does not offer bandwidth throttling (or segmentation and multi-threading), so scheduling is a welcome, if indirect, way to manage available bandwidth, at least until KDE4 ships with both of those features (as well as BitTorrent support!).
The reasons why a dialup user might schedule downloads with KGet are almost too obvious: unreliable connections, automatic timed disconnects by your ISP and so on. Even broadband subscribers are not immune from flaky connections, and if you are hogging bandwidth with streaming video it’s not a good time to suddenly decide to download that 3.5GB Fedora 7 ISO. So, let’s schedule it for a less bandwidth- and processor-hungry time, and at the same time take advantage of different global time zones to squeeze the last ounce of bandwidth out of the connection we choose. You can’t use KGet to determine the timezone you will download from, so if it’s a big ISO file, first select a download mirror in your timezone; if the download speed is slow or fluctuates wildly, look for a mirror (relative to your timezone) which gives you the advantage of a server many hours removed from you (it may be, say, the middle of the day in the UK, but 3AM in Perth, Australia). All things being equal you should get a fast and constant download speed. A tip: if you ever want to check speeds without KGet automatically intercepting downloads, open KGet’s Settings drop-down menu, select Configure KGet, open the Advanced tab and uncheck the box against Use KGet as Download Manager for Konqueror: the standard Konqueror download progress screen will open instead when you choose to download a file. If you are satisfied with the speed, you can terminate the download, re-enable that KGet checkbox and allow KGet to intercept the download request. This ability to switch between download modes is useful if you think KGet is over-engineered for small files like HTML pages or small pictures.
You could, of course, just start the download with KGet as your default and then, as the file downloads, click on the pause button if the speed is slow or fluctuates wildly, clicking the resume button later to see if speeds have improved and stabilized. A better way to manage things, given other tasks and servers in advantageous timezones, is (after you have paused the download in KGet) to double-click on an entry (its inactive status will be indicated by a broken-connection icon). A dialogue box will pop up. Select the Advanced button and you will be able to set a date and time for KGet to commence the download (figure 3).
Figure 3: KGet advanced feature for scheduling downloads To set the time, just click on the hour, minute or second field, set the value and arrow right across each field until you are satisfied; click on the timer icon above, and you’re done. Back in the main screen you will now see a small timer icon against the entry: that’s it! KGet can now be left unattended and relied upon to do the business, and you can do this for as many downloads as you wish. (It would be nice, incidentally, if a tooltip indicated the schedule details when the mouse hovers over the download entry.) That’s the good news. The bad news is that scheduling appears to work fine if KGet is sitting in the background counting down to zero hour, but if you quit the application entirely, nothing happens; although, if you then open it again, the download will start immediately. Shades of a missed cron job being picked up by anacron. One way round this problem is to add KGet to Autostart. You can either drag a link to the Autostart directory using the GUI (normally ~/.kde/Autostart), or fire up a terminal, cd to the Autostart directory and type the following:

ln -s /usr/bin/kget kget
This adds a soft link to KGet in Autostart. Type ls to confirm that it has been added. Now, if you have scheduled a download and then quit KGet, when you reboot your computer KGet will be running and the scheduled download will start at the appointed time. Alternatively, if you are chronically allergic to the command line and have a doctor’s exemption note, type ~/.kde/Autostart in the Konqueror location bar and drag KGet’s desktop icon to it (if you have one). Note the dot in front of kde in the file path: it is a hidden directory, and if you’re doing all of this graphically you will need to activate Show Hidden Files in the Konqueror View menu in order to see it.
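The symlink step can be sanity-checked before relying on it. A sketch, using a temporary directory as a stand-in for ~/.kde/Autostart so nothing real is touched:

```shell
# Sketch: create the Autostart symlink in a throwaway directory and
# confirm it points where expected. Substitute ~/.kde/Autostart and
# /usr/bin/kget for real use.
autostart=$(mktemp -d)     # stand-in for ~/.kde/Autostart
target=$(mktemp)           # stand-in for /usr/bin/kget
ln -s "$target" "$autostart/kget"

readlink "$autostart/kget"   # prints the target path
```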
Scheduling KGet the smart way The above method is good and relatively straightforward, but you can work a little smarter by thinking ahead. If you have a series of large files to download, think of assembling them like stacked aircraft at Heathrow before handing them off to KGet’s air traffic control. KGet is often criticised for lacking a batch facility, but this is not entirely true: there is a batch facility—of sorts. When downloading ISOs, this will not work with HTTP connections—the KGet icon in the Konqueror toolbar will “disappear” in this mode and only “reappear” when you are in FTP mode. With that caveat in mind, navigate to the download page of your choice, click the KGet icon in the Konqueror toolbar and select List All Links, and you should be presented with something like figure 4.
Figure 4: KGet batch ISO download Once you have selected the files you want, just click on Download Selected Files and KGet will add them to the status screen. At this point you might want to pause them and schedule them for downloading at the times best suited to your own priorities and server load. It’s not perfect by any means, but it is better than nothing. To prioritise the order of download, highlight a file and either right-click on it and select Move to Beginning or select that option from the Transfer drop-down menu. However, the best way to transfer files to KGet without having to pause at all is to set it to expert and offline mode. Click on the globe button on the toolbar and it will change to a plug icon; click on the expert mode button and repeat the batch process described at the beginning of this section. Now the files will transfer to KGet’s status window but will not download—unless you toggle the plug button, which will revert to the globe button. These files can be identified easily: not only are they not downloading, but the download-connection icon is shown as one piece rather than two separated halves (figure 5).
Figure 5: Expert and offline mode set to “on” for batch downloading None of this prevents you from double-clicking on the desired file and setting up a scheduled download. In any of these permutations the download can be paused, resumed, deleted or have its schedule details amended. Multi-threading, segmentation and better batch handling would complete KGet as a downloader, but it’s not half bad as it is. There is one final feature of KGet that is a useful utility: if you have emailed yourself (or been emailed) a link to a file download, you can simply drag the link to the drop target and KGet will crank into action in the usual fashion—or you can save it for a scheduled download later. That’s nice.
KGet is a foxy operator Firefox is a great browser, but its built-in download facility is so basic that it does not even support resuming broken connections or surviving reboots. This is a glaring omission, and an inconvenient one too, but KGet can come to the rescue—with a little prior assistance from a Firefox extension which allows the user to select from a number of download managers, including KGet. The extension in question is called FlashGot and you can download and install it from the official Mozilla add-ons site. If you want to find out more about it, FlashGot’s website is extensive. Once installed, you will be able to add KGet (and configure other download managers to work with FlashGot too). To add it, select Tools in the Firefox menu, then select More Options, click on Add, type in a name, then select Browse, navigate to the executable file path of KGet (if it is not already selected for you) and OK your way out.
Conclusion It is a truth universally acknowledged that a browser without a multi-featured download manager must be in want of improvement. Downloading files is such an integral part of the web experience these days that the absence of a good download manager would be as glaring and painful an omission as the want of a good search engine. KGet was designed for the KDE desktop (though of course it will work with GNOME too) and while its feature set is not going to win any awards from a power tweakers’ conference, it integrates well, contains sufficient features to avoid embarrassment and has at least been “ported” to Firefox via the FlashGot extension. We can only look forward to its further development in KDE4.
Biography Gary Richmond: An aspiring wannabe geek whose background is a B.A. (hons) and an M.Phil in seventeenth-century English, twenty-five years in local government, and recently semi-retired to enjoy my ill-gotten gains.
Copyright information Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved. Source URL: http://www.freesoftwaremagazine.com/articles/managing_downloads_with_kget
Extending Nautilus: rotating JPG images Customize the GNOME file manager with scripts By Scott Carpenter I recently went looking for a way to rotate JPG images from within Nautilus, and found a nice way to do this and more. It’s not difficult to customize the right-click popup menu in Nautilus to perform custom actions on files. Here are some instructions and scripts to get you started. I often have vertically oriented camera pictures that I want to rotate from within the file manager. Windows Explorer has a nice feature where you can multi-select pictures in thumbnail mode, right-click to get a popup menu (also known as the context menu), and pick clockwise or counter-clockwise rotation. This is a lossless transformation, as opposed to what you’ll likely get if you open an image in your favorite graphics program, rotate it, and resave. That method is also cumbersome if you’re like me and just want to bulk-rotate your images after you pull them in from your camera. This is one of the many things I’ve been gradually learning while replacing my previous Windows capabilities, and as usually happens when I find a solution in GNU/Linux, I was pleased by the elegant building-block approach of *nix systems. I found that I have the power of bash at hand from within Nautilus, with scripts that further take advantage of some nifty utilities I’ll cover below. It’s true that many import programs will rotate your images automatically as they are pulled from the camera, but not all of them will, and you won’t always be able to use the importer of your choice. It’s also possible your camera doesn’t store the proper Exif metadata in the image file, leaving your super funky-cool import program helpless. So I think we can all agree that this is something you must have for ease of photo management.
I’ll look specifically at image file operations to demonstrate how Nautilus can be extended. I’ll focus on JPG files, which are practically the universal format for digital cameras (at least those used by us common folk) and carry Exif data that allows us to do several neat things. After working through this tutorial, you should be able to right-click on your images in Nautilus and see menu options as shown in figure 1.
Figure 1: GNOME Nautilus file manager right click popup menu
This corresponds to the hierarchy in your home directory:

~/.gnome2/nautilus-scripts/img/
    autorotate.sh
    change-date-and-rename-with-exif.sh
    change-mod-date-to-exif.sh
    rename-with-exif-date.sh
    rotate-left.sh
    rotate-right.sh
    caution/strip-exif.sh
Preliminaries The public domain code for these scripts appears below. You can also grab downloads.tgz (see the download icon at the beginning of this article) and extract it to your nautilus-scripts directory:

~/.gnome2/nautilus-scripts$ tar -xvf downloads.tgz

The GNOME page for extending Nautilus is a good place to start. It mentions that you can also use the File→Scripts menu to run your scripts, and from either there or the right-click context menu, you can select the “Open Scripts Folder” item to open up the directory in Nautilus. If you don’t have any scripts installed, you won’t get these menu options. Make sure the scripts you create are executable.
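That last point trips people up, so here is a small sketch of installing a script and setting the executable bit, done in a temporary directory standing in for ~/.gnome2/nautilus-scripts (the script name and its echo are invented for the demonstration):

```shell
# Sketch: install a trivial Nautilus-style script and make it
# executable. A temporary directory stands in for
# ~/.gnome2/nautilus-scripts so this is safe to try.
scripts_dir=$(mktemp -d)
script="$scripts_dir/hello.sh"

cat > "$script" <<'EOF'
#!/bin/bash
echo "got: $@"
EOF
chmod +x "$script"     # without this, Nautilus silently ignores it

"$script" a.jpg b.jpg  # prints "got: a.jpg b.jpg"
```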
Possible Gotcha I initially couldn’t get my scripts menu to appear when I put the scripts in the root of nautilus-scripts, and thought they needed to go in subdirectories. But when I checked again while writing this, it worked fine that way. Then, after removing and restoring files to my preferred hierarchy, I couldn’t get my scripts to appear in the menu again, even after closing and reopening Nautilus. I had to force a reload of the directories in Nautilus to see everything in the menu again, using the little blue circular arrow in the toolbar. Maybe this had something to do with my original problem. In any case, if things don’t appear as expected, make sure to reload the nautilus-scripts directory. (Also available in the View→Reload menu and with the keyboard shortcut CTRL + R.) There is one more thing to do before you start looking at the scripts. You will need the jpegtran utility from the Independent JPEG Group and jhead, by Matthias Wandel. Both are small and robust free software programs: jhead is public domain, and jpegtran has its own license that appears to be fully free. The good news is that you probably already have jpegtran, and jhead is readily available. On my machines running Ubuntu 7.04 (Feisty Fawn) and Fedora FC5, jpegtran was included in the default installation. I installed jhead in Ubuntu with sudo apt-get install jhead and, as root in Fedora, with yum install jhead. For experimentation purposes, I recommend using a temporary directory with copies of some pictures. You most likely do not want to use your regular picture directories while initially working with these scripts, since the operations are not undoable. One of the things I’ll show you is how to create an “autorotate” script that takes advantage of Exif orientation data. If you’ve previously rotated your pictures, this flag may be cleared.
You can use monkey.jpg to, um, monkey around with (figure 2).
Figure 2: monkey.jpg Now, finally, the scripts!
rotate-right.sh (clockwise)

#!/bin/bash
while [[ -n "$1" ]]; do
    # if a file and not a dir
    if [[ -f "$1" ]]; then
        # by default jpegtran copies only
        # some Exif data; specify "all"
        jpegtran -rotate 90 -copy all \
            -outfile "$1" "$1"
        # clear rotation/orientation tag
        # so that some viewers (e.g. Eye
        # of GNOME) won't be fooled
        jhead -norot "$1"
    fi
    shift
done
Once you have this script in place, you might want to run it from a terminal window first to verify that it works as expected:

~/.gnome2/nautilus-scripts/img/rotate-right.sh ~/tmp/monkey.jpg

One reason is that when you run things from Nautilus’s script menu, you won’t get any feedback if there are errors; the scripts just silently fail. For example, if your scripts aren’t executable, it will look like your script did nothing. Here’s what the script does:

• Processes all the files supplied on the command line in a loop. (If called from Nautilus, it will pass all the file names as arguments to the script.)
• While there are files, and if the file is in fact a file and not a directory, it calls jpegtran to rotate the image 90 degrees. By making the outfile have the same name as the input file, it simply replaces it.
• After rotating the image, it calls jhead to clear out the orientation flag in the Exif metadata. As commented in the script, if you don’t clear the rotation tag, programs like Eye of GNOME (the default GNOME viewer) will be confused and rotate the image again, because it is smart enough to read the Exif data and try to orient the picture better for your viewing pleasure.

It’s that easy. For the counterclockwise rotate-left.sh script, replace -rotate 90 with -rotate 270.
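The while/shift skeleton is the reusable part of these scripts. You can watch it consume a Nautilus-style argument list by substituting echo for the real jpegtran/jhead calls (a harmless stand-in, so this runs anywhere; POSIX [ ] is used here in place of bash’s [[ ]]):

```shell
# Same argument loop as rotate-right.sh, with echo standing in for the
# real image tools. Anything that isn't an existing regular file is
# skipped, just as in the real script.
process() {
    while [ -n "$1" ]; do
        if [ -f "$1" ]; then
            echo "would rotate: $1"
        fi
        shift
    done
}

tmp=$(mktemp)                  # one real file to act on
process "$tmp" /no/such/file   # only the real file is reported
```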
autorotate.sh Next up is even better. I used to spend a lot of time in Windows Explorer picking through a big directory of thumbnail images to find the ones that had been shot vertically and multi-select them in order to rotate. This was a tedious job: it’s not always obvious from the thumbnail what the orientation is, and in the process of scrolling the viewport you may accidentally unselect files or select a lot of horizontal pics. If your camera stores the orientation Exif data in the file (as my Canon does, for example), you can select all the files and use jhead’s autorotate feature, which will only rotate the vertical pictures. (I’ve found in practice that it’s not 100% guaranteed, maybe because the camera sometimes doesn’t detect the orientation correctly.)

#!/bin/bash
while [[ -n "$1" ]]; do
    if [[ -f "$1" ]]; then
        jhead -autorot "$1"
    elif [[ -d "$1" ]]; then
        # iname -- case insensitive
        find "$1" -iname "*\.jpg" \
            -exec jhead -autorot {} \;
    fi
    shift
done
For this script, I’ve made it so you can select a directory as well, since it’s more likely you’d want to perform this on entire directories at a time. I’ve noticed that if you right-click on a directory in the left “tree pane” of Nautilus, you won’t get the scripts menu, but it is available in the File→Scripts menu. The scripts menu is also available if you right-click on a directory in the right pane of Nautilus. Here is what the script does:
• As in the “manual” rotate scripts, it processes all the arguments (files and/or directories) passed to the program in a loop.
• If an argument is a file (-f), it simply passes it to jhead -autorot.
• If it is a directory (-d), it runs the find command on that directory, looking case-insensitively for *.jpg files, and calls jhead -autorot on each one found.
• jhead clears the orientation flag as part of that operation, and that’s all it takes.
Those were the main features I had started out looking for (well, I hadn’t bargained on getting auto-rotate!), but after finding this stuff I had to keep noodling around to see what jhead and jpegtran could do:
change-mod-date-to-exif.sh

    #!/bin/bash
    while [[ -n "$1" ]]; do
        # if a file and not a dir
        if [[ -f "$1" ]]; then
            jhead -ft "$1"
        fi
        shift
    done
If you copy files directly from a USB card reader (or from some cameras that let you read them via USB as a mass storage device, like my old Olympus but unlike my new Canon), the modified date is preserved for the files on your computer. I liked this feature because I would then use a Visual Basic program that used the modified dates to rename the files with a timestamp. My picture files mostly start with YYMMDD_HHMMSS. I like that it keeps things in sequence and uniquely identifies them. (I also go through and add identifying text to that datetime prefix.) When I tried the import program in GNOME, I was disappointed to learn that it updated the modified time on each file as it copied files to my desktop machine. Well, now you’ll have the power to correct that with change-mod-date-to-exif.sh. If your camera’s clock is set correctly, you probably have the date that each picture was taken stored in the Exif data. Jhead will update the last modified time in the file system to match.
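What jhead -ft does can be mimicked with plain coreutils to see the idea (a sketch: monkey.jpg and the Exif date here are hypothetical, and touch -d / stat -c are GNU-specific):

```shell
#!/bin/bash
# jhead -ft sets a file's modification time from its Exif "date taken".
# This sketch fakes the effect with touch, using a made-up Exif date.
cd "$(mktemp -d)"
touch monkey.jpg                  # freshly imported file, "wrong" mtime
exif_date="2008-01-15 13:45:07"   # hypothetical Exif DateTimeOriginal
touch -d "$exif_date" monkey.jpg  # the equivalent of jhead -ft
stat -c '%y' monkey.jpg           # shows the corrected modification time
```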
rename-with-exif-date.sh
Even better, that crusty old VB program can be thrown out (sure, it runs under Wine, but it’s indentured VB, for chrissake) and jhead used to directly rename the file with the Exif date:

    #!/bin/bash
    while [[ -n "$1" ]]; do
        # if a file and not a dir
        if [[ -f "$1" ]]; then
            jhead -nf%y%m%d_%H%M%S "$1"
        fi
        shift
    done
This script uses another jhead option to rename the file with a YYMMDD_HHMMSS timestamp. I only made this one operate on files, but it could easily be modified to work on directories also, similar to autorotate.sh. Shown in the image above and included in the downloadable .tgz file is another script, change-date-and-rename-with-exif.sh, that simply calls the other two scripts so you can change the file modified date and rename with a timestamp in one step.
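The %y%m%d_%H%M%S pattern follows strftime codes, so you can preview the names jhead will produce with GNU date before renaming anything (the example date is arbitrary):

```shell
#!/bin/bash
# Preview what jhead -nf%y%m%d_%H%M%S would name a picture taken at a
# given moment; date(1) understands the same format codes (GNU date).
date -d "2008-01-15 13:45:07" +%y%m%d_%H%M%S
# prints: 080115_134507
```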
strip-exif.sh
And last, to be used carefully:

    #!/bin/bash
    while [[ -n "$1" ]]; do
        # if a file and not a dir
        if [[ -f "$1" ]]; then
            jhead -purejpg "$1"
        fi
        shift
    done
I put this one in a directory named caution (under the nautilus-scripts/img directory) in order to reduce the chance of clicking on it by accident. It will strip out all the Exif data from a picture, which in most cases you probably don’t want to do. I use this one for some pictures that go up on my web site, with the intent of keeping bandwidth usage and page loading times down. For example, monkey.jpg is 51KB with Exif data, and 44KB without. Not a big deal for pictures stored on your hard drive, especially as pictures commonly go over 1MB now, but it could help in the unlikely (for me!) event of a slashdotting or digging.
Resources
• GNOME.org User Guide page about extending Nautilus
• Home page of the Independent JPEG Group, creators of the jpegtran utility
• Jhead, an Exif JPEG header and thumbnail manipulator program
• G-Script, a collection of Nautilus scripts
Biography
Scott Carpenter: Scott Carpenter has been lurking around the fringe of the free software movement since 1998 and in 2006 started a more concentrated effort to "move to freedom." (Chronicled at the Moving to Freedom blog: http://www.movingtofreedom.org/.) He has worked as a professional software developer/analyst since 1997, currently in enterprise application integration. (Views expressed here and at movingtofreedom.org are strictly his own and do not represent those of his employer. Nor of miscellaneous associates including friends and family. Nor of his dog. It's possible they're representative of his cats' opinions, but unlikely. Void where prohibited. Local sales tax applies.)
Copyright information
Please use this attribution for derivatives and republications: By Scott Carpenter (http://www.movingtofreedom.org), originally published in Free Software Magazine.
Source URL: http://www.freesoftwaremagazine.com/articles/jpg_image_rotation_in_nautilus
Running a free software project
Starting with your eyes open can really help!
By John Calcote

Running a free software project can be a rewarding experience if you begin with your eyes open. In my personal experience, starting a free software project with only a head-on view of a few existing free software projects is not really enough. Some basic background information can really help get you started in the right direction.
Historical perspective
In 2001, I was working on the eDirectory development team at Novell. We needed to add service advertising functionality to the next version of eDirectory and we chose to go with OpenSLP—a free software implementation of the IETF Service Location Protocol. My manager asked me to write a patch for OpenSLP to add DHCP support—a feature that was slated for some future release of OpenSLP. As I was working with the OpenSLP project administrator on integrating this feature, he asked me to become a developer on the OpenSLP project, and I accepted. To shorten what could be a really long story, I eventually found myself running the project—simply because everyone else dropped out. I’d downloaded, compiled and executed various free software projects in the past, and like many people who have reached this level, I thought I was pretty hot stuff. I didn’t have a clue; but I learned. Because of my new position of responsibility (and luckily, it wasn’t a very popular project), over the next five years I gradually learned about everything from package creation and maintenance to mailing lists, forums, and web site management. This article is an attempt to ease the burden of discovery for programmers new to the free software world. I hope I succeed—to some degree at least—because running a free software project properly is a rather intense and detail-oriented process. To run a free software project, you have to be the proverbial jack-of-all-trades. Before we get started, I’d like to make a special note of one excellent resource for information similar to that found in this article: Karl Fogel’s free online book, “Producing Open Source Software”.
Project setup
Setting up a project is actually rather trivial—well, the initial portion of the setup, anyway. However, you soon find that you’ve started a small snowball down a long hill. But let’s just start at the beginning. First you need to find a free software hosting service that you like. Here is a short list of popular services:
• http://www.sourceforge.net
• http://code.google.com
• http://dotsrc.org
• http://developer.berlios.de
• http://www.freepository.com
• http://gforge.org
• http://icculus.org
• http://savannah.gnu.org
• http://www.seul.org
• http://www.bountysource.com
I’ve listed these services in no particular order—with one exception: SourceForge.net is generally considered to be the definitive standard. Hosting over 120,000 free software projects, it offers a very extensive feature set
to free software project administrators. I’ve heard it said that SourceForge is the largest repository of “dead” projects on the internet. However, it should be noted that to gain such notoriety SourceForge must, in fact, be a very popular free software hosting service. Otherwise, the main difference between these services is the feature sets offered. You will simply have to play with a few in order to determine which features you want to provide with your project. After reading the remainder of this article, you will be in a better position to judge the relative value of the various features offered by these services.

In addition to this abbreviated list of general-purpose public services, there are also several services hosted by various companies such as Novell, Microsoft, Sun, HP, IBM, etc., which provide free software hosting portals. Generally, these services are more closely affiliated with corporate goals and are intended to host projects that highlight technology specific to the host company. However, most of these services will allow any type of project to be hosted (as long as it’s not derogatory or derisive of the company providing the services, of course). Some services have very specific goals in mind, and in fact, enforce these goals with very specific rules and regulations. The Eclipse project (which is sponsored and administered by IBM), for instance, wants to host projects that extend or otherwise enhance the Eclipse development environment framework. For this reason, since Eclipse is written in Java, nearly all of the code hosted at eclipse.org is written in Java. Prospective Eclipse projects are also scrutinized closely for relevance to the Eclipse framework, and every Eclipse project must use a free software license that was designed specifically for Eclipse.
Figure 1: Starting a new project on SourceForge.net

Once you’ve chosen a hosting service, you will then have to go through the process of setting up your project by entering some initial information into a web form. Generally, the hosting service will ask a number of questions about the nature of your project, and details such as a short (file system compatible) name for the project, a longer name, and a short description. The service may also ask you what you believe the value of your project is to the free software community. I’ve set up several projects in this manner, and never had one rejected, so I’m inclined to believe that the process is a bit of a formality. Another piece of information that is usually requested at this time is the free software license that you wish to use for your project.
Choosing a free software license
To properly choose a free software license, you should be familiar with the general features of each type of license available. I strongly recommend that you read the overview information on each of the major free software licenses available at OpenSource.org. The concept of the free software license was originally invented by the Free Software Foundation under the guidance of Richard Stallman. Often these licenses were referred to as copyleft, implying “copyright with a twist”. Indeed, the rationale behind every free software license is fundamentally to keep the software freely available to the public. They effectively prohibit people or companies from stealing it and claiming it as proprietary, while at the same time allowing them to use it for their own purposes.
Each major type of free software license has features that make it either better or worse for a given purpose. For example, the GNU General Public License is a very good general purpose free software license, providing an airtight legal right to the general public to use any GPL licensed software freely and without charge. However, it comes with the often rather painful caveat that any software that consumes GPL software in any way must also be made freely available—in source code form—to the general public. This feature is sometimes labeled “viral” by software companies considering using some free software within their proprietary software products. They simply can’t use GPL software for this purpose because it forces them legally to give away their intellectual property. On the other end of the spectrum, the MIT license is generally considered to be a very good free software license for projects that simply want to remain free, without imposing any significant restrictions on consumption. Basically, consumers must include the license in their proprietary offerings in a manner that allows the license to be publicly viewed, either at installation time, or in an “about” dialog of some sort, thereby giving credit to the providers of the free software that is being consumed. So, if MIT is such a good license, then why not use it for everything? Ironically, the MIT license is often a little too loose for proprietary software companies wishing to release portions of their code base. Sometimes proprietary software companies would like to make code available to the “little guy” for free, but establish a dual license agreement with larger customers wishing to make more proprietary use of their free software offerings. I say this is ironic because there’s little doubt that Richard Stallman never intended GPL to have this effect on software.
His goals were very idealistic—by forcing consumers of GPL software to also make their consuming software GPL, he clearly hoped to eventually make all software free. The following are a few of the more popular free software licenses documented at OpenSource.org, listed in order from least restrictive to most restrictive: MIT, BSD, LGPL, and GPL. Since the Free Software Foundation first created the GPL, dozens of types of free software licenses have been crafted (mostly by corporate legal staff). Read the literature and try to understand the high-level goals of each, and then carefully choose the one that is best suited to your project’s goals and target audience, because choosing the wrong license can be very detrimental to your project.
Categorizing your project
Once you’ve chosen a service to host your project, and you’ve spent the requisite effort creating your project and choosing a free software license, you will then need to make a number of administrative decisions. After you’ve submitted your request to set up a new project, you will probably have to wait a day or two for a human to review your request and grant approval. You’ll generally be notified by email when this occurs. The next step is to choose your project site options. In order to be clear about this process, I’m going to focus on just one of the sites listed above—SourceForge.net—simply because it’s the one most familiar to me. These options may include trove categorization, mailing lists and chat rooms, RSS feeds, technical forums, bug and feature trackers, documentation, source code repository, web services, and compile farm services to name just a few. Trove is a tagging mechanism that allows your project to be categorized with similar projects. People looking for a particular type of project may search using the categories provided by the service to find the types of projects they are interested in. Projects are categorized using a dozen different criteria, and it’s important that you carefully select the trove categories you really want to associate with your project. This is one of the most often overlooked features of a free software project hosting service. By properly categorizing your project, you make it easier for the right people to locate your project.
Using mailing lists
A mailing list is basically nothing more than an email account associated with an automatically managed distribution list. When you send a message to a mailing list, you are sending your message to everyone on the list automatically. People can (and do) include and exclude themselves from the list at will. People most closely associated with a project will often be members of the project mailing lists for long periods of time.
Others may just monitor list traffic for a few weeks in order to get a feel for the activity level of a project, although this can often be discovered by simply glancing at the mailing list archives for the last few months. Mailing lists are managed by mailing list software, the most popular of which today is GNU Mailman (figure 2).
Figure 2: The GNU Mailman administration screen

It’s best to configure at least two mailing lists for your project—a low-traffic announcement list, and a higher traffic discussion list. The announcement list is specifically for announcements related to your project. Often there will be many more subscribers to announcement lists than to discussion lists. This is because people want to know when something significant happens with your project, but they don’t care about the details of how you got there. The discussion list is where the real work takes place. You might think of your discussion list as the virtual work place for your project. This is where project developers can get together with ideas about new features, or the proper approach to fixing specific defects. Sometimes project administrators like to make a distinction between developer discussions and questions from users. In this case, they will often configure a third mailing list for their project specifically for user questions. The user list is often self-managing for projects that have gained a following. That is, as a project developer, you often don’t need to answer too many questions on the user list because other user list subscribers will jump in and answer newbie questions for you. You see, there are quite often two types of people associated with free software projects—those who like to write code, and those who simply like to use the software, and have become quite proficient at it. The coders are interested in technical details of implementation, while the “power users” are more expert in proper and interesting uses for the project. In summary, the three most common types of mailing lists associated with OSS projects are announcement, developer, and user lists. The announcement list is low-traffic, and only allows posting from the project administrators and key developers.
The developer and user lists are higher traffic lists and are targeted for specific uses by the project, as implied by their names. Common names for these lists are:
• projectname-announce@lists.sourceforge.net
• projectname-devel@lists.sourceforge.net
• projectname-user@lists.sourceforge.net
The projectname portions of the addresses above should be replaced with your project’s short name (often called the project’s Unix name). It’s a good idea to stick to conventions like these because people who are members of multiple projects can then almost guess the proper name for your mailing lists.
Some mailing list etiquette
A few rules apply when interacting with others on project mailing lists: do your homework—don’t ask a question on a mailing list until you’ve done all you can to figure out a problem yourself. This etiquette extends to both the user and the developer level. Both users and developers are expected to be smart (although not necessarily on the same level or within the same domain). Users are expected to intelligently read any existing user documentation for answers before they make a query to the user list. Developers are expected to
study the source a little and read existing development documentation before asking questions on the developer list. Often the questions that could have been answered with a quick perusal of the documentation are answered curtly with the acronym, RTFM. I’ll tell you right now that ‘R’ stands for Read, and ‘M’ stands for Manual. You can guess the meaning of the rest, I’m sure. We should try to remember that free software projects are generally under-funded, and often even personally funded out of self-interest by project administrators themselves. That means project developers’ time is a very valuable resource. They don’t have time to answer questions that are easily answered by the documentation. On the other hand, please try to be nice. Sending a spiteful or mean response to a query on a mailing list is sometimes referred to as “flaming”. Flaming is discouraged, but often ignored because it has the effect of weeding out the pests—those who are just trying to get others to do their leg work for them. The problem with flaming, of course, is that you can alienate your target audience. For this reason, flaming is an activity that is often seen only on the mailing lists of more important and widely recognized projects. (Ah, the intricacies of free market economics!)
Using internet relay chat rooms
Mailing lists used to be the only forum for discussion in the free software world. Lately, another more responsive channel has been employed by free software project administrators. Internet Relay Chat (IRC) is an instant message protocol that is specifically designed to perform well under heavy load. Most Instant Message (I/M) protocols such as AIM, Google Talk, Yahoo Messenger, Microsoft Messenger, and even proprietary I/M services such as Novell’s GroupWise Instant Messenger (GWIM) are designed to allow two or three people to chat effectively with each other in real time. IRC, on the other hand, is designed to allow dozens of people to communicate effectively in a chat room at the same time. Being in an IRC chat room is much like being at a party where everyone is standing around with drinks in hand talking about various topics. You can focus on a given conversation if you want, or you can start one yourself, but it’s all going on over the same channel. To get started with IRC, you will need to download or otherwise obtain an IRC client. IRC clients connect to IRC servers, which are hosted services run by people interested in IRC as a protocol. Popular IRC clients include ChatZilla, KVirc, Opera, Pidgin, savIRC, X-Chat, and PJIRC. There are many others, but the clients I’ve listed here are reported to run on all of the most common platforms. In addition to these clients, which are specific to IRC, there are some I/M clients that support multiple I/M protocols, including GAIM and Trillian (Windows only). IRC is actually a fairly old protocol, invented in the late 1980s. While it’s been formalized by IETF working groups, the formal definition is not strictly adhered to. (This is often the case when an older protocol becomes widely adopted by the community over a significant time period before formalization.)
You should try out a few of these clients before you settle on one that you like. IRC is essentially a command-line protocol. However, some IRC clients have gone a long way toward hiding this fact from the user. For example, the IRC interface presented by GAIM expects you to know many of the IRC command-line commands, and use them properly while establishing your chat sessions. The IRC interface built into the Opera web browser on the other hand, is highly automated, performing most tasks for you behind the scenes. It’s worth doing a bit of research to find one you really like.
Configuring an IRC client
There are two phases to setting up most IRC clients: initial setup, and joining a chat room. Initial setup is done only once per client. On the Opera client, for instance (as shown in figure 3), you pull down the “Tools” menu and select the “Mail and Chat accounts” option. Click the “Add” button to add a new chat account, and select the “Chat (IRC)” option in the list. Click “Next >” and enter your real name and email address. On the
next screen choose your nickname. This is a short “handle” by which you will be known by the community. It should be short, and something you don’t care if everyone sees. Whatever you choose, others will be calling you this, so be sure it’s what you’d like to be called. Finally, on the last screen choose the IRC server you’d like to join using this “chat account”. You should choose a chat server based on several key details. If you’re joining an existing chat room, you need to choose a server that supports that room. Often, the documentation for an existing chat room will indicate which IRC server it’s using. For instance, freenode is a popular chat service. Within the service you should choose an appropriate locality. Many of the popular chat services have servers in most major areas of the world. Choosing a server in Europe will only slow down your communications if you happen to live in North America.
Figure 3: Configuring a chat account in the Opera browser

Once you have a local account set up with a chat service, you need to join a chat room or channel. This is done in Opera by pulling down the “Chat” menu item, and selecting the “List rooms” option, as shown in figure 4. Opera presents a “Chat rooms” dialog which takes a few seconds to initialize the first time, because it enumerates all existing chat rooms—most IRC services support thousands of chat rooms. You can do a quick search by typing all or a portion of the name of the room you’re looking for in the search box. The list will be iteratively narrowed down as you type each letter of the search key.
Figure 4: Listing available IRC channels in Opera’s chat list dialog

If you’re creating your own chat room, just click the “Add” button, and type in a unique name. A nice way of determining uniqueness is to enter your potential names into the search box. If the list comes up empty, then you’ve found a good candidate. You’ll get a new tab in the main Opera window for your chat room. As long as others join a room with the same name, you’re all in the same chat room. It’s that simple. Because the chat service and room name are critical aspects of your IRC channel, you should advertise these bits of information on your project web site. As the first person to join a chat room, you become the room operator. Operators have a few more rights than non-operators, but the trouble with this system is that when the last person leaves the room, the channel returns to its original state—non-existent. Ultimately, this is the nature of an IRC chat room. However, there
are ways around this problem. You can register your chat room with the IRC service, which makes it more or less permanent, with you in control. To use these more advanced features, you will need to become familiar with the IRC command-line syntax, and the regulations established by your chosen IRC service. In this case, you will also need to switch to a more powerful, but less user-friendly IRC client than Opera. A good choice here might be ChatZilla, which is a Mozilla (Firefox) plug-in. ChatZilla tries to hide the details as much as possible, without removing access to features like chat room registration and authentication.
Figure 5: Configuring the ChatZilla IRC plug-in for Firefox

To use ChatZilla, you will have to install the ChatZilla Firefox extension as shown in figure 5. Go to the ChatZilla home page and click on the “install” link at the bottom of the main page (from within Firefox, of course), and follow the instructions. When you restart Firefox, pull down the “Tools” menu and select “ChatZilla”. A small dialog will appear, and the main edit box in the dialog will contain a set of tabbed windows with a single tab labeled “client” containing information about the ChatZilla client, including links at the bottom of the page for various IRC services known to ChatZilla. Across the top of this window, you will see:

    Known Networks xx    ChatZilla w.x.y.z    Connected Networks 0

At the bottom, under [INFO], you will see a list of available networks shown as links (circled in figure 5). Select your desired network and you will open a new tab specifically for this IRC service, labeled according to the network name. For example, if you select freenode then you will get a new tab next to client, labeled freenode (see figure 6). You can always switch back to the client tab to select another network. Doing so will not close your freenode connection, it will only open a new connection. At the top of the connection tab, you will see the text:

    URL irc://freenode/    Connected    Lag x.xx seconds
To join a channel on this service, you pull down the “IRC” menu and select “Join channel”. Then enter the name of your channel, or simply press the “Refresh Now” button to generate a list of currently available channels. This list is refreshed in the background every so often, so if you neglect to press the “Refresh Now” button, the window will eventually populate itself anyway. If you know the name of your desired channel, you may simply type its name. Otherwise, you may type a portion of the channel name in the “Quick Search:” edit box and allow the filter to reduce the selection list for you. (Warning: Opera’s quick search feature is much faster than ChatZilla’s.) Once your target channel shows up, just click on it and press “Join”. After a while, you may tire of this tedious process and wish to speed things up. You can simply enter the command /join #my-channel (shown in figure 6) in the text window at the bottom of the ChatZilla dialog. Do this while the channel tab (e.g., freenode) is visible. Note that you must press Ctrl+Enter to actually send the command, because the text entry window allows you to enter multiple lines of text before sending your message (you may also press the “<return/enter>” button directly to the right of the text entry window).
Figure 6: Connecting to an IRC service and joining a chat room in ChatZilla

You may have noticed by now that the narrow window on the left side actually contains a list of what appear to be user names or nicknames when you have a channel tab selected. These are the nicknames of the users currently joined to this channel, and you can, in fact, communicate with them directly by addressing them by their nickname. For instance, you might type: “SiliconJoe, I can’t remember the command sequence you gave me yesterday on the load document feature of OpenSnorkleFork—can you run it past me again, please?” Hopefully, if SiliconJoe isn’t nodding off at his desk or out to lunch, he’ll notice your question and answer you in a timely fashion. Incidentally, if you know you’re going to be leaving your desk for a while, it’s polite to indicate so to your fellow chatters by marking yourself absent. You can do this in ChatZilla by pressing the button labeled with your nickname on the left side of the text entry window, and selecting one of the “Away” options. If you know you’re going to be using advanced IRC features such as room registration, then you will probably want to start with a more complex client such as the Firefox ChatZilla plug-in, rather than the simplified client that is built into Opera. To learn more about IRC command-line syntax, go to the IRC Wikipedia pages—start with this one.
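For reference, a few of the most common IRC commands, supported by virtually every client (exact behavior varies slightly between clients and services):

```text
/nick MyNickname        change your nickname
/join #my-channel       join (or create) a channel
/msg SomeNick hello     send a private message to one user
/away out to lunch      mark yourself away (/away alone marks you back)
/topic #my-channel      view or set the channel topic
/quit goodbye           disconnect from the server
```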
Providing RSS feeds
RSS stands for (are you ready for this?) “Really Simple Syndication”. RSS feeds are simply XML representations of news-like information. These XML pages are backed by specific URLs associated with the hosting site and with your project. If the current page supports one or more RSS feeds, you’ll often see a small orange box, with radio waves propagating from the lower left to the upper right corner, at the right end of the URL edit box in your browser (see figure 7). The RSS links on a page can be consumed by RSS readers. There are as many RSS readers to choose from as there are IRC clients. Google is your friend here—just search for “RSS reader” and take your pick. If you’re using ChatZilla, you might also be interested to know that there are several Firefox plug-ins that support RSS as extensions to the Firefox browser.
Figure 7: Using the Wiz RSS news reader plug-in for Firefox

RSS feeds are often supported directly and automatically by free software hosting services. SourceForge, for instance, will update an RSS feed for your project when you’ve released a new version of your project or posted a news item about it. Those who’ve registered with SourceForge to receive RSS feeds will see these items under the associated news headline in their RSS readers.
Tracking defects

Most free software hosting sites handle defect tracking with the de facto standard tool, Bugzilla. Bugzilla was written by the Mozilla team years ago to track defects in Mozilla, and it has pretty much become the standard by which all other defect tracking tools are measured. Despite the widespread use of Bugzilla, SourceForge deviates here for business reasons and uses a tool called Tracker, part of the SourceForge Collaborative Development System. Learning to use either Bugzilla or Tracker effectively is almost trivial; rather than spend time on it in this article, I’ll just mention that excellent online documentation is available for both, although it’s not really necessary—even for project administrators.
Software configuration management

When you’re working on something important—especially when multiple people are involved—you want to protect your intellectual property (IP). You protect your free software IP legally by using the right free software license. You protect it physically by using a Software Configuration Management (SCM) system of some sort. There are many SCM systems available. The most common systems in use by free software projects are the Concurrent Versions System (CVS) and Subversion (SVN), but new systems appear all the time, such as Git. SCM systems are designed to save revisions of your software data files in such a way that you can retrieve any revision you want. Most often this is done by saving initial files in their entirety, and then saving the differences (also called deltas) between one version and the next, so as to avoid wasting storage space on complete copies of files that barely change from one revision to another. For many years CVS was the SCM system of choice for free software projects—mainly because it was free and well understood. But it had flaws that everyone acknowledged and worked around as a matter of course. More recently, a newer SCM system has begun to take its place: Subversion is touted by its developers as “CVS done right!” Some people disagree that CVS could ever be done right, and thus there are other offerings, such as Git, that promote an entirely different, distributed view of SCM. While I use SVN regularly and have few complaints, I personally appreciate the philosophy and distributed nature of Git. Unfortunately, most free software hosting sites provide only CVS and/or SVN. If you
want to use something other than the services offered by your free software hosting service, you’ll have to set up your own public repository, so that other people can access your project source code directly from the authoritative source. Source code can be made available as tarballs—zip or tar.gz snapshots of your working directory—but this is cumbersome at best. You’re much better off in the long run making read-only access to your source code repository publicly available. That way, people can always get at the very latest code you have checked in. In fact, you may wish to grant check-in rights to major contributors to your project, so that they can check their fixes directly into the repository. The best documentation for Subversion is the O’Reilly book “Version Control with Subversion”, available free online. Once you understand the basic concepts presented in this book, the online documentation that accompanies Subversion on its website becomes an excellent source of command reference material.
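To make the read-only versus check-in distinction concrete, here is a sketch of a typical Subversion session; the repository URL, file names, and revision number are hypothetical:

```shell
# Anyone can grab a read-only working copy of the latest code
svn checkout http://svn.example.org/projectname/trunk projectname
cd projectname

# A contributor with check-in rights reviews and commits a fix
svn status                 # list locally modified files
svn diff src/main.c        # inspect the change before committing
svn commit -m "Fix crash in the config parser"

# Any earlier revision can be retrieved at will
svn update -r 1500         # roll the working copy back to revision 1500
```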
Project web services

Configuring a project web site is a full-time job in and of itself. Most free software projects are either funded by commercial efforts or self-run by individuals, and you can often tell where a project’s funding comes from with a single glance at its web site. Commercially funded projects often have very dynamic and management-intensive web sites, while individually funded projects have rather static (and perhaps not so glamorous) ones. Ultimately, the purpose of a web site is to document and sell your project to the public, so presenting the right combination of glitz and information in an easily accessible manner is critical to the success of your project. The project web site is like the front door of your business. Done right, your project will almost sell itself. Done wrong, the front door can be intimidating and unwelcoming, to the degree that most people simply won’t come any farther into the shop. (Don’t be put off by the product/project analogy. You really are trying to sell something—even if it’s just an idea. If you aren’t, then why make your project available to the public at all?) One of the key differentiating features among free software hosting services is the set of web services provided. Some hosting services enforce a very rigid structure on project web pages, which has the effect of making all project web sites look and feel the same. Some sites offer Wiki services, for instance; others use online publishing systems. Some simply give you a form, allowing you to specify various options in order to tailor your site’s pages to your project’s needs. SourceForge offers fairly generic web hosting services. The default name of your approved project’s web site is projectname.sourceforge.net, but you can also configure your project to use your own domain name (e.g., projectname.org).
SourceForge’s web hosting services allow you to FTP and SSH into the web hosting server. Each project is given a location in the web hosting server’s file system at “/home/groups/p/pr/projectname”, where the portion beginning with p/pr/projectname is specific to your project’s short (Unix) name. Under this directory you will find an htdocs directory, which is where the web server expects to find your project’s main “index.html” page. If you don’t have a full-time web master working on your site, I highly recommend that you limit your site to mostly static information. However, the SourceForge web server does support PHP, so mixing in some dynamic content is still possible with little maintenance effort. Since organization is a critical aspect of a good free software project web site, it makes sense to use a CSS template. I’ve created project web sites in a few days using CSS templates that are freely available online from sources like OpenWebDesign.org. These CSS templates are all free, generally simple to use and modify, and usually self-contained, which is a very nice feature. Once you choose a design, you can unpack the archive locally, modify the design to your needs, and add all of the relevant information for your project. An additional link to consider adding to your main menu is one to the project’s SourceForge page—this web reference would be something like http://www.sourceforge.net/projects/projectname.
When your pages are configured to your satisfaction, you can upload them to the SourceForge web server using FTP, but my favorite mechanism for web site management is to store the web site itself in the project Subversion repository, and then use the web server’s Subversion client to check out a working copy of the site right inside my htdocs directory. I then set up a cron job to automatically execute an svn update command on the htdocs directory each morning at around 2:00 AM (when I’m not likely to be working on it). Using this system, I never have to update the shell server manually: I simply update the web site content in my local working copy, commit it to the repository when I’m finished, and wait for the next cron job to run. The next day I come in and my site has been updated automatically. Incidentally, keeping your project web site in your SCM repository is an excellent idea anyway. I usually set up my project SCM repository so that it contains one directory off the project root called projectname.web, and another called simply projectname, where I store my actual project source files.
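The nightly update described above might be wired up with a crontab entry like the following sketch (the path and project name are hypothetical; edit the web server account’s crontab with crontab -e):

```shell
# minute hour day-of-month month day-of-week  command
0 2 * * * svn update /home/groups/p/pr/projectname/htdocs
```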
Release management—tarballs, binaries and autotools

Question: who do you know that has 27 computers running 3 different versions of 9 different operating systems (and still has a social life)? Not many individual free software project administrators have access to this sort of build and test equipment. Companies sponsoring free software projects often provide such resources to their own project administrators, but if you’re an individual, there are a few things you can do to maximize your access to such resources. First, choose the right software tools. There are basically three platforms in the free software world—Win32, Mac, and Linux/Unix. For Win32 and Mac, there’s not much you can do except create separate, unrelated installation packages that work on those platforms, which is why many free software projects will support one but not the other. If you want to support Win32 or Mac, you will first have to determine which versions of these platforms you wish to support, and then find the hardware and system versions on which to test your project and build the installation packages. Packaging for Win32 and Mac requires highly domain-specific knowledge about the formats and processes involved; you’ll just have to become familiar with these things. In the Linux/Unix world you get a bit more bang for your buck, so to speak. Using the right build tools, you can test, build and package for source distribution with a single command on most flavors of GNU/Linux and Unix. I recommend using only GNU Autotools. Now, before you put this article down in disgust, you might want to read the rest of this section to see why I feel this way. The most significant problem with Autotools is the almost complete lack of really good, usable tutorial documentation. The fact is, the Autotools tool chain is difficult to learn.
But once you’ve mastered the basics and understood the flow of the build process in Autotools, it’s actually easy to pick up details from the documentation that does exist. However, I will grant you that it takes a fair amount of effort to “master the basics”—it’s a steep learning curve. An excellent place to start is the “goat book” (which can be found online). So why then do I say that Autotools is the best? Well, I have a friend and co-worker who used to believe I was crazy to use Autotools. He too runs a free software project, and he simply writes and maintains his own GNU make files. Over the last couple of years, each time I showed him something I could do with my Autotools configuration files, he would run back to his desk and spend an hour or two adding that functionality to his hand-tailored make files. In the end, his make files were very complex, difficult to maintain, hard to read and understand, and ultimately less functional and more brittle than my Autotools configuration files. Today, he’s asking me to help him convert his projects to Autotools. The simple fact is, once you discover all you can do with Autotools configuration files—and all that’s done for you automatically—you will have no desire to write your own build system. The people who designed the Autotools tool chain, and who maintain it today, understand every detail of the most commonly used free software test, build and distribution processes. Autotools incorporates functionality for each of these processes, and often provides it for free in the most basic of configuration scripts.
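As a taste of how little configuration a basic Autotools project actually needs, here is a minimal sketch for a hypothetical one-file C program. The configure.ac fragment is read by autoconf and the Makefile.am fragment by automake:

```
# configure.ac
AC_INIT([hello], [1.0], [bugs@example.org])
AM_INIT_AUTOMAKE([-Wall foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am
bin_PROGRAMS = hello
hello_SOURCES = main.c
```

Running autoreconf --install, then ./configure and make, generates an entire portable build system—including distribution tarball and installation targets—from these few lines.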
Some people have touted the benefits of other high-level build tools (SCons, CMake, etc.). These tools are great, and they are evolving over time to include functionality similar to Autotools, but I’ve never seen them provide all of the functionality that Autotools provides automatically. Often, advocates of these other high-level tools will say that complete Autotools functionality can be added by a user willing to delve a little deeper. This is true, but the end result is often no better than using Autotools to begin with: the scripts are still very complex, difficult to maintain, and difficult to understand without a deep working knowledge of the tool that was used. How is that different from Autotools? In the final analysis, everyone in the Linux/Unix world already understands the “configure; make” dance, and no additional tools need to be installed to perform it. I stand by my statement: Autotools is still the best there is—even if that’s not saying much. Probably the easiest way to get started with Autotools is to first read the goat book, and then take a look at existing projects that do almost the same thing you want yours to do.
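For reference, the “configure; make” dance is the familiar sequence for building any Autotools-based project from a source tarball (the release name here is hypothetical):

```shell
tar xzf projectname-1.0.7.tar.gz
cd projectname-1.0.7
./configure --prefix=/usr/local   # run ./configure --help to see the options
make
sudo make install
```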
Releasing binary installation packages

Many projects don’t provide binary installation packages for GNU/Linux or Unix because most flavors of GNU/Linux and Unix use their own installation package systems. There are exceptions to this rule that are worth knowing about: SUSE and Red Hat Linux both use the Red Hat RPM packaging system, and Ubuntu Linux uses a derivative of the Debian packaging system. If you choose to provide binary installation packages for one or more Unix or GNU/Linux flavors, you should have a good reason for doing so. Most free software projects distribute releases only in source form for GNU/Linux and Unix, because binary installation packages are usually created and maintained by the administrators of a given distribution once your project is picked up by that distribution. Building and installing from a source distribution (tarball) is a process often just taken for granted on GNU/Linux or Unix. With that said, there are benefits that come from creating one or more binary installation packages yourself. First, you get to maintain the installation scripts, which gives you the flexibility to do it your way. Creating a proper installation package build script can also uncover subtle project design flaws that would otherwise require a software patch to be written by a distribution packager for your project. By creating your own package, you find these flaws up front and can fix them in your main source line. Packagers will often not even tell you of such problems—they don’t have time to report every little issue they find to the originators of the projects they consume in their distributions. If you wish to create your own binary installation packages, you might consider using an online compile and packaging service, such as the openSUSE build service. This relatively new (and, as yet, incomplete) service is like a super compile farm with a web interface.
It provides the machines and operating system versions, and allows you to define the target platforms, architectures, and output formats. You simply provide a tarball, a modified RPM specification file (spec file), and a little site configuration, and the build service builds binaries for you that can be downloaded right from the build service project page. Whenever you make changes to the source or spec file, the project is automatically rebuilt, and new links are created for you. Start by reading the opening documentation on the service’s web site, and then just start experimenting. I’d also recommend joining the mailing list mentioned on that opening page.
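For orientation, a minimal spec file for an Autotools-based project might look roughly like this sketch (the project name and field values are hypothetical, and real spec files usually carry more metadata, such as a changelog):

```
Name:     projectname
Version:  1.0.7
Release:  1
Summary:  One-line description of the project
License:  GPLv2
Source0:  %{name}-%{version}.tar.gz

%description
Longer description shown by the package tools.

%prep
%setup -q

%build
%configure
make

%install
make install DESTDIR=%{buildroot}

%files
%{_bindir}/projectname
```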
Releasing a version

Release management is the process of packaging your products for consumption. It can be as simple as posting tarballs—zip or tar.gz archives of the source code—or as complicated as building and posting installation packages for multiple hardware platforms and configurations. The more work you do for your users, the more they will appreciate your project. In the early days of free software, projects were only ever distributed in source archive form. Today, however, the community consists of an audience with a much broader range of experience. Some users will simply ignore your project if it doesn’t come with a Windows MSI installer (others may ignore it because it does!). In the end, however, except for the obligatory source code distribution, it’s completely up to you how you make your project available to your users. The only caveat to remember is this: it’s better to start out small and grow gradually than to start out too big and have to back off. This is just simple marketing. If you offer Win32 MSI, Red Hat/SUSE RPM, and Debian and Ubuntu installation packages, and then take away the
Debian and Ubuntu packages because you just can’t cover all the bases, people will be disgusted with you. If, on the other hand, you start out offering only source archives, and later add a Windows MSI installer and Red Hat/SUSE RPM packages, your users will be delighted. The final set of packages is exactly the same, but the reception is entirely different—what’s the real difference here? No one likes to think they’re losing something, but everyone appreciates additional bits for free. On SourceForge, the process of releasing a new version begins with uploading files to SourceForge’s FTP upload site at ftp://upload.sourceforge.net. Log in as “anonymous”, using your email address as your password, then change into the “incoming” directory. If you’re using a command-line client from a non-Unix platform, don’t forget to switch to binary mode, so the FTP protocol doesn’t mangle your binaries on the way up. Post your release files here, and then log out.
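A command-line upload session following the steps just described might look like this (the file name and email address are placeholders; this reflects the SourceForge workflow as described above):

```shell
ftp upload.sourceforge.net
# Name: anonymous
# Password: you@example.org
cd incoming
binary                         # switch to binary mode before uploading
put projectname-1.0.7.tar.gz
bye
```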
Figure 8: Accessing the SourceForge.net file release system

Now pull down the “Admin” menu on your SourceForge project page and select the “File Releases” option, as shown in figure 8. SourceForge categorizes file releases in terms of “packages” and “releases”; these concepts are highlighted in figure 9. A package is a group of files that will always be released together. Often, free software projects have only a single package—it depends on how complex your products are and how you want to release them. Within a package, you will specify a new release—often named for the current version number—each time you want to post one. For example, you may wish to provide a C version and a Java version of your project. If these sub-projects are separately maintained, you may wish to have a C package and a Java package; by separating the two sub-projects into different packages, you have the option of releasing new versions of each on different schedules.
Figure 9: Packages and releases

Once you’ve defined your packages, you then create a new release (see figure 10). The name you give your release will often reflect its version. For instance, you might name a release “1.0.7”. This version number will then be presented as the name of the release on the download page. For example, you
might wish to check out the download pages of several projects on SourceForge.
Figure 10: Creating a new release

On the release configuration page, under “Step One” (shown in figure 11), you can add notes and change log information. If you’ve already formatted the text in your project’s release notes and change log files, you can simply upload those files and check the “Preserve my pre-formatted text” check box beneath these edit windows. If you’re entering this information for the first time here, simply leave the box unchecked and upload or enter the text.
Figure 11: File release process—step 1

In “Step Two” (figure 12), you add the files to your release. As you scroll down this page, you’ll see the files that you uploaded earlier to the FTP site, along with several dozen others. In this screen you have access to all of the files uploaded to the incoming directory by you or anyone else within a given 24-hour period. Please be kind and leave files that don’t belong to you alone. Select the files that you uploaded and click the “Add Files and/or Refresh View” button.
Figure 12: File release process—step 2

After the refresh is complete, in “Step Three”, you’ll see a list of entries corresponding to the files you selected in “Step Two”. This is depicted in figure 13 below. Here, you need to select the processor type and file type for each entry in the list. You must press the associated “Update/Refresh” button for each file, one at a time. (Admittedly, this interface is a bit clunky, but it does the job, and the SourceForge development team is constantly making interface improvements.) Just try to choose the processor and file types most closely associated with the file you’re adding. You may end up choosing “Other Source File” or “Other Binary Package”, but you must choose an option before the file will appear on your project’s download page.
Figure 13: File release process—steps 3 and 4

Finally, in “Step Four”, you can choose to send an email notification to everyone monitoring your package. If the informational text indicates that no one is currently monitoring your package, this step does nothing, so you can simply skip it. At this point, I recommend that you “shadow” your user: close all browser windows, reopen a browser on your project download page, and then download and install the package just as you would expect your users to do. If anything seems unclear, difficult or broken in the process you expect them to follow, now is the time to fix it.
Advertising—getting the word out

Once you’re satisfied that the download and installation process works properly for your new release, it’s a good idea to go to the “Publicity” option under the “Admin” pull-down menu and read the information SourceForge provides there. It describes a few important ways to let the world know about your project.
Figure 14: Enabling and submitting project news on SourceForge.net

First, enable your project news as shown in figure 14. This allows you to effectively create press releases—granted, the only outlet for this news is SourceForge, but a lot of people in the free software world monitor this site for free software project news. Second, submit a news item for your project. Finding the place to submit project news can be a bit daunting because the button is buried pretty deep; I’ve circled it in figure 14 to help you find it. You’ve just released a new version of your project to the world: tell them something about it. This news item will show up at the top of your project’s main SourceForge page, but it may also be selected by SourceForge to run on the main SourceForge news page (under “Project News” at the top of the page). If your project becomes popular and your news items are well written, then your item will almost surely be selected for the main news feed. This is good stuff—better than you might realize. You can also elect to write articles about your project for journals such as Free Software Magazine, or even more traditional hard-copy publications like Dr. Dobb’s Journal. Slashdot and other techie news services are also good ways to advertise, but you have to be careful how you word your articles for these services: they want information, not marketing. If your project is widely recognized, then a new release is newsworthy in itself, but if you’re just starting out, you have to find an angle that makes your new project release newsworthy. For example, if your project solves a common problem in a unique way, then you’ve discovered a newsworthy item to report. FreshMeat is also a source of free software release information, and a lot of people monitor FreshMeat for news about free software releases. You will have to create an account on FreshMeat, add your project to your account, and then specify a new release for your project.
This takes a day or two, as your information has to be verified by a human, but it’s worth the effort.
Conclusion

Running a free software project is not easy, but it can be very rewarding. As your project gains in popularity and usefulness, you may even find it lucrative. One word of advice in closing—don’t try to go it alone. If you have offers of help from others, do yourself a favor and let them help you in the ways you need help. Check out their skills first by allowing them to do some tasks for you, and then grant check-in rights if you approve of their work. Ways others can help include writing documentation, managing the project web site, running build and release management, and helping out with the code itself. The more skilled people you have collaborating with you, the more likely your project is to become successful.
Biography

John Calcote: John Calcote has worked in the software industry for over 25 years, the last 17 of which were at Novell. He’s currently a Sr. Software Engineer with the LDS Church working on open source projects. He’s the project admin of the openslp project, the openxdas project, and the dnx project on sourceforge.net. He blogs on open source, programming and software engineering issues in general at http://jcalcote.wordpress.com.
Copyright information Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved. Source URL: http://www.freesoftwaremagazine.com/articles/running_a_free_software_project
Published on Free Software Magazine (http://www.freesoftwaremagazine.com)
How to completely ditch GUI internet applications for the command line

The short ‘n’ sweet guide to liberating yourself from the evil graphical user interface

By Andrew Min

Today, terminal-based programs seem to have almost disappeared. GUIs are taking over, whether we like it or not. However, there is still a place for the old command line. Take the internet as an example: everyone’s using Firefox, Thunderbird, and Pidgin for their internet activities. Even though these are great, quality, free software apps, they tend to be bloated. That’s where the terminal comes in.
Introduction

Since most of this article is about the internet, having an internet connection might be useful. Also, you’ll probably need a computer (preferably running GNU/Linux, as many of these programs don’t run on Windows or OS X). Finally, you’ll need a terminal emulator (which most GNU/Linux distributions come with). Windows users have cmd.exe, Mac users have Terminal, GNOME users have gnome-terminal, KDE users mostly use konsole, and Xfce users often use xfce4-terminal. Most operating systems also install xterm or some other terminal program. If your operating system doesn’t have any of these (something I would find hard to believe), there is a list of terminal apps at Wikipedia. And if none of these are installed on your system (something I would find extremely hard to believe), you can always install one using your package manager. As a last resort, try pressing Ctrl+Alt+F1 to exit KDE, GNOME, Xfce, or whatever desktop environment you are using (use Alt+F7 to go back). If you are told to run something in this kind of text, that means you should copy it into the terminal and hit Enter (unless it’s a hotkey like c. In that case, just type c into the terminal and it will run automatically).
Lynx: web browsing from the terminal

Homepage: http://lynx.browser.org/

Back in 1992, a text browser that could speak the HTTP and Gopher protocols was born. That browser, Lynx, is still around today. There is absolutely no GUI; it’s just a terminal app with a few different colors. It handles most HTML, supports SSL, and much more. You can even download pictures and movies to view with an external application (like MPlayer). You can use it on servers (which often have no GUI), on low-resource machines, or just for connecting to the web without all the frills of Firefox. It’s available for Windows, GNU/Linux, and Macintosh (via a Fink package or an unofficial build).
Figure 1: Browsing Newsvine.mobi with Lynx
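A couple of typical Lynx invocations, by way of illustration (the URLs are placeholders):

```shell
lynx http://example.org/                   # browse a site interactively
lynx -dump http://example.org/ > page.txt  # render a page to plain text and save it
```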
wget: Text-based downloader

Homepage: http://www.gnu.org/software/wget/
Sometimes when downloading a file you don’t need all the bloat of a full download manager like KGet. That’s why I like wget, an HTTP(S) and FTP downloader for Windows and GNU/Linux (with an unofficial build for Mac OS X). It supports resuming incomplete downloads, HTTP or FTP mirroring, proxies, and much more. Just type wget [URL], replacing [URL] with the URL of the file, e.g. http://mirror.cc.columbia.edu/pub/linux/ubuntu/releases/kubuntu/feisty/kubuntu-7.04-d and the Kubuntu CD image will start downloading. You can also use wget to mirror sites with the -m flag.
Figure 2: Downloading Kubuntu Feisty with wget
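Beyond the basic invocation, a few wget options worth knowing (the URLs are placeholders):

```shell
wget -c http://example.org/big-file.iso          # resume an interrupted download
wget -m http://example.org/                      # mirror an entire site
wget -O kubuntu.iso http://example.org/file.iso  # save under a different name
```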
rtorrent: Torrents without a user interface

Homepage: http://libtorrent.rakshasa.no/

Sometimes it’s faster to download a file with BitTorrent technology than to wait in line behind the six thousand other users hammering a web mirror. But you don’t always need the chrome and glass of KTorrent or Deluge, and GUIs can hog more system resources. That’s why rtorrent was created. It’s a torrent client for GNU/Linux (or Macintosh, with an unofficial port from Mac OS Forge) that runs in the terminal and claims to seed at up to 3 times the speed of the official BitTorrent client. Just open rtorrent and type in the URL of the torrent.
Figure 3: rtorrenting the Kubuntu Feisty torrent
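Besides pasting a URL into the running program, rtorrent can also be started with a torrent file or URL on the command line (the names here are placeholders):

```shell
rtorrent example.torrent                  # open a local .torrent file
rtorrent http://example.org/file.torrent  # or fetch the torrent by URL
```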
Mutt: email without a GUI

Homepage: http://www.mutt.org/

I personally love Thunderbird. However, like most Mozilla apps, it can be rather slow. Luckily, we have Mutt for GNU/Linux (and, unofficially, Windows and Macintosh as well). It’s a small email client that runs in the terminal. All you do is run it (mutt), type c to open a mailbox, and type in the location of the mailbox. For connecting to a remote POP box, type something like pop://username@mail.example.com/ (pops://username@mail.example.com/ for SSL). For example, if I had the Gmail account andrewmin@gmail.com, I would connect to Gmail by typing pops://andrewmin@pop.gmail.com. You can also connect to IMAP, mbox, Maildir, MH, and NFS.
Figure 4: Checking my Gmail with Mutt
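The same mailbox locations can also be passed on the command line with Mutt’s -f option, instead of typing them after pressing c; this assumes your Mutt build includes POP and IMAP support, and reuses the example address from the text:

```shell
mutt                                        # open your default local mailbox
mutt -f pops://andrewmin@pop.gmail.com/     # open a remote POP mailbox over SSL
mutt -f imaps://username@mail.example.com/  # or an IMAP mailbox over SSL
```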
Finch: IM. No interface required

Homepage: http://developer.pidgin.im/wiki/Using%20Finch

If you don’t live under a rock, you’ll know what Pidgin (formerly GAIM) is: it’s a GNU/Linux program that lets you connect to multiple instant messaging networks. But what most people don’t realize is that there is a command-line version of it called Finch (the older version is called gaim-text), and it is usually bundled with Pidgin. Just open it up and start typing! You switch between chats (and the buddy list) using M-n/M-p (that is, Alt+n and Alt+p) to go to the next/previous window.
Figure 5: Chatting with myself in gaim-text. Yes, I was extremely bored
Snownews: RSS reading, terminal style

Homepage: http://kiza.kcore.de/software/snownews/

Want to catch up on the latest headlines, but don’t want to fire up a full GUI just to do it? Then Snownews is for you. It’s an RSS reader that supports proxies, update checking, keybindings, categories, plugins, and even a built-in web browser. All from the command line. To run it, type snownews (typing h brings up the help window with some helpful commands to get started). You can even import and export OPML files with the bundled app opml2snow (run opml2snow -h for brief help).
Figure 6: Reading the Newsvine.com feed with Snownews
Conclusion By now, you can see that the terminal is a powerful tool—and I’ve hardly scratched the surface of it. There are tons of alternatives to the programs I mentioned, including cURL (a robust download manager), Links (a browser), ELinks (another browser based on Links), W3M (yet another browser), MPlayer (terminal media player), Irssi (IRC client), and naim (an AIM/ICQ/IRC/CMC instant messenger).
Further reading
• CLI-Apps.org
• Command Line Warriors
• Wicked Cool Shell Scripts (book review here).
• A beginner’s introduction to the GNU/Linux command line
Biography Andrew Min: Definition: Andrew Min (n): a non-denominational, Bible-believing, evangelical Christian. (n): a Kubuntu Linux lover (n): a hard core geek (n): a journalist for several online publications including Free Software Magazine, Full Circle Magazine, and Mashable.com
Copyright information This article is made available under the "Attribution-Sharealike" Creative Commons License 3.0 available from http://creativecommons.org/licenses/by-sa/3.0/. Source URL: http://www.freesoftwaremagazine.com/articles/ditch_gui_apps_for_command_line
Published on Free Software Magazine (http://www.freesoftwaremagazine.com)
Run any GNU/Linux app on Windows without any virtualization Using SSH to access programs from an Ubuntu box By Nathan Sanders SSH tools, long used by UNIX gurus to perform complicated administrative tasks over the internet on machines miles away, are a very simple and user-friendly solution for more conventional purposes. Ubuntu users, read on to learn how to use SSH to run your favorite GNU/Linux software on Microsoft Windows—without installing any software on the Windows box.
Installing an SSH server Before you begin, make sure you have the necessary materials. You need an Ubuntu machine to serve programs and a Windows machine to access them, a USB stick if you want to make your setup portable, and a fast network connection if you want to be able to run anything more complicated than nano. You are going to be focusing your attention on accessing programs from a Windows machine, but first you need to install some server software on your Ubuntu box. You will be installing OpenSSH, the de facto standard in the free software world for opening a secure gateway to your PC. The OpenSSH server installs just like any other software and requires no additional configuration for our purposes. The whole process should take about two minutes. The OpenSSH server installs just like any other software and requires no additional configuration—the whole process should take about two minutes These instructions are tailored for Ubuntu users, but OpenSSH is available for nearly every GNU/Linux distribution and other UNIX-like operating systems. Windows users can install SSH servers, too; it is only a bit more complicated on Microsoft’s platform, and you will have to refer to the OpenSSH for Windows project for guidance. If you are comfortable installing software on your Ubuntu machine, go ahead and install the openssh-server package. If this is unfamiliar territory, all you need to do is enter the command below in a terminal. You can use the Terminal program found in the Applications→Accessories menu.
sudo apt-get install openssh-server
You will be prompted to enter your user password and respond “yes” to installing the package and any associated dependencies. Installing the OpenSSH server is as simple as that and it should be configured correctly right out of the box. If you run into problems later, skip to the end of the article for configuration troubleshooting. You can also install software using Synaptic Package Manager (System→Administration→Synaptic Package Manager), without touching the command line. Stay by your Ubuntu box for one more minute. To access it later on, you will need to know the machine’s location on the internet (IP address). If you don’t know it already, visit a website that will tell you your IP address. Depending upon your internet service provider’s practices, this address could change periodically. You can create a stable DNS name for yourself using a dynamic DNS service. If your Ubuntu machine is one of several computers in a local area network (LAN), things become a bit more complicated. To access the Ubuntu machine from another computer within the LAN, you will have to find out
what address your Ubuntu box has been assigned. If you are using a home network router, this should be easy to do from a web browser—consult your router’s manual for details. Alternatively, you can use the ifconfig command (http://linux.die.net/man/8/ifconfig) or ask your system administrator. If you are accessing your LAN from elsewhere on the internet, you will need to make sure your router is forwarding the SSH port (port 22, by default) to the Ubuntu machine. This can also be set up from your router’s web interface without much hassle.
Windows client-side software You are going to need some software Microsoft didn’t supply for you to access your server on Windows, but I wasn’t lying—you don’t have to install any of it if you don’t want to. You will need an SSH client to connect to your Ubuntu box and an X-server to display graphical applications. You can use free software tools from Xming, which can be carried around on a portable USB stick (or any other portable device with about 8MB of free space) usable on any computer running Windows. If you don’t want to bother with the USB stick, just install everything to the computer as normal. If you are using Windows XP or newer, download the Xming installer and double click to begin. Note that there is also an older version of Xming for Windows 2000, but platforms prior to that are not supported. Click “Next” on the welcome screen to be prompted for the installation location. If you want to put it on your USB stick, click “Browse” and select the proper drive.
Figure 1: If you are installing to a USB stick, make sure you specify the correct drive. Click “Next” again to be presented with a few installation options. You can do away with “Non US Keyboard support” to save a little space, but leave the rest of the packages checked. Click “Next” again and you will be asked if you want to create a start menu folder for Xming. If you are installing to a USB stick, you can check off “Don’t create a Start Menu folder”. On the next screen, you will want to uncheck all of these options as well. Click “Next” one last time and then “Install” to finish up.
Running remote applications To test out Xming, plug your USB stick into a Windows computer, open the Windows Explorer file manager and navigate to the USB stick. Open the folder you installed Xming to and double-click on the XLaunch application. You can use your Xming USB stick with any Windows machine running XP or later You will be greeted with a handy wizard to help you access your Ubuntu machine. On the first screen, keep the “Multiple windows” option checked so that the program you launch is opened in a window like any other application would be. On the next screen, choose to “Start a program” so that you can immediately test your server without having to drop down to the command line.
Click “Next” and then fill out the server and program’s parameters. First, enter the name of the program you want to run in the text box next to the words “Start program”. It is important that you know the Unix name of the desired program, which is the command you would type on your Ubuntu machine to run it rather than the exact name of the application. This is usually just the application’s name in all lower-case letters. For instance, to run the GIMP, type gimp. Some software does deviate from this rule, such as Firefox: mozilla-firefox. Still on the same screen, select the “Using PuTTY (plink.exe)” option in the “Run Remote” frame. PuTTY isn’t installed on this USB stick, but Xming came with a replacement program that does everything you need it to. I will discuss this program in more detail shortly. In the “Connect to computer” textbox, input the IP address of your Ubuntu box that you noted earlier. In the “Login as user” textbox, input your Ubuntu user name. Of course, in the “Password” box you need to input your Ubuntu password. Click “Next” twice and then “Finish” to start the application.
Figure 2: Yes, this is Amarok running on Windows. Don’t get too excited, though; all the sound still plays out of my Ubuntu machine downstairs. Once you have your application running, you can use it just as you would sitting in front of your Ubuntu box. Keep in mind, however, that you are still using all the hardware from your Ubuntu machine rather than the Windows one—SSH is just giving you a window into it. That means that files you want to save or open have to come from the Ubuntu machine’s hard disk or removable storage drives. The same goes for sound and printer output. Once you have your application running, you can use it just as you would sitting in front of your Ubuntu box, but keep in mind that you are still using all the hardware from your Ubuntu machine This also means that applications will run just as fast as they do on your Ubuntu box, even if your Windows machine is sluggish, provided that you have a fast network connection. One thing that should transfer between computers automatically is the contents of the clipboard when you copy and paste things, although this does not work perfectly. To learn how to easily share files between your two computers using SSH, refer to the SSH Beyond the Command Line article in Free Software Magazine issue 19.
Xming and the command line Now that you have Xming up and running and are enjoying graphical applications, try using it to open a terminal from your Ubuntu box. This will allow you to use command line utilities and even open other graphical applications. Click the “Run…” button in the Windows start menu and type “cmd.exe”. Click “OK” to run the Windows terminal. Go back to the Windows Explorer file manager and navigate back to your Xming folder. Find a program called “plink” and drag and drop it into the command line window.
Plink is an Xming application that suffices as a PuTTY replacement. PuTTY is a very sophisticated SSH client for Windows, but it has far more functionality than needed to simply run a few programs. Dragging and dropping the program into the terminal window enters the location of the plink program into the command line. Now complete the command as follows to open the program of your choice: [plink location] -X [Ubuntu user name]@[IP address of Ubuntu machine]
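Filled in with hypothetical values, the complete command looks like this. The path, user name and address below are placeholders — substitute the ones from your own setup. The shell variables are only there to make the three pieces explicit; on Windows you would type the final composed line into the terminal directly:

```shell
# All three values are hypothetical placeholders -- use your own
PLINK="E:/Xming/plink.exe"   # the path drag-and-drop inserted for you
REMOTE_USER="alan"           # your Ubuntu user name
REMOTE_HOST="192.168.1.10"   # your Ubuntu box's IP address
# This prints the command you would run in the Windows terminal:
echo "$PLINK -X $REMOTE_USER@$REMOTE_HOST"
```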
Figure 3: PuTTY is a wonderful and powerful application, but it is far simpler to launch remote programs with plink—even if it means using the command line. Hit the enter key and you will be prompted for your Ubuntu password. Enter it and you will be dropped to a terminal from your Ubuntu machine. From here you can run all the commands you would in Ubuntu. To start more graphical applications, simply type their UNIX name as you did before and hit enter.
Troubleshooting SSH configuration As I mentioned, the default OpenSSH configuration on Ubuntu should suffice for compatibility with Xming. If you are using a different distribution with other default settings or experience problems, read on for a few troubleshooting suggestions. To configure OpenSSH, you need to open your favorite text editor as root and load the OpenSSH configuration file. If you still have that Ubuntu terminal window open, the following command will do this for you: sudo gedit /etc/ssh/sshd_config
The file you just opened specifies all the configuration options for OpenSSH. Whatever distribution you use, a long default configuration file should be supplied. You will only need to look at a few specific lines in this file to make sure they are written as you want them.
Port Quickly check what port your server is configured to accept connections on. This article has assumed you are using port 22, the standard port for SSH. The configuration file line should look like this: Port [number]
If the number is not set to 22, you can change it to 22 or leave it as it is—and remember to specify the right port in the client software you use on Windows.
Password authentication To follow my instructions above, you are expected to be proving your identity using a password. Although other methods exist, this is perhaps the simplest way to authenticate users and should provide plenty of
security provided that you have a strong password. Make sure the following line is in your configuration file: PasswordAuthentication yes
Remember that any line with a # (hash mark) in front of it is a comment and will not be considered as a setting. OpenSSH should default to using password authentication even if this line is preceded by a hash mark, but you can remove the hash just in case. Also note that if this line does not exist in your configuration file, it is safe to simply add it.
Allowed users If your Ubuntu user is not permitted to use SSH, it will certainly cause you problems when trying to log in. If an AllowUsers line is present, only the users it names may connect, so make sure your user name is on it: AllowUsers [my user]
X11 Forwarding This is the line that is most likely to be causing you trouble, as it is often turned off by default. If it is not configured as shown below, you will be able to access your server with the command line but not start graphical applications. X11Forwarding yes
The X11Forwarding line is what is most likely to cause trouble
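A quick way to review all of the settings discussed above at once is to grep for them. The snippet below runs against a sample file so you can see the expected shape; on your own machine you would point grep at /etc/ssh/sshd_config instead (the user name is a placeholder):

```shell
# Build a sample config purely for demonstration; on a real system,
# grep /etc/ssh/sshd_config directly
cat > sample_sshd_config <<'EOF'
Port 22
PasswordAuthentication yes
AllowUsers alan
X11Forwarding yes
EOF
# Show only the lines this article cares about
grep -E '^(Port|PasswordAuthentication|AllowUsers|X11Forwarding)' sample_sshd_config
```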
Restart your server! None of the configuration changes you have made will take effect until you restart your server as the root user. Do it by entering the following command: sudo /etc/init.d/ssh restart
If you are using a distribution other than Ubuntu, something like “sshd” may appear in this command rather than “ssh”.
Use SSH responsibly Now that you have learned to access your computer from across the room or across the country, remember that you can do as much damage to it from afar as up close. Don’t fool around in the command line unless you are sure you know what you are doing. That said, if you have just learned to use SSH, now is a wonderful time to learn the greater power of the GNU/Linux command line. There are a bevy of resources on the internet and in bookstores to help you learn, including articles in Free Software Magazine.
Biography Nathan Sanders: Nathan Sanders is an experienced free-software user and frequent contributor to publications concerning open-source software.
Copyright information Source URL: http://www.freesoftwaremagazine.com/articles/run_any_gnu_linux_app_on_windows_without_any_virtualization
Published on Free Software Magazine (http://www.freesoftwaremagazine.com)
Beginner's guide to database administration tools MySQL, Ubuntu and a drop of PHP and Perl By Alan Berg Welcome to an introduction for the beginner to the basic manipulation of the MySQL database with free software. The purpose of this article is to show how universally straightforward it is to get started with installing and applying a high-grade enterprise ready database like MySQL, and to learn how to manipulate it via numerous free software approaches. I will explain how to set up MySQL and a few client helper tools to enter data; I will also cover PhpMyAdmin, a well known and highly deployed administration tool for MySQL. Finally, I will look at the Perl programming language as an easily accessible vehicle to database manipulation. One obvious warning: installing server software has inherent risks such as security and accidental deletion of files. A security related example is that the MySQL database sometimes comes cleanly installed with accounts with blank passwords, which are obvious targets for even the laziest hunter. Personally, I have an experimental machine that I reformat and cleanly install periodically and that doesn’t hold any information valuable to the outside world.
Database installation Both version 4 and 5 of MySQL are highly popular. Version 5 has extra features including views, triggers and stored procedures. These features enhance the database’s potential when compared to the slightly older and arguably more proven version 4. Sometimes life is easy by design. To install MySQL from the command line I’ll use the excellent package manager tool apt. Assuming you have Ubuntu 7.04 installed on your development machine, then enter the following command:
sudo apt-get install mysql-server-5.0
During installation, many lines of output will whiz past you at a rapid pace on the command line. Focusing in on the relevant part: if all goes well, you will see the following text generated:
The following NEW packages will be installed: libdbd-mysql-perl libdbi-perl libnet-daemon-perl libplrpc-perl mysql-client-5.0 mysql-server-
Note from the text that MySQL wasn’t the only program that got installed: as part of the installation, apt updated the Perl interpreter with modules that enable seamless communication from Perl scripts with the MySQL database. The default binding address of the database is localhost; this implies that the package maintainers have thoughtfully secured the database so that it is not connected to the internet and is only directly contactable by users on the same machine. Yes, a good solid default for single-user play-around computers, such as my own desktop. As standard, the root account has no password. Obviously, an interesting feature meant for ease of use but definitely not a secure long-term stance. Our initial actions are first to check which accounts need changing and then actually change the password of those accounts. To log on to the database with the just-installed client tool, try the following from the command line:
mysql -u root
The tool connects you locally to the database. Now to find which accounts already exist try inputting the following:
SELECT Host, User FROM mysql.user;
+-----------+------------------+
| Host      | User             |
+-----------+------------------+
| 127.0.0.1 | root             |
| alans     | root             |
| localhost | debian-sys-maint |
| localhost | root             |
+-----------+------------------+
The debian-sys-maint account is exactly that: an account used by administrators and for scripted MySQL maintenance. Changing the password of this account is not necessary—in fact, it’s even potentially dangerous. However, for the daredevils among you, if you really have to fiddle then you will find the information about this account mirrored in the startup configuration /etc/mysql/debian.cnf—remember that you’ll need to change that file as well. To change the root passwords used to log on from localhost and from alans (alans being the hostname of my overloaded, abused, battered, sandwich stained, beaten, trodden on, dropped and bounced computer), copy the following commands into the MySQL client window. Before doing so please change the mentioned password changeit to suit your own security policy (you do have one, right?).
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('changeit');
SET PASSWORD FOR 'root'@'alans' = PASSWORD('changeit');
SET PASSWORD FOR 'root'@'127.0.0.1' = PASSWORD('changeit');
flush privileges;
The flush privileges command is just a reflection of my overcautious nature, forcing me to make sure that the privileges have been correctly updated. Next, I am going to create a new database called freesoftware, then show that the database actually exists and then exit gracefully from the MySQL client shell.
create database freesoftware;
show databases;
exit
To connect again I need to input the new password and, of course, use the freesoftware database.
mysql -u root -p freesoftware
password: changeit
For the sake of simplicity, I will create a rather simple table with only two fields, PK_ITEM (the primary key) and Name (a normal field):
create table items (PK_ITEM INTEGER NOT NULL, NAME CHAR(15) NOT NULL, PRIMARY KEY (PK_ITEM) );
describe items;
+---------+----------+------+-----+---------+-------+
| Field   | Type     | Null | Key | Default | Extra |
+---------+----------+------+-----+---------+-------+
| PK_ITEM | int(11)  | NO   | PRI |         |       |
| NAME    | char(15) | NO   |     |         |       |
+---------+----------+------+-----+---------+-------+
Creating such a simple table is necessary to avoid obscuring the purpose of this article, which is to act as a quick starter. In real life, applications tend to have complex data models. It is common for some of the production ripe applications I work with to have 50 or so tables with hundreds of well-thought-out and not-so-well-thought-out constraints. Suffice to say that the exact design of a data model is a significant weight in the definition of quality of a given piece of database-enabled software. A good design at the beginning of
the development process has the potential to save enormous amounts of refactoring and, worse still, data migration later on. In this case, with one primary key and one normal field, I am sure it will be okay design-wise! To populate the items table, please create a file named data.txt in a location of choice (probably your home directory) with the following comma delimited content:
1,small_toy
2,medium_toy
3,large_toy
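If you prefer to stay in the terminal, a here-document creates the same file in one step (the file name and location are your choice):

```shell
# Create the sample data file in the current directory
cat > data.txt <<'EOF'
1,small_toy
2,medium_toy
3,large_toy
EOF
# Sanity check: the file should contain three comma-delimited rows
grep -c ',' data.txt
```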
To load in the data from the MySQL client command line, copy the following line, where path is the location on your file system of the data file: LOAD DATA INFILE '/path/data.txt' INTO TABLE items FIELDS TERMINATED BY ',';
By default, MySQL reads in tab-delimited files. I personally prefer comma delimited files: that’s why I added FIELDS TERMINATED BY to the command. If the action succeeds, then the response will be the same as:
Query OK, 3 rows affected (0.00 sec)
Records: 3 Deleted: 0 Skipped: 0 Warnings: 0
Or if you pointed to a nonexistent file, the following meaningful result is returned: ERROR 13 (HY000): Can't get stat of '/home/alan/Desktop/freesoftware/data.txt' (Errcode: 2)
Yes I saw the error of my ways immediately… To verify that good things have happened, e.g. that the data exists in the items table, fire off the query:
select * from items;
+---------+------------+
| PK_ITEM | NAME       |
+---------+------------+
|       1 | small_toy  |
|       2 | medium_toy |
|       3 | large_toy  |
+---------+------------+
Everyone has his or her own personal way of working. When I play—sorry I mean develop—with MySQL, I like to keep my initial data in text files. If I accidentally damage my tables, then I like to drop the table, create a new one and then quickly load the new table with the data from the text files. Even for hundreds of thousands of records, the load in time is impressively short and, at worst, in the order of milliseconds to seconds—truly faster than a speeding bullet! Even for hundreds of thousands of records, the load in time is impressively short and, at worst, in the order of milliseconds to seconds—truly faster than a speeding bullet If you wish to drop the table items and then drop the database to totally hide your evil wrong doing, or just to clean up, type:
drop table items;
drop database freesoftware;
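The drop-and-reload workflow described above is easy to keep in a small SQL batch file. A sketch under stated assumptions — the password, database name and the path to data.txt are the placeholders used earlier in the article, and the final mysql invocation (commented out here) needs a running server:

```shell
# Write the rebuild steps to a batch file (names and paths are the
# placeholder values from earlier in this article)
cat > rebuild.sql <<'EOF'
DROP TABLE IF EXISTS items;
CREATE TABLE items (PK_ITEM INTEGER NOT NULL, NAME CHAR(15) NOT NULL,
                    PRIMARY KEY (PK_ITEM));
LOAD DATA INFILE '/home/alan/data.txt' INTO TABLE items
    FIELDS TERMINATED BY ',';
EOF
# Then rebuild whenever the table gets damaged:
# mysql -u root -pchangeit freesoftware < rebuild.sql
```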
Within a couple of command line actions I was able to install a MySQL 5 database server and matching basic client software, create a database and a table, and populate the table with data contained within a plain old text file. As a bonus, installation of the database also enhanced the standard distribution of Perl with extra modules that make querying the MySQL server straightforward—and hopefully, at some point, fun.
Administration GUI Ease of use dictates that you will need to perform standard administration tasks via a GUI. MySQL Administrator, developed by none other than the creators of MySQL, can easily do most of your database related system administration grunt work for you. To install MySQL Administrator from the command line type:
sudo apt-get install mysql-admin
Assuming you have the standard Ubuntu desktop environment, you will find a shortcut to the tool at the top of the screen under the menu option Applications→Programming→MySQL Administrator. Alternatively, to run from the command line: mysql-admin&
On execution, a connection dialog will appear as shown in figure 1. Please type in localhost for the server’s hostname and then the username “root” and the hopefully long and random password you chose earlier. One of the first things to do is to create an extra user that only has select and insert permission on the freesoftware database.
Figure 1: The connection dialog Upon successful connection, the main screen, as displayed in figure 2, will appear. Notice all the potential actions on the left hand side. Clicking around will give you great insight into the current configuration and health of your database(s). In this section, I will describe the primitives of user administration; the rest of the inbuilt functionality is beyond the scope of this elementary starter tutorial.
Figure 2: MySQL Administrator's main screen I sometimes find it hard to know which of the many fine-grained permissions available a user should have within MySQL. Some users may be associated with PHP scripts or other applications that generally should only be able to select from a given database. Other users may be associated with real system administrators and require insert or update rights. As you will soon see, the GUI client known as MySQL Administrator makes the process intuitively obvious and thus painless. In the next section, I will be describing a simple Perl script that searches via the select command and inserts data. With these constraints in mind, I will make a user called “alan”. In reality, when more than one developer is working with the database, you should use a more complex naming convention. Select the user administration, right click on the debian-sys-maint account in the bottom left hand corner and then “Add new User”. Filling in the User Information Dialog and applying the changes generates the “alan” account, as shown in figure 3.
Figure 3: Creating a new user account Finally, update the privileges of the user so that you can limit damage if evil doers intentionally (or bad programming habits inadvertently) abuse the account. Click on the Schema Privileges tab, as shown in figure 4, and choose Select and Insert privileges, then apply changes. You can always update the account later if you need more power.
Figure 4: The schema privileges tab I particularly like the schema tab as it de-skills the knowledge required to define permissions. Of course, the GUI can help with a lot more, such as looking at system health, starting and stopping services, backups and viewing configuration for further adventures. I would even, hum, suggest first reading selected parts of the manual.
Perl Note: If you are not interested in the basic Perl programming structures necessary for scripting MySQL interactions, you are welcome to skip this brilliantly thought out and excellently written section (and it’s me saying so!); this section could change your life and help you win the lottery. To recap, so far I have shown you how to install a database, add a table with records from a text file and create a least privileged user. In this section, I will describe a relatively simple Perl script that will insert data into the items table and display the full contents of the table. If you remember, during installation apt installed extra Perl modules that enable scriptable communication with the MySQL 5 database. To understand a little about the database independent interface DBI for Perl, consult the relevant man pages:
man DBI
man DBD::mysql
The Perl script shown next performs two distinct types of database related actions. The first type are actions such as insert and update that do not expect a result set of values to be returned; this type of action is handled by the do() method. The second, more complex action (as enacted in the code just below comment [4]) is to query and then loop through each row of data: prepare a statement, execute it, then fetch each row as an array and loop through all of the rows. This process is stereotypical for web site interactions, where users place orders in a shop and the application retrieves data to describe salable items. As described next in the comments of the script, the flow of the program is:
1. Initialize configuration information
2. Connect to the database
3. Insert data $no_entries number of times
4. Query the database
5. Disconnect from the database.
I would not expect a basic Mod Perl/CGI script to need much more in terms of vocabulary.
#!/usr/bin/perl
use DBI;

# [1] Configuration variables
$database='freesoftware';      # Which database
$host='127.0.0.1';             # Which host to connect to
$dbUser='alan';                # The user to connect to the DB
$password='changeit';          # Password
$table='items';                # Table to insert into
$query='select * from items';  # Query to ask
$no_entries=40;                # Number of extra entries to generate
$startPK=4;                    # Primary Key start point

#MAIN
# [2] Connect to Database
$dbh = DBI->connect("DBI:mysql:database=$database;host=$host",
                    $dbUser, $password, {'RaiseError' => 1});
# [3] Loop $no_entries times and insert a row into the table each time
my $counter=0;
while ($counter != $no_entries){
    $dbh->do("INSERT INTO items VALUES (?,?)", undef,
             $counter+$startPK, "AUTOMATIC_PK_$counter");
    $counter++;
} # End of loop

# [4] Return Query information
$sth = $dbh->prepare($query);
$sth->execute;
print "QUERY: $query\n";
# Loop through each row of information returned
while (my @row_array = $sth->fetchrow_array) {
    print "@row_array\n";
}
$sth->finish();
# End of Query

# [5] Cleanup database connection
$dbh->disconnect();
Notice that, for the sake of simplicity, I have not included any real error handling in the code. However, if you deliberately break the query statement stored in $query you will normally get a meaningful error message back; for example:
DBD::mysql::st execute failed: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'slect * from items' at line 1 at select.pl line 17.
Having an example code snippet gives you a finger hold into the programming aspects. If you are looking for an exercise to strengthen your developer’s backbone, feel free to expand the code to delete the first and last rows and then redisplay the results. In summary, in this section a basic DBI script has been detailed which notably uses both the prepare()/execute() approach and the do() approach to perform a given SQL action.
MySQL querying from the GUI The nice people at MySQL AB have also developed the MySQL Query Browser. This tool allows for ad hoc querying and modification of data. Furthermore, you can export result sets to CSV, XML, PDF and Excel, and create and edit stored procedures within the database itself. To install via apt:
sudo apt-get install mysql-query-browser
You will now find the GUI tool under the menu item Applications→Programming→MySQL Query Browser. Connect to the database, type the query select * from freesoftware.items; and then click on the Execute icon at the top right hand side: this will generate a screen similar to that shown in figure 5.
Figure 5: The MySQL Query browser From the GUI you now have the ability to conveniently modify data on the fly. At the bottom of the screen, you may notice the “Start Editing” button: press it, click on the result medium_toy, edit and finally apply changes. Execute the query again and you will clearly see that the database has updated the relevant value. To export the data to a particular file type, select the menu item File→Export Resultset and then choose the type and location of file you wish to export.
PhpMyAdmin

Before installing PhpMyAdmin, be warned that a fresh install will end up deploying both the Apache 2 server and PHP 5. If you have already installed another web server, or do not like the security implications, then this may be one step too far; but then again, if you are like me and don't mind taking risks on a development machine, carry on. The PHP language is well suited to developing MySQL applications: it's interpreted (and therefore doesn't need recompilation), and has built-in commands that make connecting to MySQL obvious and easy to master. PhpMyAdmin is a best-of-breed web interface that allows rich manipulation of MySQL and is included in the popular and well regarded XAMPP bundle. To install PhpMyAdmin under Ubuntu via apt:

sudo apt-get install phpmyadmin
Notice that apt installed the following packages: apache2-mpm-prefork apache2-utils apache2.2-common libapache2-mod-php5 php5-common php5-mysql phpmyadmin. Watching the fast-flowing installation feedback I also noticed the warning:
Setting up apache2-mpm-prefork (2.2.3-3.2ubuntu0.1) ...
Starting web server (apache2)...
apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
If you see this message, to reach the PHP5 application you will need to type the following in your favourite browser's address bar: http://127.0.1.1/phpMyAdmin/. The security implications are positive, as only a local user has access to the application—this is very good for local testing. Notice that your newly installed login page is not encrypted; for security reasons it is best practice to enable the SSL port if you access it over the internet. As shown in figure 6, there is a friendly forward-looking feature of this tool: it's been translated into a vast range of languages. As I live in Holland, it is pleasant to see the Dutch language as one of those included.
Now log in as user “alan” (or whichever least-privileged user you have created) with the password you use directly against MySQL.
Figure 6: The default login page of phpMyAdmin

After logging on to the application, it quickly and efficiently assembles relevant information into an introduction screen (figure 7). The user “alan” has no privileges to create databases, and that is reflected in red highlighted text (impossible to miss even if we wanted to!).
Figure 7: The introduction screen of phpMyAdmin

The buttons on the left-hand side underneath the product's icon are shortcuts to home, exit, “perform an SQL query”, the tool's documentation and more generic documentation. Hitting the SQL icon pops up a query window, as shown in figure 8. In the query window write the query “select * from freesoftware.items;” and then press the Go button.
Figure 8: The SQL query window

Hitting the Go button brings up the main window with the results (figure 9).
Figure 9: The results window

Clicking on the pencil icon allows you to edit individual results. The developers of this rich application have enabled many potential actions, including browsing the data. At the top of the browser window are tabs: Browse, Structure, etc. Selecting the Structure tab generates a screen similar to figure 10.
Figure 10: The structure window

Within the structure pane (figure 10), you can clearly see and manipulate the table's structure and even generate new indexes that make querying faster. To export data, choose the Export tab as shown in figure 11, and then select the export format of choice.
And the list of potential administrative tasks goes on…
Figure 11: The export tab of phpMyAdmin

To sum up, I personally like the phpMyAdmin tool because of its no-fuss interface and rich functionality—and it works over the internet. No wonder it is so popular with website administrators and Internet Service Providers.
End bits

The MySQL database is fast, stable, well received and has numerous free software tools that support it. I hope this article has helped some first-time users onto the initial rungs of the knowledge ladder. After following the recipes on your Ubuntu development machine you now have a database, a few graphical tools, Apache with PHP and a Perl script waiting to be adapted. Time to experiment. Remember to drop everything afterwards!
References

• PhpMyAdmin homepage
• MySQL homepage
• MySQL Administrator Manual
• MySQL Query Browser
• XAMPP
• XAMPP statistics
Biography

Alan Berg, Bsc. MSc. PGCE, has been a lead developer at the Central Computer Services at the University of Amsterdam for the last eight years. In his spare time, he writes computer articles. He has a degree, two masters and a teaching qualification. In previous incarnations, he was a technical writer, an Internet/Linux course writer, and a science teacher. He likes to get his hands dirty with the building and gluing of systems. He remains agile by playing computer games with his kids who (sadly) consistently beat him physically, mentally and morally. You may contact him at reply.to.berg At chello.nl
Copyright information

This article is made available under the "Attribution-NonCommercial-Sharealike" Creative Commons License 3.0 available from http://creativecommons.org/licenses/by-nc-sa/3.0/.

Source URL: http://www.freesoftwaremagazine.com/articles/beginners_guide_to_database_administration_tools
Published on Free Software Magazine (http://www.freesoftwaremagazine.com)
Create a simple application with Hecl

Introducing Hecl, a mobile phone scripting language

By David Welton

These days, almost everyone has a cell phone; cell phones keep getting faster, smarter, and more capable, yet relatively few applications exist for them. The Hecl programming language makes it easy to script applications for your cell phone—with just a few lines of code, you can create applications that you can carry with you, everywhere.
Easy cell phone applications with Hecl

I first fell in love with computers when my parents bought me a Commodore 64, a fairly nice computer for the time. Thanks to Moore's law, and the relentless pace of development, the average cell phone is now more powerful than that machine from some 20 years ago. While it's understandable that many people just want to make phone calls, think of all the programs out there waiting to be written that take advantage of the fact that you almost always have a cell phone with you. I think I'm just beginning to scratch the surface of what's possible, especially as phones continue to get faster, and have better connections to the internet. I became interested in writing cell phone applications several years ago, after a rainy day high in the Italian Dolomites near Cortina d'Ampezzo—my old phone ended up in a mud puddle and died, leading me to purchase a new phone with J2ME (Java) capabilities. Writing applications in Java was okay, but I thought to myself that it would be an interesting experiment to try and create a scripting language that runs on top of the J2ME (now known as Java Micro Edition or Java ME) environment. When I created Hecl, I did so with several goals in mind:

1. Make it even easier and faster for experienced programmers to create cell phone applications.
2. Make it possible for novice programmers to create cell phone applications without the burden of dealing with Java.

Hecl has other benefits too—it's faster to develop applications, because you don't have to recompile after each change. In the hands of a clever programmer, it's also possible to do interesting things with Hecl because of its interpreted nature. You could start an application on your phone, and download additional bits of code off the web. The aim of this tutorial is to help you create cell phone applications, so let's get started right away. You'll need a few things first:

• Sun's Java. This is heading towards free software, but isn't quite there yet. If you run Ubuntu, like me, you can get Java with apt (apt-get install sun-java5-jdk), provided you've added the “multiverse” repositories to your /etc/apt/sources.list file: deb http://us.archive.ubuntu.com/ubuntu/ feisty multiverse
• Sun's WTK toolkit. While you don't need the tools to compile Hecl (unless you want to hack on it!), you do want the emulator, so that you don't have to load your app onto your phone each time you want to test it. It's not free software (yet?), but it does run on Linux, Mac and Windows. You can download the WTK for free.
• Hecl itself. You can get it from the Sourceforge download page.

Sun's WTK requires installation—you can put it somewhere like /opt, so it won't get mixed up with the rest of your system. The installation process is very simple—just say yes to a few questions, and you're done. Hecl doesn't require installation: everything you need is already there in the distribution.
To see if everything's working, you can try launching the emulator with the sample application:

/opt/WTK2.5.1/bin/emulator -classpath build/j2me/final/cldc1.1-midp2.0/Hecl.jar Hecl

That should bring up something like this:
Figure 1: Hecl demo screen shot

This is Hecl's built-in demo—its source code is located in midp20/script.hcl, but before I get too far ahead of myself, let's go back and create the classic “Hello World” application, just to get started and see how to work with Hecl. Note: Hecl actually comes in several flavors, with slightly different GUI commands—MIDP1.0 (older phones), which has fewer commands and doesn't do as much, and MIDP2.0, for newer phones, which has a lot more features. This tutorial utilizes the MIDP2.0 commands, because that's what current phones are based on. The concepts described are very similar for the MIDP1.0 commands, but the commands are slightly different. Please contact me if you are interested in a MIDP1.0 version of this tutorial.
The “Hello World” cell phone application

To write your first Hecl program, open a text editor, and type the following program into a file—I'll call it hello.hcl:

proc HelloEvents {cmd form} {
    [lcdui.alert -text "Hellllllllooooo, world!" -timeout forever] setcurrent
}
set form [lcdui.form -title "Hello world" -commandaction HelloEvents]
set cmd [lcdui.command -label "Hello" -longlabel "Hello Command" -type screen]
$form setcurrent
$form addcommand $cmd
$form append [lcdui.stringitem -label "Hello" -text "World"]
Not bad—8 lines of code, and most of it's pretty clear just from looking at it. I'll go through it line by line, so you understand exactly what's happening.

1. The first bit of code, that starts with proc HelloEvents, defines a “procedure”: in other words a function called HelloEvents. When this function is called, it creates an “alert”—think of it as a pop-up message telling you something important. -timeout forever tells the message to stay on the screen until the user dismisses it.
2. The second command defines a form, with the command lcdui.form, with the title of “Hello World”, and connected to the HelloEvents proc. What this connection means is that when any commands associated with the form are activated by the user, this procedure is called to handle them. The code set form stores the form object in the variable form, so that it can be referenced later.
3. The following line creates a command that can be activated by the user. It has the label “Hello”, and is stored in the variable cmd. I use the screen type for the command, which is used for user defined commands. There are some predefined types such as “exit” and “back”.
4. $form setcurrent references the previously created form, and tells Hecl to display it on the screen.
5. The addcommand subcommand (you could also think of it as a “method”, like in an object oriented language) attaches the command I created above to the form. This makes the command visible in the form.
6. Finally, I display a string on the form with the lcdui.stringitem command. On most phones, the -label text is displayed in bold, and the -text text is displayed next to it.

That's it! Now, to transform the code into a cell phone application, run a command:

java -jar jars/JarHack.jar jars/cldc1.1-midp2.0/Hecl.jar ~/ Hello hello.hcl
This is all it takes—this command takes the existing Hecl.jar file, replaces the Hecl script inside it with our newly created hello.hcl script, and creates the resulting Hello.jar in your home directory (referenced as ~/ in the command above). Now, I can run the code in the emulator to see the application (figure 2).
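JarHack's job, in essence, is to copy a jar while swapping one entry. A rough, hypothetical illustration of the idea in Python—this is not JarHack's actual code, and the entry name script.hcl is an assumption based on where the demo script lives in the distribution:

```python
import io
import zipfile

def replace_jar_entry(jar_bytes, entry_name, new_contents):
    """Return a copy of the jar (a zip file) with one entry replaced."""
    out = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as src, \
         zipfile.ZipFile(out, "w") as dest:
        for item in src.infolist():
            data = new_contents if item.filename == entry_name else src.read(item)
            dest.writestr(item, data)
    return out.getvalue()

# Build a toy "Hecl.jar" in memory and swap its script, as JarHack does on disk.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("script.hcl", "old demo script")
    z.writestr("META-INF/MANIFEST.MF", "Manifest-Version: 1.0\n")
new_jar = replace_jar_entry(buf.getvalue(), "script.hcl", "proc Hello {} {}")
```

Since a .jar is just a zip archive, any zip tool can perform the same swap; JarHack simply packages the operation conveniently.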
Figure 2: Hecl Hello World screenshot

Highlighted, from the top, are the form's -title, the stringitem, and in the lower right corner, the command labeled Hello. If you press the “hello” button, the code in HelloEvents is executed, and an “alert” is popped up onto the screen, and stays there until you hit the “Done” button.
Installing the code on your phone

While creating an application is very easy, unfortunately, installing it on a phone is not; there isn't much that Hecl can do to ease that process, which is different for each phone. On Linux, for my Nokia telephone, I use the gammu program to transfer programs to my phone, like so:

gammu nothing --nokiaaddfile Application Hecl

Another method that may work better across different phones is to use the phone's browser to download and install the application, by placing the .jar and .jad files on a publicly accessible web server, and accessing the .jad file. Note that this will likely cost money in connection charges!
Next steps—Shopping List application

So far so good. Next, I'll create a small application that you can interact with to do something useful. It's a simplified version of the shopping list that can be found here. The theory of operation behind this application is simple: typing a shopping list into a mobile phone is pretty painful—it's much better to do the data entry via a web page, and then fetch the list with the mobile phone application. For this tutorial, I've created a simple list on the ShopList web site, with the PIN number 346764, which can be viewed here. Feel free to create your own shopping lists—the site costs nothing to use. The cell phone application works like so: by entering the PIN, it downloads the list of items and displays them on the phone screen as a series of checkboxes. Have a look at the code to do this:

# Process events associated with the shopping list screen.
proc ShopListEvents {exitcmd backcmd cmd shoplist} {
    if { eq $cmd $exitcmd } {
        midlet.exit
    } elseif { eq $cmd $backcmd } {
        global shopform
        $shopform setcurrent
    }
}

# Create a new shopping list screen and fetch the list.
proc MakeList {exitcmd backcmd pin} {
    set url "http://shoplist.dedasys.com/list/fetch/${pin}"
    # Fetch the data, and retrieve the data field from the results hash.
    set data [hget [http.geturl $url] data]
    if { eq $data "PIN NOT FOUND" } {
        [lcdui.alert -type warning \
            -title "Pin Not Found" \
            -timeout forever \
            -text "The PIN $pin was not found on shoplist.dedasys.com"] setcurrent
        return
    }
    set shoplist [lcdui.list -title "Shopping List" \
        -type multiple]
    foreach e [split $data \n] {
        $shoplist append $e
    }
    $shoplist addcommand $exitcmd
    $shoplist addcommand $backcmd
    $shoplist setcurrent
    $shoplist configure -commandaction \
        [list ShopListEvents $exitcmd $backcmd]
}

# Process events associated with the main form.
proc ShopFormEvents {backcmd exitcmd pinfield fetchcmd cmd shopform} {
    if { eq $cmd $exitcmd } {
        midlet.exit
    } elseif { eq $fetchcmd $cmd } {
        MakeList $exitcmd $backcmd \
            [$pinfield cget -text]
    }
}

# The action starts here...

# Create a generic back command.
set backcmd [lcdui.command \
    -label Back \
    -longlabel Back -type back -priority 1]

# Create an exit command.
set exitcmd [lcdui.command \
    -label Exit \
    -longlabel Exit -type exit -priority 2]
# Create the form.
set shopform [lcdui.form -title "Shopping List"]
set pinfield [lcdui.textfield \
    -label "shoplist.dedasys.com PIN:" \
    -type numeric]
set fetchcmd [lcdui.command -label "Fetch" \
    -longlabel "Fetch Shopping List" \
    -type screen -priority 1]

$shopform append $pinfield
$shopform addcommand $exitcmd
$shopform addcommand $fetchcmd
$shopform setcurrent
$shopform configure -commandaction \ [list ShopFormEvents $backcmd $exitcmd $pinfield $fetchcmd]
This is certainly more complex than the first example, but the general pattern is the same—screen widgets and items are created, displayed, and procs are called to deal with commands. As I mentioned previously, commands with specific, predefined tasks have their own types, as you can see with the back and exit commands, which are respectively of types “back” and “exit”. After the two commands are defined, I create a form and add a textfield to it. By specifying -type numeric for the textfield, I indicate that it is only to accept numbers—no letters or symbols. After creating the Fetch command, I append the textfield to the form (or else it wouldn't be visible), add the two commands to the form, and then, with setcurrent, make the form visible. The last line of code configures the form to utilize the ShopFormEvents proc to handle events. The list argument warrants further explanation: Hecl, like many programming languages, has a global command that could be used in the various procs that utilize the back and exit commands—you could simply say global backcmd, and then the $backcmd variable would be available in that procedure. However, using global variables all over the place gets kind of messy, so what I want to do is pass in everything that the proc might need, and I do so by creating a list: ShopFormEvents $backcmd $exitcmd $pinfield $fetchcmd. You can see that these correspond to the arguments that the proc takes—proc ShopFormEvents {backcmd exitcmd pinfield fetchcmd cmd shopform}—except for the last two, which Hecl automatically passes in. cmd is the command that was actually called, and shopform is of course the form that the proc was called with. By comparing $cmd with the various commands that are available, it's possible to determine which command called the proc, and act accordingly.
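Readers coming from other languages may recognize this pattern: Hecl's [list proc arg1 arg2 …] callback is essentially partial application—pre-binding some arguments and letting the framework supply the rest at call time. A hypothetical Python analogue using functools.partial (the names mirror the Hecl procs but the return values are made up for illustration):

```python
from functools import partial

def shop_form_events(backcmd, exitcmd, pinfield, fetchcmd, cmd, form):
    """The first four arguments are pre-bound when the handler is built;
    cmd and form arrive at call time, as Hecl appends them automatically."""
    if cmd == exitcmd:
        return "exit"
    if cmd == fetchcmd:
        # pinfield stands in for [$pinfield cget -text] in the Hecl version.
        return ("fetch", pinfield)
    return None

# Bind everything the handler needs, leaving cmd and form for the framework.
handler = partial(shop_form_events, "back", "exit", "346764", "fetch")
result = handler("fetch", "shopform")
```

The design choice is the same in both languages: instead of reaching for globals inside the handler, the handler's dependencies travel with it.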
Now, let's build it and run it:

java -jar jars/JarHack.jar jars/cldc1.1-midp2.0/Hecl.jar ~/ ShopList shoplist.hcl
/opt/WTK2.5.1/bin/emulator -classpath ShopList.jar Hecl
Figure 3: Initial shoplist form
At this point, you enter the PIN number (346764), and press the Fetch button. This command executes the code in MakeList. The first thing it does is attempt to fetch the data from the shoplist site, using the http.geturl command. Since this command returns a hash table, in order to get at the data returned, I use the hget command to access the “data” element. If the PIN was not available on the server, an error message is returned, and the user is returned to the first screen. Otherwise, a list of checkboxes is created with lcdui.list, by specifying “multiple” as the type. Since the shopping list is sent “over the wire” (so to speak…) as a list of lines, all I have to do to add it to the display is split it by lines with the split command, and then iterate over that list with foreach. The result looks like that displayed in figure 4.
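The data-handling core of MakeList—check for the server's sentinel string, then split the payload into one item per line—can be sketched independently of the GUI. A hypothetical Python version (the function name and error handling are mine, not part of Hecl or the ShopList protocol beyond what the article describes):

```python
def make_list(data):
    """Mimic MakeList's data handling: reject the 'PIN NOT FOUND'
    sentinel, otherwise split the newline-separated list into items."""
    if data == "PIN NOT FOUND":
        raise ValueError("PIN not found on the server")
    return [line for line in data.split("\n") if line]

items = make_list("milk\nbread\ncoffee")
```

Each returned item would then become one checkbox in the lcdui.list, just as foreach appends each line in the Hecl code.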
Figure 4: Shopping List

And there you have it, a network-based shopping list in less than 100 lines of code. Of course, there is room for improvement. For instance, in the production version of this shopping list application, RecordStore (in Hecl, the rms.* commands make this functionality available) is utilized to save the list and its state between invocations of the program, so that you can leave the application, run it again, and find the list as you left it. Support for multiple lists might also be handy. Of course, this tutorial barely scratches the surface. Hecl has a number of other GUI commands, and is a complete programming language that can do some interesting and dynamic things. If you're curious, the best way to learn more is to have a look at the Hecl web site and sign up for the Hecl mailing list on SourceForge.
Bibliography

• The Hecl web site
• The Hecl mailing list
• Java Micro Edition
• The Design and Implementation of Hecl
Biography

David N. Welton lives in Innsbruck, Austria, with his wife Ilenia, after a number of years of living in Padova, Italy. His personal web site is welton.it (http://www.welton.it/davidw/) and his business web site is dedasys.com (http://www.dedasys.com). He has been involved with the Debian project since 1997, the Apache Software Foundation since 2001, and generally loves working with open source software.
Copyright information

Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved.
Source URL: http://www.freesoftwaremagazine.com/articles/creating_a_simple_application_with_hecl
Published on Free Software Magazine (http://www.freesoftwaremagazine.com)
Virtualization in OpenSolaris

Virtualization techniques in OpenSolaris

By Rami Rosen

Recently there's been a lot of news about OpenSolaris, more specifically in reference to the great progress made by its virtualization technologies. In this article, I will examine some of these technologies, and compare them with the state of the art on other platforms.
Zones

OpenSolaris' Zones is a mechanism that provides isolated environments with a subset of the host operating system's privileges, allowing applications to run within the zone without any modifications (Xen is also capable of this). This makes zones useful for server consolidation, load balancing and much more. Each zone has a numeric ID and a unique name; the global zone has ID 0, is always running and cannot be halted.

There are two user space tools for zone configuration, creation and management: zonecfg and zoneadm; these tools use a lightweight IPC (Inter Process Communication) mechanism called doors to communicate with the kernel, which is implemented as a virtual file system (doorfs). When using doors, context switches are executed using a unique synchronization mechanism called shuttle, instead of through the kernel dispatcher; this allows faster transfer of control between kernel threads. I should mention that Linux does not have a doors IPC system, though there was an attempt to write one by Nikita Danilov in 2001; this project can be found on sourceforge.net (Doors for Linux).

Some operations are not allowed in a zone: mknod from inside a zone, for example, will return mknod: Not owner; the creation of raw sockets is also prohibited, with the one exception of socket(AF_INET,SOCK_RAW,IPPROTO_ICMP) (which is permitted in order to allow zones to perform ping). It's worth noting that zones can modify the attributes of a device (such as its permissions) but cannot rename it. All zoneadmd daemons run in the global zone, and each zone has a zoneadmd process (used for state transitions) assigned to it. Processes running in one zone cannot affect or see processes in other zones: they can affect or see only processes within their own zone. A zone can be in one of the following states: configured, installed, ready, running, shutting down or down.
• Configured: configuration was completed and committed
• Installed: the packages have been successfully installed
• Ready: the virtual platform has been established
• Running: the zone booted successfully and is now running
• Shutting down: the zone is in the process of shutting down
• Down: the zone has completed the shut down process and is down
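The states above form a simple life cycle (see figure 1). As an illustration only—this is not Solaris code, and the transition table is a simplification of the real state machine—the legal moves can be modelled like this:

```python
# Simplified zone life cycle: each state maps to the states directly
# reachable from it (e.g. installing moves "configured" to "installed").
TRANSITIONS = {
    "configured": {"installed"},
    "installed": {"ready", "configured"},
    "ready": {"running", "installed"},
    "running": {"shutting down", "ready"},
    "shutting down": {"down"},
    "down": {"installed"},
}

def can_transition(current, target):
    """Return True if a zone may move directly from current to target."""
    return target in TRANSITIONS.get(current, set())
```

The key property the table captures is that a zone cannot jump straight from configured to running; it must pass through the intermediate states.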
Figure 1: Zones State Machine

Another interesting feature of zones is that they can be bound to a resource pool; Solaris Containers is the name for zones which use resource management.
Branded Zones

Branded Zones enable you to create non-global zones which contain foreign operating environments. The lx brand provides a Linux environment under Solaris; an lx zone can be created by specifying set brand=lx when configuring the zone with the zonecfg command. The lx zone only supports user level applications; therefore, you cannot use Linux device drivers or kernel modules—including file systems—in an lx zone. Implementing lx zones required a lot of additions and modifications: for example, executing an ELF binary in an lx zone is performed by the lx brand ELF handler. In Linux, system calls are made by calling interrupt 0x80, whereas Solaris usually uses sysenter or syscall instructions for a system call on x86, while in earlier versions it was done with lcall instructions (on SPARC, system calls are initiated by traps). Since Solaris did not have a handler for interrupt 0x80, the Brandz project was started to add such a handler; this handler, in fact, simply delegates the call to the handler in the brand module, where it is eventually executed. The lx brand is available only for i386/x86_64 systems: you cannot run Linux applications on SPARC using the lx brand. You will often encounter the term “Solaris Containers for Linux Applications”, or the acronym “SCLA”, as a synonym for branded lx zones. The branded zone was integrated into the mainline Solaris tree in December 2006 (OpenSolaris brandZ project).
CrossBow and IP Instances

CrossBow is a new OpenSolaris network virtualization project that allows you to create multiple virtual NICs (VNICs) from a single physical NIC. It also enables you to control QoS parameters, making it possible to assign specific bandwidth allocations and provide different priorities to each virtual NIC, protocol, or service. This can be done by a system administrator (with the dladm and flowadm commands) or by an application using setsockopt(). CrossBow is ideal for server consolidation, the isolation of Solaris Zones, tuning a system's network resources, enhancing security (in the case of a distributed denial of service attack, for example, only the attacked VNIC will be impacted instead of the entire system), and much more. Here is an example of setting VNIC bandwidth:

dladm create-vnic -d bge0 -m 00:01:02:03:04:05 -b 10000
dladm is a utility which administers data links. The network virtualization layer in CrossBow was implemented by changes made to the MAC layer, and by adding a new VNIC pseudo driver. The VNIC pseudo driver appears in the system as if it were a regular network driver, allowing you to run the usual commands (e.g. plumb and snoop). The VNIC pseudo driver
was implemented as a Nemo/GLDv3 MAC driver and it relies on hardware-based flow classification.

IP instances are part of the CrossBow project that uses the flow classification feature of NICs, but also has a solution for NICs without this feature; in the future, almost all 1GB and 10GB NICs will support flow classification. With IP instances, each zone can have its own instance of the kernel TCP/IP stack: each zone will also have its own ARP table and its own IP routing table, IP filter rules table and pfhooks (pfhooks is the OpenSolaris equivalent of Linux's Netfilter hooks). IP instances also enable zones to use DHCP, IPMP and IPSec (IP Security protocol, which is used in VPNs), with each zone having its own IPSec Security Policy Database (SPD) and Security Association (SA). In order to implement IP instances, all global data in the kernel TCP/IP stack which might be modified during runtime was made non-global. For example, a new structure named ip_stack was created for the IP kernel layer (layer 3 in the 7 layer model, the network layer); a new structure named udp_stack was created for the UDP kernel layer (layer 4 in the 7 layer model, the transport layer); and so on. Using IP instances, non-global zones can apply IP filter rules (IP Filter is the OpenSolaris equivalent of iptables in Linux); prior to the CrossBow and IP instances project, this was impossible. IP instances are enabled with set ip-type=exclusive when creating a zone with zonecfg. A non-global zone created without this option will, of course, share its IP instance with the global zone (as was the case before the integration of the IP Instances project). See the OpenSolaris Crossbow project for more information.
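The refactoring described—turning mutable kernel globals into per-instance structures like ip_stack and udp_stack, one per exclusive-IP zone—follows a general pattern worth seeing in miniature. A hypothetical sketch (nothing here is actual OpenSolaris code; the field names are invented for illustration):

```python
class IPStack:
    """Per-zone network state: what used to be kernel globals (routing
    table, ARP table, filter rules) becomes one instance per zone."""
    def __init__(self):
        self.routes = {}
        self.arp_table = {}
        self.filter_rules = []

_stacks = {}

def get_stack(zone_id):
    # Each exclusive-IP zone lazily gets its own, fully independent stack.
    return _stacks.setdefault(zone_id, IPStack())

# Two zones can now hold different default routes without interfering.
get_stack(1).routes["default"] = "10.0.0.1"
get_stack(2).routes["default"] = "192.168.1.1"
```

Once the state is keyed by zone, per-zone filter rules and routing tables fall out naturally, which is exactly what makes IP Filter usable inside non-global zones.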
Xen

Xen in OpenSolaris is a port of the Linux Xen project. It enables us to run OpenSolaris as domain 0 or as a guest (domU). The last update to the Xen project, as of today, was in July 2007. There is HVM support in OpenSolaris Xen; this means that if you have processors with virtualization extensions, you can run unmodified operating systems as guests. The Xen project uses virtual NICs from the CrossBow project, which is discussed in the previous section. There is also support for management tools (virt-manager, virt-install and virsh). For more information about Xen, see:

• OpenSolaris Xen project
• Linux Xen project
• Virtual Machine Manager project

A new platform called i86xpv was prepared for Xen; you can verify that you booted into Xen by running uname -i (you should get i86xpv). New features include PAE for 32 bit Solaris, Xen crash dumps for dom0, better integration with other Solaris network virtualization projects, and more.
Figure 2: virt-manager in Solaris
Conclusion

In this article, I showed the current state of the art of some interesting virtualization techniques in OpenSolaris, many of which enable you to use your hardware more efficiently. It seems that OpenSolaris has made a great effort in this field, and now offers capabilities comparable to other modern operating systems, along with some nice extras.
Biography

Rami Rosen: I am a Computer Science graduate of the Technion, the Israel Institute of Technology, located in Haifa. I work as a Linux and Solaris kernel programmer for a networking start-up. I specialize in virtualization and networking. I give advanced kernel lectures from time to time in local Linux user groups.
Copyright information

Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved.

Source URL: http://www.freesoftwaremagazine.com/articles/virtualization_in_opensolaris
Published on Free Software Magazine (http://www.freesoftwaremagazine.com)
UPS (Uninterruptible Power Supply) installation and configuration

Preventing unscheduled power related downtime

By Ken Leyba

An inexpensive way to prevent unscheduled downtime or data loss due to power problems is with a UPS or Uninterruptible Power Supply. However, a UPS by itself is not enough for proper operation. Hardware, software, and configuration together make up a UPS system that will recover from unexpected power loss or power fluctuations that can damage systems and peripherals.
Introduction

When considering data loss, system downtime and disaster recovery, backup methods are usually the first topic discussed. Yet there are many ways of preventing data loss, including clustering, backup, security and power conditioning, and proper power can prevent an initial disaster from ever occurring. Proper power can be provided in the form of an Uninterruptible Power Supply, or UPS. A UPS has rechargeable batteries to supply emergency power in the event of immediate power loss. If the power loss lasts longer than the batteries can supply, the UPS can signal the server to initiate a power down sequence to shut down properly, preventing data loss. When power is restored the server can return to operation after having made a clean shutdown. Other power related problems can be minimized with the circuitry of a UPS: voltage sags and spikes, brown-outs and line noise (from other machinery like elevators, air conditioners or office equipment) can all be isolated by a UPS. These power related fluctuations can wreak havoc on systems and devices. For a relatively low cost a UPS can prevent downtime due to power anomalies.
Network UPS Tools

The Network UPS Tools (NUT) [1] are a group of tools used to monitor and administer UPS hardware. NUT uses a layered scheme of equipment, drivers, server and clients. The equipment consists of the monitored UPS hardware. Drivers specific to the UPS hardware poll the UPS for status information in the form of variables; the driver programs talk directly to the UPS equipment and run on the same host as the server. The server, upsd, serves data from the drivers to the network. Clients talk to the upsd server and initiate tasks based on the status data. As the name indicates, Network UPS Tools is a network-based UPS system that works with multiple UPSs and systems. One of its many features allows multiple systems to monitor a single UPS without requiring special UPS-sharing hardware. The master/slave relationship synchronizes shutdowns so the slaves can initiate power-down sequences before the master switches off UPS power. This article details the installation and configuration of a single system with a UPS connected to the system's serial port. This is the natural first step of getting NUT installed and configured; if the UPS will supply more than one system, the second and subsequent systems can be configured as slaves. The NUT developers also have a different take on when systems should be powered down: NUT waits until the UPS is both "on battery" and "low battery" before it considers the UPS "critical". This philosophy gets the most out of the UPS batteries and waits until the critical moment to initiate a power-down sequence, just in case the power comes back on line. There is an option to override this behavior if desired with upssched, which can be found in the documentation. With the upssched utility, commands can be invoked based on UPS events.
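NUT's shutdown philosophy can be sketched in a few lines of Python. This is an illustration of the logic only, not NUT's implementation (upsmon itself is written in C), though "OL", "OB" and "LB" are the actual NUT status tokens:

```python
# Simplified sketch of NUT's shutdown philosophy: a UPS is only
# considered "critical" when it is both on battery AND low battery.
# (Illustration only; the real logic lives in upsmon, written in C.)

def is_critical(status_flags):
    """status_flags: set of NUT status tokens, e.g. {"OB", "LB"}."""
    on_battery = "OB" in status_flags
    low_battery = "LB" in status_flags
    return on_battery and low_battery

# On line power: no shutdown.
assert not is_critical({"OL"})
# On battery but charge still healthy: wait, power may return.
assert not is_critical({"OB"})
# On battery and low battery: initiate the power-down sequence.
assert is_critical({"OB", "LB"})
```

This is why a brief outage never triggers a shutdown: only the combination of "on battery" and "low battery" does.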
In typical GNU/Linux fashion, NUT is not the only tool available for monitoring a UPS. Apcupsd [2] is used for power management and control of APC model UPSs. There are also several graphical frontends for workstation-class machines.
Preparing for installation

Prior to installing and using a UPS and its associated software, a few things must be in place. Since the system is going to be shut down, there must be a way to bring it back up when power returns, which means the system BIOS needs to be configured correctly. Most modern BIOSes have an option to power on when main power, now supplied by the UPS, is restored. Server-class BIOSes will most likely support this "power on when power returns" option; if the BIOS does not support it (more common with workstation-class systems), a BIOS update may add it. With servers configured headless, without a monitor or keyboard, there are also settings to ignore keyboard errors. Commonly these systems are administered via SSH or administration utilities like Webmin [3]. The UPS must also be connected to the system with the correct signal cable. With a USB type of UPS this is not a concern, but a UPS that communicates via the serial port needs a signal cable that supports intelligent signaling between the UPS and the system. Check with the UPS vendor for the correct cable, or build a custom cable with information from the Network UPS Tools web site.
Installing NUT

The example system is running a basic install of Debian GNU/Linux 4.0 [4] with an APC Smart-UPS 700. Debian is an excellent, long-term supported GNU/Linux distribution, ideal for small enterprise deployments as well as much larger environments. Different GNU/Linux distributions may install the software and configuration files in different directories. Since the server is configured without a GUI, all commands and configuration are done from the command line as the root user. Using the APT package tool, apt-get, the nut package is easily installed:

# apt-get install nut
The package tool installs the NUT software, documentation, man pages and example configuration files. Debian-specific documentation is found in the /usr/share/doc/nut/ directory, extensive NUT documentation in /usr/share/doc/nut/docs/ and the example configuration files in /usr/share/doc/nut/examples/. Some of the documentation is compressed with gzip, which can be uncompressed or viewed with the zcat utility:

# zcat /usr/share/doc/nut/README.Debian.gz | less
The configuration files exist in the /etc/nut/ directory. The ups.conf configuration file contains the UPS definitions; here the UPS is defined with the [labsvr] entry. The driver and port fields must be defined; the desc field is optional and describes the UPS. Additional UPS definitions can be configured in this file, but this example is a single UPS and server configuration.

[labsvr]
driver = apcsmart
port = /dev/ttyS0
desc = "Lab Server"
The definition name, between the square brackets, is user definable, with the exception of the word default, which is reserved by the Network UPS Tools. The correct driver name for your UPS can be found in the file /usr/share/nut/driver.list. For proper permissions on the serial port, the nut user must be added to the "dialout" group, which is accomplished with the addgroup command. To manually test the configuration and verify it is correct, the upsdrvctl (UPS driver controller) command is used; after verification the driver can be stopped.

# addgroup nut dialout
Adding user `nut' to group `dialout' ...
Done.
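Because ups.conf is a simple INI-style file, it is also easy to read from a script. The helper below is a hypothetical illustration (not part of NUT) showing how the [labsvr] section above could be parsed with Python's standard configparser:

```python
# Hypothetical helper: read a ups.conf-style text and return the
# settings of one UPS section as a plain dict. ups.conf is INI-like,
# so configparser handles it; quoted values keep their quotes, which
# we strip. (Illustrative sketch only, not part of NUT.)
import configparser

def read_ups_section(text, name):
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return {key: value.strip('"') for key, value in parser[name].items()}

conf = """
[labsvr]
driver = apcsmart
port = /dev/ttyS0
desc = "Lab Server"
"""

settings = read_ups_section(conf, "labsvr")
assert settings["driver"] == "apcsmart"
assert settings["port"] == "/dev/ttyS0"
assert settings["desc"] == "Lab Server"
```

A monitoring script could use such a helper to discover which serial port and driver each UPS definition uses.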
# /sbin/upsdrvctl start labsvr
Network UPS Tools - UPS driver controller 2.0.4
Network UPS Tools (version 2.0.4) - APC Smart protocol driver
Driver version 1.99.8, command table version 2.0
Detected SMART-UPS 700 [QS0331213446] on /dev/ttyS0
# /sbin/upsdrvctl stop labsvr
Network UPS Tools - UPS driver controller 2.0.4
Stopping UPS: labsvr
#
Since this example is a single server, the access control list for server communication is minimal. Access control lists are configured in the upsd.conf file. The ACL (access control list) all is defined with a netblock in CIDR format (the old-style address/network format can also be used), and the ACL localhost is defined the same way. The ACCEPT line allows communication to the server for localhost and the REJECT line blocks all other access. As with other access control lists, flow goes from the top down: the ACCEPT is evaluated before the REJECT. If the REJECT line came before the ACCEPT, localhost would match it and be denied access.

ACL all 0.0.0.0/0
ACL localhost 127.0.0.1/32
ACCEPT localhost
REJECT all
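The top-down, first-match behaviour can be sketched in Python (a hypothetical illustration; upsd's real parser is written in C). The final assertion shows why reversing the two rules would lock out localhost:

```python
# First-match evaluation of upsd-style ACLs, sketched to show why
# ACCEPT localhost must come before REJECT all.
# (Hypothetical illustration, not upsd's actual code.)
import ipaddress

ACLS = {
    "all": ipaddress.ip_network("0.0.0.0/0"),
    "localhost": ipaddress.ip_network("127.0.0.1/32"),
}

RULES = [("ACCEPT", "localhost"), ("REJECT", "all")]

def allowed(client_ip, rules=RULES):
    addr = ipaddress.ip_address(client_ip)
    for action, acl_name in rules:  # top-down, first match wins
        if addr in ACLS[acl_name]:
            return action == "ACCEPT"
    return False

assert allowed("127.0.0.1")          # localhost matches ACCEPT first
assert not allowed("192.168.1.10")   # everything else falls through to REJECT
# With the order reversed, REJECT all would match localhost first:
assert not allowed("127.0.0.1", [("REJECT", "all"), ("ACCEPT", "localhost")])
```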
The upsd.users configuration file defines the users that have access to administrative commands: each section begins with a user name in brackets and continues to the next bracketed user or the end of the file. The password field defines the user's password. The allowfrom field grants access based on the user's source IP address; its values are the ACLs defined in upsd.conf. The upsmon field is set to either master or slave to allow the upsmon process to work.

[monmaster]
password = p455w0rd
allowfrom = localhost
upsmon = master
The final configuration file, upsmon.conf, defines which systems the upsmon process will monitor, as well as how to shut down systems when necessary. The MONITOR line defines the UPS to monitor. The first field is the UPS to monitor, in this case labsvr@localhost. The second field is the power value: the number of power supplies the UPS feeds, normally set to 1 in simple configurations. The next two fields are the user name and password previously defined in upsd.users. The last field is either master or slave. A master process runs on the system that is plugged in directly and communicates with the UPS; a slave process runs on a system that gets power from the UPS but doesn't communicate directly with it.

MONITOR labsvr@localhost 1 monmaster p455w0rd master
POWERDOWNFLAG /etc/killpower
SHUTDOWNCMD "/sbin/shutdown -h +0"
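The five fields of the MONITOR directive can be made concrete with a small parser (a hypothetical helper for illustration, not part of NUT):

```python
# Split a upsmon.conf MONITOR directive into its five fields:
# system, power value, user name, password, and master/slave role.
# (Hypothetical illustration of the line's structure, not NUT code.)

def parse_monitor(line):
    keyword, system, powervalue, user, password, role = line.split()
    assert keyword == "MONITOR"
    return {
        "system": system,            # e.g. labsvr@localhost
        "powervalue": int(powervalue),
        "user": user,
        "password": password,
        "role": role,                # "master" or "slave"
    }

m = parse_monitor("MONITOR labsvr@localhost 1 monmaster p455w0rd master")
assert m["system"] == "labsvr@localhost"
assert m["powervalue"] == 1
assert m["role"] == "master"
```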
The POWERDOWNFLAG directive defines a file name to be created in master mode when the UPS needs to be powered off; this file is cleared when the system comes back up. Finally, SHUTDOWNCMD is the actual shutdown command to be performed, enclosed in quotes. After the configuration of the UPS and Network UPS Tools is complete, a couple of housekeeping tasks remain. Since several of the configuration files contain user names and passwords, their permissions are set so only the root user and the nut group can read them, with the following commands:

# chown root:nut /etc/nut/*
# chmod 640 /etc/nut/*
With Debian GNU/Linux, two items must also be changed in the file /etc/default/nut: START_UPSD and START_UPSMON are changed from "no" to "yes".

START_UPSD=yes
START_UPSMON=yes
The nut init.d script can be run to start the UPS monitor tools, and /var/log/syslog is checked to verify everything is running correctly.

# /etc/init.d/nut start
Starting Network UPS Tools: upsdrvctl upsd upsmon.
# tail /var/log/syslog
Sep 01 13:36:48 labserver apcsmart[2519]: Startup successful
Sep 01 13:36:48 labserver upsd[2520]: Connected to UPS [labsvr]: apcsmart-ttyS0
Sep 01 13:36:50 labserver upsd[2521]: Startup successful
Sep 01 13:36:50 labserver upsmon[2523]: Startup successful
Sep 01 13:36:50 labserver upsd[2521]: Connection from 127.0.0.1
Sep 01 13:36:50 labserver upsd[2521]: Client monmaster@127.0.0.1 logged into UPS [labsvr]
To quickly poll the status of a UPS server, the upsc UPS client utility is used. The first example displays the value of the ups.status variable, which is OL ("on line"), meaning the UPS labsvr@localhost is on line power; if the value were OB ("on battery"), the UPS would be supplying battery power to the server. The second invocation displays all available variables and their values for labsvr@localhost.

# upsc labsvr@localhost ups.status
OL
# upsc labsvr@localhost
battery.alarm.threshold: 0
battery.charge: 100.0
battery.charge.restart: 00
battery.date: 08/02/03
battery.packs: 000
battery.runtime: 7860
battery.runtime.low: 120
battery.voltage: 27.60
battery.voltage.nominal: 024
driver.name: apcsmart
driver.parameter.port: /dev/ttyS0
driver.version: 2.0.4
driver.version.internal: 1.99.8
input.frequency: 60.00
input.quality: FF
input.sensitivity: H
input.transfer.high: 132
input.transfer.low: 103
input.transfer.reason: S
input.voltage: 120.2
input.voltage.maximum: 121.5
input.voltage.minimum: 119.6
output.voltage: 120.2
output.voltage.target.battery: 115
ups.delay.shutdown: 180
ups.delay.start: 000
ups.firmware: 50.14.D
ups.id: UPS_IDEN
ups.load: 008.3
ups.mfr: APC
ups.mfr.date: 08/02/03
ups.model: SMART-UPS 700
ups.serial: QS0331213446
ups.status: OL
ups.temperature: 037.8
ups.test.interval: 1209600
ups.test.result: NO
#
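The name: value lines printed by upsc are easy to consume from a monitoring script. The sketch below (a hypothetical helper, not part of NUT) parses them into a dictionary; in practice you would feed it the captured output of upsc labsvr@localhost:

```python
# Parse upsc's "name: value" output lines into a dictionary,
# handy for custom monitoring scripts.
# (Hypothetical helper for illustration, not part of NUT.)

def parse_upsc(output):
    values = {}
    for line in output.splitlines():
        if ":" in line:
            name, _, value = line.partition(":")
            values[name.strip()] = value.strip()
    return values

# A few lines of sample output, as printed by upsc above:
sample = """\
battery.charge: 100.0
ups.load: 008.3
ups.status: OL
"""

info = parse_upsc(sample)
assert info["ups.status"] == "OL"            # on line power
assert float(info["battery.charge"]) == 100.0
```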
Testing power loss is accomplished with the upsdrvctl utility. The value of ups.delay.shutdown is the amount of time, in seconds, the UPS will wait before shutting down; in the above listing the value is 180 seconds. This value can be changed with the upsrw utility, though a valid user with proper permissions must be defined in upsd.users to change variable values. Refer to the upsd.users and upsrw man pages for more information. The 180 second delay is enough time to allow a proper shutdown of this system.

# upsdrvctl shutdown labsvr; shutdown -h +0
After this command is run, upsdrvctl tells the UPS to issue its shutdown sequence, and the second command tells the system to shut down immediately. The server shuts down and, after the 180 second delay, the UPS shuts down and powers back up. If the BIOS in the server is set correctly, the server system comes back on line when the UPS supplies line power again. In addition to the extensive documentation installed in /usr/share/doc/nut/ and on the NUT web site, the man pages contain excellent, detailed information on the utilities, configuration files and drivers.
Conclusion

Power interruptions are a common problem in many areas and can cause the eventual failure of components and systems. Having an uninterruptible power supply to prevent damage and initiate proper power-down sequences can save many headaches as well as avert disaster. Simply plugging in a UPS is not enough, though: the proper UPS, server cabling and motherboard BIOS settings are all part of a reliable system.
Bibliography
[1] Network UPS Tools
[2] Apcupsd, APC UPS Daemon
[3] Webmin, web-based system administration
[4] Debian GNU/Linux
Biography

Ken Leyba: Ken has been working in the IT field since the early 80s, first as a hardware tech whose oscilloscope was always by his side, and currently as a system administrator. Supporting both Windows and Linux, Windows keeps him consistently busy while Linux keeps his job fun.
Copyright information This article is made available under the "Attribution-NonCommercial" Creative Commons License 3.0 available from http://creativecommons.org/licenses/by-nc/3.0/. Source URL: http://www.freesoftwaremagazine.com/articles/ups_installation_and_configuration
Issue 20
Published on Free Software Magazine (http://www.freesoftwaremagazine.com)
Gaia Ajax Widgets: no-JavaScript Ajax
Tutorial about how to use Gaia Ajax Widgets with ASP.NET on Mono
By Thomas Hansen

Imagine you need to create an Ajax application, and you're scratching your head in frustration: you don't understand prototype.js, using ASP.NET Ajax feels like building a car with scissors and paperclips, and you don't know enough Java to use GWT. If so, Gaia Ajax Widgets could be the answer: Gaia abstracts away JavaScript, feels like normal ASP.NET, works on both ASP.NET and Mono, and it's free software. The GPL version of Gaia Ajax Widgets is complete and also comes with support (if you register), but you must purchase a commercial license from Frost Innovation to develop proprietary applications with it. Read the details here: Gaia License Explained
Gaia Ajax Widgets puts the fun back into Ajax

Gaia Ajax Widgets is a GPL-licensed library (with a commercial license available from the company behind it). It is a 100% Hijax library, which means that you don't need to write any JavaScript to use it: you use components as if they were Qt widgets, and you declare which events you wish to handle by setting the proper handlers. Before you can start using it, you must download Gaia Ajax Widgets. For this tutorial, I will be using MonoDevelop running on GNU/Linux and the GPL version of Gaia Ajax Widgets. One word of warning, though: the download link above is for the Q2 2007 release of Gaia Ajax Widgets. If you register at AjaxWidgets.com you will receive update notifications and have an easier way to get updated versions of Gaia. You will also be able to post in our forums and get free support. Though you're not forced to register, AjaxWidgets encourages you to do so, and promises to send only 8 to 10 emails per year with "extras" in the form of links to tutorials, videos, etc.
Creating a new project
Create new project
Create a new Web Project in MonoDevelop and add a reference to the Gaia.WebWidgets.dll file, which should be underneath the Library folder of the Gaia package.
Add a reference to Gaia
Default.aspx

Then, open your Default.aspx web form and register the Gaia Widgets assembly using the following code:

<%@ Register Assembly="Gaia.WebWidgets" Namespace="Gaia.WebWidgets" TagPrefix="gaia" %>
And add a new Gaia Button using the following code: <gaia:Button id="button1" runat="server" Text="Show Window" OnClick="onButtonClick" />
Also add a Gaia Window like this: <gaia:Window id="window1" runat="server" Visible="false">Hello World!</gaia:Window>
Notice that the Window's Visible property is set to false. Also notice that the button's OnClick event is attached to an event handler called onButtonClick. This handler, found in the code-behind file Default.aspx.cs, should look like this:

public void onButtonClick(object sender, EventArgs e)
{
    window1.Visible = true;
}
Default.aspx.cs

Now make sure that the Gaia library file Gaia.WebWidgets.dll is located under the bin folder of the web site; if not, copy it there. Using a command shell, change to the directory of the web site and run xsp2. This starts a new instance of the XSP2 web server, which is included with Mono and listens on port 8080. Now open up Firefox on your GNU/Linux system and type the address http://localhost:8080/Default.aspx. You should now be running your Gaia application on GNU/Linux.
Final result

C# is similar to Java in that it compiles to platform-independent bytecode. This means the code you have just created can run on GNU/Linux, 64-bit CPUs, 32-bit CPUs, Mac OS X, Windows, etc.
A nicer look

If you'd like a prettier skin for your Gaia Window, go to the folder named Skin in the TestWebWidgets folder. If you include the mac_os_x.css file in your Default.aspx file and add the CssClass mac_os_x to your Window object, you will see a better-looking Gaia window.
Mac OS X Skin
Final words

Gaia is 100% Mono and ASP.NET compatible. This means you can use any of the .NET and Mono programming languages: this example assumes you've built your web application using C#, but you could just as easily use VB.NET, IronPython, Boo, JScript, C++, Eiffel, etc. There are about 50 languages available, although not all of them exist on both Mono and .NET; so C# is the "safe bet" if you must have both Mono and ASP.NET compatibility.
Biography

Thomas Hansen: Thomas Hansen is a developer working for Frost Innovation, the company behind Gaia Ajax Widgets. He has been working with Ajax since before it even had a name and has been programming since he was 8 years old. He has been working on GUI libraries since 2002, and has been leading open source GUI projects ever since.
Copyright information Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is available at http://www.gnu.org/copyleft/fdl.html. Source URL: http://www.freesoftwaremagazine.com/articles/gaia_ajax_widgets