ZFS and Ubuntu Home Server howto

May 21, 2013 By Latentexistence

A while ago I bought myself a HP Microserver – a cheap, low-power box which has four 3.5″ drive bays and an optical drive bay. I bought it to run as a home server which would back up all my data as well as serve up video, music and photos around the house. I had decided before buying that I wanted to store my data using the ZFS filesystem, since ZFS was the only filesystem at the time which offered guaranteed data integrity. (It is still the only release-quality filesystem which offers this, although BTRFS is catching up.) I have become almost obsessed with ZFS because of the overwhelming benefits it offers, but I won't go into them here. Instead I recommend watching this talk by the creators of ZFS (Part 1, part 2, part 3) or reading through the accompanying slides. [PDF]
I meant at the time to write about how I set up my system but never got around to it, so here is what I did in the end. The server arrived with 2GB of ECC RAM and a 250GB hard disk. I eventually upgraded this to 8GB RAM and added two 2TB hard disks, although I started with one 2TB disk and added the second as a mirror when finances allowed. ZFS checks the integrity of the stored data through checksums, so it can always tell you when there is data corruption, but it can only silently heal the problem if it has either a mirror or a RAID-Z/Z2 layout (equivalent to RAID 5 or 6).

ZFS is available as part of FreeNAS, FreeBSD, Solaris, and a number of Solaris derivatives. I initially installed FreeNAS 8. FreeNAS runs from a USB stick, which I put in the handy internal USB socket, but while that was great for storing and sharing files it was not so good for running BitTorrent on or for connecting over SSH from outside the house. I also tried Solaris, but I ended up going back to what I know and using Ubuntu Linux 12.04 LTS. Although licensing prevents ZFS from being included with Linux, it is trivial to add it yourself. I have assumed a certain level of knowledge on the reader's part; if this doesn't make much sense to you then you might be better off with FreeNAS or an off-the-shelf NAS box.

After installing Ubuntu and fully updating it, I did the following:

sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs

…and that was it. (It is a lot more complicated to use ZFS as your root filesystem on Linux, so I don't.)

Next, I had to set up the ZFS storage pool. The creators of ZFS on Linux recommend that you use disk names starting with /dev/disk/by-id/ rather than /dev/sda, /dev/sdb etc., as it is more reliable, so look in that folder to see what disk names you have:

ls -l /dev/disk/by-id/

The usual example pool name is tank, but I strongly recommend that you use something else. To create a single-disk storage pool with no mirror:

sudo zpool create tank /dev/disk/by-id/scsi-SATA_ST2000DM001-9YN_S2409P51

To add a mirror later you would do:

sudo zpool attach tank /dev/disk/by-id/scsi-SATA_ST2000DM001-9YN_S2409P51 /dev/disk/by-id/scsi-SATA_ST2000DM001-9YN_Z1E0EPBC

Or, if starting with two disks in a mirror, your initial command would be:

sudo zpool create tank mirror /dev/disk/by-id/scsi-SATA_ST2000DM001-9YN_S2409P51 /dev/disk/by-id/scsi-SATA_ST2000DM001-9YN_Z1E0EPBC

The system will create your storage pool, create a filesystem of the same name and automatically mount it, in this case under /tank. "sudo zpool list" will show you that a pool has been created, as well as the raw space in the pool and the space available. "sudo zpool status" will show you the disks that make up the pool.
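One thing worth knowing: ZFS only verifies checksums when blocks are actually read, so corruption in rarely-read files can sit undetected. A scrub forces ZFS to read and verify every block in the pool and repair anything it can from the mirror. A minimal sketch, assuming the pool name tank used above:

sudo zpool scrub tank
sudo zpool status tank

The first command starts the scrub in the background; the second shows its progress and any errors found. Running a scrub from a weekly cron job is a common habit, though how often you scrub is a judgment call.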
In my example above you can also see that ZFS has caught and repaired 2.1MB of corrupted data, without me even being aware of it.

While you can just start storing data in your newly-created filesystem (in /tank in our example), that isn't the best way to use ZFS. Instead you should create additional filesystems within your storage pool to hold different types of data. You use the zfs command to do this. Some examples:

sudo zfs create tank/music
sudo zfs create tank/videos
sudo zfs create tank/backups

The above examples will create filesystems in the pool and will automatically mount them as subfolders of the main filesystem. Note that the name is in the format pool/filesystem and there is no leading slash on the pool name. Check that your filesystems have been created:

sudo zfs list

Now we need to share the data, otherwise it's not much of a server. ZFS will automatically manage sharing through NFS (Unix/Linux) or SMB (Windows), but you must first install the server software:

sudo apt-get install nfs-kernel-server samba

You don't need to configure much because ZFS handles most settings for you, but you might wish to change the workgroup name for Samba in /etc/samba/smb.conf. To share a ZFS filesystem you change a property using the zfs command. For Windows clients:

sudo zfs set sharesmb=on tank/music
sudo zfs set sharesmb=on tank/videos
For Unix/Linux clients:

sudo zfs set sharenfs=on tank/backups

Or you can share the whole lot at once by sharing the main pool. The sub-filesystems will inherit the sharing property unless you turn them off:

sudo zfs set sharesmb=on tank
sudo zfs set sharesmb=off tank/music

You can check whether your filesystems are shared or not:

sudo zfs get sharesmb,sharenfs

At this point you should be able to see your shares from other computers on the network, but you probably won't have permission to access them. You will need to ensure that the file permissions and owners are set correctly, and you will also have to set a password for use when connecting through Samba. If your username is steve then use:

sudo smbpasswd -a steve

to set your Samba password, and make sure that steve has permission to access all the files in your shared folders. This command inside your shares should help (change the name, obviously):

cd /tank/videos
sudo chown -R steve:steve *

I hope this will be helpful if you are trying to set up a ZFS server. Let me know in the comments.
HOWTO install Ubuntu to a Native ZFS Root Filesystem

These instructions are for Ubuntu. The procedure for Debian, Mint, or other distributions in the DEB family is similar but not identical.
System Requirements

• 64-bit Ubuntu Live CD. (Not the alternate installer, and not the 32-bit installer!)
• AMD64 or EM64T compatible computer. (ie: x86-64)
• 8GB disk storage available.
• 2GB memory minimum.
Computers that have less than 2GB of memory run ZFS slowly. 4GB of memory is recommended for normal performance in basic workloads. 16GB of memory is the recommended minimum for deduplication. Enabling deduplication is a permanent change that cannot be easily reverted.
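Deduplication is enabled per dataset, but the deduplication table lives in the pool, which is why simply switching the property off again does not undo the cost. Once a pool exists, a hedged way to check whether dedup is in play, using standard property queries:

# zfs get -r dedup rpool
# zpool get dedupratio rpool

A dedupratio of 1.00x is what you should see on a pool where dedup has never been enabled.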
Latest Tested And Recommended Version

• Ubuntu 12.04.2 Precise Pangolin
• spl-0.6.1
• zfs-0.6.1
Step 1: Prepare The Install Environment

1.1 Start the Ubuntu LiveCD and open a terminal at the desktop.

1.2 Input these commands at the terminal prompt:

$ sudo -i
# apt-add-repository --yes ppa:zfs-native/stable
# apt-get update
# apt-get install debootstrap ubuntu-zfs
1.3 Check that the ZFS filesystem is installed and available:

# modprobe zfs
# dmesg | grep ZFS:
ZFS: Loaded module v0.6.1-rc14, ZFS pool version 5000, ZFS filesystem version 5
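If the module fails to load, a hedged first check is whether DKMS actually built the SPL and ZFS modules for the running kernel:

# dkms status
spl, 0.6.1, 3.2.0-40-generic, x86_64: installed
zfs, 0.6.1, 3.2.0-40-generic, x86_64: installed

The version and kernel strings shown here are illustrative; what matters is that both modules report "installed" for the kernel you are booted into.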
Step 2: Disk Partitioning

This tutorial intentionally recommends MBR partitioning. GPT can be used instead, but beware of UEFI firmware bugs.

2.1 Run your favorite disk partitioner, like parted or cfdisk, on the primary storage device. /dev/disk/by-id/scsi-SATA_disk1 is the example device used in this document.

2.2 Create a small MBR primary partition of at least 8 megabytes. 256MB may be more realistic, unless space is tight. /dev/disk/by-id/scsi-SATA_disk1-part1 is the example boot partition used in this document.

2.3 On this first small partition, set type=BE and enable the bootable flag.

2.4 Create a large partition of at least 4 gigabytes. /dev/disk/by-id/scsi-SATA_disk1-part2 is the example system partition used in this document.

2.5 On this second large partition, set type=BF and disable the bootable flag.

The partition table should look like this:

# fdisk -l /dev/disk/by-id/scsi-SATA_disk1

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1           1        8001   be  Solaris boot
/dev/sda2               2        1305    10474380   bf  Solaris
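For reference, the same layout can also be scripted instead of created interactively. This is an untested sketch using the input format of a modern sfdisk (util-linux 2.26 or newer); the sfdisk shipped with Precise uses an older syntax, so verify against your version before running it:

# sfdisk /dev/disk/by-id/scsi-SATA_disk1 <<EOF
,256MiB,be,*
,,bf,-
EOF

The first input line creates the small bootable type-BE boot partition; the second assigns the rest of the disk to the type-BF system partition.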
Remember: Substitute scsi-SATA_disk1-part1 and scsi-SATA_disk1-part2 appropriately below.
Hints:

• Are you doing this in a virtual machine? Is something in /dev/disk/by-id missing? Go read the troubleshooting section.
• Recent GRUB releases assume that the /boot/grub/grubenv file is writable by the stage2 module. Until GRUB gets a ZFS write enhancement, the GRUB modules should be installed to a separate filesystem in a separate partition that is grub-writable.
• If /boot/grub is in the ZFS filesystem, then GRUB will fail to boot with this message: error: sparse file not allowed. If you absolutely want only one filesystem, then remove the call to recordfail() in each grub.cfg menu stanza, and edit the /etc/grub.d/10_linux file to make the change permanent.
• Alternatively, if /boot/grub is in the ZFS filesystem you can comment out each line containing the text save_env in the file /etc/grub.d/00_header and run update-grub.
Step 3: Disk Formatting

3.1 Format the small boot partition created by Step 2.2 as a filesystem that has stage1 GRUB support, like this:

# mke2fs -m 0 -L /boot/grub -j /dev/disk/by-id/scsi-SATA_disk1-part1
3.2 Create the root pool on the larger partition:

# zpool create -o ashift=9 rpool /dev/disk/by-id/scsi-SATA_disk1-part2
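As a hedged check, you can confirm the ashift the pool was actually created with by inspecting the cached configuration with zdb (this relies on /etc/zfs/zpool.cache being current):

# zdb -C rpool | grep ashift
            ashift: 9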
Always use the long /dev/disk/by-id/* aliases with ZFS. Using the /dev/sd* device nodes directly can cause sporadic import failures, especially on systems that have more than one storage pool.

Warning: The grub2-1.99 package currently published in the PPA for Precise does not reliably handle a 4k block size, which is ashift=12.

Hints:

• # ls -la /dev/disk/by-id will list the aliases.
• The root pool can be a mirror. For example: zpool create -o ashift=9 rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part2 /dev/disk/by-id/scsi-SATA_disk2-part2. Remember that the version and ashift matter for any pool that GRUB must read, and that these things are difficult to change after pool creation.
• The pool name is arbitrary. On systems that can automatically install to ZFS, the root pool is named "rpool" by default. Note that system recovery is easier if you choose a unique name instead of "rpool". Anything except "rpool" or "tank", like the hostname, would be a good choice.

3.3 Create a "ROOT" filesystem in the root pool:

# zfs create rpool/ROOT
3.4 Create a descendant filesystem for the Ubuntu system:

# zfs create rpool/ROOT/ubuntu-1
On Solaris systems, the root filesystem is cloned and the suffix is incremented for major system changes through pkg image-update or beadm. Similar functionality for APT is possible but currently unimplemented.

3.5 Dismount all ZFS filesystems:

# zfs umount -a
3.6 Set the mountpoint property on the root filesystem:

# zfs set mountpoint=/ rpool/ROOT/ubuntu-1
3.7 Set the bootfs property on the root pool:

# zpool set bootfs=rpool/ROOT/ubuntu-1 rpool
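Before exporting the pool in the next step, you can optionally confirm that both properties took effect. A hedged sanity check using the standard property getters:

# zfs get mountpoint rpool/ROOT/ubuntu-1
# zpool get bootfs rpool

The first should report "/" and the second "rpool/ROOT/ubuntu-1".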
The boot loader uses these two properties to find and start the operating system. These property names are not arbitrary.

Hint: Putting rpool=MyPool or bootfs=MyPool/ROOT/system-1 on the kernel command line overrides the ZFS properties.

3.8 Export the pool:

# zpool export rpool
Don't skip this step. The system is put into an inconsistent state if this command fails or if you reboot at this point.
Step 4: System Installation

Remember: Substitute the pool name chosen in Step 3.2 for "rpool" below.

4.1 Import the pool:

# zpool import -d /dev/disk/by-id -R /mnt rpool
4.2 Mount the small boot filesystem for GRUB that was created in Step 3.1:

# mkdir -p /mnt/boot/grub
# mount /dev/disk/by-id/scsi-SATA_disk1-part1 /mnt/boot/grub
4.3 Install the minimal system:

# debootstrap precise /mnt
The debootstrap command leaves the new system in an unconfigured state. In Step 5, we will only do the minimum amount of configuration necessary to make the new system runnable.
Step 5: System Configuration

5.1 Copy these files from the LiveCD environment to the new system:

# cp /etc/hostname /mnt/etc/
# cp /etc/hosts /mnt/etc/
5.2 The /mnt/etc/fstab file should be empty except for a comment. Add this line to the /mnt/etc/fstab file:

/dev/disk/by-id/scsi-SATA_disk1-part1  /boot/grub  auto  defaults  0  1
The regular Ubuntu desktop installer may add dev, proc, sys, or tmp lines to the /etc/fstab file, but such entries are redundant on a system that has a /lib/init/fstab file. Add them now if you want them.

5.3 Edit the /mnt/etc/network/interfaces file so that it contains something like this:

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
Customize this file if the new system is not a DHCP client on the LAN.

5.4 Make virtual filesystems in the LiveCD environment visible to the new system and chroot into it:

# mount --bind /dev /mnt/dev
# mount --bind /proc /mnt/proc
# mount --bind /sys /mnt/sys
# chroot /mnt /bin/bash --login
5.5 Install PPA support in the chroot environment like this:

# locale-gen en_US.UTF-8
# apt-get update
# apt-get install ubuntu-minimal python-software-properties software-properties-common
Even if you prefer a non-English system language, always ensure that en_US.UTF-8 is available. The ubuntu-minimal package is required to use ZoL as packaged in the PPA.

5.6 Install ZFS in the chroot environment for the new system:

# apt-add-repository --yes ppa:zfs-native/stable
# apt-add-repository --yes ppa:zfs-native/grub
# apt-get update
# apt-get install --no-install-recommends linux-image-generic linux-headers-generic
# apt-get install ubuntu-zfs
# apt-get install grub2-common grub-pc
# apt-get install zfs-initramfs
# apt-get dist-upgrade
Warning: This is the second time that you must wait for the SPL and ZFS modules to compile. Do not try to skip this step by copying anything from the host environment into the chroot environment.

Note: This should install a kernel package and its headers, a patched mountall package, and dkms packages. Double-check that you are getting these packages from the PPA if you are deviating from these instructions in any way. Choose /dev/disk/by-id/scsi-SATA_disk1 if prompted to install the MBR loader.

Ignore warnings that are caused by the chroot environment, like:

• Can not write log, openpty() failed (/dev/pts not mounted?)
• df: Warning: cannot read table of mounted file systems

5.7 Set a root password on the new system:

# passwd root
Hint: If you want the ubuntu-desktop package, then install it after the first reboot. If you install it now, then it will start several processes that must be manually stopped before dismounting.
Step 6: GRUB Installation

Remember: All of Step 6 depends on Step 5.4 and must happen inside the chroot environment.
6.1 Verify that the ZFS root filesystem is recognized by GRUB:

# grub-probe /
zfs
And that the ZFS modules for GRUB are installed:

# ls /boot/grub/zfs*
/boot/grub/zfs.mod  /boot/grub/zfsinfo.mod
Note that after Ubuntu 13, these are in /boot/grub/i386-pc/ instead:

# ls /boot/grub/i386-pc/zfs*
/boot/grub/i386-pc/zfs.mod  /boot/grub/i386-pc/zfsinfo.mod
Otherwise, check the troubleshooting notes for GRUB below.

6.2 Refresh the initrd files:

# update-initramfs -c -k all
update-initramfs: Generating /boot/initrd.img-3.2.0-40-generic
6.3 Update the boot configuration file:

# update-grub
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.2.0-40-generic
Found initrd image: /boot/initrd.img-3.2.0-40-generic
done
Verify that boot=zfs appears in the boot configuration file:

# grep boot=zfs /boot/grub/grub.cfg
linux /ROOT/ubuntu-1/@/boot/vmlinuz-3.2.0-40-generic root=/dev/sda2 ro boot=zfs $bootfs quiet splash $vt_handoff
linux /ROOT/ubuntu-1/@/boot/vmlinuz-3.2.0-40-generic root=/dev/sda2 ro single nomodeset boot=zfs $bootfs
6.4 Install the boot loader to the MBR like this:

# grub-install $(readlink -f /dev/disk/by-id/scsi-SATA_disk1)
Installation finished. No error reported.
Do not reboot the computer until you get exactly that result message. Note that you are installing the loader to the whole disk, not a partition. Note: The readlink is required because recent GRUB releases do not dereference symlinks.
Step 7: Cleanup and First Reboot

7.1 Exit from the chroot environment back to the LiveCD environment:

# exit
7.2 Run these commands in the LiveCD environment to dismount all filesystems:

# umount /mnt/boot/grub
# umount /mnt/dev
# umount /mnt/proc
# umount /mnt/sys
# zfs umount -a
# zpool export rpool
The zpool export command must succeed without being forced or the new system will fail to start.

7.3 We're done!

# reboot
Caveats and Known Problems

This is an experimental system configuration. This document was first published in 2010 to demonstrate that the lzfs implementation made ZoL 0.5 feature complete. Upstream integration efforts began in 2012, and it will be at least a few more years before this kind of configuration is even minimally supported.

Gentoo, and its derivatives, are the only Linux distributions that are currently mainlining support for a ZoL root filesystem.
zpool.cache inconsistencies cause random pool import failures. The /etc/zfs/zpool.cache file embedded in the initrd for each kernel image must be the same as the /etc/zfs/zpool.cache file in the regular system. Run update-initramfs -c -k all after any /sbin/zpool command changes the /etc/zfs/zpool.cache file. This will be a recurring problem until issue zfsonlinux/zfs#330 is resolved.
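A hedged way to check whether the two copies have drifted apart is to unpack the cache file from the current initrd and compare it with the live copy. This sketch assumes a stock Ubuntu gzip-compressed cpio initrd; adjust the unpacking step if your initrd is built differently:

# cd /tmp
# zcat /boot/initrd.img-$(uname -r) | cpio -id --quiet etc/zfs/zpool.cache
# cmp /tmp/etc/zfs/zpool.cache /etc/zfs/zpool.cache || echo "stale: run update-initramfs -c -k all"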
Every upgrade can break the system. Ubuntu systems remove old dkms modules before installing new dkms modules. If the system crashes or restarts during a ZoL module upgrade, which is a failure window of several minutes, then the system becomes unbootable and must be rescued. This will be a recurring problem until issue zfsonlinux/pkg-zfs#12 is resolved.
Troubleshooting

(i) MPT2SAS

Most problem reports for this tutorial involve mpt2sas hardware that does slow asynchronous drive initialization, like some IBM M1015 or OEM-branded cards that have been flashed to the reference LSI firmware. The basic problem is that disks on these controllers are not visible to the Linux kernel until after the regular system is started, and ZoL does not hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.

Most LSI cards are perfectly compatible with ZoL, but there is no known fix if your card has this glitch. Please use different equipment until the mpt2sas incompatibility is diagnosed and fixed, or donate an affected part if you want a solution sooner.
(ii) Areca

Systems that require the arcsas blob driver should add it to the /etc/initramfs-tools/modules file and run update-initramfs -c -k all.

Upgrade or downgrade the Areca driver if something like this appears anywhere in the kernel log:

RIP: 0010: [<ffffffff8101b316>] [<ffffffff8101b316>] native_read_tsc+0x6/0x20

ZoL is unstable on systems that emit this error message.
(iii) GRUB Installation

Verify that the PPA for the ZFS enhanced GRUB is installed:

# apt-add-repository ppa:zfs-native/grub
# apt-get update
Reinstall the zfs-grub package, which is an alias for a patched grub-common package:

# apt-get install --reinstall zfs-grub
Afterwards, this should happen:

# apt-cache search zfs-grub
grub-common - GRand Unified Bootloader (common files)

# apt-cache show zfs-grub
N: Can't select versions from package 'zfs-grub' as it is purely virtual
N: No packages found

# apt-cache policy grub-common zfs-grub
grub-common:
  Installed: 1.99-21ubuntu3.9+zfs1~precise1
  Candidate: 1.99-21ubuntu3.9+zfs1~precise1
  Version table:
 *** 1.99-21ubuntu3.9+zfs1~precise1 0
       1001 http://ppa.launchpad.net/zfs-native/grub/ubuntu/ precise/main amd64 Packages
        100 /var/lib/dpkg/status
     1.99-21ubuntu3 0
       1001 http://us.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages
zfs-grub:
  Installed: (none)
  Candidate: (none)
  Version table:
For safety, GRUB modules are never updated by the packaging system after initial installation. Manually refresh them by doing this:

# cp /usr/lib/grub/i386-pc/*.mod /boot/grub/
If the problem persists, then open a bug report and attach the entire output of those apt-get commands.

Packages in the GRUB PPA are compiled against the stable PPA. Systems that run the daily PPA may experience failures if the ZoL library interface changes.

Note that GRUB does not currently dereference symbolic links in a ZFS filesystem, so you cannot use the /vmlinuz or /initrd.img symlinks as GRUB command arguments.
(iv) GRUB does not support ZFS Compression

If the /boot hierarchy is in ZFS, then that pool should not be compressed. The grub packages for Ubuntu are usually incapable of loading a kernel image or initrd from a compressed dataset.
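A hedged check using the standard property commands, assuming the pool layout from this HOWTO:

# zfs get -r compression rpool
# zfs set compression=off rpool

Child datasets inherit the property unless it was set on them explicitly, and note that changing it only affects newly written blocks; data that was already written compressed stays compressed until rewritten.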
(v) VMware

• Set disk.EnableUUID = "TRUE" in the vmx file or vsphere configuration. Doing this ensures that /dev/disk aliases are created in the guest.
(vi) QEMU/KVM/XEN

• In the /etc/default/grub file, enable the GRUB_TERMINAL=console line and remove the splash option from the GRUB_CMDLINE_LINUX_DEFAULT line. Plymouth can cause boot errors in these virtual environments that are difficult to diagnose.
• Set a unique serial number on each virtual disk. (eg: -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890)
(vii) Kernel Parameters

The zfs-initramfs package requires that boot=zfs always be on the kernel command line. If the boot=zfs parameter is not set, then the init process skips the ZFS routine entirely. This behavior is for safety; it makes the casual installation of the zfs-initramfs package unlikely to break a working system.

ZFS properties can be overridden on the kernel command line with rpool and bootfs arguments. For example, at the GRUB prompt:

linux /ROOT/ubuntu-1/@/boot/vmlinuz-3.0.0-15-generic boot=zfs rpool=AltPool bootfs=AltPool/ROOT/foobar-3
(viii) System Recovery

If the system randomly fails to import the root filesystem pool, then do this at the initramfs recovery prompt:

# zpool export rpool
: now export all other pools too
# zpool import -d /dev/disk/by-id -f -N rpool
: now import all other pools too
# mount -t zfs -o zfsutil rpool/ROOT/ubuntu-1 /root
: do not mount any other filesystem
# cp /etc/zfs/zpool.cache /root/etc/zfs/zpool.cache
# exit
This refreshes the /etc/zfs/zpool.cache file. The zpool command emits spurious error messages regarding missing or corrupt vdevs if the zpool.cache file is stale or otherwise incorrect.
PPA description

The native ZFS filesystem for Linux. Install the ubuntu-zfs package.

These kernel modules provide zpool version 5000 and zfs version 5, which are compatible with all other implementations in the Illumos family. This PPA contains the current stable 0.6.1 release.

Please join the Launchpad user group if you want to show support for ZoL: https://launchpad.net/~zfs-native-users

The ZoL project home page is: http://zfsonlinux.org/

Send feedback or requests for help to this email list: <email address hidden>

A searchable email list history is available at: http://groups.google.com/a/zfsonlinux.org/group/zfs-discuss/

Report bugs at: https://github.com/zfsonlinux/zfs/issues

Get recent daily beta builds at: https://launchpad.net/~zfs-native/+archive/daily
Adding this PPA to your system

You can update your system with unsupported packages from this untrusted PPA by adding ppa:zfs-native/stable to your system's Software Sources.

Technical details about this PPA

This PPA can be added to your system manually by copying the lines below and adding them to your system's software sources:

deb http://ppa.launchpad.net/zfs-native/stable/ubuntu YOUR_UBUNTU_VERSION_HERE main
deb-src http://ppa.launchpad.net/zfs-native/stable/ubuntu YOUR_UBUNTU_VERSION_HERE main

Signing key: 1024R/F6B0FC61
Fingerprint: E871F18B51E0147C77796AC81196BA81F6B0FC61

For questions and bugs with software in this PPA please contact Native ZFS for Linux.
Overview of published packages

Package            Version                              Uploaded by
dkms               2.2.0.3-1.1ubuntu2+zfs6~raring1      Darik Horn (2013-03-22)
dkms               2.2.0.3-1.1ubuntu1.1+zfs6~quantal1   Darik Horn (2013-03-22)
dkms               2.2.0.3-1ubuntu3.1+zfs6~precise1     Darik Horn (2013-03-22)
dkms               2.1.1.2-5ubuntu1.1~lucid             Darik Horn (2011-09-06)
mountall           2.48build1-zfs2                      Darik Horn (2013-04-01)
mountall           2.42ubuntu0.4-zfs2                   Darik Horn (2013-04-08)
mountall           2.36.4-zfs2                          Darik Horn (2013-04-08)
mountall           2.31-zfs1                            Darik Horn (2012-03-19)
mountall           2.15.3-zfs1~lucid1                   Darik Horn (2012-03-19)
spl-linux          0.6.1-1~raring                       Darik Horn (2013-03-27)
spl-linux          0.6.1-1~quantal                      Darik Horn (2013-03-27)
spl-linux          0.6.1-1~precise                      Darik Horn (2013-03-27)
spl-linux          0.6.0.102-0ubuntu1~oneiric1          Darik Horn (2013-04-01)
spl-linux          0.6.0.102-0ubuntu1~lucid1            Darik Horn (2013-04-01)
ubuntu-zfs         7~raring                             Darik Horn (2013-03-27)
ubuntu-zfs         7~quantal                            Darik Horn (2013-01-05)
ubuntu-zfs         7~precise                            Darik Horn (2013-01-05)
ubuntu-zfs         6~oneiric                            Darik Horn (2012-03-19)
ubuntu-zfs         6~lucid                              Darik Horn (2012-03-19)
zfs-auto-snapshot  1.0.8-0ubuntu2~raring1               Darik Horn (2013-04-01)
zfs-auto-snapshot  1.0.8-0ubuntu2~quantal1              Darik Horn (2013-04-01)
zfs-auto-snapshot  1.0.8-0ubuntu1~precise1              Darik Horn (2012-03-02)
zfs-auto-snapshot  1.0.8-0ubuntu1~oneiric1              Darik Horn (2012-03-19)
zfs-linux          0.6.1-1~raring                       Darik Horn (2013-03-27)
zfs-linux          0.6.1-1~quantal                      Darik Horn (2013-03-27)
zfs-linux          0.6.1-1~precise                      Darik Horn (2013-03-27)
zfs-linux          0.6.0.102-0ubuntu1~oneiric1          Darik Horn (2013-04-01)
zfs-linux          0.6.0.102-0ubuntu1~lucid1            Darik Horn (2013-04-01)