OCZ RevoDrive SSD in Arch

TL;DR - I got the OCZ RevoDrive working as a boot device in Arch Linux. Skip ahead to Implementation for the steps.

Background

Earlier this year I updated the hardware in all my business computers - including adding an HP z800 to the menagerie. The update I was most excited about was an internal PCI Express SSD: the OCZ RevoDrive PCI-E x4 120 GB SSD. The specs sounded amazing, and my last workstation had already spoiled me by having its boot device and root filesystem on an SSD.

Unfortunately, I learned after the drive arrived that it required some special finagling to run on an operating system other than Windows.

The complexity is due to how the RevoDrive is built. It’s a RAID0 array of four smaller SSDs, presenting itself in some contexts as a single disk and in other contexts as four disks. This is great for read-write speeds, but not great for easy setup. I used Linux software RAID support through the md (multiple device) utilities; the kernel.org wiki covers this set of features well. Having the boot device be part of a RAID0 array presented its own set of hurdles, but fear not! If you have an OCZ RevoDrive and you’re running modern Linux, this article will walk you through setting up a functional LVM + RAID system on your new smoking-fast SSD.

Implementation

Here we go.

Gather Some Information

Boot into your maintenance environment (live CD or installation media). From there, drop into a shell and get ready to take some notes.

The first thing we need to do is find the four drives by ID so we can reference them reliably later with mdadm. On a system running udev this is as simple as checking the by-id directory; without udev, you can correlate the disks with their controller through lspci and /sys.

$ ls -l /dev/disk/by-id | grep ata-OCZ-REVODRIVE
lrwxrwxrwx 1 root root  9 Mar 17 12:11 ata-OCZ-REVODRIVE350_11111111111111111 -> ../../sdb
lrwxrwxrwx 1 root root  9 Mar 17 12:11 ata-OCZ-REVODRIVE350_OCZ-2222222222222222 -> ../../sdc
lrwxrwxrwx 1 root root  9 Mar 17 12:11 ata-OCZ-REVODRIVE350_OCZ-3333333333333333 -> ../../sdd
lrwxrwxrwx 1 root root  9 Mar 17 12:11 ata-OCZ-REVODRIVE350_OCZ-4444444444444444 -> ../../sde

Your IDs will look something like ‘ata-OCZ-REVODRIVE350_85C7GT5156B13’; I’ve changed them here for simplicity’s sake. You’ll note that sda is most likely not included - that’s because it’s probably your boot media.

You can also verify that these four devices hang off the same controller by looking at their SCSI addresses:

$ ls -ld /sys/block/sd*/device
lrwxrwxrwx 1 root root 0 Mar 17 12:11 /sys/block/sda/device -> ../../../7:0:0:0
lrwxrwxrwx 1 root root 0 Mar 17 12:11 /sys/block/sdb/device -> ../../../1:0:0:0
lrwxrwxrwx 1 root root 0 Mar 17 12:11 /sys/block/sdc/device -> ../../../1:0:1:0
lrwxrwxrwx 1 root root 0 Mar 17 12:11 /sys/block/sdd/device -> ../../../1:0:2:0
lrwxrwxrwx 1 root root 0 Mar 17 12:11 /sys/block/sde/device -> ../../../1:0:3:0

Great, now we know the disk IDs for the internal SSDs in the RevoDrive.

Create the array

Build the md0 device, a 4-disk RAID0 array, using the previously identified component disks:

# mdadm --create /dev/md0 --metadata=0.9 --verbose \
      --chunk=64 --raid-devices=4 --level=0 \
      /dev/disk/by-id/ata-OCZ-REVODRIVE350_11111111111111111 \
      /dev/disk/by-id/ata-OCZ-REVODRIVE350_OCZ-2222222222222222 \
      /dev/disk/by-id/ata-OCZ-REVODRIVE350_OCZ-3333333333333333 \
      /dev/disk/by-id/ata-OCZ-REVODRIVE350_OCZ-4444444444444444

Of particular note here is the use of version 0.9 metadata. For new, ordinary multi-disk arrays you should definitely use version 1.2 metadata, which allows for more devices of larger sizes. The issue with the RevoDrive is where the superblock lives: versions 1.1 and 1.2 write the metadata at (or near) the beginning of the disk, whereas versions 0.9 and 1.0 write it at the end. The RevoDrive presents itself to the BIOS as a single disk, and as such the BIOS expects to find a partition table at the beginning. We need to leave the beginning of the disks untouched!
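As a quick sanity check on that placement, here’s the 0.90 superblock location rule worked out in shell arithmetic - the kernel rounds the member size down to a 64 KiB boundary and backs off 64 KiB. The size below is a convenient round example, not my actual disk:

```shell
# Member-disk size in bytes (illustrative round number, not my real disk)
size=$((120 * 1024 * 1024 * 1024))
# v0.90 superblock offset: round size down to a 64 KiB boundary, back off 64 KiB
sb_offset=$(( size / 65536 * 65536 - 65536 ))
# The superblock sits in the last 128 KiB of the disk, leaving sector 0
# (where the BIOS looks for a partition table) untouched
echo "superblock at byte $sb_offset of $size"
```

You can confirm the real placement after creation with `mdadm --examine` on a member disk.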

Partition, add LVM physical device, add filesystems

At this point we have a new multi-disk device, md0, ready to be partitioned and formatted. Since I’ll be using LVM throughout, I went with the following partitions:

  1. A 2 GB, type 83 partition for /boot, marked as active
  2. A 400 GB, type 8E partition for the LVM physical volume

Provide the following geometry to fdisk so that partitions align nicely with the underlying SSD structure: 32 heads times 32 sectors of 512 bytes makes each cylinder 512 KiB, a clean multiple of the 64 KiB chunk size.

# fdisk --sectors 32 --heads 32 /dev/md0
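If you’d rather script the layout, sfdisk can create the same two partitions non-interactively. This is a sketch rather than what I actually ran; note that sfdisk’s default 1 MiB alignment is also a clean multiple of the 64 KiB chunk, so it aligns fine without the geometry trick:

```shell
# Sketch: same layout via sfdisk - a bootable 2 GiB type-83 partition,
# then the rest of the device as type 8e (Linux LVM)
sfdisk /dev/md0 <<'EOF'
label: dos
,2GiB,83,*
,,8e
EOF
```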

After adding the partitions, you should now see the following devices:

$ ls -l /dev/md0*
brw-rw---- 1 root disk   9, 0 Mar 17 12:11 /dev/md0
brw-rw---- 1 root disk 259, 0 Mar 17 12:11 /dev/md0p1
brw-rw---- 1 root disk 259, 1 Mar 17 12:11 /dev/md0p2

md0p1 is the first partition you added (the boot partition), and md0p2 is the second (the partition to be used for an LVM physical volume).

Let’s put an ext4 filesystem without journaling on the boot partition (no need to wear out the SSD with journal writes).

# mkfs.ext4 -b 1024 -E stride=64,stripe-width=256 -O '^has_journal' /dev/md0p1
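As a sanity check on the -E values: stride is the chunk size divided by the filesystem block size, and stripe-width is stride times the number of data-bearing disks. With the array geometry from earlier, that works out to:

```shell
chunk_kib=64    # mdadm --chunk from the array creation step
block_kib=1     # mkfs.ext4 -b 1024
data_disks=4    # RAID0: all four members carry data
stride=$(( chunk_kib / block_kib ))       # filesystem blocks per chunk
stripe_width=$(( stride * data_disks ))   # blocks per full stripe
echo "stride=$stride stripe-width=$stripe_width"
```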

Next, make an LVM physical volume for the remainder of the RAID, using SSD-friendly data alignment:

# pvcreate --dataalignment 1m /dev/md0p2
# vgcreate OCZVolGroup /dev/md0p2

Now make logical volumes as desired. If you have other disks available, such as traditional spinning-platter hard drives, prefer putting write-once, read-many data on the SSD and more volatile data on the spinning platters. The idea is to write to the SSD as little as possible.

# lvcreate -L 20G OCZVolGroup -n vol_root
# lvcreate -L 30G OCZVolGroup -n vol_home
# lvcreate -L 60G OCZVolGroup -n vol_work

Since we’re using LVM, don’t stress too much about what partitions to put where. LVM allows you to migrate logical volumes across physical volumes online, without service interruption. LVM is awesome.
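For example, if you later add a spinning disk to the volume group, a logical volume can be moved off the SSD while it stays mounted. A sketch, assuming a hypothetical /dev/sdf1 that you’ve already prepared with pvcreate:

```shell
# Hypothetical: /dev/sdf1 is a freshly created PV on a spinning disk
vgextend OCZVolGroup /dev/sdf1
# Migrate vol_work's extents off the SSD-backed PV, online
pvmove -n vol_work /dev/md0p2 /dev/sdf1
```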

After you’ve created all the mount points you want (I have /, /boot, /var, /home, /storage, /db, and /work), go ahead and make filesystems for them. Again, use SSD-friendly options for the logical volumes hosted on the RevoDrive.

# mkfs.ext4 -b 1024 -E stride=64,stripe-width=256 \
    -O '^has_journal' /dev/mapper/OCZVolGroup-vol_root
# mkfs.ext4 -b 1024 -E stride=64,stripe-width=256 \
    -O '^has_journal' /dev/mapper/OCZVolGroup-vol_home
# mkfs.ext4 -b 1024 -E stride=64,stripe-width=256 \
    -O '^has_journal' /dev/mapper/OCZVolGroup-vol_work

When mounting the target filesystems during an Arch installation, add some additional mount options for the SSD-hosted partitions:

# mount -o noatime,discard /dev/mapper/OCZVolGroup-vol_root /mnt
# mkdir /mnt/home
# mount -o noatime,discard /dev/mapper/OCZVolGroup-vol_home /mnt/home
# mkdir /mnt/work
# mount -o noatime,discard /dev/mapper/OCZVolGroup-vol_work /mnt/work
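These options only apply to the live mounts, so make sure they land in the installed system’s fstab as well. On Arch this falls out of the usual genfstab step, which records the options currently in effect:

```shell
# Capture the current mounts (including noatime,discard) into the target fstab
genfstab -U /mnt >> /mnt/etc/fstab
# Verify the SSD-hosted entries kept their options
grep -E 'noatime|discard' /mnt/etc/fstab
```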

Continue on with your installation as usual at this point. The next place to take care is with the bootloader.

Set a stable md config

At some point in the installation you should generate and save an /etc/mdadm.conf file in the target system. This is easy to do once you’ve arch-chroot-ed into /mnt:

# mdadm --detail --scan >> /etc/mdadm.conf

Take a look at the results and tweak it as needed.
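For reference, the generated line for this array should look roughly like the following (the UUID here is invented); the metadata=0.90 tag is the important part to sanity-check:

```
ARRAY /dev/md0 metadata=0.90 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
```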

Configuring the boot loader

The RevoDrive presents itself to the BIOS as a single disk, and the BIOS will look there for a partition table and bootloader. Once booted, the RevoDrive presents itself as four disks, which is how things appear in our installation environment. There are a few special tricks involved in bridging these two views.

First, make sure the initramfs loads the RAID and LVM support. Edit /etc/mkinitcpio.conf to ensure the following modules and hooks are present:

# I'm using raid1 in a different multi-disk array, but be sure to at least have
# raid0 and dm_mod included for the RevoDrive
MODULES=(ext4 raid1 raid0 dm_mod)
# The important modules here are mdadm_udev and lvm2
HOOKS=(base udev autodetect modconf block mdadm_udev lvm2 filesystems keyboard fsck)
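After editing mkinitcpio.conf, regenerate the initramfs (still inside the chroot) so the RAID and LVM pieces actually land in the image:

```shell
# Rebuild the initramfs for the stock kernel preset
mkinitcpio -p linux
```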

Tell grub where the root partition is and provide hints about required modules by editing /etc/default/grub to include the following:

# Only the 'root=' is required for this
GRUB_CMDLINE_LINUX="root=/dev/mapper/OCZVolGroup-vol_root video=vesa:off vga=normal"
# Indicate modules that must be preloaded before starting to boot:
# The relevant parts are mdraid09 for the RevoDrive (the 09 means metadata 0.9 -
# my other raid array is a raid1 using 1.2 metadata - aka 1x)
GRUB_PRELOAD_MODULES="part_gpt part_msdos lvm mdraid09 mdraid1x ext2"

Regenerate the grub config and install it to the correct device. The correct device is the “first” disk in the RevoDrive, which you can identify by its address on the shared controller:

$ ls -ld /sys/block/sd*/device
lrwxrwxrwx 1 root root 0 Mar 17 12:11 /sys/block/sda/device -> ../../../7:0:0:0
lrwxrwxrwx 1 root root 0 Mar 17 12:11 /sys/block/sdb/device -> ../../../1:0:0:0
lrwxrwxrwx 1 root root 0 Mar 17 12:11 /sys/block/sdc/device -> ../../../1:0:1:0
lrwxrwxrwx 1 root root 0 Mar 17 12:11 /sys/block/sdd/device -> ../../../1:0:2:0
lrwxrwxrwx 1 root root 0 Mar 17 12:11 /sys/block/sde/device -> ../../../1:0:3:0

In the output above, sdb would be the first disk.

# grub-mkconfig -o /boot/grub/grub.cfg
# grub-install /dev/sdb

Lastly, add a device.map file in /boot/grub containing a hint about how to map the first BIOS-identified hard drive at boot:

# Contents of /boot/grub/device.map
(hd0)	/dev/md0

At this point, you should be able to finish any remaining installation tasks, exit the chroot, say a quick prayer and reboot!

Acknowledgements

This post is the product of a full weekend spent trying different approaches to get the drive working. Along the way I was helped by many pieces of content out on the net. Here’s a list of what I found useful:

Arch Wiki, LVM - mkinitcpio

MDADM and LVM cheatsheet

Setting Kernel params with Grub

Using Grub with Raid

Linux RAID superblock format

Linux RAID setup / howto

Text walkthrough for the OCZ RevoDrive using mdadm

Using the GRUB command shell, then troubleshooting tips

mkinitcpio docs

GRUB2 docs for advanced storage options

Arch article on LVM atop RAID

LVM and RAID, convert MBR to GPT

Debian Wiki software raid root

StackOverflow question about lvm and raid

Official GRUB documentation

GRUB Bootloader Full Tutorial

GRUB Rescue, includes setting prefix

Troubleshooting

Rebooting didn’t load your environment? Panicking? Don’t worry! Here’s how to boot manually while troubleshooting GRUB/bootloader issues, from the GRUB command line:

set prefix=(hd0,1)/grub
set root=(hd0,1)
insmod normal
insmod mdraid09
insmod lvm
insmod linux
ls
linux /vmlinuz-linux root=/dev/mapper/OCZVolGroup-vol_root
initrd /initramfs-linux.img
boot