
RAID1 on Debian Sarge Redux - SATA drives and 2.6.x kernel

01/02/2007 21:14

In an earlier discussion, we described how to install RAID1 on a by-now older Debian Sarge box using a 2.4.x kernel and IDE drives. While the basic approach remains similar, we ran into issues installing on a new “9G” Dell box (SC440) which has SATA drives and the Intel ICH7 controller. Here is a step-by-step summary of how we got this to work.

The discussions by philcore (http://www.debian-administration.org/users/philcore) at http://www.debian-administration.org/articles/238, along with the reader comments there, are probably the most useful information, and ultimately helped guide this approach. Lots of other discussions can be found by Googling on various combinations of "raid1", "sata", "sarge", "grub", "initrd", etc., but most of those discussions are somewhat out of date.

NB: supposedly the current Sarge installer can create RAIDs for you, but this was not at all apparent to me, and when I tried to use the installer tool to “configure RAID” with a half-completed install, it blew up most spectacularly. This doesn’t seem ready for prime-time yet.

So here’s one approach that worked on this particular box.

  1. Install a basic Sarge system via an install CD. The regular network-install CD wouldn't install on this box because of its hardware (notably the onboard NIC), so it required a custom Sarge image from here; more info is available from Dell. (We're using Sarge, not Etch, because this is a basic server box.)
  2. If you end up doing this multiple times (yeah, I'm not the only one), make sure to delete all existing partitions between attempts. Partition the first hard disk the way you want your final layout, and leave the second disk as free space. NB: here we have two SATA disks, named /dev/sda and /dev/sdb.
  3. Add some other packages useful for creating a custom kernel: kernel-package and its dependencies.
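     For example (the exact package list beyond kernel-package itself is a guess; these are the usual suspects for a kernel build):
      # apt-get install kernel-package libncurses5-dev fakeroot bzip2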
  4. Build a new kernel compiling in RAID support plus whatever else you need (in this case, SMP support plus other stuff). This is a tedious but not all that difficult process. Sarge uses grub as its bootloader, and kernel-package makes adding a new kernel really easy. Make sure that you can boot to this new RAID-enabled kernel.
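     A sketch of the kernel-package workflow (the source path and revision string are just examples):
      # cd /usr/src/linux-2.6.19.1
      # make menuconfig
      # make-kpkg clean
      # fakeroot make-kpkg --revision=custom.1 kernel_image
      # dpkg -i /usr/src/kernel-image-2.6.19.1_custom.1_i386.deb
     In menuconfig, RAID1 lives under Device Drivers -> Multi-device support (RAID and LVM); compile it in rather than as a module if you want the kernel to see the arrays without help from the initrd.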
  5. Add an initrd image. Trying to build this with the --initrd option to make-kpkg didn't work here, since (I think) it tries to use mkinitrd, which is incompatible with the latest kernels. Instead, create the initrd image via mkinitramfs (for Sarge) or yaird (for Etch), where "whatever" matches the version of your boot image, /boot/vmlinuz-whatever:
     # mkinitramfs -o /boot/initrd.img-whatever
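     For the 2.6.19.1 kernel built above, that would be (mkinitramfs takes the kernel version as its final argument):
      # mkinitramfs -o /boot/initrd.img-2.6.19.1 2.6.19.1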
    
  6. Edit /etc/kernel-img.conf to set
    link_in_boot = yes
    (This tells kernel-package to keep the kernel and initrd symlinks in /boot rather than in the root directory.)
  7. Install a couple more packages we need:
     # apt-get install mdadm hdparm 
  8. Check /dev/ to see what md device nodes you have; at this point there will probably be only /dev/md0.
  9. Move the existing mdadm.conf out of the way, so that a stale configuration doesn't interfere with assembling the new arrays at boot:
     # mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.off 
  10. Copy the partition table of the first disk over to the second disk:
     # sfdisk -d /dev/sda | sfdisk /dev/sdb 
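     To check that the copy worked, compare the two tables:
      # sfdisk -l /dev/sda
      # sfdisk -l /dev/sdb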
  11. Change the partition type to "fd" (Linux RAID autodetect) for all partitions of the second disk, including swap, since we're putting swap into a RAID device too. Write out the changes and exit.
     # cfdisk /dev/sdb 
  12. Make new file systems on the second disk. (This may not be necessary, as it's not mentioned by every source, but it makes sense and it worked here.)
     # mkfs.ext3 /dev/sdb1
     # mkswap /dev/sdb2
     # mkfs.ext3 /dev/sdb5
     # mkfs.ext3 /dev/sdb6
     # mkfs.ext3 /dev/sdb7
     # mkfs.ext3 /dev/sdb8
  13. Add md device nodes, unless they're already there; check with
     ls /dev/md*
     The mknod arguments are: block device (b), major number 9 (the md driver), and a minor number matching the array number. If the nodes aren't there, create them:
     # mknod /dev/md1 b 9 1
     # mknod /dev/md2 b 9 2
     # mknod /dev/md3 b 9 3
     # mknod /dev/md4 b 9 4
     # mknod /dev/md5 b 9 5
  14. Create new md devices using the second disk (with the first disk "missing"):
     
     # mdadm --create /dev/md0 --level 1 --raid-devices=2 missing /dev/sdb1
     # mdadm --create /dev/md1 --level 1 --raid-devices=2 missing /dev/sdb2
     # mdadm --create /dev/md2 --level 1 --raid-devices=2 missing /dev/sdb5
     # mdadm --create /dev/md3 --level 1 --raid-devices=2 missing /dev/sdb6
     # mdadm --create /dev/md4 --level 1 --raid-devices=2 missing /dev/sdb7
     # mdadm --create /dev/md5 --level 1 --raid-devices=2 missing /dev/sdb8
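     At this point /proc/mdstat should show each array running degraded on the second disk alone, something like this (sizes are illustrative):
      # cat /proc/mdstat
      md0 : active raid1 sdb1[1]
            192640 blocks [2/1] [_U]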
  15. Make a new initrd with /dev/md0 as the root device:
     # mkinitramfs -r /dev/md0 -o /boot/initrd.img-2.6.19.1 
  16. Create file systems for the md devices:
     # mkfs.ext3 /dev/md0
     # mkswap /dev/md1
     # mkfs.ext3 /dev/md2
     # mkfs.ext3 /dev/md3
     # mkfs.ext3 /dev/md4
     # mkfs.ext3 /dev/md5
  17. Copy over everything from the first disk to the md devices:
     # mount /dev/md0 /mnt
     # cp -dpRx / /mnt
     # mount /dev/md2 /mnt/tmp
     # cp -dpRx /tmp /mnt
     # mount /dev/md3 /mnt/home
     # cp -dpRx /home /mnt
     # mount /dev/md4 /mnt/usr
     # cp -dpRx /usr /mnt
     # mount /dev/md5 /mnt/var
     # cp -dpRx /var /mnt
  18. /dev probably didn't copy over (the -x flag makes cp skip the tmpfs mounted there), so drop into single-user mode, disable udev, copy /dev, restart udev, and return to multi-user mode:
     # init 1
     # /etc/init.d/udev stop
     # cp -dpRx /dev /mnt
     # /etc/init.d/udev start
     # init 3
  19. Edit /mnt/etc/fstab to use the md devices instead of the sd* devices (e3em is just the editor in use here; any editor will do):
     # e3em /mnt/etc/fstab 
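     The edited fstab should map each mount point to its md device, roughly like this (the mount options are typical defaults, not copied from this box):
      /dev/md0   /      ext3  defaults,errors=remount-ro  0  1
      /dev/md1   none   swap  sw                          0  0
      /dev/md2   /tmp   ext3  defaults                    0  2
      /dev/md3   /home  ext3  defaults                    0  2
      /dev/md4   /usr   ext3  defaults                    0  2
      /dev/md5   /var   ext3  defaults                    0  2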
  20. Edit the grub menu.lst to add entries like the following:
     # e3em    /mnt/boot/grub/menu.lst
     
     title  Debian custom kernel 2.6.19.1 RAID
     root   (hd0,0)
     kernel /boot/vmlinuz-2.6.19.1 root=/dev/md0 md=0,/dev/sda1,/dev/sdb1 ro
     initrd /boot/initrd.img-2.6.19.1
     boot
      
     title  Debian custom kernel 2.6.19.1 RAID Recovery
     root   (hd1,0)
     kernel /boot/vmlinuz-2.6.19.1 root=/dev/md0 md=0,/dev/sdb1 ro
     initrd /boot/initrd.img-2.6.19.1
     boot
    
  21. Install grub on both disks (it's already on /dev/sda, but it doesn't hurt to install it again):
     # grub-install /dev/sda
     # grub-install /dev/sdb
  22. Run grub to include the second disk as a boot disk:
     # grub
     
     grub> device (hd0) /dev/sdb
     grub> root (hd0,0)
     grub> setup (hd0)
     grub> quit
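     (The device (hd0) /dev/sdb line temporarily maps the second disk to hd0, so the boot sector written to /dev/sdb will still work if that disk ever ends up as the first or only disk in the box.)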
  23. Copy over fstab and menu.lst from md device to the first disk:
     # cp -dp /mnt/etc/fstab /etc/fstab
     # cp -dp /mnt/boot/grub/menu.lst /boot/grub
  24. Change the first disk's partition types to "fd" as well:
     # cfdisk /dev/sda
  25. Reconfigure mdadm:
     # mdadm --detail --scan >> /mnt/etc/mdadm/mdadm.conf
     # cp -dp /mnt/etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf
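     The appended lines are one ARRAY entry per device, along these lines (the UUIDs are placeholders):
      ARRAY /dev/md0 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
      ARRAY /dev/md1 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
     Make sure the file also has a DEVICE line (e.g. DEVICE partitions) so mdadm knows where to look for array components.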
  26. Make a new initrd with /dev/md0 as the root device, now that mdadm.conf is in place:
     # mkinitramfs -r /dev/md0 -o /boot/initrd.img-2.6.19.1
  27. Hold your breath, and reboot.
  28. Presuming that the box reboots, check that the md devices are mounted:
     # df
     Filesystem           1K-blocks      Used Available Use% Mounted on
     /dev/md0                186555    104946     71977  60% /
     tmpfs                  1037132         0   1037132   0% /dev/shm
     /dev/md3               1921036     32860   1790592   2% /home
     /dev/md2                964408     16428    898988   2% /tmp
     /dev/md4               9614052    960776   8164908  11% /usr
     /dev/md5              62286628    133232  58989400   1% /var
     tmpfs                    10240       120     10120   2% /dev
  29. Add the first drive's partitions to the arrays:
     # mdadm --add /dev/md0 /dev/sda1
     # mdadm --add /dev/md1 /dev/sda2
     # mdadm --add /dev/md2 /dev/sda5
     # mdadm --add /dev/md3 /dev/sda6
     # mdadm --add /dev/md4 /dev/sda7
     # mdadm --add /dev/md5 /dev/sda8
  30. Wait for all devices to sync by watching their status:
     # cat /proc/mdstat 
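     While a mirror is rebuilding, the output looks something like this (numbers are illustrative):
      md3 : active raid1 sda6[2] sdb6[1]
            1951808 blocks [2/1] [_U]
            [===>.............]  recovery = 18.3% (357248/1951808) finish=2.1min speed=12758K/sec
     When every array shows [2/2] [UU], the sync is done.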
  31. Make yet another new initrd with /dev/md0 as the root device:
     # mkinitramfs -r /dev/md0 -o /boot/initrd.img-2.6.19.1 
  32. Reboot again. You're done if the box comes back up and the md devices are still mounted:
     # df
     Filesystem           1K-blocks      Used Available Use% Mounted on
     /dev/md0                186555    104946     71977  60% /
     tmpfs                  1037132         0   1037132   0% /dev/shm
     /dev/md3               1921036     32860   1790592   2% /home
     /dev/md2                964408     16428    898988   2% /tmp
     /dev/md4               9614052    960776   8164908  11% /usr
     /dev/md5              62286628    133232  58989400   1% /var
     tmpfs                    10240       120     10120   2% /dev
  33. With IDE drives, there are some optimizations that can be done with hdparm. However, hdparm doesn't appear to be relevant to SATA drives other than to measure performance, which is pretty darn good:
     # hdparm -Tt /dev/sda
     /dev/sda:
     Timing cached reads: 4328 MB in 2.00 seconds = 2164.05 MB/sec
     Timing buffered disk reads: 174 MB in 3.01 seconds = 57.72 MB/sec
     
     # hdparm -Tt /dev/sdb
     /dev/sdb:
     Timing cached reads: 4352 MB in 2.00 seconds = 2176.37 MB/sec
     Timing buffered disk reads: 174 MB in 3.01 seconds = 57.85 MB/sec
    

