
[tutorial] Making a RAID1 rootfs

Posted by bobafetthotmail 
[tutorial] Making a RAID1 rootfs
April 03, 2016 02:50PM
This works fine on my NSA325v2 with bodhi's 2015 u-boot (the latest at the time of writing), but it should work with anything else, as this is mostly a Debian-side configuration.

My setup leaves u-boot untouched because it relies on disk labels: I label the RAID1 "rootfs", so u-boot does not need to be altered in any way. If you use the automagic "search-and-boot" configuration bodhi supplies, it will be fine.

EDIT: a little caveat: bodhi's u-boot envs are configured to boot only from ext2 and ext3 partitions, mimicking the behavior of the stock u-boot. If you want it to also boot from ext4 partitions, you need to alter the envs by replacing all instances of "ext2load" with "load"; see the posts below for details.

It should THEORETICALLY also work with the stock u-boot, but you will have to use ext2 or ext3 instead of ext4, or make a RAID boot partition.

Start by booting a SINGLE disk as normal, set up from bodhi's kernel/rootfs thread, with the second disk partitioned the same as the first but left empty.

In this example, /dev/sda is the disk we are booting from and /dev/sdb is the partitioned disk we are preparing.
/dev/sdb1 and /dev/sda1 are the rootfs partitions.

Installing needed tools (run as root or with sudo)

apt-get install mdadm rsync initramfs-tools
mdadm will pop up some configuration prompts while you install it and also print some warnings. It's OK; we will set it up and rerun its configuration prompts later.

Good, now we start by creating a RAID1 array with only the second hard drive (the empty one).

mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb1

Note the "missing", that tells mdadm that this is a "degraded" array with only one drive of 2.

Note the --metadata option. It limits this array to a maximum size of 2 TB, BUT it lets bootloaders read and boot from the partition even if they don't understand RAID.
u-boot does not understand RAID, so we need this.
Since the rootfs isn't likely to be more than a dozen GBs, the size limit isn't an issue.
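If you want to sanity-check it (optional), /proc/mdstat should show a one-legged mirror, something like this (your block count will differ):

cat /proc/mdstat
md0 : active raid1 sdb1[1]
      xxxxxx blocks [2/1] [_U]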

A data RAID would be created with a similar command without the --metadata option, so it would not have any size limitation, but it won't be readable by u-boot.
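For example (just a sketch; /dev/sdb2 here is a hypothetical data partition, use whatever your layout actually has):

mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2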

In my setup I use SnapRaid, so I don't need a data RAID.

I'm doing without a swap partition, as I use a swapfile on the rootfs partition.
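For reference, a minimal sketch of setting up such a swapfile (the 512M size is arbitrary, pick your own):

dd if=/dev/zero of=/swapfile bs=1M count=512
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab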

Format the array as ext4 and give it the "rootfs" label:
mkfs.ext4 -L rootfs /dev/md0

Open the mdadm configuration file:
nano /etc/mdadm/mdadm.conf

And in the DEVICE section add
DEVICE /dev/sd?*
This forces mdadm to scan all drives/partitions for RAID signatures.

Save and close the file, then append the final line of the config (the RAID signature of the arrays currently running) like pros do:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Feel free to open the same file again with the command above to check that there is now a line looking like this:
ARRAY /dev/md0 metadata=0.90 UUID=66a8c96d:ac6a5da3:9d4deba6:47ca997f

Now we need to configure mdadm to assemble the array from inside the initramfs; if it isn't assembled there, the kernel cannot find the rootfs to boot from.

dpkg-reconfigure mdadm

The settings are self-explanatory; I left "all" in the first one, anyway.

You will see a line saying that the initramfs is being updated; mdadm and its configuration are included in it.

In case you have more than one kernel/initramfs, or you want to trigger a rebuild manually, run this:

update-initramfs -u -k all
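To double-check that mdadm and its config actually ended up inside the image (optional; adjust the file name to your kernel version):

lsinitramfs /boot/initrd.img-3.18.5-kirkwood-tld-1 | grep mdadm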


This would be enough for a PC's BIOS or UEFI firmware, but we boot with u-boot, which works with uImage and uInitrd, not directly with the kernel image and initramfs.

So we need to rebuild them. Let's go to the /boot folder and see what we have in there:

cd /boot && ls

Now we rebuild uImage. Please alter the file names according to the ones in your /boot folder (version numbers will probably differ):
mkimage -A arm -O linux -T kernel -C none -a 0x00008000 -e 0x00008000 -n Linux-3.18.5-kirkwood-tld-1 -d vmlinuz-3.18.5-kirkwood-tld-1 uImage

Rebuilding uInitrd, same as above; change the names according to the ones you have:
mkimage -A arm -O linux -T ramdisk -C gzip -a 0x00000000 -e 0x00000000 -n initramfs-3.18.5-kirkwood-tld-1 -d initrd.img-3.18.5-kirkwood-tld-1 uInitrd
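If you'd rather not type the version numbers by hand, something like this should do the same for the running kernel (a sketch, assuming the usual Debian file names in /boot):

cd /boot
KVER=$(uname -r)
mkimage -A arm -O linux -T kernel -C none -a 0x00008000 -e 0x00008000 -n Linux-$KVER -d vmlinuz-$KVER uImage
mkimage -A arm -O linux -T ramdisk -C gzip -a 0x00000000 -e 0x00000000 -n initramfs-$KVER -d initrd.img-$KVER uInitrd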

Then we mount the array, and clone the rootfs from the booted partition
mkdir /tmp/mnt
mount /dev/md0 /tmp/mnt
rsync -auHxv --exclude=/proc/* --exclude=/sys/* --exclude=/tmp/* /* /tmp/mnt

Open the fstab on the new drive and adjust it:

nano /tmp/mnt/etc/fstab

My fstab has this line for the root filesystem:

/dev/md0      /               auto   noatime,errors=remount-ro 0 1
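Since we gave the filesystem the "rootfs" label, a label-based line should work just as well (an alternative, not what I use):

LABEL=rootfs  /               ext4   noatime,errors=remount-ro 0 1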

Save and close.

Now power down your device
poweroff

Disconnect the drive we booted from and leave only the drive we just prepared.

Power up the device, and see what happens.

You should see mdadm coming up and initializing the RAID array a bit after second 3 of the kernel boot, and booting will continue just fine until login.

Now log in as normal and run
mount

This is my output; see the line? That's the root filesystem mounted on the RAID array.
root@debian:/boot# mount
-----removed stuff------------
/dev/md0 on / type ext4 (rw,noatime,errors=remount-ro,data=ordered)
-----other removed stuff-------

Now connect the first drive, check that it is there, and simply add it to the array.

Since the device had only one drive when it booted, the newly connected drive will be /dev/sdb again, while the drive we are running from will be /dev/sda, and /dev/md0 will be using /dev/sda1.

WARNING: DATA IN PARTITION /dev/sdb1 WILL BE ERASED. If you screw up the command and ask it to add /dev/sda1, it will just error out, so that's not an issue.
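If you want to double-check which disk is which before pulling the trigger (optional):

lsblk -o NAME,SIZE,FSTYPE,LABEL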

mdadm --add /dev/md0 /dev/sdb1

Nice. Now let it settle a bit; check the rebuild progress with

mdadm --detail /dev/md0
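Or watch it live and exit with Ctrl+C when done (optional):

watch -n 5 cat /proc/mdstat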


And now, a link to a useful cheat sheet with the most common mdadm commands: http://www.ducea.com/2009/03/08/mdadm-cheat-sheet/



Re: [tutorial] Making a RAID1 rootfs
April 03, 2016 03:15PM
bobafetthotmail,

Excellent tutorial!

-bodhi
Re: [tutorial] Making a RAID1 rootfs
April 03, 2016 03:55PM
Thanks. :)

I wrestled with my box for hours to get it to boot; the info is scattered all over the place.

Now it's all in one place.

Once I've compiled the latest SnapRaid I'll make a tutorial for that too, as it is very interesting for most of our devices, imho.
Re: [tutorial] Making a RAID1 rootfs
April 03, 2016 07:02PM
Excellent writeup!

I wonder how long it will be before someone posts about using this to RAID some thumb drives....

* Oh Jooeeyyyyyyyy - I'm looking at you :) *
Conor
Re: [tutorial] Making a RAID1 rootfs
May 11, 2016 09:19PM
This is a great write-up. I was just trying to figure out how to set up RAID on my Pogoplug Series 4. I will try this once my second drive arrives next week. I have one question, though. I have my device set up on a USB hard drive with 2 partitions:

/dev/sda1 on / type ext3 (rw,noatime,errors=remount-ro,data=ordered)
/dev/sda2 on /home type ext3 (rw,noatime,data=ordered)

Can I follow the above guide, making sure to create md0 and md1, one for each of the above partitions? Do you foresee any problems with a 2-partition setup?
Re: [tutorial] Making a RAID1 rootfs
May 13, 2016 05:30AM
> my Pogoplug Series 4.

That's the one with two USB 3.0 ports, right?
Assuming the two ports are not on a hub but on the average dual-port-to-PCIe controller, you have around 250 MB/s up and 250 MB/s down (independent speeds); that should be more than enough for two drives, just don't use SSDs.

>Can I follow the above guide, making sure to create md0 and md1, one for each of the above partitions? Do you foresee any problems with a 2-partition set up?

No issues at all. I also talked about adding a second data RAID above when discussing the --metadata option.
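Something like this on the empty disk, for example (a sketch; it assumes your second disk shows up as /dev/sdb with partitions matching your current layout):

mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2

The first is the rootfs (readable by u-boot thanks to --metadata=0.90), the second is /home with no size limit.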

I generally prefer using ext4 for data (and also rootfs) as it is simply better than ext3; others may also recommend xfs (it's the standard data filesystem in most commercial NAS devices).

The only folder that may need to stay ext3 is /boot (or the filesystem containing that folder), as it needs to be readable by the stock u-boot, but I have bodhi's, so I can keep everything ext4.



Conor
Re: [tutorial] Making a RAID1 rootfs
May 14, 2016 04:04PM
Thanks for the reply. My second drive should arrive next week. I'll see what I can make happen!

Cheers.
Conor
Re: [tutorial] Making a RAID1 rootfs
May 18, 2016 11:57AM
Alright, I have gone through these steps using the latest u-boot 2015.10, but I can't seem to get my cloned drive to boot. I am almost certain that it has something to do with u-boot. In the instructions for setting up a rootfs here (http://forum.doozan.com/read.php?2,12096), we no longer rebuild the uImage file. So I have tried rebuilding the image, as above, and I have tried skipping that step. No luck so far. Any ideas? U-boot variables are below.

I should mention that the drive I use to boot is a USB 2.0 drive in the top USB slot, and the second drive (which won't boot) is SATA.

bootcmd_exec=mw 0x800000 0 1; run load_uimage; if run load_initrd; then if run load_dtb; then bootm 0x800000 0x1100000 0x1c00000; else bootm 0x800000 0x1100000; fi; else if run load_dtb; then bootm 0x800000 - 0x1c00000; else bootm 0x800000; fi; fi
bootcmd=run bootcmd_uenv; run scan_disk; run set_bootargs; run bootcmd_exec
bootdelay=10
bootdev=usb
device=0:1
devices=usb ide mmc
disks=0 1 2 3
ethact=egiga0
if_netconsole=ping $serverip
led_error=orange blinking
led_exit=green off
led_init=green blinking
load_dtb=echo loading DTB $dtb_file ...; ext2load $bootdev $device 0x1c00000 $dtb_file
load_initrd=echo loading uInitrd ...; ext2load $bootdev $device 0x1100000 /boot/uInitrd
load_uimage=echo loading uImage ...; ext2load $bootdev $device 0x800000 /boot/uImage
mainlineLinux=yes
mtdids=nand0=orion_nand
partition=nand0,2
scan_disk=echo running scan_disk ...; scan_done=0; setenv scan_usb "usb start";  setenv scan_ide "ide reset";  setenv scan_mmc "mmc rescan"; for dev in $devices; do if test $scan_done -eq 0; then echo Scan device $dev; run scan_$dev; for disknum in $disks; do if test $scan_done -eq 0; then echo device $dev $disknum:1; if ext2load $dev $disknum:1 0x800000 /boot/uImage 1; then scan_done=1; echo Found bootable drive on $dev $disknum; setenv device $disknum:1; setenv bootdev $dev; fi; fi; done; fi; done
set_bootargs=setenv bootargs console=ttyS0,115200 root=LABEL=rootfs rootdelay=10 $mtdparts $custom_params
start_netconsole=setenv ncip $serverip; setenv bootdelay 10; setenv stdin nc; setenv stdout nc; setenv stderr nc; version;
stderr=serial
stdin=serial
stdout=serial
uenv_import=echo importing envs ...; env import -t 0x810000
uenv_init_devices=setenv init_usb "usb start";  setenv init_ide "ide reset";  setenv init_mmc "mmc rescan"; for devtype in $devices; do run init_$devtype; done;
uenv_load=run uenv_init_devices; setenv uenv_loaded 0; for devtype in $devices;  do for disknum in 0; do run uenv_read_disk; done; done;
uenv_read_disk=if test $devtype -eq mmc; then if $devtype part; then run uenv_read;  fi; else if $devtype part $disknum; then run uenv_read; fi;  fi
uenv_read=echo loading envs from $devtype $disknum ...; if load $devtype $disknum:1 0x810000 /boot/uEnv.txt; then setenv uenv_loaded 1; fi
usb_ready_retry=15
arcNumber=3960
machid=f78
ethaddr=<my mac>
mtdparts=mtdparts=orion_nand:2M(u-boot),3M(uImage),3M(uImage2),8M(failsafe),112M(root)
bootcmd_uenv=run uenv_load; if test $uenv_loaded -eq 1; then run uenv_import; fi; sleep 3
dtb_file=/boot/dts/kirkwood-pogoplug_v4.dtb
preboot_nc=setenv nc_ready 0; for pingstat in 1 2 3 4 5; do; sleep 1; if run if_netconsole; then setenv nc_ready 1; fi; done; if test $nc_ready -eq 1; then run start_netconsole; fi
preboot=run preboot_nc
ipaddr=<ip of pogo>
serverip=<ip of laptop> 
Re: [tutorial] Making a RAID1 rootfs
May 19, 2016 04:13AM
Hmm, I see this
load_dtb=echo loading DTB $dtb_file ...; ext2load $bootdev $device 0x1c00000 $dtb_file
load_initrd=echo loading uInitrd ...; ext2load $bootdev $device 0x1100000 /boot/uInitrd
load_uimage=echo loading uImage ...; ext2load $bootdev $device 0x800000 /boot/uImage
This means that u-boot looks for an ext2 partition (it will also read ext3 because it's similar).

If you followed my advice above, the new one is ext4 now, right?

If yes, please replace all instances of "ext2load" (I see others around those envs) with "load" to tell u-boot to figure out the filesystem by itself; it's a nice feature of the latest u-boot.
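If you'd rather edit them from Debian than from the u-boot serial prompt, fw_setenv from the u-boot-tools package can rewrite them (a sketch; it assumes /etc/fw_env.config is already set up to point at your env area, and the single quotes keep the shell from expanding the u-boot variables):

apt-get install u-boot-tools
fw_setenv load_uimage 'echo loading uImage ...; load $bootdev $device 0x800000 /boot/uImage'
fw_setenv load_initrd 'echo loading uInitrd ...; load $bootdev $device 0x1100000 /boot/uInitrd'
fw_setenv load_dtb 'echo loading DTB $dtb_file ...; load $bootdev $device 0x1c00000 $dtb_file'

Don't forget the ext2load buried inside scan_disk too.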

EDIT: or just format the new rootfs as ext3. Sorry about this, I forgot I had to modify the u-boot envs to get it to boot ext4.

But bodhi didn't update the env image to use it, he seems to be stuck in the past :P

@Bodhi update the standard envs with "load" pls, it's very nice.



Re: [tutorial] Making a RAID1 rootfs
May 19, 2016 04:54AM
> But bodhi didn't update the env image to use it,
> he seems to be stuck in the past :P

Leisurely pace :))

>
> @Bodhi update the standard envs with "load" pls,
> it's very nice.

I was procrastinating :) I was hoping to do it at the same time I release a new u-boot. But there is no urgency for releasing a new u-boot, so yes, I'll update the env image instead.

-bodhi
Re: [tutorial] Making a RAID1 rootfs
May 19, 2016 03:57PM
>there is no urgency for releasing a new u-boot,

Yeah, as it is now it will probably be fine until Kirkwoods are obsolete (considering their average role, that's at least a decade); any more work on it will be to add device support.

I mean, it has all the useful drivers, scripting, can read external config files, can figure out the filesystem in a partition on its own, and does netconsole.
Re: [tutorial] Making a RAID1 rootfs
May 19, 2016 06:19PM
bobafetthotmail Wrote:
-------------------------------------------------------
> >there is no urgency for releasing a new u-boot,
>
> Yeah, as it is now it will probably be fine until
> kirkwoods are obsolete (considering their average
> role, that's at least a decade), any more work on
> it will be to add device support.
>
> I mean, it has all useful drivers, scripting, can
> read external config files, can figure out the
> file system in a partition on its own, does
> netconsole.

But then again, when we add a new device, maybe it's worthwhile to rebase everything to the latest mainline version. That's my reason for being lazy :)

-bodhi
Re: [tutorial] Making a RAID1 rootfs
August 25, 2019 07:41PM
Pogoplug-pro v3. Just a quick note pertaining to the RAID configuration on my machine. I failed to label the two identical-looking 5 TB USB RAID drives, and for some reason I must have plugged them into the wrong slots, which made them both show as empty. I checked them on my Ubuntu machine, found they were not empty, and then labelled each one for its correct slot. I don't know if this was my fault or some anomaly, but everything was working when I reinserted them; however, I had to do some steps to get the second RAID drive activated again. Hope this helps if someone else runs into this issue!

First I had to find out what was working and what wasn't:

 
cat /proc/mdstat

result: Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdc1[0]
      4883638464 blocks super 1.2 [2/1] [U_]
      bitmap: 3/37 pages [12KB], 65536KB chunk


sudo mdadm --detail /dev/md0

result:
/dev/md0:
          Version : 1.2
     Creation Time : Tue May 22 03:57:15 2018
        Raid Level : raid1
        Array Size : 4883638464 (4657.40 GiB 5000.85 GB)
     Used Dev Size : 4883638464 (4657.40 GiB 5000.85 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Aug 25 14:02:05 2019
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : unknown

              Name : XXXX:0  (local to host XXXX)
              UUID : be9b73d1:54b0c2db:8ab52103:5ecedc50
            Events : 185305

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       -       0        0        1      removed

Next I had to re-add the other drive, sdd1, as I had somehow deactivated it.



sudo mdadm --manage /dev/md0 --add /dev/sdd1

Result: mdadm: re-added /dev/sdd1


Then I verified:


sudo mdadm --detail /dev/md0

result :
/dev/md0:
 
          Version : 1.2
     Creation Time : Tue May 22 03:57:15 2018
        Raid Level : raid1
        Array Size : 4883638464 (4657.40 GiB 5000.85 GB)
     Used Dev Size : 4883638464 (4657.40 GiB 5000.85 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Aug 25 19:56:06 2019
             State : clean, degraded, recovering 
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : unknown

    Rebuild Status : 22% complete

              Name : XXXX:0  (local to host XXXX)
              UUID : be9b73d1:54b0c2db:8ab52103:5ecedc50
            Events : 185308

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      spare rebuilding   /dev/sdd1

Please note it reads "spare rebuilding". Then, once the rebuild finished, I verified again with the following command:


sudo mdadm --detail /dev/md0

result:
/dev/md0:
           Version : 1.2
     Creation Time : Tue May 22 03:57:15 2018
        Raid Level : raid1
        Array Size : 4883638464 (4657.40 GiB 5000.85 GB)
     Used Dev Size : 4883638464 (4657.40 GiB 5000.85 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Aug 25 19:56:20 2019
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : unknown

              Name : XXXX:0  (local to host XXXX)
              UUID : be9b73d1:54b0c2db:8ab52103:5ecedc50
            Events : 185314

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1


So now everything is working correctly!

Echowarrior108

device: pogoplug-pro v3

Currently running:
Debian GNU/Linux Bullseye 12-9-22
Linux 5.4.224-oxnas-tld-1 armv6l GNU/Linux 11-27-22
Re: [tutorial] Making a RAID1 rootfs
November 18, 2020 07:01PM
Just had this issue again; glad I posted it, as I had to do the same thing. I may have changed USB ports while doing upgrades on the main unit. The result was the same as last time, and I was able to recover by doing the above steps!

Echowarrior108

Re: [tutorial] Making a RAID1 rootfs
February 16, 2021 08:22AM
I populated a RAID1 array with 3 partitions.

When I add the next drive (/dev/sdb) to the RAID1 array, will I have to fdisk it as well, or is this done by the daemon?

Also, is it advisable to resize before or after populating? Because I need a swap partition, and I need to remove the uboot partition and realign them.


sda 931.5G linux_raid_member disk
`-md0 931.5G raid1
|-md0p1 1M ext4 uboot md
|-md0p2 12G ext4 satarootfs md /mnt/md0p2
`-md0p3 919.5G ext4 netshare md /mnt/md0p3
sdb 963M disk
`-sdb1 962M ext3 rootfs part /