

We're going to use these physical elements to build two volumes, one RAID-0 and one mirror (RAID-1).

# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
  mdadm: Defaulting to version 1.2 metadata
# mdadm --query /dev/md0
  /dev/md0: 8.00GiB raid0 2 devices, 0 spares.
# mkfs.ext4 /dev/md0
  [...]
  Writing superblocks and filesystem accounting information: done
# mkdir /srv/raid-0
# mount /dev/md0 /srv/raid-0
# df -h /srv/raid-0
  Filesystem      Size  Used Avail Use% Mounted on
  [...]

Creation of a RAID-1 follows a similar fashion, the differences only being noticeable after the creation:

# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd2 /dev/sde
  mdadm: Note: this array has metadata at the start and
      may not be suitable as a boot device.  If you plan to
      store '/boot' on this device please ensure that
      your boot-loader understands md/v1.x metadata, or use
      --metadata=0.90
  mdadm: largest drive (/dev/sdd2) exceeds size (4192192K) by more than 1%
  Continue creating array? y
  mdadm: Defaulting to version 1.2 metadata
# mdadm --query /dev/md1
  /dev/md1: 4.00GiB raid1 2 devices, 0 spares.
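
Before going further, it is worth checking how the new arrays report their state, and how the mirror behaves when one of its members disappears. The commands below are only a minimal sketch, reusing the devices from the example above; their output varies with the mdadm version and is not reproduced here:

# cat /proc/mdstat
# mdadm --detail /dev/md1

A failure of one half of the mirror can also be simulated, which is how the degraded situation described next can be produced on purpose:

# mdadm /dev/md1 --fail /dev/sde

After such a (real or simulated) failure, /proc/mdstat reports md1 as degraded, with sde marked as faulty.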

Should one of the disks in the mirror fail (in our example, sde), the contents of the volume are still accessible (and, if it is mounted, the applications don't notice a thing), but the data safety isn't assured anymore: should the sdd disk fail in turn, the data would be lost. We want to avoid that risk, so we'll replace the failed disk with a new one, sdf:

# mdadm /dev/md1 --add /dev/sdf
  mdadm: added /dev/sdf

Most of the meta-data concerning RAID volumes are saved directly on the disks that make up these arrays, so that the kernel can detect the arrays and their components and assemble them automatically when the system starts up. However, backing up this configuration is encouraged, because this detection isn't fail-proof, and it is only expected that it will fail precisely in sensitive circumstances. In our example, if the sde disk failure had been real (instead of simulated) and the system had been restarted without removing this sde disk, this disk could start working again due to having been probed during the reboot. The kernel would then have three physical elements, each claiming to contain half of the same RAID volume.

Another source of confusion can come when RAID volumes from two servers are consolidated onto one server only. If these arrays were running normally before the disks were moved, the kernel would be able to detect and reassemble the pairs properly; but if the moved disks had been aggregated into an md1 on the old server, and the new server already has an md1, one of the mirrors would be renamed.
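
One common way to keep such a backup is to record the arrays' identifiers in mdadm's configuration file; the following is a sketch using the standard Debian path (adapt the path if your layout differs):

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Since Debian's initramfs normally embeds this file, regenerating it afterwards with update-initramfs -u is a sensible precaution, so that the arrays are assembled consistently at boot.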

We now have two “virtual disks”, the vg_critical and vg_normal volume groups, sized about 8 GB and 12 GB respectively. Let's now carve them up into “virtual partitions” (LVs). This involves the lvcreate command, and a slightly more complex syntax:

# lvcreate -n lv_files -L 5G vg_critical
  Logical volume "lv_files" created.
# lvdisplay
  [...]
  LV UUID                W6XT08-iBBx-Nrw2-f8F2-r2y4-Ltds-UrKogV
  [...]
# lvcreate -n lv_base -L 1G vg_critical
  Logical volume "lv_base" created.
# lvcreate -n lv_backups -L 11.98G vg_normal
  Rounding up size to full physical extent 11.98 GiB
  Logical volume "lv_backups" created.
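
The new LVs show up as regular block devices under /dev/mapper/ (and via the /dev/vg_critical/... symlinks), so the last step is to put filesystems on them and mount them. The commands below are only a sketch: the ext4 filesystem and the mount point are arbitrary choices for this example, not something mandated by LVM:

# mkfs.ext4 /dev/vg_critical/lv_files
# mkdir -p /srv/files
# mount /dev/vg_critical/lv_files /srv/files
# lvs

The final lvs call gives a compact summary of the logical volumes and their sizes, which is a quick way to check that the carving matches what was intended.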
