{"id":789,"date":"2023-07-11T22:01:32","date_gmt":"2023-07-11T20:01:32","guid":{"rendered":"https:\/\/www.anginf.de\/?p=789"},"modified":"2023-07-11T22:01:37","modified_gmt":"2023-07-11T20:01:37","slug":"change-from-zfs-to-mdadm-and-increase-raid5-size","status":"publish","type":"post","link":"https:\/\/www.anginf.de\/?p=789","title":{"rendered":"Change from zfs to mdadm and increase RAID5-Size"},"content":{"rendered":"\n<p>I&#8217;ve created via zfs a RAIDZ, which is <a rel=\"noreferrer noopener\" href=\"https:\/\/www.klennet.com\/notes\/2019-07-04-raid5-vs-raidz.aspx\" target=\"_blank\">roughly something like a RAID5<\/a> in the traditional way. It contained three disks and as time goes by, the resulting array (roughly double the size of a single disk) was nearly full.<\/p>\n\n\n\n<p>I got an identical vendor and size disk like the first three disks and added it to the case. But the RAIDZ-expansion-feature is at the time of this writing still not completed and far from being included in Ubuntu releases.<\/p>\n\n\n\n<p>So I came up with a plan to get a bigger array without loosing any data and without backing up all the data to an external system (I simply didn&#8217;t have *that* many spare disks).<\/p>\n\n\n\n<p>mdadm has a &#8222;&#8211;grow&#8220;-feature, so I had to copy everything to a mdadm-Raid, but without any additional disks.<\/p>\n\n\n\n<p>The plan looked like this:<\/p>\n\n\n\n<p>Initial 3 disks in RAIDZ\/zfs, set one as offline<\/p>\n\n\n\n<p>Use the offline disk and the recently added new disk to a new mdadm (RAID-5), degraded from the start.<\/p>\n\n\n\n<p>Copy everything from RAIDZ to mdadm.<\/p>\n\n\n\n<p>Destroy the remaining zfspool and add the remaining two disks to mdadm, growing, reshaping and resyncing all in one big step.<\/p>\n\n\n\n<p>The only possible setback would be a drive failure on ZFS after degrading the ZFS or a drive failure in the first two disks of the new mdadm. &#8211; This would always be a full disaster. 
So &#8211; fingers crossed &#8211; I went ahead, and everything worked fine. It took a few days (copying and resyncing are slow), but in the end it worked.<\/p>\n\n\n\n<p>To make sure I wouldn&#8217;t mess anything up, I created a demo script to test the individual steps and to check whether the idea worked at all.<\/p>\n\n\n\n<p>It comes in two parts: the first one runs until the start of the mdadm resync. The second one should be started AFTER the resync has finished. With the small demo files I used, this happened very fast; in reality it can take <em>days<\/em>.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"bash\" class=\"language-bash\">#!\/bin\/bash\nmkdir -p \/test\ncd \/test # separate working directory\nrm -f 1.disk\nrm -f 2.disk\nrm -f 3.disk\nrm -f 4.disk\nlosetup -D\numount \/test\/mnt\nmdadm --stop \/dev\/md0\nmdadm --remove \/dev\/md0\nrm -f \/test\/backupfile.mdadm\n\necho \"##### Creating images\"\ndd if=\/dev\/zero of=1.disk bs=1M count=256\ndd if=\/dev\/zero of=2.disk bs=1M count=256\ndd if=\/dev\/zero of=3.disk bs=1M count=256\ndd if=\/dev\/zero of=4.disk bs=1M count=256\nDISK1=$(losetup --find --show .\/1.disk)\nDISK2=$(losetup --find --show .\/2.disk)\nDISK3=$(losetup --find --show .\/3.disk)\nDISK4=$(losetup --find --show .\/4.disk)\nparted .\/1.disk mklabel gpt\nparted .\/2.disk mklabel gpt\nparted .\/3.disk mklabel gpt\nparted .\/4.disk mklabel gpt\nparted -a optimal -- .\/1.disk mkpart primary 0% 100%\nparted -a optimal -- .\/2.disk mkpart primary 0% 100%\nparted -a optimal -- .\/3.disk mkpart primary 0% 100%\nparted -a optimal -- .\/4.disk mkpart primary 0% 100%\n\necho \"##### Starting zfs pool on disk 1, 2, 3\"\nzpool create origtank raidz ${DISK1} ${DISK2} ${DISK3}\n\necho \"##### zpool status\"\nzpool status -v origtank\n\necho \"##### Creating test file on \/origtank\"\ndd if=\/dev\/zero of=\/origtank\/data bs=1M count=300\n\necho \"##### Setting third disk offline\"\nzpool offline origtank ${DISK3}\n\necho \"##### zpool status\"\nzpool status -v 
origtank\n\necho \"##### ls -lA \/origtank; df -h \/origtank\"\nls -lA \/origtank; df -h \/origtank\n\necho \"##### Creating new md0 from disk3 and disk4\"\n#parted -s .\/3.disk mklabel gpt\n#parted -s .\/4.disk mklabel gpt\n#parted -s -a optimal -- .\/3.disk mkpart primary 0% 100%\n#parted -s -a optimal -- .\/4.disk mkpart primary 0% 100%\nwipefs -a ${DISK3}\nwipefs -a ${DISK4}\nparted -s ${DISK3} set 1 raid on \nparted -s ${DISK4} set 1 raid on \nmdadm --create \/dev\/md0 -f --auto md --level=5 --raid-devices=3 ${DISK3} ${DISK4} missing\n\necho \"##### mdstat\"\ncat \/proc\/mdstat\nmdadm --detail \/dev\/md0\n\necho \"##### Formatting \/dev\/md0\"\nsleep 2\nmkfs.ext4 \/dev\/md0\n\necho \"##### Mount md0\"\nmkdir \/test\/mnt\nmount \/dev\/md0 \/test\/mnt\n\necho \"##### ls -lA \/test\/mnt; df -h \/test\/mnt\"\nls -lA \/test\/mnt; df -h \/test\/mnt\n\necho \"##### Copy data\"\nsleep 2\n# rsync --delete -avPH \/origtank\/ \/test\/mnt\nrsync -avPH \/origtank\/ \/test\/mnt\n\necho \"##### ls -lA \/test\/mnt; df -h \/test\/mnt\"\nls -lA \/test\/mnt; df -h \/test\/mnt\n\necho \"##### Creating NEW test file on \/origtank\"\ndd if=\/dev\/zero of=\/origtank\/dataNEW bs=1M count=30\n\necho \"##### Copy NEW data\"\nsleep 2\n# rsync --delete -avPH \/origtank\/ \/test\/mnt\nrsync -avPH \/origtank\/ \/test\/mnt\n\necho \"##### ls -lA \/test\/mnt; df -h \/test\/mnt\"\nls -lA \/test\/mnt; df -h \/test\/mnt\n\necho \"##### Destroying pool\"\nsleep 2\nzpool destroy origtank\n\necho \"##### Adding disks to md0\"\nsleep 2\nmdadm --add \/dev\/md0 ${DISK1} ${DISK2}\nmdadm --grow --raid-devices=4 \/dev\/md0 --backup-file=\/test\/backupfile.mdadm\ncat \/proc\/mdstat\n<\/code><\/pre>\n\n\n\n<p>After the mdadm resync has finished successfully, you can resize the filesystem:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"bash\" class=\"language-bash\">resize2fs \/dev\/md0\ncat \/proc\/mdstat\n<\/code><\/pre>\n\n\n\n<p>The resize only takes a few minutes (even on very large disks), but be 
patient! You can see the progress by looking at mdadm --detail.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"bash\" class=\"language-bash\">echo \"##### mdstat\"\ncat \/proc\/mdstat\nmdadm --detail \/dev\/md0\n\necho \"##### ls -lA \/test\/mnt; df -h \/test\/mnt\"\nls -lA \/test\/mnt; df -h \/test\/mnt<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>Using ZFS, I had created a RAIDZ, which is roughly comparable to a traditional RAID5. It contained three disks, and as time went by, the resulting array (roughly double the capacity of a single disk) was nearly full. I got a fourth disk of the same vendor and size as the first three and added &hellip; <a href=\"https:\/\/www.anginf.de\/?p=789\" class=\"more-link\"><span class=\"screen-reader-text\">Change from zfs to mdadm and increase RAID5-Size<\/span> read more <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-789","post","type-post","status-publish","format-standard","hentry","category-allgemein"],"_links":{"self":[{"href":"https:\/\/www.anginf.de\/index.php?rest_route=\/wp\/v2\/posts\/789","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.anginf.de\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.anginf.de\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.anginf.de\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.anginf.de\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=789"}],"version-history":[{"count":2,"href":"https:\/\/www.anginf.de\/index.php?rest_route=\/wp\/v2\/posts\/789\/revisions"}],"predecessor-version":[{"id":791,"href":"https:\/\/www.anginf.de\/index.php?rest_route=\/wp\/v2\/posts\/789\/revisions\/791"}],"wp
:attachment":[{"href":"https:\/\/www.anginf.de\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=789"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.anginf.de\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=789"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.anginf.de\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=789"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}