[sf-lug] Disk is cheap ...

Michael Paoli Michael.Paoli at cal.berkeley.edu
Sat Apr 10 06:55:07 PDT 2021


Disk is cheap ...
http://linuxmafia.com/pipermail/sf-lug/2021q1/015201.html
Might we be detecting a theme now?  ;-)

So ... the balug Virtual Machine (VM).
It was starting to get a bit tight on space.
No biggie, easy peasy, it's virtual, right?  Well, mostly quite easy.
I'd "only" given it 16 GiB of virtual drives space.
Given what it was using and what was left, and probable near to
medium-term future
(including also Debian buster --> bullseye upgrade
probable later this year), I figured it was time to bump it up to
at least 20 GiB ... and preferably even 24 GiB ... but not more than
that ... at least anytime soon.  And ... why not more?
Well, not only is that less space available for other purposes (probably
matters most on my personal laptop, where drive space is more generally
and largely consumed, compared to the other physical host vicki,
which has quite ample space to spare, as it's basically just [L]UG stuff
and nothing else there), but in addition to that, live migrations with
--copy-storage-all.  A lovely feature that lets one do live migrations
between physical hosts - even when the hosts have no shared common
storage (no clustered filesystems/drives, SAN, NFS, or the like).
What --copy-storage-all does, behind the scenes, to accomplish this,
is use network block device(s).  It essentially changes the
storage to network block device, then sets that up as RAID-1 and
mirrors it between the two hosts - once that storage is synced,
it can do the remainder of the live migration, and once the live
migration is completed, it can break the mirror and continue on
its merry way on the migrated-to physical host.  Anyway,
larger physical storage would mean more data to be written on target
host with each such live migration, and more time to do so.  So,
a tradeoff between storage size for the VM, and not being excessive
beyond what's needed, or likely to be needed in the near to
medium-term future.  Also, relatively easy to grow later, and
more difficult to shrink (which could also be done) so ...
bump it up from 16 GiB to ... 20 or 24 GiB.
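For the curious, such a migration boils down to roughly a single
command like the following - a sketch, with the destination URI
being illustrative rather than the actual host:
# virsh migrate --live --copy-storage-all --persistent balug \
qemu+ssh://otherhost.example.com/system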

I might've preferred 24 GiB (less likely to need to do likewise again
anytime soon), but the factor that kept me back on that ...
space on my personal laptop - the other physical host.
And more notably, the remaining RAID-1 space on that.

So ... that physical host ... RAID-1 ... on both my laptop, and the
vicki physical hosts, some fair bit of RAID-1 storage ... using md
(mdadm & friends).  Quite nice.  Has many advantages over hardware
RAID (much more flexible, avoids hardware and hardware support
dependencies, etc.).  But the laptop ... not so much
RAID-1 ... two SSDs, but not the same size:
$ grep . /sys/block/sd[a-z]/size
/sys/block/sda/size:4004704368
/sys/block/sdb/size:312581808
$ echo '4004704368/2/1024/1024;312581808/2/1024/1024' | bc -l
1909.59185028076171875000
149.05062103271484375000
So, basically 2TB and 150GB
Well, the 150GB is pretty much all set up as RAID-1
... and ... nearly all consumed.
The remainder as non-RAID ... but more on that in a bit.

Anyway, ... balug VM ... bumped it up to 20 GiB - ample for now
and a fair while.  And, quite easy with the various virtual and
management layers ... on each physical host ... that storage is an LVM
volume ... # lvextend -l +... and done
But the VM doesn't yet know about the space ...
# virsh blockresize balug /var/local/balug/balug-sda 20971520
Block device '/var/local/balug/balug-sda' is resized
and that's done.  Same logical path on both hosts, so the VM
configurations match ... though those paths (symbolic links)
go to different physical storage on each (but it's md RAID-1 on
each).
After that, just the VM itself - and all of this was done live.
On the VM, it's /dev/vda - once that virsh command was
done, the VM automagically sees the additional space on
/dev/vda.  Had it been, e.g., /dev/sda, probably would've
needed to rescan it, e.g.:
# echo 1 > /sys/block/sda/device/rescan
But with /dev/vda, didn't even need to do that.
Then repartitioned ... did that (carefully!) with sfdisk.
Then partprobe for the kernel to pick up the changes.
Then pvcreate and vgextend, and the space is all well and
available.  Then I did a bit of filesystem growing - enough
for now ... lvextend ... resize2fs ... and done.
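Spelled out, that in-VM sequence goes roughly like the following -
a sketch, where the partition, VG, and LV names are illustrative
rather than the actual ones:
# sfdisk -d /dev/vda > vda.sfdisk.save   # save the current table first
# sfdisk /dev/vda < vda.sfdisk.new       # (carefully!) apply the edited table
# partprobe /dev/vda                     # kernel picks up the new partition
# pvcreate /dev/vda3                     # initialize the new partition as a PV
# vgextend balugvg /dev/vda3             # add that PV to the VM's VG
# lvextend -L +1G /dev/balugvg/var       # grow an LV - size illustrative
# resize2fs /dev/balugvg/var             # grow the filesystem to match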

The remainder as non-RAID ... now for more on that.
That's on the physical laptop.  Yes, md is very flexible with
RAID (and non-RAID, etc.).  E.g. one can set up "fake" RAID-1,
with just a single device - so no actual redundancy and a
nominal case of just one device.  Why in the heck would one
want to do that?  Future-proofing.  Want to potentially go
to actual protected RAID-1 in future?  Set it up like that
now, then to go to RAID-1, just add the 2nd device, set the
nominal number of devices to 2, instead of just one, and done,
now on RAID-1.  That's way easier than taking a device
that's not used at all by md, and converting it to md RAID-1.
Need space for the header for md - normally on the device
itself.  Well, if one already put that there earlier, then
it's already covered.  If not, one has to work out how to
put it there, e.g. shrink the filesystem a bit and reposition it
a bit, so the space is there for it, then add the header.
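E.g., creating such a single-device "fake" raid1 from scratch goes
roughly like this (device and md names illustrative; mdadm wants
--force before it will accept a one-device raid1):
# mdadm --create /dev/md13 --level=raid1 --force --raid-devices=1 \
/dev/mapper/somedevice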
Anyway, I so future-proofed my non-RAID / "fake" RAID on
md.  Looking at /proc/mdstat, and rearranging and reformatting a bit,
and dropping out some less interesting bits, we have:
md1 : active raid1 sdb1[1] sda1[2] 248640 blocks super 1.2 [2/2] [UU]
md5 : active raid1 dm-1[1] dm-0[0] 19484672 blocks super 1.2 [2/2] [UU]
md6 : active raid1 dm-6[1] dm-5[0] 19484672 blocks super 1.2 [2/2] [UU]
md7 : active raid1 dm-8[1] dm-7[0] 19484672 blocks super 1.2 [2/2] [UU]
md8 : active raid1 dm-10[1] dm-9[0] 19484672 blocks super 1.2 [2/2] [UU]
md9 : active raid1 dm-12[1] dm-11[0] 19484672 blocks super 1.2 [2/2] [UU]
md10 : active raid1 dm-14[1] dm-13[0] 19484672 blocks super 1.2 [2/2] [UU]
md11 : active raid1 dm-4[1] dm-3[0] 19484672 blocks super 1.2 [2/2] [UU]
md12 : active raid1 dm-16[1] dm-15[0] 19484672 blocks super 1.2 [2/2] [UU]
md13 : active raid1 dm-2[0] 230624256 blocks super 1.2 [1/1] [U]
md14 : active raid1 dm-17[0] 230624256 blocks super 1.2 [1/1] [U]
md15 : active raid1 dm-18[0] 230624256 blocks super 1.2 [1/1] [U]
md16 : active raid1 dm-19[0] 230624256 blocks super 1.2 [1/1] [U]
md17 : active raid1 dm-20[0] 230624256 blocks super 1.2 [1/1] [U]
md18 : active raid1 dm-21[0] 230624256 blocks super 1.2 [1/1] [U]
md19 : active raid1 dm-22[0] 230624256 blocks super 1.2 [1/1] [U]
md20 : active raid1 dm-23[0] 230624256 blocks super 1.2 [1/1] [U]
I also set all the md device #s to correlate to the partition #s -
where the underlying data is stored - much less confusing that way.
So ... all the devices are set up as "raid1" ... but ... are they
really?  md{1,[5-9],1[0-2]} are real raid1, notice each with
two devices, [2/2] and [UU].  Whereas md{1[3-9],20} are
one device, [1/1] and [U] - those are "fake" raid1 - not only
just a single device each, but set to nominally be so (so md doesn't
constantly nag me about a missing device from my raid1).
And ... nicely future-proofed.  Just add/grow the storage (which I'm
now planning to do), and I can convert any and/or all of those
"fake" raid1 md devices into real raid1 ... and easy peasy since the
headers for md are already on all of 'em.
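That conversion is then roughly just (names again illustrative):
# mdadm /dev/md13 --add /dev/mapper/newdevice  # attach the 2nd device
# mdadm --grow /dev/md13 --raid-devices=2      # nominal 1 -> 2; it resyncs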
So, yes, md RAID - quite nice and flexible ... hardware RAID often
isn't nearly as flexible - one of its many disadvantages.
E.g. when I replace my 150 GB SSD with a 2 TB one, then I can do it
all as real raid1.  Or, I can mix and match - some real raid1, and some
"fake" raid1 - which is probably what I'll do.  I definitely want to
grow my RAID-1 capacity on that host beyond 150 GB (the more
crucial/important data).  But much of the other storage - not really
worthy of RAID-1 - often very redundant and/or rather to quite
unimportant, and certainly not critical.  So, with md, I can
nicely mix and match as I see fit.  Can also change 'em on-the-fly.
Also works very nicely with LVM too - making things even more
flexible.  E.g. my RAID-1 was running a bit tight on space; took
a careful look at what I had on RAID-1, some of it not worthy of
such ... LVM ... pvmove ... got it off RAID-1, leaving more space
to grow the LV that's used for the storage of the balug VM.
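That pvmove step, roughly (LV and PV names illustrative):
# pvmove -n notsocrucial /dev/md5 /dev/md13  # move that LV's extents off the raid1 PV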

Disk is cheap ... so ...
investing in the small md header overhead makes it way easier to later
upgrade the non-RAID ("fake" RAID) to real protected RAID-1.
And ... the 2 TB SSD I'm looking at acquiring (have some gift card
bits ... used some "reward" points I had piling up ... it will
all be covered between existing and ordered gift cards from
that).  And ... disk is cheap (and getting cheaper) ...
my first 1.5 TB drives ... those were around $100 each - a fair
number of years back ... SSD - my first was 150 GB - don't
recall exactly what I paid for that, but it was part of a new
laptop order ... roughly 3 years later, I bought a 2 TB SSD ...
that was around $529 at the time.  Bought a 2nd 2 TB SSD after
that (to replace a failing/flaky 1 TB HDD used in the off-site
backup rotations).  And now ... 2 TB SSD - around $200.00,
heck, if we adjust for inflation, probably cheaper than the 1.5 TB
HDD from about 9 years ago, greater capacity, and greatly superior
performance.  So ... disk is cheap ... and keeps getting cheaper.

Oh, also, to make it easy with LVM and the mix of (real) RAID-1 and
"fake" RAID-1, so that I use the intended storage - I've tagged the
PVs ... @raid1 and @unprotected.  So it's very easy to select
which I want to use, and also to check that things are stored as I want
regarding actual protected RAID-1, and not so protected.
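Roughly like so (VG, LV, and which PV gets which tag are illustrative):
# pvchange --addtag raid1 /dev/md5          # tag a real raid1 PV
# pvchange --addtag unprotected /dev/md13   # tag a "fake" raid1 PV
# pvs -o +pv_tags                           # check which PV has which tag
# lvcreate -L 4G -n crucial vg0 @raid1      # allocate only from @raid1 PVs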

And, yes, like the vicki physical host, the personal laptop
(which has two internal drive slots) is quite set up so it
can well boot from either drive - and that was also tested long
ago.  So, "worst case"*, if the large drive dies, I still have all
the more/most important data, and up to current.  And if the smaller
dies, I lose nothing but redundancy.
*well, of common single drive failure scenarios.  Then of course
there's always backups and off-site backups, etc.

Also looks like I might have a 150 GB SSD looking for a home in the
nearish future ... or maybe not.  Might repurpose it for the balug
VM's alternate physical host - presently vicki ... that may change
to something else, and maybe that SSD might get used there ... or
maybe even on the existing vicki physical host ... haven't really
decided yet.

And remember, RAID is *not* backup, it's merely some limited
redundancy.



