[sf-lug] Meeting notes for May 2, 2021
Michael.Paoli at cal.berkeley.edu
Sun May 2 18:45:50 PDT 2021
> From: "Bobbie Sellers" <bliss-sf4ever at dslextreme.com>
> Subject: [sf-lug] Meeting notes for May 2, 2021
> Date: Sun, 2 May 2021 14:35:35 -0700
> The question was raised as to the best distribution to
> install on an older machine with lower specification and
> I called MX-Linux, someone else said AntiX and I believe
> it was Michael Paoli who suggested Debian's multi-arch
I suggested that more/most generally, as a handy way to test relatively
arbitrary x86 hardware for 64-bit compatibility. With that ISO, can
(attempt to) boot 64-bit. If the hardware isn't 64-bit, it will be pretty
obvious - not only in the failure, but in the manner of failure. Can also
boot 32-bit from that same ISO. And either way, can proceed, if so desired,
to install from there - 64-bit, or 32-bit. I'd often find it quite handy,
e.g. at in-person LUG meetings and such. Often someone brings in a computer, and they're
not sure if it's 64-bit or not. Well, boot the ISO, and in a matter of
minutes or less - we know. And too, as others essentially pointed out,
could just boot 32-bit and examine the relevant CPU flags, e.g. in
/proc/cpuinfo - and determine from that.
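That flag check can be sketched like this (the "lm" / long mode flag on
x86 indicates a 64-bit-capable CPU):

```shell
# "lm" (long mode) in the CPU flags means the x86 CPU is 64-bit capable
if grep -qw lm /proc/cpuinfo; then
    echo "CPU is 64-bit capable"
else
    echo "CPU is 32-bit only"
fi
```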
Though too, booting 64-bit is a more thorough test. E.g. if the computer
hasn't been run with 64-bit and the hardware has bugs with 64-bit.
E.g. I remember an installfest done at Noisebridge. One other person and I
were trying to diagnose why 64-bit install attempts were failing on
someone's theoretically 64-bit-capable computer. We did isolate it to
some hardware issue - even though the CPU was compatible ... the other
person further isolated it - to a faulty memory controller. It failed in
64-bit - but that controller only controlled one of the two RAM banks.
So ... the practical options were, either install 32-bit, or don't use
the RAM bank that has the faulty memory controller that fails in 64-bit
mode but works in 32-bit mode.
> Then we got into the matter of the possibility of converting a MBR system
> to GPT without Data Loss. Michael P. tried to setup a MBR disk to test the
> He did most of his work in terminal which he shared with the rest of us.
> He did have problems working from the terminal but eventually he got
Uh, I don't know that I'd say I had any problems.
> the MBR straightened out. He might let us know if the conversion
And yes, I did fully and successfully complete the in-place conversion,
with no data loss. And, not unexpectedly - having first allocated all the
partition space in MBR - I did need to slightly shrink one filesystem
and two partitions (one logical and one extended). So, really not all that
complex to do.
It went roughly like this:
To test on, I created 2 flat files, sparse, 2 GiB logical size each.
Then I set up loop devices to each.
Then I just worked on the one - MBR partitioned it and, per request, set up
a modest-sized FAT filesystem on primary partition 1, then logical partition
5 (within the extended partition) using all remaining available partition
space, and formatted partition 5 as ext4.
Then partx -a to access the partitions, mounted 'em, and put a file with some
contents on each, so we could sanity-check our final results.
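A rough sketch of that setup (not the exact commands from the meeting -
sizes and the FAT partition size here are just illustrative, and the
root-requiring steps are left as comments):

```shell
# two 2 GiB sparse backing files -- logical size 2G, near-zero blocks used
truncate -s 2G mbr.img gpt.img
ls -ls mbr.img    # first field: blocks actually allocated (near 0 if sparse)

# MBR ("dos") label on the first image: primary 1 as FAT, extended 2 and
# logical 5 using all remaining space; sfdisk works on a flat file, no root
sfdisk mbr.img <<'EOF'
label: dos
,64MiB,c
,,E
,,L
EOF

# filesystem creation and mounting need root and a loop device, roughly:
#   losetup --show -f mbr.img     # prints e.g. /dev/loop0
#   partx -a /dev/loop0           # expose /dev/loop0p1, /dev/loop0p5
#   mkfs.vfat /dev/loop0p1 && mkfs.ext4 /dev/loop0p5
#   (mount each, drop a small file on each for the later sanity check)
```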
It was suggested to first try gparted ... not where I would've started, but
per request ... there didn't seem to be a way to do it in gparted - also
checked the top Internet search results on the matter ... the top few
results or so on doing it with gparted basically said to back up all the
data, blow away your target using gparted (and then partition & restore and
such). Not what we wanted to do, so ... CLI tools, and relatively low-level,
as I would've been inclined to do anyway. Went that route.
Examined and saved the situation with sfdisk (-uS -d).
Checked for, and found, a similar tool for GPT ... I recalled a rough
equivalent to sfdisk (MBR) ... starting with s, containing disk ...
found it again ... sgdisk. Rough functional equivalence, but very
different syntax, etc. (so, man sgdisk ... peeked a bit - Ken did too).
sgdisk has a nice -g option to convert ... handy ... tried it, expecting
it might fail ... it did, but with an informative diagnostic.
Used that and some more poking about - I think I created GPT on our
other loop device / flat file ... and used sgdisk to examine that.
Between that and the earlier diagnostic, it was clear exactly what needed
to be moved/shrunk off from where. Then basically proceeded to do so.
Shrank the ext4 filesystem by 5 4096-byte blocks (the 34 512-byte
blocks/sectors we needed to get the partition(s) off of, rounded up to the
filesystem block size). So, did that. Then used sfdisk to shrink both the
logical and extended partitions by those 34 512-byte blocks/sectors.
Then sgdisk -g - and we were essentially done.
Proceeded to verify ... fsck -n on the shrunk filesystem - fine;
mounted both filesystems;
checked the files we'd put on our filesystems ... all fine.
After that, just tore it down and cleaned up.
And for brevity, the above doesn't cover all of the step details.
Anyway, (optionally sparse) flat file(s) and loopback device(s) can
be a very handy way to test or dry-run many various potential disk
change activities (repartitioning non-destructively or otherwise,
moving/shrinking partitions, filesystem data, etc.). Can even do
that with/on Virtual Machine(s) - but sometimes that's overkill for
testing out the disk data manipulation stuff.
And the shrinking ... from the innermost layer out. The reverse for growing.
In our case: filesystem, logical, extended ... though with sfdisk we did
the last two simultaneously (fine in this case, but if done separately,
they would need to be done in the specified order).
Did also earlier cover the case of shrinking and relocating the start of a
filesystem ... but the filesystem to be relocated in that one was quite
small, so I took the lazy/efficient approach and just copied it, rather
than doing an in-place move. Maybe I'll show that later.
> was practical but around 12:45 PM we developed audio problems and
> after a reboot and reconnection there was no improvement at my end
> so at 12:54 I signed out for the day leaving only Michael and Aaron
> still in the meeting.
I think Bobbie had some audio issues ... Aaron too, a bit, but not as much,
and it fairly quickly cleared up, or mostly so. I think Tom intermittently
had slight bandwidth issues - but it mostly worked fine. I don't think I was
aware of anyone else having issues with Jitsi Meet in general. Oh, I think
intermittently some lost the video feeds from others - and it wasn't
like we all lost 'em, or at the same time. I think Jitsi Meet, with the
public server we use, was otherwise glitch-free for this meeting.
And I think the audio and bandwidth issues, etc. were local, so I don't
think anything definitively had issues on the Jitsi Meet server side.