[conspire] NVME SSDs, SATA SSDs; SATA connectors, M.2 connectors
Rick Moen
rick at linuxmafia.com
Sat Jan 18 22:11:50 PST 2020
Passing this along for Craig's useful description of NVME in contrast
to SATA, and his disambiguation of the highly desirable and adaptable
M.2 socket from everything else. That explanation starts at paragraph
11, just after the third quoted block.
Bear in mind that any price Craig mentions will be in AUS$, where (at
current exchange rates) $1 AUS is worth $0.69 US. (Stated reciprocally,
$1 US = $1.45 AUS.)
----- Forwarded message from Craig Sanders <cas at taz.net.au> -----
Date: Sun, 19 Jan 2020 15:47:00 +1100
From: Craig Sanders <cas at taz.net.au>
To: luv-main at luv.asn.au
Subject: Re: Rebuild after disk fail
On Sat, Jan 18, 2020 at 11:06:50PM +1100, Andrew Greig wrote:
> Yes, the problem was my motherboard would not handle enough disks, and
> we did format sdc with btrfs, and left the sdb alone, so that btrfs
> could arrange things between them.
>
> I was hoping to get an understanding of how the RAID drives remembered
> the "Balance" command, when the whole of the root filesystem was
> replaced on a new SSD.
Your rootfs and your /data filesystem(*) are entirely separate. Don't
confuse them.
The /data filesystem needed to be re-balanced when you added the second
drive (making it into a RAID-1 array). 'btrfs balance' reads and
rewrites all the existing data on a btrfs filesystem, so that it is
distributed equally over all drives in the array. For RAID-1, that
means mirroring all the data on the first drive onto the second, so that
there's a redundant copy of everything.
Your rootfs is only a single partition; it doesn't have a RAID-1
mirror, so re-balancing isn't necessary (and would do nothing).
BTW, there's nothing being "remembered". 'btrfs balance' just
re-balances the existing data over all drives in the array. It's a
once-off operation that runs to completion and then exits. All **NEW**
data will be automatically distributed across the array. If you ever
add another drive to the array, or convert it to RAID-0 (definitely NOT
recommended), you'll need to re-balance again. Until and unless that
happens, you don't even need to think about re-balancing: it's no
longer relevant.
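To make that concrete, here's a minimal sketch of what adding a drive
and converting to RAID-1 looks like (the device name /dev/sdc1 and the
/data mount point are illustrative - substitute your own):

  # add the new partition to the existing btrfs filesystem
  btrfs device add /dev/sdc1 /data

  # rewrite all existing data and metadata into the RAID-1 profile,
  # mirroring everything across both drives
  btrfs balance start -dconvert=raid1 -mconvert=raid1 /data

  # check progress from another terminal, if you're curious
  btrfs balance status /data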
(*) I think you had your btrfs RAID array mounted at /data, but I may be
mis-remembering that. To the best of my knowledge, you have two
entirely separate btrfs filesystems - one is the root filesystem,
mounted as / (it also has /home on it, which IIRC you have made a
separate btrfs sub-volume for). Anyway, it's a single-partition btrfs
fs with no RAID. The other is a 2-drive btrfs fs using RAID-1, which I
think is mounted as /data.
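You can verify all of this yourself. Assuming the RAID-1 array really
is mounted at /data, something like:

  # list every btrfs filesystem and its member devices
  btrfs filesystem show

  # show whether data/metadata use the 'single' or 'RAID1' profile
  btrfs filesystem df /data

  # list the sub-volumes on the root filesystem
  btrfs subvolume list /

will show which drives belong to which filesystem and which profile
each one uses.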
> I thought that control would have rested with /etc/fstab. How do the
> drives know to balance themselves? Is there a command resident in
> sdc1?
/etc/fstab tells the system which filesystems to mount. It gets read
at boot time by the system startup scripts. And no, there's no command
resident in sdc1 - as above, 'btrfs balance' is a one-time operation,
not a setting that's stored anywhere.
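For example, entries for the layout described above might look roughly
like this (the UUIDs and the @home sub-volume name are placeholders;
check yours with 'blkid'):

  UUID=<rootfs-uuid>   /      btrfs  defaults      0  0
  UUID=<rootfs-uuid>   /home  btrfs  subvol=@home  0  0
  UUID=<data-fs-uuid>  /data  btrfs  defaults      0  0

Note that the 2-drive RAID-1 filesystem still gets just one fstab
line: both partitions share a single filesystem UUID, and the kernel
finds the other member device by itself.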
> My plan is to have auto backups, and given that my activity has seen
> an SSD go down in 12 months, maybe at 10 months I should build a new
> box, something which will handle 64GB RAM and have a decent Open
> Source graphics driver. And put the / on a pair of 1TB SSDs.
That would be a very good idea. Most modern motherboards have more
than enough NVME and SATA connectors for that. For example, most Ryzen
X570 motherboards have 2 or 3 M.2 slots for extremely fast NVME SSDs,
plus 6 or 8 SATA ports for SATA HDDs and SSDs. They also have enough
RAM slots for 64GB of DDR4 RAM, and at least 2 or 3 PCI-e v4 slots
(you'll use one of those for your graphics card).
2 SSDs for the rootfs including your home dir, and 2 HDDs for your /data
bulk storage filesystem. And more than enough drive ports for future
expansion, if you ever need it.
-----------------------
Some info on NVME vs. SATA:
NVME SSDs are **much** faster than SATA SSDs. SATA 3's line rate is 6
Gbps (600 MBps after 8b/10b encoding), so, taking protocol overhead
into account, SATA drives max out at around 550 MBps.
NVME drives run at **up to** PCI-e bus speeds - with 4 lanes, that's a
little under 32 Gbps for PCI-e v3 (approx 4000 MBps, minus protocol
overhead), and double that for PCI-e v4. That's the theoretical
maximum speed, anyway. In practice, most NVME SSDs run quite a bit
slower than that, about 2 GBps - still almost 4 times as fast as a
SATA SSD.
Some brands and models (e.g., those from Samsung and Crucial) run at
around 3200 to 3500 MBps, but they cost more (e.g., a 1TB Samsung 970
EVO PLUS (MZ-V7S1T0BW) costs around $300, while the 1TB Kingston A2000
(SA2000M8/1000G) costs around $160, but is only around 1800 MBps).
AFAIK there are no NVME drives that run at full PCI-e v4 speed (~8
GBps with 4 lanes) yet; it's still too new. That's not a problem:
PCI-e is designed to be backwards-compatible with earlier versions, so
any current NVME drive will work in PCI-e v4 slots.
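For the curious, the back-of-envelope arithmetic behind the numbers
above:

  SATA 3:    6 Gbps raw, 8b/10b encoding  -> 600 MBps, ~550 MBps
             after protocol overhead
  PCI-e v3:  8 GT/s per lane, 128b/130b   -> ~985 MBps/lane, x4 lanes
             = ~3940 MBps
  PCI-e v4:  16 GT/s per lane, 128b/130b  -> ~1970 MBps/lane, x4 lanes
             = ~7880 MBps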
NVME SSDs cost about the same as SATA SSDs of the same capacity, so
there's no reason not to get them, if your motherboard has NVME slots
(which are pretty much standard these days).
BTW, the socket that NVME drives plug into is called "M.2". M.2
supports both SATA & NVME protocols. SATA M.2 runs at 6 Gbps. NVME
runs at PCI-e bus speed. So, you have to be careful when buying to
make sure you get an NVME M.2 drive and not a SATA drive in the M.2
form-factor. Some retailers will try to exploit the confusion over
this.
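If you're ever unsure which kind you've ended up with, Linux will tell
you; e.g., lsblk (part of util-linux, so on pretty much every distro):

  # the TRAN column shows each drive's transport: 'nvme' vs 'sata'
  lsblk -d -o NAME,TRAN,MODEL,SIZE

NVME drives also show up as /dev/nvme0n1 etc., while SATA drives - M.2
or otherwise - show up as /dev/sda etc.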
craig
--
craig sanders <cas at taz.net.au>
_______________________________________________
luv-main mailing list
luv-main at luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main
----- End forwarded message -----