[sf-lug] tar Workaround for the Ubuntu "slow copy" Problem

Ken Shaffer kenshaffer80 at gmail.com
Sun Aug 3 15:16:59 PDT 2025


tar workaround for the "slow copy" problem

Anyone who has had to copy or move a large amount of data (large meaning
twice your memory size or more) has probably noticed that the transfer
rate steadily drops to unacceptable levels (under 10MB/sec in my
experience, though I have seen reports of rates as low as 10KB/sec).
People have told me they cannot use Linux for their work because they
cannot back up their data in a timely fashion!

The basic problem is that writes are slower than reads, system buffers
fill up, memory gets fragmented, and things drag to a crawl (move the
mouse, and 30 seconds later the cursor may move 2 inches). Search the
askubuntu.com site for "slow copy" or "low transfer rate" and you will
find many suggestions for speeding things up, but most involve tweaking
system parameters: changing the I/O scheduler, altering
vm.dirty_bytes/ratio, and so on. Making such changes may be undesirable
on a general-purpose computer. When I had to move 300GB of data from an
internal SSD (PCIe 3) over a USB 3.1 gen1 port to an SSD (PCIe 4) in a
USB 3.1 gen2 enclosure, I ran the command below from the mounted
partition location on an otherwise idle system to avoid the memory
fragmentation problem and the subsequent slowdown:

sudo nocache tar c --record-size=500M -f- . |(cd /mnt/a2; sudo tar -xpBf -)
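
If nocache is not already on your system, it is packaged for
Ubuntu/Debian, and if you want to watch the transfer rate while the copy
runs, the pv pipe viewer can sit in the middle of the pipeline. (These
lines are an optional addition, not part of the run described here;
/mnt/a2 is simply where my destination partition happened to be mounted,
so substitute your own mount point.)

sudo apt install nocache pv
sudo nocache tar c --record-size=500M -f- . | pv | (cd /mnt/a2; sudo tar -xpBf -)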

The "nocache" prevents the input files from being uffered, saving the
system buffers for output. Apparently the 500M (ridiculously large)
recordsize gives the system time to run defragmentation, keeping transfer
rates up and system performance acceptable. Watching /proc/buddyinfo, you
can see cycles of free memory building up in the smaller blocks, and
eventually getting recombined into larger blocks.
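
To follow that cycle yourself while a copy is running, something as
simple as the following works (the 5-second interval is arbitrary):

watch -n 5 cat /proc/buddyinfo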

No hit to mouse responsiveness was seen. Copy rates for the first 100GB
were 215MB/sec (at which point I went to lunch). No other system changes
were made (scheduler, vm.dirty_bytes/ratio, ...), and the virt-manager
daemon, which caches .img files, was not killed. A 300GB copy from the
internal PCIe 4 SSD to a USB NVMe M.2 SSD showed acceptable performance,
with no slowdowns noticed.

/proc/buddyinfo did show periods of fragmentation, but the existing
defaults dealt with it. This might need a tweak on a busier system
(vm.dirty_bytes/ratio).
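
For reference, those knobs can be tried out temporarily with sysctl
before committing anything to /etc/sysctl.conf; the values below are
only illustrative, not settings I have tuned for this workload:

# illustrative only: 512MB background writeback threshold, 1GB dirty limit
sudo sysctl -w vm.dirty_background_bytes=536870912
sudo sysctl -w vm.dirty_bytes=1073741824
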
Ken

