Cool Command #1 – DD with Progress


(Amended 18 Sept 2015)

sudo dd if=/dev/sdc of=/dev/sdb bs=512 conv=sync,noerror status=progress
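A word of caution before running that: dd writes exactly where you point it, so it pays to confirm which device is which first. A quick check (the device names in the dd line above are only examples - yours will differ):

```shell
# List block devices so you can confirm which drive is the source and
# which the target before cloning - dd will overwrite the wrong drive
# without complaint. NAME, SIZE and MOUNTPOINT usually make the drives
# easy to tell apart.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```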

For insight into choosing the best block-size values when copying to or from different media types, and into using dd in general, read this link FIRST - see the quotes below for some perspective on optimising your copying time:

https://blog.tdg5.com/tuning-dd-block-size/

"Let's start off with a few tests writing out to a HDD:

  • Reading from /dev/zero and writing out to a HDD with the default block size of 512 bytes yields a throughput of 10.9 MB/s. At that rate, writing 1TB of data would take about 96,200 seconds or just north of 26 hours.
  • Reading from /dev/zero and writing out to a HDD with the Eug-Lug suggested block size of 64K yields a throughput of 108 MB/s. At that rate, writing 1TB of data would take 9,709 seconds or about 2.7 hours to complete. This is a huge improvement, nearly an order of magnitude, over the default block size of 512 bytes.
  • Reading from /dev/zero and writing out to a HDD with a more optimal block size of 512K yields a throughput of 131 MB/s. At that rate, writing 1TB of data would take about 8,004 seconds or about 2.2 hours. Though not as pronounced a difference, this is even faster than the Eug-Lug suggestion and is more than a full order of magnitude faster than the default block size of 512 bytes."

You may want to run similar tests on your own kit - say copying a CD-sized .iso to and from an .img file - using the fastest block size you find from the tests below.

As the article states, it is all very dependent on YOUR OS, YOUR memory and YOUR drive types (IDE, SATA, SDRAM, USB2, USB3, SSD, etc.), and there will be a different optimum for each case.
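One way to run such a test on your own files (the ISO name here is a placeholder; conv=fdatasync is my addition - it forces the data to disk before dd reports a speed, so the page cache doesn't flatter the figure):

```shell
# Copy a CD-sized ISO at a candidate block size and get an honest speed:
# conv=fdatasync makes dd flush the data to disk before printing its
# summary line, rather than timing a write into the page cache.
dd if=mint.iso of=/tmp/copy-test.img bs=512K conv=fdatasync
rm /tmp/copy-test.img   # tidy up the test copy afterwards
```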

As a rough test of CPU/memory performance, starting near the default of bs=512 bytes, try the lower range first:

Mint5630 stevee # dd if=/dev/zero of=/dev/null bs=512 count=10
10+0 records in
10+0 records out
5120 bytes (5.1 kB) copied, 0.000136822 s, 37.4 MB/s

Mint5630 stevee # dd if=/dev/zero of=/dev/null bs=1K count=10
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000248849 s, 41.1 MB/s
Mint5630 stevee # dd if=/dev/zero of=/dev/null bs=2K count=10
10+0 records in
10+0 records out
20480 bytes (20 kB) copied, 0.000159098 s, 129 MB/s
Mint5630 stevee # dd if=/dev/zero of=/dev/null bs=4K count=10
10+0 records in
10+0 records out
40960 bytes (41 kB) copied, 0.000250597 s, 163 MB/s
Mint5630 stevee # dd if=/dev/zero of=/dev/null bs=8K count=10
10+0 records in
10+0 records out
81920 bytes (82 kB) copied, 0.000203375 s, 403 MB/s
Mint5630 stevee # dd if=/dev/zero of=/dev/null bs=16K count=10
10+0 records in
10+0 records out
163840 bytes (164 kB) copied, 0.000283561 s, 578 MB/s
Mint5630 stevee # dd if=/dev/zero of=/dev/null bs=32K count=10
10+0 records in
10+0 records out
327680 bytes (328 kB) copied, 0.000219022 s, 1.5 GB/s
Mint5630 stevee # dd if=/dev/zero of=/dev/null bs=64K count=10
10+0 records in
10+0 records out
655360 bytes (655 kB) copied, 0.00045279 s, 1.4 GB/s
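Rather than pressing UP and editing bs each time, the whole sweep can be scripted - a minimal sketch:

```shell
#!/bin/sh
# Sweep block sizes from 512 bytes to 1M and print dd's summary line
# for each. /dev/zero -> /dev/null exercises only CPU and memory, not
# a real disk, so this measures the upper bound for each block size.
for bs in 512 1K 2K 4K 8K 16K 32K 64K 512K 1M; do
    printf '%-5s: ' "$bs"
    # dd prints its summary on stderr; keep only the final line
    dd if=/dev/zero of=/dev/null bs="$bs" count=1000 2>&1 | tail -n 1
done
```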

You can see the peak speed is at bs=32K in those tests. It may improve again at even larger block sizes, as below.

This also shows that on newer PCs, the old idea of matching bs to the 4K memory page size is not necessarily optimal; binary multiples (powers of two) are probably a better choice than, say, bs=3K, but experiment yourself.

If you experiment, you may find that these numbers are not even consistent for the same block size when repeated, as it depends what else the OS is doing; running the same command over again (UP arrow!) yields very different results. The best you can infer is the average of, say, at least 5 runs of the up arrow, as below! I'd put these averages at about 2.6 GB/s. The average, not the peaks, is what counts once the large file transfer starts, after all.

Mint5630 stevee # dd if=/dev/zero of=/dev/null bs=512K count=10
10+0 records in
10+0 records out
5242880 bytes (5.2 MB) copied, 0.0016244 s, 3.2 GB/s
Mint5630 stevee # dd if=/dev/zero of=/dev/null bs=512K count=10
10+0 records in
10+0 records out
5242880 bytes (5.2 MB) copied, 0.0019947 s, 2.6 GB/s
Mint5630 stevee # dd if=/dev/zero of=/dev/null bs=512K count=10
10+0 records in
10+0 records out
5242880 bytes (5.2 MB) copied, 0.00194826 s, 2.7 GB/s
Mint5630 stevee # dd if=/dev/zero of=/dev/null bs=512K count=10
10+0 records in
10+0 records out
5242880 bytes (5.2 MB) copied, 0.00196851 s, 2.7 GB/s
Mint5630 stevee # dd if=/dev/zero of=/dev/null bs=512K count=10
10+0 records in
10+0 records out
5242880 bytes (5.2 MB) copied, 0.00204192 s, 2.6 GB/s
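Averaging those five runs by eye works, but it can also be scripted - a sketch that pulls the elapsed-seconds field out of dd's summary line and reports the mean throughput:

```shell
#!/bin/sh
# Run the same dd test five times and report the mean throughput, since
# any single run varies with whatever else the OS happens to be doing.
bs=512K; count=10
for run in 1 2 3 4 5; do
    dd if=/dev/zero of=/dev/null bs=$bs count=$count 2>&1
done | awk -v bytes=$((count * 512 * 1024)) '
    /copied/ { secs += $(NF-3); n++ }    # elapsed seconds, per run
    END { printf "mean of %d runs: %.1f MB/s\n", n, bytes / (secs / n) / 1e6 }'
```

The seconds figure sits four fields from the end of dd's summary line in both the older and newer coreutils output formats, which is why $(NF-3) is used rather than a fixed field number.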

The above was run on a dual-core Intel T5500 laptop with 2 GB RAM, running 32-bit Mint (i686):

uname -a
Linux Mint5630 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8 09:44:48 UTC 2015 i686 i686 i686 GNU/Linux

As you can see below, the same bs=512K run on a different and MORE POWERFUL 64-bit dual-core PC with 6 GB RAM - but running an OS not ideally matched to it - gives very different results, at lower speed:

DELLMINT stevee # uname -a
Linux DELLMINT 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8 09:43:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

The better choice for this PC is bs=512K, as 1M drops off again:
DELLMINT stevee # dd if=/dev/zero of=/dev/null bs=512K count=10
10+0 records in
10+0 records out
5242880 bytes (5.2 MB) copied, 0.00306584 s, 1.7 GB/s
DELLMINT stevee # dd if=/dev/zero of=/dev/null bs=512K count=10
10+0 records in
10+0 records out
5242880 bytes (5.2 MB) copied, 0.00307384 s, 1.7 GB/s
DELLMINT stevee # dd if=/dev/zero of=/dev/null bs=512K count=10
10+0 records in
10+0 records out
5242880 bytes (5.2 MB) copied, 0.00295928 s, 1.8 GB/s
DELLMINT stevee # dd if=/dev/zero of=/dev/null bs=512K count=10
10+0 records in
10+0 records out
5242880 bytes (5.2 MB) copied, 0.0030166 s, 1.7 GB/s
DELLMINT stevee # dd if=/dev/zero of=/dev/null bs=512K count=10
10+0 records in
10+0 records out
5242880 bytes (5.2 MB) copied, 0.00292228 s, 1.8 GB/s
DELLMINT stevee # dd if=/dev/zero of=/dev/null bs=512K count=10
10+0 records in
10+0 records out
5242880 bytes (5.2 MB) copied, 0.00302872 s, 1.7 GB/s

So, in the real world, what are the results of, say, cloning a 64 GB SanDisk USB3 pen drive holding a Mint OS?

It took about 94 minutes at bs=1K.

Running it again at bs=512K, the optimum found for that PC above, gave:

dd if=/dev/sdh of=/Quadra/AMDA8Pen2.img bs=512k
120704+0 records in
120704+0 records out
63283658752 bytes (63 GB) copied, 4671.51 s, 13.5 MB/s

or 4671 s ÷ 60 ≈ 78 mins,

which is not that much better anyway - a saving of roughly 17 minutes going by the output - BUT that time is low anyway, as the real elapsed time for me was 1 hr 25, or 85 minutes.

Another real example: I recently copied a 160 GB SATA drive to an external USB2 disk at dd's default block size, which took 30,962.4 seconds - nearly 9 hours:

$ sudo dd if=/dev/sdc of=/dev/sdh
[sudo] password for stevee:
312581808+0 records in
312581808+0 records out
160041885696 bytes (160 GB) copied, 30962.4 s, 5.2 MB/s

-----------

So, back to DD itself

man dd

NAME
dd - convert and copy a file

SYNOPSIS
dd [OPERAND]...
dd OPTION

DESCRIPTION
Copy a file, converting and formatting according to the operands.

DD is a very effective tool for copying or cloning a device's file system byte for byte - a partition, a whole drive, etc. - and so can clone your OS or any other file system as a backup.

Its default block size is 512 bytes for a good reason: that is the traditional disk sector size, so it can copy a boot sector exactly, and copying smaller chunks at a time makes it less prone to propagating read errors from the source.
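For example, that 512-byte default maps exactly onto a traditional disk sector, so backing up just the MBR (boot code plus partition table) is a one-liner - /dev/sda and the output file name here are placeholders:

```shell
# Copy only the first sector of the drive: 512 bytes holding the MBR
# boot code and the primary partition table. count=1 stops dd after a
# single block, so nothing else on the drive is read.
sudo dd if=/dev/sda of=mbr-backup.img bs=512 count=1
```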

Here is the man dd text file:
manDD.txt

For DD for Windows, see here:

https://uranus.chrysocome.net/linux/rawwrite/dd-old.htm