I recently bought two Western Digital MyBook 500 GB external USB 2.0 hard drives to house our CD collection ripped to FLAC. As they will be storing relatively small numbers of very large files, I opted to use a filesystem with a large blocksize and few inodes:
# mkfs.ext3 -T largefile4 /dev/sde1
# mkfs.ext3 -T largefile4 /dev/sdf1
Keep in mind, however, that the resulting filesystem has a fairly small number of inodes, so if you start copying lots of small files to it, it will fill up quickly even though a plain df reports plenty of free blocks (df -i shows inode usage). Of course, you could work around this by tarring and bzipping directories that contain lots of files.
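For a rough sense of the inode budget: the largefile4 type uses a bytes-per-inode ratio of 4 MiB (one inode per 4 MiB of space, per the largefile4 stanza in /etc/mke2fs.conf on typical systems), so a 500 GB drive ends up with only about 120,000 inodes:

```shell
# Rough inode budget for a 500 GB drive formatted with -T largefile4,
# which allocates one inode per 4 MiB of space. "500 GB" here is a
# marketing gigabyte, i.e. 500 * 10^9 bytes.
DISK_BYTES=$((500 * 1000 * 1000 * 1000))
BYTES_PER_INODE=$((4 * 1024 * 1024))
echo $((DISK_BYTES / BYTES_PER_INODE))   # 119209 inodes
```

Plenty for a few thousand FLAC files, but easy to exhaust with, say, an unpacked source tree.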
I soon hit a problem, however: while rsyncing data from one drive to the other, transfer speeds were abysmal, on the order of 400 KB/sec. Not good. My first instinct was to check that I had plugged the drives into USB 2.0 ports and not USB 1.1 ports. Nope, those were the right ports. So I looked at hdparm, but that is ATA-specific. There is an sdparm utility for SCSI devices, but it doesn't seem to have much support for USB and FireWire. Eventually, I found this Linux USB FAQ, which talked about the max_sectors setting. I did:
# echo 1024 > /sys/block/sde/device/max_sectors
# echo 1024 > /sys/block/sdf/device/max_sectors
This raised max_sectors from 240 to 1024 (it wouldn't go any higher than 1024), and now rsyncs were transferring on the order of 15 MB/sec. Quite an improvement from 400 KB/sec to 15 MB/sec.
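To put that difference in perspective, here is a back-of-the-envelope estimate of how long copying a full 500 GB drive would take at each rate (reading KB and MB loosely as 10^3 and 10^6 bytes/sec):

```shell
# Time to copy a full 500 GB drive at the before/after transfer rates.
DISK_BYTES=$((500 * 1000 * 1000 * 1000))
SLOW=$((400 * 1000))         # 400 KB/sec, before the tweak
FAST=$((15 * 1000 * 1000))   # 15 MB/sec, after the tweak
echo "$((DISK_BYTES / SLOW / 3600)) hours at 400 KB/sec"   # 347 hours, i.e. about two weeks
echo "$((DISK_BYTES / FAST / 3600)) hours at 15 MB/sec"    # 9 hours
```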
This is on an Ubuntu system with the 2.6.17-11-386 kernel. YMMV, but this worked like a charm for me.
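One caveat: values written under /sys are lost on reboot, so the tweak has to be reapplied at boot time, e.g. from /etc/rc.local. A minimal sketch of such a script, with the sysfs root passed as an argument so the loop can be dry-run against a scratch directory rather than real devices (the sd* pattern and the 1024 value are the ones from this post):

```shell
#!/bin/sh
# Sketch: reapply the max_sectors tweak to all SCSI-disk style devices
# under a given sysfs root. Takes the root as $1 so it can be tested
# against a fake directory tree; on a real system you would call
#   raise_max_sectors /sys/block
raise_max_sectors() {
    for f in "$1"/sd*/device/max_sectors; do
        [ -e "$f" ] || continue   # glob matched nothing, or no such file
        echo 1024 > "$f"
    done
}
```

Calling raise_max_sectors /sys/block from /etc/rc.local would restore the setting on every boot, though it assumes the drives enumerate as sd* devices each time.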