[conspire] questions on next steps to use external firewire harddrive

Darlene Wallach freepalestin at dslextreme.com
Wed Jul 25 09:33:50 PDT 2007


Rick Moen wrote:
> Quoting Darlene Wallach (freepalestin at dslextreme.com):
> 
> 
>>1. Here is what is reported by df -h for the external
>>firewire harddrive:
>>
>>$ df -h
>>Filesystem            Size  Used Avail Use% Mounted on
>>/dev/sdb2             148G  189M  140G   1% /media/ieee1394disk
>>
>>Since the disk is really larger -- I specified
>>+2G for swap and +161G for the rest, and fdisk reports:
>>
>>Command (m for help): p
>>
>>Disk /dev/sda: 163.9 GB, 163928604672 bytes
>>
>>am I correct in assuming that about 21G is bad blocks?
>>I used:
>>mkfs.ext3 -c -c /dev/sda2 157236187
> 
> 
> 
> Er, I don't know.  21GB does sound like an extraordinarily
> high number for bad blocks on a hard drive.  If true, that would in
> my opinion be ample grounds for considering the drive basically
> defective, and returning it under warranty (if any).
> 
> Wait a minute:  I have to wonder why you said "157236187" at the end of
> that command -- and suspect you may have inadvertently created a
> filesystem shorter than the available partition space.
> 
> That number will get interpreted as "Make the filesystem be exactly this
> number of blocks, regardless of how many blocks there actually are."  I
> have never used that parameter for the mkfs.* commands, and suspect
> it's normally against your best interest.  Normally, one would allow
> the mkfs utility to figure out for itself how many blocks to allocate --
> and I strongly suspect you should do that, and that you'll find yourself
> suddenly regaining the missing space, when you do.
> 
> So, I'd say you should do that.
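> 
> For example (just a sketch -- your original command, minus the trailing
> number):
> 
>   # mkfs.ext3 -c -c /dev/sda2
> 
> That way, mkfs reads the partition's true size for itself.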
> 
> 
> 
>>Since I used mkfs.ext3 as stated above, which I
>>assume wrote to the disk using 4 test patterns, should
>>I also use the following command to really zero out
>>the harddrive before I use it?
>>
>>#  dd if=/dev/zero of=/dev/sdb bs=512 count=1
> 
> 
> No -- and that's probably not what you want to do, at the moment,
> anyway.
> 
> To answer your question as posed, first, that command doesn't zero out
> the (entire) _hard drive_, per se, because of the last two parameters:
> 
> "bs=512" means use a block size of 512 bytes for the operations
> specified in this command.
> 
> "count=1" means perform one write cycle, only.  (Default, without this
> parameter, would have been to keep repeating until one reaches 
> end-of-device.)
> 
> In context, the cited command thus becomes "Please bit-copy information
> from device /dev/zero (which is an endless source of binary zeroes) to
> device /dev/sdb (the second SCSI device, addressed from its very first
> storage sector and moving forwards from there).  In doing so, perform
> one write of 512 bytes, and then stop."
> 
> In other words, that command wipes out sector zero (only) of the target
> device.  In the PC world, that's the sector where the 446-byte MBR boot
> program (if any; NT & kindred stash a four-byte "disk signature" near
> the end of that area), the 64-byte partition table, and the two-byte
> 0x55AA boot signature live.  The command thus
> clobbers the disk catalogue (if any) and boot program, while leaving
> untouched everything else including partitions themselves.  Of course, 
> those partitions (if present) will at that point have only a kind of
> ghost existence, since the partition tables that define them will have
> been zeroed out.
> 
> It's a super-quick way of making the drive _effectively_ empty for all
> practical purposes.
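> 
> (By contrast, if one really did want to zero the _entire_ drive, the
> usual approach is to drop the "count=" limit, so that dd runs until it
> hits end-of-device -- something like:
> 
>   # dd if=/dev/zero of=/dev/sdb bs=1M
> 
> Be warned that this takes hours and destroys everything on the disk,
> which is why it's not what you want here.)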
> 
> (By the way, I'm not sure what you mean by "4 test patterns", but you
> might be referring to the badblocks checking, in which case you probably
> know more about that than I do, at the moment.)
> 
> 
> 
[snip]

I redid mkfs.ext3 with the following results:

# mkfs.ext3 -c -c /dev/sdb2
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
19660800 inodes, 39309046 blocks
1965452 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=41943040
1200 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
         32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
         2654208, 4096000, 7962624, 11239424, 20480000, 23887872

Testing with pattern 0xaa: done
Reading and comparing: done
Testing with pattern 0x55: done
Reading and comparing: done
Testing with pattern 0xff: done
Reading and comparing: done
Testing with pattern 0x00: done
Reading and comparing: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
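
(A side note: if I ever want to turn those periodic checks off
entirely, I believe the incantation would be something like

# tune2fs -c 0 -i 0 /dev/sdb2

but I have left the defaults alone for now.)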

# fdisk -l

[snip]
Disk /dev/sdb: 163.9 GB, 163928604672 bytes
255 heads, 63 sectors/track, 19929 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         244     1959898+  82  Linux swap
/dev/sdb2             245       19819   157236187+  83  Linux

# df -h /dev/sdb2
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb2             148G  189M  140G   1% /media/ieee1394disk

I do not understand why df -h reports only 148G and
140G for size and available, respectively, when fdisk
reports 163.9 GB. What am I missing? Should I assume
the disk is good, i.e., that I can trust the data I
move there will be safe and available?
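
Working the numbers from the mkfs output above -- this is just my
guess, so please correct me if it is wrong: fdisk reports decimal
gigabytes, df -h reports binary ones (powers of 1024), and ext3
sets some space aside, so perhaps nothing is actually missing:

$ echo $((39309046 * 4096))   # filesystem size in bytes
161009852416                  # ~161 GB decimal = ~150G the way df counts
$ echo $((1965452 * 4096))    # the 5% blocks reserved for root
8050491392                    # ~7.5G, included in Size but never in Avail

The remaining couple of gigabytes would be filesystem overhead such
as the inode tables and the journal, which df does not count in Size.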

After reading "Copying Directory Trees" on http://linuxmafia.com/kb/Admin/
that Rick wrote, it appears
"rsync -avz aSubDir /dev/sdb2/aSubDir" will most
fit what I need for each subdirectory I plan on moving
to the external harddrive. I plan on keeping my login
directory on my internal drive and moving directories
under my home directory to the external drive.
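
Concretely, something like this is what I have in mind ("aSubDir" is
just a placeholder; the -n makes the first pass a dry run that only
lists what would be copied):

$ rsync -avn aSubDir /media/ieee1394disk/
$ rsync -av aSubDir /media/ieee1394disk/

I dropped the -z from my earlier guess, since compression only helps
rsync across a network, not on a local disk-to-disk copy.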

Darlene Wallach
