[sf-lug] Linux backup software .. that meets unique requirements

David Rosenstrauch darose at darose.net
Mon Mar 15 11:57:54 PDT 2010


On 03/15/2010 02:23 PM, David Hinkle wrote:
>> Not sure I understand.  Does an sshfs file system work differently
>> than a regular remote file system in this regard?  A typical rsync
>> over ssh won't wind up having to read every file?
>
> No, a typical rsync over ssh passes file checksums back and forth.
> Once it detects a file that doesn't match it breaks it into blocks
> and trades checksums again.  The end result is that most of the time
> only small portions of only changed files need to be transferred.

Right, I knew that about rsync (just worded the question badly).  I 
guess I'm trying to understand what's different about sshfs that would 
force it to exchange full file data.  Where did you get that info?  Is 
there a doc page you can point me to?
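For reference, a typical invocation that relies on the delta transfer 
described above might look like this (host and paths here are 
hypothetical, just for illustration):

```shell
# Hypothetical rsync-over-ssh backup; only blocks whose checksums
# differ actually cross the wire.
backup_over_ssh() {
    # -a: archive mode (permissions, times, recursion)
    # -z: compress data in transit
    # -e ssh: tunnel the transfer over ssh
    rsync -az -e ssh /srv/data/ backupuser@backuphost:/backups/data/
}
```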


>> I'm not sure I understand how this would work.  I want to rsync
>> from <local src> to <remote dest>, with the <local src> being the
>> unencrypted data on my server, and <remote dest> being the
>> unencrypted remote file system.  How would it be possible to
>> introduce encfs alone into the mix to make this be encrypted?
>
> My assumption, based on the faq and cursory examination of the
> website, is that this is like other encrypted filesystems with which
> I'm familiar.   A file containing encrypted data is mounted as if it
> were any other filesystem on the remote end, and you can use standard
> rsync semantics to sync to it.  Then you unmount the filesystem and
> all that's left is the data stored encrypted in a file.   The
> difference between this and something like cryptfs would then be that
> this filesystem is userspace only, so you shouldn't need root to make
> it happen, but you might need root to get it installed anyway.

Encfs is a bit different in that your encrypted file system doesn't 
reside in a single file, which you mount via loopback.  Rather, it maps 
an encrypted directory to an unencrypted one.  e.g.:

[darose at daroselin encfs]$ encfs /tmp/encfs/encrypted /tmp/encfs/unencrypted
EncFS Password:
[darose at daroselin encfs]$ ls -lR /tmp/encfs
/tmp/encfs:
total 0
drwxr-xr-x 2 darose users 55 Mar 15 14:45 encrypted
drwxr-xr-x 2 darose users 55 Mar 15 14:45 unencrypted

/tmp/encfs/encrypted:
total 4
-rw-r--r-- 1 darose users 16 Mar 15 14:45 4vb,n4s2df9HaSbJ-D3KrV0r

/tmp/encfs/unencrypted:
total 4
-rw-r--r-- 1 darose users 8 Mar 15 14:45 test.txt


So this has the benefit that you don't have to decide ahead of time how 
much space to allocate for the encrypted FS.  Other than that it's 
largely the same.



But again, I can't use encfs as you've described above.  I don't have 
full shell access to the remote system, so I can't perform an encfs 
mount on the remote file system before I do the backup.  Closest I can 
come is to do an encfs mount locally over an sshfs remote file system.
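That locally-layered setup would look roughly like this (host, paths, 
and mount points are all hypothetical; both mounts need the FUSE tools 
installed):

```shell
# Sketch of the encfs-over-sshfs layering: the remote directory holds
# only ciphertext, and encfs decrypts it locally.  Adjust host, paths,
# and mount points to taste.
mount_encrypted_backup() {
    mkdir -p /mnt/remote-ciphertext /mnt/cleartext
    # Mount the remote backup dir locally; it contains only ciphertext.
    sshfs backupuser@backuphost:/backups /mnt/remote-ciphertext
    # Layer encfs on top: data written to /mnt/cleartext is encrypted
    # before it ever travels over the sshfs mount.
    encfs /mnt/remote-ciphertext /mnt/cleartext
}
```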


> You've got me intrigued, if this is a niche that can't be filled
> maybe I should write some software for it, I'm going to post a
> proposal to the lug and see what people think.
>
> David

If sshfs can't support rsync-like behavior (and it doesn't look like it 
from an experiment I just did - trying to create a hard link over sshfs 
told me "Function not implemented") then yeah, I don't see how this can 
be easily done.

Next closest thing I could do would have to be a multi-step process:
1) encfs <temp encrypted dir> <temp unencrypted dir>
2) rsync <src> <temp unencrypted dir>
3) fusermount -u <temp unencrypted dir>
4) rsync <temp encrypted dir> <dest>

Would be much nicer to do it in one rsync step though.
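Wrapped up in a script, that multi-step version might look something 
like this (every path here is a placeholder):

```shell
# Sketch of the four-step backup above; all paths are placeholders.
backup_via_encfs() {
    local enc=/tmp/backup-encrypted clear=/tmp/backup-cleartext
    mkdir -p "$enc" "$clear"
    encfs "$enc" "$clear"          # 1) map encrypted dir to cleartext view
    rsync -a /srv/data/ "$clear"/  # 2) sync source into the cleartext view
    fusermount -u "$clear"         # 3) unmount; only ciphertext remains
    # 4) push the ciphertext to the remote over ssh
    rsync -az -e ssh "$enc"/ backupuser@backuphost:/backups/
}
```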

Tricky part here, obviously, is not having control over the remote 
system.  That removes the possibility of setting up encryption on the 
remote side.

Thanks,

DR



