[TriLUG] copying files

Joseph Mack NA3T jmack at wm7d.net
Fri Jun 22 10:47:26 EDT 2012


On Fri, 22 Jun 2012, Steve Litt wrote:

> Hi Joe,
>
> At 30kBps, I'm almost positive your bottleneck is the 
> wire,

I was going to wait till I had more data, but Alan's 
subsequent posting has preempted the matter.

> by a huge factor. Therefore, just about the only thing you 
> should be worrying about is how to compress the daylights 
> out of the files before they hit the wire.

The time of 57 mins I quoted was for a null rsync, i.e. the 
directories/files were already synced. So no files (which I 
assume are what rsync calls data) were transferred, just 
timestamps etc. Just to be sure, I tried with -z, but the 
time for a compressed null sync was the same.
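
To be concrete, the two runs were of this shape (paths are 
placeholders; the remote side is the sshfs mount described 
below):

# null sync, no compression
rsync -av /local/dir/ ./sshfs_directory_from_remote_machine/

# same null sync with -z; no change in elapsed time
rsync -avz /local/dir/ ./sshfs_directory_from_remote_machine/

(With both paths local, -z presumably can't help anyway: 
rsync only compresses data it sends over its own network 
connection.)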

It would seem that comparing file timestamps takes a long 
time. Why?

If I ssh to the remote machine, do a `find ./` on the remote 
directory, and send the ASCII output back over the ssh 
connection to display locally, it takes 7 secs.
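
That test was roughly (hostname and path are placeholders):

# walk the remote tree on the remote machine; only the text
# listing comes back over the wire
time ssh user@remote 'find /export/dir/'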

So fetching the timestamps over ssh should take 7 secs. 
Reading the local timestamps will be much faster, and I 
expect just about any algorithm comparing file 
sizes/timestamps to take negligible time next to waiting 
for disk access. So a null rsync should be 7 secs tops.

The remote directory is mounted over sshfs. I expected that 
`find` executed locally on an sshfs-mounted filesystem would 
be as fast as `find` executed remotely on the same directory 
over an ssh connection. I didn't even bother to check 
whether this was true. But on the local machine, if I do

find ./sshfs_directory_from_remote_machine

the find that takes 7 secs on the remote machine takes 10 
mins on the locally sshfs-mounted filesystem. I didn't 
expect sshfs to be so slow, since I get wire speed with 
sshfs file transfers in both directions.
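
The mount itself is a standard sshfs invocation (names are 
placeholders):

# mount the remote directory locally over sshfs
sshfs user@remote:/export/dir ./sshfs_directory_from_remote_machine

# walking the same tree through the mount takes ~10 mins
time find ./sshfs_directory_from_remote_machine

Presumably the difference is latency, not bandwidth: each 
stat() through the sshfs mount is a separate round trip 
over the network, while the remote find hits only the 
local disk.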

Still, rsync can get all it needs in 10 mins. What the rest 
of the 57 mins is for, I haven't figured out.

I can't use NFS, as the clients can come in unannounced, 
with any IP, and I don't want to exportfs to 0/0. At least 
with sshfs the client needs a password to mount the remote 
filesystem. This is not a great solution, but it will do 
for the moment.

Possibly openvpn would be better: the clients would have 
preinstalled keys and the server could exportfs to 
10.0.x.0/24.
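
In /etc/exports that would be one line restricted to the 
tunnel subnet, something like (subnet and options are 
illustrative):

# export only to hosts on the VPN subnet
/export/dir  10.0.1.0/24(ro,root_squash)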

However, if the openvpn server keys are compromised, then 
all the clients have to change their keys. Since the clients 
are designed to operate unattended indefinitely, they would 
be offline till someone goes out and fixes them.

I would prefer that the openvpn server's keys have a 
passphrase, which you need to enter when you fire up 
openvpn. Then if the keys are compromised, an attacker 
still has to figure out the passphrase. I'll have to take 
this up with the openvpn people.
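
If it is supported, it should just be a matter of keeping 
the private key encrypted, something like (filenames are 
placeholders):

# re-encrypt the existing RSA key under a passphrase;
# openvpn should then prompt for it at startup
openssl rsa -aes256 -in server.key -out server.key.enc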

Joe
-- 
Joseph Mack NA3T EME(B,D), FM05lw North Carolina
jmack (at) wm7d (dot) net - azimuthal equidistant map
generator at http://www.wm7d.net/azproj.shtml
Homepage http://www.austintek.com/ It's GNU/Linux!


