Transferring Large Files

Linux has an impressive tool set, if you know how to use it. The philosophy of simple tools that each do one job well, chained together with pipes, makes for a powerful system.

Everyone has to transfer large files across the network on occasion. scp is the easy choice most of the time, but if you’re working with small or old machines, the CPU becomes a bottleneck because of the encryption overhead.
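
For reference, the baseline scp invocation looks something like this (the user name and destination path are placeholders):

scp really.big.file user@file.server.net:/data/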

There are several alternatives to scp if you don’t need encryption. These aren’t safe on the open internet, but they should be acceptable on a private network. TFTP and rsync come to mind, but each has its limitations:

  • tftp is generally limited to files of about 4 GB
  • rsync either requires setting up an rsync service, or piping through ssh, which reintroduces the encryption cost (a sketch follows this list)
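
For completeness, the rsync-over-ssh form looks roughly like this; the user name and destination path are again placeholders:

rsync -avP really.big.file user@file.server.net:/data/

-a preserves permissions and timestamps, -v is verbose, and -P shows progress and keeps partial files so an interrupted transfer can resume.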

My new personal favorite is netcat-as-a-server. It’s a little more complicated to set up than scp or ftp, but it wins on transfer speed, and the pipeline itself stays simple.

netcat doesn’t provide much output on its own, so we’ll pair it with pv (pipe viewer) to tattle on bytes read and written.

First, on the sending machine (the machine with the file), we’ll set up netcat to listen on port 4200, and pv will give us progress updates:
pv -pet really.big.file | nc -q 1 -l -p 4200

  • pv -p prints a progress bar, -e displays the ETA, -t displays the elapsed time
  • nc -q 1 quits 1 second after EOF on stdin, -l -p 4200 listens on port 4200

Without the -q switch, the sender will have to be killed with Ctrl-C or similar once the transfer finishes.
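
Note that netcat flags vary by implementation, and the commands here assume the traditional netcat. If your distribution ships the OpenBSD netcat instead, the listener usually takes the port directly and -N (shut the socket down after EOF on stdin) plays the role of -q, so the sending side would look something like this (check your local man page):

pv -pet really.big.file | nc -N -l 4200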

On the receiver (the machine that wants the file), netcat will read all bytes until the sender disconnects:
nc file.server.net 4200 | pv -b > really.big.file

  • nc will stream all bytes from file.server.net, port 4200
  • pv -b turns on the byte counter

Once the file is done transferring, both sides will shut down.
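
Since nothing in this pipeline authenticates the data, a quick integrity check is worthwhile. Assuming sha256sum is available on both machines, run it against each copy and compare the output:

sha256sum really.big.file

A mismatch means the transfer was corrupted or truncated, and should be rerun.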

Author: H Walker Jones, Esq

A professional programmer with a sordid past involving sysadmin, tech support, and cooking.
