MemTransfer and Exploring the Potential of Gigabit

The Background

It’s summer and I have free time, so that means lots of projects exploring ideas I have had for the past few months and haven’t had the time to expand upon. This evening’s lucky idea is really just an application of a few concepts I learned this past semester in my computer hardware systems course (CIS450) at Kansas State University. I have always been fascinated by optimizing anything and everything computer science related. Frustrated by the bottleneck big ole data drives cause, I wanted to explore the true limits of a gigabit connection.

The Hardware Setup

The physical setup for this experiment was a client (my personal 2013 MacBook Air, 1.7 GHz i7, 8 GB RAM, 128 GB SSD) connected to my house’s gigabit network infrastructure via a USB3-to-Gigabit adapter, and my family’s fileserver, a Dell Core2 Quad with 6 GB of RAM and loads of disks, also connected via gigabit. Props to the USB3-to-Gigabit adapter for handling the test so well; it could also be the bottleneck behind only achieving 600-650 Mbps, as opposed to the theoretical 1,000 Mbps (though protocol overhead has to go somewhere in there too!).
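For what it’s worth, some of that overhead is spoken for before the adapter even enters the picture. On a standard 1500-byte-MTU link, each frame carries 1460 bytes of TCP payload but takes up roughly 1538 bytes on the wire once the Ethernet header, checksum, preamble, and inter-frame gap are counted, so the best case is about 1460 / 1538 ≈ 95% of line rate, or roughly 940-950 Mbps of actual file data. The remaining drop down to ~600 Mbps is presumably the adapter, the software, or both.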

The Software Setup

The software for this system was a simple client and server written by me in Java; it is available on GitHub. The client takes a file as input and loads it completely into memory (files over 4-5 GB aren’t really feasible with this method, at least on these machines). It then sends it on its merry way to the server, which has preallocated a buffer in memory of the correct size. Once the transfer has finished, the server marks it as done and writes the file to disk. The key measurement is just the transfer time, not any of the reading from or saving to disk.
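The real implementation lives in the GitHub repo; purely as an illustration of the flow described above (the class names, port number, and length-prefix framing here are my own stand-ins, not necessarily what the repo actually does), a minimal version might look like this:

    import java.io.*;
    import java.net.*;
    import java.nio.file.*;

    // Client: load the whole file into memory, then stream it to the server.
    public class MemTransferClient {
        public static void main(String[] args) throws IOException {
            byte[] data = Files.readAllBytes(Paths.get(args[0]));   // whole file in RAM
            try (Socket sock = new Socket(args[1], 9000);
                 DataOutputStream out = new DataOutputStream(sock.getOutputStream())) {
                out.writeLong(data.length);   // tell the server how much to preallocate
                long start = System.nanoTime();
                out.write(data);              // the timed part: memory -> network
                out.flush();
                System.out.printf("Sent %d bytes in %.2f s%n",
                        data.length, (System.nanoTime() - start) / 1e9);
            }
        }
    }

    // Server: preallocate a buffer of the announced size, time only the receive,
    // then write the buffer to disk after the transfer is marked done.
    public class MemTransferServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(9000);
                 Socket sock = server.accept();
                 DataInputStream in = new DataInputStream(sock.getInputStream())) {
                long size = in.readLong();
                byte[] buf = new byte[(int) size];    // preallocated in memory
                long start = System.nanoTime();
                in.readFully(buf);                    // network -> memory, the timed part
                double secs = (System.nanoTime() - start) / 1e9;
                System.out.printf("Received %d bytes in %.2f s (%.0f Mbps)%n",
                        size, secs, size * 8 / secs / 1e6);
                Files.write(Paths.get("received.bin"), buf);   // disk write happens after timing
            }
        }
    }

The important part is that the stopwatch only wraps the socket read/write, so the SATA drives never get a chance to drag the measured number down.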

The Results

This first image simply shows the spike in receiving speeds on the server, all the way up to 608 Mbps. I have never seen speeds that fast when transferring to one of the SATA drives. An interesting note is the ~10-15 Mbps of sending traffic; I hypothesize later in the post that this is TCP ACK traffic.

Since the file transfer happens entirely in memory, you can see the noticeable bump that comes from allocating the 1.4 GB test file, and the prompt drop back down afterward.

This image shows the output of the server, showing that it received the file and then, once receiving was done, saved it to disk. There is about a 20 MB/s difference between the two speeds.

This last image shows the client sending out a file, then finishing.

What’s Next

While I feel pretty successful about this test, especially since, blog post included, it has taken around 2 hours, I do feel like there are some interesting ways it could be improved. Utilizing multiple NICs would be interesting, to attempt to reach 2 gigabits (or the more practical 1.2 Gbps, i.e. 600 Mbps × 2). Using UDP instead of TCP can also yield faster speeds for large file transfers, so it would be interesting to see how that applies here; a rough sketch of the idea follows below. You can note in the network diagnostic image that while the server is receiving a hefty 600 Mbps, it is also sending back a not-that-small 15 Mbps. That’s a lot of ACKs. If we weren’t waiting on so many (as in UDP), what would that mean for the results? Interesting questions for another time.
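As a very rough sketch of what the UDP variant could look like (illustrative only: the class name, chunk size, and port are made up, and a usable transfer would still need sequencing and retransmission on top of this):

    import java.net.*;
    import java.nio.file.*;

    // Fire-and-forget UDP sender: chops the in-memory file into datagrams and
    // sends them without waiting for any acknowledgement from the receiver.
    public class UdpBlastClient {
        public static void main(String[] args) throws Exception {
            byte[] data = Files.readAllBytes(Paths.get(args[0]));
            InetAddress server = InetAddress.getByName(args[1]);
            int chunk = 1400;                       // keep each datagram under the MTU
            try (DatagramSocket sock = new DatagramSocket()) {
                for (int off = 0; off < data.length; off += chunk) {
                    int len = Math.min(chunk, data.length - off);
                    sock.send(new DatagramPacket(data, off, len, server, 9001));
                }
            }
            // Nothing here detects loss or reordering; that is exactly the
            // trade-off being wondered about versus TCP's ACK traffic.
        }
    }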

 
