Sep 16, 2011

To: The SingleHop Community,

It was brought to our attention this morning that there was an error on our site regarding the “Network Speed Test.”

It was an extremely important find, as that feature is a quick-glance tool that allows potential clients to test our capabilities, and the fact that it was giving inaccurate results is inexcusable. The cause was a coding and compression issue: the test file had been generated from /dev/zero (which produces a stream of zero bytes and is trivially easy to compress) instead of /dev/urandom (random characters), and we have re-made the file accordingly.
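As a rough sketch of why the file's contents mattered (the filenames and the 1 MB size here are illustrative, not SingleHop's actual test file): a file of zeros compresses to almost nothing, so any compression along the transfer path makes it download far faster than a real file would.

```shell
# Generate two 1 MB files: one of zero bytes, one of random bytes.
dd if=/dev/zero of=zeros.bin bs=1024k count=1 2>/dev/null
dd if=/dev/urandom of=random.bin bs=1024k count=1 2>/dev/null

# Compress each with gzip. The zeros shrink to roughly a kilobyte,
# while the random data barely shrinks at all.
gzip -c zeros.bin  > zeros.bin.gz
gzip -c random.bin > random.bin.gz
ls -l zeros.bin.gz random.bin.gz
```

Because random bytes have no redundancy for gzip to exploit, a file made from /dev/urandom measures the real transfer rate even when the connection compresses data in flight.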

This was an honest mistake and has been corrected.

I can assure you that we at SingleHop are committed to timely, consistent, and honest service and are thankful this issue was discovered.

If you have any questions, comments, or concerns please let us know.


    Great to see you guys answering this on your public blog. The difference between /dev/urandom and /dev/zero makes sense. But, some people said that the compression was due to using SSL to serve the test file. Why would you choose to serve the test file with SSL, which would make it slower to download due to processing required on the server and client?

    Posted by mark on September 16, 2011

    This was a simple oversight. We uploaded the files to the server that hosts our customer interface, which also forces SSL for obvious reasons.

    Even without SSL, the old download speed was misleading, but now that we’ve changed the contents everything is smooth sailing :)

    Posted by kswan on September 16, 2011

    You guys use Apache on the server, so you could just do a simple mod_rewrite exception on the download test file to override the SSL requirement.

    Still, point taken, and glad to see that you guys are public in owning up to it. That makes a huge difference to customers and critics alike.

    Posted by Dan on September 16, 2011

    I’m surprised it’s even possible to make a 500 MB file by gzipping zeros.

    Posted by L on September 16, 2011

    If it actually was a gzipped file (and not just a file of 500 MB of zeros), then why did the compression in the transfer protocol make a difference? Shouldn’t the file already be compressed as much as possible?

    Posted by Per Wiklander on September 22, 2011
