Quick Guide to Troubleshooting Network Latency

As you may already be aware, SingleHop works hard to remain on the cutting edge when it comes to our network infrastructure. We invest heavily in upgrades in order to supply our clients with as little latency as possible. After all, what good is infrastructure without reliable transport connecting it to the outside world? However, there are times when our clients experience latency. The majority of the time it occurs outside of our network, but we still have to troubleshoot it to make sure. So, let's get started.

What is latency?

Latency (also called lag) is defined as the amount of time it takes for a packet of data to be encapsulated, transmitted, processed through multiple network devices, received at its destination, and decoded by the receiving computer.
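One simple way to observe latency from code is to time how long it takes to open a TCP connection to a remote host. This is a rough sketch, not a replacement for ping: real ping uses ICMP echo packets, while a TCP connect approximates one round trip (SYN out, SYN-ACK back). Any host and port you pass in are up to you.

```python
import socket
import time

def tcp_connect_rtt(host: str, port: int, timeout: float = 5.0) -> float:
    """Return the time, in milliseconds, taken to open a TCP connection.

    Approximates one network round trip (SYN out, SYN-ACK back), which is
    a rough stand-in for ICMP ping when raw sockets are unavailable.
    """
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; one timing sample is all we need
    return (time.monotonic() - start) * 1000.0
```

Connecting to a nearby host should report single-digit milliseconds, while a host on the other side of an ocean will typically report far more.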

Packet queuing tends to be one of the more common reasons for increased latency between networks. For example, if you were to execute a traceroute from your location to an IP address allocated on the other side of the ocean, you would see a dramatic increase in latency. This is caused by millions upon millions of packets being routed over fewer paths, each having to wait its turn to be processed. Typically this is unavoidable and considered normal. However, if you notice an increase in latency before or after the hop over the transoceanic fiber, chances are there is another cause.
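Part of that transoceanic jump is simple physics: light in optical fiber travels at roughly two-thirds the speed of light in a vacuum, so distance alone puts a floor under latency before any queuing is added. A back-of-the-envelope calculation (the distance and speed below are illustrative approximations, not measurements of any particular cable):

```python
# Light in optical fiber travels at roughly 2e8 m/s (about 2/3 of c).
SPEED_IN_FIBER_M_PER_S = 2.0e8

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay, in milliseconds, over the given distance."""
    return distance_km * 1000.0 / SPEED_IN_FIBER_M_PER_S * 1000.0

# A ~6,000 km transatlantic run costs about 30 ms one way (~60 ms round
# trip) before any queuing or processing delay is added on top.
one_way = propagation_delay_ms(6000)
round_trip = 2 * one_way
```

That baseline is why the jump itself is normal; it is the latency beyond that baseline, or latency appearing on hops away from the ocean crossing, that points to a different cause.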

Other causes of latency include network interface port saturation, interface errors, packet fragmentation, upstream provider outages, routing issues, etc.

At SingleHop, our network operations team troubleshoots network latency from our network and from yours. We look at a constant ping from your location to your server, and at traceroute results from your location to your server and from your server to your location. Then we look further outside the box, using traceroute results from a looking glass server, in order to determine where the bottleneck is located.
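When reviewing a constant ping, the useful signal is the summary statistics rather than any single reply. A minimal sketch of the kind of summary worth looking at (the sample values are made up; None marks a lost reply):

```python
def summarize_pings(samples):
    """Summarize a list of RTT samples in ms; None means a lost reply."""
    received = [s for s in samples if s is not None]
    lost = len(samples) - len(received)
    return {
        "sent": len(samples),
        "loss_pct": 100.0 * lost / len(samples),
        "min_ms": min(received),
        "avg_ms": sum(received) / len(received),
        "max_ms": max(received),
    }

# One lost reply and one large outlier among five hypothetical samples:
stats = summarize_pings([28.0, 29.0, None, 30.0, 212.0])
```

A low minimum with a high maximum and average suggests intermittent queuing or loss, whereas uniformly high numbers across all samples suggest a consistently long path.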

Here is a quick example of the process. I have executed a traceroute from one of our core routers to a Russian telecommunications company's IP address.


The results are as follows:

 Tracing the route to
xe-4-2-0.mpr1.ord6.us.above.net ( 0 msec
xe-0-2-0.mpr2.ord6.us.above.net ( [AS 6461] 0 msec
xe-0-2-0.cr1.dfw2.us.above.net ( [AS 6461] 28 msec
xe-0-3-0.cr2.dfw2.us.above.net ( [AS 6461] 28 msec
xe-0-1-0.er2.dfw2.us.above.net ( [AS 6461] 32 msec 28 msec 28 msec
  6 ae2-109.dal33.ip4.tinet.net ( [AS 3257] 32 msec 212 msec 216 msec
xe-7-2-0.stk30.ip4.tinet.net ( [AS 3257] 220 msec
rostelecom-gw.ip4.tinet.net ( [AS 3257] 152 msec 152 msec 196 msec [AS 12389] 212 msec 216 msec 208 msec
10 customer-AS35400.ae-1.ebrg-rgr3.ur.ip.rostelecom.ru ( [AS 12389] 220 msec 208 msec 208 msec
11 [AS 35400] 236 msec 228 msec 228 msec
12 [AS 35400] 220 msec 224 msec 220 msec
13 adsl-90-150-82-37.jamal.ru ( [AS 34875] 240 msec 232 msec 240 msec
14 adsl-90-150-82-38.jamal.ru ( [AS 34875] 232 msec 220 msec 56 msec
15 [AS 50699] 236 msec 232 msec 236 msec

As you can see, once the packet reaches the ingress interface at hop 6 there is a dramatic increase in latency due to packet queuing over the transoceanic fiber. We can confirm this as the cause by running a traceroute from a looking glass server in Russia back to SingleHop and comparing the two results:

traceroute to (, 30 hops max, 40 byte packets
xe-4-0-0.110.m7-ar4.msk.ip.rostelecom.ru (  9.151 ms  8.939 ms  8.882 ms
xe-1-0-0.stkm-ar1.intl.ip.rostelecom.ru (  24.711 ms xe-9-2-0.stkm-ar1.intl.ip.rostelecom.ru (  33.279 ms  33.238 ms
s-b3-link.telia.net (  18.090 ms  21.387 ms ae2.stk30.ip4.tinet.net (  27.436 ms
s-bb1-link.telia.net (  27.314 ms  30.517 ms s-bb1-link.telia.net (  21.392 ms
ffm-bb1-link.telia.net (  48.528 ms xe-3-0-0.er1.ams1.nl.above.net (  60.268 ms  60.254 ms
ffm-b2-link.telia.net (  47.184 ms  47.252 ms ffm-b2-link.telia.net (  49.665 ms
xe-5-2-0.mpr1.lhr3.uk.above.net (  64.572 ms  64.610 ms  62.171 ms
10  ge-3-3-0.mpr1.la5.us.above.net (  133.746 ms  131.793 ms  131.284 ms
11  xe-3-2-0.cr1.ord2.us.above.net (  150.693 ms  156.502 ms  154.735 ms
12  xe-1-1-0.er1.ord2.us.above.net (  157.564 ms  157.499 ms dr6506b.ord02.singlehop.net (  156.213 ms
13  dr6506a.ord03.singlehop.net (  166.293 ms  166.203 ms  166.059 ms
14  dr6506b.ord02.singlehop.net (  145.481 ms  148.003 ms  147.968 ms
15  dr6506a.ord03.singlehop.net (  165.771 ms dr6506a.ord02.singlehop.net (  157.431 ms  156.985 ms

As you can see, at hop 11 latency again increases due to packet queuing at the transoceanic fiber. Additionally, when you compare the two results, you will notice that on each side of the transoceanic fiber there is very little latency.

In summary, latency, or lag, is the amount of time it takes for a packet of information to be sent from one host to another. The most common cause of latency is packet queuing at a gateway along the packet's path. To pinpoint where the bottleneck is occurring, follow these steps:

1) Ping your server from your location. Generally, 100 consecutive pings is sufficient.

2) From your server, ping the IP address of your physical location.

3) Provide traceroute results from your location to your server.

4) Provide traceroute results from your server to your location.

5) Provide traceroute results from a looking glass server to both your server and your location.
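The comparison our team performs on traceroute results like the ones above can be sketched as a small script: take the best (minimum) RTT reported at each hop and find where the largest hop-to-hop jump occurs. The parsing below assumes one hop per line with `msec` or `ms` values, a simplification of real traceroute output, and the hostnames in the sample are hypothetical:

```python
import re

def hop_latencies(traceroute_text):
    """Extract the minimum RTT (ms) reported on each line of traceroute output."""
    hops = []
    for line in traceroute_text.splitlines():
        times = [float(m) for m in re.findall(r"([\d.]+)\s*(?:msec|ms)", line)]
        if times:
            hops.append(min(times))  # best of the probes sent to this hop
    return hops

def biggest_jump(hops):
    """Return (hop_index, increase_ms) for the largest hop-to-hop RTT increase."""
    jumps = [(i + 1, hops[i + 1] - hops[i]) for i in range(len(hops) - 1)]
    return max(jumps, key=lambda j: j[1])

# Hypothetical three-hop trace with an ocean crossing between hops 2 and 3:
sample = """\
1  gw.example.net  0 msec  0 msec  1 msec
2  core.example.net  28 msec  28 msec  29 msec
3  ocean.example.net  212 msec  216 msec  208 msec
"""
hops = hop_latencies(sample)    # [0.0, 28.0, 208.0]
hop, rise = biggest_jump(hops)  # largest jump lands between hops 2 and 3
```

Using the minimum rather than the average per hop filters out one-off queuing spikes on individual probes, so the jump that remains is the one built into the path itself.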