What is Network Latency?

The term “latency” refers to the time it takes for something to happen.
Latency in a network refers to the time it takes for data to travel across the network to its intended destination.
It’s commonly expressed as a round trip delay: the time it takes for data to travel from point A to point B and back again.
Because a computer on a TCP/IP network sends a finite amount of data to its destination and then waits for an acknowledgement before sending any more, this round trip delay has a significant impact on network performance.

In most cases, latency is measured in milliseconds (ms).
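
As a rough illustration of how you might measure this yourself, here is a minimal Python sketch that estimates round-trip latency by timing a TCP connection handshake. The host and port are placeholders, and the reading will run slightly higher than the raw RTT a dedicated tool such as ping (which uses ICMP) would report:

    import socket
    import time

    def estimate_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
        """Estimate round-trip latency by timing a TCP three-way handshake."""
        start = time.perf_counter()
        # connect() returning means SYN -> SYN/ACK -> ACK has completed,
        # i.e. one full round trip (plus a little connection overhead).
        with socket.create_connection((host, port), timeout=timeout):
            pass
        return (time.perf_counter() - start) * 1000  # seconds -> milliseconds

    # Example usage ('example.com' is just a placeholder host):
    print(f"Approximate RTT: {estimate_rtt_ms('example.com'):.1f} ms")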

Typical approximate latency values you might experience include:

  • 800ms for satellite
  • 120ms for 3G cellular data
  • 60ms for 4G cellular data, which is often used for 4G WAN and internet connections
  • 20ms for an MPLS network such as BT IP Connect, when using Class of Service to prioritise traffic
  • 10ms for a modern Carrier Ethernet network such as BT Ethernet Connect or BT Wholesale Ethernet in the UK

Why is latency important?

People frequently believe that high bandwidth equals high performance, but this is not the case.

  • A network’s or a network circuit’s bandwidth refers to its capacity to carry traffic.
    It is measured in bits per second, most commonly Megabits per second (Mbps).
  • A higher bandwidth lets a network carry more traffic, for example more simultaneous conversations. It does not determine how quickly each conversation happens (although attempting to send more traffic than the available bandwidth can carry may cause packets of data to be dropped and re-transmitted later, degrading your performance).

Latency, on the other hand, is the time it takes for data from one end of your network to reach the other.
In reality, we commonly measure the round trip time for data to travel from one end to the other.
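
To see why bandwidth and latency are separate quantities, consider a rough back-of-the-envelope model: the time for one request/response is the round-trip latency plus the payload size divided by the bandwidth. This Python sketch is a simplification (it ignores TCP’s acknowledgement behaviour, which the rest of this article covers), and the figures are purely illustrative:

    def transfer_time_ms(payload_bits: float, bandwidth_bps: float, rtt_ms: float) -> float:
        """Rough time for one request/response: latency + serialisation time."""
        return rtt_ms + (payload_bits / bandwidth_bps) * 1000

    small_request = 10_000  # a 10-kilobit payload, e.g. a quick database query

    # On a 100 Mbps link, a small payload's transfer time is almost all latency:
    print(transfer_time_ms(small_request, 100e6, rtt_ms=800))  # ~800.1 ms over satellite
    print(transfer_time_ms(small_request, 100e6, rtt_ms=10))   # ~10.1 ms over Ethernet

For small, chatty exchanges, the same 100 Mbps circuit is roughly eighty times slower over satellite than over a low-latency Ethernet connection, and no amount of extra bandwidth will fix that.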

Why is it necessary to keep track of time in both directions?

TCP, as we’ll see later, sends acknowledgements back to the sender, which turns out to be quite important.

  • It’s fairly intuitive that a bigger delay means a slower connection.  
  • However, due to the nature of TCP/IP (the most widely used networking protocol), latency has a more complex and far-reaching impact on performance: latency drives throughput.

Latency drives throughput

A network typically carries multiple simultaneous conversations.  

  • Bandwidth limits the number of those conversations that can be supported.
  • Latency drives the responsiveness of the network – how fast each conversation can be.
  • For TCP/IP networks, latency also drives the maximum throughput of a conversation.

Latency can become a particular problem for throughput because of the way TCP (Transmission Control Protocol) works. 

TCP is responsible for ensuring that all of your data packets arrive at their destination safely and in the correct order.
It stipulates that only a particular quantity of data be transferred before an acknowledgement is required.

Consider a network path as a lengthy pipe carrying water to a bucket. Once the bucket is full, TCP forces the sender to wait for an acknowledgement to arrive along the pipe before sending any more water.

[Figure: latency concept]

In real life, this bucket is normally 64KB in size: 65,535 bytes (i.e. 2^16 − 1 bytes) × 8 = 524,280 bits. It’s called the TCP Window.

Consider a case in which water travels down the pipe in half a second and the acknowledgement takes another half a second to return… a total delay of one second.

In this instance, the TCP protocol would prevent you from sending more than 524,280 bits in any one second.
The maximum speed you could achieve through this pipe is therefore 524,280 bits per second (bps), or roughly half a megabit per second.

The only thing driving this is latency (barring other issues that may slow things down).
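
To make the arithmetic concrete, here is a short Python sketch of the window-per-round-trip calculation described above. It is a simplification that deliberately ignores the TCP window scaling option modern stacks use to raise this ceiling:

    # A single TCP conversation's throughput is bounded by:
    #   throughput (bps) = window size (bits) / round-trip time (seconds)

    TCP_WINDOW_BITS = 65_535 * 8  # the classic 64KB window = 524,280 bits

    def max_throughput_bps(rtt_ms: float, window_bits: int = TCP_WINDOW_BITS) -> float:
        """Upper bound on throughput: at most one full window per round trip."""
        return window_bits / (rtt_ms / 1000)

    # The one-second round trip from the pipe-and-bucket example:
    print(f"{max_throughput_bps(1000) / 1e6:.2f} Mbps")  # ~0.52 Mbps

    # The typical latencies listed earlier in this article:
    for label, rtt_ms in [("satellite", 800), ("4G", 60), ("MPLS", 20), ("Ethernet", 10)]:
        print(f"{label:>9}: {max_throughput_bps(rtt_ms) / 1e6:5.1f} Mbps")

Note how sharply the ceiling rises as latency falls: the same 64KB window that manages roughly 0.7 Mbps over an 800ms satellite link allows over 50 Mbps over a 10ms Ethernet circuit.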


So how does latency impact throughput in real life?

Obviously, if you have latency-sensitive apps, you must be aware of your network’s latency.
Keep an eye out for circumstances where latency is unusually high and will therefore slow down throughput: international circuits, for example.

Another intriguing scenario is 4G Cellular WAN, in which the 4G network provides a dependable, high-speed link to your corporate network or the internet.
This typically involves bonding multiple SIM cards together to create a single, highly reliable connection. In this case, the bonded connection’s latency tends to be the highest of all the individual connections.

But keep in mind that latency isn’t the only reason for poor app performance.
When we looked into the root cause of performance issues in our customers’ networks, we discovered that the network was responsible for only 30% of them. The remaining 70% were due to problems with the application, database, or infrastructure. To get to the bottom of such issues, you may need to conduct an Application Performance Audit or install Critical Path Monitoring on your IT infrastructure.
In general, you’ll use a Network and Application monitoring toolkit to track latency and other performance-impacting factors. More information on how to establish the best managed network provider monitoring can be found on this page.
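
As a sketch of the kind of latency tracking such a toolkit automates, the loop below samples round-trip time at a fixed interval and flags unusually high readings. The threshold and interval are hypothetical and would need tuning to your own network’s baseline; estimate_rtt_ms is the helper sketched earlier in this article:

    import time

    LATENCY_ALERT_MS = 150.0  # hypothetical threshold; tune to your network's baseline
    SAMPLE_INTERVAL_S = 60.0  # how often to probe

    def monitor_latency(host: str) -> None:
        """Periodically sample RTT and flag samples above the alert threshold."""
        while True:
            rtt = estimate_rtt_ms(host)  # helper defined earlier
            status = "HIGH LATENCY" if rtt > LATENCY_ALERT_MS else "ok"
            print(f"{time.strftime('%H:%M:%S')}  {host}  {rtt:6.1f} ms  [{status}]")
            time.sleep(SAMPLE_INTERVAL_S)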

