Yes, I am sure all of us have heard about the 1 ms latency requirement for some of the applications envisioned under the 5G umbrella. Ultra-reliable low-latency communication (URLLC) is a key requirement for deployment scenarios where a rapid exchange of data is expected, such as critical machine-type communications for connected vehicles.
I think this has an important consequence for the way we measure network parameters and for the current monitoring schemes in our networks.
Notice the timescales we use today in network management systems. We still measure, for example, throughput on a per-second scale: 10 Mbps, 100 Gbps. I believe we should start upgrading these timescales, exploring ways to measure common network parameters, where applicable, at sub-second scales. This would give us a much sharper view of the network state and of the service we are monitoring.
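As a toy illustration of what sub-second sampling could look like, here is a minimal sketch (plain Python, with hypothetical counter values — not any particular monitoring API) that converts byte-counter deltas read every 100 ms into an instantaneous throughput figure:

```python
def throughput_mbps(bytes_start, bytes_end, window_s):
    """Throughput over one sampling window, in megabits per second."""
    return (bytes_end - bytes_start) * 8 / window_s / 1e6

# Reading an interface byte counter every 100 ms instead of every second:
# a delta of 1,250,000 bytes in a 0.1 s window corresponds to 100 Mbps.
print(throughput_mbps(0, 1_250_000, 0.1))  # 100.0
```

The point is that a one-second average would hide any burst or stall shorter than the window; shrinking `window_s` exposes exactly the dynamics that millisecond-scale services care about.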
Even more important is the actual data delivery: we need high 'goodput'. TCP/IP has long been known for its large overhead. Should we get serious about implementing more efficient protocols? Embrace deterministic networking? All these "little" improvements echo loudly at the service layer.
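To put a rough number on that overhead, here is a back-of-the-envelope calculation for a full-size TCP segment over Ethernet, assuming IPv4 with no IP or TCP options and ignoring the VLAN tag, preamble and inter-frame gap (which would push efficiency down further):

```python
ETH_HEADER = 14   # Ethernet II header, bytes
IP_HEADER = 20    # IPv4 header, no options
TCP_HEADER = 20   # TCP header, no options
MSS = 1460        # typical payload per segment on a 1500-byte MTU

frame = ETH_HEADER + IP_HEADER + TCP_HEADER + MSS  # 1514 bytes on the wire
efficiency = MSS / frame
print(round(efficiency, 3))  # 0.964
```

Even in this best case, roughly 3.6% of the wire rate is headers, and for small payloads (think sensor telemetry of a few tens of bytes per segment) the header overhead can dwarf the goodput.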
The same applies to data center (DC) availability and reliability. Is the five-nines approach still valid? As DCs move closer to the edge, and given their software-defined capabilities, it is necessary to rethink the metrics we use to evaluate their performance.
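For reference, the classic nines arithmetic — what each availability class permits in downtime per year — is a one-liner:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 s (non-leap year)

def downtime_per_year(nines):
    """Allowed downtime in seconds per year for an availability of n nines."""
    availability = 1 - 10 ** (-nines)
    return SECONDS_PER_YEAR * (1 - availability)

for n in (3, 4, 5):
    # Five nines (99.999%) allows roughly 315 s, about 5.3 minutes, per year.
    print(n, "nines:", round(downtime_per_year(n), 1), "s/year")
```

Whether a single edge DC must carry five nines on its own, or whether the fleet of edge sites achieves it collectively through software redundancy, is precisely the kind of question that calls for new metrics.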
Any thoughts about this? This merits further exploration.
Tuesday, 26 March 2019