Videoconferencing started back when calls were made over expensive ISDN phone lines, with calls sometimes costing $300-$500 per hour. As a result, a lot of work and development went into algorithms to compress video and audio so that a good videoconferencing experience could occur over low bandwidth connections. The need for a high level of compression continued as videoconferencing moved to IP networks.
Very few of these algorithms are proprietary, as almost all of them are ITU (International Telecommunication Union) standards. The advantage of ITU standard compression methods is that every brand of videoconferencing system can use them. So, a Polycom system calling a Cisco system will be able to have a relatively good-looking videoconference at 512Kbps of bandwidth and an HD call at 1Mbps, which is pretty remarkable considering that, uncompressed, that same video stream might take 10Mbps or more.
While there is a lot of commonality in the compression standards used in videoconferencing and other video applications on the Internet, there are some significant differences that make videoconferencing much more difficult to manage than streaming video applications.
Connection Oriented vs. Connectionless
TCP is referred to as a connection-oriented protocol. It's capable of doing error correction; if a packet is dropped during transmission, TCP will ask for it to be retransmitted. In fact, every packet that is transmitted and received at the other end is acknowledged by the recipient. Because of these capabilities, TCP has more overhead per amount of data transmitted.
UDP is a connectionless protocol. It can’t do error correction. As such, it has much less overhead per amount of data transmitted. The videoconferencing industry long ago standardized on UDP as a way to transmit video and audio over the internet, since it doesn’t make sense to retransmit lost information in a videoconference. In the time it would take to correct the error, the conversation has moved on.
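The connectionless model described above can be sketched in a few lines of Python. The port and payload below are illustrative assumptions, but the behavior shown is the point: the sender transmits a datagram with no handshake and no acknowledgement, and the receiver simply reads whatever arrives.

```python
import socket

# Minimal sketch of UDP's connectionless, "fire and forget" model:
# no handshake, no acknowledgement, no retransmission.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"audio frame 1", addr)  # sent once; if lost in transit, it's gone

data, _ = receiver.recvfrom(2048)      # receiver reads whatever shows up
print(data.decode())
sender.close()
receiver.close()
```

A real videoconferencing system layers RTP on top of UDP to add sequence numbers and timestamps, so the receiver can at least detect loss and reorder packets, even though nothing is retransmitted.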
A lot of video streaming applications will use TCP, but will buffer or delay the video/audio stream to make sure everything is received and ready to play flawlessly in a continuous stream. We don't care if Netflix buffers the stream for 15 seconds and takes care of any errors during that lag. That's why you can disconnect your Roku or Apple TV from the network and the Netflix application will keep on playing, sometimes for several minutes.
However, we can't really afford to buffer data in a videoconferencing call. In fact, the more lag in the connection, the worse the experience for real-time videoconferencing. Participants end up waiting for people at other sites to talk, or participants end up talking over each other. So, a network that works great for streaming HD video may not work for real-time videoconferencing.
Videoconferencing Bandwidth Requirements
It’s relatively straightforward to calculate the amount of bandwidth that you’ll need for your videoconferencing calls. For point-to-point calls, add 10% for overhead to the call speed. For multipoint calls, multiply the number of calls by the bandwidth and add 10% for overhead. For example, for a three site call at 512K we can estimate the required bandwidth as:
3 x 512kbps = 1536kbps + 10% = 1690kbps or ~1.7Mbps.
I may be old school in adding 10% for network overhead, but in my experience it’s better to slightly overestimate the amount of bandwidth required.
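The rule of thumb above is easy to put into a small calculator. The function name and signature below are my own illustration, not a standard API; the logic is simply one call per site in a multipoint conference, plus the 10% overhead cushion.

```python
def estimate_bandwidth_kbps(call_speed_kbps, sites=2, overhead=0.10):
    """Rough videoconferencing bandwidth estimate: one call per site
    in a multipoint conference, plus ~10% for network overhead.
    (Illustrative sketch, not a standard formula or API.)"""
    calls = 1 if sites == 2 else sites  # point-to-point is a single call
    return call_speed_kbps * calls * (1 + overhead)

# Point-to-point call at 512K:
print(round(estimate_bandwidth_kbps(512)))           # ~563 kbps
# Three-site multipoint call at 512K:
print(round(estimate_bandwidth_kbps(512, sites=3)))  # ~1690 kbps
```

Remember this is a worst-case provisioning number; as noted below, actual usage during a call is often well under the negotiated call speed.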
At the same time, the amount of bandwidth that's actually used during a videoconferencing call can be significantly less than the speed at which the call is made. Videoconferencing systems have been optimized so they will not retransmit areas of the screen that have not changed significantly from one moment to the next. So, it's not unusual to make a call at 1Mb, but the call statistics will show that only 400Kbps is being used at any point in time. This can cause consternation for new videoconferencing users who don't understand why the call statistics report a bandwidth lower than the speed selected for the call.
Bandwidth Quality vs. Quantity
It's relatively easy to purchase a high-speed connection to the Internet. Unfortunately, a fast connection is not all that's needed for videoconferencing. There are two major factors besides bandwidth that can affect your experience: congestion and jitter.
When videoconferencing systems share an organization’s existing internet connection, there can be times when there is more data trying to use that connection than there is available bandwidth. When this happens, packets will be buffered or dropped. There are several solutions to deal with congestion:
- Increase the bandwidth of the internet connection. This may or may not work, depending on how much demand is being placed on the connection to the Internet and whether the ISP has oversubscribed their own network.
- Install a dedicated connection just for videoconferencing. Depending on location (and the relative cost of connections), this can be the best solution. This also has the effect of simplifying network design and making troubleshooting easier.
- Quality of service measures can be implemented on the network’s router(s) so that videoconferencing packets are tagged and prioritized over packets related to web, email, file transfer, etc. This can also be achieved via a “bandwidth shaper” that can assign priorities for all types of traffic.
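As one concrete example of the tagging mentioned in the last bullet, an application (or test tool) can mark its own outgoing media packets with DSCP EF ("Expedited Forwarding," DSCP value 46), the class commonly used for real-time audio and video. This sketch just sets and reads back the mark on a UDP socket; whether routers actually prioritize EF-marked traffic depends entirely on how QoS is configured on the network.

```python
import socket

# Sketch: mark a UDP socket's outgoing packets with DSCP EF (46),
# the per-hop behavior commonly used for real-time media.
# The TOS byte carries the DSCP in its upper six bits: 46 << 2.
# Routers only honor this mark if QoS policy says to.
DSCP_EF = 46
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos >> 2)  # the DSCP value now applied to outgoing packets
sock.close()
```

Most organizations apply these marks at the router or switch instead, classifying videoconferencing traffic by address or port, so the endpoints don't have to cooperate.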
The time between when a packet leaves its source device and when it arrives at its destination is called latency. Latency can be affected by a number of factors, including all the networks, switches, and routers between point A and point B. If the latency is consistent for all the packets involved in a videoconference, all is good. However, when the latency varies from one packet to another, videoconferencing systems can get upset fast. That variation in latency from one packet to another is called jitter, and once jitter gets above 30-50ms, a videoconference can be negatively affected.
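Jitter is usually reported not as the raw packet-to-packet difference but as a smoothed running estimate; the sketch below follows the interarrival jitter estimator from RFC 3550 (the RTP specification), which averages the change in one-way transit time with a 1/16 smoothing factor. The sample latency figures are made up for illustration.

```python
# Sketch of the RFC 3550 interarrival jitter estimator: a smoothed
# average of the change in transit time between consecutive packets.
# Inputs are per-packet one-way latencies in milliseconds.
def interarrival_jitter(transit_times_ms):
    jitter = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(cur - prev)            # change in latency between packets
        jitter += (d - jitter) / 16.0  # RFC 3550 smoothing factor
    return jitter

# Consistent 40ms latency -> no jitter, even though latency is nonzero:
print(interarrival_jitter([40, 40, 40, 40]))           # 0.0
# Latency swinging between ~40ms and ~90ms -> jitter climbs quickly:
print(round(interarrival_jitter([40, 90, 40, 95, 35]), 1))
```

This is why the distinction in the paragraph above matters: a link with a steady 100ms of latency can still carry a usable call, while a link that swings between 40ms and 90ms will not.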
What is your experience? Are there other network factors to consider for a great videoconferencing call? Comment below to share your experience.