Understanding data compression

Introduction

Data compression reduces the size of the data frames transmitted over a network connection. Reducing the frame size reduces the time it takes for a frame to travel over the network. Data compression uses an encoding scheme at each end of a transmission link that allows characters to be removed from the data frames on the sending side of the link and then correctly replaced on the receiving side. Because the condensed frames take up less bandwidth, greater volumes of data can be transmitted in the same amount of time.

The data compression schemes used in internetworking devices are referred to as lossless compression algorithms. These schemes reproduce the original bit streams exactly, with no degradation or loss, a capability required for routers and other devices to transport data across the network. The two most common compression algorithms used on internetworking devices are the Stacker and Predictor data compression algorithms.

Before you start

Conventions

For more information on document conventions, see the Cisco Technical Tips Conventions.

Requirements

There are no special requirements for this document.

Components used

This document is not limited to specific software and hardware versions.

Data compression

Data compression can be broadly classified into hardware and software compression. Software compression, in turn, can be of two types: CPU-intensive or memory-intensive.

Stacker compression

Stacker compression is based on the Lempel-Ziv compression algorithm. The Stacker algorithm uses an encoded dictionary that replaces a continuous stream of characters with codes; the symbols represented by the codes are stored in memory in a dictionary-style list. Because the relationship between a code and the original symbol changes as the data changes, this approach is more responsive to fluctuations in the data. That flexibility is particularly important for LAN data, because many different applications can be transmitted over the WAN at the same time; as the data varies, the dictionary adjusts to suit the changing traffic. Stacker compression is more CPU-intensive and less memory-intensive.

To configure Stacker compression, issue the compress stac command from interface configuration mode.
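As a minimal sketch (the interface number and the PPP encapsulation are placeholders chosen for illustration, not a recommendation), the relevant interface configuration could look like this:

    ! Serial0 and PPP encapsulation are illustrative choices for this sketch
    interface Serial0
     encapsulation ppp
     compress stac

Remember that both routers on the link must be configured for the same compression method.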

Predictor compression

The Predictor compression algorithm attempts to predict the next sequence of characters in a data stream by using an index to look up a sequence in the compression dictionary. It then examines the next sequence in the data stream to see whether it matches. If it does, that sequence replaces the looked-up sequence in the dictionary. If it does not, the algorithm locates the next character sequence in the index and the process begins again. The index updates itself by hashing a few of the most recent character sequences from the input stream. No time is wasted trying to compress data that has already been compressed. The compression ratio obtained with Predictor is not as good as that of other compression algorithms, but it remains one of the fastest algorithms available. Predictor is more memory-intensive and less CPU-intensive.

To configure Predictor compression, issue the compress predictor command from interface configuration mode.
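A comparable sketch for Predictor (again, the interface number is illustrative; Predictor is supported only with PPP and LAPB encapsulation):

    ! Serial1 is an illustrative interface; Predictor requires PPP or LAPB
    interface Serial1
     encapsulation ppp
     compress predictor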

Cisco internetworking devices use the Stacker and Predictor data compression algorithms. The Compression Service Adapter (CSA) only supports the Stacker algorithm. The Stacker method is the most versatile as it runs on any supported point-to-point Layer 2 encapsulation. The Predictor only supports PPP and LAPB.

Cisco IOS data compression

There are no industry-standard compression specifications, but Cisco IOS® software supports several third-party compression algorithms, including Hi/fn Stac Lempel-Ziv Stac (LZS), Predictor, and Microsoft Point-to-Point Compression (MPPC). These compress data on a link-by-link basis or at the network-trunk level.

Compression can be applied to the entire frame, to the header only, or to the payload only. The success of these solutions can be measured by the compression ratio achieved and by the latency the platform adds.

Cisco IOS software supports the following data compression products (a sample configuration sketch follows this list):

  • FRF.9 for frame relay compression

  • Link Access Procedure, Balanced (LAPB) payload compression using LZS or Predictor

  • High-Level Data Link Control (HDLC) using LZS

  • X.25 payload compression of encapsulated traffic

  • Point-to-Point Protocol (PPP) with LZS, Predictor and Microsoft Point-to-Point Compression (MPPC)
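As hedged illustrations of two of the options above (the interface numbers and the DLCI are placeholders and must be adapted to the actual network), FRF.9 payload compression on a Frame Relay point-to-point subinterface and MPPC on a PPP serial link might be configured as follows:

    ! FRF.9 (Stacker-based) payload compression on a Frame Relay subinterface
    interface Serial0
     encapsulation frame-relay
    !
    interface Serial0.1 point-to-point
     frame-relay payload-compression FRF9 stac
     frame-relay interface-dlci 16
    !
    ! MPPC compression on a PPP serial link
    interface Serial1
     encapsulation ppp
     compress mppc

As with the software algorithms described earlier, the peer at the other end of each link must be configured for the same compression method.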

However, compression is not always beneficial, and the following factors can affect its effectiveness:

  • No standards: Although the Cisco IOS software supports several compression algorithms, they are proprietary and not necessarily interoperable.

    Note: Both ends of a compression transaction must support the same algorithms.

  • Data type: The same compression algorithm provides different compression ratios, depending on the type of data being compressed. Some types of data are inherently more compressible than others and can achieve compression ratios as high as 6:1. Cisco conservatively cites an average Cisco IOS compression ratio of 2:1.

  • Data that has already been compressed: Attempting to compress data that has already been compressed, such as JPEG or MPEG files, may take longer than transferring the data without compression.

  • Processor usage: Software compression solutions consume valuable processor cycles in the router. Routers must also support other functions such as management, security, and protocol translation. Compressing large amounts of data can affect router performance and cause network latency.

The highest compression rates are usually achieved with highly compressible text files. Because this is software compression rather than hardware compression, compressing data can degrade performance. Use caution when configuring compression on smaller systems with less memory and slower CPUs.
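To check what a particular link actually achieves, the show compress exec command displays per-interface compression statistics, such as the ratio obtained, and show processes cpu shows how much processor time compression consumes (sample output omitted here):

    Router# show compress
    Router# show processes cpu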

Cisco hardware compression

Cisco 7000 platforms

CSA performs high-performance, hardware-assisted compression for Cisco Internetwork Operating System (Cisco IOS™) compression services. It is available for all Cisco 7500, 7200, and RSP 7000 series routers.

The CSA enables high-performance compression at the central site. It can receive multiple compression streams from remote Cisco routers that use Cisco IOS software-based compression. The CSA maximizes router performance by offloading compression from the central processing modules of the RSP7000, 7200, and 7500 (with distributed compression), leaving those processors free for routing and other specialized tasks.

When used in a Cisco 7200 series router, the CSA can offload compression from any interface. When used with a VIP2, the CSA can offload compression only from port adapters on the same VIP2.

Cisco 3620 and 3640 platforms

The Compression Network Module dramatically increases the compression bandwidth of the Cisco 3600 Series by offloading the processing-intensive work of compression from the main CPU. It uses a dedicated, optimized co-processor design that supports full-duplex compression and decompression. Compression is performed at the link layer (Layer 2) and is supported for PPP and Frame Relay.

Low-speed WAN compression can often be handled by the Cisco IOS software running on the main Cisco 3600 Series CPU. For the Cisco 3620, this bandwidth is well below the T1/E1 rate; for the Cisco 3640, it approaches the T1 rate. However, these rates cannot be achieved if the Cisco 3600 system must also perform other processor-intensive tasks. The Compression Network Module frees the main processor for other tasks while increasing the compression bandwidth to two full-duplex E1s (2 x 2.048 Mbps full duplex) on both the Cisco 3620 and the Cisco 3640. You can use this bandwidth for a single channel or circuit, or distribute it over up to 128 channels. Examples include an E1 or T1 leased line, up to 128 ISDN B channels, or Frame Relay virtual circuits.

Cisco 3660 platforms

The Data Compression Advanced Integration Module (AIM) for the Cisco 3660 Series uses one of the two internal AIM slots available in the Cisco 3660. This keeps the external slots free for components such as integrated analog voice/fax devices, digital voice/fax, ATM, CSUs/DSUs (channel service units/data service units), and analog and digital modems.

Data compression technology maximizes bandwidth and increases WAN link throughput by reducing frame size so that more data can be carried over a link. While software-based compression supports fractional T1/E1 rates, hardware-based compression offloads the platform's main processor to achieve even higher throughput. With a compression ratio of up to 4:1, the data compression AIM supports compressed data throughput of 16 Mbps with no additional traffic latency, enough to keep four T1 or E1 circuits full of compressed data in both directions at the same time (four E1 circuits at 2.048 Mbps each, in both directions, is roughly 16 Mbps). The data compression AIM supports the LZS and Microsoft Point-to-Point Compression (MPPC) algorithms.

Cisco 2600 platforms

The data compression AIM for the Cisco 2600 Series uses the internal Advanced Integration Module slot of the Cisco 2600, leaving external slots available for components such as integrated CSUs/DSUs, analog modems, or voice/fax modules.

The data compression AIM supports 8 Mbps of throughput with no additional traffic latency, and it supports the LZS and Microsoft Point-to-Point Compression (MPPC) algorithms.