# MSU Quality Measurement Tool: Metrics information

### PSNR

This metric, often used in practice, is called peak signal-to-noise ratio (PSNR).

PSNR = 10 * log10( (MaxErr^2 * w * h) / Σ_{i,j} (x_ij - y_ij)^2 )

where MaxErr is the maximum possible absolute value of the color component difference, w is the video width and h is the video height. This metric is generally equivalent to Mean Squared Error (MSE), but it is more convenient to use because of its logarithmic scale. It has the same disadvantages as the MSE metric.

In MSU VQMT you can calculate PSNR for all YUV and RGB components and for the L component of the LUV color space.

MSU VQMT provides four PSNR implementations. "PSNR" and "APSNR" calculate PSNR in the correct way, taking the maximum possible absolute value of the color difference as MaxErr. However, this gives an unpleasant effect after color depth conversion: if the color depth is simply increased from 8 to 16 bits, "PSNR" and "APSNR" change, because MaxErr changes with the maximum possible absolute color difference (255 for 8-bit components, 255 + 255/256 for 16-bit components). For this reason "PSNR (256)" and "APSNR (256)" are also implemented. They do not change with bit depth because they use an upper bound of the color difference, 256, as MaxErr. This approach is less correct, but it is often used because it is fast. The rules for choosing MaxErr:

• "PSNR" and "APSNR": MaxErr depends on the bit depth of the color components:
  • 255 for 8-bit components
  • 255 + 3/4 for 10-bit components
  • 255 + 63/64 for 14-bit components
  • 255 + 255/256 for 16-bit components
  • 100 for the L component of the LUV color space
Notes:
1. If the bit depth differs between the two compared videos, the larger bit depth is used to select MaxErr.
2. All color space conversions are assumed to produce 8-bit images. This means that if, for example, you measure R-RGB PSNR for a 14-bit YUV file, 255 is taken as MaxErr.
• "PSNR (256)" and "APSNR (256)": MaxErr is selected according to the following rules:
  • 256 for YUV and RGB color spaces
  • 100 for the L component of the LUV color space
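As an illustration of these rules, here is a minimal Python sketch of single-frame PSNR with MaxErr selection. This is not VQMT's actual code; the closed-form expression for MaxErr is an assumption that reproduces the values listed above.

```python
import math

def max_err(bit_depth: int, mode: str = "PSNR") -> float:
    """MaxErr selection for YUV/RGB components as described above.

    "PSNR"/"APSNR" use the exact maximum component difference for the
    given bit depth; "PSNR (256)" uses the fast upper bound 256.
    """
    if mode == "PSNR (256)":
        return 256.0
    # Assumed closed form: an n-bit component scaled to the 0..255 range
    # has maximum 255 + (2^(n-8) - 1) / 2^(n-8), which reproduces the
    # listed values (255, 255 + 3/4, 255 + 63/64, 255 + 255/256).
    scale = 2 ** (bit_depth - 8)
    return 255.0 + (scale - 1) / scale

def psnr(frame1, frame2, bit_depth=8, mode="PSNR"):
    """Single-frame PSNR for two equal-size frames given as flat lists
    of component values already scaled to the 0..255 range."""
    m = max_err(bit_depth, mode)
    sse = sum((a - b) ** 2 for a, b in zip(frame1, frame2))
    if sse == 0:
        return float("inf")  # identical frames
    mse = sse / len(frame1)
    return 10.0 * math.log10(m * m / mse)
```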
The difference between "PSNR" and "APSNR" (the same as between "PSNR (256)" and "APSNR (256)") is the way the average PSNR is calculated for a sequence. The correct way is to calculate the average MSE over all frames (the arithmetic mean of the per-frame MSE values) and then compute PSNR from it using the ordinary PSNR equation:

PSNR_avg = 10 * log10( MaxErr^2 / MSE_avg )
This way of calculating the average PSNR is used in "PSNR" and "PSNR (256)". Sometimes, however, a simple average of the per-frame PSNR values is needed; "APSNR" and "APSNR (256)" are implemented for this case and average the per-frame PSNR values directly.
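The two averaging schemes can be sketched as follows. This is a simplified illustration for 8-bit frames given as flat lists; the function names are ours, not VQMT's.

```python
import math

MAX_ERR = 255.0  # 8-bit components

def frame_mse(f1, f2):
    """Mean squared error of one frame pair (flat component lists)."""
    return sum((a - b) ** 2 for a, b in zip(f1, f2)) / len(f1)

def avg_psnr(frames1, frames2):
    """"PSNR"-style averaging: mean MSE over all frames, then one PSNR."""
    mean_mse = sum(frame_mse(a, b)
                   for a, b in zip(frames1, frames2)) / len(frames1)
    return 10 * math.log10(MAX_ERR ** 2 / mean_mse)

def avg_apsnr(frames1, frames2):
    """"APSNR"-style averaging: PSNR per frame, then arithmetic mean."""
    vals = [10 * math.log10(MAX_ERR ** 2 / frame_mse(a, b))
            for a, b in zip(frames1, frames2)]
    return sum(vals) / len(vals)
```

Because -log10 is convex, APSNR is never smaller than PSNR for the same sequence; the two agree only when every frame has the same MSE.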
The next table summarizes the differences:

| Metric      | MaxErr calculation  | Average PSNR calculation |
|-------------|---------------------|--------------------------|
| PSNR        | correct             | correct                  |
| PSNR (256)  | 256 (fast, inexact) | correct                  |
| APSNR       | correct             | averaging                |
| APSNR (256) | 256 (fast, inexact) | averaging                |
"PSNR" metric is recommended for PSNR calculation since it is implemented according to the original PSNR definition.

*Visualization: Source, Processed, Y-YUV PSNR*

Colors, in order of increasing PSNR: red, yellow, green, blue, black (note: the higher the PSNR, the smaller the difference)

### MSAD

The value of this metric is the mean absolute difference of the color components at the corresponding points of the two images. This metric is used for testing codecs and filters.

### Delta

The value of this metric is the mean (signed) difference of the color components at the corresponding points of the two images. This metric is used for testing codecs and filters.

*Visualization: Source, Processed, Delta*

Red: X_ij > Y_ij; green: X_ij < Y_ij
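Both the signed mean difference and the mean absolute difference reduce to one line each; a minimal sketch with frames as flat lists of component values (function names are ours):

```python
def delta(frame1, frame2):
    """Mean signed difference of corresponding color components."""
    return sum(a - b for a, b in zip(frame1, frame2)) / len(frame1)

def msad(frame1, frame2):
    """Mean absolute difference of corresponding color components."""
    return sum(abs(a - b) for a, b in zip(frame1, frame2)) / len(frame1)
```

Note that symmetric errors cancel out in the signed mean but not in the absolute mean, which is why the two metrics answer different questions.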

### MSU Blurring Metric

This metric compares the amount of blurring in two images. If the metric value for the first picture is greater than for the second, the second picture is more blurred than the first.

*Visualization: Source, Processed, MSU Blurring Metric*

Red: the first image is sharper than the second. Green: the second image is sharper than the first.

### MSU Blocking Metric

This metric was created to measure the subjective blocking effect in a video sequence. Block edges are not noticeable in high-contrast areas of the frame, but they are conspicuous in smooth areas. The metric also contains a heuristic for detecting object edges that lie on block boundaries; in that case the metric value is lowered, which makes the blocking measurement more precise. Information from previous frames is used to improve accuracy.

*Visualization: Source, MSU Blocking Metric*

### SSIM Index

SSIM Index is based on measuring three components (luminance similarity, contrast similarity and structural similarity) and combining them into a single result value.
Original paper

*Visualization: Original, Compressed, SSIM (fast), SSIM (precise)*

Brighter areas correspond to a greater difference.

There are two implementations of SSIM in our program: fast and precise. The fast one is equal to our previous SSIM implementation. The difference is that the fast implementation uses a box filter, while the precise one uses Gaussian blur.
Notes:

1. The fast implementation's visualization appears shifted. This effect originates from the sum calculation algorithm of the box filter: the sum is calculated over the block to the bottom-left or top-left of the pixel (depending on whether the image is stored bottom-up or top-down).
2. The SSIM metric has two coefficients. They depend on the maximum value of the image color component and are calculated using the following equations:
• C1 = 0.01 * 0.01 * video1Max * video2Max
• C2 = 0.03 * 0.03 * video1Max * video2Max
where video1Max is the maximum value of the given color component for the first video and video2Max is the maximum value of the same component for the second video. The maximum value of a color component is calculated in the same way as for PSNR:
• videoMax = 255 for 8-bit color components
• videoMax = 255 + 3/4 for 10-bit color components
• videoMax = 255 + 63/64 for 14-bit color components
• videoMax = 255 + 255/256 for 16-bit color components
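Putting the constants together, here is a single-window SSIM sketch in pure Python over flat component lists. This is the standard SSIM formula with the C1/C2 definitions above, not VQMT's exact implementation.

```python
def ssim_window(x, y, video1_max=255.0, video2_max=255.0):
    """SSIM for one window given as two flat lists of component values.

    Constants as described above:
    C1 = 0.01^2 * video1Max * video2Max, C2 = 0.03^2 * video1Max * video2Max.
    """
    c1 = 0.01 * 0.01 * video1_max * video2_max
    c2 = 0.03 * 0.03 * video1_max * video2_max
    n = len(x)
    mx = sum(x) / n                                   # mean of x
    my = sum(y) / n                                   # mean of y
    vx = sum((a - mx) ** 2 for a in x) / n            # variance of x
    vy = sum((b - my) ** 2 for b in y) / n            # variance of y
    cov = sum((a - mx) * (b - my)
              for a, b in zip(x, y)) / n              # covariance
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical windows give exactly 1.0; the "fast" and "precise" variants in VQMT differ only in how these local means and variances are filtered (box filter vs. Gaussian blur).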

### MultiScale SSIM INDEX

MultiScale SSIM Index is based on the SSIM metric applied to several downscaled levels of the original images. The result is a weighted combination of the per-level metric values.
Original paper

*Visualization: Original frame, Compressed frame, MSSSIM (fast), MSSSIM (precise)*

Brighter areas correspond to a greater difference.

As for the SSIM metric, two MultiScale SSIM implementations are provided: fast and precise. The difference is that the fast one uses a box filter, while the precise one uses Gaussian blur.
Notes:

1. Because the resulting metric is calculated as a product of several values below 1.0, the visualization appears dark. The fast implementation's visualization also appears shifted; this effect originates from the sum calculation algorithm of the box filter (the sum is calculated over the block to the bottom-left or top-left of the pixel, depending on whether the image is stored bottom-up or top-down).
2. Level weights (0 corresponds to the original frame, 4 to the most downscaled level):
• WEIGHTS[0] = 0.0448;
• WEIGHTS[1] = 0.2856;
• WEIGHTS[2] = 0.3001;
• WEIGHTS[3] = 0.2363;
• WEIGHTS[4] = 0.1333;
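The original paper combines the per-level values as a weighted product. A toy sketch under that assumption follows; the pluggable single-scale `ssim_fn` and the simple 2x2-average downscale are our simplifications, not VQMT's filters.

```python
# Per-level weights from the list above (level 0 = original frame).
WEIGHTS = [0.0448, 0.2856, 0.3001, 0.2363, 0.1333]

def downscale2x(img):
    """Halve a 2D image (list of rows) by 2x2 block averaging."""
    return [[(img[r][c] + img[r][c + 1] +
              img[r + 1][c] + img[r + 1][c + 1]) / 4.0
             for c in range(0, len(img[0]) - 1, 2)]
            for r in range(0, len(img) - 1, 2)]

def ms_ssim(img1, img2, ssim_fn):
    """Combine per-level values as a weighted product, prod(s_l ** w_l).

    `ssim_fn` is any single-scale SSIM implementation taking two images.
    """
    result = 1.0
    for level, w in enumerate(WEIGHTS):
        result *= ssim_fn(img1, img2) ** w
        if level < len(WEIGHTS) - 1:  # no downscale after the last level
            img1, img2 = downscale2x(img1), downscale2x(img2)
    return result
```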

### 3-Component SSIM INDEX

3-Component SSIM Index is based on dividing the source frames into regions. There are three region types: edges, textures and smooth regions. The resulting metric is calculated as a weighted average of the SSIM metric over these regions. The human eye perceives differences in textured and edge regions more precisely than in smooth regions. The division is based on the gradient magnitude at every pixel of the images.
Original paper

*Visualization: Original frame, Compressed frame, 3-SSIM region division, 3-SSIM metric*

Brighter areas correspond to a greater difference.
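A toy sketch of the idea: classify each pixel by gradient magnitude, then average the per-region SSIM values with weights. The thresholds and region weights here are illustrative assumptions, not the paper's values.

```python
def classify_regions(gradients, edge_thr=0.12, smooth_thr=0.06):
    """Label each pixel by its gradient magnitude: 'edge' for large
    gradients, 'smooth' for small ones, 'texture' in between.
    Thresholds are illustrative, not the paper's."""
    labels = []
    for g in gradients:
        if g >= edge_thr:
            labels.append("edge")
        elif g <= smooth_thr:
            labels.append("smooth")
        else:
            labels.append("texture")
    return labels

def three_ssim(region_ssim,
               weights={"edge": 0.5, "texture": 0.25, "smooth": 0.25}):
    """Weighted average of per-region SSIM values (weights are assumed)."""
    return sum(weights[r] * v for r, v in region_ssim.items())
```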

### Spatio-Temporal SSIM

The idea of this algorithm is to use motion-oriented weighted windows for the SSIM Index. The MSU Motion Estimation algorithm is used to retrieve the motion information. Based on the ME results, a weighting window is constructed for every pixel; this window can span up to 33 consecutive frames (16 + current frame + 16). The SSIM Index is then calculated for every window, so temporal distortions are taken into account as well. This implementation also uses a different pooling technique: only the lowest 6% of the metric values in a frame are used to calculate the frame metric value. This increases the difference between metric values for different files.
Original paper

*Visualization: Source, Compressed, Metric visualization*

Brighter blocks correspond to a greater difference.
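The lowest-6% pooling step can be sketched as follows (a minimal illustration, not VQMT's code):

```python
def pool_lowest(values, fraction=0.06):
    """Frame-level pooling as described above: average only the lowest
    `fraction` of per-window metric values, emphasizing the
    worst-quality areas of the frame."""
    ordered = sorted(values)
    k = max(1, int(len(ordered) * fraction))  # at least one value
    worst = ordered[:k]
    return sum(worst) / len(worst)
```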

### VQM

VQM uses the DCT to model human visual perception.
Original paper

*Visualization: Source, Processed, VQM*

Brighter blocks correspond to a greater difference.

### MSE

The value of this metric is the mean squared difference of the color components at the corresponding points of the two images.

*Visualization: Source, Processed, Y-YUV MSE*

### REC.601

This is the default YUV <=> RGB conversion table in AviSynth.
{R [0...255], G [0...255], B [0...255]} => {Y [16...235], U [16...240], V [16...240]}
RGB to YUV

```
Y =  (0.257 * R) + (0.504 * G) + (0.098 * B) + 16
U = -(0.148 * R) - (0.291 * G) + (0.439 * B) + 128
V =  (0.439 * R) - (0.368 * G) - (0.071 * B) + 128
```
YUV to RGB
```
R = 1.164 * (Y - 16) + 1.596 * (V - 128)
G = 1.164 * (Y - 16) - 0.391 * (U - 128) - 0.813 * (V - 128)
B = 1.164 * (Y - 16) + 2.018 * (U - 128)
```
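The two REC.601 tables translate directly into code. Here is a sketch with rounding and clamping back to the 8-bit range; the clamping policy is our assumption, not part of the table.

```python
def clamp(v, lo=0, hi=255):
    """Clamp a component value to the 8-bit range."""
    return max(lo, min(hi, v))

def rgb_to_yuv_rec601(r, g, b):
    """REC.601 TV-range RGB -> YUV from the table above (8-bit input)."""
    y = 0.257 * r + 0.504 * g + 0.098 * b + 16
    u = -0.148 * r - 0.291 * g + 0.439 * b + 128
    v = 0.439 * r - 0.368 * g - 0.071 * b + 128
    return y, u, v

def yuv_to_rgb_rec601(y, u, v):
    """REC.601 YUV -> RGB from the table above, rounded and clamped."""
    r = 1.164 * (y - 16) + 1.596 * (v - 128)
    g = 1.164 * (y - 16) - 0.391 * (u - 128) - 0.813 * (v - 128)
    b = 1.164 * (y - 16) + 2.018 * (u - 128)
    return clamp(round(r)), clamp(round(g)), clamp(round(b))
```

Round-tripping an RGB triple through both functions recovers the original values up to rounding, which is a quick sanity check for any conversion-table implementation.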

### PC.601

{R [0...255], G [0...255], B [0...255]} => {Y [0...255], U [-128...128], V [-128...128]}
RGB to YUV

```
Y =  0.299 * R + 0.587 * G + 0.114 * B
U = -0.147 * R - 0.289 * G + 0.436 * B
V =  0.615 * R - 0.515 * G - 0.100 * B
```
YUV to RGB
```
R = Y + 1.14 * V
G = Y - 0.395 * U - 0.581 * V
B = Y + 2.032 * U
```

### YUV Files

YUV files are a variety of "raw data" files. MSU Video Quality Measurement Tool supports several types of them, but if you use .yuv files in your comparison, note that:

1. U and V values in YUV files are assumed to be positive (unsigned).
2. If you used any YUV <=> RGB conversion table when creating YUV files from AVI (or AVI from YUV), you must choose the same table in the settings of MSU Video Quality Measurement Tool.
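As an illustration of what "raw data" means here, below is a minimal reader for one common layout, 8-bit planar I420 (YUV 4:2:0). The layout assumption (a full-resolution Y plane followed by quarter-size U and V planes) is ours; other .yuv layouts exist.

```python
def read_i420_frames(path, width, height):
    """Iterate over frames of a raw 8-bit I420 (planar YUV 4:2:0) file.

    Assumed frame layout: full-resolution Y plane, then quarter-size
    U and V planes. Yields (y, u, v) byte strings per frame.
    """
    y_size = width * height
    c_size = (width // 2) * (height // 2)  # each chroma plane
    frame_size = y_size + 2 * c_size
    with open(path, "rb") as f:
        while True:
            frame = f.read(frame_size)
            if len(frame) < frame_size:
                break  # end of file (or truncated last frame)
            yield (frame[:y_size],
                   frame[y_size:y_size + c_size],
                   frame[y_size + c_size:])
```

Because a raw file carries no header, the width, height, bit depth and plane layout must all be supplied externally, which is exactly why the tool asks for them in its settings.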

 Last updated: 25-August-2011

Project updated by
Server Team and MSU Video Group