MSU Quality Measurement Tool: Metrics information

MSU Graphics & Media Lab (Video Group)


Metrics Info



PSNR


This metric, which is often used in practice, is called peak-to-peak signal-to-noise ratio (PSNR):

$$\mathrm{PSNR} = 10\,\log_{10}\frac{\mathit{MaxErr}^2\cdot w\cdot h}{\sum_{i=1}^{w}\sum_{j=1}^{h}\left(x_{ij}-y_{ij}\right)^2},$$

where MaxErr is the maximum possible absolute value of the color component difference, w is the video width, and h is the video height. This metric is essentially equivalent to mean squared error (MSE), but it is more convenient to use because of its logarithmic scale. It has the same disadvantages as the MSE metric.
In MSU VQMT, you can calculate PSNR for all YUV and RGB components and for the L component of the LUV color space.
MSU VQMT provides four PSNR implementations. "PSNR" and "APSNR" calculate PSNR in the standard way and take the maximum possible absolute value of the color difference as MaxErr. However, this approach has an unpleasant effect after color depth conversion: if the color depth is simply increased from 8 to 16 bits, the "PSNR" and "APSNR" values change, because MaxErr changes along with the maximum possible absolute value of the color difference (255 for 8-bit components, 255 + 255/256 for 16-bit components). For this reason, "PSNR (256)" and "APSNR (256)" are also implemented; their values do not change after such a conversion, because they use an upper bound of the color difference, 256, as MaxErr. This approach is less exact, but it is often used because it is fast. The rules for defining MaxErr are:

    • MaxErr = 255 for 8-bit color components
    • MaxErr = 255 + 3/4 for 10-bit color components
    • MaxErr = 255 + 63/64 for 14-bit color components
    • MaxErr = 255 + 255/256 for 16-bit color components
    • MaxErr = 256 for the "(256)" variants, regardless of bit depth
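As an illustration, here is a minimal per-frame PSNR sketch in Python with NumPy; the function names and the single-component interface are ours, not MSU VQMT's:

```python
import numpy as np

def max_err(bit_depth: int, use_256: bool = False) -> float:
    """MaxErr for one color component, following the rules above."""
    if use_256:
        return 256.0  # the "(256)" variants use this fixed upper bound
    # 255 for 8 bit, 255 + 3/4 for 10 bit, 255 + 255/256 for 16 bit
    return (2 ** bit_depth - 1) / 2 ** (bit_depth - 8)

def frame_psnr(x: np.ndarray, y: np.ndarray,
               bit_depth: int = 8, use_256: bool = False) -> float:
    """PSNR between two frames of a single color component."""
    diff = x.astype(np.float64) - y.astype(np.float64)
    mse = float(np.mean(diff ** 2))
    if mse == 0.0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_err(bit_depth, use_256) ** 2 / mse)
```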

The difference between "PSNR" and "APSNR" (the same as between "PSNR (256)" and "APSNR (256)") lies in the way the average PSNR for a sequence is calculated. The correct way is to average the MSE over all frames (the arithmetic mean of the per-frame MSE values) and then compute PSNR from that average using the ordinary PSNR equation:

$$\mathrm{PSNR}_{\mathrm{avg}} = 10\,\log_{10}\frac{\mathit{MaxErr}^2}{\mathit{MSE}_{\mathrm{avg}}}$$
This way of calculating the average PSNR is used in "PSNR" and "PSNR (256)". However, sometimes a simple average of the per-frame PSNR values is needed; "APSNR" and "APSNR (256)" are implemented for this case and simply average the per-frame PSNR values.
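Continuing the sketch above, the two averaging strategies differ only in where the logarithm is taken (again, function names are ours):

```python
def sequence_psnr(frames_x, frames_y, bit_depth=8, use_256=False):
    """ "PSNR"-style average: mean the per-frame MSE, then take one log. """
    mses = [float(np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2))
            for x, y in zip(frames_x, frames_y)]
    return 10.0 * np.log10(max_err(bit_depth, use_256) ** 2 / np.mean(mses))

def sequence_apsnr(frames_x, frames_y, bit_depth=8, use_256=False):
    """ "APSNR"-style average: mean of the per-frame PSNR values. """
    return float(np.mean([frame_psnr(x, y, bit_depth, use_256)
                          for x, y in zip(frames_x, frames_y)]))
```

Note that sequence_apsnr diverges on any frame with zero MSE (infinite per-frame PSNR), which is one practical reason the MSE-first average is considered the correct one.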
The next table summarizes the differences:

Metric       | MaxErr calculation  | Average PSNR calculation
-------------|---------------------|-------------------------
PSNR         | correct             | correct
PSNR (256)   | 256 (fast, inexact) | correct
APSNR        | correct             | averaging
APSNR (256)  | 256 (fast, inexact) | averaging

"PSNR" metric is recommended for PSNR calculation since it is implemented according to the original PSNR definition.

[Figure: source frame, processed frame, and Y-YUV PSNR visualization]

Visualization colors, in order of increasing PSNR: red, yellow, green, blue, black (note: a larger PSNR means a smaller difference).


MSAD


The value of this metric is the mean absolute difference of the color components at corresponding points of the two images. It is used for testing codecs and filters.
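As a sketch, MSAD for one color component is a single NumPy expression (the function name is ours):

```python
import numpy as np

def msad(x: np.ndarray, y: np.ndarray) -> float:
    """Mean absolute difference of one color component."""
    return float(np.mean(np.abs(x.astype(np.float64) - y.astype(np.float64))))
```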

[Figure: source frame, processed frame, and MSAD visualization]


Delta


The value of this metric is the mean difference of the color components at corresponding points of the two images; unlike MSAD, the sign of the difference is kept. It is used for testing codecs and filters.
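The corresponding sketch differs from the MSAD one only in dropping the absolute value (reusing the NumPy import above; the function name is ours):

```python
def delta(x: np.ndarray, y: np.ndarray) -> float:
    """Mean signed difference of one color component."""
    return float(np.mean(x.astype(np.float64) - y.astype(np.float64)))
```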

[Figure: source frame, processed frame, and Delta visualization]

Red marks pixels where x_ij > y_ij; green marks pixels where x_ij < y_ij.


MSU Blurring Metric


This metric allows you to compare the blurriness of two images. If the metric value for the first picture is greater than for the second, the second picture is more blurred than the first.

[Figure: source frame, processed frame, and MSU Blurring Metric visualization]

Red: the first image is sharper than the second. Green: the second image is sharper than the first.


MSU Blocking Metric


This metric was created to measure the subjective blocking effect in a video sequence. In high-contrast areas of the frame, blocking is not noticeable, while in smooth areas block edges are conspicuous. The metric also contains a heuristic for detecting object edges that lie on block boundaries; in this case the metric value is lowered, allowing blocking to be measured more precisely. Information from previous frames is used to achieve better accuracy.


[Figure: source frame and MSU Blocking Metric visualization]


SSIM Index


The SSIM Index is based on measuring three components (luminance similarity, contrast similarity, and structural similarity) and combining them into a single result value.
Original paper

[Figure: original frame, compressed frame, SSIM (fast) visualization, and SSIM (precise) visualization]

Brighter areas correspond to greater difference.

There are two implementations of SSIM in our program: fast and precise. The fast one is equal to our previous SSIM implementation. The difference between the two is that the fast implementation uses a box filter, while the precise one uses a Gaussian blur (see the sketch after the notes below).
Notes:

  1. The fast implementation's visualization appears shifted. This effect is caused by the sum calculation algorithm for the box filter: the sum is calculated over the block to the bottom-left or upper-left of the pixel (depending on whether the image is bottom-up or top-down).
  2. The SSIM metric has two stabilizing constants. They depend on the maximum value of the image color component and are calculated using the following equations:
    • C1 = 0.01 * 0.01 * video1Max * video2Max
    • C2 = 0.03 * 0.03 * video1Max * video2Max
    where video1Max is the maximum value of a given color component for the first video and video2Max is the maximum value of the same color component for the second video. The maximum value of a color component is calculated in the same way as for PSNR:
    • videoMax = 255 for 8-bit color components
    • videoMax = 255 + 3/4 for 10-bit color components
    • videoMax = 255 + 63/64 for 14-bit color components
    • videoMax = 255 + 255/256 for 16-bit color components
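To make the two variants concrete, here is a minimal per-pixel SSIM sketch in Python with NumPy/SciPy. The window parameters (an 8x8 box, a Gaussian with sigma 1.5) are our assumptions rather than documented MSU VQMT settings, and a single videoMax is used for both inputs for brevity:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def ssim_map(x, y, video_max=255.0, precise=False):
    """Per-pixel SSIM map: box filter ("fast") or Gaussian blur ("precise")."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    if precise:
        blur = lambda a: gaussian_filter(a, sigma=1.5)  # assumed sigma
    else:
        blur = lambda a: uniform_filter(a, size=8)      # assumed window size
    c1 = 0.01 * 0.01 * video_max * video_max  # constants from note 2 above
    c2 = 0.03 * 0.03 * video_max * video_max
    mu_x, mu_y = blur(x), blur(y)
    var_x = blur(x * x) - mu_x * mu_x   # local variances and covariance
    var_y = blur(y * y) - mu_y * mu_y
    cov_xy = blur(x * y) - mu_x * mu_y
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x * mu_x + mu_y * mu_y + c1) * (var_x + var_y + c2))
```

The frame value is then typically the mean of this map; the visualizations above highlight where the map deviates from 1.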


MultiScale SSIM Index


The MultiScale SSIM Index is based on the SSIM metric computed on several downscaled levels of the original images. The result is a weighted combination of those per-level values.
Original paper


[Figure: original frame, compressed frame, MS-SSIM (fast) visualization, and MS-SSIM (precise) visualization]
Brighter areas correspond to greater difference.

Two algorithms are implemented for MultiScale SSIM, fast and precise, as for the SSIM metric. The difference is that the fast one uses a box filter, while the precise one uses a Gaussian blur.
Notes:

  1. Because the resulting metric is calculated as a product of several metric values below 1.0, the visualization appears dark. The fast implementation's visualization also appears shifted; this effect is caused by the sum calculation algorithm for the box filter (the sum is calculated over the block to the bottom-left or upper-left of the pixel, depending on whether the image is bottom-up or top-down).
  2. Level weights (0 corresponds to the original frame, 4 to the most downscaled level; see the sketch after this list):
    • WEIGHTS[0] = 0.0448;
    • WEIGHTS[1] = 0.2856;
    • WEIGHTS[2] = 0.3001;
    • WEIGHTS[3] = 0.2363;
    • WEIGHTS[4] = 0.1333;
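Building on the ssim_map sketch from the SSIM section, the multiscale combination might look as follows. Note this is a simplification: the common MS-SSIM formulation applies the luminance term only at the coarsest scale, while this sketch takes full SSIM at every level, and the 2x downscaling by 2x2 averaging is also our assumption:

```python
WEIGHTS = [0.0448, 0.2856, 0.3001, 0.2363, 0.1333]  # level weights listed above

def downscale2x(a):
    """Average 2x2 blocks (odd edge rows/columns are cropped)."""
    a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]
    return (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4.0

def ms_ssim(x, y, video_max=255.0, precise=False):
    """Weighted product of mean per-level SSIM values over 5 scales."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    result = 1.0
    for level, w in enumerate(WEIGHTS):
        s = float(np.mean(ssim_map(x, y, video_max, precise)))
        result *= max(s, 1e-12) ** w  # clamp: SSIM can be slightly negative
        if level + 1 < len(WEIGHTS):
            x, y = downscale2x(x), downscale2x(y)
    return result
```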


3-Component SSIM Index


The 3-Component SSIM Index is based on dividing the source frames into regions. There are three region types: edges, textures, and smooth regions. The resulting metric is calculated as a weighted average of the SSIM values for those regions; the human eye perceives differences more precisely in textured and edge regions than in smooth ones. The division is based on the gradient magnitude computed at every pixel of the images.
Original paper
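A minimal sketch of the region division and pooling, again reusing ssim_map; the gradient thresholds and region weights here are hypothetical placeholders, not values from the paper:

```python
def three_ssim(x, y, video_max=255.0,
               t_edge=30.0, t_smooth=10.0,   # hypothetical thresholds
               weights=(0.5, 0.25, 0.25)):   # hypothetical (edge, texture, smooth)
    """Weighted average of mean SSIM over edge/texture/smooth regions."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    gy, gx = np.gradient(x)
    mag = np.hypot(gx, gy)                   # gradient magnitude at every pixel
    edge = mag >= t_edge
    smooth = mag < t_smooth
    texture = ~edge & ~smooth
    s = ssim_map(x, y, video_max)
    vals = [float(np.mean(s[m])) if m.any() else 1.0
            for m in (edge, texture, smooth)]
    return sum(w * v for w, v in zip(weights, vals))
```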


[Figure: original frame, compressed frame, 3-SSIM region division, and 3-SSIM metric visualization]
Brighter areas correspond to greater difference.


Spatio-Temporal SSIM


The idea of this algorithm is to use motion-oriented weighted windows for the SSIM Index. The MSU Motion Estimation algorithm is used to retrieve the motion information, and based on the ME results a weighting window is constructed for every pixel. The window can use up to 33 consecutive frames (16 previous + the current frame + 16 following). SSIM is then calculated over every window, so temporal distortions are taken into account as well. In addition, a different pooling technique is used in this implementation: only the lowest 6% of the per-pixel metric values are used to calculate the frame value, which yields a larger spread of metric values between different files.
Original paper
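The 6% pooling step in isolation might look like this sketch (the per-pixel map could come from any SSIM variant; the function name is ours):

```python
def worst_6_percent_pooling(metric_map: np.ndarray) -> float:
    """Frame value = mean of the lowest 6% of per-pixel metric values."""
    vals = np.sort(metric_map.ravel())
    k = max(1, int(0.06 * vals.size))
    return float(np.mean(vals[:k]))
```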

[Figure: source frame, compressed frame, and Spatio-Temporal SSIM visualization]

Brighter blocks correspond to greater difference.


VQM


VQM uses the DCT to model human visual perception.
Original paper

[Figure: source frame, processed frame, and VQM visualization]

Brighter blocks correspond to greater difference.


MSE


The value of this metric is the mean squared difference of the color components at corresponding points of the images; PSNR above is the same quantity on a logarithmic scale.

[Figure: source frame, processed frame, and Y-YUV MSE visualization]

