MSU Quality Measurement Tool: Metrics information
MSU Graphics & Media Lab (Video Group)
Metrics Info
 PSNR
 MSAD
 Delta
 MSU Blurring Metric
 MSU Blocking Metric
 SSIM
 MultiScale SSIM
 3-Component SSIM
 Spatio-Temporal SSIM
 VQM
 MSE
 MSU Brightness Flicking Metric
 MSU Brightness Independent PSNR
 MSU Drop Frame Metric
 MSU Noise Estimation Metric
 MSU Scene Change Detector
PSNR
This metric, often used in practice, is the peak signal-to-noise ratio (PSNR):

PSNR = 10 · log10( MaxErr² · w · h / Σᵢⱼ (xᵢⱼ − yᵢⱼ)² )

where MaxErr is the maximum possible absolute value of the color component difference, w is the video width,
h is the video height, and xᵢⱼ, yᵢⱼ are the color component values of the two frames at pixel (i, j). Generally, this metric is equivalent to the Mean Square Error, but it is more convenient
to use because of its logarithmic scale. It has the same disadvantages as the MSE metric.
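The formula above can be sketched as follows. This is a minimal illustration, not VQMT's implementation; the function name and NumPy usage are assumptions.

```python
import numpy as np

def psnr(source, processed, max_err=255.0):
    """PSNR in decibels between two equally sized frames.

    max_err is the maximum possible absolute color difference
    (255 for 8-bit components, per the rules below)."""
    diff = source.astype(np.float64) - processed.astype(np.float64)
    mse = np.mean(diff ** 2)  # mean square error over w * h pixels
    if mse == 0:
        return float("inf")  # identical frames: infinite PSNR
    return 10.0 * np.log10(max_err ** 2 / mse)
```

Note that multiplying by w · h in the numerator and summing squared differences, as in the formula, is the same as dividing MaxErr² by the mean square error, as done here.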
In MSU VQMT you can calculate PSNR for all YUV and RGB components and for the L component of the LUV color space.
MSU VQMT provides four PSNR implementations. "PSNR" and "APSNR" calculate PSNR in the correct way,
taking the maximum possible absolute value of the color difference as MaxErr. However, this way of calculation
has an unpleasant effect after color depth conversion: if the color depth is simply increased from 8 to 16 bits,
the "PSNR" and "APSNR" values change, because MaxErr changes with the maximum possible absolute
value of the color difference (255 for 8-bit components and 255 + 255/256 for 16-bit components). For this reason, "PSNR (256)" and "APSNR (256)" are also implemented. Their values do not change, because they use the upper boundary of the color difference, 256,
as MaxErr. This approach is less correct, but it is often used because it is fast.
Here are the rules for choosing MaxErr:
 "PSNR" and "APSNR" – MaxErr depends on the bit depth of the color components:
  255 for 8-bit components
  255 + 3/4 for 10-bit components
  255 + 63/64 for 14-bit components
  255 + 255/256 for 16-bit components
  100 for the L component of the LUV color space
 If the bit depths of the two compared videos differ, the larger bit depth is used to select MaxErr.
 All color space conversions are assumed to produce 8-bit images. This means that if, for example, you are measuring R-RGB PSNR for a 14-bit YUV file, 255 is taken as MaxErr.
 "PSNR (256)" and "APSNR (256)" – MaxErr is selected according to the following rules:
  256 for the YUV and RGB color spaces
  100 for the L component of the LUV color space
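The per-bit-depth values above all equal (2ᵇ − 1) scaled back into the 8-bit range, so the "PSNR"/"APSNR" rules can be sketched as below. The helper name and the generalization to arbitrary bit depths are assumptions for illustration.

```python
def max_err(bits1, bits2, component="Y"):
    """MaxErr for the "PSNR"/"APSNR" rules above (illustrative helper).

    100 for the L component of LUV; otherwise the larger bit depth
    of the two compared videos selects MaxErr."""
    if component == "L":
        return 100.0
    bits = max(bits1, bits2)
    # (2**bits - 1) / 2**(bits - 8) gives 255, 255 + 3/4, 255 + 63/64,
    # 255 + 255/256 for 8, 10, 14 and 16 bits respectively
    return (2 ** bits - 1) / 2 ** (bits - 8)
```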
"PSNR" and "PSNR (256)" compute the average PSNR of a sequence from the average MSE over all frames. However, sometimes a
simple average of all the per-frame PSNR values is needed. "APSNR" and "APSNR (256)" are implemented for this case
and calculate the average PSNR by simply averaging the per-frame PSNR values.
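The two averaging schemes can be contrasted in a short sketch (function names and NumPy usage are illustrative, not VQMT's code):

```python
import numpy as np

def _frame_mse(a, b):
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def sequence_psnr(frame_pairs, max_err=255.0):
    """"PSNR" style: convert the mean of per-frame MSE values to decibels."""
    mean_mse = np.mean([_frame_mse(a, b) for a, b in frame_pairs])
    return 10.0 * np.log10(max_err ** 2 / mean_mse)

def sequence_apsnr(frame_pairs, max_err=255.0):
    """"APSNR" style: average the per-frame PSNR values directly."""
    psnrs = [10.0 * np.log10(max_err ** 2 / _frame_mse(a, b))
             for a, b in frame_pairs]
    return float(np.mean(psnrs))
```

Because the logarithm is concave, APSNR is never lower than PSNR for the same sequence, and the two differ most when per-frame quality varies a lot.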
The following table summarizes the differences:

Metric      | MaxErr              | Sequence averaging
PSNR        | correct             | correct (from mean MSE)
PSNR (256)  | 256 (fast, inexact) | correct (from mean MSE)
APSNR       | correct             | averaging per-frame PSNR values
APSNR (256) | 256 (fast, inexact) | averaging per-frame PSNR values
[Figure: Source frame, Processed frame, Y-YUV PSNR visualization]
MSAD
The value of this metric is the mean absolute difference of the color components at the corresponding points of the two images. This metric is used for testing codecs and filters.
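A minimal sketch of this definition (the function name is illustrative):

```python
import numpy as np

def msad(source, processed):
    """Mean absolute difference of color components at corresponding pixels."""
    diff = source.astype(np.float64) - processed.astype(np.float64)
    return float(np.mean(np.abs(diff)))
```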
[Figure: Source frame, Processed frame, MSAD visualization]
Delta
The value of this metric is the mean (signed) difference of the color components at the corresponding points of the two images. This metric is used for testing codecs and filters.
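Unlike MSAD, the difference here is signed, so errors of opposite sign cancel. A minimal sketch (the function name and the order of subtraction are assumptions):

```python
import numpy as np

def delta(source, processed):
    """Mean signed difference of color components; opposite-sign
    errors cancel, so a value near 0 does not imply identical frames."""
    diff = source.astype(np.float64) - processed.astype(np.float64)
    return float(np.mean(diff))
```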
[Figure: Source frame, Processed frame, Delta visualization]
MSU Blurring Metric
This metric allows you to compare the degree of blurring of two images. If the metric value for the first picture is greater than for the second, the second picture is more blurred than the first.
[Figure: Source frame, Processed frame, MSU Blurring Metric visualization]
MSU Blocking Metric
This metric was created to measure the subjective blocking effect in a video sequence. For example, in high-contrast areas of the frame blocking is not noticeable, while in smooth areas block edges are conspicuous. The metric also contains a heuristic method for detecting object edges that lie on block boundaries; in that case the metric value is pulled down, allowing blocking to be measured more precisely. Information from previous frames is used to achieve better accuracy.
[Figure: Source frame, MSU Blocking Metric visualization]
SSIM Index
The SSIM Index is based on measuring three components (luminance similarity,
contrast similarity and structural similarity) and combining them into a result
value.
Original paper
[Figure: Original frame, Compressed frame, SSIM (fast) and SSIM (precise) visualizations]
There are two implementations of SSIM in our program: fast and precise. The fast one is equal to our previous
SSIM implementation. The difference is that the fast one uses a box filter, while the precise one uses a Gaussian
blur.
Notes:
 The fast implementation's visualization appears shifted. This effect is caused by the sum calculation algorithm for the box filter: the sum is calculated over the block to the bottom-left or top-left of the pixel (depending on whether the image is bottom-up or top-down).
 The SSIM metric has two coefficients. They depend on the maximum value of the image color component and are
calculated using the following equations:
  C1 = 0.01 * 0.01 * video1Max * video2Max
  C2 = 0.03 * 0.03 * video1Max * video2Max
 where
  videoMax = 255 for 8-bit color components
  videoMax = 255 + 3/4 for 10-bit color components
  videoMax = 255 + 63/64 for 14-bit color components
  videoMax = 255 + 255/256 for 16-bit color components
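The coefficient equations above can be sketched as follows; the helper name and the closed-form generalization of the videoMax table to arbitrary bit depths are assumptions.

```python
def ssim_constants(bits1=8, bits2=8):
    """C1 and C2 from the maximum component values of the two videos,
    following the equations and videoMax table above."""
    def video_max(bits):
        # (2**bits - 1) / 2**(bits - 8) reproduces the table:
        # 255, 255 + 3/4, 255 + 63/64, 255 + 255/256
        return (2 ** bits - 1) / 2 ** (bits - 8)
    c1 = 0.01 * 0.01 * video_max(bits1) * video_max(bits2)
    c2 = 0.03 * 0.03 * video_max(bits1) * video_max(bits2)
    return c1, c2
```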
MultiScale SSIM Index
The MultiScale SSIM Index is based on the SSIM metric computed at several downscaled levels of the original images. The result is a weighted combination of those per-level metrics.
Original paper
Brighter areas correspond to greater difference.
[Figure: Original frame, Compressed frame, MSSSIM (fast) and MSSSIM (precise) visualizations]
Two algorithms are implemented for MultiScale SSIM, fast and precise, as for the SSIM metric. The difference is that the fast one uses a box filter, while the precise one uses a Gaussian blur.
Notes:
 Because the resulting metric is calculated as a product of several metric values below 1.0, the visualization appears dark. The fast implementation's visualization also appears shifted; this effect is caused by the sum calculation algorithm for the box filter: the sum is calculated over the block to the bottom-left or top-left of the pixel (depending on whether the image is bottom-up or top-down).
 Level weights (0 corresponds to the original frame, while 4 corresponds to the highest level):
  WEIGHTS[0] = 0.0448;
  WEIGHTS[1] = 0.2856;
  WEIGHTS[2] = 0.3001;
  WEIGHTS[3] = 0.2363;
  WEIGHTS[4] = 0.1333;
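The note above describes the result as a product of per-level values; a common way to apply such weights (an assumption here, not necessarily VQMT's exact formula) is to raise each level's SSIM value to its weight:

```python
def ms_ssim_combine(level_values,
                    weights=(0.0448, 0.2856, 0.3001, 0.2363, 0.1333)):
    """Weighted product of per-level SSIM values (level 0 = original
    frame, level 4 = most downscaled). Sketch only; the exact
    combination used by VQMT may differ."""
    result = 1.0
    for value, weight in zip(level_values, weights):
        result *= value ** weight
    return result
```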
3-Component SSIM Index
The 3-Component SSIM Index is based on dividing the source frames into regions. There are three types of regions: edges, textures and smooth regions. The resulting metric is calculated as a weighted average of the SSIM metric over those regions. The human eye perceives differences more precisely in textured or edge regions than in smooth regions. The division is based on the gradient magnitude computed at every pixel of the images.
Original paper
Brighter areas correspond to greater difference.
[Figure: Original frame, Compressed frame, 3SSIM region division, 3SSIM metric visualization]
Spatio-Temporal SSIM
The idea of this algorithm is to use motion-oriented weighted windows for the SSIM Index. The MSU Motion Estimation algorithm is used to retrieve the motion information. Based on the ME results, a weighting window is constructed for every pixel. This window can use up to 33 consecutive frames (16 previous + the current frame + 16 following). The SSIM Index is then calculated for every window, taking temporal distortions into account as well. In addition, a different pooling technique is used in this implementation: only the lowest 6% of the metric values in a frame are used to calculate the frame metric value. This produces a larger difference in metric values between different files.
Original paper
[Figure: Source frame, Compressed frame, metric visualization]
VQM
VQM uses the DCT to model human visual perception.
Original paper
[Figure: Source frame, Processed frame, VQM visualization]
MSE
[Figure: Source frame, Processed frame, Y-YUV MSE visualization]
MSU Video Quality Measurement Tools
Last updated: 10 March 2016

Project updated by Server Team and MSU Video Group
Project sponsored by YUVsoft Corp.
Project supported by MSU Graphics & Media Lab