SAVAM — Semiautomatic Visual-Attention Modeling
MSU Graphics & Media Lab (Video Group)
Projects, ideas: Dr. Dmitriy Vatolin, Prof. Galina Rozhkova
Introduction
Attention maps can be applied in many fields: user-interface design, computer graphics, video processing, etc. Many technologies, algorithms, and filters can be improved using information about the saliency distribution. During this work we created a database of human eye movements captured while observers viewed various videos (static and dynamic scenes, shots from cinema-like films, and scientific databases).
Features/Benefits
High quality
Diversity
Data post-processing
To improve the data's accuracy, we applied several levels of verification and correction. The test sequence was divided into three five-minute parts. Before each part we carried out a calibration procedure: the observer followed a target that was placed successively at 13 locations across the screen. We then validated the calibration by measuring the gaze-position error at four points; if the estimated error exceeded 0.3 angular degrees, we restarted the calibration. To reduce the influence of one video on the next, we separated adjacent scenes with a cross-fade through a black frame.

Additionally, to measure observer fatigue, we placed a special pattern after each three-scene part and asked observers to track a stimulus; we defined the squared tracking error as the fatigue value. In the next step, we improved the accuracy of the estimated gaze position by applying a corrective transformation obtained by averaging the eye-tracking data over the calibration pattern. Finally, to understand the influence of observer fatigue on fixations near the end of the sequence, we asked eight observers to view the whole sequence a second time with the scenes in reverse order.
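As a rough illustration of the two numeric steps above (the fatigue value and the calibration-based correction), here is a minimal Python sketch. The function names and the (N, 2) array layout are assumptions for illustration only, and the actual correction may be a richer transform than the constant shift shown here:

import numpy as np

def fatigue_value(gaze_xy, stimulus_xy):
    """Mean squared tracking error between recorded gaze samples and the
    moving stimulus; this squared error is what the text above defines as
    the fatigue value. Both inputs: (N, 2) arrays of screen coordinates."""
    err = gaze_xy - stimulus_xy
    return float(np.mean(np.sum(err ** 2, axis=1)))

def calibration_correction(gaze_xy, target_xy):
    """Corrective shift obtained by averaging the eye-tracking error over
    the calibration pattern (here, the 13 known target locations).
    Returns a constant (2,) offset to add to subsequent gaze samples."""
    return np.mean(target_xy - gaze_xy, axis=0)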
Downloads
ICCP Paper (2017)
Accepted version of the paper: Download
Supplementary materials: final compression examples (pdf, zip)

ICIP Paper (2014)
Accepted version of the paper: Download
Published version of the paper: IEEE link

Saliency-aware video encoder
A fork of the x264 video encoder that accepts custom saliency maps as an additional input to improve the quality of salient objects (see the sketch after this list). View on GitHub

Robust Saliency Map Comparison
A saliency-map comparison method invariant to the most common transforms.

The Base of Gaze Map
To download the database, please fill in the request form.
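The encoder fork's actual input format is documented in its GitHub repository; purely as a sketch of the general saliency-aware idea, a per-pixel saliency map (values in [0, 1]) can be reduced to per-macroblock quantizer offsets. The function name, 16x16 macroblock assumption, and scaling below are illustrative and are not the fork's real API:

import numpy as np

def saliency_to_qp_offsets(saliency, mb_size=16, max_offset=6.0):
    """Turn a per-pixel saliency map (H, W) into per-macroblock QP
    offsets: salient blocks get negative offsets (finer quantization),
    non-salient blocks positive ones."""
    h, w = saliency.shape
    mb_h, mb_w = h // mb_size, w // mb_size
    # Average saliency inside each macroblock.
    blocks = saliency[:mb_h * mb_size, :mb_w * mb_size] \
        .reshape(mb_h, mb_size, mb_w, mb_size).mean(axis=(1, 3))
    # Center offsets around the mean saliency so the overall bitrate stays
    # roughly unchanged while bits shift toward salient regions.
    return np.clip((blocks.mean() - blocks) * 2.0 * max_offset,
                   -max_offset, max_offset)

Centering the offsets around the mean is one common design choice for keeping the total bitrate roughly constant; the fork itself may use a different mapping.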
Reference
Citation
Y. Gitman, M. Erofeev, D. Vatolin, A. Bolshakov, A. Fedorov. "Semiautomatic Visual-Attention Modeling and Its Application to Video Compression." 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, pp. 1105-1109.

BibTeX
@INPROCEEDINGS{Gitm1410:Semiautomatic,
  AUTHOR    = "Yury Gitman and Mikhail Erofeev and Dmitriy Vatolin and Andrey Bolshakov and Alexey Fedorov",
  TITLE     = "Semiautomatic {Visual-Attention} Modeling and Its Application to Video Compression",
  BOOKTITLE = "2014 IEEE International Conference on Image Processing (ICIP) (ICIP 2014)",
  ADDRESS   = "Paris, France",
  PAGES     = "1105-1109",
  DAYS      = 27,
  MONTH     = oct,
  YEAR      = 2014,
  KEYWORDS  = "Saliency;Visual attention;Eye-tracking;Saliency-aware compression;H.264",
}
Application to video compression
[Comparison screenshots: x264, 1920x1080, 1500 kbps]
Acknowledgments
This work was supported by the Intel/Cisco Video Aware Wireless Networking (VAWN) Program. We thank the Institute for Information Transmission Problems for its help with eye tracking.