Download: compression resources

Video compression - Quantization


English-language materials
Authors | Article title | Description | Rating
Byeungwoo Jeon and Jechang Jeong. Blocking Artifacts Reduction in Image Compression with Block Boundary Discontinuity Criterion
Abstract—This paper proposes a novel blocking artifacts reduction method based on the notion that the blocking artifacts are caused by heavy accuracy loss of transform coefficients in the quantization process. We define the block boundary discontinuity measure as the sum of the squared differences of pixel values along the block boundary. The proposed method compensates for selected transform coefficients so that the resultant image has a minimum block boundary discontinuity. The proposed method does not require a particular transform domain where the compensation should take place; therefore, an appropriate transform domain can be selected at the user’s discretion. In the experiments, the scheme is applied to DCT-based compressed images to show its performance.
RAR, 427 KB
?
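
The boundary-discontinuity measure used by this method is simple to compute directly. Below is a minimal NumPy sketch under the assumption of 8x8 coding blocks and a grayscale image; the function and variable names are illustrative, not taken from the paper. A decoder-side implementation would evaluate this measure before and after compensating the selected transform coefficients and keep the compensation that lowers it.

import numpy as np

def block_boundary_discontinuity(img, block=8):
    # Sum of squared pixel differences across all block boundaries,
    # i.e. the discontinuity measure described in the abstract above.
    img = img.astype(np.float64)
    h, w = img.shape
    d = 0.0
    for x in range(block, w, block):    # vertical block boundaries
        d += np.sum((img[:, x] - img[:, x - 1]) ** 2)
    for y in range(block, h, block):    # horizontal block boundaries
        d += np.sum((img[y, :] - img[y - 1, :]) ** 2)
    return d
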
Hsien-Chung Wei, Pao-Chin Tsai, and Jia-Shung Wang. Three-Sided Side Match Finite-State Vector Quantization
Abstract—Several low bit-rate still-image compression methods have been presented over the past two years, such as SPIHT, hybrid VQ, and the Wu–Chen method. In particular, the image “Lena” can be compressed using less than 0.15 bpp at 31.4 dB or higher. These methods apply analysis techniques (wavelet or subband) before distributing the bit rate to each piece of an image, so the trade-off between bit rate and distortion can be resolved. In this paper, we propose a simple but comparable method that adopts the technique of side match VQ only. Side match vector quantization (SMVQ) is an effective VQ coding scheme at low bit rates. The conventional (two-sided) side match VQ utilizes the codeword information of two neighboring blocks to predict the state codebook of an input vector. In this paper, we propose a hierarchical three-sided side match finite-state vector quantization (HTSMVQ) method that can: 1) make the state codebook size as small as possible; the size is reduced to one if the prediction is perfect, i.e., each input vector is encoded using its own state codebook; 2) improve the prediction quality for edge blocks; and 3) regularly refresh the codewords to alleviate the error propagation of side match. In the simulation results, the image “Lena” can be coded with a PSNR of 34.682 dB at 0.25 bpp. This is better than SPIHT, EZW, FSSQ, and hybrid VQ with 34.1, 33.17, 33.1, and 33.7 dB, respectively. At bit rates lower than 0.15 bpp, only the enhanced version of EZW performs better than our method, by about 0.14 dB.
RAR, 189 KB
?
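
For reference, the conventional two-sided side-match prediction that this paper extends to three sides can be sketched as follows; the names are illustrative, codebook is an (N, k, k) array of codewords, and the two neighbors are already-decoded blocks. The encoder then searches only the returned state codebook, so the transmitted index costs log2(state_size) bits instead of log2(N).

import numpy as np

def side_match_state_codebook(codebook, upper_blk, left_blk, state_size):
    # Rank master-codebook codewords by how well their first row matches the
    # bottom row of the upper neighbor and their first column matches the
    # right column of the left neighbor; keep the best ones as the state
    # codebook for the current block.
    top_err = np.sum((codebook[:, 0, :] - upper_blk[-1, :]) ** 2, axis=1)
    left_err = np.sum((codebook[:, :, 0] - left_blk[:, -1]) ** 2, axis=1)
    order = np.argsort(top_err + left_err)
    return codebook[order[:state_size]]
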
Ralph Neff and Avideh Zakhor. Modulus Quantization for Matching-Pursuit Video Coding
Abstract—Overcomplete signal decomposition using matching pursuits has been shown to be an efficient technique for coding motion-residual images in a hybrid video coder. Unlike orthogonal decomposition, matching pursuit uses an in-the-loop modulus quantizer which must be specified before coding begins. This complicates the quantizer design, since the optimal quantizer depends on the statistics of the matching-pursuit coefficients, which in turn depend on the in-loop quantizer actually used. In this paper, we address the modulus quantizer design issue, specifically developing frame-adaptive quantization schemes for the matching-pursuit video coder. Adaptive dead-zone subtraction is shown to reduce the information content of the modulus source, and a uniform threshold quantizer is shown to be optimal for the resulting source. Practical two-pass and one-pass algorithms are developed to jointly determine the quantizer parameters and the number of coded basis functions in order to minimize coding distortion for a given rate. The compromise one-pass scheme performs nearly as well as the full two-pass algorithm, but with the same complexity as a fixed-quantizer design. The adaptive schemes are shown to outperform the fixed quantizer used in earlier works, especially at high bit rates, where the gain is as high as 1.7 dB.
RAR, 2015 KB
?
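
As a rough illustration of the dead-zone-plus-uniform-quantizer structure described above (a sketch only: the parameter names are illustrative, and choosing and adapting the dead zone and step per frame is precisely what the paper addresses):

import numpy as np

def quantize_moduli(moduli, dead_zone, step):
    # Dead-zone subtraction followed by a uniform quantizer, applied to
    # matching-pursuit coefficient moduli; atoms whose modulus falls inside
    # the dead zone receive level 0 and are simply not coded.
    shifted = np.maximum(np.asarray(moduli, dtype=float) - dead_zone, 0.0)
    levels = np.floor(shifted / step + 0.5).astype(int)
    recon = np.where(levels > 0, dead_zone + levels * step, 0.0)
    return levels, recon
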
Jin Soo Choi, Yong Han Kim, Ho-Jang Lee, In-Sung Park, Myoung Ho Lee, and Chieteuk Ahn. Geometry Compression of 3-D Mesh Models Using Predictive Two-Stage Quantization
Abstract—In conventional predictive quantization schemes for 3-D mesh geometry, excessively large residuals or prediction errors, although occasional, lead to visually unacceptable geometric distortion. This is due to the fact that they cannot limit the maximum quantization error within a given bound. In order to completely eliminate the visually unacceptable distortion caused by large residuals, we propose a predictive two-stage quantization scheme. This scheme is very similar to conventional DPCM, except that the embedded quantizer is replaced by a series of two quantizers. Each quantizer output is further compressed by an arithmetic code. When applied to typical 3-D mesh models, the scheme performs much better than conventional predictive quantization methods and, depending on the input model, even better than the MPEG-4 compression method for 3-D mesh geometry, both in the rate-distortion sense and in subjective viewing.
RAR, 418 KB
?
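
The two-quantizer idea is easy to illustrate. In the sketch below (illustrative names, not the paper's exact quantizers; both index streams would then be entropy coded as the abstract describes), the second stage quantizes the error left by the first, so the overall error never exceeds half of the second-stage step, which is what removes the occasional large geometric distortion.

import numpy as np

def two_stage_quantize(residual, step1, step2):
    # Stage 1: coarse quantization of the DPCM prediction residual.
    q1 = np.round(residual / step1)
    e1 = residual - q1 * step1
    # Stage 2: fine quantization of the first-stage error, bounding the
    # final reconstruction error by step2 / 2.
    q2 = np.round(e1 / step2)
    recon = q1 * step1 + q2 * step2
    return q1.astype(int), q2.astype(int), recon
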
Guobin Shen and Ming L. Liou. An Efficient Codebook Post-Processing Technique and a Window-Based Fast-Search Algorithm for Image Vector Quantization
Abstract—Vector quantization is an efficient image-coding technique for achieving very low bit-rate compression. Furthermore, a lower bit rate can be achieved by equipping the vector quantizer with a memory unit or feedback loop so as to utilize the inter-vector correlation. For example, predictive vector quantization exploits the linear inter-vector correlation in the spatial domain by a linear vector prediction. Despite their better performance, such vector quantizers are usually much more complex. In this paper, we propose a simple but efficient codebook post-processing technique which enables the vector quantizer to possess a higher correlation-preservation property. As will be shown, the proposed post-processing technique leads to much higher inter-index correlation, or equivalently, smaller first-order (or higher order) entropy. Based on the special pattern of the codebook imposed by the post-processing technique, a window-based fast search (WBFS) algorithm is proposed. The WBFS algorithm not only accelerates the vector quantization processing, but also results in better rate-distortion performance.
RAR, 406 KB
?
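
The idea of index locality behind such a scheme can be sketched as follows. Sorting by codeword mean is only a stand-in for the paper's actual post-processing, and the search routine is a generic window-based fast search; the names and parameters are illustrative.

import numpy as np

def reorder_codebook(codebook):
    # Give numerically similar codewords nearby indices (here simply by
    # sorting on the codeword mean), so that neighboring image blocks tend
    # to produce close, highly correlated indices.
    flat = codebook.reshape(len(codebook), -1)
    return codebook[np.argsort(flat.mean(axis=1))]

def window_search(codebook, vec, predicted_idx, window):
    # Search only the indices within +/- window of a predicted index
    # (e.g. the previous block's index) instead of the whole codebook.
    lo = max(predicted_idx - window, 0)
    hi = min(predicted_idx + window + 1, len(codebook))
    cand = codebook[lo:hi].reshape(hi - lo, -1)
    dist = np.sum((cand - vec.ravel()) ** 2, axis=1)
    return lo + int(np.argmin(dist))
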
James E. Fowler. Adaptive Vector Quantization for Efficient Zerotree-Based Coding of Video with Nonstationary Statistics
Abstract—A new system for intraframe coding of video is described. This system combines zerotrees of vectors of wavelet coefficients and the generalized-threshold-replenishment (GTR) technique for adaptive vector quantization (AVQ). A data structure, the vector zerotree (VZT), is introduced to identify trees of insignificant vectors, i.e., those vectors of wavelet coefficients in a dyadic subband decomposition that are to be coded as zero. GTR coders are then applied to each subband to efficiently code the significant vectors by way of adapting to their changing statistics. Both VZT generation and GTR coding are based upon minimization of criteria involving both rate and distortion. In addition, perceptual performance is improved by invoking simple, perceptually motivated weighting in both the VZT and the GTR coders. Our experimental findings indicate that the described VZT-GTR system handles dramatic changes in image statistics, such as those due to a scene change, more efficiently than wavelet-based techniques employing nonadaptive scalar quantizers.
RAR, 398 KB
?
Seung-Kwon Paek and Lee-Sup Kim. A Real-Time Wavelet Vector Quantization Algorithm and Its VLSI Architecture
Abstract—In this paper, a real-time wavelet image compression algorithm using vector quantization and its VLSI architecture are proposed. The proposed zerotree wavelet vector quantization (WVQ) algorithm focuses on the problem of how to reduce the computation time needed to encode wavelet images with high coding efficiency. Conventional wavelet image-compression algorithms exploit the tree structure of wavelet coefficients coupled with scalar quantization. However, they cannot provide real-time computation because they use iterative methods to decide zerotrees. In contrast, the zerotree WVQ algorithm predicts zero-vector trees of insignificant wavelet vectors in real time by a noniterative decision rule and then encodes significant wavelet vectors by classified VQ. As a result, the zerotree WVQ algorithm provides the best compromise between coding performance and computation time. The noniterative decision rule was extracted from simulation results, which are based on the statistical characteristics of wavelet images. Moreover, the zerotree WVQ exploits multistage VQ to encode the lowest frequency subband, which is generally known to be robust to wireless channel errors. The proposed WVQ VLSI architecture has only one VQ module to execute the proposed zerotree WVQ algorithm in real time by utilizing the vacant cycles for zero-vector trees, which are not transmitted. The VQ module has only one more processing element (PE) than the codebook size for the real-time minimum-distance calculation: one PE per codeword performs the Euclidean distance calculation, and the additional PE performs the parallel distance comparison. Compared with conventional architectures, the proposed VLSI architecture is very cost-effective hardware (H/W) for computing the zerotree WVQ algorithm in real time. Therefore, the zerotree WVQ algorithm and its VLSI architecture are well suited to wireless image communication, because they provide high coding efficiency, real-time computation, and cost-effective H/W. Image-compression techniques robust to transmission channel errors are essential to wireless image communication, because wireless communication channels suffer from burst errors in which a large number of consecutive bits are lost or corrupted by the channel-fading effect. The conventional image-coding standards are very susceptible to transmission errors, and hence they need powerful error-correction codes. Therefore, it is desirable to design a robust image-coding technique which has a high compression ratio and produces acceptable image quality over a fading channel. Finally, we should consider image compression algorithms and their VLSI architectures which allow portable decoders with small size, low power consumption, and acceptable reconstructed image quality.
RAR, 694 KB
?
Hugh Q. Cao and Weiping Li. A Fast Search Algorithm for Vector Quantization Using a Directed Graph
Abstract—A fast search algorithm for vector quantization (VQ) is presented in this letter. This approach provides a practical solution to the implementation of a multilevel search based on a specially designed directed graph (DG). An algorithm is also given to find the optimal DG for any given practical source. Simulation results from applying this approach to still images have shown that it can reduce the search complexity to 3% of that of exhaustive-search vector quantization (ESVQ) while introducing only negligible search error. It has also been shown that the search complexity grows nearly linearly with the bit rate, rather than exponentially as in ESVQ.
RAR, 426 KB
?
Woontack Woo and Antonio Ortega. Optimal Blockwise Dependent Quantization for Stereo Image Coding
Abstract—Research in coding of stereo images has focused mostly on the issue of disparity estimation to exploit the redundancy between the two images in a stereo pair, with less attention being devoted to the equally important problem of allocating bits between the two images. This bit-allocation problem is complicated by the dependencies arising from using a prediction based on the quantized reference images. In this paper, we address the problem of blockwise bit allocation for coding of stereo images and show how, given the special characteristics of the disparity field, one can achieve an optimal solution with reasonable complexity, whereas in similar problems in motion-compensated video only approximate solutions are feasible. We present algorithms based on dynamic programming that provide the optimal blockwise bit allocation. Our experiments based on a modified JPEG coder show that the proposed scheme achieves a higher mean peak signal-to-noise ratio over the two frames (0.2–0.5 dB improvement) compared with blockwise independent quantization. We also propose a fast algorithm that provides most of the gain at a fraction of the complexity.
RAR, 246 KB
?
Nam Ik Cho, Heesub Lee, and Sang Uk Lee. An Adaptive Quantization Algorithm for Video Coding
Abstract—This paper proposes an adaptive quantization algorithm for video coding that uses information obtained from the previously encoded image. Before quantizing the discrete cosine transform coefficients, the reconstruction error of each macroblock (MB) is estimated from the previous frame. For the estimation of the error of the current MB, a block of MB size in the previous frame is chosen. Since the original and reconstructed images of the previous frame are available in the encoder, we can evaluate the reconstruction error of this block in advance. This error is then taken as the expected error of the current MB if it is quantized with the same step size and bit rate. The error of each MB is compared with the average over all MBs: if it is larger than the average, a smaller step size is assigned to this MB, and vice versa. As a result, the error distribution over the MBs is more concentrated around the average, yielding lower variance and improved image quality. Especially for low bit-rate applications, the proposed algorithm yields much smaller error variance and higher peak signal-to-noise ratio compared with the conventional TM5. We also propose a modified algorithm for efficient hardware implementation.
RAR, 328 KB
?
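
A minimal sketch of that feedback loop is given below; the names are illustrative and the mapping from relative error to step size is a stand-in for the paper's rule, but it shows how the previous frame's reconstruction error can drive per-macroblock step sizes.

import numpy as np

def adapt_mb_qstep(prev_orig, prev_recon, base_qstep, mb=16, gain=0.5):
    # Per-macroblock squared reconstruction error of the previous frame
    # (frame dimensions are assumed to be multiples of the MB size).
    err = (prev_orig.astype(np.float64) - prev_recon.astype(np.float64)) ** 2
    h, w = err.shape
    mb_err = err.reshape(h // mb, mb, w // mb, mb).sum(axis=(1, 3))
    ratio = mb_err / (mb_err.mean() + 1e-12)
    # MBs whose expected error exceeds the frame average get a smaller step
    # (finer quantization), and vice versa.
    return base_qstep / (1.0 + gain * (ratio - 1.0))
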
Hyun Wook Park and Yung Lyul Lee. A Postprocessing Method for Reducing Quantization Effects in Low Bit-Rate Moving Picture Coding
Abstract—Images reconstructed from highly compressed MPEG data show noticeable degradations, such as blocking artifacts near block boundaries, corner outliers at cross-points of blocks, and ringing noise near image edges, because MPEG quantizes the transformed coefficients of 8 × 8 pixel blocks. A postprocessing algorithm is proposed to reduce these quantization effects (blocking artifacts, corner outliers, and ringing noise) in MPEG-decompressed images. The proposed postprocessing algorithm reduces the quantization effects adaptively by using both spatial frequency and temporal information extracted from the compressed data. The blocking artifacts are reduced by one-dimensional (1-D) horizontal and vertical low-pass filtering (LPF), and the ringing noise is reduced by two-dimensional (2-D) signal-adaptive filtering (SAF). A comparison of peak signal-to-noise ratio (PSNR) and an analysis of computational complexity between the proposed algorithm and the MPEG-4 VM (verification model) postprocessing algorithm are performed by computer simulation with several image sequences. According to this comparison, the proposed algorithm shows better performance than the VM postprocessing algorithm, while the subjective image qualities of both algorithms are similar.
RAR, 539 KB
?
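
For the blocking-artifact part, the basic 1-D boundary filtering looks roughly like the sketch below. It is a non-adaptive stand-in: the paper switches the filtering on and off per boundary using spatial-frequency and temporal information, and treats ringing noise with a separate 2-D signal-adaptive filter.

import numpy as np

def deblock_1d(img, block=8, k=(0.25, 0.5, 0.25)):
    # Three-tap low-pass filtering applied only to the two pixel columns
    # (and rows) on either side of every block boundary.
    src = img.astype(np.float64)
    out = src.copy()
    a, b, c = k
    h, w = src.shape
    for x in range(block, w, block):        # vertical boundaries
        for col in (x - 1, x):
            right = min(col + 1, w - 1)
            out[:, col] = a * src[:, col - 1] + b * src[:, col] + c * src[:, right]
    for y in range(block, h, block):        # horizontal boundaries
        for row in (y - 1, y):
            below = min(row + 1, h - 1)
            out[row, :] = a * src[row - 1, :] + b * src[row, :] + c * src[below, :]
    return out
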
Chun Wang, Hugh Q. Cao, Weiping Li, and Kenneth K. Tzeng. Lattice Labeling Algorithms for Vector Quantization
Abstract—Labeling algorithms for Construction-A and Construction-B lattices with respect to pyramid boundaries are presented. The algorithms are developed based on relations between lattices and linear block codes, as well as on transformations among several specifically defined lattices and their translations. The mechanism for the construction of these algorithms can be considered an extension of that given by Fischer. The algorithms achieve 100% efficiency in utilizing index bits for binary representations. Furthermore, many important lattices (E8, Λ16, ...) can be indexed to arbitrary norms and dimensions. The complexity of these algorithms in terms of both memory and computation is quite low, and thus it is possible to develop practical lattice vector quantizers of large norms and high dimensions using these algorithms.
RAR, 781 KB
?
R. Chandramouli, N. Ranganathan, and Shivaraman J. Ramadoss. Adaptive Quantization and Fast Error-Resilient Entropy Coding for Image Transmission
Abstract—Recently, there has been an outburst of research in image and video compression for transmission over noisy channels, and channel-matched source quantizer design has gained prominence. Further, the presence of variable-length codes in compression standards like JPEG and MPEG has made the problem more interesting. Error-resilient entropy coding (EREC) has emerged as a new and effective method to combat catastrophic loss in the received signal due to burst and random errors. In this paper, we propose a new channel-matched adaptive quantizer for JPEG image compression. A slow, frequency-nonselective Rayleigh fading channel model is assumed. The optimal quantizer that matches the human visibility threshold and the channel bit-error rate is derived. Further, a new fast error-resilient entropy code (FEREC) that exploits the statistics of the JPEG-compressed data is proposed. The proposed FEREC algorithm is shown to be almost twice as fast as EREC in encoding the data, and its error resilience is also observed to be significantly better. On average, a 5% decrease in the number of significantly corrupted received image blocks is observed with FEREC. Up to a 2-dB improvement in the peak signal-to-noise ratio of the received image is also achieved.
RAR, 315 KB
?
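
EREC itself is worth recalling here: variable-length block bitstreams are packed into equal-length slots in stages, so that a decoder always knows where each block starts even after channel errors. The sketch below shows generic EREC-style packing, not the paper's FEREC variant (which speeds up the slot search using the statistics of JPEG data); it assumes the total number of bits fits into the slots.

def erec_pack(block_bits, slot_len):
    # Stage 0: each block fills its own slot; later stages place leftover
    # bits into the free space of other slots along a fixed offset sequence.
    n = len(block_bits)
    slots = [[] for _ in range(n)]
    leftover = [list(bits) for bits in block_bits]
    for k in range(n):                 # stage k searches slot (i + k) mod n
        for i in range(n):
            j = (i + k) % n
            free = slot_len - len(slots[j])
            if free > 0 and leftover[i]:
                slots[j].extend(leftover[i][:free])
                del leftover[i][:free]
    return slots
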
Syed A. Rizvi and Nasser M. Nasrabadi. Finite-State Residual Vector Quantization Using a Tree-Structured Competitive Neural Network
Abstract—Finite-state vector quantization (FSVQ) is known to give better performance than memoryless vector quantization (VQ). This paper presents a new FSVQ scheme, called finite-state residual vector quantization (FSRVQ), in which each state uses a residual vector quantizer (RVQ) to encode the input vector. This scheme differs from conventional FSVQ in that the state-RVQ codebooks encode the residual vectors instead of the original vectors. A neural network predictor estimates the current block based on the four previously encoded blocks. The predicted vector is then used to identify the current state as well as to generate a residual vector (the difference between the current vector and the predicted vector). This residual vector is encoded using the current state-RVQ codebooks. A major task in designing our proposed FSRVQ is the joint optimization of the next-state codebook and the state-RVQ codebooks. This is achieved by introducing a novel tree-structured competitive neural network in which the first layer implements the next-state function, and each branch of the tree implements the corresponding state-RVQ. A joint training algorithm is also developed that mutually optimizes the next-state and the state-RVQ codebooks for the proposed FSRVQ. Joint optimization of the next-state function and the state-RVQ codebooks eliminates a large number of redundant states in the conventional FSVQ design; consequently, the memory requirements are substantially reduced in the proposed FSRVQ scheme. The proposed FSRVQ can be designed for high bit rates due to its very low memory requirements and the low search complexity of the state-RVQs. Simulation results show that the proposed FSRVQ scheme outperforms conventional FSVQ schemes both in terms of memory requirements and the visual quality of the reconstructed image. The proposed FSRVQ scheme also outperforms JPEG (the current standard for still image compression) at low bit rates.
RAR, 1822 KB
?
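
The residual-VQ building block used inside each state can be sketched independently of the finite-state machinery and the neural-network predictor. In the sketch below (illustrative names; stage_codebooks is a list of (N_i, dim) arrays), each stage quantizes the residual left by the previous stages, and the reconstruction is the sum of the selected codewords.

import numpy as np

def rvq_encode(vec, stage_codebooks):
    residual = np.asarray(vec, dtype=np.float64).copy()
    recon = np.zeros_like(residual)
    indices = []
    for cb in stage_codebooks:
        # Nearest codeword to the current residual in this stage.
        d = np.sum((cb - residual) ** 2, axis=1)
        idx = int(np.argmin(d))
        indices.append(idx)
        recon += cb[idx]
        residual -= cb[idx]
    return indices, recon
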
Tung-Shou Chen and Chin-Chen Chang. Diagonal Axes Method (DAM): A Fast Search Algorithm for Vector Quantization
Abstract—Vector quantization (VQ) is a fundamental technique for image compression. But it takes time to search for a similar codeword in a codebook. Thus, the codebook search is one of the major bottlenecks in VQ. In this paper, we propose a new search algorithm which is used to speed up both the codebook generation and the encoding. We call it the diagonal axes method (DAM). This new algorithm contains two major techniques: diagonal axes analysis (DAA) and orthogonal checking (OC). Since most of these procedures simply involve additions and subtractions, DAM is more efficient than some other related algorithms. Simulation results confirm this effectiveness.
RAR, 223 KB
?
Jiebo Luo, Chang Wen Chen, Kevin J. Parker, and Thomas S. Huang. A Scene Adaptive and Signal Adaptive Quantization for Subband Image and Video Compression Using Wavelets
Abstract—The discrete wavelet transform (DWT) provides an advantageous framework of multiresolution space-frequency representation with promising applications in image processing. The challenge, as well as the opportunity, in wavelet-based compression is to exploit the characteristics of the subband coefficients with respect to both spectral and spatial localities. A common problem with many existing quantization methods is that the inherent image structures are severely distorted by coarse quantization. Observation shows that subband coefficients with the same magnitude generally do not have the same perceptual importance; this depends on whether or not they belong to clustered scene structures. We propose in this paper a novel scene-adaptive and signal-adaptive quantization scheme capable of exploiting both the spectral and spatial localization properties resulting from the wavelet transform. The proposed quantization is implemented as a maximum a posteriori probability (MAP) estimation-based clustering process in which subband coefficients are quantized to their cluster means, subject to local spatial constraints. The intensity distribution of each cluster within a subband is modeled by an optimal Laplacian source to achieve signal adaptivity, while spatial constraints are enforced by appropriate Gibbs random fields (GRF) to achieve scene adaptivity. Consequently, with spatially isolated coefficients removed and clustered coefficients retained at the same time, the available bits are allocated to visually important scene structures so that the information loss is least perceptible. Furthermore, the reconstruction noise in the decompressed image can be suppressed using another GRF-based enhancement algorithm. Experimental results have shown the potential of this quantization scheme for low bit-rate image and video compression.
RAR, 998 KB
?



See also related materials:
- On color spaces
- On JPEG
- On JPEG-2000


Prepared by Sergey Grishin and Dmitry Vatolin