Computational Toolsmiths
Software Tools for Computational Science, Engineering, and Medicine
Previous simulation experiments comparing wavelet shrinkage denoising methods have failed to demonstrate significant differences between methods. Such differences have never been clearly demonstrated because earlier comparisons were either qualitative, or quantitative but marred by insufficient sample sizes and absent confidence intervals for the figures of merit investigated.
In particular, previous studies have used non-robust measures as figures of merit for fixed signal classes defined by adding instances of noise to the same instance of the fixed test signal. New simulation experiments are reported here that instead use robust measures for randomized signal classes defined by adding instances of noise to different instances of randomized test signals.
Significantly greater variability in the performance of the denoising methods was observed when comparing results obtained with randomized rather than fixed signal classes. However, the use of robust measures does facilitate statistically valid comparisons with respect to this variability. Indeed, the use of non-robust or of non-randomized signal classes can result in misleading inferences from invalid comparisons. Thus, the combined use of both should yield more realistic and meaningful simulation results that better represent the real-world context intended for applied use of the denoising methods.
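The contrast between fixed and randomized signal classes, and between non-robust and robust figures of merit, can be sketched in a small Monte Carlo experiment. Everything below is illustrative rather than the paper's actual protocol: the single-level Haar transform, the randomized-sinusoid signal class, the universal threshold, and the soft/hard comparison are stand-ins, and the RMSE summaries use robust statistics (median and interquartile range) in the spirit the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_forward(x):
    # single-level orthonormal Haar analysis; length must be even
    s = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return s, d

def haar_inverse(s, d):
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

def denoise(x, thresh, mode):
    s, d = haar_forward(x)
    if mode == "soft":
        d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    else:  # hard thresholding
        d = d * (np.abs(d) > thresh)
    return haar_inverse(s, d)

n, trials, sigma = 1024, 200, 0.5
results = {"soft": [], "hard": []}
for _ in range(trials):
    # randomized signal class: random-frequency, random-phase sinusoid
    f, phi = rng.uniform(2, 10), rng.uniform(0, 2 * np.pi)
    t = np.linspace(0, 1, n, endpoint=False)
    clean = np.sin(2 * np.pi * f * t + phi)
    noisy = clean + sigma * rng.standard_normal(n)
    thresh = sigma * np.sqrt(2 * np.log(n))  # universal threshold
    for mode in results:
        err = denoise(noisy, thresh, mode) - clean
        results[mode].append(np.sqrt(np.mean(err ** 2)))

for mode, errs in results.items():
    errs = np.asarray(errs)
    q1, med, q3 = np.percentile(errs, [25, 50, 75])
    print(f"{mode}: median RMSE {med:.3f}, IQR {q3 - q1:.3f}")
```

Replacing the randomized `f` and `phi` with fixed constants recovers a fixed signal class, and replacing median/IQR with mean/variance recovers the non-robust summaries the abstract argues against.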
Empirical tests have been developed for evaluating the numerical properties of multirate M-band filter banks represented as N x M matrices of filter coefficients. Each test returns a numerically observed estimate of a 1 x M vector parameter in which the mth element corresponds to the mth filter band. These vector valued parameters can be readily converted to scalar valued parameters for comparison of filter bank performance or optimization of filter bank design. However, they are intended primarily for the characterization and verification of filter banks. By characterizing the numerical performance of analytic or algorithmic designs, these tests facilitate the experimental verification of theoretical specifications.
Tests are introduced, defined, and demonstrated for M-shift biorthogonality and orthogonality errors, M-band reconstruction error and delay, frequency domain selectivity, time frequency uncertainty, time domain regularity and moments, and vanishing moment numbers. These tests constitute the verification component of the first stage of the hierarchical three stage framework (with filter bank coefficients, single-level convolutions, and multi-level transforms) for specification and verification of the reproducibility of wavelet transform algorithms.
Filter banks tested as examples included a variety of real and complex orthogonal, biorthogonal, and nonorthogonal M-band systems with M >= 2. Coefficients for these filter banks were either generated by computational algorithms or obtained from published tables. Analysis of these examples from the published literature revealed previously undetected errors of three different kinds, which have been called transmission, implementation, and interpretation errors. The detection of these mistakes demonstrates the importance of the evaluation methodology in revealing past and preventing future discrepancies between observed and expected results, and thus, in ensuring the validity and reproducibility of results and conclusions based on those results.
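The flavor of these empirical tests can be illustrated with a minimal sketch of an M-shift orthogonality error check for an N x M coefficient matrix. The function name and interface here are hypothetical, not the paper's; the idea is simply that inner products of each filter with M-sample shifts of every filter should match the identity, and the maximum deviation is reported as the error.

```python
import numpy as np

def mshift_orthogonality_error(H, M):
    """Hypothetical sketch of an M-shift orthogonality test: the maximum
    deviation of <h_i, shift_{kM} h_j> from delta_{ij} delta_{k0}, for an
    N x M matrix H holding one filter per column."""
    N = H.shape[0]
    err = 0.0
    for i in range(M):
        for j in range(M):
            for k in range(-(N // M), N // M + 1):
                # inner product of h_i with h_j delayed by k*M samples
                ip = sum(H[n, i] * H[n - k * M, j]
                         for n in range(N)
                         if 0 <= n - k * M < N)
                expected = 1.0 if (i == j and k == 0) else 0.0
                err = max(err, abs(ip - expected))
    return err

# Haar filter bank (M = 2) is orthogonal: error is near machine epsilon
haar = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(mshift_orthogonality_error(haar, 2))
```

A nonorthogonal bank (e.g. unnormalized spline filters) returns a large error, which is how a transmission or implementation mistake in a published coefficient table would surface.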
A unifying algorithm has been developed to systematize the collection of compact Daubechies wavelets computable by spectral factorization of a symmetric positive polynomial. This collection comprises all classes of real and complex orthogonal and biorthogonal wavelet filters with maximal flatness for their minimal length. The main algorithm incorporates spectral factorization of the Daubechies product filter into analysis and synthesis filters. The spectral factors are found for search-optimized families by examining a desired criterion over combinatorial subsets of roots indexed by binary codes, and for constraint-selected families by imposing sufficient constraints on the roots without any optimizing search for an extremal property. Daubechies wavelet filter families have been systematized to include those constraint-selected by the principle of separably disjoint roots, and those search-optimized for time-domain regularity, frequency-domain selectivity, time-frequency uncertainty, and phase nonlinearity. The latter criterion permits construction of the least and most asymmetric and least and most symmetric real and complex orthogonal filters. Biorthogonal symmetric spline and balanced-length filters with linear phase are also computable by these methods. This systematized collection has been developed in the context of a general framework enabling evaluation of the equivalence of constraint-selected and search-optimized families with respect to the filter coefficients and roots and their characteristics. Some of the constraint-selected families have been demonstrated to be equivalent to some of the search-optimized families, thereby obviating the necessity for any search in their computation.
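A minimal sketch of the spectral-factorization step is shown below for the classical minimum-phase (extremal phase) orthogonal case: the Daubechies binomial polynomial is rooted, each root in y is mapped to a reciprocal pair in z, and the factor inside the unit circle is kept. Keeping all roots inside the circle is just one of the many root selections the unified algorithm indexes by binary codes; the function name and normalization are illustrative assumptions.

```python
import numpy as np
from math import comb

def daubechies_minphase(K):
    """Sketch: minimum-phase Daubechies scaling filter with K vanishing
    moments via spectral factorization of the product filter."""
    # Q(y) = sum_k C(K-1+k, k) y^k  with  y = (1 - cos w)/2
    q = np.array([comb(K - 1 + k, k) for k in range(K)], dtype=float)
    y_roots = np.roots(q[::-1]) if K > 1 else np.array([])
    z_roots = []
    for y in y_roots:
        # y = (2 - z - 1/z)/4  =>  z^2 - (2 - 4y) z + 1 = 0
        b = 2 - 4 * y
        disc = np.sqrt(b * b - 4 + 0j)
        for z in ((b + disc) / 2, (b - disc) / 2):
            if abs(z) < 1:          # keep the root inside the unit circle
                z_roots.append(z)
    # K roots at z = -1 provide the K vanishing moments
    roots = np.concatenate([-np.ones(K), np.array(z_roots)])
    h = np.real(np.poly(roots))
    return h * np.sqrt(2) / h.sum()   # normalize so sum(h) = sqrt(2)

h = daubechies_minphase(2)  # reproduces the classic D4 coefficients
print(h)
```

Choosing other combinatorial subsets of the reciprocal root pairs (rather than always the interior root) yields the other families in the collection, e.g. least-asymmetric or complex factors.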
Principles of wavelet shrinkage denoising are reviewed. Both 1-D and 2-D examples are demonstrated. The performance of various ideal and practical Fourier and wavelet based denoising procedures is evaluated and compared in a new Monte Carlo simulation experiment. Finally, recommendations for the practitioner are discussed.
Previous simulation experiments for the comparison of wavelet shrinkage denoising methods have used fixed signal classes defined by adding instances of noise to a single test signal. New simulation experiments are reported here with randomized signal classes defined by adding instances of noise to instances of randomized test signals. As expected, significantly greater variability in the performance of the denoising methods was observed. Statistically valid comparisons must be conducted with respect to this variability. Use of randomized, rather than fixed, signal classes should yield more realistic and meaningful results.
A novel compression algorithm incorporating the near-best wavelet packet transform is introduced for multi-modal signals in low bitrate telemonitoring applications. The method permits flexible control of the total bitrate subject to the constraints of the channel, and the allocation of the total bitrate to each of the constituent modes subject to the quality preferences of the observer. Its performance is demonstrated using polysomnograms.
Near-best and best wavelet packet transforms were compared with wavelet transforms for the compression of polysomnograms in a low bitrate telemonitoring application. Near-best wavelet packet transforms provided better overall performance with respect to efficiency of computation and compression, and amenability for use in more sophisticated quality-on-demand algorithms.
A new set of wavelet filter families has been added to the systematized collection of Daubechies wavelets. This new set includes complex and real, orthogonal and biorthogonal, least and most disjoint families defined using constraints derived from the principle of separably disjoint root sets in the complex z domain. All of the new families are considered to be constraint selected without a search and without any evaluation of filter properties such as time-domain regularity and frequency-domain selectivity. In contrast, the older families in the collection are considered to be search optimized for extremal properties. Some of the new families are demonstrated to be equivalent to some of the older families, thereby obviating the necessity for any search in their computation. A library that displays images of all filter families in the collection is available at FirWav Filter Library.
Image sequences from functional neuroimaging experiments with 3-D magnetic resonance scans of the human brain were wavelet transformed using a variety of orthogonal and biorthogonal wavelet filters with different treatments of the image borders. Contrary to the expectation that higher-order wavelets with more sophisticated boundary treatments would yield better compression, simple Haar wavelets without any boundary treatment provided the best compression throughout the rate-distortion performance curve.
A single unifying algorithm has been developed to systematize the collection of compact Daubechies wavelets. This collection comprises all classes of real and complex orthogonal and biorthogonal wavelets with the maximal number K of vanishing moments for their finite length. Named and indexed families of wavelet filters were generated by spectral factorization of a product filter in which the optimal subset of roots was selected by a defining criterion within a combinatorial search of subsets meeting required constraints. Several new families have been defined, some of which were demonstrated to be equivalent to families with roots selected solely by geometric criteria that do not require an optimizing search. Extensive experimental results are tabulated for 1 <= K <= 24 for each of the families and most of the filter characteristics defined in both time and frequency domains. For those families requiring optimization, a conjecture for K > 24 is provided for a search pattern that reduces the order of the computational complexity but permits attainment of the desired optimum.
Evaluation of the Sherlock-Monro algorithm orthogen for generating the space of real orthonormal wavelets of length N = 2J from J angle parameters revealed that it was valid only for 1 <= J <= 3. A correction is provided and validated for all J >= 1 with tests for orthonormality and perfect reconstruction. Some additional comments are offered discussing wavelet filter design algorithms based on angular parameterizations and those based on spectral factorizations.
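The flavor of such angle parameterizations can be sketched with the standard two-channel paraunitary lattice, a cousin of (not a reconstruction of) the corrected Sherlock-Monro recursion: any J angles produce an orthonormal filter pair of length 2J, so orthonormality and perfect reconstruction hold by construction for all J >= 1. The polyphase matrix is built as E(z) = R(th_J) D(z) ... D(z) R(th_1) with plane rotations R and the delay D(z) = diag(1, z^-1).

```python
import numpy as np

def lattice_filters(thetas):
    """Sketch: orthonormal length-2J filter pair from J angles via the
    two-channel lattice.  Polyphase entries are kept as coefficient
    arrays in powers of z^-1, lowest order first."""
    c, s = np.cos(thetas[0]), np.sin(thetas[0])
    E = [[np.array([c]), np.array([s])],
         [np.array([-s]), np.array([c])]]
    for th in thetas[1:]:
        # delay the second row (multiply by z^-1), pad the first to match
        E[1][0] = np.concatenate([[0.0], E[1][0]])
        E[1][1] = np.concatenate([[0.0], E[1][1]])
        E[0][0] = np.concatenate([E[0][0], [0.0]])
        E[0][1] = np.concatenate([E[0][1], [0.0]])
        c, s = np.cos(th), np.sin(th)
        row0 = [c * E[0][0] + s * E[1][0], c * E[0][1] + s * E[1][1]]
        row1 = [-s * E[0][0] + c * E[1][0], -s * E[0][1] + c * E[1][1]]
        E = [row0, row1]
    J = len(thetas)
    # interleave polyphase components: H(z) = H0(z^2) + z^-1 H1(z^2)
    h = np.empty(2 * J); g = np.empty(2 * J)
    h[0::2], h[1::2] = E[0][0], E[0][1]
    g[0::2], g[1::2] = E[1][0], E[1][1]
    return h, g
```

Because every factor in the product is paraunitary, the resulting filters pass orthonormality tests for arbitrary angle vectors, which is the property the corrected algorithm must preserve for all J.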
A spectral-factorization combinatorial-search algorithm has been developed for unifying a systematized collection of Daubechies minimum length maximum flatness wavelet filters optimized for a diverse assortment of criteria. This systematized collection comprises real and complex orthogonal and biorthogonal wavelets in families within which the filters are indexed by the number of roots at z = -1. The main algorithm incorporates spectral factorization of the Daubechies polynomial with a combinatorial search of spectral factor root sets indexed by binary codes. The selected spectral factors are found by optimizing the desired criterion characterizing either the filter roots or coefficients. Daubechies wavelet filter families have been systematized to include those optimized for time domain regularity, frequency domain selectivity, time frequency uncertainty, and phase nonlinearity. The latter criterion permits construction of the orthogonal least and most asymmetric real and least and most symmetric complex filters. Biorthogonal symmetric spline and balanced length filters with linear phase are also computable by these methods.
Numerical methods are described for evaluating the time-domain moments and regularity of multirate filter banks. Estimates of the Hölder regularity are computed for the continuous functions obtained from the iterated discrete filters. Estimates of the centered moments are also computed for both the discrete filters and continuous functions. These estimates are then used to obtain the vanishing moment numbers. None of the methods require any preprocessing of the filters or a priori information about them. Thus, the methods can serve as tests for the evaluation of arbitrary filter banks. Results are presented for various examples.
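The discrete-moment part of such a test is simple to sketch. Both function names below are hypothetical; the D4 highpass filter is a standard example whose two vanishing moments the numerically observed count should recover, with the moment m_p = sum_n n^p g[n] tested against a tolerance.

```python
import numpy as np

def discrete_moments(g, pmax):
    """Discrete moments m_p = sum_n n^p g[n] for p = 0..pmax."""
    n = np.arange(len(g), dtype=float)
    return np.array([np.sum(n ** p * g) for p in range(pmax + 1)])

def vanishing_moment_number(g, tol=1e-4):
    """Numerically observed vanishing moment count: the number of
    consecutive moments, starting at p = 0, below the tolerance."""
    count = 0
    for mp in discrete_moments(g, len(g)):
        if abs(mp) < tol:
            count += 1
        else:
            break
    return count

# D4 lowpass filter and its quadrature-mirror highpass g[n] = (-1)^n h[N-1-n]
h = np.array([0.48296291314453416, 0.8365163037378079,
              0.22414386804201339, -0.12940952255126037])
g = h[::-1] * np.array([1, -1, 1, -1])
print(vanishing_moment_number(g))
```

Running the same count in double precision on longer filters exhibits the tolerance-limited behavior discussed elsewhere in these abstracts: beyond a certain order, roundoff in the large-magnitude moment sums masks further exact cancellations.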
Specifications for reproducibility standards are developed for wavelet transform algorithms. Reproducibility of an algorithm is defined as the requirement that the specified algorithm yield the same transform coefficients for the same signal data regardless of implementation in any programming language or on any computing machine. The specification is built with three hierarchical stages consisting of 1) filter bank coefficients, 2) single-level convolutions, and 3) multi-level transforms. Each stage is specified with all the necessary choices, parameters, and operators required to ensure reproducibility. Each stage can be further characterized by additional properties that provide relevant information. The specification has been designed to be a sufficiently general and flexible framework that encompasses many different convolution types addressing the issues of both the phase shift and the boundary treatment. New convolution phase shift variants are introduced. In particular, a peak near-aligned phase variant is demonstrated on test signals and applied to fast wavelet based matrix multiplication. Finally, in the context of computational science and engineering, the concept of scientific reproducibility of an algorithm is discussed and contrasted with two other concepts introduced as repetitive executability and input-output repeatability.
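Why the phase shift must be pinned down can be made concrete with a toy single-band analysis convolution using circular boundary treatment. The `phase` parameter below is an illustrative device, not the specification's notation: two implementations that differ only in this choice produce cyclic rotations of each other's coefficient sequences, i.e. different outputs for the same signal and filter.

```python
import numpy as np

def circular_downsampled_conv(x, h, phase=0):
    """One analysis band with circular boundary treatment and an explicit
    phase choice: y[k] = sum_n h[n] * x[(2k + phase - n) mod N]."""
    N = len(x)
    y = np.empty(N // 2)
    for k in range(N // 2):
        y[k] = sum(h[n] * x[(2 * k + phase - n) % N] for n in range(len(h)))
    return y

rng = np.random.default_rng(1)
x = rng.standard_normal(8)
h = np.array([0.2, 0.5, 0.5, 0.2])
y0 = circular_downsampled_conv(x, h, phase=0)
y2 = circular_downsampled_conv(x, h, phase=2)
# shifting the phase by one downsampling stride rotates the output cyclically
```

An unreported `phase` is therefore enough to make two otherwise identical implementations disagree coefficient-by-coefficient while both still achieve perfect reconstruction.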
Computational algorithms have been developed for generating minimum length maximum flatness finite impulse response (FIR) filter coefficients for a systematized collection of wavelet filters designed by spectral factorization of a product filter. Both Lagrange and Daubechies polynomials have been studied numerically as alternative constructions for the required product filter which must be a halfband autocorrelation filter.
The systematized collection obtained from the product filter comprises real and complex orthogonal and biorthogonal wavelets in families defined by optimization criteria for various filter parameters. The main algorithm incorporates spectral factorization of the Daubechies polynomial with a combinatorial search of spectral factor root sets indexed by binary codes. The selected spectral factors are found by optimizing the desired criterion characterizing either the filter roots or coefficients.
Polynomial roots for the spectral factors are computed by a composite conformal mapping with affine and inverse Joukowski transformations. Filter coefficients are computed from the roots by iterative convolution of linear root factors previously sorted in increasing absolute value order. Experiments with higher order root factors and other root sort orders, such as the Edrei-Leja order, revealed no significant benefit to these alternatives.
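The coefficient-generation step just described can be sketched directly: sort the roots by increasing absolute value, then iteratively convolve the linear factors. The function name is hypothetical; the sort order is the design choice the text identifies as numerically adequate relative to alternatives such as the Edrei-Leja order.

```python
import numpy as np

def coefficients_from_roots(roots):
    """Sketch: filter coefficients by iterative convolution of linear
    factors (z - r), with roots pre-sorted in increasing absolute value
    to improve the conditioning of the repeated convolutions."""
    roots = sorted(roots, key=abs)
    h = np.array([1.0 + 0j])
    for r in roots:
        h = np.convolve(h, np.array([1.0, -r]))  # multiply in (z - r)
    return h

# example: rebuild a polynomial from its roots (descending-power order)
h = coefficients_from_roots([0.5, -1.0, -1.0, 2.0])
```

For real filters the complex roots arrive in conjugate pairs, so the imaginary parts of the accumulated coefficients vanish to roundoff and can be discarded at the end.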
Daubechies wavelet filter families have been systematized to include those optimized for time-domain regularity, frequency-domain selectivity, time-frequency uncertainty, and phase nonlinearity. The latter criterion permits construction of the least and most asymmetric and least and most symmetric real and complex orthogonal filters. Biorthogonal symmetric spline and balanced length filters are also computable by these methods.
All families have been indexed by the number K of roots at z = -1 corresponding to the number of vanishing moments on the wavelets. All filters have been subjected to extensive numerical tests including all of the filter parameters defining each of the different families as well as the M-shift biorthogonality, M-shift orthogonality, and M-band reconstruction errors. The numerically observed vanishing moment number should equal K. However, it was observed to peak at approximately 12 for each family tested when all computations were done in double precision with the tolerance for a vanishing moment set at 1e-4.
New filters with distinguishing features are demonstrated with examples for each of the families. All of the filter families are catalogued extensively for K = 1,2,...,24 with tables listing numerical estimates of the filter parameters and figures displaying plots of the filter zeros, impulse responses, and frequency responses.
Computational algorithms have been developed for generating min-length max-flat FIR filter coefficients for orthogonal and biorthogonal wavelets with varying degrees of asymmetry or symmetry. These algorithms are based on spectral factorization of the Daubechies polynomial with a combinatorial search of root sets selected by a desired optimization criterion. Daubechies filter families were systematized to include Real Orthogonal Least Asymmetric (DROLA), Real Biorthogonal symmetric balanced Most Regular (DRBMR), Complex Orthogonal Least Asymmetric (DCOLA), and Complex Orthogonal Most Symmetric (DCOMS). Total phase nonlinearity was the criterion minimized to select the roots for the DROLA, DCOLA, and DCOMS filters. Time-domain regularity was used to select the roots for the DRBMR filters (which have linear phase only). New filters with distinguishing features are demonstrated with examples.
As the number of applications and use of wavelet transforms continues to grow, so does the number of classes and variations of wavelet transform algorithms. All of these algorithms incorporate a filter convolution in some implementation, typically, as part of an iterated filter bank. In contrast to implementations of the classical Fourier transform where there is at most a choice of sign and normalization constant in the complex exponential kernel, for wavelet transform algorithms there are multiple choices including both the signs and normalization constants of the wavelet kernels as well as the phase delays or advances of each of the filters in the wavelet filter bank. These algorithmic details, however, are usually not reported in the literature, with notable exceptions such as the FBI fingerprint image compression standard. Nevertheless, it is necessary to specify such details in order to ensure the reproducibility of results output by each algorithm regardless of its implementation by any programmer working in any language or any engineer designing any DSP chip. This report itemizes a list of choices that must be specified clearly in order to ensure the reproducibility of a sequence of transform coefficients generated by a specific wavelet transform algorithm. Moreover, this report proposes a simple but novel solution to the phase alignment problem for wavelet transforms. The general principles of this solution apply in various specific forms to both non-subsampled and critically subsampled wavelet transforms and to both symmetric and asymmetric wavelet filters.
Satisficing search algorithms have been proposed for adaptively selecting near-best basis and near-best frame decompositions in redundant tree-structured wavelet transforms. Any of a variety of additive or non-additive information cost functions can be used as the decision criterion for comparing and selecting nodes when searching through the tree. The algorithms are applicable to tree-structured transforms generated by any kind of wavelet whether orthogonal, biorthogonal, or non-orthogonal. These satisficing search algorithms implement sub-optimizing rather than optimizing principles, and acquire the important advantage of reduced computational complexity with significant savings in memory, flops, and time. Despite the sub-optimal approach, top-down tree-search algorithms with additive or non-additive costs that yield near-best bases can be considered in many practical situations better than bottom-up tree-search algorithms with additive costs that yield best bases. Here "better than" means that effectively the same level of performance can be attained for a fraction of the computational work. Experimental results comparing the various information cost functions and basis selection methods are demonstrated for both data compression of real speech and time-frequency analysis of artificial transients.
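A much-simplified sketch of the top-down satisficing idea follows. The Haar split, the Shannon entropy cost on normalized energies, and the greedy descend-only-if-cheaper rule are all illustrative stand-ins for the paper's algorithms and cost functions; the point is that subtrees that cannot reduce the cost are never expanded, which is the source of the savings in memory, flops, and time.

```python
import numpy as np

def haar_split(x):
    # one orthonormal Haar split into low- and high-band children
    return ((x[0::2] + x[1::2]) / np.sqrt(2),
            (x[0::2] - x[1::2]) / np.sqrt(2))

def shannon_cost(c):
    """Non-additive information cost: Shannon entropy of normalized energies."""
    e = np.sum(c ** 2)
    if e == 0:
        return 0.0
    p = c[c != 0] ** 2 / e
    return float(-np.sum(p * np.log(p)))

def near_best_basis(x, max_depth, cost=shannon_cost):
    """Top-down satisficing search (simplified sketch): descend into a
    node's children only when they jointly cost less than the node."""
    if max_depth == 0 or len(x) < 2:
        return [x]
    lo, hi = haar_split(x)
    if cost(np.concatenate([lo, hi])) < cost(x):
        return (near_best_basis(lo, max_depth - 1, cost) +
                near_best_basis(hi, max_depth - 1, cost))
    return [x]

# artificial transient: silence followed by a burst
x = np.concatenate([np.zeros(32), np.sin(np.linspace(0, 8 * np.pi, 32))])
blocks = near_best_basis(x, 4)
```

Because each split is orthonormal, the selected blocks always conserve both the coefficient count and the signal energy, regardless of which subtree the search chooses, and any other (possibly non-additive) cost function can be dropped in for `cost`.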
Explicit algorithms are presented for the generation of Daubechies compact orthogonal least asymmetric wavelet filter coefficients and the computation of their Hölder regularity. The algorithms yield results for any number N of vanishing moments for the wavelets. These results extend beyond order N=10 those produced by Daubechies for the values of the filter coefficients and those produced by Rioul for the values of their Hölder regularity. Moreover, they reveal that the choice of phase for the filters published by Daubechies for orders N=4 to N=10 was not made consistently. In particular, her filter coefficients for orders N=7 to N=9 should be reflected to their mirror image sequence.
Compression of speech from the TIMIT corpus was investigated for several transform domain methods coding near-best and best bases from cosine and wavelet packet transforms. Satisficing (suboptimizing) search algorithms for selecting near-best bases were compared with optimizing algorithms for best bases in these adaptive tree-structured transforms. Experiments were performed on several hundred seconds of speech spoken by both male and female speakers from all dialect regions of the TIMIT corpus. Near-best bases provided rate-distortion performance effectively as good as that of best bases but without the additional computational penalty. Cosine packet bases outperformed wavelet packet bases.
Top-down tree search algorithms with non-additive information cost comparisons as decision criteria have recently been proposed by Taswell for the selection of near-best bases in wavelet packet transforms. Advantages of top-down non-additive near-best bases include faster computation speed, smaller memory requirement, and extensibility to biorthogonal wavelets in addition to orthogonal wavelets. A new compression scheme called parameterized-model coding was also proposed and demonstrated for one-dimensional signals. These methods are extended here to two-dimensional signals and applied to the compression of images. Significant improvement in compression while maintaining comparable distortion is demonstrated for parameterized-model coding relative to quantized-scalar coding. In general, the lossy compression scheme is applicable for low bit rate coding of the M largest packets of wavelet packet decompositions with wavelet packet basis libraries and the M atoms of matching pursuit decompositions with time-frequency atom dictionaries.
A wavelet software toolbox called WavBox has been developed for wavelet transforms and adaptive wavelet packet decompositions. WavBox provides both a function library for use in programming and a computing environment for use in interactive exploratory data analysis. The scope of this work therefore encompasses both computational mathematics with the development of new algorithms as well as computer science with the development of new interfaces, both textual command and graphical icon, for the use of these algorithms within an interactive computing environment.
The development of interfaces for the WavBox computing environment focused on principles of convenience and utility for the user. All transform and decomposition algorithms are integrated for simultaneous use with both textual command and graphical icon interfaces through an architectural design incorporating hierarchical modules, switch-driven function suites, and an object property expert system with artificial intelligence for configuring valid property combinations.
The development of computational algorithms focused on principles of pragmatism. New algorithms include the development of a) methods for computing the wavelet transform of signals of arbitrary length not restricted to a power of two, b) satisficing searches instead of optimizing searches for selecting bases in wavelet packet transforms, and c) parameterized-model coding instead of quantized-vector or quantized-scalar coding for further compression of the selected transform decompositions. These methods are shown to be especially useful for image compression.
Wavelet packet basis decompositions require selection of a single basis represented as the terminal leaves of a subtree within a redundant collection of many bases represented as the full tree. For this purpose, top-down and bottom-up tree searches with additive and non-additive information cost functions as decision criteria are proposed as selection methods. These new algorithms are satisficing searches and find near-best basis decompositions.
The satisficing searches are benchmarked in computational experiments comparing their performance with optimizing searches (the Coifman-Wickerhauser best basis decomposition and the Mallat-Zhang matching pursuit decomposition). Near-best basis decompositions outperform the other decompositions as measured by significant increases in efficiency of computation (reductions in memory, flops, and time) for comparable levels of distortion on reconstruction after fixed-rate lossy compression.
WavBox provides both a function library and a computing environment for wavelet transforms and adaptive wavelet packet decompositions. WavBox contains a collection of these transforms, decompositions, and related functions that perform multiresolution analyses of 1-D multichannel signals and 2-D images. The current version 4.1c includes overscaled pyramid transforms, discrete wavelet transforms, and adaptive wavelet and cosine packet decompositions by best level, best basis, and matching pursuit as described by Mallat, Coifman, Wickerhauser, and other authors. WavBox also implements Taswell's new search algorithms with decision criteria, called near-best basis and non-additive information costs respectively, for selecting bases in wavelet packet transforms, as well as Donoho and Johnstone's wavelet shrinkage denoising methods. Various choices of filter classes (orthogonal, biorthogonal, etc), filter families (Daubechies, Vetterli, etc), and convolution versions (interval, circular, extended, etc) exist for each transform and decomposition. The software has been designed for efficient automated computation, interactive exploratory data analysis, and pedagogy. Essential features of the design include: perfect reconstruction for multiresolution decomposition of data of arbitrary size not restricted to powers of 2; both command line and graphical user interfaces with a comprehensive set of plots and visual displays; an object property expert system with artificial intelligence for configuring valid property combinations; hierarchical modules and switch-driven function suites; vector-filter and matrix-operator implementations of convolutions; extensibility for the inclusion of other wavelet filters, convolution versions, and transforms; optional arguments with built-in defaults for most m-files; and extensive on-line help and self-running tutorial demos.
Search algorithms for finding signal decompositions called near-best bases using decision criteria called non-additive information costs have recently been proposed by Taswell for selecting bases in wavelet packet transforms represented as binary trees. These methods are extended here to distinguish between top-down and bottom-up tree searches. Other new non-additive information cost functions are also proposed. In particular, the near-best basis with the non-additive cost of the Shannon entropy on probabilities is compared against the best basis with the additive cost of the Coifman-Wickerhauser entropy on energies. All wavelet packet basis decompositions are also compared with the nonorthogonal matching pursuit decomposition of Mallat and Zhang and the orthogonal matching pursuit decomposition of Pati et al. Monte Carlo experiments using a constant-bit-rate variable-distortion paradigm for lossy compression suggest that the statistical performance of top-down near-best bases with non-additive costs is superior to that of bottom-up best bases with additive costs. Top-down near-best bases provide a significant increase in computational efficiency with reductions in memory, flops, and time while nevertheless maintaining similar coding efficiency with comparable reconstruction errors measured by l^p-norms. Finally, a new compression scheme called parameterized model coding is introduced and demonstrated with results showing better compression than standard scalar quantization coding at comparable levels of distortion.
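To illustrate the distinction at the center of this comparison, the two cost functions can be sketched in a few lines of Python. The function names and the natural-logarithm convention are choices made here for illustration, not taken from the paper:

```python
import math

def cw_additive_entropy(coeffs):
    # Coifman-Wickerhauser entropy on unnormalized energies |c|^2.
    # Additive over tree nodes, which permits a fast bottom-up
    # best-basis search by dynamic programming.
    total = 0.0
    for c in coeffs:
        e = c * c
        if e > 0.0:
            total -= e * math.log(e)
    return total

def shannon_entropy_on_probabilities(coeffs):
    # Shannon entropy on energies normalized to probabilities.
    # The normalization couples a node's cost to its total energy,
    # so the cost is non-additive across subtrees.
    norm = sum(c * c for c in coeffs)
    if norm == 0.0:
        return 0.0
    h = 0.0
    for c in coeffs:
        p = (c * c) / norm
        if p > 0.0:
            h -= p * math.log(p)
    return h
```

Note that the normalized Shannon entropy is invariant under rescaling of the coefficient vector, whereas the additive Coifman-Wickerhauser entropy is not; this scale invariance is one reason the normalized cost behaves differently as a basis-selection criterion.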
Search algorithms for finding signal decompositions called near-best bases using decision criteria called non-additive information costs are proposed for selecting bases in wavelet packet transforms. These new methods are compared with the best bases and additive information costs of Coifman and Wickerhauser. All near-best and best bases are also compared with the matching pursuit decomposition of Mallat and Zhang. Preliminary experiments suggest that for the application of time-frequency analysis, a wide variety of results can be obtained with the different methods, and that for the application of data compression, the near-best basis selected with non-additive costs may outperform the best basis selected with additive costs.
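The flavor of a top-down, satisficing tree search can be sketched as follows. The splitting rule used here (keep a node unless the worse of its two children still has lower cost) and the Haar filters are simplifications chosen for brevity; Taswell's actual decision criteria and filter machinery differ, so this is only an illustrative sketch:

```python
import math

def shannon_cost(coeffs):
    # Non-additive cost: Shannon entropy of normalized energies.
    norm = sum(c * c for c in coeffs) or 1.0
    h = 0.0
    for c in coeffs:
        p = (c * c) / norm
        if p > 0.0:
            h -= p * math.log(p)
    return h

def haar_split(x):
    # One orthonormal Haar split into low- and high-pass half-bands.
    s = 1.0 / math.sqrt(2.0)
    lo = [s * (x[2*i] + x[2*i + 1]) for i in range(len(x) // 2)]
    hi = [s * (x[2*i] - x[2*i + 1]) for i in range(len(x) // 2)]
    return lo, hi

def top_down_near_best(x, max_level):
    # Greedy top-down search: split a node only if both children cost
    # less than the parent; otherwise keep the node as a leaf.
    # Satisficing: levels below an unsplit node are never examined,
    # which is the source of the computational savings.
    if max_level == 0 or len(x) % 2:
        return [x]
    lo, hi = haar_split(x)
    if max(shannon_cost(lo), shannon_cost(hi)) < shannon_cost(x):
        return (top_down_near_best(lo, max_level - 1)
                + top_down_near_best(hi, max_level - 1))
    return [x]
```

Because every split is orthonormal, the leaves of the resulting decomposition conserve both the sample count and the energy of the input signal.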
The algorithms "split" for the wavelet transform and "merge" for the inverse wavelet transform are presented for finite-duration discrete-time signals of arbitrary length not restricted to a power of 2. Alternative matrix- and vector-filter implementations of alternative truncated, circulant, and extended versions are discussed. Matrix- and vector-filter implementations yield identical results and enhance, respectively, didactic conceptualization and computational efficiency. Truncated, circulant, and extended versions produce the signal-end effects of, respectively, errors, periodization, and redundancy in the transform coefficients. The use of any one of these three versions avoids the signal-end effects associated with the other two versions. Additional alternatives which eliminate all signal-end effects (albeit at the cost of increased algorithmic complexity) are discussed briefly.
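A minimal vector-filter sketch of one split/merge level can be written with the length-2 Haar filter pair, for which the truncated, circulant, and extended versions coincide and no signal-end effects arise; longer filters require one of the three end-handling versions discussed above, and the paper's algorithms additionally handle arbitrary (including odd) lengths, which this even-length sketch does not:

```python
import math

def split(x):
    # One level of the forward wavelet transform (vector-filter form)
    # with orthonormal Haar filters; even-length input assumed here.
    n = len(x)
    assert n % 2 == 0, "this simplified sketch requires even length"
    s = 1.0 / math.sqrt(2.0)
    lo = [s * (x[2*i] + x[2*i + 1]) for i in range(n // 2)]  # scaling band
    hi = [s * (x[2*i] - x[2*i + 1]) for i in range(n // 2)]  # wavelet band
    return lo, hi

def merge(lo, hi):
    # Inverse transform: perfect reconstruction of the full-length
    # signal from the two half-length bands.
    s = 1.0 / math.sqrt(2.0)
    x = []
    for a, d in zip(lo, hi):
        x.append(s * (a + d))
        x.append(s * (a - d))
    return x
```

The same split could equally be written as multiplication by a banded orthogonal matrix; the matrix-operator and vector-filter forms yield identical coefficients, with the former clearer for exposition and the latter cheaper to compute.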