Computational Toolsmiths
Software Tools for Computational Science, Engineering, and Medicine

- Papers 1991-1996: 1991, 1992, 1993, 1994, 1995, 1996.
- Papers 1997-2001: 1997, 1998, 1999, 2000, 2001.

Some of the files below are available in Adobe Acrobat Reader *.pdf format. You may view and print these files with the freely redistributable Acrobat Reader from Adobe. For a list of publications in BibTeX format, download the bibliography of citations.

- C. Taswell, 2001,
Experiments in Wavelet Shrinkage Denoising. Invited paper in Journal of Computational Methods
in Science and Engineering 1(2s-3s):315--326. See also Computational Toolsmiths Technical Report
CT-2001-01 for alternate version with color figures.
Previous simulation experiments for the comparison of wavelet shrinkage denoising methods have failed to demonstrate significant differences between methods. Such differences have never been clearly demonstrated due to the use of qualitative comparisons or of quantitative comparisons that suffered from insufficient sample size and/or absent confidence intervals for the figure of merit investigated.

In particular, previous studies have used non-robust measures as figures of merit for *fixed* signal classes defined by adding instances of noise to the same instance of the fixed test signal. New simulation experiments are reported here that instead use robust measures for *randomized* signal classes defined by adding instances of noise to different instances of randomized test signals.

Significantly greater variability in the performance of the denoising methods was observed when comparing results obtained with randomized rather than fixed signal classes. However, the use of robust measures does facilitate statistically valid comparisons with respect to this variability. Indeed, the use of non-robust measures or of non-randomized signal classes can result in misleading inferences from invalid comparisons. Thus, the combined use of both should yield more realistic and meaningful simulation results that better represent the real-world context intended for applied use of the denoising methods.
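As a toy illustration of this experimental design (assuming NumPy; the piecewise-constant random signals, single-level Haar transform, and universal soft threshold below are simplified stand-ins, not the paper's actual signal classes or denoising methods), the sketch draws many noisy realizations of a randomized signal class and compares robust figures of merit (medians) before and after shrinkage:

```python
import numpy as np

def haar_fwd(x):
    # single-level orthonormal Haar analysis (even-length input)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_inv(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(y, sigma):
    # universal soft threshold applied to the detail coefficients only
    a, d = haar_fwd(y)
    t = sigma * np.sqrt(2.0 * np.log(y.size))
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)
    return haar_inv(a, d)

rng = np.random.default_rng(0)
n, sigma, trials = 256, 0.5, 200
rmse_noisy, rmse_den = [], []
for _ in range(trials):
    # randomized signal class: piecewise-constant signal with random levels
    s = np.repeat(rng.uniform(-2.0, 2.0, size=8), n // 8)
    y = s + sigma * rng.standard_normal(n)
    rmse_noisy.append(np.sqrt(np.mean((y - s) ** 2)))
    rmse_den.append(np.sqrt(np.mean((denoise(y, sigma) - s) ** 2)))

# robust figures of merit: medians rather than means over the trials
print(np.median(rmse_noisy), np.median(rmse_den))
```

Confidence intervals for the median (e.g., order-statistic intervals) would then support the statistically valid comparisons the abstract calls for.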

- C. Taswell, 2001,
Empirical Tests for Evaluation of Multirate Filter Bank Parameters. Invited paper, pages
111--140 in "Wavelets in Signal and Image Analysis: From Theory To Practice" edited by A. Petrosian
and F. Meyer, Kluwer Academic Publishers. See also Computational Toolsmiths Technical Report
CT-1998-02 for alternate version with link to additional color figures.
Empirical tests have been developed for evaluating the numerical properties of multirate M-band filter banks represented as N x M matrices of filter coefficients. Each test returns a numerically observed estimate of a 1 x M vector parameter in which the m-th element corresponds to the m-th filter band. These vector valued parameters can be readily converted to scalar valued parameters for comparison of filter bank performance or optimization of filter bank design. However, they are intended primarily for the characterization and verification of filter banks. By characterizing the numerical performance of analytic or algorithmic designs, these tests facilitate the experimental verification of theoretical specifications.

Tests are introduced, defined, and demonstrated for M-shift biorthogonality and orthogonality errors, M-band reconstruction error and delay, frequency domain selectivity, time frequency uncertainty, time domain regularity and moments, and vanishing moment numbers. These tests constitute the verification component of the first stage of the hierarchical three stage framework (with filter bank coefficients, single-level convolutions, and multi-level transforms) for specification and verification of the reproducibility of wavelet transform algorithms.

Filter banks tested as examples included a variety of real and complex orthogonal, biorthogonal, and nonorthogonal M-band systems with M >= 2. Coefficients for these filter banks were either generated by computational algorithms or obtained from published tables. Analysis of these examples from the published literature revealed previously undetected errors of three different kinds, which have been called transmission, implementation, and interpretation errors. The detection of these mistakes demonstrates the importance of the evaluation methodology in revealing past and preventing future discrepancies between observed and expected results, and thus, in ensuring the validity and reproducibility of results and conclusions based on those results.
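A minimal sketch of one such empirical test, the M-shift orthogonality error, assuming NumPy and using the two-band Daubechies D4 pair as the example filter bank (the paper's full test suite covers many more parameters):

```python
import numpy as np

def sip(a, b, s):
    # zero-padded inner product sum_k a[k] * b[k - s]
    if s < 0:
        return sip(b, a, -s)
    if s >= a.size:
        return 0.0
    return float(np.dot(a[s:], b[:b.size - s]))

def orthogonality_error(H, M):
    # one error estimate per band: worst deviation from M-shift orthonormality
    N, nb = H.shape
    err = np.zeros(nb)
    for m in range(nb):
        for n in range(nb):
            for l in range(-(N // M), N // M + 1):
                target = 1.0 if (m == n and l == 0) else 0.0
                dev = abs(sip(H[:, m], H[:, n], l * M) - target)
                err[m] = max(err[m], dev)
    return err

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))  # D4
g = ((-1.0) ** np.arange(4)) * h[::-1]          # quadrature mirror highpass
H = np.column_stack([h, g])
print(orthogonality_error(H, M=2))              # ~[0, 0] for an orthonormal bank
```

A filter bank transcribed with an error would show a nonzero entry here, which is how the transmission and implementation errors mentioned above surface.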

- C. Taswell, June 2000,
Constraint-Selected and Search-Optimized Families of Daubechies Wavelet Filters Computable by Spectral
Factorization. Invited Paper in Special Issue on Numerical Analysis in the 20th Century,
Journal of Computational and Applied Mathematics 121:179-195. See also Computational Toolsmiths Technical
Report CT-1999-05 for alternate version with additional color figures.
A unifying algorithm has been developed to systematize the collection of compact Daubechies wavelets computable by spectral factorization of a symmetric positive polynomial. This collection comprises all classes of real and complex orthogonal and biorthogonal wavelet filters with maximal flatness for their minimal length. The main algorithm incorporates spectral factorization of the Daubechies product filter into analysis and synthesis filters. The spectral factors are found for search-optimized families by examining a desired criterion over combinatorial subsets of roots indexed by binary codes, and for constraint-selected families by imposing sufficient constraints on the roots without any optimizing search for an extremal property. Daubechies wavelet filter families have been systematized to include those constraint-selected by the principle of separably disjoint roots, and those search-optimized for time-domain regularity, frequency-domain selectivity, time-frequency uncertainty, and phase nonlinearity. The latter criterion permits construction of the least and most asymmetric and least and most symmetric real and complex orthogonal filters. Biorthogonal symmetric spline and balanced-length filters with linear phase are also computable by these methods. This systematized collection has been developed in the context of a general framework enabling evaluation of the equivalence of constraint-selected and search-optimized families with respect to the filter coefficients and roots and their characteristics. Some of the constraint-selected families have been demonstrated to be equivalent to some of the search-optimized families, thereby obviating the necessity for any search in their computation.
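The spectral-factorization route can be sketched for the extremal-phase family; this is a simplified numerical illustration assuming NumPy, which factors the binomial polynomial B_K(y) with `numpy.roots` and keeps the root of each reciprocal pair inside the unit circle, rather than performing the paper's systematized search over root subsets:

```python
import numpy as np
from math import comb

def daubechies_minphase(K):
    # B_K(y) = sum_k C(K-1+k, k) y^k; np.roots wants highest degree first
    b = [comb(K - 1 + k, k) for k in range(K)]
    yroots = np.roots(b[::-1])
    h = np.array([1.0 + 0.0j])
    for _ in range(K):                      # K zeros at z = -1 (maximal flatness)
        h = np.convolve(h, [1.0, 1.0])
    for y in yroots:
        # y = (2 - z - 1/z)/4 maps each y-root to a reciprocal pair z, 1/z;
        # keeping |z| < 1 selects the minimum-phase spectral factor
        zpair = np.roots([1.0, 4.0 * y - 2.0, 1.0])
        z = zpair[np.argmin(np.abs(zpair))]
        h = np.convolve(h, [1.0, -z])
    h = np.real(h)                          # conjugate pairs leave a real filter
    return h / np.linalg.norm(h)

h4 = daubechies_minphase(2)
s3 = np.sqrt(3.0)
ref = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
print(np.max(np.abs(h4 - ref)))             # matches the known D4 coefficients
```

Choosing different subsets of roots from each reciprocal (and conjugate) pair is exactly the combinatorial freedom that the binary root codes in the paper index.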

- C. Taswell, March 2000,
The What, How, and Why of Wavelet Shrinkage Denoising. Invited paper, IEEE Computing
in Science and Engineering 2(3):12-19. See also Computational Toolsmiths Technical Report
CT-1998-09.
Principles of wavelet shrinkage denoising are reviewed. Both 1-D and 2-D examples are demonstrated. The performance of various ideal and practical Fourier- and wavelet-based denoising procedures is evaluated and compared in a new Monte Carlo simulation experiment. Finally, recommendations for the practitioner are discussed.

- C. Taswell, July 1999,
Randomized Signal Classes for Evaluating the Performance of Wavelet Shrinkage Denoising Methods.
Paper #296-214, pages 352-355 in Proceedings of the IASTED SIP'99 Conference, held in Nassau,
Bahamas, October, 1999.
Previous simulation experiments for the comparison of wavelet shrinkage denoising methods have used *fixed* signal classes defined by adding instances of noise to a single test signal. New simulation experiments are reported here with *randomized* signal classes defined by adding instances of noise to instances of randomized test signals. As expected, significantly greater variability in the performance of the denoising methods was observed. Statistically valid comparisons must be conducted with respect to this variability. Use of randomized, rather than fixed, signal classes should yield more realistic and meaningful results.

- C. Taswell, June 1999,
Constraint-Selected and Search-Optimized Families of Daubechies Wavelet Filters Computable by Spectral
Factorization. Technical Report CT-1999-05, Computational Toolsmiths. See also
JCAM 2000 preprint for version with black and white figures.
A unifying algorithm has been developed to systematize the collection of compact Daubechies wavelets computable by spectral factorization of a symmetric positive polynomial. This collection comprises all classes of real and complex orthogonal and biorthogonal wavelet filters with maximal flatness for their minimal length. The main algorithm incorporates spectral factorization of the Daubechies product filter into analysis and synthesis filters. The spectral factors are found for search-optimized families by examining a desired criterion over combinatorial subsets of roots indexed by binary codes, and for constraint-selected families by imposing sufficient constraints on the roots without any optimizing search for an extremal property. Daubechies wavelet filter families have been systematized to include those constraint-selected by the principle of separably disjoint roots, and those search-optimized for time-domain regularity, frequency-domain selectivity, time-frequency uncertainty, and phase nonlinearity. The latter criterion permits construction of the least and most asymmetric and least and most symmetric real and complex orthogonal filters. Biorthogonal symmetric spline and balanced-length filters with linear phase are also computable by these methods. This systematized collection has been developed in the context of a general framework enabling evaluation of the equivalence of constraint-selected and search-optimized families with respect to the filter coefficients and roots and their characteristics. Some of the constraint-selected families have been demonstrated to be equivalent to some of the search-optimized families, thereby obviating the necessity for any search in their computation.

- C. Taswell and J. Niederholz, April 1999,
Quality Controlled Compression of Polysomnograms. Paper #10.3.1.4, page 944 in Proceedings
of the BMES-EMBS'99 Conference, held in Atlanta,
Georgia, October, 1999.
A novel compression algorithm incorporating the near-best wavelet packet transform is introduced for multi-modal signals in low bitrate telemonitoring applications. The method permits flexible control of the total bitrate subject to the constraints of the channel, and the allocation of the total bitrate to each of the constituent modes subject to the quality preferences of the observer. Its performance is demonstrated using polysomnograms.

- J. Niederholz and C. Taswell, April 1999,
Near-Best WPT Compression of Polysomnograms. Paper #10.3.4.1, page 961 in Proceedings
of the BMES-EMBS'99 Conference, held in Atlanta,
Georgia, October, 1999.
Near-best and best wavelet packet transforms were compared with wavelet transforms for the compression of polysomnograms in a low bitrate telemonitoring application. Near-best wavelet packet transforms provided better overall performance with respect to efficiency of computation and compression, and amenability for use in more sophisticated quality-on-demand algorithms.

- C. Taswell, October 1998,
The What, How, and Why of Wavelet Shrinkage Denoising. In press, IEEE Computing in
Science and Engineering, to appear in 2000. Also Technical Report CT-1998-09, Computational Toolsmiths.
Principles of wavelet shrinkage denoising are reviewed. Both 1-D and 2-D examples are demonstrated. The performance of various ideal and practical Fourier- and wavelet-based denoising procedures is evaluated and compared in a new Monte Carlo simulation experiment. Finally, recommendations for the practitioner are discussed.

- C. Taswell, August 1998,
Least and Most Disjoint Root Sets for Daubechies Wavelets. Paper #1164, Proceedings
of IEEE ICASSP-99 Conference, held in Phoenix, March, 1999. See
CT-1998-08 for extended version with multicolor figures.
A new set of wavelet filter families has been added to the systematized collection of Daubechies wavelets. This new set includes complex and real, orthogonal and biorthogonal, least and most disjoint families defined using constraints derived from the principle of separably disjoint root sets in the complex z domain. All of the new families are considered to be *constraint selected* without a search and without any evaluation of filter properties such as time-domain regularity and frequency-domain selectivity. In contrast, the older families in the collection are considered to be *search optimized* for extremal properties. Some of the new families are demonstrated to be equivalent to some of the older families, thereby obviating the necessity for any search in their computation. A library that displays images of all filter families in the collection is available at FirWav Filter Library.

- C. Taswell, August 1998,
Disjoint Sets of Daubechies Polynomial Roots for Generating Wavelet Filters with Extremal Properties.
Technical Report CT-1998-08, Computational Toolsmiths. See ICASSP99 for abridged version with
black and white figures.
A new set of wavelet filter families has been added to the systematized collection of Daubechies wavelets. This new set includes complex and real, orthogonal and biorthogonal, least and most disjoint families defined using constraints derived from the principle of separably disjoint root sets in the complex z domain. All of the new families are considered to be *constraint selected* without a search and without any evaluation of filter properties such as time-domain regularity and frequency-domain selectivity. In contrast, the older families in the collection are considered to be *search optimized* for extremal properties. Some of the new families are demonstrated to be equivalent to some of the older families, thereby obviating the necessity for any search in their computation.

- C. Taswell, August 1998,
Wavelet Transform Compression of Functional Magnetic Resonance Image Sequences. Paper #281-162,
pages 725-728 in Proceedings of the
IASTED SIP'98 Conference, held in Las Vegas, October, 1998.
Image sequences from functional neuroimaging experiments with 3-D magnetic resonance scans of the human brain were wavelet transformed using a variety of orthogonal and biorthogonal wavelet filters with different treatments of the image borders. Contrary to the expectation that higher-order wavelets with more sophisticated boundary treatments would yield better compression, simple Haar wavelets without any boundary treatment provided the best compression throughout the rate-distortion performance curve.

- C. Taswell, June 1998,
The Systematized Collection of Daubechies Wavelets. Technical Report CT-1998-06,
Computational Toolsmiths.
A single unifying algorithm has been developed to systematize the collection of compact Daubechies wavelets. This collection comprises all classes of real and complex orthogonal and biorthogonal wavelets with the maximal number K of vanishing moments for their finite length. Named and indexed families of wavelet filters were generated by spectral factorization of a product filter in which the optimal subset of roots was selected by a defining criterion within a combinatorial search of subsets meeting required constraints. Several new families have been defined some of which were demonstrated to be equivalent to families with roots selected solely by geometric criteria that do not require an optimizing search. Extensive experimental results are tabulated for 1 <= K <= 24 for each of the families and most of the filter characteristics defined in both time and frequency domains. For those families requiring optimization, a conjecture for K > 24 is provided for a search pattern that reduces the order of the computational complexity but permits attainment of the desired optimum.
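The vanishing-moment property can be checked numerically from the discrete filter moments; a small sketch assuming NumPy, using the Daubechies D4 highpass filter (K = 2), whose first two moments vanish while higher moments do not:

```python
import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))  # D4, K = 2
g = ((-1.0) ** np.arange(4)) * h[::-1]      # highpass via quadrature mirror

k = np.arange(g.size, dtype=float)
moments = [abs(np.sum(k ** p * g)) for p in range(4)]
print(moments)   # first K = 2 moments are ~0; the p = 2 moment is not
```

In finite precision such sums never reach exactly zero, so a tolerance must be chosen, which is how the numerically observed vanishing moment number can fall short of the theoretical K for long filters.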

- C. Taswell, June 1998,
Correction for the Sherlock-Monro Algorithm for Generating the Space of Real Orthonormal Wavelets.
Technical Report CT-1998-05, Computational Toolsmiths.
Evaluation of the Sherlock-Monro algorithm *orthogen* for generating the space of real orthonormal wavelets of length N = 2J from J angle parameters revealed that it was valid only for 1 <= J <= 3. A correction is provided and validated for all J >= 1 with tests for orthonormality and perfect reconstruction. Some additional comments are offered discussing wavelet filter design algorithms based on angular parameterizations and those based on spectral factorizations.

- C. Taswell, May 1998,
A Spectral-Factorization Combinatorial-Search Algorithm Unifying the Systematized Collection of Daubechies
Wavelets. Invited Paper and Session Keynote Lecture, Paper #323, pages 76-80 in Proceedings
of the IAAMSAD International SSCC98 Conference,
held at Durban South Africa, September 1998.
A spectral-factorization combinatorial-search algorithm has been developed for unifying a systematized collection of Daubechies minimum length maximum flatness wavelet filters optimized for a diverse assortment of criteria. This systematized collection comprises real and complex orthogonal and biorthogonal wavelets in families within which the filters are indexed by the number of roots at z = -1. The main algorithm incorporates spectral factorization of the Daubechies polynomial with a combinatorial search of spectral factor root sets indexed by binary codes. The selected spectral factors are found by optimizing the desired criterion characterizing either the filter roots or coefficients. Daubechies wavelet filter families have been systematized to include those optimized for time domain regularity, frequency domain selectivity, time frequency uncertainty, and phase nonlinearity. The latter criterion permits construction of the orthogonal least and most asymmetric real and least and most symmetric complex filters. Biorthogonal symmetric spline and balanced length filters with linear phase are also computable by these methods.
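For contrast with the spectral-factorization route, the angle-parameterization approach discussed in the Sherlock-Monro correction above can be sketched with the standard paraunitary polyphase lattice (assuming NumPy; this is a generic lattice parameterization of the real orthonormal space, not the corrected Sherlock-Monro recursion itself):

```python
import numpy as np

def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

def lattice_filters(thetas):
    # E(z) = R(t1) * prod_j [Lambda(z) R(tj)], Lambda(z) = diag(1, 1/z);
    # E stored as a (taps, 2, 2) array of matrix coefficients of z^-i
    E = rot(thetas[0])[None, :, :]
    for t in thetas[1:]:
        D = np.zeros((E.shape[0] + 1, 2, 2))
        D[:-1, :, 0] = E[:, :, 0]           # first polyphase column unchanged
        D[1:, :, 1] = E[:, :, 1]            # second column delayed by z^-1
        E = D @ rot(t)
    h0 = E[:, 0, :].ravel()                 # interleave polyphase components
    h1 = E[:, 1, :].ravel()
    return h0, h1

rng = np.random.default_rng(7)
h0, h1 = lattice_filters(rng.uniform(0, 2 * np.pi, size=5))  # J = 5, length 10
print(np.dot(h0, h0), np.dot(h0[:-2], h0[2:]), np.dot(h0, h1))
```

Because every factor is unitary on the unit circle, the resulting pair satisfies double-shift orthonormality for any choice of angles, which is the property the original *orthogen* recursion failed to preserve for J > 3.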

- C. Taswell, April 1998,
Numerical Evaluation of Time-Domain Moments and Regularity of Multirate Filter Banks. Paper
#71 in Proceedings of the 8th IEEE DSP Workshop, held at
Bryce Canyon, August 1998.
Numerical methods are described for evaluating the time-domain moments and regularity of multirate filter banks. Estimates of the Hölder regularity are computed for the continuous functions obtained from the iterated discrete filters. Estimates of the centered moments are also computed for both the discrete filters and continuous functions. These estimates are then used to obtain the vanishing moment numbers. None of the methods require any preprocessing of the filters or *a priori* information about them. Thus, the methods can serve as tests for the evaluation of arbitrary filter banks. Results are presented for various examples.

- C. Taswell, March 1998, Revision December 1998,
Empirical Tests for the Evaluation of Multirate Filter Bank Parameters. Technical
Report CT-1998-02, Computational Toolsmiths. For multicolor figures to accompany this paper, view the
MRFB Parameters Page.
Empirical tests have been developed for evaluating the numerical properties of multirate M-band filter banks represented as N x M matrices of filter coefficients. Each test returns a numerically observed estimate of a 1 x M vector parameter in which the m-th element corresponds to the m-th filter band. These vector valued parameters can be readily converted to scalar valued parameters for comparison of filter bank performance or optimization of filter bank design. However, they are intended primarily for the characterization and verification of filter banks. By characterizing the numerical performance of analytic or algorithmic designs, these tests facilitate the experimental verification of theoretical specifications.

Tests are introduced, defined, and demonstrated for M-shift biorthogonality and orthogonality errors, M-band reconstruction error and delay, frequency domain selectivity, time frequency uncertainty, time domain regularity and moments, and vanishing moment numbers. These tests constitute the verification component of the first stage of the hierarchical three stage framework (with filter bank coefficients, single-level convolutions, and multi-level transforms) for specification and verification of the reproducibility of wavelet transform algorithms.

Filter banks tested as examples included a variety of real and complex orthogonal, biorthogonal, and nonorthogonal M-band systems with M >= 2. Coefficients for these filter banks were either generated by computational algorithms or obtained from published tables. Analysis of these examples from the published literature revealed previously undetected errors of three different kinds, which have been called transmission, implementation, and interpretation errors. The detection of these mistakes demonstrates the importance of the evaluation methodology in revealing past and preventing future discrepancies between observed and expected results, and thus, in ensuring the validity and reproducibility of results and conclusions based on those results.

- C. Taswell, December 1996, Revision March 1998,
Reproducibility Standards for Wavelet Transform Algorithms, Technical Report CT-1998-01,
Computational Toolsmiths.
Specifications for reproducibility standards are developed for wavelet transform algorithms. Reproducibility of an algorithm is defined as the requirement that the specified algorithm yield the same transform coefficients for the same signal data regardless of implementation in any programming language or on any computing machine. The specification is built with three hierarchical stages consisting of 1) filter bank coefficients, 2) single-level convolutions, and 3) multi-level transforms. Each stage is specified with all the necessary choices, parameters, and operators required to ensure reproducibility. Each stage can be further characterized by additional properties that provide relevant information. The specification has been designed to be a sufficiently general and flexible framework which encompasses many different convolution types addressing the issues of both the phase shift and the boundary treatment. New convolution phase shift variants are introduced. In particular, a peak near-aligned phase variant is demonstrated on test signals and applied to fast wavelet-based matrix multiplication. Finally, in the context of computational science and engineering, the concept of scientific reproducibility of an algorithm is discussed and contrasted with two other concepts introduced as repetitive executability and input-output repeatability.
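One reproducibility pitfall the report addresses, the convolution phase shift, shows up even in a toy example (assuming NumPy): two implementations that downsample the same Haar analysis convolution at different phases emit different coefficient sequences for identical input:

```python
import numpy as np

h = np.array([1.0, 1.0]) / np.sqrt(2.0)   # Haar lowpass analysis filter
x = np.arange(8, dtype=float)

y = np.convolve(x, h)                     # full-length analysis convolution
even_phase = y[0::2]                      # one downsampling phase choice
odd_phase = y[1::2]                       # the other phase choice
print(even_phase, odd_phase)              # different sequences, even lengths differ
```

Unless the specification pins down which phase (and which boundary treatment) is intended, two correct implementations will disagree coefficient-for-coefficient, which is precisely the kind of ambiguity the three-stage standard is meant to eliminate.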

- C. Taswell, December 1997, The Systematized Collection of Wavelet Filters Computable
by Spectral Factorization of the Daubechies Polynomial, Computational Toolsmiths. This 100 page
manuscript contains hyperlinked text, tables, and color figures as an Adobe *.pdf file (6 MB). It is
available with the FirWav Filter Library.
Computational algorithms have been developed for generating minimum length maximum flatness finite impulse response (FIR) filter coefficients for a systematized collection of wavelet filters designed by spectral factorization of a product filter. Both Lagrange and Daubechies polynomials have been studied numerically as alternative constructions for the required product filter which must be a halfband autocorrelation filter.

The systematized collection obtained from the product filter comprises real and complex orthogonal and biorthogonal wavelets in families defined by optimization criteria for various filter parameters. The main algorithm incorporates spectral factorization of the Daubechies polynomial with a combinatorial search of spectral factor root sets indexed by binary codes. The selected spectral factors are found by optimizing the desired criterion characterizing either the filter roots or coefficients.

Polynomial roots for the spectral factors are computed by a composite conformal mapping with affine and inverse Joukowski transformations. Filter coefficients are computed from the roots by iterative convolution of linear root factors previously sorted in increasing absolute value order. Experiments with higher order root factors and other root sort orders, such as the Edrei-Leja order, revealed no significant benefit to these alternatives.

Daubechies wavelet filter families have been systematized to include those optimized for time-domain regularity, frequency-domain selectivity, time-frequency uncertainty, and phase nonlinearity. The latter criterion permits construction of the least and most asymmetric and least and most symmetric real and complex orthogonal filters. Biorthogonal symmetric spline and balanced length filters are also computable by these methods.

All families have been indexed by the number K of roots at z = -1 corresponding to the number of vanishing moments on the wavelets. All filters have been subjected to extensive numerical tests including all of the filter parameters defining each of the different families as well as the M-shift biorthogonality, M-shift orthogonality, and M-band reconstruction errors. The numerically observed vanishing moments number should equal K. However, it was observed to peak at approximately 12 for each family tested when all computations were done in double precision with tolerance for a vanishing moment set at 1e-4.

New filters with distinguishing features are demonstrated with examples for each of the families. All of the filter families are catalogued extensively for K = 1,2,...,24 with tables listing numerical estimates of the filter parameters and figures displaying plots of the filter zeros, impulse responses, and frequency responses.

- C. Taswell, September 1997,
Computational Algorithms for Daubechies Least-Asymmetric, Symmetric, and Most-Symmetric Wavelets,
pages 1834-1838 in Proceedings ICSPAT'97 San Diego.
Computational algorithms have been developed for generating min-length max-flat FIR filter coefficients for orthogonal and biorthogonal wavelets with varying degrees of asymmetry or symmetry. These algorithms are based on spectral factorization of the Daubechies polynomial with a combinatorial search of root sets selected by a desired optimization criterion. Daubechies filter families were systematized to include Real Orthogonal Least Asymmetric (DROLA), Real Biorthogonal symmetric balanced Most Regular (DRBMR), Complex Orthogonal Least Asymmetric (DCOLA), and Complex Orthogonal Most Symmetric (DCOMS). Total phase nonlinearity was the criterion minimized to select the roots for the DROLA, DCOLA, and DCOMS filters. Time-domain regularity was used to select the roots for the DRBMR filters (which have linear phase only). New filters with distinguishing features are demonstrated with examples.

- C. Taswell, December 1996, Reproducibility Standards for Wavelet Transform Algorithms. See March 1998 Revision.
- C. Taswell, October 1996,
Specifications and Standards for Reproducibility of Wavelet Transforms, pages 1923-1927
in Proceedings ICSPAT'96 Boston.
As the number of applications and use of wavelet transforms continues to grow, so does the number of classes and variations of wavelet transform algorithms. All of these algorithms incorporate a filter convolution in some implementation, typically as part of an iterated filter bank. In contrast to implementations of the classical Fourier transform, where there is at most a choice of sign and normalization constant in the complex exponential kernel, for wavelet transform algorithms there are multiple choices including both the signs and normalization constants of the wavelet kernels as well as the phase delays or advances of each of the filters in the wavelet filter bank. These algorithmic details, however, are usually not reported in the literature, with certain exceptions such as the FBI fingerprint image compression standard. Nevertheless, it is necessary to specify such details in order to ensure the reproducibility of results output by each algorithm regardless of its implementation by any programmer working in any language or any engineer designing any DSP chip. This report itemizes a list of choices that must be specified clearly in order to ensure the reproducibility of a sequence of transform coefficients generated by a specific wavelet transform algorithm. Moreover, this report proposes a simple but novel solution to the phase alignment problem for wavelet transforms. The general principles of this solution apply in various specific forms to both non-subsampled and critically subsampled wavelet transforms and to both symmetric and asymmetric wavelet filters.

- C. Taswell, October 1996,
Satisficing Search Algorithms for Selecting Near-Best Bases in Adaptive Tree-Structured Wavelet
Transforms, pages 2423-2438 in IEEE Transactions on Signal Processing, Vol 44 No 10.
Satisficing search algorithms have been proposed for adaptively selecting near-best basis and near-best frame decompositions in redundant tree-structured wavelet transforms. Any of a variety of additive or non-additive information cost functions can be used as the decision criterion for comparing and selecting nodes when searching through the tree. The algorithms are applicable to tree-structured transforms generated by any kind of wavelet whether orthogonal, biorthogonal, or non-orthogonal. These satisficing search algorithms implement sub-optimizing rather than optimizing principles, and acquire the important advantage of reduced computational complexity with significant savings in memory, flops, and time. Despite the sub-optimal approach, top-down tree-search algorithms with additive or non-additive costs that yield near-best bases can be considered in many practical situations better than bottom-up tree-search algorithms with additive costs that yield best bases. Here "better than" means that effectively the same level of performance can be attained for a fraction of the computational work. Experimental results comparing the various information cost functions and basis selection methods are demonstrated for both data compression of real speech and time-frequency analysis of artificial transients.
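The top-down satisficing search can be sketched compactly; this toy version, assuming NumPy, uses a Haar packet split and an entropy-style (here non-additive, per-node normalized) information cost, splitting a node only when its children look cheaper than the parent:

```python
import numpy as np

def haar_split(x):
    # one packet split into lowpass and highpass half-rate children
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return s, d

def cost(x):
    # entropy-style information cost on normalized coefficient energies
    p = x ** 2 / max(np.sum(x ** 2), 1e-300)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def near_best(x, depth):
    # top-down: compare parent with children and descend only if cheaper,
    # so pruned subtrees are never computed (the satisficing savings)
    if depth == 0 or x.size < 2:
        return [x]
    s, d = haar_split(x)
    if cost(s) + cost(d) < cost(x):
        return near_best(s, depth - 1) + near_best(d, depth - 1)
    return [x]

rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 8 * np.arange(256) / 256) + 0.1 * rng.standard_normal(256)
leaves = near_best(x, depth=4)
print(len(leaves), sum(v.size for v in leaves))  # leaves tile all 256 samples
```

A best-basis search would instead expand the full tree to maximum depth and prune bottom-up with an additive cost; the top-down version trades that guarantee of optimality for a fraction of the computational work, as the abstract argues.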

- C. Taswell, August 1995,
Algorithms for the Generation of Daubechies Orthogonal Least Asymmetric Wavelets and the Computation
of Their Hölder Regularity. Scientific Computing and Computational Mathematics, Stanford
University, Stanford CA.
Explicit algorithms are presented for the generation of Daubechies compact orthogonal least asymmetric wavelet filter coefficients and the computation of their Hölder regularity. The algorithms yield results for any number N of vanishing moments for the wavelets. These results extend beyond order N=10 those produced by Daubechies for the values of the filter coefficients and those produced by Rioul for the values of their Hölder regularity. Moreover, they reveal that the choice of phase for the filters published by Daubechies for orders N=4 to N=10 was not made consistently. In particular, her filter coefficients for orders N=7 to N=9 should be reflected to their mirror image sequence.

- C. Taswell, August 1995,
Speech Compression with Cosine and Wavelet Packet Near-Best Bases. Scientific Computing
and Computational Mathematics, Stanford University, Stanford CA. Published in Proceedings ICASSP '96,
Atlanta May 1996, pages 566-568.
Compression of speech from the TIMIT corpus was investigated for several transform domain methods coding near-best and best bases from cosine and wavelet packet transforms. Satisficing (suboptimizing) search algorithms for selecting near-best bases were compared with optimizing algorithms for best bases in these adaptive tree-structured transforms. Experiments were performed on several hundred seconds of speech spoken by both male and female speakers from all dialect regions of the TIMIT corpus. Near-best bases provided rate-distortion performance effectively as good as that of best bases but without the additional computational penalty. Cosine packet bases outperformed wavelet packet bases.

- C. Taswell, June 1995,
Satisficing Search Algorithms for Selecting Near-Best Bases in Adaptive Tree-Structured Wavelet
Transforms. Technical Report SC-CM 95-08, Scientific Computing and Computational Mathematics,
Stanford University, Stanford CA; revision published in IEEE Transactions on Signal Processing, October
1996, Vol 44 No 10 pages 2423-2438.

- C. Taswell, April 1995,
Image Compression by Parameterized-Model Coding of Wavelet Packet Near-Best Bases,
pages 153-161 in Proceedings of the SPIE Conference on Wavelet Applications, Orlando FL, edited by Szu
H., SPIE Press Volume 2491.
Top-down tree search algorithms with non-additive information cost comparisons as decision criteria have recently been proposed by Taswell for the selection of near-best bases in wavelet packet transforms. Advantages of top-down non-additive near-best bases include faster computation speed, smaller memory requirement, and extensibility to biorthogonal wavelets in addition to orthogonal wavelets. A new compression scheme called parameterized-model coding was also proposed and demonstrated for one-dimensional signals. These methods are extended here to two-dimensional signals and applied to the compression of images. Significant improvement in compression while maintaining comparable distortion is demonstrated for parameterized-model coding relative to quantized-scalar coding. In general, the lossy compression scheme is applicable for low bit rate coding of the M largest packets of wavelet packet decompositions with wavelet packet basis libraries and the M atoms of matching pursuit decompositions with time-frequency atom dictionaries.
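The "M largest packets" step mentioned in the last sentence is a generic keep-M-largest operation; a minimal sketch (with made-up coefficient values, and no connection to the paper's actual parameterized-model coder) is:

```python
def keep_m_largest(coeffs, m):
    """Zero all but the m largest-magnitude coefficients (generic lossy step)."""
    if m >= len(coeffs):
        return list(coeffs)
    # Magnitude threshold that separates the m largest entries.
    thresh = sorted((abs(c) for c in coeffs), reverse=True)[m - 1]
    out, kept = [], 0
    for c in coeffs:
        if abs(c) >= thresh and kept < m:
            out.append(c)
            kept += 1
        else:
            out.append(0.0)
    return out

# Hypothetical coefficient values, for illustration only.
print(keep_m_largest([0.1, -2.0, 0.5, 3.0, -0.2], 2))
# → [0.0, -2.0, 0.0, 3.0, 0.0]
```

Only the surviving values and their positions need to be coded, which is what makes the decomposition's M largest packets a natural target for low bit rate schemes.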

- C. Taswell, March 1995, Algorithms for Wavelet Transforms and Adaptive Wavelet
Packet Decompositions, PhD Thesis, Scientific Computing and Computational Mathematics, Stanford
University, Stanford CA. Reading Committee consisted of:
- Principal Advisor: Prof. Joseph Keller, Stanford Univ Dept of Mathematics;
- Second Reader: Prof. George Papanicolaou, Stanford Univ Dept of Mathematics;
- Third Reader: Prof. Stephane Mallat, New York Univ Dept of Mathematics.

A wavelet software toolbox called WavBox has been developed for wavelet transforms and adaptive wavelet packet decompositions. WavBox provides both a function library for use in programming and a computing environment for use in interactive exploratory data analysis. The scope of this work therefore encompasses both computational mathematics, with the development of new algorithms, and computer science, with the development of new interfaces (both textual command and graphical icon) for the use of these algorithms within an interactive computing environment.

The development of interfaces for the WavBox computing environment focused on principles of convenience and utility for the user. All transform and decomposition algorithms are integrated for simultaneous use with both textual command and graphical icon interfaces through an architectural design incorporating hierarchical modules, switch-driven function suites, and an object property expert system with artificial intelligence for configuring valid property combinations.

The development of computational algorithms focused on principles of pragmatism. New algorithms include the development of a) methods for computing the wavelet transform of signals of arbitrary length not restricted to a power of two, b) satisficing searches instead of optimizing searches for selecting bases in wavelet packet transforms, and c) parameterized-model coding instead of quantized-vector or quantized-scalar coding for further compression of the selected transform decompositions. These methods are shown to be especially useful for image compression.

Wavelet packet basis decompositions require selection of a single basis represented as the terminal leaves of a subtree within a redundant collection of many bases represented as the full tree. For this purpose, top-down and bottom-up tree searches with additive and non-additive information cost functions as decision criteria are proposed as selection methods. These new algorithms are satisficing searches and find near-best basis decompositions.

The satisficing searches are benchmarked in computational experiments comparing their performance with optimizing searches (the Coifman-Wickerhauser best basis decomposition and the Mallat-Zhang matching pursuit decomposition). Near-best basis decompositions outperform the other decompositions as measured by significant increases in efficiency of computation (reductions in memory, flops, and time) for comparable levels of distortion on reconstruction after fixed-rate lossy compression.
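The greedy top-down selection benchmarked above can be sketched as follows. The tree layout, cost function, and toy data here are illustrative assumptions, not the thesis's implementation:

```python
import math

def shannon_entropy(x):
    """Shannon entropy of the per-node normalized energy distribution."""
    e = [v * v for v in x]
    total = sum(e) or 1.0
    return -sum((v / total) * math.log(v / total) for v in e if v > 0)

def near_best_basis(tree, node=0, cost=shannon_entropy):
    """Greedy top-down (satisficing) selection on a binary coefficient tree.

    `tree` maps a node index to its coefficient list, with children of node n
    at 2n+1 and 2n+2 (a hypothetical layout chosen for this sketch).
    Returns the node indices selected as basis leaves.
    """
    left, right = 2 * node + 1, 2 * node + 2
    if left not in tree or right not in tree:
        return [node]                        # bottom of the tree: keep node
    # Satisficing rule: keep the parent if it is no costlier than its children,
    # so whole subtrees below a "good enough" node are never examined.
    if cost(tree[node]) <= cost(tree[left]) + cost(tree[right]):
        return [node]
    return near_best_basis(tree, left, cost) + near_best_basis(tree, right, cost)

# Toy tree: the children concentrate energy, so the search descends one level.
tree = {0: [1.0, 1.0, 1.0, 1.0], 1: [2.0, 0.0], 2: [0.0, 2.0],
        3: [2.0], 4: [0.0], 5: [0.0], 6: [2.0]}
print(near_best_basis(tree))   # → [1, 2]
```

Because subtrees under accepted nodes are pruned without being searched, this top-down strategy does less work than a bottom-up sweep over the full tree, which is the source of the memory, flops, and time savings reported above.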

- C. Taswell, WavBox 4: A Software
Toolbox for Wavelet Transforms and Adaptive Wavelet Packet Decompositions, pages 361-375
in Wavelets and Statistics (Proceedings of the Villard de Lans Conference November 1994), Lecture Notes
in Statistics Vol. 103, edited by Antoniadis A. and Oppenheim G., Springer Verlag, 1995.
WavBox provides both a function library and a computing environment for wavelet transforms and adaptive wavelet packet decompositions. WavBox contains a collection of these transforms, decompositions, and related functions that perform multiresolution analyses of 1-D multichannel signals and 2-D images. The current version 4.1c includes overscaled pyramid transforms, discrete wavelet transforms, and adaptive wavelet and cosine packet decompositions by best level, best basis, and matching pursuit as described by Mallat, Coifman, Wickerhauser, and other authors. WavBox also implements Taswell's new search algorithms with decision criteria, called near-best basis and non-additive information costs respectively, for selecting bases in wavelet packet transforms, as well as Donoho and Johnstone's wavelet shrinkage denoising methods. Various choices of filter classes (orthogonal, biorthogonal, etc), filter families (Daubechies, Vetterli, etc), and convolution versions (interval, circular, extended, etc) exist for each transform and decomposition. The software has been designed for efficient automated computation, interactive exploratory data analysis, and pedagogy. Essential features of the design include: perfect reconstruction for multiresolution decomposition of data of arbitrary size not restricted to powers of 2; both command line and graphical user interfaces with a comprehensive set of plots and visual displays; an object property expert system with artificial intelligence for configuring valid property combinations; hierarchical modules and switch-driven function suites; vector-filter and matrix-operator implementations of convolutions; extensibility for the inclusion of other wavelet filters, convolution versions, and transforms; optional arguments with built-in defaults for most m-files; and extensive on-line help and self-running tutorial demos.

- C. Taswell, Top-Down and Bottom-Up
Tree Search Algorithms for Selecting Bases in Wavelet Packet Transforms, pages 345-359 in
Wavelets and Statistics (Proceedings of the Villard de Lans Conference November 1994), Lecture Notes
in Statistics Vol. 103, edited by Antoniadis A. and Oppenheim G., Springer Verlag, 1995.
Search algorithms for finding signal decompositions called near-best bases using decision criteria called non-additive information costs have recently been proposed by Taswell for selecting bases in wavelet packet transforms represented as binary trees. These methods are extended here to distinguish between top-down and bottom-up tree searches. Other new non-additive information cost functions are also proposed. In particular, the near-best basis with the non-additive cost of the Shannon entropy on probabilities is compared against the best basis with the additive cost of the Coifman-Wickerhauser entropy on energies. All wavelet packet basis decompositions are also compared with the nonorthogonal matching pursuit decomposition of Mallat and Zhang and the orthogonal matching pursuit decomposition of Pati et al. Monte Carlo experiments using a constant-bit-rate variable-distortion paradigm for lossy compression suggest that the statistical performance of top-down near-best bases with non-additive costs is superior to that of bottom-up best bases with additive costs. Top-down near-best bases provide a significant increase in computational efficiency with reductions in memory, flops, and time while nevertheless maintaining similar coding efficiency with comparable reconstruction errors measured by l^p-norms. Finally, a new compression scheme called parameterized model coding is introduced and demonstrated with results showing better compression than standard scalar quantization coding at comparable levels of distortion.
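The additive versus non-additive distinction can be made concrete. The sketch below uses one common unnormalized form of the Coifman-Wickerhauser entropy on energies and the per-node normalized Shannon entropy on probabilities; exact normalization conventions vary across the literature:

```python
import math

def cw_entropy(x):
    """Unnormalized Coifman-Wickerhauser entropy on energies: additive."""
    return -sum(v * v * math.log(v * v) for v in x if v != 0.0)

def shannon_prob_entropy(x):
    """Shannon entropy on per-node normalized probabilities: non-additive."""
    e = [v * v for v in x]
    total = sum(e)
    if total == 0.0:
        return 0.0
    return -sum((v / total) * math.log(v / total) for v in e if v > 0.0)

# Hypothetical coefficient values for two sibling nodes.
a, b = [0.6, 0.3], [0.5, 0.2]

# The CW cost of the concatenation equals the sum of the parts ...
assert math.isclose(cw_entropy(a + b), cw_entropy(a) + cw_entropy(b))
# ... but the per-node normalized entropy is not additive in this sense.
assert not math.isclose(shannon_prob_entropy(a + b),
                        shannon_prob_entropy(a) + shannon_prob_entropy(b))
print("additivity holds for the CW entropy only")
```

Additivity is what lets the classic bottom-up best-basis search sum child costs exactly; non-additive costs like the second function instead require comparison-based decisions of the kind the near-best searches make.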

- C. Taswell, Near-Best Basis Selection
Algorithms with Non-Additive Information Cost Functions, pages 13-16 in IEEE Intl Symposium
on Time-Frequency Time-Scale Analysis, Philadelphia PA, October 1994, edited by Amin, M. G., IEEE Press
Volume 94TH8007, 1994.
Search algorithms for finding signal decompositions called near-best bases using decision criteria called non-additive information costs are proposed for selecting bases in wavelet packet transforms. These new methods are compared with the best bases and additive information costs of Coifman and Wickerhauser. All near-best and best bases were also compared with the matching pursuit decomposition of Mallat and Zhang. Preliminary experiments suggest that for the application of time-frequency analysis, a wide variety of results can be obtained with the different methods, and that for the application of data compression, the near-best basis selected with non-additive costs may outperform the best basis selected with additive costs.

- C. Taswell and K. C. McGill, Algorithm
735: Wavelet Transform Algorithms for Finite-Duration Discrete-Time Signals. ACM Transactions
on Mathematical Software 20(3):398-412, September 1994.
The algorithms "split" for the wavelet transform and "merge" for the inverse wavelet transform are presented for finite-duration discrete-time signals of arbitrary length not restricted to a power of 2. Alternative matrix- and vector-filter implementations of alternative truncated, circulant, and extended versions are discussed. Matrix- and vector-filter implementations yield identical results and enhance, respectively, didactic conceptualization and computational efficiency. Truncated, circulant, and extended versions produce the signal-end effects of, respectively, errors, periodization, and redundancy in the transform coefficients. The use of any one of these three versions avoids the signal-end effects associated with the other two versions. Additional alternatives which eliminate all signal-end effects (albeit at the cost of increased algorithmic complexity) are discussed briefly.
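In miniature, the split/merge pair looks like the following. This sketch is restricted to the Haar filter and an even (but non-power-of-2) length, a far simpler setting than the general filters and truncated, circulant, and extended versions handled by Algorithm 735:

```python
import math

r2 = math.sqrt(2.0)

def split(x):
    """One analysis step (Haar): approximation and detail half-length signals."""
    n = len(x)  # assumed even in this sketch; not restricted to a power of 2
    approx = [(x[2 * i] + x[2 * i + 1]) / r2 for i in range(n // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / r2 for i in range(n // 2)]
    return approx, detail

def merge(approx, detail):
    """Inverse step: perfect reconstruction by interleaving even/odd samples."""
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / r2)
        x.append((a - d) / r2)
    return x

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]   # length 6: finite duration, not 2^k
a, d = split(x)
assert all(math.isclose(u, v) for u, v in zip(merge(a, d), x))
print("perfect reconstruction at length", len(x))
```

With the length-2 Haar filter no samples reach past the signal ends, so none of the truncation, periodization, or extension effects discussed in the abstract arise; longer filters are exactly where those three versions start to differ.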

- K. McGill and C. Taswell, Wavelet Transform Algorithms for Finite-Duration Discrete-Time Signals, pages 221-224 in Progress in Wavelet Analysis and Applications, Proceedings of the International Conference on Wavelets and Applications, Toulouse France, June 1992, edited by Meyer Y. and Roques S., Editions Frontieres, 1993.

- C. Taswell, Reconstruction of a Wavelet Transform from its Sign Change Representation. Scientific Computing and Computational Mathematics, Stanford University, Stanford CA, March 1992.
- K. McGill and C. Taswell, Length-Preserving Wavelet Transform Algorithms for Zero-Padded and Linearly-Extended Signals. Veterans Affairs Medical Center, Palo Alto CA, 1992. See also the 1993 paper published in the Proceedings of the International Conference on Wavelets and Applications held in Toulouse.

- K. McGill and C. Taswell, Wavelet Transform Algorithms for Finite-Duration Discrete-Time Signals: II. Elimination of Signal-End Effects. Veterans Affairs Medical Center, Palo Alto CA, 1991. See also McGill and Taswell 1992.
- C. Taswell and K. McGill, Wavelet Transform Algorithms for Finite-Duration Discrete-Time Signals: I. Signal-End Effects. Scientific Computing and Computational Mathematics, Stanford University, Stanford CA, Technical Report NA-91-07, November 1991. See also Taswell and McGill 1994 Algorithm 735.