4. Information Theoretic Methods
The information theoretic methods in signal processing are among the major routes towards systematic and principled selection of the structure of models, properly balancing the ability to fit the data against the complexity of the models.
Our investigations were directed both to developing new methodologies in the family of information theoretic methods and to finding new application areas where the newly derived methods could reveal their potential. Our application of data compression to full genomes achieved the best existing results for the human genome. A clearly tangible use of data compression in bioinformatics is to represent DNA sequences in a compact form, optimized for mass storage or data transmission; additionally, uncovering the most effective statistical models has profound value for understanding the patterns and interrelationships present in biological sequences. Another highlight of the results achieved is the lossless audio compression research, where we obtained the best existing combination of compression performance and encoding/decoding speed. The field of statistical inference based on information theory is expanding quickly into all areas of science and engineering, with some of the trends captured by the two workshops co-organized by our group in 2008 and 2009 [826,832] and by the notable contributions in the Shannon Lecture delivered by Jorma Rissanen.
We derived exact and implementable solutions to the problem of universal coding with the normalized maximum likelihood (NML) model for the class of Markov sources. The case of first-order models was analyzed in , while the derivations for general-order models, additionally accounting for typical implementation constraints, were presented in a book chapter  and in . These exact evaluations can be used, for example, for inference about the underlying structure of a sequence and for segmenting haplotype data .
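As a toy illustration of such exact NML computations, the stochastic complexity can be evaluated in closed form for the simplest memoryless binary class (the Markov classes treated in our work are handled analogously but with per-context counts; the function names below are ours, not from the cited papers):

```python
import math

def seq_maxlik(k, n):
    """Maximized Bernoulli likelihood of a length-n binary sequence with k ones."""
    p = k / n
    return (p ** k) * ((1 - p) ** (n - k))   # Python's 0**0 == 1 handles k in {0, n}

def nml_codelength(x):
    """Exact NML (stochastic complexity) code length in bits for the Bernoulli class."""
    n, k = len(x), sum(x)
    # Normalizer C_n: sum of maximized likelihoods over all 2^n sequences,
    # grouped by their number of ones.
    C = sum(math.comb(n, j) * seq_maxlik(j, n) for j in range(n + 1))
    return -math.log2(seq_maxlik(k, n)) + math.log2(C)
```

The second term, log2 of the normalizer, is the parametric complexity: the price paid for not knowing the parameter in advance, which the exact evaluations in our work compute without asymptotic approximation.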
In 2008-2009 we utilized universal models with memory for the segmentation of duplicated genes in genomes , showing that duplicated stretches of DNA can be inferred more accurately when approximate matching is modelled as a process with memory; further results were presented in .
One typical application is DNA compression, where, besides good compression performance, one wants to provide random access to the data stream. In [372,497] we have shown ways to ensure random access to functional elements in DNA sequences without decompressing the whole file.
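The generic idea behind random access in a compressed stream can be sketched as follows: compress fixed-size blocks independently and decode only the blocks that overlap the requested range. This is a simplified illustration of the concept, not the actual scheme of [372,497]:

```python
import zlib

def compress_blocks(seq, block_size=64):
    """Compress fixed-size blocks independently; each block decodes on its own."""
    return [zlib.compress(seq[i:i + block_size].encode())
            for i in range(0, len(seq), block_size)]

def read_range(blocks, start, length, block_size=64):
    """Decode only the blocks overlapping [start, start+length)."""
    first, last = start // block_size, (start + length - 1) // block_size
    text = "".join(zlib.decompress(blocks[b]).decode() for b in range(first, last + 1))
    offset = start - first * block_size
    return text[offset:offset + length]
```

Independent blocks cost some compression ratio (each block restarts its model), which is exactly the trade-off a scheme for functional elements must negotiate.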
We provided a general methodology for building optimal tree machines for sources with memory, using a pruning strategy on the complete tree machine derived from a training set. We then combined the resulting optimal model with our previous Tunstall coding for sources with memory .
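For the memoryless special case, the classical Tunstall construction underlying such variable-to-fixed schemes can be sketched in a few lines (our actual method handles sources with memory and prunes a trained tree machine; this toy version and its names are purely illustrative):

```python
import heapq

def tunstall(probs, num_words):
    """Build a Tunstall (variable-to-fixed) dictionary for a memoryless source:
    repeatedly expand the most probable leaf until no further expansion fits
    within num_words leaves."""
    heap = [(-p, (sym,)) for sym, p in probs.items()]  # min-heap on -prob
    heapq.heapify(heap)
    while len(heap) + len(probs) - 1 <= num_words:     # expansion adds |A|-1 leaves
        negp, word = heapq.heappop(heap)               # most probable leaf
        for sym, p in probs.items():
            heapq.heappush(heap, (negp * p, word + (sym,)))
    return sorted(word for _, word in heap)
```

The leaves form a complete, prefix-free dictionary, so every source string parses uniquely into dictionary words, each mapped to a fixed-length index.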
We provided new solutions to several fundamental problems in lossless compression: optimal interaction between the predictor and the quantizer (in forward, or frame-based, coding) [352,692], exploiting the correlation of the stereo channels [351,470], compensating the nonlinear effects (saturation) present in most audio material [469,471], and optimal segmentation into frames by an MDL criterion .
In 2008-2009 we extended the interleaved optimization-quantization results, presented for order-recursive algorithms in , to the least-squares estimation of direct-form AR parameterizations, showing at ICASSP’2008 a substantial improvement of both the lossless compression factor and the encoding/decoding speed  when compared to the existing MPEG-ALS standard. Further refinements of the lossless audio schemes addressed the bit allocation for the linear prediction coefficients in  and a careful benchmarking of the compression and speed performance of our lossless audio compression algorithms in .
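The core mechanism of such lossless predictive coders, predicting each sample with quantized linear-prediction coefficients and transmitting the integer residual, can be sketched as follows (a minimal illustration with invented names and a fixed-point format chosen here for clarity, not the actual codec):

```python
def lp_residuals(x, coeffs, q_bits=12):
    """Encoder: integer prediction with coefficients quantized to q_bits fractional bits."""
    scale = 1 << q_bits
    qc = [round(c * scale) for c in coeffs]   # quantized coefficients, sent as side info
    res = []
    for n in range(len(x)):
        acc = sum(qc[i] * x[n - 1 - i] for i in range(len(qc)) if n - 1 - i >= 0)
        res.append(x[n] - (acc >> q_bits))    # integer residual, to be entropy coded
    return qc, res

def lp_reconstruct(qc, res, q_bits=12):
    """Decoder mirrors the encoder's integer arithmetic exactly, so coding is lossless."""
    x = []
    for n, e in enumerate(res):
        acc = sum(qc[i] * x[n - 1 - i] for i in range(len(qc)) if n - 1 - i >= 0)
        x.append((acc >> q_bits) + e)
    return x
```

Because quantizing the coefficients changes the residuals, optimizing the predictor and its quantization jointly, rather than in sequence, is precisely the interaction studied in the cited work.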
The ideal rate-distortion curves can be achieved by unbounded random dictionaries, as known from Shannon’s rate-distortion theory, while more practical constructs that avoid dictionaries altogether are still very complex.
In 2008-2009 we provided a constructive method that achieves good rate-distortion performance without utilizing large dictionaries, and we analyzed its performance for the case of memoryless models in a book chapter .
We proposed a new low bit rate quantization scheme using Golay dictionaries at DCC’2009  and in the patent application . Since the complexity bottleneck is the search on each separate shell of the Golay codebooks, we derived two types of fast search algorithms, submitted as patent applications: one utilizing the cyclic-shift property of the Golay code vectors  and the other utilizing the properties of the hexacode to organize the search for the nearest Golay neighbour on a given shell in the most economical way . Further simplifications of the overall greedy optimization procedure led in [787,788] to low bit rate quantization schemes outperforming the traditional RE8 lattice schemes in SNR performance while having a lower computational cost.
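The cyclic-shift idea can be illustrated generically: when a codebook shell is closed under cyclic shifts, it suffices to store one representative per orbit and search over its shifts, reducing storage from the full shell to a few representatives. The following is a toy sketch of that principle only, not the patented fast algorithms:

```python
def nearest_cyclic(x, representatives):
    """Nearest-neighbour search in a codebook given implicitly as all cyclic
    shifts of a small set of representative vectors."""
    best, best_d = None, float("inf")
    n = len(x)
    for r in representatives:
        for s in range(n):                      # enumerate the cyclic orbit of r
            cand = r[s:] + r[:s]
            d = sum((a - b) ** 2 for a, b in zip(x, cand))
            if d < best_d:
                best, best_d = cand, d
    return best, best_d
```

The actual algorithms go further, pruning shifts by correlation properties instead of enumerating all of them, which is where the complexity savings come from.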
A very interesting connection between Kolmogorov complexity and hypothesis testing was proposed and analyzed. This topic was investigated with B. Ryabko from the Siberian State University of Telecommunication and Computer Science, Novosibirsk, Russia, in a Theoretical Computer Science article  and in a Journal of Statistical Planning and Inference article.
In 2008-2009 we proposed a number of new contributions building on the recent approach to composite hypothesis testing proposed by Rissanen, which relies on the concept of optimally distinguishable distributions (ODD). In , we derived the ODD detector for the classical linear model. The work was further extended in a paper published in IEEE Transactions on Signal Processing (see ), where we provided answers to the following previously uninvestigated problems: (i) the relationship between ODD and the widely used Generalized Likelihood Ratio Test; (ii) the connection between ODD and the Information Theoretic Criteria (ITC) applied in model selection.
For most models, the computation of the stochastic complexity is difficult, and various approximations have been proposed. We considered two approximations based on the Fisher information and demonstrated their performance on the following problems: order selection for AR and ARMA models , estimation of the number of sine-waves in various types of noise [43,472,473], and order selection for Markov models .
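For the Markov case, the flavour of such order selection criteria can be shown with a simplified two-part code: data cost equal to the empirical conditional entropy plus a BIC-style penalty of half a log per free parameter. This is a didactic simplification written by us, not the Fisher-information approximations of the cited work:

```python
import math
from collections import Counter

def mdl_markov_order(x, max_order=4):
    """Pick the Markov order minimizing data bits + (#free params)/2 * log2(n)."""
    n = len(x)
    m = len(set(x))                      # alphabet size
    best, best_cost = 0, float("inf")
    for k in range(max_order + 1):
        ctx, joint = Counter(), Counter()
        for i in range(k, n):
            c = tuple(x[i - k:i])
            ctx[c] += 1
            joint[c + (x[i],)] += 1
        # data cost: -sum log2 P(symbol | context) at the ML (empirical) probabilities
        data_bits = -sum(cnt * math.log2(cnt / ctx[key[:-1]])
                         for key, cnt in joint.items())
        model_bits = (m ** k) * (m - 1) / 2 * math.log2(n)
        cost = data_bits + model_bits
        if cost < best_cost:
            best, best_cost = k, cost
    return best
```

On a period-2 sequence the criterion selects order 1, and on a period-4 sequence it selects order 2: higher orders fit equally well but pay a larger model cost.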
Since the ITC have been derived under the assumption that the measured signals are stationary, it is not straightforward to employ them in combination with forgetting-factor least-squares algorithms (FFLSA). In [268,606], we modified the Predictive Densities Criterion and the Sequentially Normalized Maximum Likelihood criterion so as to be compatible with the FFLSA. Additionally, we provided rigorous proofs concerning the asymptotic approximations of four other modified ITC.
Based on the MDL principle, the dependence between time series was recast to reflect the predictability of each of the two time series from the other, and the method provided a measure of the synchronous oscillatory brain activity for comparing the EEG response of epileptic and control children [254,414].
Important estimation problems can be formulated in the new fields of genomics and proteomics, with the unusual demand of estimating hundreds of parameters. We derived maximum likelihood (ML) estimators and corresponding MDL model order selection methods for a new technology in proteomics and applied them to several biological problems. This led to an IEEE Transactions on Signal Processing article , two other journal papers [61,81] and a conference paper . This is joint work with the group of W. Zhang at the MD Anderson Cancer Center, USA.
Discrete regression models appear in many classes of problems in genomics: classification, prediction, and regulatory network modelling. A unified derivation of the normalized maximum likelihood (NML) model for different classes of discrete regression was introduced in a book chapter  and further refined in . The same methodology was also successfully applied to logit regression models in the book chapter .
The Kolmogorov structure function can be seen as similar to using MDL with distortion, and both concepts can be utilized for modelling gene regulatory networks. The initial idea was proposed in a conference paper .
In 2008-2009 we proposed a method for the inference of gene regulatory networks based on a universal model and the Kolmogorov structure function, published in the EURASIP Journal on Bioinformatics and Systems Biology .
We analyzed and compared different compression algorithms employed for DNA sequences, showing the superiority of our normalized maximum likelihood model in an IEEE Signal Processing Magazine article .
We proposed a specialized compression program for DNA sequences, able to compress the full human genome with the best compression results known to date: about 1.7 bits per base for most human chromosomes (except the Y chromosome, which requires only about 1.2 bits per base) .
We derived and extensively tested the best compression algorithm for annotated DNA sequences; it is based on a grammar analyzer for the textual part of the annotation, while for the sequence part it uses grammar substitutions and NML-based compression. The practical implementation of the algorithm achieves the best results known for these DNA files, compressing each character of the original file to about 1 bit. The results were published in IEEE/ACM Transactions on Computational Biology and Bioinformatics .
Most protein files carry, along with the amino acid sequence, information about the secondary structure of the protein. Jointly encoding these two strings leads to overall better compression , and also makes it possible to use the compression results for predicting the secondary structure of proteins .
We applied the general theory of positive polynomials to the case of multivariate trigonometric polynomials and connected it to convex optimization. The most significant theoretical contribution relates to polynomials that are positive on semialgebraic sets, with application to 2-D FIR filter design  in the minimax sense. The distinctive feature of the method is the expression of frequency-mask constraints as linear matrix inequalities and the formulation of the whole optimization problem in the semidefinite programming (SDP) framework, where it can be solved with freely available libraries. Fast algorithms for this family of problems were proposed in .
In 2008-2009, several other applications of this theory were pursued, resulting in papers published in IEEE Transactions on Signal Processing, IEEE Signal Processing Letters and other journals. Stability tests for multidimensional systems, investigated previously in , were extended to the Fornasini-Marchesini model , using an SDP problem that is practically equivalent to testing whether a matrix polynomial is nonsingular on a frequency domain. Among the applications related to compression, we optimized dual-tree orthogonal wavelets using the positive trigonometric polynomials approach, in  for the critically sampled case and in  for higher-density wavelets. The theoretical foundation of the design algorithms is a parameterization of the convex domain of free parameters describing the generating filters (the free parameters result after imposing perfect reconstruction and regularity constraints); this parameterization helps in structuring the search procedure used in the optimization. Another publication  relates the design of dual-tree biorthogonal wavelets to fractional delay approximation, approached via a Bounded Real Lemma for trigonometric polynomials expressed as a linear matrix inequality.
Other applications that use the theory of positive polynomials concern the design of nonuniform oversampled filter banks  composed of sections of uniform banks. An extension of the theoretical results was made in , where positive hybrid polynomials are studied; these polynomials depend on both real and complex variables, and an example application is the design of variable FIR filters implemented with a Farrow structure.
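The positivity machinery above rests on the fact that a trigonometric polynomial is nonnegative on the unit circle exactly when it is a squared magnitude |H(e^{jω})|²; the Gram-matrix (LMI) parameterization used in the SDP formulations generalizes this. The rank-one special case can be checked numerically in a few lines (a toy sketch of the underlying fact, not the SDP design code):

```python
import math

def autocorr(h):
    """r[k] = sum_n h[n]*h[n+k]; then R(w) = r[0] + 2*sum_k r[k]*cos(k*w)
    equals |H(e^{jw})|^2, hence is nonnegative for every w."""
    n = len(h)
    return [sum(h[i] * h[i + k] for i in range(n - k)) for k in range(n)]

def trig_eval(r, w):
    """Evaluate the symmetric trigonometric polynomial with coefficients r at w."""
    return r[0] + 2 * sum(r[k] * math.cos(k * w) for k in range(1, len(r)))
```

The design problems go the other way: they optimize over all nonnegative trigonometric polynomials at once, which is only tractable because the Gram-matrix description turns the cone into a semidefinite-representable set.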