CA2171293A1 - Method and apparatus for data analysis - Google Patents

Method and apparatus for data analysis

Info

Publication number
CA2171293A1
CA2171293A1 (application CA002171293A)
Authority
CA
Canada
Prior art keywords
signal
record
component
samples
factors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002171293A
Other languages
French (fr)
Inventor
Harald Aagaard Martens
Jan Otto Reberg
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
IDT Inc
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of CA2171293A1
Legal status: Abandoned

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/20Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
    • H04N19/23Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding with coding of regions that are present throughout a whole video segment, e.g. sprites, background or mosaic
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/20Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding

Abstract

A method and apparatus are disclosed for converting between samples of an input signal (104) and an encoded signal composed of a plurality of component signals, each representing a characteristic of the input signal (104) in a different domain. The input signal (104) is comprised of data samples organized into records of multiple samples, with each sample occupying a unique position within its record. Each component signal is formed as a combination of a plurality of factors, each factor being the product of a score signal and a load signal (206); the load signal (206) defines the relative variation of a subgroup of samples in different positions of a record.

Description

WO 95/08240  PCT/US94/10190

METHOD AND APPARATUS FOR DATA ANALYSIS

FIELD OF THE INVENTION
The present invention relates generally to a method and apparatus for data analysis. More specifically, the present invention relates to a method and apparatus for analyzing data and extracting and utilizing relational structures in different domains, such as the temporal, spatial, color and shape domains.

BACKGROUND OF THE INVENTION

Full motion digital image sequences in typical video applications require the processing of massive amounts of data in order to produce good quality visual images from the point of view of shape, color and motion. Data compression is often used to reduce the amount of data which must be stored and manipulated. A data compression system typically includes modelling sub-systems which are used to provide simple and efficient representations of the large amount of video data.

A number of compression systems have been developed which are well suited for video image compression.
These systems can be classified into three main groups according to their operational and modelling characteristics. First, there is the causal global modelling approach. An example of this type of model is a three dimensional (3D) wire frame model, which involves controlling position and intensity at a small set of more or less fixed wire frame grid points and interpolating between the grid points. In some applications, this approach is combined with 3D ray tracing of solid objects. This wire frame approach is capable of providing very efficient and compact data representation, since it involves a very deep model, i.e., a significant amount of effort must be invested up front to develop a comprehensive model. Accordingly, this model provides good visual appearance.
However, this approach suffers from several significant disadvantages. First, this causal type of model requires detailed a priori (advance) modelling information on 3D characterization, surface texture, lighting characterization and motion behavior. Second, this approach has very limited empirical flexibility in generic encoders: once the model has been defined, it is difficult to supplement and update it dynamically as new and unexpected images are encountered. Thus, this type of model has limited usefulness in situations requiring dynamic modelling of real time video sequences.
A second type of modelling system is an empirical, updatable compression system which involves very limited model development, but provides relatively inefficient compression. The MPEG 1 and MPEG 2 compatible systems represent such an approach. For example, in the MPEG standard, an image sequence is represented as a sparse set of still image frames, e.g., every tenth frame in a sequence, which are compressed/decompressed in terms of pixel blocks, such as 8 x 8 pixel blocks. The intermediate frames are reconstructed based on the closest decompressed frame, as modified by additional information indicating blockwise changes representing block movement and intensity change patterns. The still image compression/decompression is typically carried out using Discrete Cosine Transforms (DCT), but other approaches such as subband, wavelet or fractal still image coding may be used. Since this approach involves very little modelling depth, long range systematic redundancies in time and space are often ignored, so that essentially the same information is stored/transmitted over and over again.
A third type of modelling system is an empirical global modelling of image intensities based on factor analysis. This approach utilizes various techniques, such as principal component analysis, for approximating the intensities of a set of N images by weighted sums of F "factors."
Each such factor has a spatial parameter for each pixel and a temporal parameter for each frame. The spatial parameters of each factor are sometimes referred to as "loadings", while the temporal parameters are referred to as "scores".
One example of this type of approach is the Karhunen-Loeve expansion of an N x M matrix of image intensities (M pixels per frame, N frames) for compression and recognition of human facial images. This is discussed in detail in Kirby, M. and Sirovich, L., "Application of the Karhunen-Loeve Procedure for the Characterization of Human Faces", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 1, pp. 103-108 (1990), and R.C. Gonzalez and R.E. Woods, Digital Image Processing, Chapter 3.6 (Addison-Wesley Publ. Co., ISBN 0-201-50803-6, 1992), which are incorporated herein by reference.
In Karhunen-Loeve expansion (also referred to as eigen analysis, principal component analysis, the Hotelling transform and singular value decomposition), the product of the loadings and the scores for each consecutive factor minimizes the squared difference between the original and the reconstructed image intensities. Each of the factor loadings has a value for each pixel, and the loadings may therefore be referred to as "eigen-pictures"; the corresponding factor score has a value for each frame. It should be noted that the Karhunen-Loeve system utilizes factors in only one domain, i.e., the intensity domain, as opposed to the present invention, which utilizes factors in multiple domains, such as the intensity, address and probabilistic domains.

Such a compression system is very efficient in certain situations, such as when sets of pixels display interrelated intensity variations in fixed patterns from image to image. For example, if every time that pixels a, b, c become darker, pixels d, e, f become lighter, and vice versa, then all of pixels a, b, c, d, e, f can be effectively modelled by a single factor consisting of an eigen-picture intensity loading having positive values for pixels a, b, c and negative values for pixels d, e, f. The group of pixels would then be modelled by a single score number for each image. Other interrelated pixel patterns would give rise to additional factors.
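The single-factor situation described above can be sketched numerically. In the illustrative Python fragment below (the pixel count, scores and intensity values are invented), a data matrix built from one anti-correlated pixel pattern is recovered by power iteration, the core computation behind principal component analysis:

```python
# Illustrative single-factor case: pixels a, b, c darken whenever
# pixels d, e, f lighten. Loading and scores are invented values.
loading = [1.0, 1.0, 1.0, -1.0, -1.0, -1.0]  # one "eigen-picture"
scores = [2.0, -1.0, 0.5, 0.0]               # one score per frame

# Build the N x M intensity matrix as the outer product score * loading.
frames = [[s * p for p in loading] for s in scores]

def first_factor(X, iters=50):
    """Dominant score/loading pair by power iteration on X^T X."""
    n, m = len(X), len(X[0])
    v = [1.0] + [0.0] * (m - 1)  # start vector, not orthogonal to the factor
    for _ in range(iters):
        Xv = [sum(X[i][j] * v[j] for j in range(m)) for i in range(n)]
        w = [sum(X[i][j] * Xv[i] for i in range(n)) for j in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    t = [sum(X[i][j] * v[j] for j in range(m)) for i in range(n)]
    return t, v

t, v = first_factor(frames)

# A single factor reconstructs all six pixels in every frame.
residual = max(abs(frames[i][j] - t[i] * v[j])
               for i in range(len(frames)) for j in range(len(loading)))
print(round(residual, 9))  # 0.0
```

Because the six pixels covary in one fixed pattern, one score per frame and one loading value per pixel reproduce the whole matrix; large systematic spatial changes would break this and require many more factors.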
This type of approach results in visually disruptive errors in the reconstructed image if too few factors are used to represent the original images. Additionally, if the image-to-image variations include large systematic spatial changes, such as moving objects, then the number of eigen-pictures required for good visual representation will be correspondingly high. As a result, the compression rate deteriorates significantly. Thus, Karhunen-Loeve systems of factor modelling of image intensities cannot provide the compression required for video applications.
A fourth approach to video coding is the use of object oriented codecs. This approach focuses on identifying "natural" groups of pixels ("objects") that move and/or change intensity together in a fairly simple and easily compressible manner. More advanced versions of object oriented systems introduce a certain flexibility with respect to the shape and intensity of individual objects, e.g., affine shape transformations such as translations, scaling, rotation and shearing, or one-factor intensity changes.
However, it should be noted that the object oriented approach typically employs only single factors.
In prior art systems, motion is typically approximated by one of two methods. The first of these methods is incremental movement compensation over a short period of time, which is essentially a difference coding according to which the differences between pixels in a frame, n, and a previous frame, n-1, are transmitted as a difference image.
MPEG is one example of this type of system. This approach allows for relatively simple introduction of new features, since they are merely presented as part of the difference image. However, this approach has a significant disadvantage in that dynamic adaptation or learning is very difficult. For example, when an object is moving in an image, there is both a change in location and a change in intensity, making it very difficult to extract any systematic data changes. As a result, even the simplest form of motion requires extensive modelling.
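A minimal sketch of the difference-coding scheme just described, with invented frames and pixel values (a real MPEG codec operates on 8 x 8 blocks with motion compensation, not raw pixelwise differences):

```python
# Frame n is transmitted as the pixelwise difference from frame n-1.
# Frames and intensity values are invented for illustration.
frames = [
    [10, 10, 50, 50],  # frame 0
    [10, 12, 48, 50],  # frame 1: small intensity changes
    [12, 12, 48, 52],  # frame 2
]

# Encoder: first frame verbatim, then one difference image per frame.
stream = [frames[0]]
for prev, cur in zip(frames, frames[1:]):
    stream.append([c - p for p, c in zip(prev, cur)])

# Decoder: accumulate the differences to rebuild each frame.
decoded = [stream[0]]
for diff in stream[1:]:
    decoded.append([p + d for p, d in zip(decoded[-1], diff)])

print(decoded == frames)  # True: the difference coding is lossless
```

Note that a new feature simply appears in the difference image, which is why introducing features is easy here, while a moving object produces correlated location and intensity changes that this scheme cannot separate.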
Another approach to incremental movement compensation is texture mapping based on a common reference frame, according to which motion is computed relative to a common reference frame and pixels are moved from the common reference frame to synthesize each new frame. This is the approach typically employed by most wire frame models. The advantage of this approach is that very efficient and compact representation is possible in some cases. However, the significant downside to this approach is that the efficiency is only maintained as long as the moving objects retain their original intensity or texture. Changes in intensity and features are not easily introduced, since existing systems incorporate only one-dimensional change models, in either intensity or address.
Accordingly, it is an object of the present invention to provide a method and apparatus for data analysis which provides very efficient and compact data representation without requiring a significant amount of advance modelling information, but which is still able to utilize such information if it does exist.
It is also an object of the present invention to provide a method and apparatus for data analysis having empirical flexibility and capable of dynamic updating based on short and long range systematic redundancies in various domains in the data being analyzed.
It is a further object of the present invention to provide a method and apparatus for data analysis which utilizes factor analysis in multiple domains, such as the address and probabilistic domains, in addition to the intensity domain. Additionally, the factor analysis is performed for individual subgroups of data, e.g., for each separate spatial object.
An additional object of the present invention is to provide a method and apparatus for data analysis which uses multiple factors in several domains to model objects.
These "soft" models (address, intensity, spectral property, transparency, texture, type and time) are combined with "hard" models in order to allow for more effective learning and modelling of systematic change patterns in input data, such as a video image. Examples of such "hard" modelling are: a) conventional affine motion modelling of moving objects with respect to translation, rotation, scaling and shearing (including camera panning and zooming effects), and b) multiplicative signal correction (MSC) and extensions of this, modelling mixed multiplicative and additive intensity effects (H. Martens and T. Naes, Multivariate Calibration, pp. 345-350 (John Wiley & Sons, 1989), which is incorporated herein by reference).
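As an illustration of the MSC-style "hard" model mentioned above, the sketch below fits one additive offset and one multiplicative gain per record against a common reference by ordinary least squares; the signal values are invented:

```python
# Multiplicative signal correction sketch: each record x is modelled
# as a + b * x_ref, with offset a and gain b fitted by least squares
# against a common reference record. All values are invented.
x_ref = [1.0, 2.0, 3.0, 4.0, 5.0]  # common reference record
x = [2.5, 4.0, 5.5, 7.0, 8.5]      # observed record (= 1.0 + 1.5 * x_ref)

def msc_fit(x, ref):
    """Least-squares fit of x ~ a + b * ref."""
    n = len(ref)
    mean_r = sum(ref) / n
    mean_x = sum(x) / n
    # slope = cov(ref, x) / var(ref); intercept from the means
    b = (sum((r - mean_r) * (v - mean_x) for r, v in zip(ref, x))
         / sum((r - mean_r) ** 2 for r in ref))
    a = mean_x - b * mean_r
    return a, b

a, b = msc_fit(x, x_ref)
print(round(a, 6), round(b, 6))  # 1.0 1.5
```

The fitted (a, b) pair is the "hard" two-parameter description of this record's mixed additive and multiplicative deviation from the reference.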
A further object of the present invention is the modelling of objects in domains other than the spatial domain, e.g., the grouping of local temporal change patterns into temporal objects and the grouping of spectral patterns into spectral objects. Thus, in order to avoid the undesirable oversimplification associated with physical objects or object oriented programming, the term "holon" is used instead.
Yet another object of the present invention is the use of change data in the various domains to relate each individual frame to one or more common reference frames, and not to the preceding frame of data.
SUMMARY OF THE INVENTION
The method and apparatus for data analysis of the present invention analyze data by extracting one or more systematic data structures found in the variations in the input sequence of data being analyzed. These variations are grouped and parameterized in various domains to form a reference data structure with change models in these domains, which is used in modelling the input data being analyzed. This type of parameterization allows compression, interactivity and interpretability. Each data input is then approximated or reconstructed as a composite of one or more parameterized data structures maintained in the reference data structure. The flexibility of this approach lies in the fact that the systematic data structures, and the associated change model parameters that make up the reference data structure, can be modified by appropriate parameter changes in order to ensure the flexibility and applicability of each individual systematic data structure to a larger number of input data. The parameterization consists of "soft" multivariate factor modelling in various domains for various holons, optionally combined with "hard" causal modelling of the various domains, in addition to possible error correction residuals. A preferred embodiment of the present invention is explained with reference to the coding of image sequences such as video, in which case the most important domains are the intensity, address and probabilistic domains.

The present invention includes a method and apparatus for encoding, editing and decoding. The basic modelling or encoding method (the "IDLE" modelling method) may be combined with other known modelling methods, and several ways of using the basic modelling method may be combined and carried out on a given set of data.
The encoding portion of the present invention includes methods for balancing the parameter estimation in the various domains. Also, the modelling according to the present invention may be repeated to produce cascaded modelling and meta-modelling.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing brief description and further objects, features, and advantages of the present invention will be understood more completely from the following description of presently preferred embodiments with reference to the drawings, in which:


Figure 1 is a flow-chart illustrating the high level operation of the encoding and decoding process according to the present invention;
Figure 2 is a block diagram illustrating singular value decomposition of a data matrix into the product of a score matrix and a loading matrix plus a residual matrix;
Figure 3a is a pictorial representation of the data format for each individual pixel in a reference image;
Figure 3b is a pictorial representation of how a reference frame is derived;
Figures 4a-n are pictorial illustrations of modelling in the intensity (blush) domain, wherein Figures 4a through 4c illustrate various degrees of blushing intensity in input images;
Figures 4d through 4f illustrate the intensity change fields relative to a reference frame in the encoder;
Figures 4g and 4h illustrate a blush factor loading that summarizes the change fields of several frames in the encoder;
Figures 4i through 4k illustrate the reconstruction of the change fields in the decoder;
Figures 4l through 4n illustrate the resulting reconstruction of the actual image intensities from the change fields and reference image, in the decoder.
Figures 5a-n are pictorial illustrations of modelling in the address (smile) domain, wherein Figures 5a through 5c illustrate various degrees of smiling (movements or address changes for pixels);
Figures 5d through 5f illustrate the address change fields corresponding to various degrees of movement relative to the reference image;
Figure 5g shows the reference intensity image and Figure 5h illustrates a smile factor loading;
Figures 5i through 5k illustrate the reconstructed address change fields;
Figures 5l through 5n illustrate the resulting reconstructed smiled image intensities.
Figure 6 is a block diagram representation of an encoder according to the present invention;
Figure 7 is a block diagram representation of a model estimator portion of the encoder of Figure 6;
Figure 8 is a block diagram representation of a change field estimator of the model estimator of Figure 7;
Figure 9 is a pictorial representation of the use of forecasting and local change field estimates in the change field estimator of Figure 8;
Figure 9a is a step-wise illustration of the use of forecasting and local change field estimates;
Figure 9b is a summary illustration of the movements shown in Figure 9a;
Figure 10 is a detailed block diagram of portions of the change field estimator of Figure 8;


Figure 11 is a block diagram of the local change field estimator portion of the change field estimator shown in Figures 8 and 10;
Figure 12 is a block diagram of the interpreter portion of the encoder shown in Figure 7;
Figure 13 is a block diagram of the decoder, used both as part of the encoder in Figure 8 and as a stand-alone decoder.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The method and apparatus for data analysis of the present invention may be used as part of a data compression system, including encoding and decoding circuits, for compressing, editing and decompressing video image sequences by efficient modelling of data redundancies in various data domains of the video image sequences.
Self-Modelling of Redundancies in Various Domains and Sub-Operands

The system of the present invention models redundancies in the input data (or transformed input data). These redundancies may be found in the various domains or "operands" (such as coordinate address, intensity, and probabilistic) and in various sub-properties of these domains ("sub-operands"), such as individual coordinate directions and colors. Intensity covariations may be modelled over time and space between pixels and frames, and over time and space between color channels. Movement covariations are also modelled over time and space between pixels, and over time and space between different coordinate channels. These movement covariations typically describe the movement of an object as it moves across an image. The objects or holons need not be physical objects; rather, they represent connected structures with simplified multivariate models of systematic changes in various domains, such as spatial distortions, intensity changes, color changes, transparency changes, etc.
Other redundancies which may be modelled include probabilistic properties such as opacity, which may be modelled over time and space in the same manner as color intensities. In addition, various low-level statistical model parameters from various data domains may be modelled over time and space between pixels and between frames.
In the present invention, successive input frames are modelled as variations or deviations from a reference frame, which is chosen to include a number of characteristics or factors in the various domains. For example, factors indicative of intensity changes, movements and distortions are included in the reference frame, such that input frames can be modelled as scaled combinations of the factors included in the reference frame. The terms factors and loadings will be used interchangeably to refer to the systematic data structures which are included in the reference frame.
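The reconstruction rule implied above (frame approximated as the reference plus scaled factor loadings) can be sketched as follows; the reference intensities, loadings and per-frame scores below are invented for illustration:

```python
# Each input frame is modelled as the reference frame plus a scaled
# combination of factor loadings. All values are invented.
reference = [100, 100, 100, 100]  # reference frame intensities
loadings = [
    [10, 0, -10, 0],  # factor 1: a contrast-like pattern
    [0, 5, 0, 5],     # factor 2: a brightening pattern
]
scores = [            # per-frame weights, one row per frame
    [1.0, 0.0],
    [0.5, 2.0],
]

def reconstruct(ref, loads, frame_scores):
    """Frame = reference + sum over factors of score * loading."""
    out = list(ref)
    for s, load in zip(frame_scores, loads):
        out = [o + s * l for o, l in zip(out, load)]
    return out

frames = [reconstruct(reference, loadings, s) for s in scores]
print(frames[1])  # [105.0, 110.0, 95.0, 110.0]
```

Only the small score rows vary from frame to frame; the reference frame and loadings are shared by the whole sequence, which is where the compression comes from.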
Abstract Redundancy Modelling

The system and method of the present invention combine various model structures and estimation principles, and utilize data in several different domains, producing a model with a high level of richness that is capable of reconstructing several different image elements. The model may be expressed at various levels of depth.
The modelling features of the present invention are further enhanced by using externally established model parameters from previous images. This procedure utilizes pre-established spatial and/or temporal change patterns, which are adjusted to model a new scene. Further enhancement may be obtained by modelling redundancies in the model parameters themselves, i.e., by performing principal component analysis on the sets of model parameters. This is referred to as meta-modelling.
The present invention may employ internal data representations that are different from the input and/or output data format. For example, although the input/output format of video data may be RGB, a different color space may be used in the internal parameter estimation, storage, transmission or editing. Similarly, the coordinate address system may be Cartesian coordinates at a certain resolution (e.g., PAL format), while the internal coordinate system may be different, e.g., NTSC format or some other regular or irregular, dense or sparse coordinate system, or vice versa.
Encoder

An encoder embodying the present invention provides models to represent systematic structures in the input data stream. The novel model parameter estimation is multivariate and allows automatic self-modelling without the need for any prior model information. However, the system can still make effective use of any previously established model information if it is available. The system also provides dynamic mechanisms for updating or eliminating model components that are found to be irrelevant or unreliable. The system is also flexible in that different level models may be used at different times. For example, at times it may be advantageous to use shallow intensity based compression, while at other times it may be desirable to use deep hard models which involve extensive prior analysis.
Additionally, the present system includes automatic initialization and dynamic modification of the compression model. In addition, the present invention may be used for any combination of compression, storage, transmission, editing, and control, such as are used in video telephone, video compression, movie editing, interactive games, and medical image databases.
In addition, the present invention can use factor modelling to simplify and enhance the model parameter estimation in the encoder, by using preliminary factor models for conveying structural information between various local parts of the input data, such as between individual frames in a video sequence. This structural information is used statistically in the parameter estimation for restricting the number of possible parameter values used to model each local part, e.g., frame. This may be used in the case of movement estimation, where the estimation of the movement field for one frame is stabilized with the help of a low-dimensional factor movement model derived from other frames in the same sequence.
An encoder according to the present invention compresses large amounts of input data, such as a video data stream, by compressing the data in separate stages according to various models. In general, video sequences or frames can be represented by the frame-to-frame or interframe variations, including the variation from a blank image to the first frame as well as subsequent interframe variations.
In the present encoder, interframe variations are detected, analyzed and modelled in terms of spatial, temporal and probabilistic model parameters in order to reduce the amount of data required to represent the original frames. The obtained model parameters may then be further compressed to reduce the data stream necessary for representing the original images. This further compression may be carried out by run length coding, Huffman coding or any other statistical compression technique.
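As a sketch of the generic statistical compression mentioned above (run length coding is the simplest such technique), the fragment below encodes a parameter stream as (value, run-length) pairs; the parameter values are invented:

```python
# Run-length coding of a model-parameter stream: long runs of a
# repeated value collapse to one (value, count) pair.
def rle_encode(data):
    out = []
    for value in data:
        if out and out[-1][0] == value:
            out[-1][1] += 1  # extend the current run
        else:
            out.append([value, 1])  # start a new run
    return out

def rle_decode(pairs):
    return [v for v, count in pairs for _ in range(count)]

params = [0, 0, 0, 0, 7, 7, 3, 0, 0, 0]  # invented parameter stream
encoded = rle_encode(params)
print(encoded)                        # [[0, 4], [7, 2], [3, 1], [0, 3]]
print(rle_decode(encoded) == params)  # True
```

Model parameters tend to be sparse (many zeros after modelling), which is exactly the situation in which such a coder pays off; Huffman coding would then shorten the remaining symbols further.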
The compressed data may then be edited (e.g., as part of a user-controlled video game or movie editing system), stored (e.g., on a CD-ROM or other storage medium) or transmitted (e.g., via satellite, cable or telephone line), and then decompressed for use by a decoder.

Decoder

The present invention also provides for a decoder at a receiving or decompression location, which essentially performs the inverse function of the encoder. The decoder receives the compressed model parameters generated by the encoder and decompresses them to obtain the model parameters. The model parameters are then used to reconstruct the data stream originally input to the encoder.
Parameter Estimation in the Encoder

Extending, Widening and Deepening of a Reference Model

In the encoder of the present invention, one or more extended reference images are developed as a basis for other model parameters to represent the input data stream of image sequences or frames. Thus, all images are represented as variations or changes relative to the extended reference images. The reference images are chosen so as to be representative of a number of spatial elements found in a sequence of images. The reference image is "extended" in the sense that the size of the reference image may be extended spatially relative to an image or frame in order to accommodate and include additional elements used in modelling the image sequences. Conceptually, the reference frame in the preferred embodiment is akin to a collage or library of picture elements or components.
Thus, a long sequence of images can be represented by a simple model consisting of an extended reference image plus a few parameters modelling systematic image changes in address, intensity, distortion, transparency or other variables. When combined with individual temporal parameters for each frame, these spatial parameters define how the reference image intensities in the decoder are to be transformed into a reconstruction of that frame's intensities.
Reconstruction generally involves two stages. First, it must be determined how the reference frame intensities are to be changed spatially in terms of intensity, transparency, etc., from the reference coordinate system and representation to the output frame coordinate system and representation. Second, the reference frame intensities must be changed to the output frame intensities using image warping.
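The second stage, image warping, can be sketched in one dimension with a backward-mapping warp and linear interpolation (a deliberate simplification of the 2-D warping the text describes; the displacement field and intensities are invented):

```python
# 1-D backward-mapping warp: each output pixel i samples the
# reference signal at position i - displacement[i], with linear
# interpolation and clamping at the borders. Values are invented.
def warp_1d(ref, displacement):
    out = []
    n = len(ref)
    for i, d in enumerate(displacement):
        src = min(max(i - d, 0), n - 1)  # clamp to the reference extent
        lo = int(src)
        hi = min(lo + 1, n - 1)
        frac = src - lo
        out.append(ref[lo] * (1 - frac) + ref[hi] * frac)
    return out

ref = [0.0, 10.0, 20.0, 30.0, 40.0]   # reference intensities
# A constant displacement field shifts the signal one pixel right.
shifted = warp_1d(ref, [1, 1, 1, 1, 1])
print(shifted)  # [0.0, 0.0, 10.0, 20.0, 30.0]
```

Fractional displacements (e.g. 0.5) are handled by the interpolation step, which is what lets a single reference image be resampled into many slightly deformed output frames.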
System Operation

Figure 1 is a block diagram illustration of the high level operation of the present invention, showing both the encoding and decoding operations. In the encoder, video input data 102 is first input to the system at step 104, and changes are detected and modelled at steps 106 and 108, respectively, in order to arrive at appropriate model parameters 110.
The model parameters 110 are then compressed at step 111 in order to further reduce the amount of information required to represent the original input data. This further compression takes advantage of any systematic data redundancies present in the model parameters 110. The temporal parameters also exhibit other types of redundancies. For example, the scores or scalings, which are applied to the loadings or systematic data structures in the reference frame, may have temporal autocorrelation, and can therefore be compressed by, for example, predictive coding along the temporal dimension. Additionally, there are correlations between scores which can be exploited by bilinear modelling, followed by independent compression and transmission of the model parameters and residuals. Likewise, other redundancies, such as between-color intercorrelations or between-parameter redundancies, may be modelled.
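Predictive coding of the temporal scores along the temporal dimension, as described above, can be sketched as follows; the score values are invented and the predictor is the simplest possible one (the previous value):

```python
# Temporally autocorrelated scores are transmitted as prediction
# residuals rather than raw values. Score values are invented.
scores = [10.0, 10.5, 11.0, 11.25, 11.0]

# Encoder: first score verbatim, then residual = actual - previous.
residuals = [scores[0]] + [b - a for a, b in zip(scores, scores[1:])]

# The residuals cluster near zero, so a subsequent entropy coder can
# represent them in fewer bits than the raw scores would need.
decoded = []
for r in residuals:
    decoded.append(r if not decoded else decoded[-1] + r)

print(decoded == scores)  # True: prediction + residual is lossless
```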
These model parameters 110 are then used by a decoder according to the present invention, where the model parameters are first decompressed at step 120 and then, at step 122, used to reconstruct the original input image, thereby producing the image output or video output 124.
The decompression procedure at step 120 is essentially the inverse of the process that was performed in the compression step 111. It should be noted that the encoder and decoder according to the present invention may be part of a real-time or pseudo real-time video transmission system, such as a picture telephone. Alternatively, the encoder and decoder may be part of a storage type system, in which the encoder compresses video images or other data for storage, and retrieval and decompression by a decoder occur later.
For example, video sequences may be stored on floppy disks, tape or another portable medium. Furthermore, the system may be used in games, interactive video and virtual reality applications, in which case the temporal scores in the decoder are modified interactively. The system may also be used for database operations, such as medical imaging, where the parameters provide both compression and effective search or research applications.
Soft Modelling by Factor Analysis of Different Domains and Sub-Operands

The present invention utilizes factor analysis, which may be determined by principal component analysis or singular value decomposition, to determine the various factors which will be included in the reference frame. A video sequence which is input to the present invention may be represented as a series of frames, each frame representing the video sequence at a specific moment in time. Each frame, in turn, is composed of a number of pixels, each pixel containing the data representing the video information at a specific location in the frame.
In accordance with the present invention, input frames are decomposed into a set of scores or weightings in various domains and sub-operands which are to be applied to one or more factors contained in a reference frame. As shown in Figure 2, N input frames, each composed of M variables, e.g., pixels, may be arranged in an N by M matrix 202. In this representation, the pixels are arranged as one line for each frame, instead of the conventional two-dimensional row/column arrangement. The matrix 202 may then be decomposed or represented by temporal score factors f=1, 2, ..., F for each frame, forming an N by F matrix 204, multiplied by a spatial reference model, consisting of spatial loadings for the F factors, each with values for each of the M pixels, thus forming a loading matrix 206 of size F by M.
If the number of factors F is less than the smaller of N or M, a matrix of residuals 208 may be used to summarize the unmodelled portion of the data. This is described in further detail in H. Martens and T. Naes, Multivariate Calibration, Chapter 3 (John Wiley & Sons, 1989), which is incorporated herein by reference. This type of assumption-weak self-modelling or "soft modelling" may optionally be combined with more assumption-intensive "hard modelling" in other domains, such as movements of three-dimensional solid bodies and mixed multiplicative/additive modelling of intensities by MSC modelling and extensions of this (H. Martens and T. Naes, Multivariate Calibration, pp. 345-350 (John Wiley & Sons, 1989)), which is incorporated herein by reference.
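The decomposition of the N by M frame matrix 202 into a score matrix 204, a loading matrix 206 and a residual matrix 208 can be sketched with a truncated singular value decomposition. This is a minimal illustration only; the patent does not prescribe this exact routine, and the toy data are invented:

```python
import numpy as np

def bilinear_model(X, F):
    """Approximate an N x M frame matrix X by F bilinear factors:
    X ~ scores (N x F) @ loadings (F x M) + residual E."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :F] * s[:F]      # temporal score matrix (cf. 204)
    loadings = Vt[:F]              # spatial loading matrix (cf. 206)
    E = X - scores @ loadings      # unmodelled residual matrix (cf. 208)
    return scores, loadings, E

rng = np.random.default_rng(0)
# two underlying spatial patterns mixed over 6 "frames" of 8 "pixels"
true_scores = rng.normal(size=(6, 2))
true_loads = rng.normal(size=(2, 8))
X = true_scores @ true_loads
scores, loadings, E = bilinear_model(X, F=2)
```

With F equal to the true number of factors, the residual matrix is essentially zero; with fewer factors it summarizes the unmodelled portion of the data.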
Figure 3b illustrates how several objects from different frames of a video sequence may be extracted as factors and combined to form a reference frame. As shown in Figure 3b, frame 1 includes objects 11 and 12, a taxi and a building, respectively. Frame 4 includes the building 12 only, while frame 7 includes building 12 and car 13. An analysis of these frames in accordance with the present invention results in reference frame 20, which includes objects 11, 12 and 13. It should be noted that the holons need not be solid objects such as a house or a car. Rather, the same principles may be used to spatially represent more plastic or deformable objects such as a talking head; however, change factors in other domains may be required.
Figure 3a is a pictorial representation of the data format for each individual pixel in a reference image.
Coordinate systems other than conventional pixels may also be used in the model representation. These include pyramidal representations, polar coordinates or any irregular, sparse coordinate system.
As shown in Figure 3a, each pixel contains intensity information, which may be in the form of color information given in some color space, e.g., RGB; address information, which may be in the form of vertical (V), horizontal (H) and depth (Z) information; in addition to probabilistic, segment and other information, the number of such probabilistic values being different during the encoder parameter estimation as compared with after the parameter estimation.
Each of these information components may in turn at various stages be composed of one or more information sub-components, which may in turn be composed of one or more further information sub-components. For example, as shown in Figure 3a, the red (R) color intensity information contains several red information components R(0), R(1), R(2), .... Similarly, R(2) contains one or more information sub-components indicating parameter value, uncertainty, and other statistical information.
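The hierarchical pixel record of Figure 3a might be sketched as nested data structures. The class and key names here ('R', 'V', 'S', the list-of-components layout) are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SubComponent:
    """One information sub-component, e.g. R(2): a parameter value
    plus uncertainty and other statistical information."""
    value: float = 0.0
    uncertainty: float = 0.0

@dataclass
class PixelRecord:
    """Sketch of a per-pixel reference-image record: intensity (R, G, B),
    address (V, H, Z) and probabilistic (S, T) domains, each holding a
    list of components counted up from the zeroth factor."""
    intensity: Dict[str, List[SubComponent]] = field(default_factory=dict)
    address: Dict[str, List[SubComponent]] = field(default_factory=dict)
    probabilistic: Dict[str, List[SubComponent]] = field(default_factory=dict)

px = PixelRecord()
# R(0) is the normal red intensity; R(1) a change-factor loading with
# an attached uncertainty estimate
px.intensity["R"] = [SubComponent(0.8), SubComponent(0.1, 0.02)]
```

The nesting mirrors the text: component R contains R(0), R(1), R(2), ..., and each of those may carry further statistical sub-components.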
The choice of objects which are used to construct the reference image depends on the type of application. For example, in the case of off-line encoding of previously recorded video images, objects will be chosen to make the reference image as representative as possible for long sequences of frames. In contrast, for on-line or real-time encoding applications, such as picture telephone or video conferencing, objects will be selected such that the reference image will closely correspond to the early images in the sequence of frames. Subsequently, this initial reference frame will be improved or modified with new objects as new frame sequences are encountered and/or obsolete ones eliminated.
General temporal information ("scores") is represented by the letter u followed by a second letter indicating the type of score, e.g., uA for address scores. Occasionally, a subscript is added to indicate a specific point in time, e.g., uAn, to indicate frame n.

Spatial information is represented in a hierarchical format. The letter X is used to represent spatial information in general, and includes one or more of the following domains: I (intensity), A (address) and P (probabilistic properties). These domains represent data flow between operators and are thus referred to as operands.
Each of these domain operands may in turn contain one or more "sub-operands." For example, intensity I may contain R, G and B sub-operands to indicate the specific color representation being used. Similarly, address A may contain V (vertical), H (horizontal) and Z (depth) sub-operands to indicate the specific coordinate system being used. Also, probabilistic properties P may include sub-operands S (segment) and T (transparency). Spatial information may be represented in different formats for different pixels. In addition, the various domains and sub-operands may be reformulated or redefined at various stages of the data input, encoding, storage, transmission, decoding and output stages.
Each spatial point or pixel may thus be represented by a number of different values from different domains and sub-operands. For each sub-operand, there may be more than one parameter or "change factor." The factors are counted up from zero, with the zeroth factor representing the normal image information (default intensity and address). Thus, within X(0), I(0) represents normal picture intensity information, A(0) represents implicit coordinate address information and P(0) represents probabilistic information such as transparency, while X(f), f≠0, represents various other change model parameters or factor loadings, i.e., systematic patterns in which the pixels vary together in the different domains.
Spatial information is defined for objects according to some spatial position, which is given in upper case letters, lower case letters and subscripts. Upper case letters refer to spatial information in the reference image position; lower case letters refer to spatial information in the position of a specific image, with the specific image being indicated by a subscript. Thus, XRef refers to the spatial model in the reference position for a given sequence, while xn refers to spatial data for input frame n.
Change fields, which are unparameterized difference images, are used to indicate how to change one image into another according to the various domains. Change fields are indicated using a two-letter symbol, typically used in conjunction with a two-letter subscript. The first letter of the two-letter symbol is D or d, which indicates difference or delta, while the second letter indicates the domain or sub-operand. The subscripts are used to designate the starting and ending positions. For example, DARef,m defines how to move the pixel values given in the reference position into those of reconstructed frame m, while da_m,n defines how to move pixel values from frame m to frame n.
Widening a Reference Model to Allow a Wider Range of Systematic Expression

A reference image may be "widened" to include more types of change information than those available in the individual input images. For example, the picture intensity of a color image in an RGB system is typically represented by a single R, G and B intensity value for each of the red, green and blue color components associated with each individual pixel. However, in the case of a widened reference image, there may be several systematic ways in which groups of pixels change together. These change factor loadings may be defined for individual colors or combinations of colors, and for individual holons or groups of holons.
The "widening" of the reference image for a given video sequence may also be performed for data domains other than color intensities, such as address (coordinates) and various probabilistic properties such as transparency.
Widening of the reference image is used to refer to the parameterization of the model used for a particular scene.
By combining different model parameters in different ways in a decoder, different individual manifestations of the model may be created. These output manifestations may be statistical approximations of the individual input data (individual video frames), or they may represent entirely new, synthesized outputs, such as in virtual reality applications.
The widening parameterization of the reference frame in various domains may be obtained using a combination of "soft" factor analytic modelling, traditional statistical parameters, ad hoc residual modelling and "hard" or more causally oriented modelling.
Once an extended or widened reference image model is established, it may be dynamically modified or updated to produce a "deepened" reference image model. This "deepened" reference model includes "harder" model parameters that have a high probability of representing important and relevant image information, and a low probability of representing unimportant and irrelevant change information.
The purpose of widening in the various domains is to combine, in a compact and flexible representation, change image information from various frames in a sequence. In the case of automatic encoding, this may be accomplished by combining new change information for a given frame with the change image information from previous frames in order to extract systematic and statistically stable common structures. This is preferably accomplished by analyzing the residual components of several frames and extracting model parameter loadings. The computations may be carried out directly on the residuals or on various residual cross products. Different weighting functions can be used to ensure that precise change information is given more emphasis than imprecise change information, as described in H. Martens and T. Naes, Multivariate Calibration, pp. 314-321 (John Wiley & Sons, 1989), which is incorporated herein by reference. The extraction of new bilinear factors and other parameters may be performed on different forms of the data, all providing essentially the same result. The data format may be raw image data, residual image information after removal of previously extracted model parameters, or model parameters already extracted by some other method or at a different stage in the encoding process.
Several types of modellable structures may be extracted during the widening process. One general type is based on spatio-temporal covariations, i.e., one or more informational domains vary systematically over several pixels over several frames. A typical form of covariation is multivariate linear covariance, which can be approximated by bilinear factor modelling. This type of factor extraction is applicable to each of the different domains, e.g., address, intensity and probabilistic. Nonlinear or non-metric summaries of covariations may also form the basis for the widening operations.
Bilinear factors may, for example, be extracted using singular value decomposition, which is applied to the residual components from a number of frames. Singular value decomposition maximizes the weighted sum-of-squares used for extracting factors, but does not provide any balancing or filtering of noise, or optimizing of future compression.
More advanced estimation techniques, such as the non-linear iterative least squares power method (NIPALS), may be used. The NIPALS method is an open architecture allowing the use of additional criteria, as needed.

The NIPALS method is applied to a matrix of residual values Ea-1 (the residual matrix E in a system with a-1 factors) from several frames in order to extract an additional factor and thereby reduce the size of the residual matrix to Ea (the residual matrix in a system having a factors). The residual matrix Ea can in turn be used to find the (a+1)th factor, resulting in residual matrix Ea+1.
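One NIPALS-style power iteration for extracting a single factor from a residual matrix might look like the following. This is a sketch under the usual principal-component conventions; the patent's own iteration may add further criteria, and the toy data are invented:

```python
import numpy as np

def nipals_factor(E, n_iter=100, tol=1e-10):
    """Extract one bilinear factor from residual matrix E by a NIPALS
    power iteration; returns scores t, loadings p and the deflated
    residual E - t p' (i.e. E_a from E_{a-1})."""
    # start from the most energetic column of E
    t = E[:, np.argmax((E ** 2).sum(axis=0))].copy()
    for _ in range(n_iter):
        p = E.T @ t / (t @ t)        # project frames onto current scores
        p /= np.linalg.norm(p)       # normalize the spatial loading
        t_new = E @ p                # update the temporal scores
        if np.linalg.norm(t_new - t) < tol:
            t = t_new
            break
        t = t_new
    return t, p, E - np.outer(t, p)  # deflation removes the extracted factor

rng = np.random.default_rng(1)
E0 = np.outer(rng.normal(size=5), rng.normal(size=7))  # rank-one residual
t, p, E1 = nipals_factor(E0)
```

Calling the routine again on the deflated residual yields the next factor, mirroring the Ea-1 to Ea to Ea+1 progression in the text.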
This type of factor analysis may be applied to the different sub-operands in the various domains, and not just to the image intensities. Typically, address information for a picture frame is given in terms of cartesian coordinates which specify horizontal and vertical addresses for each pixel location. However, in a widened reference frame, the address information may include multiple variables for each single input pixel's coordinates.
The additional change factors in a widened reference image widen the range of applicability of the resulting image model, in the sense that many additional different visual qualities or patterns may be represented by different combinations of the additional change factors or "loadings."
In a preferred embodiment according to the present invention, the different loadings are combined linearly, i.e., each loading is weighted by a "score" and the weighted loadings are summed to produce an overall loading. The score values used in the weighting process may be either positive or negative and represent a scale factor applied to the loadings or change factors. This will now be illustrated for the sub-operands red intensity rn, n=1,2,...,N and vertical address vn, n=1,2,...,N. When modelling intensity changes, the scores may be used to "turn up" or "turn down" the intensity pattern of the loading. Similarly, when modelling address distortion (movements), the scores are used to represent how much or how little the loading is to be distorted.
Utilizing the above-mentioned widening principle for widening a reference frame, an individual input frame's redness intensity rn, for example, may be modelled as a linear combination or summation of redness change factor loadings (note that the "hat" symbol here is used in its conventional statistical meaning of "reconstructed" or "estimated"):

    rnhat = RRef(0)*uR(0)n + RRef(1)*uR(1)n + RRef(2)*uR(2)n + ...     (1)

which may also be summarized over factors f=0,1,2,... using matrix notation as:

    rnhat = RRef*URn

where RRef = {RRef(0), RRef(1), RRef(2), ...} represents the spatial change factor loadings for redness in the extended reference model (for this holon), and URn = {uR(0)n, uR(1)n, uR(2)n, ...} represents the temporal redness scores which are applied to the reference model to produce an estimate of frame n's redness.
Intensity change factors of this type are herein called "blush factors" because they may be used to model how a face blushes. However, it will be appreciated that these factors may be used to model many other types of signals and phenomena, including those not associated with video.
The use of these so-called blush factors is illustrated in Figures 4a through 4n. Figures 4a, 4b and 4c show the intensity images rn, n=1,2,3 of a red color channel for a person blushing moderately (4a), blushing intensely (4b) and blushing lightly (4c), respectively. The first frame r1 is here defined as the reference frame. Accordingly, RRef(0) = r1.
Figures 4d through 4f show the corresponding intensity change fields DRRef,n, n=1,2,3. In this non-moving illustration, the change field for a frame equals the difference between the frame and the reference image, or drn = rn - RRef(0). The change field is also shown as a curve for a single line taken through the blushing cheeks of Figures 4a through 4c. As shown in Figures 4d through 4f, the lightly blushing (pale) face of Figure 4c has the lowest intensity change field values (Figure 4f); the moderately blushing face of Figure 4a has no intensity change, since it actually is the reference image (Figure 4d); while the intensely blushing face of Figure 4b has the highest intensity change field values (Figure 4e).
The statistical processing of the present invention will extract a set of generalized blush characteristics, or change factor loadings, to be used in different frames to model blushing states of varying intensity. Figures 4a through 4f indicate a single blush phenomenon with respect to the reference image. The principal component analysis of the change fields DRRef,n, n=1,2,3 may give a good description of this using one single blush factor, whose loading RRef(1) is shown in Figure 4h with the respective scores (0, 1.0 and -0.5) given below. The modelling of the red intensity during decoding in this case is achieved by applying these different scores to the main blush factor loading RRef(1) to produce different change fields DRRef,n (Figures 4i through 4k) and adding that to the reference image redness (Figure 4g) to produce the reconstructed redness images (Figures 4l through 4n):
    rnhat = RRef(0) + DRRef,n

where the redness change field is:

    DRRef,n = RRef(1)*uR(1)n

As indicated by the numbers below Figures 4d through 4f, the score value uR(1)n in this case is 0 for the reference image (4a) itself, since here r1hat = RRef(0); is positive, e.g., 1.0, for the second frame (4b) with more intense blushing; and is negative, e.g., -0.5, for the pale face in the third frame (4c). It should be noted that the negative score for the third frame, Figure 4c, transforms the positive blush loading of Figure 4h into a negative change field DRRef,3 for the third image, which is paler than the reference frame.
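The one-factor blush reconstruction above can be illustrated numerically with the scores 0, 1.0 and -0.5 from the text. The reference and loading values for the scan line are invented for illustration:

```python
import numpy as np

# Reference redness RRef(0) and one blush loading RRef(1) for a single
# scan line through the cheeks (values illustrative, cf. Figures 4g, 4h)
R_ref0 = np.array([0.5, 0.6, 0.7, 0.6, 0.5])   # moderately blushing reference
R_ref1 = np.array([0.0, 0.1, 0.2, 0.1, 0.0])   # blush change-factor loading

def reconstruct_redness(score):
    """rnhat = RRef(0) + DRRef,n  with  DRRef,n = RRef(1) * uR(1)n."""
    return R_ref0 + R_ref1 * score

frame1 = reconstruct_redness(0.0)    # the reference frame itself (Fig. 4a)
frame2 = reconstruct_redness(1.0)    # intense blush (Fig. 4b)
frame3 = reconstruct_redness(-0.5)   # pale face (Fig. 4c)
```

A whole range of blushing states is thus represented by one stored loading and one score per frame.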
If more than one phenomenon contributed to the redness change in the images of this sequence, then the model would require more than one change factor. For example, if the general illumination in the room varied, independent of the person blushing and paling, this situation may be modelled using a two-factor solution, where the second factor involves applying a score uR(0)n to the reference frame itself:

    rnhat = RRef(0) + DRRef,n

where the blush change field is:

    DRRef,n = RRef(0)*uR(0)n + RRef(1)*uR(1)n

which may be generalized for different colors and different factors as:

    DIRef,n = IRef*uIn     (2)

Thus, Figures 4a through 4n show how the effect of the blush factor loading of Figure 4h (contained in IRef) can be increased or decreased (appropriately scaled by scores uIn) to produce various blush change fields such as are shown in Figures 4d through 4f.

In this manner, significant amounts of intensity information may be compressed and represented by a single loading (Figure 4h) and a series of less data-intensive scores.
Changes in transparency T and changes in probabilistic properties P may be modelled in a similar manner. In the case of probabilistic modelling, bilinear modelling is used in the preferred embodiment of the present invention. The spatial loadings P(f), f=0,1,2,... and corresponding scores uP(f)n, f=1,2,... together constitute the probabilistic change factors.
Similar to the blush factors used to represent intensity information, address information may also be modelled by a linear combination of change factor loadings.
For example, a frame's vertical address information vn may be modelled in terms of a linear combination or summation of change factor loadings:

    DVn = VRef(0)*uV(0)n + VRef(1)*uV(1)n + VRef(2)*uV(2)n + ...     (3)

which may also be summarized over vertical movement factors f=0,1,2,... in matrix notation as:

    DVn = VRef*UVn

where VRef = {VRef(0), VRef(1), VRef(2), ...} is the set of vertical spatial address change factor loadings in the extended reference model (for this holon), and UVn = {uV(0)n, uV(1)n, uV(2)n, ...} represents the temporal vertical movement scores which are applied to the reference model in order to produce an estimate of frame n's vertical coordinates for the various pixels in the frame. Address change factors of this type are referred to as "smile" factors, because they may be used to model how a face smiles.
Similar to the blush factors, here the vertical address change field needed to move the contents of the reference frame to approximate an input frame is referred to as DVRef,n. It may be modelled as a sum of change contributions from address change factor loadings (VRef) scaled by appropriate scores (un). The address change factors are used to model motion and distortion of objects. The address change factors used to model distortion of objects are referred to as "smile factors" because they may be used to model generalized, "soft" movements, e.g., how a face smiles. However, it will be appreciated that smile factors can equally well model any signal or phenomenon, including those not associated with video, which may be modelled as a complex of samples which may be distorted while still retaining a common fundamental property.
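A one-factor smile model can be illustrated in the same way as the blush model: a single vertical loading is scaled by a per-frame score to displace the mouth pixels. The reference addresses and loading values are invented for illustration (cf. Figure 5h):

```python
import numpy as np

# Vertical reference addresses of five pixels along the mouth, and a smile
# loading that pulls the middle down and the corners up
V_ref = np.array([10.0, 10.0, 10.0, 10.0, 10.0])
V_smile_load = np.array([1.0, 0.3, -1.0, 0.3, 1.0])

def smiled_coordinates(score):
    """DVRef,n = VRef(1) * uV(1)n; the warped vertical addresses are the
    reference addresses plus the scaled displacement field."""
    return V_ref + V_smile_load * score

mouth_smile = smiled_coordinates(1.0)    # corners up, middle down
mouth_frown = smiled_coordinates(-1.0)   # the opposite movement, one score flip
```

A positive score yields a smile and a negative score a frown from the very same stored loading, exactly as with the blush scores.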
The use of smile factors in accordance with the present invention is illustrated in Figures 5a through 5n. Figures 5a through 5c show a face exhibiting varying degrees of smiling. Figure 5a shows a moderate smile; Figure 5b shows an intense smile; and Figure 5c shows a negative smile or frown. The moderately smiling face of Figure 5a may be used as part of the reference frame (Figure 5g) for illustration. The address change fields DVRef,n corresponding to vertical movements of the mouth with respect to the reference image, as shown in Figures 5a through 5c, are shown in Figures 5d through 5f. The concept of "reference position" (corresponding to the reference image of Figure 5g) is here illustrated for Figures 5d, 5e and 5f, in that numerical values of each pel in an address change field DVRef,n are given at pixel coordinates in the reference image of Figure 5g, not at the coordinates in frames n=1,2,3 (Figures 5a through 5c). Thus, the vertical change fields (movements) necessary to transform the reference image (Figure 5g) into each of the other frames of Figures 5a through 5c are shown as vertical arrows at three points along the mouth at the position where the mouth is found in the reference image (Figure 5g). The base of each arrow is the location of the mouth in the reference image (Figure 5g), while the tips of the arrows are located at the corresponding points on the mouth in the other frames of Figures 5a through 5c. The full change fields are also given quantitatively alongside Figures 5d through 5f as continuous curves for the single line through the mouth in the reference image (Figure 5g).
Since the first frame of Figure 5a in this illustration functions both as the reference image (Figure 5g) and as an individual frame, the vertical smile change field DVRef,1 for frame 1 (Figure 5d) contains all zeros. In Figure 5b, the middle of the mouth moves downward and the ends of the mouth move upward. Thus, the smile field DVRef,2 is negative in the middle and positive at either side of the mouth in its reference position. The frown of Figure 5c illustrates the opposite type of pattern. These change fields thus contain only one type of main movement and may thus be modelled using only one smile factor, and this may be extracted by principal component analysis of the change fields in Figures 5d through 5f. The smile factor scores uVn are, in this illustration, zero for the reference image itself (Figure 5a), positive for frame 2 (Figure 5b) and negative for frame 3 (Figure 5c), when the common vertical smile loading is as shown in Figure 5h.
If the head shown in Figures 5a through 5c were also moving, i.e., nodding, independently of the smile action, then a more involved movement model would be needed to accurately model all the various movements. In the simplest case, one or more additional smile factors could be used to model the head movements, in much the same manner as multi-factor blush modelling. Each smile factor would then have spatial loadings, with a variety of different movements being simply modelled by various combinations of the few factor scores. Spatial rotation of image objects in two or three dimensions would require factor loadings in more coordinate dimensions, or alternatively require various coordinate dimensions to share some factor loadings. For example, if the person in Figures 5a through 5n tilted their head 45 degrees sideways, the smile movements modelled in Figures 5a through 5n as purely vertical movements would no longer be purely vertical. Rather, an equally strong horizontal component of movement would also be required. The varying smile of the mouth would still be a one-factor movement, but now with both a vertical and a horizontal component. Both a vertical and a horizontal loading may be used, in this case with equal scores. Alternatively, both the vertical and horizontal movement may share the same loading (Figure 5h), but again with different scores depending on the angle of the tilting head.

For better control and simpler decoding and compression, some movements may instead be modelled by a hard movement model, referred to as "nod" factors. The nod factors do not utilize explicit loadings, but rather refer to affine transformations of solid bodies, including camera zoom and movements. Smile and nod movements may then be combined in a variety of ways. In a preferred embodiment according to the present invention, a cascade of movements is created according to some connectivity criteria. For example, minor movements and movement of pliable, non-solid bodies, such as a smiling mouth, may be modelled using smile factors (soft modelling), while major movements and movement of solid bodies, such as a head, may be modelled using nod factors (hard modelling). In the case of a talking head, the soft models are first applied to modify the initial vertical reference addresses VRef to the "smiled" coordinates in the reference position, VSmiled,Ref. The same procedure is carried out for the horizontal, and optionally the depth, coordinates, forming ASmiled,Ref. These smiled coordinates ASmiled,Ref are then modified by affine transformations, i.e., rotation, scaling, shearing, etc., to produce the smiled and nodded coordinate values, still given in the reference position, An,Ref. The final address change field DARef,n is then calculated as DARef,n = An,Ref - ARef.
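The smile-then-nod cascade described above might be sketched as follows, with a soft displacement applied first and an affine (nod) transform then applied to the smiled coordinates. The function signatures and numbers are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def apply_smile(coords, load, score):
    """Soft model: displace reference coordinates by a scaled smile loading."""
    return coords + load * score

def apply_nod(coords, angle_deg, scale=1.0, shift=(0.0, 0.0)):
    """Hard model: affine transform (rotation, scaling, translation)
    of the already-smiled coordinates."""
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return scale * coords @ R.T + np.asarray(shift)

# three (H, V) reference addresses along the mouth
A_ref = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
smile_load = np.array([[0.0, 0.5], [0.0, -0.5], [0.0, 0.5]])

A_smiled = apply_smile(A_ref, smile_load, score=1.0)   # soft stage first
A_smiled_nodded = apply_nod(A_smiled, angle_deg=90.0)  # then the hard stage
DA = A_smiled_nodded - A_ref   # final address change field DARef,n
```

Ordering matters: the nod rotates coordinates that have already been smiled, which is why the cascade is built according to connectivity rather than applying both stages to the raw reference addresses.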

ENCODING
Generally, the encoding process includes establishing the spatial model parameters XRef for one or more reference images or models and then estimating the temporal scores Un and residuals En for each frame. The encoding process may be fully manual, fully automatic or a mix of manual and automatic encoding. The encoding process is carried out for intensity changes, movement changes, distortions and probabilistic statistical changes.

Manual Encoding

In one embodiment according to the present invention, video sequences may be modelled manually. In the case of manual modelling, an operator controls the modelling and interprets the sequence of the input video data. Manual modelling may be performed using any of a number of available drawing tools, such as "Corel Draw" or "Aldus Photoshop", or other specialized software.
Since humans are fairly good at intuitively discriminating between smile, blush and segmenting, the encoding process becomes mainly a matter of conveying this information to a computer for subsequent use, rather than having a computerized process develop these complicated relationships.
If there are reasons for using separate models, such as if the sequence switches between different clips, the clip boundaries or cuts may be determined by inspection of the sequence. Related clips are grouped together into a scene. The different scenes can then be modelled separately.
For a given scene, if there are regions which exhibit correlated changes in position or intensity, these regions are isolated as holons by the human operator. These regions may correspond to objects in the sequence. In addition, other phenomena such as shadows or reflections may be chosen as holons. In the case of a complex object, it may be advantageous to divide the object into several holons. For instance, instead of modelling an entire walking person as one holon, it may be easier to model each portion, e.g., limb, separately.
For each holon, the frame where the holon is best represented spatially is found by inspection. This is referred to as the reference frame. A good representation means that the holon is not occluded by or affected by shadows from other holons, is not significantly affected by motion blur, and is as representative of as much of the sequence as possible. If a good representation cannot be found in any specific frame in the sequence, the holon representation may be synthesized by assembling good representation portions from several different original frames, or by retouching. In the case of a synthesized holon, the reference frame is made up of only the synthesized holon.
Synthesized holons are quite adequate for partially transparent holons such as shadows, where a smooth dark image is often sufficient. This chosen or synthetic holon will be included as part of the reference image. The intensity images of the holons from the respective frames are extracted and assembled into one common reference image.
Each holon must be assigned an arbitrary, but unique, holon number. A segmentation image the same size as the reference image is then formed, the segmentation image containing all the holons; however, the pixel intensity for each pixel within a holon is replaced by the specific holon number. This image is referred to as the segmentation or S field.
Holon depth information is obtained by judging occlusions, perspective or any other depth clue, in order to arrange the holons according to depth. If there are several possible choices of depth ordering, e.g., if two holons in the sequence never occlude each other and appear to have the same depth, an arbitrary order is chosen. If no single depth ordering is possible, because the order changes during the sequence, e.g., holon A occludes holon B at one time while holon B occludes holon A at another time, one of the possible depth orderings is chosen arbitrarily. This depth ordering is then converted into a depth scale in such a way that zero corresponds to something infinitely far away and full scale corresponds to essentially zero depth, i.e., nearest to the camera. The depth scale may conveniently be specified or expressed using the intensity scale available in the drawing tool, such that infinitely far away objects are assigned an intensity of zero, and very close objects are assigned full-scale intensity. Based on this depth ordering, an image is then formed having the same size as the reference image; however, each pixel has an intensity value functioning as a depth value. This image is referred to as the Z field.
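The conversion of a depth ordering into Z-field intensities might be sketched like this. The particular mapping is an illustrative assumption; the text only requires that zero corresponds to infinitely far away and full scale to essentially zero depth:

```python
def depth_order_to_z(order, n_levels, full_scale=255):
    """Map a holon's back-to-front rank (0 = farthest) to a Z-field
    intensity; the value 0 itself is reserved for 'infinitely far away'
    and the nearest holon approaches full scale."""
    return full_scale * (order + 1) // (n_levels + 1)

# three holons ordered back-to-front: background, building, taxi
z_values = [depth_order_to_z(k, 3) for k in range(3)]
```

Each holon's pixels in the Z field would then be filled with its single z value, giving the decoder a consistent occlusion order.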
Manual modelling or encoding also includes determining holon opacity information. Opacity is determined by first forming an image that has maximum intensity value for completely opaque pixels, zeros for entirely transparent pixels, and intermediate values for the remaining pixels.
Typically, most objects will have the maximum value (maximum opacity) for the interior portion and a narrow zone with intermediate values at the edges to make them blend well with the background. On the other hand, shadows and reflections will have values at approximately half the maximum. This image, which indicates opacity, is referred to as the Prob field.
Holon movement information is obtained by first determining the vertical and horizontal displacement between the reference image and the reference frame for each holon. This is carried out for selected, easily recognizable pixels of the holons. These displacements are then scaled so that no movement corresponds to more than half of the maximum intensity scale of the drawing tool. Darker intensity values correspond to vertically upward or horizontally leftward movements. Similarly, lighter intensity values correspond to the opposite directions, so that maximum movements in both directions do not exceed the maximum intensity value of the drawing tool. Two new images, one for the vertical and one for the horizontal dimension, collectively form the "first smile load", which is the same size as the reference image. The scaled displacements are then placed at the corresponding addresses in the first smile load, and the displacements for the remaining pixels are formed using manual or automatic interpolation.
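The displacement-to-intensity convention described above can be sketched as follows, assuming an 8-bit drawing tool. The parameter max_disp (the largest representable displacement) is an assumed encoder setting, not stated in the specification.

```python
# Sketch of the smile-load convention: zero displacement maps to mid-scale,
# upward/leftward motion to darker values, downward/rightward to lighter
# values, so the full intensity range is never exceeded.

FULL_SCALE = 255
MID = FULL_SCALE // 2  # 127: the "no movement" level

def displacement_to_intensity(d, max_disp):
    """Scale a displacement d in [-max_disp, +max_disp] pixels to [0, 254]."""
    d = max(-max_disp, min(max_disp, d))  # clamp out-of-range displacements
    return round(MID + d * MID / max_disp)

def intensity_to_displacement(v, max_disp):
    """Inverse mapping, as a decoder would apply it."""
    return (v - MID) * max_disp / MID
```

One image of this kind is kept per dimension (vertical and horizontal), matching the two-image first smile load.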
The first smile load should preferably be verified by preparing all of the above-described fields for use in the decoder, along with a table of score values (this table will be referred to as the "Time Series"). Next, the scores for the first smile factor are set to 1 for all holons which form part of a test frame, which is then decoded. The resulting decoded frame should provide good reproduction of the holons in their respective reference frame (except for blush effects, which have not yet been addressed). If this is not the case, the cause of each particular error can easily be attributed to an incorrect smile score or load, which may be adjusted, and the process repeated using the new values. This process correctly establishes how to move holons from the reference image position to the reference frame position.
Next, the movement of holons between frames must be estimated. For each holon, a frame is selected where the holon has moved in an easily detectable manner relative to the decoded approximation of the reference frame, Im, which is referred to as an intermediate frame. The same procedure for determining the first smile load is carried out, except that now movement is measured from the decoded reference frame to the selected new frame, and the resulting output is referred to as the "second smile load". These displacements are positioned in the appropriate locations in the reference image, and the remaining values obtained by interpolation.
The smile scores for both the first and second smile loads for all holons are set to 1, and then the selected frame is decoded. The result should be a good reproduction of the selected frame (except for blush effects, which have not yet been addressed).

The movement for the remaining frames in the sequence is obtained by merely changing the smile scores using trial and error based on the already established smile loads. Whenever a sufficiently good reproduction of the movement cannot be found using the already established smile factors only, a new factor must be introduced according to the method outlined above. The displacement for selected features (pixels) between each decoded intermediate frame Im and the corresponding frame in the original sequence is measured and the result stored in the reference image position. The remaining pixels are obtained by interpolation, and the final result verified and any necessary correction performed.
When the above process for calculating smile factors has produced sufficiently accurate movement reproduction, blush factors may then be introduced. This may be performed automatically by working through each frame in the sequence, decoding each frame using the established smile factors, and calculating the difference between each decoded frame and the corresponding frame in the original sequence. This difference is then moved back to the reference position and stored. Singular value decomposition may then be performed on the differences represented in the reference position, in order to produce the desired blush loads and scores.
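The SVD step above can be sketched as follows. This is an illustrative sketch only: the array shapes (one residual row per frame, residuals already moved to the reference position) and variable names are assumptions.

```python
# Sketch of blush-factor extraction: motion-compensated residuals, moved
# back to the reference position, are stacked as rows of a matrix and
# decomposed by SVD. The leading right singular vectors act as spatial
# blush loads; the scaled left singular vectors act as per-frame scores.

import numpy as np

def blush_factors(residuals_ref, n_factors):
    """residuals_ref: (n_frames, n_pixels) residuals in reference position.
    Returns (scores, loads) with shapes (n_frames, n_factors) and
    (n_factors, n_pixels)."""
    U, s, Vt = np.linalg.svd(residuals_ref, full_matrices=False)
    scores = U[:, :n_factors] * s[:n_factors]  # one score row per frame
    loads = Vt[:n_factors]                     # one spatial load per factor
    return scores, loads
```

With enough factors the stacked residuals are reproduced exactly as scores @ loads; truncating to a few factors gives the compact blush model.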

Addition of nod factors
Nod and smile factors may be combined in several ways, two of which will be discussed. In the first method, movement can be described as one contribution from the smile factors and one contribution from the nod factors, with the two contributions being added together. In the second method, the pixel coordinates can first be smiled and then nodded.
In the first method, i.e., additive nod and smile factors, the decoding process for one pixel in the reference image adds together the contributions from the different smile factors, and calculates the displacement due to the nod factors using the original position in the reference image. These two contributions are then added to produce the final pixel movement.
In the second method, i.e., cascaded nod and smile factors, the decoding process first adds together the contributions from the different smile factors, and then applies the nod factors to the already smiled pixel coordinates.
The first method is somewhat simpler to implement, while the second method may produce a model which corresponds more closely to the true physical interpretation of sequences where nod factors correspond to large movements of entire objects and smile factors correspond to small plastic deformations of large objects.
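The contrast between the two combination methods can be sketched for a single pixel. The helpers smile_disp (summed smile-factor displacement at a reference position) and nod (an affine map) are assumed for illustration and do not come from the specification.

```python
# Minimal sketch contrasting additive and cascaded nod+smile decoding
# for one pixel coordinate p_ref in the reference image.

import numpy as np

def decode_additive(p_ref, smile_disp, nod):
    # Additive: both contributions are computed from the original
    # reference position and then summed.
    return p_ref + smile_disp(p_ref) + (nod(p_ref) - p_ref)

def decode_cascaded(p_ref, smile_disp, nod):
    # Cascaded: the pixel is first "smiled", then "nodded".
    return nod(p_ref + smile_disp(p_ref))
```

For a pure translation the two methods coincide; as soon as the nod involves scaling or rotation, nodding the already smiled coordinate gives a different (often more physically plausible) result than adding the two contributions.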
The process of extracting smile factors can be extended to also include nod factors, which are used to represent movements of solid objects (affine transformations). Essentially, nod factors are special situations of smile factors. Specifically, each time a new smile factor has been calculated for a holon, it can be approximated by a nod factor. This approximation will be sufficiently accurate if the smile loads possess characteristics such that, for the vertical and horizontal dimensions, movement of a pixel can be considered as a function of its vertical and horizontal position, which can be fitted to a specific plane through 3-dimensional space. Nod factors essentially correspond to the movement of rigid objects. The approximation will be less accurate when the smile factors correspond instead to plastic deformations of a holon.
To establish the nod loads, the smile loads are projected onto three "nod loads" of the same size as the extended reference image. The first nod load is an image where each pixel value is set to the vertical address of that pixel. The second nod load is an image where each pixel value is set to the horizontal address of that pixel.
Finally, the third nod load is an image consisting of all ones.
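The projection onto the three nod loads amounts to a least-squares plane fit, which can be sketched as follows. Array shapes and function names are illustrative assumptions.

```python
# Sketch of approximating a smile load by a nod factor: one displacement
# component of the smile load is regressed onto the three "nod loads" --
# the vertical address image, the horizontal address image, and an
# all-ones image -- i.e. fitted to a plane d(v, h) = a*v + b*h + c.

import numpy as np

def fit_nod_load(smile_load):
    """smile_load: (rows, cols) displacement image for one dimension.
    Returns the plane coefficients (a, b, c) and the fitted image."""
    rows, cols = smile_load.shape
    v, h = np.mgrid[0:rows, 0:cols]
    basis = np.stack([v.ravel(), h.ravel(), np.ones(rows * cols)], axis=1)
    coef, *_ = np.linalg.lstsq(basis, smile_load.ravel(), rcond=None)
    fitted = (basis @ coef).reshape(rows, cols)
    return coef, fitted
```

If the residual between the smile load and the fitted plane is small, the factor behaves as rigid (affine) motion and the nod approximation is adequate; a large residual signals plastic deformation, where the smile load must be kept.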
In the case of a nod factor added to a smile factor, i.e., additive nod, the above procedure for extracting new smile factors may be utilized. However, for the case of a cascaded nod factor, i.e., encoding using first a nod factor and then a smile factor, one additional step must be performed in the encoding process. Whenever a new smile load is estimated based on an intermediate frame Im which has been produced using nod factors, not only must the position in Im of the displacement be mapped back to the reference image, but the actual displacements must also be mapped back using the inverse of the nod factor. In the case of cascaded nod and smile, in the decoder, each frame is first "smiled" and then "nodded."

DEEPENING NOD

In the general case of one nod factor per holon, the nod factors transmitted to the decoder consist of one set of nod parameters for each holon for each frame. However, there may be strong correlations between the nod parameters between holons and between frames. The correlations between holons may be due to the fact that the holons represent individual parts of a larger object that moves in a fairly coordinated manner, which is, however, not sufficiently coordinated to be considered a holon itself. In addition, when the holons correspond to physical objects, there may also be correlations between frames due to physical objects exhibiting fairly linear movement. When objects move in one direction, they often continue moving at approximately the same speed in a similar direction over the course of the next few frames. Based on these observations, nod factors may be deepened.
In the case of manual encoding, the operator can usually group the holons so that there is a common relationship among the holons of each group. This grouping is referred to as a superholon and the individual holons within such a group are referred to as subholons. This type of grouping may be repeated, whereby several superholons may themselves be subholons of a higher superholon. Both subholons and superholons retain all their features as holons.

In the case of automatic encoding, similar groupings can be established through cluster analysis of the nod transforms.
The nod factors for the subholons of one superholon may be separated into two components, the first component used to describe movements of the superholon and the second component used to describe movement of that individual subholon relative to the superholon.
The deepening of the nod factors between frames includes determining relationships between frames for nod factors belonging to the same holon, be it a standard holon, superholon or subholon. This is accomplished by dividing the nod factors into a static part, which defines a starting position for the holon; a trajectory part, which defines a trajectory the holon may follow; and a dynamic part, which describes the location along the trajectory for a specific holon in a given frame. Both the static and trajectory parts may be defined according to the reference image or to the nod factors of superholons.
The deepened nod factors represent sets of affine transforms and may be represented as a set of matrices; see William M. Newman and Robert F. Sproull, Principles of Interactive Computer Graphics, page 57 (McGraw-Hill 1984), which is incorporated herein by reference. The static part corresponds to one fixed matrix. The trajectory and dynamic parts correspond to a parameterized matrix, the matrix being the trajectory part and the parameter being the dynamic part; see Newman & Sproull, page 58, which is incorporated herein by reference. These transforms may be concatenated together with respect to the relationships between the static, trajectory and dynamic parts. The transforms may also be concatenated together with respect to the combinations of several behaviors along a trajectory, as well as with respect to the relationships between superholons and subholons, see Newman & Sproull, page 58, which is incorporated herein by reference.
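The static/trajectory/dynamic decomposition can be sketched with 3x3 homogeneous 2D affine matrices in the style of Newman & Sproull. The specific matrices and numbers below are illustrative assumptions.

```python
# Sketch: the static part is one fixed matrix (the starting position);
# the trajectory part is a matrix parameterized by the dynamic score t
# (position along the trajectory); transforms are concatenated by
# matrix multiplication.

import numpy as np

def translation(dx, dy):
    """Homogeneous 2D translation matrix."""
    return np.array([[1, 0, dx], [0, 1, dy], [0, 0, 1]], dtype=float)

def nod_transform(static, trajectory, t):
    """Concatenate the fixed static matrix with the trajectory matrix
    evaluated at dynamic parameter t."""
    return trajectory(t) @ static

static = translation(10, 5)                   # starting position of holon
trajectory = lambda t: translation(3 * t, 0)  # straight horizontal path
frame_transform = nod_transform(static, trajectory, t=2.0)
```

Superholon/subholon relations follow the same pattern: the subholon's transform is concatenated onto its superholon's transform, so a group of holons can share one trajectory while each keeps its own relative motion.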
The above operations may be readily performed by a human operator utilizing: a method for specifying full affine transform matrices without parameters; a method for storing transform matrices with sufficient room for one parameter each specifying translation, scaling, rotation or shear; a method for specifying which transform matrices should be concatenated together in order to form new transform matrices; and a method for specifying which transform (which may be a result of concatenating several transforms) should be applied to each holon.

Automatic Encoding
In the case of automatic or semi-automatic encoding, the encoding process may be iterative, increasing the efficiency of the encoding with each iteration. An important aspect of automatic encoding is achieving the correct balance between intensity changes and address changes, because intensity changes may be modelled inefficiently as address changes and vice versa. Thus, in the modelling of the domains it is critical that the respective scores and residuals be estimated by a process which avoids inefficient modelling of intensity changes as address changes and vice versa. This is accomplished by building the sequence model in such a way that blush modelling is introduced only when necessary, and making sure that the model parameters have applicability to multiple frames. A preferred embodiment involving full sequence modelling, and an alternative embodiment involving simplified sequence modelling, will be described herein. In the present description, the individual building blocks of the encoder will first be presented at a fairly high level, and then the operation and control of these building blocks will be described in more detail.
Automatic Encoder Overview
Automatic or semiautomatic encoding according to the present invention in the case of video sequence data will be described in detail with reference to Figures 6-13.
Figure 6 is a block diagram of an encoder according to the present invention. Figure 7 is a block diagram of a model estimator portion of the encoder of Figure 6. Figures 8-10 show details and principles of a preferred embodiment of the ChangeFieldEstimator part of the ModelEstimator.
Figure 11 shows details of the LocalChangeFieldEstimator part of the ChangeFieldEstimator.

Figure 12 outlines the Interpreter of the ModelEstimator.
Figure 13 outlines the separate Decoder.

High Level Encoder Operation
The input data (610), which may be stored on a digital storage medium, consists of the video sequence xseq with input images for frames n=1,2,...,nFrames. This input includes the actual intensity data iseq with individual color channels according to a suitable format for color representation, e.g. [Rseq, Gseq, Bseq], and some suitable spatial resolution format. The input also consists of implicit or explicit 2D coordinate address or location data aseq for the different pixels or pels. Thus, the video sequence xn for each frame consists of in, an and pn information.
Finally, xseq may also consist of probabilistic qualities pseq to be used for enhancing the IDLE encoding.
These data consist of the following results of preprocessing of each frame: (a) Modelability, which is an estimate of the probability that the different parts of a frame are easily detectable in preceding or subsequent frames; (b) HeteroPel, which indicates the probability that the pels represent homogeneous or heterogeneous optical structures.
The automatic encoder according to the present invention consists of a high-level MultiPass controller 620 and a ModelEstimator 630. The MultiPass controller 620 optimizes the repeated frame-wise estimation performed for a series of frames of a given sequence. The ModelEstimator 630 optimizes the modelling of each individual video frame n.
In the preferred embodiment, a full sequence model with parameters in the different domains is gradually expanded ("extended" and "widened") and refined ("deepened" or statistically "updated") by including information from the different frames of a sequence. The full sequence model is further refined in consecutive, iterative passes through the sequence.

In contrast, in the alternative embodiment involving simplified modelling, a set of competing extra sequence models are developed in the different domains and over a number of different frames, in order to model the as yet unmodelled portion of the input frames xn. It should be noted that the modelled portion of the input frames xn has been modelled using the established sequence model XRef.
Each of these competing extra models has parameters in only one single domain. The number of frames (length of a pass) used to estimate parameters in each of the domains depends on how easily the frames are modelled. At the end of the pass in each domain, the full sequence model is then "widened" or "extended" by choosing a new factor or segmentation from the competing extra domain model that has shown the best increase in modelling ability for the frames. This embodiment is described in detail in Appendix II, SIMPLIFIED ENCODER.
The ModelEstimator 630 takes as input the data for each individual frame xn (640), consisting of [in, an and pn] as defined above. It also takes as input a preliminary, previously estimated model XRef (650) as a stabilizing input for the sequence. As output, the ModelEstimator 630 delivers a reconstructed version of the input image xnhat (660) and a corresponding lack-of-fit residual en = xn - xnhat (665), plus an improved version of the model XRef (655).
The ModelEstimator 630 may also input/output LocalModels 670 for the data structures in the vicinity of frame n.
Additionally, the ModelEstimator 630 may take as input pre-established model elements from an external ModelPrimitives data base 680, which may consist of spatial and temporal models of movement patterns, e.g. a human face or body, running water, moving leaves and branches, and simpler modelling elements such as polyhedral object models (see David W. Murray, David A. Castelow and Bernard F. Buxton, "From Image Sequences to Recognized Moving Polyhedral Objects", International Journal of Computer Vision, 3, pp. 181-208, 1989, which is incorporated herein by reference).
The ModelEstimator 630 also exchanges control information 635 and 637 from and to the Multipass Controller 620. Details regarding the control parameters are not explicitly shown in the subsequent figures.

Model Estimator
A full implementation of the ModelEstimator 630 of Figure 6 is shown in Figure 7 for a given frame n. The ModelEstimator 630 contains a ChangeFieldEstimator 710 and an Interpreter 720. The ChangeFieldEstimator 710 takes as primary input the data for the frame, xn (corresponding to 640), consisting of image intensity data in, address information an and probabilistic information pn. It also takes as input information from the preliminary version of the current spatial and temporal Model XRef, Useq 760 (corresponding to 650) existing at this point in time in the encoding process. The preliminary model information 760 is used to stabilize the estimation of the change field images in the ChangeFieldEstimator 710, the change fields being used to change the intensity and other quantities of the preliminary SequenceModel XRef, Useq (760) of the extended Reference image in order to approximate as closely as possible the input image intensities, in.
The ChangeFieldEstimator 710 also inputs various control parameters from the Multipass Controller 620 and exchanges local control information 755 and 756 with the Interpreter 720.
As its main output, the ChangeFieldEstimator 710 yields the estimated change image fields DXRef,n (730) which are used to change the spatial and temporal parameters of the preliminary SequenceModel XRef, Useq (760) of the extended Reference image in order to approximate, as closely as possible, the input image intensities, in. It also yields preliminary model-based decoded (reconstructed) versions of the input image, xnhat (640), and the corresponding lack-of-fit residuals en (645).
The ChangeFieldEstimator 710 also yields local probabilistic quantities wn (750), which contain various warnings and guidance statistics for the subsequent Interpreter 720. Optionally, the ChangeFieldEstimator 710 inputs and updates local models 670 to further optimize and stabilize the parameter-estimation process.
The Interpreter 720 determines the estimated change image fields DXRef,n 730 as well as the preliminary forecast xnhat and residual en, plus the estimation warnings wn 750 and control parameters output from the MultiPass Controller 620. Optionally, the Interpreter 720 receives input information from the external data base of model primitives, 780. These model primitives are of several types: sets of spatial loadings or temporal score series previously estimated from other data may be included in the present IDLE model in order to improve compression or model functionality. One example of the use of spatial loading models is when already established general models for mouth movements are adapted into the modelling of a talking person's face in picture telephone encoding. Thereby a wide range of mouth movements becomes available without having to estimate and store/transmit the detailed factor loadings; only the parameters for adapting the general mouth movement loadings to the present person's face need to be estimated and stored/transmitted.
Similarly, the inclusion of already established movement patterns into an IDLE model is illustrated by using pre-estimated score time series for the movement of a walking and a running person in video games applications. In this case the pre-established scores and their corresponding smile loadings must be adapted to the person(s) in the present video game reference image, but the full model for walking and running people does not have to be estimated.
A third example of the use of model primitives is the decomposition of the reference image into simpler, pre-defined geometrical shapes (e.g. polygons) for still image compression of the reference model XRef.
The Interpreter subsequently modifies the contents of the SequenceModel XRef 760 and outputs this as an updated SequenceModel (765), together with a modified model-based decoded version of the input image, xnhat (770), and the corresponding lack-of-fit residual en (775). Upon convergence (determined in the MultiPass Controller 620) these outputs are used as the outputs of the entire ModelEstimator (630).

Change Field Estimator
Figure 8 is a block diagram representation of a ChangeFieldEstimator 710 according to a preferred embodiment of the present invention. As shown in Figure 8, an input frame xn, which has been converted into the correct format and color space used in the present encoder, is provided to the ChangeFieldEstimator 710. The SequenceModel XRef (760), in whatever form available at this stage of the model estimation, is also input to the ChangeFieldEstimator 710. The main output from the ChangeFieldEstimator 710 is the change image field DXRef,n (890) which converts the SequenceModel XRef 810 into a good estimate of the input frame xn.
The ChangeField Estimator 710 may be implemented in either of two ways. First, in the preferred embodiment, the change fields are optimized separately for each domain, and the optimal combination determined iteratively in the Interpreter 720. Alternatively, the change fields may be optimized jointly for the different domains within the ChangeField Estimator 710. This will be described in more detail below.
Additional outputs include the preliminary estimate xnhat (892), the difference between the input and preliminary estimate, en (894), together with warnings wn (896).

Forecasting position m
For both computational and statistical reasons, it is important to simplify the estimation of the change field as much as possible. In the present embodiment of the change field estimator, this is accomplished by forecasting an estimate xm which should resemble the input frame xn, and then only estimating the local changes in going from xm to xn in order to represent each input frame xn more accurately.
As will be described in more detail below, the ChangeFieldEstimator 710 of the present preferred embodiment initially utilizes an internal Forecaster 810 and Decoder 830 to forecast an estimate, termed xm 835, to resemble the input frame xn. The Forecaster (810) receives as input the temporal SequenceModel Useq (811) and outputs forecasted temporal scores um (815) which are then input to the Decoder (830). The Decoder 830 combines these scores with the spatial sequence model XRef 831, yielding the desired forecasted frame xm (835). Additional details regarding the decoder are set forth below.

Estimating local change field from m to input frame n
Next, a LocalChangeFieldEstimator (850) is employed to estimate the local change field needed to go from the forecasted xm to the actual input frame xn. This change is referred to as the estimated local change field dxm,n (855), and contains information in several domains, mainly movement and intensity change, as will be discussed in detail below.
In the estimated local change field dxm,n, the data on how to change the content of the forecast xm are given for each pixel in the "m position", i.e. in the position where the pixel is positioned in the forecasted frame xm. In order to be able to model these new changefield data together with corresponding changefield data obtained previously for other frames, it is important to move the changefield data for all frames to a common position. In the present embodiment, this common position is referred to as the Reference position, or reference frame XRef. This movement back to the common reference position will be described below. Note that capital letters will be used to designate data given in this reference position of the extended reference image model, while lower-case letters will be used for data given in the input format of image xn and approximations of the input image xn.
An auxiliary output from the Decoder 830 is the inverse address change field dam,Ref 865, which allows a Mover operator 870 to move the obtained local change field information dxm,n from being given in the m position back to the common Reference position. This moved version of dxm,n is referred to as DXm,n,Ref 875, with capital letters denoting that the information is now given in the reference position.
The local ChangeFieldEstimator 850 may also receive the full model XRef, moved to the m position (xRef,m 836), plus correspondingly moved versions of DXRef,m 825 and the return smile field dam,Ref 865 as inputs (not shown) from the Decoder 830, for use in internal stabilization of the parameter estimation for dxm,n 855.

Estimating the full change field for frame n
The next step in the encoding process is to determine the full estimated change field in going from the Reference position to the estimated position of input frame n. This is accomplished by presenting the change field DXRef,m, originally used for transforming XRef to xm, to Adder 880 together with the obtained DXm,n,Ref, yielding the desired main output, DXRef,n.

Illustration of local change estimation
The use of the forecasted position m, which has been described above, is illustrated conceptually in Figure 9 for the case of an address change DA for a given pel in an image representing a moving object. The determination of DARef,n (as part of the change field DXRef,n) is represented as element 902 in Figure 9. The estimation of DARef,n is a four stage process.
The first step is to determine the forecast change field that moves spatial information from the Reference position to the forecasted m position, resulting in an approximation of the input frame n. This is based on the address change field DARef,m (904), represented by the vector from point Ref to point m. This vector is determined by forecasting, and is a part of DXRef,m.
Second, the local movement field from the forecasted position m to the actual input frame n, dam,n (926), is determined.
Third, the estimated result dam,n is "moved" or translated back from the m position to the Reference position, using the inverse movement field dam,Ref (905) (i.e., the vector from the m position to the Reference position), thus yielding DAm,n,Ref (936).
Finally, the two fields given with respect to the Reference position Ref, i.e., DARef,m and DAm,n,Ref, are added to yield the desired DARef,n (946).

Thus, the function of the mover 870 is to "move" the local change field dxm,n back to the reference image model position Ref; all the elements in dxm,n (dim,n, dam,n and dpm,n) are moved back to the Ref position. The output of mover 870 is DXm,n,Ref (875), which is the local change information in going from the forecasted frame m to the input frame n, but positioned with respect to the Reference position Ref. The change information is "moved" back to the reference position Ref in order to ensure that change information obtained from frame n about a given object is positioned together with change information obtained from other frames about the same object. By positioning all information about an object in the same pel position, it is possible to develop simple models of the systematic changes in the sequence. In this way, the system attempts dynamically to improve the initial estimation of input frames. In the case where the address change field DARef,m (904) is defined to be all zeros, the LocalChangeFieldEstimator 850 has to estimate the full change field DARef,n directly as dam,n. This may for example take place at the beginning of an encoding process, and for frames n close to the frame used for initializing the reference image model.
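The Mover operation above can be sketched as follows. This is a deliberately naive nearest-pel scatter with no handling of occlusion, holes or sub-pel positions; the function name and array layout are illustrative assumptions.

```python
# Sketch of the Mover: change-field values given in the m position are
# relocated to the reference position using the inverse address field
# (which, for each pel in the m position, gives its offset back to Ref).

import numpy as np

def move_to_reference(field_m, da_m_ref):
    """field_m: (rows, cols) change-field values in the m position.
    da_m_ref: (rows, cols, 2) vertical/horizontal offsets from m to Ref.
    Returns the field repositioned in the reference position."""
    rows, cols = field_m.shape
    moved = np.zeros_like(field_m)
    for r in range(rows):
        for c in range(cols):
            rr = int(round(r + da_m_ref[r, c, 0]))
            cc = int(round(c + da_m_ref[r, c, 1]))
            if 0 <= rr < rows and 0 <= cc < cols:  # drop off-grid pels
                moved[rr, cc] = field_m[r, c]
    return moved
```

The same relocation is applied to each element of the local change field (intensity, address and probability components), so that contributions from all frames accumulate at a common pel position.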
It should be noted that the local probabilistic change information dpm,n contains extra dimensions containing statistical descriptions of the performance of the Local ChangeField Estimator (850). For these dimensions, the corresponding change field in DARef,m is considered as being empty. These additional dimensions are used by the Interpreter (720) for encoding optimization. These dimensions may, for example, reflect possible folding or occlusion problems causing xm to have lost some of XRef's spatial information needed to estimate input frame xn, as well as spatial innovations in xn needed to be included into XRef at a later stage.
The LocalChangeFieldEstimator (850) also outputs an estimate of the input frame, xnhat (892), the lack-of-fit residual en (894) and certain interpretation warnings wn (896). These are also passed on to the Interpreter (720) where they are used for encoding optimization.
The input and output of Local Model information (899) for the LocalChangeFieldEstimator will be discussed in detail below.
Local Change Field Estimator
The Local Change Field Estimator 850 of Figure 8 is shown in more detail in Figure 10, with each domain I, A and P illustrated separately. It should be noted that each of these domains again contains subdomains (e.g. R, G, B in I; V, H, Z in A). For purposes of simplicity, these are not illustrated explicitly.
Referring now to Figure 10, which is a more detailed illustration of the main parts of the Change Field Estimator of Figure 8, the available temporal score estimates for the sequence are used in the Forecaster 1010 to yield forecasted factors or scores for frame m in the three domains: Intensity (uIm), Address (uAm) and Probabilities (uPm).

Internal decoder portion of encoder: ChangeFieldMaker
The internal decoder portion of the encoder includes ChangeField Maker 1020, Adder 1030 and Mover 1040, which operate on their associated input, output and internal data streams. In the first stage (change field maker) of the decoder portion internal to the encoder, the factors or scores are combined with the corresponding spatial factor loadings available in the (preliminary) spatial model XRef in the ChangeField Maker 1020 to produce the forecast change fields. For each domain I, A and P, and for each of their subdomains, the estimated factor scores and factor loadings are multiplied and the result accumulated, yielding the forecast change fields DIRef,m, DARef,m, DPRef,m.
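The multiply-and-accumulate step of the ChangeField Maker can be sketched as a single tensor contraction. The array shapes (scores as a vector, loadings as a stack of images) are assumed for illustration.

```python
# Sketch of the ChangeField Maker stage: for each (sub)domain the
# forecasted factor scores are multiplied by the corresponding spatial
# factor loadings and accumulated into one forecast change field,
# e.g. DI_Ref,m = sum over factors f of uI_m[f] * loading_I[f].

import numpy as np

def make_change_field(scores, loadings):
    """scores: (n_factors,) forecasted scores for frame m in one domain.
    loadings: (n_factors, rows, cols) spatial factor loadings.
    Returns the accumulated (rows, cols) forecast change field."""
    return np.tensordot(scores, loadings, axes=1)
```

One such call would be made per subdomain: intensity (R, G, B), address (V, H, Z) and probability (e.g. opacity).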
For simplicity, the additional functionality of hard modelling is not included in Figures 8 and 10 for the internal decoder portion of the encoder. This will instead be discussed below in conjunction with the separate Decoder Figure 13, together with various other additional details, as the separate Decoder is essentially identical to the present internal decoder portion of the encoder.

Adder

In the second stage (adder) of the decoder, the change fields are added to the corresponding basic (preliminary) spatial images in Adder 1030, i.e., the extended reference image intensities IRef(0) (e.g. RGB), the (implicit) extended reference image addresses ARef(0) (e.g. VHZ) and the extended reference image probabilities PRef(0) (e.g. opacity). This results in Im,Ref, Am,Ref and Pm,Ref.

Mover

The forecast change fields are transformed in Mover 1040 in accordance with the movement field DARef,m (904 in Fig. 9), yielding the forecasted intensity image im (e.g. in RGB), forecasted address image am (e.g. VHZ) and forecasted probabilistic image pm (e.g. opacity). Together, these forecasted data portions form the forecast output xm (835 in Figure 8) from Decoder 830 of Figure 8.

Local ChangeField Estimator

The Local ChangeField Estimator (850) estimates how to change the forecasted image xm generated in the Decoder 830, in one or more domains, primarily the intensity domain, in order to accurately approximate the input frame xn. The resulting estimated changes are referred to as the Local Change Fields dxmn.
The sequence model loadings, moved from the reference position to the forecasted position, XRef→m 837, may be used as input for statistical model stabilization. In addition, the Local Models 899 may be used to stabilize this estimation. A Local Model may be a special-case model optimized for a particular subset of frames.

Separate versus joint domains in change field estimation

In the case of joint domain estimation of the local change fields in the ChangeField Estimator 710, some m-n deviations are attributed to intensity differences dimn, while some m-n deviations are instead attributed to movements damn, and additional m-n deviations are attributed to segmentation and other probabilistic differences dpmn. The ChangeField Estimator 710 then requires internal logic and iterative processing to balance the different domains so that the same m-n change is not modelled in more than one domain at the same time. Since the resulting local change field dxmn already contains the proper balance of the contributions from the different domains, this simplifies the remaining portion of the encoding process.

However, when dealing with joint local change field domains, the Local ChangeField Estimator 850 must make iterative use of various internal modelling mechanisms in order to balance the contributions from the various domains.

Since these internal mechanisms (factor-score estimation, segmentation) are already required in the Interpreter (to balance the contributions of different frames), the preferred embodiment instead employs separate modelling of the various change field domains in the Local ChangeField Estimator 850. This results in a much simpler design of the Local ChangeField Estimator 850. However, the encoding process must then iterate back and forth between the ChangeField Estimator 710 and the Interpreter 720 several times for each frame n, in order to arrive at an optimal balance between modelling in the different domains for each frame. The forecasted frame xm is thus changed after each iteration in order to better approximate xn, and the incremental changes in the different domains are accumulated by the Interpreter 720, as will be described below.
Local ChangeField Estimator using separate domain modelling

The primary purpose of the LocalChangeField Estimator 850, shown in detail in Figure 11, is to estimate, using the forecasted frame xm 1101 and input frame xn 1102, the local change fields dxmn 1103 used in going from the forecasted frame m to the input frame n.
The Local ChangeFieldEstimator 850 employs separate estimation of the different domains. An estimator, EstSmile 1110, estimates the local address change fields (smile fields) damn 1115, while a separate estimator, EstBlush 1120, estimates the local intensity change fields (blush fields) dimn 1125. Either of these estimators may be used to estimate the probabilistic change fields dpmn 1126.

The embodiment of Figure 11 illustrates the case where the probabilistic change fields are estimated by the EstBlush estimator 1120.
In addition, both estimators 1110 and 1120 provide approximations, 1112 and 1114 respectively, of the input data, residuals and warnings. The warnings are used for those image regions that are difficult to model in the given estimator. The output streams 1112 and 1114 from the two estimators are then provided as two separate sets of output approximations xnhat, residuals en and warnings wn.

EstSmile 1110 motion estimator

The EstSmile 1110 motion estimator estimates the local address change field damn primarily by comparing the forecasted intensity im to the actual input intensity in using any of a number of different comparison bases, e.g., sum of absolute differences or weighted sum of squared differences. A variety of motion estimation techniques may be used for this purpose, such as the frequency domain techniques described in R.C. Gonzales and R.E. Woods, Digital Image Processing, pp. 465-478 (Addison-Wesley, 1992), which is incorporated herein by reference, or methods using coupled Markov random field models as described in R. Depommier and E. Dubois, MOTION ESTIMATION WITH DETECTION OF OCCLUDED AREAS, IEEE 0-7803-0532-9/92, pp. III269-III272, 1992, which is incorporated herein by reference.
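As one concrete (hypothetical) instance of the sum-of-absolute-differences comparison basis mentioned above, a minimal 1-D block-matching sketch might look like the following; practical motion estimators work on 2-D blocks with sub-pixel refinement and the statistical stabilization described below.

```python
import numpy as np

def estimate_motion_sad(forecast, target, block=2, search=2):
    """1-D block matching: for each block of the forecasted intensity im,
    find the displacement (within +/- search) that minimises the sum of
    absolute differences against the input intensity in."""
    n = len(forecast)
    da = np.zeros(n)
    for start in range(0, n - block + 1, block):
        ref_block = forecast[start:start + block]
        best_d, best_cost = 0, np.inf
        for d in range(-search, search + 1):
            s, e = start + d, start + d + block
            if s < 0 or e > n:
                continue                      # candidate leaves the frame
            cost = np.abs(target[s:e] - ref_block).sum()
            if cost < best_cost:
                best_cost, best_d = cost, d
        da[start:start + block] = best_d
    return da

i_m = np.array([0.0, 0.0, 5.0, 5.0, 0.0, 0.0])
i_n = np.array([0.0, 0.0, 0.0, 0.0, 5.0, 5.0])   # pattern moved right by 2
da_mn = estimate_motion_sad(i_m, i_n)
```

Note that flat regions match equally well at several displacements (the first zero-cost candidate wins here), which is one reason the patent stabilizes the estimate with prior model information.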
The preferred embodiment according to the present invention utilizes a motion estimation technique which seeks to stabilize the statistical estimation and minimize the need for new spatial smile loadings by using model information already established. The spatial model structures, moved from the reference position to the m position, XRef→m, are one such type of model information. This type of model information also includes the moved version of the estimated weights Wgts_XRef, as will be described in greater detail below.
The probabilistic domain PRef→m includes segment information sRef→m which allows the pixels in the area of holon edges to move differently from the interior of a holon. This is important in order to obtain good motion estimation and holon separation when two holons are adjacent to each other. The EstSmile estimator 1110 itself may find new local segments, which are then passed to the Interpreter 720 as part of the warnings wn or probabilistic properties dpmn. Local segments are generally sub-segments or portions of a segment that appear to move as a solid body from the forecasted frame m to frame n.
The address domain contains spatial address factor loadings a(f)Ref→m, f=0,1,2,..., in each coordinate sub-operand and for each holon. The motion estimation preferably seeks to accept motion fields damn that are linear combinations of these already reliably established address factor loadings. This necessitates the use of an internal score estimator and residual change field estimator similar to those used in the Interpreter 720. Temporal smoothness of the scores of frame n vs. frames n-1, n+1 etc. may then be imposed as an additional stabilizing restriction.

The motion estimation may also include estimation of "hard" nod factors for the different segments. These segments may be the whole frame (for pan and zoom estimation), the holons defined in the forecast sm, or they may be new local segments found by the motion estimation operator itself.
The input uncertainty variances of the intensities and addresses of the various inputs xn, xm and XRef→m are used in such a way as to ensure that motion estimates based on uncertain data are generally overridden by motion estimates based on relatively more certain data. Likewise, motion estimates based on pixel regions in the forecasted frame xm or input frame xn previously determined to be difficult to model, as judged e.g. by pn, are generally overridden by motion estimates from regions judged to be relatively easier to model.
During the initial modelling of a sequence, when no spatial model structures have as yet been determined, and when the extracted factors are as yet highly unreliable, other stabilizing assumptions, such as spatial and temporal smoothness, are afforded greater weight.
The EstSmile 1110 estimator may perform the motion estimation in a different coordinate system than that used in the rest of the encoder, in order to facilitate the motion estimation process.

EstBlush 1120 intensity change estimator

The EstBlush estimator 1120 estimates the local incremental blush field dimn, which in its simplest version may be expressed as:

dimn = in - im

It should be noted that during the iterative improvement of the estimated change fields for a given frame, it is extremely important that the blush field used for reconstructing the forecasted frame xm in the Decoder 830 in a certain iteration not be based solely on dimn = in - im from the previous iteration, since this would give an artificially perfect fit between the forecasted frame m and input frame n, thus prematurely terminating the estimation process for better smile and probabilistic change fields.
The EstBlush estimator 1120 also detects local changes in the probabilistic properties, dpmn, by detecting, inter alia, new edges for the existing holons. This may be based on local application of standard segmentation techniques. Changes in transparency may also be detected, based on a local trial-and-error search for minor changes in the transparency scores or loadings available in PRef→m which improve the fit between im and in, without requiring further blush or smile changes.

Reverse Mover

The estimated local change fields (corresponding to dxmn 855 in Figure 8) are "moved" back from the forecasted position m to the reference position Ref in the Reverse Mover 1060, using the return address change field from m to Ref, dam,Ref, from the Decoder Mover 870. These outputs, DImn→Ref, DAmn→Ref and DPmn→Ref, correspond to dxmn→Ref 908 in Figure 9 and dxmn→Ref in Figure 8.

Reverse Adder

Finally, dxmn→Ref is added to the original forecasting change fields DXRef,m [DIRef,m, DARef,m and DPRef,m] in the Reverse Adder 1070, to yield the desired estimated change fields which are applied to the reference model XRef to estimate input frame n, xn. These change fields of DXRef,n are DIRef,n, DARef,n and DPRef,n.
The Local ChangeFieldEstimator 1050 also yields residuals and predictions corresponding to en (894) and xnhat (892) in the various domains, as well as various other statistical warnings wn (896) in Figure 8.

Interpreter

Interpreter Overview

The main purpose of the Interpreter 720 is to extract, from the estimated change field and other data for the individual frames, stable model parameters for an entire sequence of data or portion of a sequence. The Interpreter 720, in conjunction with the Change Field Estimator 710, is used both for preliminary internal model improvement, as well as for final finishing of the model. In the case of video coding, the Interpreter 720 converts change field information into spatial, temporal, color and other model parameters in the address, intensity and probabilistic domains. The Interpreter 720 and the Change Field Estimator 710 are repeatedly accessed under the control of the MultiPass Controller 620 for each individual frame n, for each sequence of frames and for repeated passes through the sequence of frames.
For a given frame n at a given stage in the encoding process, the Interpreter 720 takes as input the estimated change fields in the various domains, DXRef,n 730 (including uncertainty estimates), as well as additional warnings wn 750 from the ChangeField Estimator 710. The Interpreter also receives preliminary coded data for individual frames, xnhat (735), and residual error en (745) from the Change Field Estimator 710. The Interpreter 720 also receives existing models {XRef, Useq} 760, and may optionally receive a data base of Model Primitives 780 for model deepening, in addition to local model information 899, Local Change Field Estimates dxmn and the input frame information xn. The Interpreter 720 also receives and returns control signals and parameters 635 and 637 from and to the MultiPass Controller, and 755 and 756 to and from the ChangeField Estimator 710.
The Interpreter 720 processes these inputs and outputs an updated version of the model {XRef, Useq} 765. The changes in this model may be spatial extensions or redefinitions of the holon structure of the reference image model(s), widened sub-operand models, or new or updated values of the factor loadings XRef and sequence scores Useq. The Interpreter 720 also outputs scores in the various domains and sub-operands un (772) for each individual frame n, as well as a reconstructed frame xnhat (770) and residuals en (775). It should be noted that all of the Interpreter outputs are expressed as both a signal value and its associated uncertainty estimate.
The internal operational blocks of the Interpreter 720 are shown in detail in Figure 12. Referring now to Figure 12, the Interpreter 720 includes a Score Estimator 1202 which estimates the scores un (1204) of factors with known loadings for each holon and each sub-operand. The Interpreter 720 also estimates the matrix of nod scores corresponding to affine transformations, including scores for moving and scaling the entire frame due to camera pan and zoom motions. These scores are provided to the Residual Change Field Estimator 1210, which subtracts out the effect of these known factors from the Change Field input DXRef,n to produce the residual or unmodelled portion EXn (1212). The residuals 1212 (or the full Change Field DXRef,n, depending on the embodiment) are then used by the Spatial Model Widener 1214 in order to attempt to extract additional model parameters by analyzing these change field data obtained from several frames in the same sequence. Since all of the change fields from the different frames in the subsequence have been moved back to the reference position as described above, spatio-temporal change structures that are common to many pixels and frames may now be extracted using factor analysis of these change field data. New factors, which are considered to be reliable as judged by their ability to describe unmodelled changes found in two or more frames, are used to stabilize the change field estimation for subsequent frames. In contrast, minor change patterns which affect only a small number of pixels and frames are not used for statistical stabilization, but rather are accumulated in memory in case they represent change patterns that have not yet fully emerged but will become statistically significant as more frames are brought into the modelling process.
The Spatial Model Widener 1214 also handles additional tasks such as 3D sorting/structure estimation and assessment of transparency and shadow effects. The scores 1215 are also provided to the Temporal Model Updater 1206 and Spatial Model Updater 1208, where they are used for statistical refinement, simplification and optimization of the models.
In the Interpreter 720, the input frame xn is also provided to the Spatial Model Extender 1216, which carries out various segmentation operations used to extract new spatial segments from each individual frame n. The Spatial Model Extender 1216 also merges and splits image segments in order to provide more efficient holon structures. The input frame xn is also provided to the Model Deepener 1218, which attempts to replace model parameters in various domains by equivalent model parameters in more efficient domains. This may, for example, include converting "soft" modelling factors such as smile factors into "hard" nod factors, which require less explicit information.
Detailed description of Interpreter operational blocks

The Score Estimator 1202 estimates the scores of each individual frame n, un, in the various domains (operands) and sub-operands for the various holons, for use with factors having known loadings in XRef. Each score contains a value and associated estimation uncertainty. Robust statistical estimation is used in order to balance statistical noise stabilization (minimization of erroneous score estimation due to noise in the loadings or input data) against statistical robustness (minimizing erroneous score estimation due to outlier pixels, i.e., those pixels with innovation, i.e., change patterns not yet properly described using the available spatial model). Detection of outliers is described in H. Martens and T. Naes, Multivariate Calibration, pp. 267-272 (John Wiley & Sons, 1989), which is incorporated herein by reference. Statistical stabilization to minimize noise is achieved by combining the impact of a larger number of pixels during the score estimation. Statistical stabilization to minimize the effect of outlier pixels is achieved by reducing or eliminating the impact of the outlier pixels during the score estimation.
In a preferred embodiment, the robust estimation technique is an iterative reweighted least squares optimization, both for the estimation of smile, blush and probabilistic scores of "soft models" with explicit loadings, as well as for the nod score matrices of the affine transformations of solid objects.
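A bare-bones version of such an iterative reweighted least squares score estimator might look like the following numpy sketch. The inverse-absolute-residual weighting rule is an illustrative choice, not the patent's; robust estimators commonly use Huber or Tukey weights instead, and all names and values here are hypothetical.

```python
import numpy as np

def irls_scores(loadings, change_field, n_iter=10, eps=1e-6):
    """Iteratively reweighted least squares score estimation.

    loadings:     (n_pixels, n_factors) known factor loadings
    change_field: (n_pixels,) observed change field values
    Pixels whose residuals stay large (outliers / unmodelled innovation)
    are progressively down-weighted so they cannot distort the scores.
    """
    w = np.ones(len(change_field))
    u = np.zeros(loadings.shape[1])
    for _ in range(n_iter):
        sw = np.sqrt(w)
        # Weighted least squares: scale rows by sqrt of the weights.
        u, *_ = np.linalg.lstsq(loadings * sw[:, None], sw * change_field,
                                rcond=None)
        resid = change_field - loadings @ u
        w = 1.0 / (np.abs(resid) + eps)   # large residual -> small weight
    return u

# Hypothetical: one factor, constant loading, one grossly outlying pixel.
loadings = np.ones((5, 1))
field = np.array([2.0, 2.0, 2.0, 2.0, 100.0])
u = irls_scores(loadings, field)   # settles near 2, ignoring the outlier
```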
Two different approaches to score estimation may be used. The first approach is a full iterative search in the score-parameter space to optimize the approximation of the input image xn. The second approach is a simpler projection of the estimated change fields DXRef,n onto the known factor loadings (including the explicit loadings in XRef and the implicit loadings associated with nod affine transformations). In addition, combinations of both methods may be used.

In the case of the iterative search in the score-parameter space, nonlinear iterative optimization is used to find the combinations of scores un in the different domains (operands), sub-operands, holons and factors that result in optimal decoding conversion of the model XRef into estimate xnhat. The optimization criterion is based on the lack-of-fit difference (xn - xnhat), mainly in the intensity domain.
A different set of one or more functions may be used in order to optimize the fit for individual holons or other spatial subsegments. These function(s) indicate the lack of fit due to different pixels by calculating, for example, absolute or squared differences. The different pixel contributions are first weighted and then added according to the reliability and importance of each pixel. Thus, outlier pixels are assigned a lower weighting, while pixels that correspond to visually or estimationally important lack-of-fit residuals are assigned a higher weight.
The search in the score-parameter space may be a full global search of all factor scores, or may instead utilize a specific search strategy. In a preferred embodiment, the search strategy initially utilizes score values predicted from previous frames and iterations. In order to control the computational resources required, the optimization may be performed for individual spatial subsegments (e.g., for individual holons), at different image resolutions (e.g., low resolution images first), at different time resolutions (e.g., initially less than every frame), or for different color channel representations (e.g., first for luminosity, then for other color channels). It should be noted that more emphasis should be placed on estimating major factors with reliable loadings, and less emphasis on minor factors with less reliable loadings. This may be controlled by the Score Ridge parameter from the MultiPass Controller, which drives unreliable scores toward zero.
Score estimation by projection of the estimated change field DXRef,n on 'known' loadings in XRef does not require any image decoding of the reference model. Instead, statistical projections (multivariate regressions) of the obtained change field DXRef,n (regressands) on known loadings in XRef (regressors) are used. The regression is carried out for all factors simultaneously within each domain's sub-operand and for each holon, using least squares multiple linear regression. If the weights of the different pixels are changed, e.g., for outlier pixels, or the regressor loadings become highly non-orthogonal, then a reduced rank regression method is preferably used. Otherwise, the statistical modelling becomes highly unstable, especially for intercorrelated factors with low weighted loading contributions. In a preferred embodiment, the regression is performed using standard biased partial least squares regression (PLSR) or principal component regression (PCR), as outlined in detail in H. Martens and T. Naes, Multivariate Calibration, pp. 73-166 (John Wiley & Sons, 1989), which is incorporated herein by reference.
Other robust regression techniques, such as purely non-metric regressions or conventional ridge regressions utilizing a ridge parameter (H. Martens and T. Naes, Multivariate Calibration, pp. 230-232 (John Wiley & Sons, 1989), which is incorporated herein by reference), may be used. The ridge parameter serves to stabilize the score estimation of minor factors. Ridging may also be used to stabilize the latent regressor variables in the PLSR or PCR regression.
Alternatively, the scores may be biased towards zero by controlling the ScoreRidge parameter from the MultiPass Controller so that only major factors are used in the initial estimation process for the Change Field stabilization. The uncertainties of the scores may be calculated using standard sensitivity analysis or linear model theory, as discussed in H. Martens and T. Naes, Multivariate Calibration, pp. 168, 206 (John Wiley & Sons, 1989), which is incorporated herein by reference.
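The projection approach with a score ridge can be sketched as a regularized least squares projection. This is an illustrative stand-in for the PLSR/PCR machinery cited above, not the patent's implementation; the ridge value and loadings are hypothetical.

```python
import numpy as np

def project_scores(loadings, change_field, score_ridge=0.0):
    """Estimate scores by projecting the change field DXRef,n onto
    known loadings, with an optional ridge term that biases the scores
    of minor (unreliable) factors toward zero."""
    L = loadings                                   # (n_pixels, n_factors)
    A = L.T @ L + score_ridge * np.eye(L.shape[1])
    return np.linalg.solve(A, L.T @ change_field)

# Hypothetical: a strong major loading and a weak minor loading.
L = np.array([[1.0,  0.1],
              [1.0, -0.1],
              [1.0,  0.1],
              [1.0, -0.1]])
d = L @ np.array([3.0, 1.0])                 # true scores: major 3, minor 1
u_plain = project_scores(L, d)               # plain projection recovers both
u_ridged = project_scores(L, d, score_ridge=1.0)   # minor score shrunk hard
```

The ridge barely moves the major score but collapses the minor one, matching the intent of the ScoreRidge parameter described above.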
Residual Change Field Estimator

The Residual Change Field Estimator 1210 determines the remaining unmodelled residual EXRef,n by removing the effects of the various scores which were estimated in the Score Estimator 1202 from the respective change fields DXRef,n for the various sub-operands and holons. In the preferred embodiment, the effects of the factors (e.g. the sum of available loadings multiplied by the appropriate scores) are simply subtracted from the change fields. For example, in the case of red intensity:
ERRef,n = DRRef,n - (R(0)Ref*uR(0)n + R(1)Ref*uR(1)n + ...)

Optionally, the model parameters used in this residual construction may be quantized in order to make sure that the effects of quantization errors are fed back to the encoder for possible subsequent correction.
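In matrix form, the subtraction above might be sketched as follows (a minimal numpy illustration with hypothetical shapes and values, not code from the patent):

```python
import numpy as np

def residual_change_field(change_field, loadings, scores):
    """ER_Ref,n = DR_Ref,n minus the part explained by known factors,
    i.e. the sum of each loading multiplied by its estimated score."""
    return change_field - loadings.T @ scores

# Hypothetical red-intensity change field over 3 pixels, 2 known factors.
dr = np.array([2.0, 3.0, 5.0])
R = np.array([[1.0, 0.0, 1.0],    # loading R(0)Ref
              [0.0, 1.0, 0.0]])   # loading R(1)Ref
u = np.array([2.0, 3.0])          # scores uR(0)n, uR(1)n
er = residual_change_field(dr, R, u)   # unmodelled residual ER_Ref,n
```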
Spatial Model Widener

The Spatial Model Widener 1214 of the Interpreter accumulates the residual change fields EXRef,n for frame n along with the unmodelled residuals from previous frames.
These residual change fields represent as yet unmodelled information for each holon and each operand (domain) and sub-operand. These residuals are weighted according to their uncertainties, and statistically processed in order to extract new factors. This factor extraction may preferably be accomplished by performing NIPALS analysis on the weighted pixel-frame matrix of unmodelled residuals, as described in e.g. H. Martens and T. Naes, Multivariate Calibration, pp. 97-116 and p. 163 (John Wiley & Sons, 1989), which is incorporated herein by reference, or on their frame-by-frame crossproduct matrix, see H. Martens and T. Naes, Multivariate Calibration, p. 100 (John Wiley & Sons, 1989), which is incorporated herein by reference. However, this iterative NIPALS method does not necessarily have to iterate to full convergence for each factor. Alternatively, the factor extraction from the weighted pixel-frame matrix of unmodelled residuals may be attained using singular value decomposition, Karhunen-Loeve transforms, or eigen analysis using Hotelling transforms, such as are outlined in detail in, e.g., R.C. Gonzales and R.E. Woods, Digital Image Processing, pp. 148-156 (Addison-Wesley, 1992), which is incorporated herein by reference, and Carlo Tomasi and Takeo Kanade, SHAPE AND MOTION WITHOUT DEPTH, IEEE CH2934-8/90, pp. 91-95, 1990, which is incorporated herein by reference. The significant change structures in the resulting accumulated residual matrix are extracted as new factors and included as part of the model [XRef, Useq]. Change structures which involve several pixels over several frames are deemed to be significant. The Spatial Model Widener portion of the Interpreter may be used for both local models 670, as well as more complete sequence or subsequence models 650.
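A single NIPALS factor extraction of the kind referred to above can be sketched as follows. This is a textbook one-factor NIPALS loop on an unweighted residual matrix, omitting the uncertainty weighting and the multi-factor deflation described in the text; the residual matrix is hypothetical.

```python
import numpy as np

def nipals_factor(E, n_iter=50, tol=1e-10):
    """Extract one factor (scores t over frames, loading p over pixels)
    from a frames-x-pixels matrix E of unmodelled residuals by NIPALS.
    Assumes the starting column E[:, 0] is not all zero."""
    t = E[:, 0].copy()                 # start scores from one pixel column
    for _ in range(n_iter):
        p = E.T @ t / (t @ t)          # loading: regress columns on t
        p /= np.linalg.norm(p)         # normalise the loading vector
        t_new = E @ p                  # scores: regress rows on p
        if np.linalg.norm(t_new - t) < tol:
            t = t_new
            break
        t = t_new
    return t, p

# Hypothetical rank-1 residual matrix: one change pattern over 3 frames.
p_true = np.array([0.6, 0.8, 0.0])
t_true = np.array([1.0, 2.0, -1.0])
E = np.outer(t_true, p_true)
t, p = nipals_factor(E)   # the single factor reproduces E exactly
```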
In the case of real time encoding, the effect of the remaining unmodelled residuals from each individual frame may be scaled down as time passes, and removed from the accumulation of unmodelled residuals if they fall below a certain level. In this way, residuals remaining for a long time without having contributed to the formation of any new factors are essentially removed from further consideration, since statistically, there is a very low probability that they will ever contribute to a new factor. In this embodiment, the Spatial Model Widener 1214 produces individual factors that may be added to the existing model. Subsequently, this new set of factors, i.e., model, may be optimized in the Temporal Model Updater 1206 and Spatial Model Updater 1208, under the control of the MultiPass Controller.

In an alternative embodiment, the existing model is analyzed together with the change fields in order to generate a new model. This new model preferably includes factors which incorporate the additional information from the newly introduced change fields. Essentially, the entire model [XRef, Useq] is re-computed as each new frame is introduced. This is preferably accomplished using loadings XRef and scores Useq which are scaled so that the score matrix Useq is orthonormal (see H. Martens and T. Naes, Multivariate Calibration, p. 48 (John Wiley & Sons, 1989), which is incorporated herein by reference). The different factor loading vectors in XRef then have different sums of squares reflecting their relative significance. The new loadings [XRef](new) are then generated using factor analysis, e.g., singular value decomposition (SVD), of the matrix consisting of [XRef(old), DXRef,n]. This is a simplified, one-block SVD based version of the two-block PLSR-based updating method described in H. Martens and T. Naes, Multivariate Calibration, pp. 162-163 (John Wiley & Sons, 1989), which is incorporated herein by reference. New scores corresponding to the new loadings are also obtained in this process.
Three-dimensional depth estimation

The Spatial Model Widener 1214 may also be used to estimate the approximate three-dimensional depth structure Zn of the pixels in a scene forming part of a frame sequence. This type of estimation is important for modelling of objects moving in front of each other, as well as for modelling of horizontally or vertically rotating objects. The depth information Zn may also be of intrinsic interest by itself.
Depth modelling requires the depth to be estimated, at least approximately, for the pixels involved in an occlusion. It is preferable to represent this estimated information at the involved pixel positions in the reference image model.

Depth estimation may be performed using any of a number of different methods. In a preferred embodiment, topological sorting of pixels, based on how some pixels occlude other pixels in various frames, is used. For pixels where potential occlusions are detected (as indicated in the warnings wn from the Local ChangeField Estimator), different depth hypotheses are tried for several consecutive frames. For each frame, the ChangeField Estimator is repeatedly operated for the different depth hypotheses, and the resulting modelling success of the input frame intensity in using the different hypotheses is accumulated. The depth hypothesis that results in the most consistent and accurate representation of the intensity data in over the frames tested is accepted and used as the depth model information. Initially, this depth information may be used to establish the basic depth Z(0)Ref for those pixels where this is required. Subsequently in the encoding process for the same sequence, the same techniques may be used to widen the depth change factor model with new factors Z(f)Ref, f=1,2,..., for those pixels that show more complex occlusion patterns owing to their depth changing from one frame to another.
In an alternative embodiment, singular value decomposition of the address change fields DARef,n may be used to establish 3D depth information, as outlined in Carlo Tomasi and Takeo Kanade, SHAPE AND MOTION WITHOUT DEPTH, IEEE CH2934-8/90, pp. 91-95, 1990.

Iterative control for frame n W095/08240 ~ 12~3 PcT~s91/lolgn ~
.

A special mode of operation for the Spatial Model Widener 1214 is used during iterative optimization for each frame n. When separate (competing) estimates of local change fields damn, dimn, dpmn are used, as described above in the preferred embodiment of the Local ChangeField Estimator 850, the Spatial Model Widener 1214 must formulate a joint compromise DXRef,n(joint) to be used simultaneously for all domains. In the preferred embodiment, information from only one of the domains is accepted into the joint change field DXRef,n(joint) during each iteration.
At the beginning of the iterative estimation of each frame, smile changes are accepted as the most probable changes. However, throughout the iterative estimation, care must be taken that the accepted smile fields be sufficiently smooth and not give erroneous occlusions in the subsequent iteration(s). In general, change field information that fits the already established factor loadings in XRef (as determined in the Score Estimator 1202) is accepted in favor of unmodelled residuals EXRef,n (as determined in the Residual ChangeField Estimator 1210), which are only accepted as change field information towards the end of the iterative process for each frame. Thus, the change fields are modified according to the particular stage of encoding and the quality of the change fields of this iteration compared to those of previous iterations. In each iteration, the resulting accepted change field information is accumulated as the joint change field DXRef,n(joint).
During each iteration, the Interpreter 720 must convey this joint change field DXRef,n(joint) back to the ChangeField Estimator 710 for further refinement in the next iteration. This is accomplished by including the joint change field DXRef,n(joint) as one extra factor in XRef (with score always equal to 1). Thus, this extra factor accumulates incremental changes to the change field for frame n from each new iteration. At the end of the iterative process, this extra factor represents the accumulated joint change field, which can then be used for score and residual estimation, widening, deepening, updating and extending, as described above.

Model Updaters

The two updating modules, the Temporal Model Updater 1206 and the Spatial Model Updater 1208, serve to optimize the temporal and spatial model with respect to various criteria, depending on the application. In the case of real-time video coding, such as in video conference applications, the Temporal Model Updater 1206 computes the eigenvalue structure of the covariance matrix between the different factors' scores within each domain, as time passes. Variation phenomena no longer active (e.g., a person who has left the video conference room) are identified as dimensions corresponding to low eigenvalues in the inter-score covariance matrices, and are thus eliminated from the score model in the Temporal Model Updater 1206. The corresponding loading dimension is eliminated from the loadings in the Spatial Model Updater 1208. The resulting eigenvalue-eigenvector structure of the inter-score covariance matrix may also be used to optimize the quantization and transmission control for the temporal parameters of the other, still active factors.
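A minimal sketch of this pruning step, assuming NumPy and illustrative function names: the inter-score covariance matrix is eigen-decomposed, dimensions with eigenvalues far below the largest are dropped, and both scores and loadings are rotated into the retained eigenvector basis so the modelled signal is preserved.

```python
import numpy as np

def prune_inactive_factors(scores, loadings, rel_tol=1e-3):
    """Eliminate factor dimensions with low inter-score eigenvalues.

    scores:   (n_frames, n_factors) factor scores Useq
    loadings: (n_factors, n_pixels) factor loadings XRef
    rel_tol:  eigenvalues below rel_tol * largest eigenvalue are dropped

    Returns rotated scores and loadings with weak dimensions removed, so
    that scores_new @ loadings_new approximates scores @ loadings.
    """
    cov = np.atleast_2d(np.cov(scores, rowvar=False))  # inter-score covariance
    evals, evecs = np.linalg.eigh(cov)                 # ascending eigenvalues
    keep = evals > rel_tol * evals.max()
    basis = evecs[:, keep]                             # retained eigenvectors
    return scores @ basis, basis.T @ loadings
```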
During encoding of video data (both real-time and off-line), unreliable factor dimensions are likewise eliminated as the encoding proceeds repeatedly through the sequence, by factor rotation of the loadings and scores in the two Model Updaters 1206 and 1208 based on singular value decomposition of the inter-score covariance matrix or the inter-loading covariance matrix, and eliminating dimensions corresponding to low eigenvalues.
The eigen-analysis of the factor scores in the Temporal Model Updater 1206 and of the factor loadings in the Spatial Model Updater 1208 corresponds to a type of meta-modelling, as will be discussed in more detail below. The Spatial Model Updater 1208 may check for spatial pixel cluster patterns in the loading spaces indicating a need for changes in the holon segmentation in the Spatial Model Extender 1216.
The Model Updaters 1206 and 1208 may also perform conventional factor analysis rotation, such as varimax rotation, to obtain a "simple structure" for the factor scores (in the case of the Temporal Model Updater 1206) or loadings (in the case of the Spatial Model Updater 1208), for improved compression, editing and memory usage. Factor analytic "simple structures" can be understood by way of the following example. First, assume that two types of change patterns, e.g., blush patterns "A" (blushing cheeks) and "B" (room lighting), have been modelled using two blush factors, but the blush factors have coincidentally combined the patterns in such a way that factor 1 models "A" and "B" and factor 2 models "A" and "-B." Factor rotation to a simple structure, in this case, means computing a new set of loadings by multiplying the two loadings with a 2x2 rotation matrix g so that after the matrix multiplication, only pattern "A" is represented in one factor and only pattern "B" is represented in the other factor. Corresponding new scores are obtained by multiplying the original scores with the inverse of matrix g. Alternatively, the original scores may be used. However, the new loadings must then be multiplied by the inverse of g.
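The "A"/"B" example above can be worked through numerically. This sketch (NumPy; the toy patterns and the particular unmixing matrix g are invented for illustration) rotates the mixed loadings to a simple structure while leaving the modelled signal scores @ loadings unchanged.

```python
import numpy as np

def rotate_factors(loadings, scores, g):
    """Rotate two factors to a simpler structure.

    loadings: (2, n_pixels), scores: (n_frames, 2), g: (2, 2) rotation matrix.
    New loadings are g @ loadings and new scores are scores @ inv(g), so
    the product scores @ loadings (the modelled signal) is unchanged.
    """
    return g @ loadings, scores @ np.linalg.inv(g)

# Mixed factors: factor 1 models A+B, factor 2 models A-B (toy patterns).
A = np.array([1.0, 0.0, 1.0, 0.0])    # "blushing cheeks" pattern "A"
B = np.array([0.0, 2.0, 0.0, 2.0])    # "room lighting" pattern "B"
mixed = np.vstack([A + B, A - B])
scores = np.array([[1.0, 0.5], [0.2, -0.3]])

# This g recovers pure "A" in factor 1 and pure "B" in factor 2.
g = 0.5 * np.array([[1.0, 1.0], [1.0, -1.0]])
new_loadings, new_scores = rotate_factors(mixed, scores, g)
```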
Yet another function of the Temporal Model Updater 1206 is to accumulate multidimensional histograms of "co-occurrence" of various model parameters, e.g., smile and blush factors. This histogram gives an accumulated count of how often various combinations of score values of the various domains occur simultaneously. If particular patterns of co-occurrence appear, this may indicate the need for deepening the model, e.g., by converting blush factor information into smile factor information.
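One plausible realization of such a co-occurrence histogram, reduced to two score series for clarity (NumPy; names are illustrative, and the bin count and range are arbitrary choices, not from the patent):

```python
import numpy as np

def cooccurrence_histogram(smile_scores, blush_scores, bins=8, rng=(-1.0, 1.0)):
    """Accumulate a 2-D count of how often smile/blush score pairs co-occur.

    smile_scores, blush_scores: (n_frames,) score series, one factor each.
    A pronounced ridge in the histogram hints that one domain's variation
    is systematically tied to the other's, suggesting model deepening
    (e.g., converting blush factor information into smile factors).
    """
    hist, _, _ = np.histogram2d(smile_scores, blush_scores,
                                bins=bins, range=[rng, rng])
    return hist
```

For perfectly co-occurring score series the mass concentrates on the histogram's diagonal.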
Spatial Model Extender

The Spatial Model Extender 1216 organizes and reorganizes data into segments or holons. In the case of video coding, the segments are primarily spatial holons, and thus, the extender is referred to as a "Spatial" Model Extender. The Spatial Model Extender 1216 receives as input a set of holons, each represented by pixel loadings XRef, sequence frame scores USeq, change fields DXRef,n, and unmodelled change field residuals EXRef,n. The Spatial Model Extender 1216 also receives as input the abnormality warnings wn from the ChangeField Estimator 710 and the actual input frame xn, in addition to various input control parameters.
The Spatial Model Extender 1216 processes these inputs and outputs an updated set of holons, each with pixel loadings XRef, sequence frame scores USeq, unmodelled residuals EXRef,n, and various output control parameters.
The Spatial Model Extender 1216 is activated by the Multipass Controller 620 whenever the accumulated signal from the warnings wn output from the ChangeField Estimator indicates a significant amount of unmodelled spatial information in a new frame xn. The segmentation of as yet unmodelled regions into new holons may be performed using the estimated address change fields DARef,n, e.g., as described in John Y.A. Wang and Edward H. Adelson, "LAYERED REPRESENTATION FOR IMAGE SEQUENCE CODING", IEEE ICASSP, Vol. 5, pp. 221-224, Minneapolis, Minnesota, 1993, which is incorporated herein by reference. This is particularly important in the areas where the incoming warnings wn indicate the need for segmentation. The pixels in such areas are given particularly high weights in the search for segments with homogeneous movement patterns.
As an alternative, or even additional, method of segmentation, the segments may be determined using various factor loading structures in XRef, such as clusters of pixels in the factor loading vector spaces (f=1,2,...) as determined using standard cluster analysis in the factor loading spaces. Clusters with simple internal structures indicate pixels that change in related ways, and are thus possible candidates for segments. In addition, those pixels that are adjacent to each other in the address space ARef(0) are identified as stronger candidates for segmentation. In this manner, new segments are formed. On the other hand, existing segments are expanded or merged if the new segments lie adjacent to the existing ones and appear to have similar temporal movement behaviour. Existing segments that show heterogeneous movements along the edges may be contracted to a smaller spatial region, and segments that show heterogeneous movements in their spatial interiors may be split into independent holons.
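The "standard cluster analysis in the factor loading spaces" can be sketched as follows (NumPy; a plain k-means pass with deterministic initialization stands in for whatever clustering method an implementation would actually use, and the function name is invented):

```python
import numpy as np

def segment_by_loading_clusters(loadings, n_clusters=2, n_iter=20):
    """Group pixels into candidate segments by clustering in loading space.

    loadings: (n_factors, n_pixels). Each pixel is a point in the factor
    loading vector space (f = 1, 2, ...); pixels that change in related
    ways land in the same cluster and become segment candidates.
    """
    points = loadings.T                              # (n_pixels, n_factors)
    idx = np.linspace(0, len(points) - 1, n_clusters).astype(int)
    centers = points[idx].astype(float)              # deterministic init
    for _ in range(n_iter):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                # nearest-center assignment
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = points[labels == k].mean(axis=0)
    return labels
```

A real implementation would additionally weight in the spatial-adjacency criterion from ARef(0) described above.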
One of the probabilistic properties of PRef is used to indicate a particularly high probability of segment shape changes or extensions along existing segment edges, i.e., there is a probability that seemingly new segments are in fact just extensions of existing segments, extended at the segment edges. Similarly, this probabilistic property may be used to classify into segments those new objects appearing at the image edge. In addition, this property may also be used to introduce semi-transparency at holon edges.
The Spatial Model Extender 1216, as operated by the MultiPass Controller 620, produces temporary holons or segments which are used in the initial stabilization or tentative modelling in the encoding process; these holons may be merged or deleted during the iterative encoding process, resulting in the final holons used to model each individual sequence at the end of the encoding process. As illustrated in Figure 3, since with the introduction of new holons the Extended Reference Image becomes larger than the individual input frames, the holons must be spatially stored in the Extended Reference Image Model XRef so as not to overlap with each other. Alternatively, storage methods such as the multilayer structure described in John Y.A. Wang and Edward H. Adelson, "LAYERED REPRESENTATION FOR IMAGE SEQUENCE CODING", IEEE ICASSP, Vol. 5, pp. 221-224, Minneapolis, Minnesota, 1993, which is incorporated herein by reference, may be used.
Model Deepener

The Model Deepener 1218 of the Interpreter 720 provides various functions that improve the modelling efficiency. One of these functions is to estimate transparency change fields as a sub-operand of the probabilistic domain DPRef,n. This may be performed using the technique described in Masahiko Shizawa and Kenji Mase, "A UNIFIED COMPUTATIONAL THEORY FOR MOTION TRANSPARENCY AND MOTION BOUNDARIES BASED ON EIGENENERGY ANALYSIS", IEEE CH2983-5/91, pp. 289-295, 1991, which is incorporated herein by reference.
Further, the Model Deepener 1218 is used to convert blush factors into smile factors whenever the amount and type of blush modelling of a holon indicates that it is inefficient to use blush modelling to model movements. This may be accomplished, for example, by reconstructing (decoding) the particular holon and then analyzing (encoding) it using an increased bias towards selection of a smile factor, rather than a blush factor. Similarly, smile factors may be converted to nod factors whenever the smile factor loadings indicate holons having spatial patterns consistent with affine transformations of solid objects, i.e., translations, rotations, scaling, or shearing. This may be accomplished by determining the address change fields DARef,n for the holons and then modelling them in terms of pseudo smile loadings corresponding to the various affine transformations.


DECODER
The present invention includes a decoder that reconstructs images from the spatial model parameter loadings XRef and the temporal model parameter scores U. In applications such as video compression, storage and transmission, the primary function of the decoder is to reproduce a certain input sequence of frames [xn, n=1,2,...] = Xseq using the scores [un, n=1,2,...] = Useq which were estimated during the encoding of the sequence [xn, n=1,2,...] = Xseq. In other applications such as video games and virtual reality, the scores at different points in time [un, n=n1,n2,...] = U may be generated in real time, for example, by a user-activated joystick.
In the present description, the predicted results for each frame n are denoted as the forecasted frame m. Thus, xm is equivalent to the estimate x̂n.
A preferred embodiment of the Decoder 1300 is illustrated in block diagram form in Figure 13. This Decoder 1300 is substantially equivalent to the Internal Decoder 830 of the ChangeField Estimator 710 (Figure 8) of the Encoder.
However, the Decoder 1300 of Figure 13 includes some additional functional elements. These additional elements are discussed in detail in the attached appendix, DECODER-APPENDIX.

The resulting change fields DXRef,m 1358 are then passed to the Adder 1330, where they are added to the basic reference image X(0)Ref 1360 to produce Xm,Ref 1362, i.e., the forecasted values for frame m given in the reference position. This contains the changed values which the various holons in the reference image will assume upon output in the forecasted frame; however, this information is still given in the reference position.
These changed values given in the reference position, Xm,Ref 1362, are then "moved" in the Mover 1340 from the reference position to the m position using the movement parameters provided by the address change field DARef,m 1364.
In the case of an internal decoder 830 of an encoder 600, the Mover 1340 may provide the return field dam,Ref 1366, which may be used to move values back from the m position to the reference position.
The primary output of the Mover 1340 is the forecasted result x̂m, to which error corrections exm 1368 may optionally be added. The resulting signal may then be filtered inside the post processor 1350, for example, to enhance edge effects, in order to yield the final result xm 1370. The Adder 1330, Mover 1340 and post processor 1350 may employ standard decoding techniques, such as are outlined in George Wolberg, Digital Image Warping, Chapter 7 (IEEE Computer Society Press 1990), which is incorporated herein by reference.
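A minimal sketch of the Adder 1330 / Mover 1340 path, reduced to one holon in one dimension (NumPy; the nearest-neighbour forward warp is a stand-in for the warping techniques cited from Wolberg, and occlusion handling and error correction are omitted):

```python
import numpy as np

def decode_frame(x0_ref, dx_ref_m, da_ref_m):
    """Add change fields in the reference position, then move (1-D toy).

    x0_ref:   (n,) basic reference intensities X(0)Ref
    dx_ref_m: (n,) intensity change field for forecast frame m (ref position)
    da_ref_m: (n,) address change field: pixel i moves to i + da_ref_m[i]
    """
    x_m_ref = x0_ref + dx_ref_m         # Adder 1330: changed values, ref position
    out = np.zeros_like(x_m_ref)
    n = len(x_m_ref)
    for i in range(n):                  # Mover 1340: forward warp each pixel
        j = int(round(i + da_ref_m[i]))
        if 0 <= j < n:                  # pixels moved off-frame are dropped
            out[j] = x_m_ref[i]
    return out
```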
The Decoder 1300 may also include additional functionality for controlling and handling the external communication, decryption, local storage and retrieval of model parameters which are repeatedly used, for communication to the output medium (such as a computer video display terminal or TV screen), and other functions that are readily understood by those skilled in the art.
It should be noted that the Mover operators 1040 (1340) and 1010 (870) may use different methods for combining two or more pieces of information which are placed at the same coordinate position. In the preferred implementation for video encoding and decoding, different information is combined using 3D occlusion, modified according to the transparency of the various overlaid media. For other applications, such as analysis of images of two-way electrophoresis gels for protein analysis, the contributions of different holons may simply be added.

ENCODER OPERATION - MULTIPASS CONTROLLER

Encoder System Control and Operation

The operation of the encoder/decoder system described in detail above will now be explained for an off-line video encoding application. First, the simplified encoder (alternative embodiment) and the full encoder (preferred embodiment) will be compared. Then, the simplified encoder will be described, followed by a description of the full encoder.
A video encoding system must be able to detect sequences of sufficiently related image information, in order that they may be modelled by a sequence model. For each such sequence, a model must be developed in such a way as to give adequate reconstruction quality, efficient compression, and editability. This must be accomplished within the physical constraints of the encoding, storage/transmission and decoding systems.
To achieve compact, parsimonious modelling of a sequence, the changes in the sequence should be ascribed to appropriate domain parameters, viz., movements should mainly be modelled by smile and nod factors, intensity changes should mainly be modelled by blush factors, and transparency effects mainly by probabilistic factors. Effective modelling of the various change types to the proper domain parameters requires statistical stabilization of the model parameter estimation, in addition to good separation of the various model domains. This in turn requires modelling over many frames.
The two encoder embodiments differ in how they accomplish this task.
The simplified encoder employs a simple sequential control and operation mechanism that results in identification of suitable frame sequences during parameter estimation.
However, it does not attempt to optimize the simultaneous statistical modelling in the various domains. The full encoder, on the other hand, requires sequence identification as part of a separate preprocessing stage. This preprocessing stage also initializes various statistical weighting functions that are updated and used throughout the encoding process to optimize the noise and error robustness of the multi-domain modelling.
The simplified encoder repeatedly searches through a video frame sequence for related unmodelled change structures which may be modelled either as a new factor in the smile domain or the blush domain, or as a new spatial image segmentation. The optimal choice from among the potential smile, blush and segmentation changes is included in the sequence model, either as a widening of the smile or blush model, or as an extension or reorganization of the holons. The search process is then repeated until adequate modelling is attained.
The full encoder, in contrast, gradually widens, extends and deepens the model for a given sequence by passing through the sequence several times, each time attempting to model each frame in the three domains in such a way as to be maximally consistent with the corresponding modelling of the other frames.
In the simplified encoder, the estimation of unmodelled change fields for each frame is relatively simple, since each domain is modelled separately. Smile change fields DARef,n, n=n1,n2,... are extracted and modelled in one pass, which may be shorter than the entire sequence of frames, and intensity change fields DIRef,n, n=n1,n2,... are extracted and modelled in a second pass, which may also be shorter than the entire sequence of frames. Each pass is continued until the incremental modelling information obtained is outweighed by the modelling complexity. In the full encoder, the corresponding estimation of unmodelled change fields for each frame is more complicated, since the change fields for each frame are modelled jointly and therefore must be mutually compatible. This compatibility is obtained by an iterative development of the change fields in the different domains for each frame.

Simplified Encoder Systems Control and Operation

For each frame, the simplified encoder uses the Score Estimator 1202 of the Interpreter 720 to estimate factor scores for the already established factors in XRef. The model may be temporarily widened with tentatively established new factors in the domain being modelled. Subsequently, the ChangeField Estimator 710 is used to generate either an estimate of unmodelled smile change fields DARef,n or unmodelled blush change fields DIRef,n. In each case, the tentative new factors are developed in the Spatial Model Widener 1214. The Interpreter 720 also checks for possible segmentation improvements in the Spatial Model Extender 1216. The MultiPass Controller 620, in conjunction with the Spatial Model Widener 1214, widens either the blush or the smile model with a new factor, or alternatively imposes spatial extension/reorganization in the Spatial Model Extender 1216. The MultiPass Controller 620 also initiates the beginning of a new sequence model whenever the change fields exhibit dramatic change. The process may then be repeated until satisfactory modelling is obtained.

Full Encoder Systems Control and Operation

Preprocessing

The input data are first converted from the input color space, which may for example be RGB, to a different format, such as YUV, in order to ensure better separation of luminosity and chrominance. This conversion may be carried out using known, standard techniques. In order to avoid confusion between the V color component in YUV and the V (vertical) coordinate in HVZ address space, this description is given in terms of RGB color space. The intensity of each converted frame n is referred to as in. Also, the input spatial coordinate system may be changed at various stages of the encoding and decoding processes. In particular, the spatial resolution may during preprocessing be changed by successively reducing the input format (vertical and horizontal pels, addresses an) by a factor of 2 in both the horizontal and vertical directions using standard techniques. This results in a so-called "Gaussian pyramid" representation of the same input images, but at different spatial resolutions. The smaller, low-resolution images may be used for preliminary parameter estimation, and the spatial resolution increased as the model becomes increasingly reliable and stable.
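The Gaussian pyramid construction can be sketched as follows (NumPy; a separable 3x3 binomial blur with edge replication stands in for the "standard techniques" the text refers to, and the function name is illustrative):

```python
import numpy as np

def gaussian_pyramid(image, levels=3):
    """Build successive 2x-reduced resolutions of an intensity image.

    image: 2-D array. Each level blurs with a small binomial kernel and
    subsamples by a factor of 2 in both directions.
    """
    kernel = np.array([1.0, 2.0, 1.0]) / 4.0
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        img = pyramid[-1]
        # vertical blur pass (edge-replicated padding)
        padded = np.pad(img, 1, mode="edge")
        blurred = sum(kernel[k] * padded[k:k + img.shape[0], 1:-1]
                      for k in range(3))
        # horizontal blur pass
        padded = np.pad(blurred, 1, mode="edge")
        blurred = sum(kernel[k] * padded[1:-1, k:k + img.shape[1]]
                      for k in range(3))
        pyramid.append(blurred[::2, ::2])   # subsample by 2 in both directions
    return pyramid
```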
Continuing, preliminary modelabilities of the input data are first estimated. For each of the successive spatial resolutions, the intensity data in for each frame are analyzed in order to assess the probabilities of whether the intensity data for the individual pixels are going to be easy to model mathematically. This analysis involves determining different probabilities, which are referred to as Pn and discussed in detail below.
The preliminary modelability includes a determination of the two-dimensional recognizability of the input data, i.e., an estimate of how "edgy" the different regions of the image are. "Edgy" regions are easier to detect and follow with respect to motion than continuous regions. Specifically, an estimate of the degree of spatially recognizable structures P(1)n is computed such that pixels representing clear 2D spatial contours and pixels at spatial corner structures are assigned values close to 1, while pixels in continuous areas are assigned values close to zero. Other pixels are assigned intermediate values between zero and one. This may be carried out using the specific procedure set forth in Carlo Tomasi and Takeo Kanade, "SHAPE AND MOTION WITHOUT DEPTH", IEEE CH2934-8/90, pp. 91-95, 1990, which is incorporated herein by reference, or in Rolf Volden and Jens G. Balchen, "DETERMINING 3-D OBJECT COORDINATES FROM A SEQUENCE OF 2-D IMAGES", Proc. of the Eighth International Symposium on Unmanned Untethered Submersible Technology, Sept. 1993, pp. 359-369, which is incorporated herein by reference.
Similarly, the preliminary modelability includes a determination of the one-dimensional recognizability of the input data, i.e., an indication of the intensity variations along either a horizontal or vertical line through the image.
This procedure involves formulating an estimate of the degree of horizontally or vertically clear contours. Pixels which are part of clear horizontal or vertical contours (as detected from, e.g., absolute values of the spatial derivatives in the horizontal and vertical directions) are assigned a value P(2)n=1,
The prel; m; n~ry modelability also includes determin-ing aperture problems, by estimating the probability of aper-5 ture problems for each pixel as P(3) n . Smooth local movements~i.e., spatial structures that appear to move linearly over the course of several consecutive frames are assigned a mA~;mnm value o~ 1, while pixels where no such structures are found are assigned a value of 0. Similarly, structures which appear not to move at all over the course of several consecutive frames are treated in much the same m~nner. Collectively, this estimate of seemingly smooth movement or non-movement is referred to as P(4)n This property may also be used to esti-mate smooth intensity changes (or non-changes) over the course 15 of several consecutive frames.
The probability of half pixels, which may arise at boundary edges and are unreliable because they are an average of different intensity spatial areas and as such do not represent true intensities, is computed and referred to as P(5)n. Together, the intensity, address and probabilistic data are symbolized by xn, and include address properties, intensity properties, and the different probabilistic properties, such as P(1)n through P(5)n.

The preprocessing also includes detection of sequence length and the determination of subsequence limits. This is accomplished by analyzing the change property P(4)n and the intensities in over the entire sequence and performing a multivariate analysis of the low-resolution intensities in order to extract a low number of principal components. This is followed by a cluster analysis of the factor scores, in order to group highly related frames into sequences to be modelled together. If a scene is too long or too heterogeneous, then it may be temporally split into shorter subsequences for simplified analysis using local models. Later in the encoding process, such subsequence models may be merged together into a full sequence model. In the initial splitting of sequences, it is important that the subsequences overlap by a few frames in either direction.
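The principal-component step above can be sketched as follows (NumPy; the SVD-based component extraction matches the text, but splitting at large jumps between consecutive frame scores is a simple stand-in for the cluster analysis of factor scores, and all names and thresholds are illustrative):

```python
import numpy as np

def split_into_subsequences(frames, n_components=2, jump=3.0):
    """Group related frames by principal-component scores (rough sketch).

    frames: (n_frames, n_pixels) low-resolution intensities, one row per
    frame. Extracts a low number of principal components by SVD and starts
    a new subsequence wherever consecutive frame scores jump by more than
    `jump` times the median inter-frame score distance.
    """
    centered = frames - frames.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:n_components].T          # (n_frames, n_components)
    steps = np.linalg.norm(np.diff(scores, axis=0), axis=1)
    cut = steps > jump * (np.median(steps) + 1e-12)
    return np.concatenate([[0], np.cumsum(cut)])     # subsequence id per frame
```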
The thermal noise level in the subsequence is estimated by accumulating the overall random noise variance associated with each of the intensity channels and storing this value as the initial uncertainty variance s2in along with the actual values in in.
The preprocessing also produces an initial reference image XRef for each subsequence. Initially, one frame nRef in each subsequence is chosen as the starting point for the reference image. This frame is chosen on the basis of principal component analysis of the low-resolution intensities, followed by a search in the factor score space for the most typical frame in the subsequence. Frames within the middle portion of the subsequence are preferred over frames at the start or end of the subsequence, since middle frames have neighboring frames in both directions of the subsequence.
Initialization

Initialization includes setting the initial values of the various control parameters. First, the ScoreRidge is set to a high initial value for all domains and all sub-operands.
This parameter is used in the Score Estimator 1202 to stabilize the scores of small factors. (When singular value decomposition (principal component analysis, etc.) is used for extracting the factors, the size of individual factors is defined by their associated eigenvalue size; small factors have small eigenvalues. In the more general case, small factors are here defined as factors whose scores x loadings product matrix has a low sum of squared pixel values. The size of a factor is determined by how many pixels are involved and how strongly they are affected by the loadings of that factor, and by how many frames are affected and how strongly they are affected by the factor scores.)
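One plausible form of this ridge-stabilized score estimation, sketched in NumPy (the patent does not specify the exact estimator; the function name is illustrative). The ScoreRidge term added to the normal equations biases the scores of small factors towards zero:

```python
import numpy as np

def estimate_scores(frame, loadings, score_ridge=0.1):
    """Ridge-stabilized least-squares score estimate for one frame.

    frame:       (n_pixels,) observed data for frame n
    loadings:    (n_factors, n_pixels) established factor loadings XRef
    score_ridge: the ScoreRidge parameter; larger values shrink the
                 scores (especially of small factors) towards zero.
    """
    A = loadings @ loadings.T + score_ridge * np.eye(len(loadings))
    return np.linalg.solve(A, loadings @ frame)
```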
SqueezeBlush is set to a high initial value for each frame in order to make sure that the estimation of smile fields is not mistakenly thwarted by preliminary blush fields that erroneously pick up movement effects. Similarly, SqueezeSmile is set to a high initial value for each frame in order to make sure that the proper estimation of the blush fields is not adversely affected by spurious inconsistencies in the preliminary smile fields. The use of SqueezeBlush and SqueezeSmile is an iterative process designed to achieve the proper balance between smile and blush change fields that optimally model the image changes. The initialization also includes initially establishing the full reference image XRef as one single holon, and assuming very smooth movement fields.
The spatial model parameters XRef and temporal model parameters USeq are estimated by iteratively performing several passes through the subsequence. For each pass, starting at the initial reference frame, the frames are searched bidirectionally through the subsequence on either side of the frame nRef until a sufficiently satisfactory model is obtained.
For each frame, the statistical weights for each pixel and for each iteration are determined. These statistical or reliability weights are an indication of the present modelability of the pixels in a given frame. The reliability weights wgts_xn for each pixel of frame n, xn, for the various sub-operands are:

an: wgts_an = function of (Pn, s2an, wn)
in: wgts_in = function of (Pn, s2in, wn)

The reliability weights are proportional to the probabilistic properties Pn, and inversely proportional to both the variances s2an, s2in and the warnings wn. Similarly, the reliability weights Wgts_XRef for each pixel in the preliminary model(s) XRef, for each sub-operand, each factor and each holon are:
ARef: Wgts_ARef: inversely proportional function of (s2ARef) for each factor in each sub-operand.


IRef: Wgts_IRef: inversely proportional function of (s2IRef) for each factor in each sub-operand.
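The weight definitions above fix only proportionalities, not an exact functional form. The following NumPy sketch shows one plausible choice (the specific formulas, names, and the small epsilon guard are assumptions):

```python
import numpy as np

def frame_weights(p, s2, w, eps=1e-6):
    """Per-pixel reliability weights for one sub-operand of frame n.

    p:  probabilistic modelability properties Pn (higher = more reliable)
    s2: uncertainty variances s2an or s2in (higher = less reliable)
    w:  accumulated warnings wn (higher = less reliable)
    """
    return p / (s2 + w + eps)   # proportional to Pn, inverse in s2 and wn

def model_weights(s2_ref, eps=1e-6):
    """Per-pixel model weights Wgts_XRef, inversely proportional to s2Ref."""
    return 1.0 / (s2_ref + eps)
```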

In general, only those factors which are found to be applicable to a sufficient number of frames are retained.

Multi-frame applicability of the extracted factors is tested by cross validation or leveraged correction, as described in H. Martens and T. Naes, Multivariate Calibration, pp. 237-265 (John Wiley & Sons, 1989), which is incorporated herein by reference. Specifically, in the case of multi-pass or iterative estimation, this may include preventing the contribution due to the current frame n from being artificially validated as a multi-frame factor based on its own contribution to the model during an earlier pass.

The estimation of the change field DXRef,n and its subsequent contribution to the model {XRef, USeq} for each frame n, relative to the subsequence or full sequence model to which it belongs, is an iterative process, which will now be discussed in detail. For the first few frames encountered in the first pass through the subsequence, no reliable model has as yet been developed. Thus, the estimation of the change fields for these first few frames is more difficult and uncertain than for subsequent frames. As the model develops further, it increasingly assists in the stabilization and simplification of the estimation of the change fields for later frames. Therefore, during the initial pass through the first few frames, only those image regions that have a high degree of modelability are used. In addition, with respect to movement, strong assumptions about smooth change fields are used in order to restrict the possible degrees of freedom in estimating the change fields for the first few frames. Similarly, with respect to blush factors, strong assumptions about smoothness and multi-frame applicability are imposed in order to prevent unnecessary reliance on blush factors alone. As the encoding process iterates, these assumptions and requirements are relaxed so that true minor change patterns are properly modelled by change factors.
The encoding process for a sequence according to the preferred embodiment requires that the joint change fields DXRef,n be estimated for each frame, i.e., the different domain change fields DARef,n, DIRef,n and DPRef,n may be used simultaneously to give acceptable decoded results x̂n. As explained above, this requires an iterative modification of the different domains' change fields for each frame. The weights, wgts_xn and Wgts_XRef, defined for address and intensity, are used for optimization of the estimation of the local change field dxn.
During this iterative process, the Interpreter 720 is used primarily for accumulating change field information in DXRef,n(joint), as described above. The values in the already established sequence model XRef, USeq are not modified.
In the iterative incremental estimation of the change field information DXRef,n(joint), the model estimation keeps track of the results from the individual iterations, and backtracks out of sets of iterations in which the chosen increments fail to produce satisfactory modelling stability.
Once the joint change field DXRef,n(joint) has been estimated for a given frame, it is analyzed in the Interpreter 720 in order to optimize the sequence model XRef, USeq based on DXRef,n(joint).
Developing the sequence model

The reliability weights for frame n and for the model are updated. Subsequently, scores un and residuals EXRef,n are estimated, and the change field information is accumulated for the possible widening of the reference model with new valid change factors. The reference model is extended using segmentation, improvement of 3D structures is attempted, and opportunities for model deepening are checked. All of these operations will be described in detail below.
When all the frames in a subsequence have been thus analysed so that a pass is completed, the weights and probabilistic properties are further updated to enhance the estimation during the next pass, with the obtained model being optionally rotated statistically to attain a simpler factor structure. In addition, the possibility of merging a given subsequence with other subsequences is investigated, and the need for further passes is checked. If no further passes are necessary, the parameter results obtained thus far may be run through the system one final time, with the parameters being quantized.

The control and operation of the full encoding process will now be described in more detail. First, the weights are modified according to the obtained uncertainty variances of the various sub-operands in DXRef,n. Pixels with high uncertainty in a given sub-operand change field are given lower weight for the subsequent statistical operations for this sub-operand. These weights are then used to optimize the multivariate statistical processes in the Interpreter 720.
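The weighting rule described above — high uncertainty variance maps to low weight — can be sketched as simple inverse-variance weighting. The function name, the epsilon floor and the normalization are illustrative assumptions, not details taken from the patent:

```python
import numpy as np

def uncertainty_weights(s2, eps=1e-6):
    """Map per-pixel uncertainty variances s2 to weights in (0, 1].

    Pixels with high uncertainty in a sub-operand change field get
    low weight in the subsequent statistical operations; eps guards
    against division by zero for perfectly certain pixels.
    """
    s2 = np.asarray(s2, dtype=float)
    w = 1.0 / (s2 + eps)
    return w / w.max()   # normalize so the most certain pixel has weight 1

s2 = np.array([0.01, 0.1, 1.0, 10.0])   # example uncertainty variances
w = uncertainty_weights(s2)
```

Any scheme that is monotone decreasing in the uncertainty variance would serve the same purpose; inverse variance is the classical choice for weighted least squares.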
The scores un for the various domains and sub-operands are estimated for the different holons in the ScoreEstimator 1202. Also, the associated uncertainty covariances are estimated using conventional linear least squares methodology assuming, e.g., normally distributed noise in the residuals, and providing corrections for the intercorrelations between the various factor weighted loadings. The scores with small total signal effects are biased towards zero, using the ScoreRidge parameter, for statistical stabilization.
The residual change field EXRef,n is estimated, after subtraction of the effects of the known factors, in Residual ChangeField Estimator 1210.
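The two steps above — ridge-stabilized weighted least-squares estimation of the frame scores, followed by subtraction of the known-factor effects to leave a residual field — can be sketched as follows. This is a minimal illustration under assumed shapes and names (`estimate_scores`, `score_ridge`, etc. are not from the patent):

```python
import numpy as np

def estimate_scores(D, P, w, score_ridge=1e-3):
    """Weighted least-squares scores u for one frame's change field.

    D: (M,) change field referred to the reference position.
    P: (F, M) known factor loadings.
    w: (M,) reliability weights (high = certain pixel).
    The ridge term biases small-signal scores toward zero, in the
    spirit of the ScoreRidge parameter, stabilizing the estimate.
    """
    W = np.diag(w)
    A = P @ W @ P.T + score_ridge * np.eye(P.shape[0])
    b = P @ W @ D
    return np.linalg.solve(A, b)

def residual_field(D, u, P):
    """Residual E after subtracting the effects of the known factors."""
    return D - u @ P

rng = np.random.default_rng(0)
P = rng.standard_normal((2, 50))    # two known factor loadings
u_true = np.array([1.5, -0.5])
D = u_true @ P                      # noise-free synthetic change field
w = np.ones(50)                     # all pixels equally reliable
u = estimate_scores(D, P, w)
E = residual_field(D, u, P)
```

On noise-free data the recovered scores match the true ones up to the tiny ridge bias, and the residual is correspondingly small; real change fields would leave a structured residual for the model-widening step.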
Next, the widening of the existing models XRef, USeq for the various domains, sub-operands and holons is attempted in the Spatial Model Widener 1214. This is performed using the estimated uncertainty variances and weights as part of the input, to make sure that data elements with high certainty dominate.
The uncertainty variances of the loadings are estimated using standard linear least squares methodology assuming, e.g., normally distributed noise.
As part of the Widening process, the basic 3D structure Z(0) and associated change factors Z(f), f=1,2,... are estimated according to the available data at that stage. In particular, warnings for unmodelled pixels in wn suggest tentative 3D modelling.
Modification of the segmentation is accomplished by checking the various domain data, in particular the "unmodellability" warnings wn and associated data, against similar unmodelled data for adjacent frames, in order to detect the accumulated development of unmodelled related areas. The unmodelled parts of the image are analyzed in the Spatial Model Extender 1216, thereby generating new holons or modifications of existing holons in SRef. During the course of segmentation, a higher probability of segmentation changes is expected along the edges of existing holons and along the edges of xn and XRef than elsewhere. Holons that are spatially adjacent in the reference image and temporally correlated are merged. In contrast, holons that display inconsistent spatial and temporal model structure are split.
Shadows and transparent objects are modelled as part of the Widening process. This includes estimating the basic probabilistic transparency of the holons. In a preferred embodiment for the identification of moving shadows, groups of adjacent pixels which in frame n display a systematic, low-dimensional loss of light in the color space as compared to a different frame are designated as shadow holons. The shadow holons are defined as having dark color intensity and being semi-transparent.
Areas in the reference image with no clear factor structure, i.e., many low-energy factors instead of a few high-energy factors in the A or I domains, are analyzed for spatiotemporal structures. These areas are marked for modelling with special modelling techniques, such as the modelling of quasi-random systems like running water. This part of the encoder may require some human intervention in terms of the selection of the particular special technique. The effects of such special areas are minimized in subsequent parameter estimations.
The encoding operations described may be used with more complex local change field estimates dxn. In the preferred embodiment, for each pixel in each sub-operand of the forecasted frame m, only one change value (with its associated uncertainty) is estimated and output by the Local ChangeField Estimator 1050. In an alternative embodiment, there may be multiple alternative change values (each with its associated uncertainty) estimated by the Local ChangeField Estimator 1050 for each domain or sub-operand. For example, two or more alternative potentially acceptable horizontal, vertical and depth movements of groups of pixels may be presented as part of dam,n in dxn 855 by the Local ChangeField Estimator 850. Each of these alternatives is then moved back to the reference position as part of DXRef,n 890. Subsequently, the Interpreter attempts to model the different combinations of alternatives, and chooses the one that produces the best result. A similarly flexible alternative approach to local modelling is to let the Local ChangeField Estimator 850 output only one value for each pixel for each sub-operand, as in the preferred embodiment, but instead to replace the uncertainty (e.g., uncertainty variance s2dxn) by local statistical covariance models that describe the most probable combination of change alternatives. These covariance models may then be accumulated and used by the Interpreter to find the most acceptable combination of model widening, extension and deepening.

II. Update models
After all the frames of the present subsequence have been analyzed during a particular pass and the system has arrived at a stable model of a sequence, the model is updated in the Temporal and Spatial Model Updaters 1206 and 1208, respectively, in the Interpreter 720, thus allowing even more compact and easily compressible/editable factor structures.

III. Merging subsequences
In the Multipass Controller 620, an attempt is made to merge the present subsequence with another subsequence, according to meta-modelling, or the technique given in appendix MERGE_SUBSEQUENCES. This converts the local subsequence models into a model which is representative for more frames of the sequence than the individual sub-sequences.

IV. Convergence control
At the end of each pass, the Multipass Controller 650 checks for convergence. If convergence has not been reached, more passes are required. Accordingly, the MultiPass Controller 650 modifies the control parameters and initiates the next pass. The MultiPass Controller also keeps track of the nature and consequences of the various model developments in the various passes, and may back-track if certain model development choices appear to provide unsatisfactory results.

V. Final model optimization
Depending on the particular application, quantization errors due to parameter compression are introduced into the estimation of model parameters. The modelling of the sequence is repeated once more in order to allow subsequent parameters the opportunity to correct for the quantization errors introduced by prior parameters. Finally, the parameters in XRef and USeq and the error correction residuals EXRef are compressed and ready for storage and/or transmission to be used by a decoder.
The internal model data may be stored using more precision than the input data. For example, in video coding, by modelling accumulated information from several input frames of related, but moving, objects, the final internal model XRef may have higher spatial resolution than the individual input frames. On the other hand, the internal model may be stored using completely different resolution than the input or output data, e.g., as a compact subset of irregularly spaced key picture elements chosen by the Model Deepener from among the full set of available pixels, so that good output image quality may be obtained by interpolating between the pixels in the Mover portion of the Decoder. The present invention may also output decoded results in a different representation than that of the input. For example, using interpolation and extrapolation of the temporal and spatial parameters, along with a change of the color space, the system may convert between NTSC
and PAL video formats.
The IDLE modelling of the present invention may be used to sort the order of input or output data elements. This type of sorting may be applied so that the rows of individual input or output frames are changed relative to their common order, as part of a video encryption scheme.
Deleterious effects due to missing or particularly noisy data elements in the input data may be handled by the present system since the modelling contribution of each individual input data element may be weighted relative to that of the other data elements, with the individual weights being estimated by the encoder system itself.
The preferred embodiment of the present invention uses various two-way bi-linear factor models, each consisting of a sum (hence the term "linear") of factor contributions, each factor being defined as the product of two types of parameters, a score and a loading (hence the term "bi-linear"). These parameters describe, e.g., temporal and spatial change information, respectively. This type of modelling may be generalized or extended. One such generalization is the use of higher-way models, such as a tri-linear model where each factor contribution is the product of three types of parameters, instead of just two. Alternatively, each of the bi-linear factors may be further modelled by its own bi-linear model.
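The bi-linear structure described above — data approximated as a sum of factor contributions, each contribution the outer product of a temporal score column and a spatial loading row — can be made concrete with a small sketch. The variable names and toy values are illustrative only:

```python
import numpy as np

def bilinear_reconstruct(U, P):
    """Reconstruct data as a sum of factor contributions.

    Each factor f contributes the outer product of its temporal
    score column U[:, f] and spatial loading row P[f, :], hence
    'bi-linear': linear in the scores and linear in the loadings.
    """
    return sum(np.outer(U[:, f], P[f, :]) for f in range(P.shape[0]))

U = np.array([[1.0, 0.0],
              [0.5, 1.0]])       # scores: 2 frames x 2 factors
P = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 0.0]])  # loadings: 2 factors x 3 pixels
X = bilinear_reconstruct(U, P)   # 2 frames x 3 pixels
```

Summing the outer products is exactly the matrix product U @ P, which is why such models compress well: N*M data values are represented by only F*(N+M) parameters.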
META MODELLING
Single-sequence meta-modelling
The IDLE model parameters obtained according to the system and method described above already have redundancies within the individual sub-operands removed. However, the model parameters may still have remaining redundancies across domains and sub-operands. For instance, the spatial pattern of how an object changes color intensity may resemble the spatial pattern of how that object also moves. Thus, there is spatial correlation between some color and movement loadings in XRef.
Similarly, the temporal patterns of how one object changes color over time may resemble how that object or some other object moves over time. In this latter case, there is temporal correlation between some color and movement scores in USeq.
Meta-modelling is essentially the same as IDLE modelling, except that the input is the set of model parameters rather than a set of input frames.
Spatial meta-modelling
Spatial meta-modelling is essentially the same as IDLE modelling; however, the inputs to the model are now the individual loadings determined as part of a first IDLE model. For each holon of the initial model XRef, we may collect all the factor loadings of all colors, e.g., in the case of RGB representations: red loadings R(f)Ref, f=0,1,2,..., green loadings G(f)Ref, f=0,1,2,..., and blue loadings B(f)Ref, f=0,1,2,..., totalling F factors, into an equivalent single meta-sequence consisting of F intensity "frames," each frame being an intensity loading having the same size as the holon in the extended reference frame. When each of the loadings is strung out as a line, as in the Spatial Widener in the Interpreter, the color intensity loadings form an FxM matrix, with a total of F intensity loadings each having M pixels. A singular value decomposition (svd) of this matrix generates meta-factors with meta-loadings for each of the M pixels and meta-scores for each of the F original factors. The svd yields a perfect reconstruction of the original loadings if the number of meta-factors equals the smaller of M or F. However, if there are significant inter-color spatial correlations in the original loadings, these will be accumulated in the meta-factors, resulting in fewer than the smaller of M or F factors necessary for proper reconstruction. The meta-scores indicate how the F
original color factor loadings are related to each other, and the meta-loadings indicate how these interrelations are spatially distributed over the M pixels.
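The SVD step above can be sketched numerically. Here the F loadings are simulated to share only two underlying spatial patterns (strong inter-color spatial correlation), so the meta-model needs just two meta-factors rather than min(M, F); all names and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
M, F = 200, 6
# Simulate F color-intensity loadings that share only 2 underlying
# spatial patterns, i.e. strong inter-color spatial correlation:
patterns = rng.standard_normal((2, M))
mix = rng.standard_normal((F, 2))
L = mix @ patterns                      # F x M matrix of strung-out loadings

U, s, Vt = np.linalg.svd(L, full_matrices=False)
rank = int(np.sum(s > 1e-8 * s[0]))     # meta-factors actually needed

meta_scores = U[:, :rank] * s[:rank]    # how the F loadings interrelate
meta_loadings = Vt[:rank, :]            # spatial distribution over M pixels
L_hat = meta_scores @ meta_loadings     # reconstruction from the meta-model
```

With real loadings the trailing singular values would be small rather than zero, and the number of retained meta-factors would be a rate/quality trade-off.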
Similarly, if there are spatial intercorrelations between how one holon moves in the three coordinate directions, spatial meta-modelling of the smile loadings in the horizontal, vertical and depth directions will reveal these intercorrelations. Likewise, if there are spatial intercorrelations between how one holon changes with respect to two or more probabilistic properties, these probabilistic redundancies can be consolidated using spatial meta-modelling of the loadings of the various probabilistic properties.
Finally, the spatial meta-modelling may instead be performed on the color intensity, movement and probabilistic change loadings simultaneously for each holon or for groups of holons. Again, the spatial meta-loadings represent the spatial correlation redundancies within the original IDLE
model, and the spatial meta-scores quantify how the original IDLE factor loadings are related to each other with respect to spatial correlation. As in standard principal component analysis, if the original input loading matrix is standardized, the distribution of eigenvalues from the svd indicates the degree of intercorrelation found, H. Martens and T. Naes, Multivariate Calibration, Chapter 3 (John Wiley & Sons, 1989), which is incorporated herein by reference.
Such direct svd on spatial loadings may be considered the equivalent of spatial blush modelling at the meta level.
Similarly, the spatial meta-modelling using only meta-blush factors may be extended to full IDLE modelling, with meta-reference, meta-blush, meta-smile and meta-probabilistic models. One of the original loadings may be used as a meta-reference. The spatial meta-smile factors then define how regions in the different original loadings need to be moved in order to optimize their spatial redundancy. The meta-holons need not be the same as the original holons. Spatial meta-holons may be defined as either portions of the original holons or groups of the original holons, having regions with similar systematic spatial inter-loading correlation patterns. Other probabilistic spatial meta-sub-operands such as spatial meta-transparency allow blending of the different spatial meta-holons.

Temporal meta-modelling
Temporal meta-modelling is essentially the same as IDLE modelling; however, the inputs to the model are now the scores determined as part of a first IDLE model. In much the same manner as the meta-modelling of the original spatial change factor loadings in XRef, an IDLE meta-modelling may be applied to the sequence scores in USeq. The temporal meta-analysis may be performed on some or all of the sub-operand factors for some or all of the holons over some or all of the sequence frames.
The temporal meta-factor loadings thus indicate how the different frames n=1,2,...,N in the original video sequence relate to each other, and the temporal meta-factor scores f=1,2,...,F (for whichever sub-operands and holons are being meta-analyzed together) indicate how the scores of the different factors in the original IDLE model relate to each other.
Simple svd on the NxF matrix of scores then models whatever temporal redundancies existed between the factors of the original IDLE model.
Such simple svd of the factor scores corresponds to temporal meta-blush modelling. Full temporal IDLE meta-modelling allows a reference which is a function of time, rather than a function of space as is the case with standard IDLE
modelling. In this situation, meta-holons represent event(s) or action(s) over time, meta-smile factors represent a time shift of the event(s) or action(s), and meta-blush factors represent the extent of the event(s) or action(s). The meta-reference may be chosen to be one of the original factor score series through the video sequence.
The temporal meta-smile factors can therefore be used to model systematic, yet complicated, temporal deviations away from the meta-reference pattern for the other change patterns represented by the original IDLE model. For instance, if the movements of one object (e.g., a trailing car) in the original sequence followed in time the movements and color changes of another object (e.g., brake lights of a lead car), but exhibited varying, systematic delays (e.g., due to varying acceleration patterns), this would give rise to temporal meta-smile factors. The loadings of the temporal meta-smile factors indicate how the different frames in the original input sequence relate to each other, and the temporal meta-smile scores indicate how the different factors in the original IDLE model relate to each other.
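The lead-car/trailing-car example amounts to detecting a systematic time shift between two factor score series. As a simplified, hedged stand-in for the temporal meta-smile machinery, the constant part of such a delay can be found by normalized cross-correlation; the function and data below are illustrative assumptions, not the patent's procedure:

```python
import numpy as np

def estimate_delay(lead, trail):
    """Estimate the lag (in frames) by which `trail` follows `lead`,
    using cross-correlation of the standardized score series. A simple
    stand-in for the time-shift modelled by temporal meta-smile factors."""
    lead = (lead - lead.mean()) / lead.std()
    trail = (trail - trail.mean()) / trail.std()
    corr = np.correlate(trail, lead, mode="full")
    return int(np.argmax(corr)) - (len(lead) - 1)

n = np.arange(100)
lead_scores = np.sin(2 * np.pi * n / 25)   # e.g. brake-light blush scores
trail_scores = np.roll(lead_scores, 7)     # trailing car reacts 7 frames later
lag = estimate_delay(lead_scores, trail_scores)
```

A full meta-smile model would additionally capture how the delay itself varies over time, not just its constant component.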

The temporal meta-holons generally correspond to discrete temporal events that are best modelled separately from each other. Meta-transparency factors may then be used to smoothly combine different temporal holons. The model parameters of the meta-modelling processes described above may in turn themselves be meta-modelled.
When meta-modelling is used in the Encoder ("meta-encoding"), the Decoder system may have corresponding inverse meta-modelling ("meta-decoding").
Multi-sequence meta-modelling
The single-sequence meta-modelling described above may be further applied to multi-sequence meta-modelling. One primary application of multi-sequence meta-modelling is video coding, where it is used to relate IDLE models from different, but possibly related, video sequences. One way to merge two or more related IDLE models is to meta-model their loadings or scores directly, as described above. Such direct meta-modelling of spatial structures is useful if the extended reference images are the same or very similar. However, the direct spatial meta-modelling is difficult to accomplish if the sequences have differently sized extended reference images.
Furthermore, although physically achievable, the result is rather meaningless if the extended reference image sizes are the same, but the holons are different.
The direct temporal meta-modelling is also useful if the sequences are of the same length and reflect related events, such as the leading/trailing car example discussed above. Meta-modelling is difficult to perform if the sequences cannot be separated into sub-sequences of the same length, and becomes rather meaningless if the sequences do not reflect related events.
Indirect multi-sequence meta-modelling
Indirect multi-sequence meta-modelling is the use of two or more stages of meta-modelling: one stage for making two or more model parameter sets compatible, and a second stage of meta-modelling of the resulting compatible sets. Indirect multi-sequence meta-modelling is more flexible than the meta-modelling described above, in that it allows a single model to model a larger class of phenomena.
In the preliminary phase of spatial meta-modelling, the extended reference images and the associated factor loadings of one or more sub-sequences are used to establish a new extended reference image, e.g., by simple IDLE modelling. An alternative method of linking together two spatial sub-sequence models in order to form a new extended reference image is described in further detail in the Appendix MERGE_SUBSEQUENCES.
This latter approach is applicable if the sub-sequences overlap each other by at least one frame.
Preliminary temporal meta-modelling achieves temporal compatibility of one or more temporal reference sub-sequences and associated factor scores, with the temporal reference sub-sequence of another sub-sequence. This may be accomplished using a simple IDLE model to model the temporal domain.

Once compatibility has been achieved in the spatial and/or temporal domains, the different sub-sequence models may then be jointly meta-modelled as if they belonged to a single sub-sequence.
Combining of models using meta-modelling
The scores and loadings from one model may be combined with the loadings and scores from different models.
Alternatively, the scores or loadings of one model may be replaced with other scores or loadings from an alternate source, e.g., a real-time joystick input, and be combined using meta-modelling. Lip synchronization between sound and image data in video dubbing is one example of combining models using meta-modelling. Specifically, smile scores may be estimated from an already established IDLE image mouth movement model.
These scores may then be matched to a corresponding time series representing the sounds produced by the talking mouth. Lip synch may then be accomplished using meta-modelling of the image scores from the already established model and the sound time series loadings to provide optimal covariation of the image data with the sound time series.
Another application of combining models using meta-modelling of IDLE parameters is the modelling of covariations between the IDLE parameters of an already established model and external data. For example, if IDLE modelling has been used to model a large set of related medical images in a database, the IDLE scores for selected images may be related to the specific medication and medical history for each of the subjects of the corresponding images. One method for performing this covariation analysis is the Partial Least Squares Regression # 2 ("PLS2"), as described in H. Martens and T. Naes, Multivariate Calibration, pp. 146-163 (John Wiley & Sons, 1989), which is incorporated herein by reference.
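The text prescribes PLS2 for this covariation analysis; as a simplified stand-in, the same idea — linking the IDLE score matrix for a set of images to a matrix of external variables — can be sketched with ordinary least squares. Everything here (names, dimensions, synthetic data) is an illustrative assumption, not the PLS2 algorithm itself:

```python
import numpy as np

# Link IDLE scores U (images x factors) to external data Y
# (images x medical variables) by ordinary least squares, a
# simplified stand-in for the PLS2 regression named in the text.
rng = np.random.default_rng(2)
n_images, n_factors, n_vars = 40, 3, 2
U = rng.standard_normal((n_images, n_factors))   # IDLE scores per image
B_true = rng.standard_normal((n_factors, n_vars))
Y = U @ B_true                                   # synthetic external data

B, *_ = np.linalg.lstsq(U, Y, rcond=None)        # fitted coefficient matrix
Y_hat = U @ B                                    # predicted external data
```

PLS2 would instead extract a small number of latent components that maximize covariance between U and Y, which is preferable when the score matrix is wide or collinear.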

Joint vs separate movement modeling for the different image input channels.
The typical input for a color video sequence has six input quantities: 3 implicit position dimensions (vertical, horizontal and depth) and 3 explicit intensities (e.g., R, G, B).
In the preferred embodiment of the basic IDLE system, it is assumed that the three intensity channels represent input from the same camera and hence information relating to the same objects. Thus, the same segmentation and movements (S and opacity, smile and nod) are assumed for all three color or intensity channels. The color channels are only separated in the blush modelling. Further model redundancy is then eliminated by joint multivariate modelling of the various loadings as described above.
Alternatively, the basic IDLE system may be modified to have stronger connectivity between input quantities, i.e., to model blush information in the different color channels simultaneously, by requiring each blush factor to have one common score for each frame, but different loadings for each color channel. This gives preference to intensity changes with the same temporal dynamics in all color channels for a holon or a group of holons, and could for instance be used in order to stabilize the estimation of the factors, as well as for editing and compression.
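The constraint described above — one common temporal score per blush factor, with separate loadings per color channel — can be sketched by stacking the channel matrices side by side and factoring the joint matrix, so that any factorization is forced to share one score series across channels. The setup below is a synthetic illustration, not the patent's estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 30, 80                       # frames x pixels in one holon
u = rng.standard_normal((N, 1))     # one common temporal blush score
pr = rng.standard_normal((1, M))    # per-channel loadings (R, G, B differ)
pg = rng.standard_normal((1, M))
pb = rng.standard_normal((1, M))
R, G, B = u @ pr, u @ pg, u @ pb    # per-channel intensity change matrices

# Stacking the channels horizontally and factoring the joint matrix
# yields a single score series shared by all three channels:
X = np.hstack([R, G, B])            # N x 3M joint matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
common_score = U[:, 0] * s[0]       # one shared score per frame
joint_loading = Vt[0, :]            # concatenated R|G|B loadings
X_hat = np.outer(common_score, joint_loading)
```

Modelling the channels separately would instead factor R, G and B independently, allowing different temporal dynamics per channel at the cost of three score series.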
Instead, the basic IDLE system may be modified to have weaker connectivity between input quantities, where movement is modeled more or less independently for each color channel separately. This could be computationally advantageous and could give more flexibility in cases where the different channels in fact represent different spatial information.
One example of independent movement modelling is the case of multi-sensor geographical input images from a set of surveillance satellites equipped with different sensors. Based on one or more repeated recordings of the same geographical area taken at different times from different positions, and possibly exhibiting different optical aberrations, different times of recording and different resolutions, the IDLE system could be used for effective normalization, compression and interpretation of the somewhat incongruent input images. The different sensor channels may exhibit quite different sensitivities to different spatial structures and phenomena. For example, radar and magnetometric imaging sensors may be sensitive to land and ocean surface height changes, whereas some photon-based imaging sensors, e.g., UV, Visible and Infrared cameras, may have varying sensitivities to various long-term climatic changes and vegetation changes, as well as short-term weather conditions. In this situation, the IDLE system may require separate movement and blush modelling for the independently observed channels.
Another example of this type of system is input data obtained from several medical imaging devices (MRI, PET, CT) repeatedly scanning a given subject over a period of time in order to monitor cancer growth, blood vessel changes or other time-varying phenomena. Since each device requires separate measurements, the subject will be positioned slightly differently for each different device and for each scan over the course of the repeated measurements. The movement of biological tissue typically does not follow affine transformations.
Thus, IDLE smile factors may be a more flexible, yet sufficiently restrictive, way of representing body movements and allow the required normalization. Each imaging device could then have its own subset of smile factors from its extended reference position to the results for each individual set of scans from the various imaging devices. With the resulting normalization, blush factors and local smile factors that give early warning of slowly developing tissue changes may be detected. This is particularly effective if the extended reference position is normalized, e.g., by meta-modelling, for the different imaging devices for maximum spatial congruence.
In this way, the joint signal from all the channels of the different imaging devices may be used to stabilize the modelling against measurement noise, e.g., by requiring that the blush factor scores for all channels be identical and that only the loadings be different.

Generalizations from analysis of two-dimensional inputs (images)
The IDLE modelling system described above may be used for input records of a different format than conventional two-dimensional video images. For instance, it may be used for one-dimensional data, such as a time series of lines from a line camera, or as individual columns in a still image.
The IDLE system may in the latter case be used as part of a still image compression system. In this type of application, the input information to the still image encoder is lines or columns of pels instead of two-dimensional frame data. Each input record may represent a vertical column in the two-dimensional image. Thus, the still image IDLE loading parameters are column-shaped instead of two-dimensional images.
The time dimension of the video sequence (frames n=1,2,...) is replaced in this case by the horizontal pel index (column number) in the image.
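The column-as-record reinterpretation above can be sketched in a few lines: each vertical column of the still image plays the role of one input frame, so the column index takes over the role of the frame number n. The array and names are purely illustrative:

```python
import numpy as np

# Treat each column of a (rows x cols) grayscale still image as one
# input "record," so the column index plays the role of the frame
# number n in the video case.
image = np.arange(12, dtype=float).reshape(3, 4)   # 3 rows x 4 columns

records = image.T           # one record per column: shape (n_cols, n_rows)
n_records = records.shape[0]
first_record = records[0]   # the leftmost image column
```

The still-image encoder then models this (n_cols x n_rows) record matrix exactly as the video encoder models its (n_frames x n_pixels) data, producing column-shaped loadings.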

Simultaneous modeling for different input dimensions
If the input to the still-image IDLE codec is an RGB
still image, then the three color channels (or a transform of them, like YUV) may be coded separately or jointly, as discussed above for the video IDLE codec. Likewise, if the input to the still-image IDLE codec is a set of spatial parameters of the extended image model from a video IDLE codec, the different input dimensions (blush factors, smile factors, probabilistic factors) may be coded separately or jointly.

The present invention, which has been described above in the context of a video compression application, may be applied to any of a number of information processing and/or acquisition applications. For example, in the case of the processing of image sequences or video sequences for modelling or editing a video sequence (a set of related images) in black/white or color, the modelling is carried out with respect to IDLE parameters in such a way as to optimize the editing usefulness of the model parameters. The model parameters are possibly in turn related to established parameter sets, and other known editing model elements are forced into the model.
Groups of parameters are related to each other in hierarchical fashion. The sequence is edited by changing temporal and/or spatial parameters. Sets of related video sequences are modelled jointly by multi-sequence meta-modelling, i.e., each related sequence is mapped onto a 'Reference sequence' by a special IDLE meta-model.
The present invention may also be applied to compression for storage or transmission. In this application, a video sequence is modelled by IDLE encoding, and the resulting model parameters are compressed. Different compression and representation strategies may be used depending on the bandwidth and storage capacity of the decoding system. Temporal sorting of the change factors, and pyramidal representation and transmission of the spatial parameters, may be used to increase the system's robustness in the face of transmission bandwidth limitations.

Similarly, the present invention may be applied to the colorization of black/white movies. In this case, the black/white movie sequences are modelled by IDLE encoding. The spatial holons in IRef are colored manually or automatically, and these colors are automatically distributed throughout the sequence. Sets of related sequences may be identified for consistent coloring.
In addition, the present invention may be used in simulators, virtual reality, games and other related applications. The relevant image sequences are recorded and compressed. When decoding, a few chosen scores may be controlled by the user, instead of using the recorded scores. Similarly, other scores may be varied according to the user-controlled scores. For example, in the case of a traffic simulator:
record sequences of the interior of a car and of the road and the terrain; identify those scores, probably nod scores, that correspond directly to how the car moves; determine those scores that change indirectly based on those nod scores, such as smile/blush factors for illumination, shadows, perspective, etc.; and set up a mathematical model that defines how the car reacts to certain movements of the control inputs, such as the steering wheel, accelerator pedal, brake pedal, etc. The user can then sit in a simulated car interior, with a display in front and perhaps also on the sides. The simulated controllers are then connected to the "direct" factors, which in turn may be used to control the "indirect" factors. The resulting images will give a very naturalistic effect.

The present invention also has application in real-time systems such as video telephone, television, and HDTV.
Extreme compression ratios for very long sequences may be attained, although there may be bursts of spatial information at the onset of new sequences. This application also includes real-time encoding and decoding. Depending on the computational power available, different degrees of IDLE algorithm complexity may be implemented. For instance, information in the spatial domain may be represented by a standard Gaussian Pyramid (ref), with the IDLE encoder algorithm operating on variable image size depending on the particular application's capacity and needs. The encoder Interpreter parts for widening, extending or deepening do not have to be fully real-time for each frame. The complexity of the scenes and the size of the image then define the compression ratios and coding qualities which may be attained.
The present invention may also be used in remote camera surveillance. By employing a remote real-time encoder at the source of the image information, both interpretation and transmission of the camera data are simplified. The general blush factors model normal systematic variations such as various normal illumination changes, while general smile factors and nod factors correct for normal movements (e.g., moving branches of a tree). The automatic outlier detection and spatial model extender detect systematic redundancies in the unmodelled residuals and generate new holons which in turn may be interpreted by searching in a data base of objects before automatic error warnings are issued. Each object in the data base may have its own smile, blush and probability factor loadings and/or movement model. The compressed parameters may be stored or transmitted over narrow bandwidth systems, e.g., twisted-pair copper telephone wire transmission of TV camera output from security cameras in banks etc., or over extremely narrow bandwidth systems, such as are found in deep water or outer space transmission.
Images from technical cameras, i.e., images not intended for direct human visualization, may also be modeled/compressed using the IDLE technique. The more 'color' channels, the more effective the meta-modelling compression of the spatial IDLE models. Examples of this application include multi-wavelength channel camera systems used to monitor biological processes in the Near Infrared (NIR), or Ultra-Violet/Visible wavelength ranges (e.g., for recording fluorescence).
The IDLE system may also be used in conjunction with multichannel satellites and/or aerial photography. Repeated imaging of the same geographical area under different circumstances and at different times may be modelled by IDLE encoding. Such parameterization allows effective compression for storage and transmission. It also provides effective interpretation tools indicating the systematic intensity variations and movements, and how they change over time. If the same geographical area is imaged from slightly different positions or under different measuring conditions, then an extra IDLE preprocessing model may be used for improved alignment, allowing the geographical area to differ quite significantly (e.g., more or less day-light) and yet allow accurate identification.
The IDLE approach of the present invention may also be utilized in cross-domain coordination or lip synch applications for movie production and in sound dubbing. For multivariate calibration, the temporal parameter scores from an IDLE video model of the mouth region of talking persons are related to the temporal parameters for a speech sound model (e.g., a subband or a CELP codec, or an IDLE sound codec), e.g., by PLS2 regression. This regression modelling may be based on data from a set of movie sequences of people speaking with various known image/sound synchronizations, thus modelling the local lip synch delay for optimizing the lip-sound synchronization. For each new sequence with lip synch problems, the same image and sound model score parameters are estimated. Once estimated, this local lip synch delay is corrected or compensated for by modifying the temporal IDLE parameters and/or sound parameters.
The IDLE principle may also be applied to database compression and/or searching. There are many databases in which the records are related to each other, but these relationships are somewhat complicated and difficult to express by conventional modelling. Examples of this type of application include police photographs of human faces ("mugshots"), various medical images, e.g., MRI body scans, photographs of biological specimens, photographs of cars, etc. In such cases, the content of the database can be analyzed and stored utilizing IDLE model parameters. The IDLE representation of related, but complicated, information in a database offers several advantages, viz., high compression, improved searchability and improved flexibility with respect to the representation of the individual records in the database. The compression which may be achieved depends on how many records can be modelled and how simple the IDLE model which is used is, i.e., on the size and complexity of the database content.
The improved searchability (and interpretability) stems from the fact that the data base search in the case of IDLE representation may be performed using the low-dimensional set of parameters corresponding to factor scores (e.g., a low number of nod, smile and blush scores), as opposed to the large amount of original input data (e.g., 200,000 pixels per image). Compression techniques using fractals or DCT do not yield similar searchable parameters. The few IDLE score variables may in turn be related statistically to external variables in the database, providing the capability to search for larger, general patterns, e.g., in the case of medical images and medical treatments. The improved flexibility due to the representation of the records in the database stems from the fact that the bilinear IDLE factors allow whatever flexibility is desired. Equipping the holon models with a few smile and blush factors allows systematic unknown variations to be quantified during the pattern recognition without statistical overparameterization.
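Such a score-space search can be illustrated with a minimal sketch; the function name, array shapes and use of Euclidean distance are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def nearest_records(query_scores, db_scores, k=5):
    """Return indices of the k database records whose factor scores
    lie closest (Euclidean) to the query's scores.
    db_scores: (n_records, n_factors); query_scores: (n_factors,)."""
    distances = np.linalg.norm(db_scores - query_scores, axis=1)
    return np.argsort(distances)[:k]

# A database of 100,000 records is searched over, say, 20 scores per
# record instead of 200,000 pixels per image.
rng = np.random.default_rng(0)
db = rng.normal(size=(100_000, 20))
hits = nearest_records(db[42], db, k=3)
```

Because the score vectors are tiny compared with the raw images, the whole search fits comfortably in memory and runs in a single vectorized pass.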

The use of IDLE modelling in database representation may be used for a variety of record types in databases, such as image databases containing human faces, e.g., medical, criminal; real estate promotional material; or technical drawings. In these situations, the IDLE modeling may allow multiple use of each holon in each drawing; the holons could in this special case be geometrical primitives. Additional applications include sound (music, voice), events (spatiotemporal patterns), situations (e.g., weather situations which combine various meteorological data for various weather structures or geographical locations, for a certain time-span).
The IDLE principle may also be used for improved pattern recognition. In matching unknown records against various known patterns, added flexibility is obtained when the known patterns also include a few smile and blush factor loadings whose scores are estimated during the matching process. In searching an input image for the presence of a given pattern, added flexibility is obtained by allowing the holons to include a few smile and blush loadings, whose scores are estimated during the searching process. This type of pattern recognition approach may be applied to speech recognition.
The IDLE principle may also be applied to medical and industrial imaging devices, such as ultrasound, MRI, CT, etc., in order to provide noise filtering, automatic warnings, and improved interpretation. In medical ultrasound imaging, noise is a major problem. The noise is so strong that filtering on individual frames to reduce the noise will often also destroy important parts of the wanted signal. Much of the noise is random and additive with an expectation of zero, and if many samples could be collected from the same part of the same object, then the noise could be reduced by averaging samples.
It is often impossible to keep the measured object or subject steady, and the observed movement can seem to be quite complex.
However, the observed movement is due to a limited number of reasons, and so the displacements will need relatively few IDLE smile and nod factors. In the reference position, noise can be averaged away. The smile and blush factors can also be useful for interpreting such sequences. Finally, ultrasound sequences represent such large amounts of raw data that they are difficult to store. Most often only one or a few still images are stored. The compression aspect of the present invention is therefore highly applicable.
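The averaging argument can be sketched numerically; `move_back` below is a hypothetical stand-in for the IDLE smile/nod back-transform to the reference position (identity here, as if the frames were already aligned), and all sizes and noise levels are illustrative:

```python
import numpy as np

def average_in_reference(frames, move_back):
    """Average frames after each has been moved back to the reference
    position; zero-mean additive noise is attenuated by roughly
    1/sqrt(n_frames)."""
    acc = np.zeros_like(frames[0], dtype=float)
    for n, frame in enumerate(frames):
        acc += move_back(frame, n)   # back-transform of frame n
    return acc / len(frames)

rng = np.random.default_rng(1)
clean = np.full((64, 64), 100.0)                 # stand-in "true" image
noisy = [clean + rng.normal(0.0, 10.0, clean.shape) for _ in range(50)]
denoised = average_in_reference(noisy, lambda frame, n: frame)
```

With 50 frames the residual noise level drops from a standard deviation of about 10 to about 10/sqrt(50), which is the effect the displacement model makes possible without blurring the moving anatomy.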
The IDLE principle of the present invention may also be used for credit card and other image data base compression applications. For example, in the case of compression, whenever there are sets of images with similar features, this set of images could be regarded as a sequence and compressed with the IDLE technique. This is readily applicable to databases of facial images. If all the loads are known at both the encoder and the decoder side, only the scores need to be stored for each individual. These scores would then be able to fit into the storage capacity of the magnetic stripe on a credit card, and so could form the basis for an authentication system.
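As a rough illustration of the storage argument (the track capacity and score range below are assumptions for the sketch, not figures from the disclosure), the per-individual scores could be quantized to one byte each:

```python
import numpy as np

STRIPE_BYTES = 107  # assumed usable capacity of one magnetic-stripe track

def pack_scores(scores, lo=-4.0, hi=4.0):
    """Quantize factor scores to one byte each, uniformly over [lo, hi].
    With the loadings shared by encoder and decoder, these few bytes
    are all that must travel on the card."""
    q = np.clip((np.asarray(scores) - lo) / (hi - lo), 0.0, 1.0)
    return (q * 255).round().astype(np.uint8).tobytes()

scores = np.array([0.7, -1.2, 2.5, 0.0, -3.1])   # e.g. 5 smile/blush scores
payload = pack_scores(scores)
assert len(payload) <= STRIPE_BYTES
```

Even a few dozen scores fit with room to spare, whereas the raw facial image itself would exceed the stripe capacity by several orders of magnitude.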

Other applications of the IDLE principle include still image compression; radar (noise filtering, pattern recognition, and error warnings); automatic dynamic visual art (in an art gallery or for advertisement, two or more computers with, e.g., flat color LCD screens where the output from IDLE models is shown; the score parameters of the IDLE model on one computer are functions of the screen output of the other IDLE models, plus other sensors in a self-organizing system); consumer products or advertisement (one computer with, e.g., a color flat LCD screen displays output from an IDLE model whose scores and loadings are affected by a combination of random number generators and viewer behavior); and disjoint sensing & meta-observation (when a moving scene has been characterized by different imaging sensors at sufficiently different times such that the images cannot be simply superimposed, IDLE modelling may be used to normalize the moving scene for simpler superimposition).
The IDLE system may also be used for data storage device normalization (magnetic, optical). Specifically, if the physical positioning or field intensity of the writing process varies, or the reading process or the medium itself is varying and difficult to model and correct for by conventional modelling, IDLE modelling using nod, smile and/or blush factors may correct for systematic, but unknown, variations. This may be particularly critical for controlling multilayer read/write processes. In such an application, the already written layers

may serve as input data for the stabilizing latent smile and blush factors.
The IDLE principle of the present invention also has numerous sound applications. For example, sound, such as music, voice or electromechanical vibrations, may be modelled and compressed utilizing parameterization by fixed translation/nod, systematic shift/smile, intensity/blush and overlap/opacity in the various domains (e.g., time, frequency). A holon in sound may be a connected sound pattern in the time and/or frequency domains. Additional sound applications include sound modification/editing; industrial process monitoring; automotive, ships, aircraft. Also, searching may be carried out in sound data bases (similar to searching in image or video databases discussed above). It is thus possible to combine IDLE modelling in different domains, such as sound modelling both in the time and the frequency domains.
The IDLE principle may also be used in weather forecasting; machinery (robot quality control monitoring using a camera as a totally independent sensor and allowing the IDLE system to learn its normal motions and warn of wear & tear and abnormal behavior); robot modelling which combines classical robot connectivity "hard" nod trees with IDLE smile models for "softly" defined movements; and using such "soft" and "hard" robot modelling in conjunction with blush factors to model human body motion.

The IDLE principle of the present invention may also be used for forensic research in the areas of finger prints, voice prints, and mug shot images.
While the invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

DECODER-APPENDIX
1. Overview
2. Frame Reconstruction
2.1 Intuitive explanation
2.2 INRec Formula
2.3 Holonwise loading-score matrix multiplication
2.4 Smile
2.5 Nod
2.6 Move
2.7 Ad hoc residuals
3. References

1. Overview

In order to increase readability, colloquial abbreviations are used in this description instead of the indexed and subscripted symbolism used elsewhere in the application.
The decoder performs the following steps for each frame n:
Receives updates of the segmentation S field part of domain PRef: S
Receives updates of the scores ("Sco") for the blush intensity changes ("Blu"), BluSco; the vertical and horizontal address smile changes ("Smi"), SmiSco; the 3D depth changes (Z), ZSco; and probabilistic changes ("Prob"), ProbSco, for each holon.

Receives updates of the Blush, Smile, Prob and Z loadings for XRef (abbreviated "Loads" or "Lod"): BluLod, SmiLod, ProbLod, ZLod.
Receives updates of the affine transformation ("Nod") matrices, NodMat, containing the nod scores.
Receives optional error residuals ("Res") em = (BluRes, SmiRes, ZRes, ProbRes).
Reconstructs the intensity of the present frame (here termed IN) based on the S field, scores, loads and Nod matrices, to produce a reconstructed inhat result ("INRec").

2. Frame Reconstruction

2.1 Intuitive explanation

Blush the image by changing the pixel intensities of the pixels at the various color channels in the reference image according to the blush factors.
Smile the image by changing the address values of the pixels in the reference image according to the smile factors (including the Z factors).
Change the probabilistic properties of the image by changing the probabilistic suboperands, like transparencies, in the reference image according to the prob factors.
Nod the smiled coordinates by changing the smiled addresses of the pixels according to nod matrices.
Move the pixels from the blushed reference image into the finished image so that each pixel ends up at its smiled and nodded coordinates, so that "holes" in the image are filled with interpolated values, so that the pixel with the highest Z value "wins" in the cases where several pixels end up at the same coordinates, and so that pixels are partly transparent if they have a Prob value lower than 1.
Optional: Add residual corrections to the reconstructed intensities.
Optional: Post process the resulting output image to provide smooth blending of holons, especially along edges formed during the Mover operator due to movements. In the preferred embodiment, this is accomplished by blurring along all segment edges in the moved images.
2.2 INRec Formula

The formula for computing INRec is as follows:
INRec = Move(IRef + BluSco*BluLod, S,
             Nod([V H] + SmiSco*SmiLod, Z + ZSco*ZLod, NodMat, S),
             ProbSco*ProbLod)

2.3 Holonwise loading-score matrix multiplication

In an expression such as "BluSco*BluLod", the multiplication does not imply traditional matrix multiplication, but rather a variation referred to as holonwise loading-score matrix multiplication. That is, each holon has its own score, and for each pixel, the S field must be analyzed in order to determine which holon that pixel belongs to, and this holon number must be used to select the correct score from BluSco.

To compute BluSco*BluLod:
For each Pixel:
    Sum = 0
    For each Factor:
        Sum = Sum + BluSco[S[Pixel],Factor] * BluLod[Factor,Pixel]
    Result[Pixel] = Sum

The same applies to SmiSco*SmiLod, ZSco*ZLod and ProbSco*ProbLod.
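The loop above translates directly into executable form; a minimal NumPy sketch (the function names and array shapes are illustrative assumptions, not the patent's notation):

```python
import numpy as np

def holonwise_mult(sco, lod, s_field):
    """Holonwise loading-score matrix multiplication: for each pixel,
    the row of scores is selected by that pixel's holon number in the
    S field.  sco: (n_holons, n_factors); lod: (n_factors, n_pixels);
    s_field: (n_pixels,) of holon indices."""
    result = np.zeros(lod.shape[1])
    for pixel in range(lod.shape[1]):
        total = 0.0
        for factor in range(lod.shape[0]):
            total += sco[s_field[pixel], factor] * lod[factor, pixel]
        result[pixel] = total
    return result

def holonwise_mult_fast(sco, lod, s_field):
    # Vectorized equivalent: select each pixel's score row via the
    # S field, then contract over the factor index.
    return np.einsum('pf,fp->p', sco[s_field], lod)
```

The vectorized form gives the same result and is closer to what a practical decoder would use.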

2.4 Smile

Smiling pixels means to displace the reference position coordinates according to the address change field. The address change field may have values in each coordinate dimension, such as the vertical, horizontal and depth dimensions (V,H,Z), and may be defined for one or more holons. Each address change field may be generated as the sum of contributions of smile factors, and each change factor contribution may be the product of temporal scores and spatial loadings.

In order to displace information of pixels away from the reference position, the amount of motion for each of these pixels in the reference position (the address change field DARef) may be computed first; the actual moving operation then takes place later in the Mover stage of the decoder.

For each pixel with coordinates V, H, Z in the reference position, its new address after it has been moved is computed by:

VSmi = V + SmiScoV*SmiLodV
HSmi = H + SmiScoH*SmiLodH
ZSmi = Z + SmiScoZ*SmiLodZ

In these three expressions, V and H are the coordinates of each pixel in the reference position, while Z is the value of the Z field for that pixel. The multiplication is holonwise loading-score matrix multiplication, as defined in the previous paragraph.
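As a sketch (names and shapes are assumptions), the three expressions reduce to one holonwise product per coordinate dimension:

```python
import numpy as np

def holonwise(sco, lod, s_field):
    # Holonwise score-loading product of section 2.3, vectorized.
    return np.einsum('pf,fp->p', sco[s_field], lod)

def smile_coordinates(v, h, z, s_field,
                      smi_sco_v, smi_lod_v,
                      smi_sco_h, smi_lod_h,
                      smi_sco_z, smi_lod_z):
    """VSmi = V + SmiScoV*SmiLodV, HSmi = H + SmiScoH*SmiLodH,
    ZSmi = Z + SmiScoZ*SmiLodZ, with '*' taken holonwise."""
    v_smi = v + holonwise(smi_sco_v, smi_lod_v, s_field)
    h_smi = h + holonwise(smi_sco_h, smi_lod_h, s_field)
    z_smi = z + holonwise(smi_sco_z, smi_lod_z, s_field)
    return v_smi, h_smi, z_smi
```

The returned smiled coordinates are not applied to the image here; as the text notes, the actual displacement happens later in the Mover stage.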

2.5 Nod

The function of the Nod is to modify the values of the coordinates of each pixel, which may be conceptualized as a vector having homogeneous coordinates:

ASmi = [VSmi HSmi ZSmi 1]

The nodded coordinates, ANod, are then given by:

| VNod  |   | T11 T12 T13 0 |   | VSmi |
| HNod  | = | T21 T22 T23 0 | * | HSmi |
| ZNod  |   | T31 T32 T33 0 |   | ZSmi |
| Dummy |   | T41 T42 T43 1 |   |  1   |

which may be equivalently expressed as:
ANod = NodMat * ASmi

2.6 Move

Move the pixels into the finished image so that each pixel ends up at its smiled and nodded coordinates, in such a way that "holes" in the image are filled with interpolated values, that the pixel with the highest Z value "wins" in the cases where several pixels end up at the same coordinates, and so that pixels are partly transparent if they have a Prob value lower than 1.
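A minimal sketch of the Mover's Z-buffer rule follows; hole filling by interpolation and Prob-weighted transparency are omitted for brevity, and all names and shapes are illustrative assumptions:

```python
import numpy as np

def move(intensity, v_nod, h_nod, z_nod, height, width):
    """Forward-map pixels to their (rounded) nodded coordinates.
    Where several pixels land on the same target position, the one
    with the highest Z value wins (a minimal Z-buffer)."""
    out = np.zeros((height, width))
    zbuf = np.full((height, width), -np.inf)
    for i in range(len(intensity)):
        r = int(round(v_nod[i]))
        c = int(round(h_nod[i]))
        if 0 <= r < height and 0 <= c < width and z_nod[i] > zbuf[r, c]:
            zbuf[r, c] = z_nod[i]       # this pixel is now frontmost
            out[r, c] = intensity[i]
    return out
```

A production Mover would additionally interpolate to fill holes and blend partly transparent pixels, as described in the references below.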
If the loadings X(f)Ref, f=1,2,... are also moved together with the level 0 image, X(0)Ref, the same interpolation and Z buffering strategies are used for f=1,2,... as for f=0 above.
A description of methods of moving and interpolating pixels may be found in, e.g., George Wolberg, Digital Image Warping, Chapter 7 (IEEE Computer Society Press 1990), which is incorporated herein by reference. A description of Z-buffering may be found in, e.g., William A. Newman and Robert F. Sproull, Principles of Interactive Computer Graphics, Chapter 24 (McGraw Hill 1984), which is incorporated herein by reference. A description of how to combine partly transparent pixels may be found in, e.g., John Y.A. Wang and Edward H. Adelson, "Layered Representation for Image Sequence Coding", IEEE ICASSP, Vol. 5, pp. 221-224, Minneapolis, Minnesota, 1993, which is incorporated herein by reference.

Appendix MERGE_SUBSEQUENCES
Check if the present subsequence model can be merged with other subsequence models.

A. Call the present reference model 'position I', and another reference model 'position II'. Move the spatial model parameters of the extended reference image for the present subsequence, XI, to the position of the extended reference image for another subsequence, XII, using a frame n which has been modelled by both of the subsequences:
1. Since:
In Model I : inhat(I)  = Move(DA(I,n) of I(I) + DI(I,n))
In Model II: inhat(II) = Move(DA(II,n) of I(II) + DI(II,n))

and this generalizes from inhat to all domains in Xnhat:

In Model I : Xnhat(I)  = Move(DA(I,n) of X(I) + DX(I,n))
In Model II: Xnhat(II) = Move(DA(II,n) of X(II) + DX(II,n))

2. We can move the estimate for frame n back to the two respective reference positions:

In Model I : Xnhat(I->I)   = Move(DA(n,I) of Xnhat(I))
In Model II: Xnhat(II->II) = Move(DA(n,II) of Xnhat(II))

3. If the two models mainly contain smile, as opposed to blush modelling, we may now move model I to frame n's estimated position, using model I, and then move model I into model II's position using the reverse of model II:

X(I->II) = Move(DA(n,II) of (Move(DA(I,n) of (XI + DX(I,n)))))

4. The obtained model I loadings given in model II's position, X(I->II), may now be compared to and merged into XII (with local smile and blush estimation and model extension, plus detection of parts in XI lost in X(I->II)). This yields a new and enlarged model XII that summarizes both models I and II.

5. The new and enlarged model XII may now similarly be merged with another model III with which it has another overlapping frame, etc. Subsequences are merged together as long as this does not involve unacceptable degradation in compression and/or reproduction quality.

APPENDIX SIMPLIFIED ENCODER

Purpose:
Show one way of implementing a simplified IDLE encoder.

Contents:

1 EncSeq
2 ExpressSubSeqWithModels
3 ExpressWithModels
4 ExtractSmiFactSubSeq
5 ExtractBluFactSubSeq
6 SegSubSeq
7 AllocateHolon
8 MoveBack
9 AnalyseMove
10 Other required methods
10.1 Move
10.3 Smi2Nod
10.4 UpdateModel
10.5 Transmit
Appendix
Notation

1 EncSeq

Input:
Seq: Sequence of frames; one per row
ErrTol: Error tolerance

Output:
SmiLod: Smile loads
SmiSco: Smile scores
BluLod: Blush loads
BluSco: Blush scores

Informal description:
Work forward through the sequence. Whenever frames cannot be reconstructed with an error less than the tolerance using known smile and blush factors, introduce a new factor. Do this by first trying to introduce a smile factor and then trying to introduce a blush factor. Choose the factor that improved the reconstruction the most.

During this process, different parts of the image may be found to move independently of, or occluding, each other. Each time this is detected, detect which parts of the image move coherently, isolate the smallest and define this as one or more new holons, make new room by increasing the size of the image, place the new holons there, and let a smile factor compensate for this repositioning.

Whenever new information is revealed (that is, parts of the image cannot be moved back to reference position with any fidelity using the existing nod or smile factors), find which holons are nearby and try to model the new information under the assumption that it is an extension to each of these holons. If a good modelling behaviour can be found, extend the holon, else create a new holon.
Take into account how much memory the decoder has left:
If it has much free memory, prefer factors that span many frames and so are believed to be more "correct" (even though they alone may describe each individual frame with less fidelity) by relaxing the test error tolerance TestErrTol.
If it has little free memory, it is important that the required fidelity be reached with the few remaining factors, so the test error tolerance TestErrTol must be tightened.

Method:

IRef = First image in the sequence Seq
Set SmiLod and BluLod to empty

While NextFraNo <= length(Seq)
    [SmiSco, BluSco, FailFraNo] = ...
        ExpressSubSeqWithModels(Seq, NextFraNo, IRef, SmiLod, BluLod, ErrTol)
    If FailFraNo <= length(Seq):

Try different ways of updating the model:

If the decoder has much memory left (based on Transmit history):
    Set TestErrTol to a large value
else if the decoder has little memory left:
    Set TestErrTol to a value close to ErrTol

FromFraNo = FailFraNo

[NewSmiLod, nSmiFra, TotSmiErr] = ...
    ExtractSmiFactSubSeq(Seq, FromFraNo, TestErrTol, SmiLod, BluLod, SmiSco, BluSco)
[NewBluLod, nBluFra, TotBluErr] = ...
    ExtractBluFactSubSeq(Seq, FromFraNo, TestErrTol, SmiLod, BluLod, SmiSco, BluSco)
[NewS, nSegFra, TotSegErr] = SegSubSeq(Seq, FromFraNo, S, TestErrTol)

Based on nSmiFra, nBluFra and nSegFra, and TotSmiErr, TotBluErr and TotSegErr:
    Either select one of Smile or Blush to be included in the model, or change the segmentation

If Smile is selected:
    Transmit(SmiLod)
    Update smile factors:
    [SmiLod,SmiSco] = UpdateModel(SmiLod, SmiSco, NewSmiLod)
else if Blush is selected:
    Transmit(BluLod)
    Update blush factors:
    [BluLod,BluSco] = UpdateModel(BluLod, BluSco, NewBluLod)
else Segment is selected:
    Transmit(NewS-S)
    S = NewS

End of method EncSeq

2 ExpressSubSeqWithModels

Purpose:
Express a sequence with existing models consisting of loads in the smile and blush domains, as far as the error tolerance will allow.

[SmiSco, BluSco, NextFraNo] =
    ExpressSubSeqWithModels(Seq, NextFraNo, ErrTol, IRef, SmiLod, BluLod, SmiSco, BluSco)

Input:
Seq: The sequence to be expressed
NextFraNo: Starting point of the subsequence within Seq
ErrTol: Error tolerance; the ending criterion for the subsequence
IRef: Reference image
SmiLod, BluLod: Smile and blush loads
SmiSco, BluSco: Already known smile and blush scores

Output:
SmiSco: Smile scores
BluSco: Blush scores
FailFraNo: Number of the frame where the modelling failed due to ErrTol

Method:
Set current frame number N to NextFraNo
Repeat
    IN = Seq[N]
    Try to model IN using the known factors:
    [INRec, SmiSco[N], BluSco[N]] =
        ExpressWithModels(IN, S, SmiLod, BluLod)
    Increase the frame number N
until Error(INRec,IN) > ErrTol or IN was the last frame in Seq
NextFraNo = N

End of method ExpressSubSeqWithModels

3 ExpressWithModels

Purpose:
Express a frame with the known models, i.e., calculate the scores for the existing loads that give the best fit between IN and a reconstruction:

[INRec, SmiSco, BluSco] = ExpressWithModels(IN, IRef, SmiLod, BluLod, S, SmiSco, BluSco)

Input:
IN: One particular frame
IRef: Reference image
SmiLod: Known smile loads
BluLod: Known blush loads
S: S field

Optional input:
SmiSco, BluSco: Initial estimates for the smile and blush scores

Output:
INRec: Reconstructed image
SmiSco, BluSco: Improved estimates for the smile and blush scores

Informal description:

Find an optimal set of scores by trial and error, i.e., by a search method like Simplex (for a description, see William H. Press et al., "Downhill Simplex Method in Multidimensions", Chapter 10.4 in Numerical Recipes (Cambridge University Press), which is incorporated herein by reference).

Select new smile scores as variations of the previously best known smile scores; estimate blush scores by moving the difference between the decoded and the wanted image back to reference position and then projecting it on the existing blush loads.
Judge how well each new image approximates the wanted image, and use this as guidelines for how to select new variations of the smile scores.
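The search described above can be sketched as follows; `decode` is a hypothetical stand-in for the IDLE decoder, assumed linear in the blush scores so that the projection step reduces to linear least squares, and a simple random-perturbation search stands in for the Simplex method mentioned above:

```python
import numpy as np

def express_with_models(target, decode, init_smi, blu_lod,
                        n_iter=200, step=0.1, seed=0):
    """Trial-and-error score search: perturb the smile scores, solve
    the blush scores by projecting the residual onto the blush
    loadings, and keep whichever variant reconstructs `target` best."""
    rng = np.random.default_rng(seed)

    def fit_blu(smi):
        # Project the residual (target minus smile-only decode) onto
        # the blush loadings by least squares.
        resid = target - decode(smi, np.zeros(blu_lod.shape[0]))
        blu, *_ = np.linalg.lstsq(blu_lod.T, resid, rcond=None)
        return blu

    best_smi = np.asarray(init_smi, dtype=float)
    best_blu = fit_blu(best_smi)
    best_err = np.linalg.norm(target - decode(best_smi, best_blu))
    for _ in range(n_iter):
        cand = best_smi + rng.normal(0.0, step, best_smi.shape)  # vary slightly
        blu = fit_blu(cand)
        err = np.linalg.norm(target - decode(cand, blu))
        if err < best_err:                                       # keep best variant
            best_smi, best_blu, best_err = cand, blu, err
    return best_smi, best_blu, best_err
```

The structure mirrors the informal description: smile scores are searched, blush scores follow analytically from each smile candidate, and the best-fitting pair is retained.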

Method:

For each holon:
    Repeat
        For a small number of variants:
            Change the smile scores slightly
            Decode an image using the new smile scores and the old blush scores
            Move the difference between the decoded and the wanted image back to reference position
            Project the difference onto blush loads, producing new BluSco
            Decode an image using the new SmiSco and BluSco
        Select the best variant (i.e., keep the scores that gave the best reconstruction)
    until the reconstructed image is good enough or the reconstruction is not improving

End of ExpressWithModels method

4 ExtractSmiFactSubSeq

Purpose:
Extract one smile factor from a subsequence

[NewSmiLod, nSmiFra, TotSmiErr] = ExtractSmiFactSubSeq(Seq, FromFraNo, ErrTol, IRef, SmiLod, BluLod, SmiSco, BluSco)

Input:
Seq: The sequence
FromFraNo: Number of first frame in subsequence. This is the same as NextFraNo in EncSeq
ErrTol: Error tolerance
SmiLod, BluLod: Known smile and blush loads
SmiSco, BluSco: Scores to be updated

Output:
nSmiFra: Number of frames used for estimating the smile factor
NewSmiLod: One new smile load
TotSmiErr: Total remaining error after smiling

Informal description:
For each frame, as long as smile seems reasonable:

Reconstruct the wanted frame IN as well as possible using only the known loads; call this IM
Find how IM should be smiled in order to look like IN
Map this smile back to reference position
UpdateModel
Return the first factor of the final model

Method:
TestFraNo = FromFraNo
TotErrSmi = 0
Set SmiTestLod to empty
Repeat
    IN = Seq[TestFraNo]
    Establish an image IM that reconstructs IN as well as possible based on the reference image and known smile and blush factors, and as a side effect also compute the return field from M to Reference position:
    [IM, SmiSco[TestFraNo], BluSco[TestFraNo]] =
        ExpressWithModels(IN, IRef, SmiLod, BluLod, SmiScoInit, BluScoInit)
    SmiRefToM = SmiSco[M] * SmiLod
    IM = Move(IRef + BluSco[M]*BluLod, SmiSco[M]*SmiLod)
    Find how IM should be made to look like IN when only smiling is allowed, and at the same time calculate the Confidence of this smile field:
    [SmiMToN, SmiConfMToN] = EstMov(IM, IN, TestSmiLod)
    Move the smile and its certainty back to reference position:
    SmiMToNAtRef = MoveBack(SmiMToN, SmiRefToM)
    SmiConfMToNAtRef = MoveBack(SmiConfMToN, SmiRefToM)
    Calculate the error when only smiling is used:
    ErrSmi = IN - Move(IRefBlushed, SmiRefToM + SmiMToNAtRef)
    [SmiTestLod, SmiTestSco] = ...
        UpdateModel(SmiTestLod, SmiTestSco, ErrSmi)
    TotErrSmi = TotErrSmi + ErrSmi
    TotSmiConfMToNAtRef = TotSmiConfMToNAtRef + SmiConfMToNAtRef
    TestFraNo = TestFraNo + 1
until
Undo the effect o~ the last UpdateModel Undo the effect of the last error summation:
TotErrSmi = TotErrSmi - ErrSmi TotSmiConfMToNAtRef = TotSmiConfMToNAtRef - SmiConfMToNAt-Re~

NewSmiLod = SmiTestLod[1]
nSmiFra = FromFraNo - NextFraNo End of ExtractSmiFactSe~ method ~ W095/082410 ~7~ ~9 PCT~S94/lOl9O

5 ExtractBluFactSubSeq

Purpose:
Extract one blush factor from a subsequence

[NewBluLod, nBluFra, TotBluErr] = ExtractBluFactSubSeq(Seq, NextFraNo, ErrTol, IRef, SmiLod, BluLod, SmiSco, BluSco)

Input:
Seq: The sequence
NextFraNo: Number of next frame, i.e., start of subsequence
ErrTol: Error tolerance, which may define end of subsequence
IRef: Reference image
SmiLod: Known smile loads
BluLod: Known blush loads
SmiSco: Smile scores
BluSco: Blush scores

Output:
NewBluLod: New blush load
nBluFra: Number of frames for which this blush is defined
TotBluErr: Total remaining error after blushing

Method:


TotBluErr = 0
TestFraNo = NextFraNo
Set BluTestLod to empty
Repeat
    If scores for IM are not available from ExtractSmiFactSubSeq:
        Establish an image IM that reconstructs IN as well as possible based on the reference image and known smile and blush factors, and as a side effect also compute the return field from M to Reference position:
        [IM, SmiSco[TestFraNo], BluSco[TestFraNo]] =
            ExpressWithModels(IN, IRef, SmiLod, BluLod, SmiScoInit, BluScoInit)
        SmiRefToM = SmiSco[M] * SmiLod
    Try to make IM look like IN by blushing:
    BluMToN = IN - IM
    Move this blush back to reference position:
    BluMToNAtRef = MoveBack(BluMToN, SmiRefToM)
    [BluTestLod, BluTestSco] = ...
        UpdateModel(BluTestLod, BluTestSco, ErrBlu)
    Calculate the error when only blushing is used:
    ErrBlu = IN - Move(IRefBlushed + BluMToNAtRef, SmiRefToM)
    TotErrBlu = TotErrBlu + ErrBlu
    TestFraNo = TestFraNo + 1
until
    The energy is too much spread out among factors in BluTestLod, or Sum(ErrBlu) is large

The last frame should not be included in the summary, so:
    Undo the effect of the last UpdateModel
    Undo the effect of the last error summation:
    TotErrBlu = TotErrBlu - ErrBlu

NewBluLod = BluTestLod[1]

End of ExtractBluFactSubSeq method

6 SegSubSeq

Purpose:
Propose a new segmentation of the holons, and report how much this improves the modelling

[S, TotSegErr, nSegFra] = SegSubSeq(Seq, FromFraNo, SmiLod, SmiSco, S)

Input:
Smi: Smile field
FromFraNo: Number of first frame in the subsequence
SmiLod: Smile loads
SmiSco: Smile scores
S: Previous S field

Output:
S: New, updated S field
TotSegErr: Total error associated with segmenting
nSegFra: Number of frames used for estimating the segmentation

Informal description:
Use various heuristic techniques to improve how the reference image is split into separate holons.


Check how easy it is to extract either new smile or new blush factors under the assumption of this new split.
Report back the best result.

Method:

Repeat
TestFraNo = FromFraNo
Repeat
IN = Seq(TestFraNo)
Smi = SmiSco[TestFraNo] * SmiLod
Split one holon into two if necessary:
For each holon in S:
Compute a nod matrix from Smi for that holon
If the sum of errors between nod matrices and pels is large:
Split each holon along the principal component of the errors
Join two holons into one if necessary:
For each holon in S:

If the nod matrix is very similar to the nod matrix of another holon:
Join the two holons
Let edge pels with bad fit change holon:
INRec = Move(IRef + BluSco*BluLod, SmiSco*SmiLod)
For each pel, at position v,h, in INRec that is on the edge of a holon:
If the pel fits better on the neighbouring holon, let the pel belong to the neighbouring holon
Pick up pels that don't belong to any holon:
VisInFromAtTo = AnalyseMove(Smi)
Make a new holon out of pels where VisInFromAtTo[pel] < Threshold
TestFraNo = TestFraNo + 1
until SmiSco[TestFraNo] is no longer available from earlier runs of ExtractSmiFactSubSeq
until convergence

[NewSmiLod, nSmiFra, TotSmiErr] = ExtractSmiFactSubSeq(Seq, FromFraNo, TestErrTol, SmiLod, BluLod, SmiSco, BluSco)
[NewBluLod, nBluFra, TotBluErr] = ExtractBluFactSubSeq(Seq, FromFraNo, TestErrTol, SmiLod, BluLod, SmiSco, BluSco)
If Smile was "better" than Blush:
TotSegErr = TotSmiErr
nSegFra = nSmiFra
else
TotSegErr = TotBluErr
nSegFra = nBluFra

End of SegSubSeq method

7 AllocateHolon

Purpose:
SegSubSeq will need to change the spatial definition of holons. Here is one example of an operation that is needed, namely the one to allocate a new holon in the Reference image.

[S, SmiLod, BluLod, SmiSco, BluSco] = AllocateHolon(S, SNewHolon, Smi, SmiLod, BluLod, SmiSco, BluSco)

Input:
S: Old S field, before updating
SNewHolon: S field for one or more new holons

Output:
S: New, updated S field

Method:
For each new holon in S:
Find enough free space in S, if necessary increase the size of S
Find a free holon number, put this into each new pel position in S
Put the pels of SNewHolon into the new space
Give the new holon a new smile factor capable of moving the holon from the new reference position back to its last position
Reformat the score tables accordingly

8 MoveBack

Purpose:
Move the contents of an image back, e.g. from N to M
position or from M to Ref position. This is almost an inverse of Move.
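As a rough illustration of the interpolation described under Method below, the following sketch pulls each reference-position pel from the moved-out image along a displacement (smile) field using two-way linear interpolation. This is an illustrative reading of MoveBack, not the patent's implementation; all names (`bilinear`, `move_back`, `smi_v`, `smi_h`) are hypothetical.

```python
# Illustrative sketch of MoveBack: two-way (bilinear) interpolation of an
# image along a displacement field. Hypothetical names, not the patent's code.

def bilinear(img, v, h):
    """Sample img at the fractional position (v, h) using the four
    surrounding pels and two-way linear interpolation."""
    v0, h0 = int(v), int(h)
    v1 = min(v0 + 1, len(img) - 1)
    h1 = min(h0 + 1, len(img[0]) - 1)
    dv, dh = v - v0, h - h0
    top = img[v0][h0] * (1 - dh) + img[v0][h1] * dh
    bot = img[v1][h0] * (1 - dh) + img[v1][h1] * dh
    return top * (1 - dv) + bot * dv

def move_back(i_out, smi_v, smi_h):
    """For each pel (v,h) in the back position, interpolate the value of the
    moved-out image at the sub-pixel position (v+smi_v[v][h], h+smi_h[v][h])."""
    rows, cols = len(i_out), len(i_out[0])
    return [[bilinear(i_out, v + smi_v[v][h], h + smi_h[v][h])
             for h in range(cols)] for v in range(rows)]
```

With an all-zero field the image is returned unchanged, which is the expected degenerate case of "moving back" by nothing.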

IBack = MoveBack(IOut, SmiBack, SBack)

Input:
IOut: Input image, in Moved Out position, e.g. IM
SmiBack: Smile field, in Back position, e.g. Ref
SBack: S field, in Back position

Output:
IBack: Image moved back, e.g. to reference position

Method:
For each pel at position v,h in SBack:
Interpolate, using two-way linear interpolation, IBack[v,h] from the four pels in IOut that surround the sub-pixel position (v+SmiV[v,h], h+SmiH[v,h])

9 AnalyseMove

Purpose:
Determine features of a smile field:
For each pel in a From image: Will it be visible in the To image ?
For each pel in a To image: Was it visible in the From image ?

[VisInToAtFrom, VisInFromAtTo] = AnalyseMove(SmiFrom, SFrom)

Input:
SmiFrom: Smile field, in From position, to be analyzed
SFrom: S field, in From position

Output:
VisInToAtFrom: Visibility in To image at From position:
For each pel in a From image:
1 if the corresponding pel in the To image is visible
0 otherwise
VisInFromAtTo: Visibility in the From image at To position:
For each pel in a To image:
1 if the corresponding pel in the From image is visible
0 otherwise

Method:

Generate VisInFromAtTo:
Initialize VisInFromAtTo to all zeros
For each pel, at position v,h, in SmiFrom:
VisInFromAtTo[int(v+SmiV[v,h]), int(h+SmiH[v,h])] = 1
For each pel, at position v,h, in VisInFromAtTo:
Replace VisInFromAtTo[v,h] with the majority value of itself and its four neighbours
Generate VisInToAtFrom:
[Dummy2, SmiRet] = Move(Dummy1, Smi)
Initialize VisInToAtFrom to all zeros
For each pel, at position v,h, in SmiRet:
VisInToAtFrom[int(v+SmiRetV[v,h]), int(h+SmiRetH[v,h])] = 1
For each pel, at position v,h, in VisInToAtFrom:
Replace VisInToAtFrom[v,h] with the majority value of itself and its four neighbours

10 Other required methods

10.1 Move

Purpose: Move the contents of an image according to a Smile field

[IMoved, Ret] = Move(IFrom, Smi, S) as described in

10.2 EstMov

Purpose:
Estimate the movement (i.e. Smile field) from one frame to another, together with the certainty of the estimate.

[Smi, SmiConf] = EstMov(IFrom, ITo)

Input:
IFrom: From-image
ITo: To-image

Output:
Smi: Smile field
SmiConf: Smile confidence: how sure can we be of Smi?
Method:
E.g. any of the methods described in "Optic Flow Computation, A Unified Perspective", Ajit Singh, IEEE Computer Society Press 1991, ISBN 0-8186-2602, which uses the term "optical flow field" much as a Smile field is used in this context.
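As a concrete stand-in for EstMov, the sketch below estimates a single global displacement by exhaustive block matching — far simpler than the optical-flow methods cited above, and not one of them. The function name, search range, and the confidence heuristic (margin to the runner-up match) are all assumptions for illustration only.

```python
# Toy stand-in for EstMov: find the integer displacement (dv, dh) that
# minimizes the mean squared difference between i_from and i_to over their
# overlap, plus a crude confidence score. Hypothetical, not the patent's method.

def est_mov(i_from, i_to, max_d=2):
    """Return ((dv, dh), confidence) with the displacement searched
    exhaustively in [-max_d, max_d] x [-max_d, max_d]."""
    rows, cols = len(i_from), len(i_from[0])
    errs = {}
    for dv in range(-max_d, max_d + 1):
        for dh in range(-max_d, max_d + 1):
            e, n = 0.0, 0
            for v in range(rows):
                for h in range(cols):
                    tv, th = v + dv, h + dh
                    if 0 <= tv < rows and 0 <= th < cols:
                        d = i_from[v][h] - i_to[tv][th]
                        e += d * d
                        n += 1
            errs[(dv, dh)] = e / n  # mean over the overlapping region
    best = min(errs, key=errs.get)
    runner_up = min(e for k, e in errs.items() if k != best)
    conf = runner_up - errs[best]  # larger margin = more confident
    return best, conf
```

A dense Smile field would apply this per block or per pel rather than once per frame; the confidence output plays the role of SmiConf.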

10.3 Smi2Nod

Purpose: Compute Nod matrices from Smile fields

NodMat = Smi2Nod(Smi, S) as described in

10.4 UpdateModel

[NewLod, NewSco] = UpdateModel(OldLod, OldSco, NewData) as described in ...

10.5 Transmit

Purpose:
Make the computed data available for the decoder so it can decode the sequence.

Transmit(Data)

Method:
If Data is a spatial load:
Compress Data using conventional still image compression techniques
else if Data is an update of an S field:
Compress Data using run-length encoding
else if Data represents scores:
Compress Data using time series compression techniques
Send Data to the receiver via whatever communication medium has been selected

Appendix

Notation

= (Equals sign):
The expression to the right of the sign is evaluated, and the result is assigned to the variable or structure indicated by the identifier to the left of the sign.

If the expression to the right results in several output values, a corresponding list of identifiers is given inside brackets on the left side of the sign.

() (Parentheses):
After an identifier, a pair of parentheses indicates that the identifier refers to a defined function to be evaluated or executed, and the identifiers given inside the parentheses represent variables or structures that are sent to the function as input parameters.

[] (Square brackets):

One use of square brackets is defined in the paragraph about the Equals sign.
Another use is to indicate indexing: When a pair of square brackets appears after an identifier, this means that the identifier refers to an array or matrix of values, and the expression inside the square brackets selects one of those values.

Naming

Mnemonic names are used:
"Smi" is used instead of "DA" for Smile
"Blu" is used instead of "DI" for Blush
"Lod" denotes loads
"Sco" is used instead of "U" for scores
Pre- and postfixes are used instead of subscripts, and bold characters are not used, e.g.
"SmiMToN" is used instead of subscripted DA.

Claims (64)

We Claim:
1. A method for converting samples of an input signal to an encoded signal composed of a plurality of compo-nent signals each representing a characteristic of the input signal in a different domain, said input signal being comprised of data samples organized into records of multiple samples, with each sample occupying a unique position within its record, characterized in that each component signal is formed as the combination of a plurality of factors, each factor being the product of a score signal and a load signal, the score signal defining the variation of data samples from record to record and the load signal defining the relative variation of a subgroup of samples in different positions of a record.
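As a purely numerical illustration of the factor structure recited in claim 1 (toy data, not part of the claim): each record is reconstructed as a sum of factors, each factor the product of a per-record score and a per-position load. All names and values below are hypothetical.

```python
# Sketch of the claim-1 factor model: records[r][p] is the sum over factors f
# of scores[f][r] * loads[f][p]. Hypothetical toy data for illustration.

def reconstruct(scores, loads):
    """scores[f][r]: score of factor f in record r.
    loads[f][p]: load of factor f at sample position p.
    Returns records[r][p] = sum_f scores[f][r] * loads[f][p]."""
    n_rec = len(scores[0])
    n_pos = len(loads[0])
    return [[sum(s[r] * l[p] for s, l in zip(scores, loads))
             for p in range(n_pos)] for r in range(n_rec)]

loads = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]  # two factors over three positions
scores = [[1.0, 2.0], [3.0, 0.5]]           # their scores in two records
records = reconstruct(scores, loads)
```

Here the load fixes the spatial pattern of each factor while the score scales it per record, which is exactly the division of labour the claim describes.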
2. The method in accordance with claim 1 wherein a set of reference component signal values is provided which represents a reference pattern of samples and in each record the input signal is represented by a plurality of component change signal values for each record, each component change signal value being equal to the difference between the reference pattern of samples and the record.
3. The method of claim 2 wherein each record has the same number of samples arranged in a multi-dimensional array, a first of said component signals representing the magnitude of samples and a second of said component signals representing the position of a sample in the array.
4. The method of claim 3 wherein a component change signal may result in several pixels of the reference image being mapped to a common pixel of one of the frames, the intensity of the common pixel being equal to a weighted sum of the intensities of the several pixels.
5. The method of claim 1 wherein at least one of a set of load signals and a set of score signals is selected for each component signal so as to be statistically representative of variations in the corresponding characteristic among all records.
6. The method of claim 3 wherein the number of factors and the precision of factors are selected so that the storage space required therefor will not exceed a predefined amount.
7. The method of claim 3 further comprising providing a plurality of error signals each corresponding to one of the component signals, each error signal providing correction to the extent that the corresponding component signal does not represent the corresponding characteristic of the input signal within a predefined range.
8. The method of claim 7 wherein the number of factors and the precision of factors is selected to achieve error signals which remain below a predefined threshold value.
9. The method of claim 8 wherein the number of factors and the precision of factors are selected so that the storage space required therefor will not exceed a predefined amount.
10. The method of claim 1 further comprising providing a plurality of error signals each corresponding to one of the component signals, each error signal providing correction to the extent that the corresponding component signal does not represent the corresponding characteristic of the input signal within a predefined range.
11. The method in accordance with claim 10 wherein a set of reference component signal values is provided which represents a reference pattern of samples and in each record the input signal is represented by a plurality of component change signal values for each record, each component change signal value being equal to the difference between the reference pattern of samples and the record.
12. The method of claim 1 wherein each record has the same number of samples arranged in a multi-dimensional array, a first of said component signals representing the magnitude of samples and a second of said component signals representing the position of a sample in the array.
13. The method of claim 12 wherein a component change signal may result in several pixels of the reference image being mapped to a common pixel of one of the frames, the intensity of the common pixel being equal to the sum of the intensities of the several pixels.
14. The method of claim 12 wherein the input signal is a conventional video signal, each sample is a pixel of a video image, each record is a frame of video, said first component signal represents pixel intensity and said second component signal represents the location of a pixel in a frame.
15. The method of claim 14 further comprising providing a plurality of error signals each corresponding to one of the component signals, each error signal providing correction to the extent that the corresponding component signal does not represent the corresponding characteristic of the input signal within a predefined range.
16. The method in accordance with claim 1 wherein a set of reference component signal values is provided which represents a reference pattern of samples and in each record the input signal is represented by a plurality of component change signal values for each record, each component change signal value being equal to the difference between the reference pattern of samples and the record.
17. The method of claim 16 wherein a component change signal may result in several pixels of the reference image being mapped to a common pixel of one of the frames, the intensity of the common pixel being equal to a weighted sum of the intensities of the several pixels.
18. The method of claim 16 wherein a component change signal may result in several pixels of the reference image being mapped to a common pixel of one of the frames, the intensity of the common pixel being equal to the difference between a constant and the sum of the intensities of the several pixels.
19. The method of claim 16 wherein a component change signal may result in several pixels of the reference image being mapped to a common pixel of one of the frames, said method further comprising defining a depth for each of the several pixels, the intensity of the common pixel being made equal to the intensity of the pixel among the several pixels which has the least depth.
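The depth rule of claim 19 amounts to a simple z-buffer: when several reference pixels land on one target pixel, the one with the least depth wins. The sketch below is an illustration with hypothetical names and data, not the patent's implementation.

```python
# Sketch of the claim-19 depth rule (z-buffer): of all source pixels mapped
# to a common target pixel, keep the intensity of the least-deep one.
# Hypothetical names and data.

def map_with_depth(mappings, n_pixels, background=0.0):
    """mappings: list of (target_index, intensity, depth) tuples.
    Returns the target image as a list of n_pixels intensities."""
    out = [background] * n_pixels
    best_depth = [float("inf")] * n_pixels
    for target, intensity, depth in mappings:
        if depth < best_depth[target]:
            best_depth[target] = depth
            out[target] = intensity
    return out

# Intensities 5.0 and 9.0 both map to target pixel 1; depth 0.5 beats 2.0.
image = map_with_depth([(1, 5.0, 2.0), (1, 9.0, 0.5), (0, 3.0, 1.0)], 3)
```

When the depth values come from a third component signal, as in claim 20, this resolves occlusion between overlapping holons.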
20. The method of claim 19 wherein the depth of pixels is defined as a separate domain represented by a third component signal.
21. The method of claim 16 wherein the reference image is provided with a collection of holons, the collection of holons containing every different holon appearing among all the frames of the input signal.
22. The method of claim 21 wherein the location of a pixel within the reference image is represented in a first system of coordinates and the location of a pixel within at least one of the holons is represented in a different system of coordinates.
23. The method of claim 21 wherein the location of a pixel within different holons is represented in a different system of coordinates.
24. The method of claim 21 wherein the holons include a set of pixels exhibiting coordinated behavior in at least one domain, and at least one of a load signal and score signal of at least one component signal operates only on said set of pixels.
25. A method for producing a set of load and score signals for use in the method of claim 2 comprising the steps of:

a. determining the plurality of component change signal values as the difference between each record and the reference pattern of samples;

b. performing principal component analysis on the plurality of component change signal values to extract a plurality of loads;

c. projecting the plurality of component change signals values on the plurality of loads to produce a set of score values which are applied to the plurality of loads to produce an approximated record;

d. determining the difference between each approximated record and each record;

e. repeating steps c and d until the difference between each approximated record and each record is less than a predetermined value.
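Steps a through e of claim 25 can be sketched numerically. The toy below extracts a single factor by power iteration in pure Python (a hypothetical illustration — a real encoder would use a full PCA/SVD routine and repeat on the step-d residual to extract further factors); all names and data are assumptions.

```python
# Sketch of claim 25: fit change values (records minus the reference pattern)
# with one principal-component factor, load found by power iteration.
# Hypothetical illustration, not the patent's implementation.

def extract_factor(records, reference, iters=100):
    """Return (load, scores) so that records[r] - reference ~ scores[r] * load."""
    # Step a: component change values = each record minus the reference pattern.
    x = [[rec[p] - reference[p] for p in range(len(reference))]
         for rec in records]
    # Step b: power iteration for the dominant load (principal component).
    load = [1.0] * len(reference)
    for _ in range(iters):
        proj = [sum(row[p] * load[p] for p in range(len(load))) for row in x]
        new = [sum(proj[r] * x[r][p] for r in range(len(x)))
               for p in range(len(load))]
        norm = sum(v * v for v in new) ** 0.5 or 1.0
        load = [v / norm for v in new]
    # Step c: project the change values on the load to get the scores.
    scores = [sum(row[p] * load[p] for p in range(len(load))) for row in x]
    return load, scores

reference = [1.0, 1.0, 1.0]
records = [[2.0, 1.0, 3.0], [3.0, 1.0, 5.0]]  # changes are multiples of (1,0,2)
load, scores = extract_factor(records, reference)
approx = [[reference[p] + scores[r] * load[p] for p in range(3)]
          for r in range(2)]
```

Because the change matrix here is rank one, steps d and e terminate immediately: the residual between each approximated record and each record is essentially zero, so no further factors are needed.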
26. A method for producing a set of load and score signals for use in the method of claim 25, wherein the principal component analysis is a weighted principal component analysis.
27. A method for producing a set of load and score signals for use in the method of claim 16, comprising the further step of extending the set of reference component signals to include additional component signals.
28. A method for decoding an encoded signal composed of a plurality of component signals in different domains to an input signal comprised of data samples organized into records of multiple samples, with each sample occupying a unique position within its record, said encoded signal represented as a combination of a plurality of factors, each factor being the product of a score signal and a load signal, the score signal defining the variation of data samples from record to record and the load signal defining the relative variation of a subgroup of samples in different positions of a record, said method utilizing a reference pattern of samples, comprising the steps of:

a. multiplying each load signal by its associ-ated score signal to produce each factor;

b. combining the factors produced in step a;

c. modifying the set of reference component signal values according to the combined factors produced in step b to produce the records of a reproduced input signal.
29. A method for decoding an encoded signal as in claim 28 wherein at least one of the load signals and score signals is provided on a storage medium.
30. A method for decoding an encoded signal as in claim 28, wherein the reference component signal values are provided on the storage medium.
31. A method for decoding an encoded signal as in claim 28 wherein the method comprises the further step of receiving at least one of the load signals and score signals from a remote location over a communications medium.
32. The method of claim 31 wherein the reference component signal values are also received over the communications medium.
33. A method for editing an encoded signal composed of a plurality of component signals in different domains to an input signal comprised of data samples organized into records of multiple samples, with each sample occupying a unique position within its record, said encoded signal represented as a combination of a plurality of factors, each factor being the product of a score signal and a load signal, the score signal defining the variation of data samples from record to record and the load signal defining the relative variation of a subgroup of samples in different positions of a record, said method utilizing a reference pattern of samples, comprising the steps of:

a. modifying at least one score signal to achieve desired editing;

b. multiplying each load signal by its associ-ated modified score signal to produce each factor;

c. combining the factors produced in step b;

d. modifying the set of reference component signal values according to the combined factors produced in step c to produce the records of a reproduced input signal.
34. An apparatus for converting samples of an input signal to an encoded signal composed of a plurality of component signals each representing a characteristic of the input signal in a different domain, said input signal being comprised of data samples organized into records of multiple samples, with each sample occupying a unique position within its record, comprising means for encoding each record as a combination of component signals, each component signal formed as a combination of a plurality of factors, each factor being the product of a score signal and a load signal, the score signal defining the variation of data samples from record to record and the load signal defining the relative variation of a subgroup of samples in different positions of a record.
35. The apparatus in accordance with claim 34 further comprising means for generating a set of reference component signal values which represents a reference pattern of samples, means for producing for each record a plurality of component change signal values representing the input signal, each component change signal value being equal to the difference between the reference pattern of samples and the record.
36. The apparatus of claim 35 wherein each record has the same number of samples arranged in a multi-dimensional array, a first of said component signals representing the magnitude of samples and a second of said component signals representing the position of a sample in the array.
37. The apparatus of claim 36 wherein a component change signal may result in several pixels of the reference image being mapped to a common pixel of one of the frames, further comprising means for causing the intensity of the common pixel to be equal to a weighted sum of the intensities of the several pixels.
38. The apparatus of claim 36 further comprising means for providing a plurality of error signals each corresponding to one of the component signals, each error signal providing correction to the extent that the corresponding component signal does not represent the corresponding characteristic of the input signal within a predefined range.
39. The apparatus of claim 34 further comprising means for providing a plurality of error signals each corresponding to one of the component signals, each error signal providing correction to the extent that the corresponding component signal does not represent the corresponding characteristic of the input signal within a predefined range.
40. The apparatus in accordance with claim 34 further comprising means for generating a set of reference component signal values which represents a reference pattern of samples, means for producing for each record a plurality of component change signal values representing the input signal, each component change signal value being equal to the difference between the reference pattern of samples and the record.
41. The apparatus of claim 34 wherein each record has the same number of samples arranged in a multi-dimensional array, said means for encoding causing a first of said component signals to represent the magnitude of samples and a second of said component signals to represent the position of a sample in the array.
42. The apparatus of claim 41 wherein the input signal is a conventional video signal, each sample is a pixel of a video image, each record is a frame of video, said first component signal represents pixel intensity and said second component signal represents the location of a pixel in a frame.
43. The apparatus in accordance with claim 42 further comprising means for generating a set of reference component signal values which represents a reference pattern of samples, means for producing for each record a plurality of component change signal values representing the input signal, each component change signal value being equal to the difference between the reference pattern of samples and the record.
44. The apparatus of claim 43 wherein a component change signal may result in several pixels of the reference image being mapped to a common pixel of one of the frames, the intensity of the common pixel being equal to a weighted sum of the intensities of the several pixels.
45. The apparatus of claim 43 wherein a component change signal may result in several pixels of the reference image being mapped to a common pixel of one of the frames, further comprising means for controlling the intensity of the common pixel to equal the difference between a constant and the sum of the intensities of the several pixels.
46. The apparatus of claim 43 wherein a component change signal may result in several pixels of the reference image being mapped to a common pixel of one of the frames, further comprising means for defining a depth for each of the several pixels, and means for controlling the intensity of the common pixel to be equal to the intensity of the pixel among the several pixels which has the least depth.
47. The apparatus of claim 43 wherein the reference image includes a collection of holons, the collection of holons containing every different holon appearing among all the frames of the input signal.
48. The apparatus of claim 47 wherein the holons include a set of pixels exhibiting coordinated behavior in at least one domain, said means for encoding producing at least one of a load signal and score signal of at least one component signal which operates only on said set of pixels.
49. An apparatus for decoding an encoded signal composed of a plurality of component signals in different domains to an input signal comprised of data samples organized into records of multiple samples, with each sample occupying a unique position within its record, said encoded signal represented as a combination of a plurality of factors, each factor being the product of a score signal and a load signal, the score signal defining the variation of data samples from record to record and the load signal defining the relative variation of a subgroup of samples in different positions of a record, said apparatus utilizing a reference pattern of samples, comprising:

a. means for multiplying each load signal by its associated score signal to produce each factor;

b. means for combining the factors produced in step a;

c. means for modifying the set of reference component signal values according to the combined factors produced in step b to produce the records of a reproduced input signal.
50. An apparatus as in claim 49 further comprising a storage medium containing at least one of the load signals and score signals.
51. An apparatus as in claim 49, wherein the storage medium also contains the reference component signal values.
52. An apparatus as in claim 49 further comprising means for receiving at least one of the load signals and score signals from a remote location over a communications medium.
53. The apparatus of claim 52 wherein the reference component signal values are also received over the communications medium.
54. An apparatus for editing an encoded signal composed of a plurality of component signals in different domains to an input signal comprised of data samples organized into records of multiple samples, with each sample occupying a unique position within its record, said encoded signal represented as a combination of a plurality of factors, each factor being the product of a score signal and a load signal, the score signal defining the variation of data samples from record to record and the load signal defining the relative variation of a subgroup of samples in different positions of a record, said apparatus utilizing a reference pattern of samples, comprising:

a. means for modifying at least one score signal to achieve desired editing;

b. means for multiplying each load signal by its associated modified score signal to produce each factor;

c. means for combining the factors produced in step b;

d. means for modifying the set of reference component signal values according to the combined factors produced in step c to produce the records of a reproduced input signal.
55. A system comprising a reading apparatus and a data carrier containing data and adapted to be decoded according to the method of any one of claims 28-32.
56. A system comprising a recording apparatus and a data carrier containing an encoded signal produced by the method of any one of claims 1-28.
57. A system comprising a reading apparatus and a data carrier comprising data and adapted to be decoded by the apparatus of any one of claims 49-53.
58. A system comprising a recording apparatus and a data carrier containing an encoded signal produced by the apparatus of any one of claims 34-48.
59. A system comprising a recording apparatus, a data carrier and a reading apparatus, wherein the data carrier contains an encoded signal produced according to the method of any one of claims 1-28 and adapted to be decoded by the method of any one of claims 28-32.
60. A system comprising a recording apparatus, a data carrier and a reading apparatus, wherein the data carrier contains an encoded signal produced by the apparatus of any one of claims 34-48 and adapted to be read by the apparatus of any one of claims 49-53.
61. A data carrier containing data recorded thereon and adapted to be decoded by the method of any one of claims 28-32.
62. A data carrier containing an encoded signal produced by the method of any one of claims 1-28.
63. An apparatus producing a transmitted signal containing an encoded signal produced by the method of any one of claims 1-28.
64. The encoded signal produced by the method of any one of claims 1-28 provided on one of a storage medium and a transmission medium.
CA002171293A 1993-09-08 1994-09-08 Method and apparatus for data analysis Abandoned CA2171293A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
NO933205 1993-09-08
NO933205A NO933205D0 (en) 1993-09-08 1993-09-08 Data representation system

Publications (1)

Publication Number Publication Date
CA2171293A1 true CA2171293A1 (en) 1995-03-23

Family

ID=19896406

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002171293A Abandoned CA2171293A1 (en) 1993-09-08 1994-09-08 Method and apparatus for data analysis

Country Status (10)

Country Link
EP (1) EP0748562A4 (en)
JP (1) JPH09502586A (en)
CN (1) CN1130969A (en)
AP (1) AP504A (en)
AU (1) AU693117B2 (en)
CA (1) CA2171293A1 (en)
NO (1) NO933205D0 (en)
OA (1) OA10269A (en)
WO (1) WO1995008240A2 (en)
ZA (1) ZA946904B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NO942080D0 (en) * 1994-06-03 1994-06-03 Int Digital Tech Inc Picture Codes
CA2216109A1 (en) * 1995-03-22 1996-09-26 Idt International Digital Technologies Deutschland Gmbh Method and apparatus for coordination of motion determination over multiple frames
AU8018898A (en) * 1997-07-28 1999-02-22 Idt International Digital Technologies Deutschland Gmbh Method and apparatus for compressing video sequences
JP4224748B2 (en) * 1999-09-13 2009-02-18 ソニー株式会社 Image encoding apparatus, image encoding method, image decoding apparatus, image decoding method, recording medium, and image processing apparatus
US8600132B2 (en) * 2011-05-03 2013-12-03 General Electric Company Method and apparatus for motion correcting medical images
CN102360214B (en) * 2011-09-02 2013-03-06 哈尔滨工程大学 Naval vessel path planning method based on firefly algorithm
CN104794358A (en) * 2015-04-30 2015-07-22 无锡悟莘科技有限公司 Parameter estimation and fitting method for collecting supporting point frequency in vibrating wire mode
EP3688623A1 (en) * 2017-09-26 2020-08-05 Services Petroliers Schlumberger Apparatus and methods for improved subsurface data processing systems
CN109064445B (en) * 2018-06-28 2022-01-04 中国农业科学院特产研究所 Animal quantity statistical method and system and storage medium
WO2021001845A1 (en) * 2019-06-29 2021-01-07 Phadke Sameer System and method for modelling and monitoring processes in organizations using digital twins
CN111913866A (en) * 2020-08-19 2020-11-10 上海繁易信息科技股份有限公司 Method for monitoring equipment model data abnormity in real time and electronic equipment
CN112906650B (en) * 2021-03-24 2023-08-15 百度在线网络技术(北京)有限公司 Intelligent processing method, device, equipment and storage medium for teaching video
US11887222B2 (en) 2021-11-12 2024-01-30 Rockwell Collins, Inc. Conversion of filled areas to run length encoded vectors
US11915389B2 (en) 2021-11-12 2024-02-27 Rockwell Collins, Inc. System and method for recreating image with repeating patterns of graphical image file to reduce storage space
US11954770B2 (en) 2021-11-12 2024-04-09 Rockwell Collins, Inc. System and method for recreating graphical image using character recognition to reduce storage space
US11842429B2 (en) 2021-11-12 2023-12-12 Rockwell Collins, Inc. System and method for machine code subroutine creation and execution with indeterminate addresses

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4394774A (en) * 1978-12-15 1983-07-19 Compression Labs, Inc. Digital video compression system and methods utilizing scene adaptive coding with rate buffer feedback
US4717956A (en) * 1985-08-20 1988-01-05 North Carolina State University Image-sequence compression using a motion-compensation technique
US4786967A (en) * 1986-08-20 1988-11-22 Smith Engineering Interactive video apparatus with audio and video branching
US5136659A (en) * 1987-06-30 1992-08-04 Kokusai Denshin Denwa Kabushiki Kaisha Intelligent coding system for picture signal
US5150432A (en) * 1990-03-26 1992-09-22 Kabushiki Kaisha Toshiba Apparatus for encoding/decoding video signals to improve quality of a specific region
EP0449478A3 (en) * 1990-03-29 1992-11-25 Microtime Inc. 3d video special effects system
CA2087523C (en) * 1990-07-17 1997-04-15 Mark Andrew Shackleton Method of processing an image
DE69222102T2 (en) * 1991-08-02 1998-03-26 Grass Valley Group Operator interface for video editing system for the display and interactive control of video material
US5392072A (en) * 1992-10-23 1995-02-21 International Business Machines Inc. Hybrid video compression system and method capable of software-only decompression in selected multimedia systems

Also Published As

Publication number Publication date
OA10269A (en) 1997-10-07
AU7871794A (en) 1995-04-03
AP9400673A0 (en) 1994-10-31
AU693117B2 (en) 1998-06-25
WO1995008240A3 (en) 1995-05-11
JPH09502586A (en) 1997-03-11
EP0748562A4 (en) 1998-10-21
ZA946904B (en) 1995-05-11
AP504A (en) 1996-07-01
NO933205D0 (en) 1993-09-08
CN1130969A (en) 1996-09-11
WO1995008240A2 (en) 1995-03-23
EP0748562A1 (en) 1996-12-18

Similar Documents

Publication Publication Date Title
US5983251A (en) Method and apparatus for data analysis
Wang et al. Representing moving images with layers
Hemami et al. Transform coded image reconstruction exploiting interblock correlation
EP1016286B1 (en) Method for generating sprites for object-based coding systems using masks and rounding average
US6573890B1 (en) Compression of animated geometry using geometric transform coding
Hötter Object-oriented analysis-synthesis coding based on moving two-dimensional objects
CA2171293A1 (en) Method and apparatus for data analysis
US6072496A (en) Method and system for capturing and representing 3D geometry, color and shading of facial expressions and other animated objects
US5870502A (en) System and method for a multiresolution transform of digital image information
US8908766B2 (en) Computer method and apparatus for processing image data
KR101278224B1 (en) Apparatus and method for processing video data
KR20000064847A (en) Image segmentation and target tracking methods, and corresponding systems
KR20080040710A (en) Image coder for regions of texture
US6845130B1 (en) Motion estimation and compensation for video compression
McLean Structured video coding
JP3663645B2 (en) Moving image processing apparatus and method
Cheng Visual pattern matching in motion estimation for object-based very low bit-rate coding using moment-preserving edge detection
JPH0837664A (en) Moving picture encoding/decoding device
Li et al. A hybrid model-based image coding system for very low bit-rate coding
Smolic et al. A survey on coding of static and dynamic 3D meshes
JPH08161505A (en) Dynamic image processor, dynamic image encoding device, and dynamic image decoding device
Schuster et al. Review of Lossy Video Compression
Horowitz Syntactic and semantic image representations for computer vision
Bozdağı Three-Dimensional Facial Motion and Structure Estimation in Video Coding

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued

Effective date: 20000608