Here a vector is an entity per se and, while analogies and examples in terms of Euclidean geometry can be useful visually, they are, by no means, exhaustive. In other words: vectors are no longer just N -tuples of numbers; they can be anything. This said, a Hilbert space can be defined in incremental steps starting from a general notion of vector space and by supplementing this space with two additional features: the existence of an inner product and the property of completeness.

Vector Space. Consider a set of vectors V and a set of scalars S, which can be either R or C for our purposes. Inner Product Space. What we have so far is the simplest type of vector space; the next ingredient we consider is the inner product, which is essential to build a notion of distance between elements in a vector space. A vector space with an inner product is called an inner product space. From this definition of the inner product, a series of additional definitions and properties can be derived: first of all, orthogonality between two vectors is defined with respect to the inner product, and we say that the non-zero vectors x and y are orthogonal, or $x \perp y$, if and only if $\langle x, y \rangle = 0$.

From the definition of an inner product, we can define the norm of a vector as $\|x\| = \sqrt{\langle x, x \rangle}$ (we will omit from now on the subscript 2 from the norm symbol). In turn, the norm satisfies the Cauchy-Schwarz inequality $|\langle x, y \rangle| \leq \|x\|\,\|y\|$. The norm also satisfies the triangle inequality $\|x + y\| \leq \|x\| + \|y\|$. For orthogonal vectors, the triangle inequality becomes the famous Pythagorean theorem: $\|x + y\|^2 = \|x\|^2 + \|y\|^2$. Hilbert Space. A vector space H(V, S) equipped with an inner product is called an inner product space. To obtain a Hilbert space, we need completeness. This is a slightly more technical notion, which essentially implies that convergent sequences of vectors in V have a limit that is also in V.
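To make these inequalities concrete, here is a small numerical check (a Python/NumPy sketch added for illustration; the vectors and the helper `norm` are our own choices, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
y = rng.standard_normal(8) + 1j * rng.standard_normal(8)

inner = np.vdot(x, y)                         # <x, y>, conjugating the first argument
norm = lambda v: np.sqrt(np.vdot(v, v).real)  # ||v|| = sqrt(<v, v>)

# Cauchy-Schwarz: |<x, y>| <= ||x|| ||y||
assert abs(inner) <= norm(x) * norm(y)
# triangle inequality: ||x + y|| <= ||x|| + ||y||
assert norm(x + y) <= norm(x) + norm(y)
```

Any inner product space satisfies both inequalities, regardless of the random vectors chosen.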

To gain intuition, think of the set of rational numbers Q versus the set of real numbers R. The set of rational numbers is incomplete, because there are convergent sequences in Q which converge to irrational numbers. The set of real numbers contains these irrational numbers, and is in that sense the completion of Q. Completeness is usually hard to prove in the case of infinite-dimensional spaces; in the following it will be tacitly assumed and the interested reader can easily find the relevant proofs in advanced analysis textbooks. Finally, we will also only consider separable Hilbert spaces, which are the ones that admit orthonormal bases.

Finite Euclidean Spaces. Polynomial Functions. Square Summable Functions. Another interesting example of a functional Hilbert space is the space of square integrable functions over a finite interval. The inner product is a fundamental tool in a vector space since it allows us to introduce a notion of distance between vectors. This is a typical instance in which a geometric construct helps us to generalize a basic idea to much more abstract scenarios. We know that the orthogonal projection defines the point along x which is closest to y and therefore this indicates how well we can approximate y by a simple scaling of x.

To illustrate this, note that, in a real vector space, $\langle x, y \rangle = \|x\|\,\|y\| \cos\alpha$, where $\alpha$ is the angle between the two vectors. Clearly, if the vectors are orthogonal, the cosine is zero and no approximation is possible. Since the inner product depends on the angular separation between the vectors, it represents a first rough measure of similarity between x and y; in broad terms, it provides a measure of the difference in shape between vectors. As we have said before, discrete-time signals are vectors; the computation of their inner product will assume different names according to the processing context in which we find ourselves: it will be called filtering, when we are trying to approximate or modify a signal, or it will be called correlation, when we are trying to detect one particular signal amongst many.

Yet, in all cases, it will still be an inner product, i.e. a measure of similarity between vectors. In particular, the concept of orthogonality between signals implies that the signals are perfectly distinguishable or, in other words, that their shape is completely different. The need for a quantitative measure of similarity in some applications calls for the introduction of the Euclidean distance, which is derived from the inner product as $d(x, y) = \|x - y\|$.

In particular, for $\mathbb{C}^N$ the Euclidean distance is defined by the expression $d(x, y) = \sqrt{\sum_{n=0}^{N-1} |x[n] - y[n]|^2}$. In the practice of signal processing, the Euclidean distance is referred to as the root mean square error; this is a global, quantitative goodness-of-fit measure when trying to approximate signal y with x. Incidentally, there are other types of distance measures which do not rely on a notion of inner product; for example, in $\mathbb{C}^N$ we could define $d_\infty(x, y) = \max_n |x[n] - y[n]|$.

This distance is based on the supremum norm, usually indicated by $\|\cdot\|_\infty$; however, it can be shown that there is no inner product from which this norm can be derived and therefore no Hilbert space can be constructed in which $\|\cdot\|_\infty$ is the natural norm. Nonetheless, this norm will reappear later, in the context of optimal filter design. Now that we have defined the properties of Hilbert space, it is only natural to start looking at the consequent inner structure of such a space. The best way to do so is by introducing the concept of basis.
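The two distance measures are easy to compare numerically (a NumPy sketch added for illustration; the data values are arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.5, 1.5, 3.5, 3.0])

# Euclidean (2-norm) distance: a global goodness-of-fit measure
d2 = np.sqrt(np.sum(np.abs(x - y) ** 2))
# supremum (infinity-norm) distance: the single largest pointwise deviation
dinf = np.max(np.abs(x - y))

assert np.isclose(d2, np.linalg.norm(x - y))
assert np.isclose(dinf, np.linalg.norm(x - y, np.inf))
```

Note how the supremum distance is driven entirely by the worst sample, while the Euclidean distance aggregates all the pointwise errors.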

All this is accomplished by a linear transformation called a change of basis ; to anticipate the topic of the next Chapter, we will see shortly that the Fourier transform is an instance of basis change. Sometimes, we are interested in exploring in more detail a specific subset of a given vector space; this is accomplished via the concept of subspace.

A subspace is, as the name implies, a restricted region of the global space, with the additional property that it is closed under the usual vector operations. This implies that, once in a subspace, we can operate freely without ever leaving its confines; just like a full-fledged space, a subspace has its own skeleton, i.e. its own bases. A set of vectors W is called linearly independent if $\sum_k \alpha_k w_k = 0$ holds only when all the coefficients $\alpha_k$ are zero. Sometimes, a set of vectors can be orthogonal but not normal, i.e. not of unit norm; in this case each vector can simply be divided by its norm. Alternatively, a linearly independent set of vectors can be orthonormalized via the Gram-Schmidt procedure, which can be found in any linear algebra textbook.
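As an illustration of the procedure just mentioned, here is a minimal Gram-Schmidt implementation in Python/NumPy (a sketch added for this edition; the function name and test vectors are ours):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent set of vectors (given as rows)."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for u in basis:
            w = w - np.vdot(u, w) * u          # remove the projection onto each prior vector
        basis.append(w / np.sqrt(np.vdot(w, w).real))  # normalize to unit norm
    return np.array(basis)

V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(V)
# the resulting rows are orthonormal: Q Q^H = I
assert np.allclose(Q @ Q.conj().T, np.eye(3))
```

Each new vector is stripped of its components along the already-built basis and then normalized, which is exactly the textbook recipe.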

One of the most important properties of finite-dimensional spaces is the following: a set of N mutually orthogonal vectors in an N-dimensional space is a basis for the space. In other words, in finite dimensions, once we find a full set of orthogonal vectors, we are sure that the set spans the space. Then the following properties, all of which are easily verified, hold. Analysis Formula. Best Approximations. Assume P is a subspace of V; if we try to approximate a vector y ∈ V by a linear combination of basis vectors from the subspace P, then we are led to the concepts of least squares approximations and orthogonal projections.

First of all, we define the best approximation in P of a general vector y ∈ V to be the approximation $\hat{y}$ which minimizes the norm of the error, $\|y - \hat{y}\|$. With W as our usual orthonormal basis for P, the projection is given by $\hat{y} = \sum_k \langle w_k, y \rangle\, w_k$. This approximation with an orthonormal basis has a key property: it can be successively refined. Assume you have the approximation with the first m terms of the orthonormal basis; a finer approximation with m + 1 terms is simply given by adding the single term $\langle w_m, y \rangle\, w_m$ to the previous sum. While this seems obvious, it is actually a small miracle, since it does not hold for more general, non-orthonormal bases.
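The successive-refinement property is easy to verify numerically (a NumPy sketch added for illustration; the names `W`, `coeffs`, `y_m` are ours, and we build a random orthonormal basis via a QR factorization):

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 8, 3
# a random orthonormal basis for R^N: the columns of Q from a QR factorization
W, _ = np.linalg.qr(rng.standard_normal((N, N)))
y = rng.standard_normal(N)

# expansion coefficients <w_k, y> (real basis, so plain dot products)
coeffs = W.T @ y
# m-term approximation: sum of the first m projections
y_m = W[:, :m] @ coeffs[:m]
# refining to m+1 terms only requires adding the (m+1)-th component
y_m1 = y_m + coeffs[m] * W[:, m]
assert np.allclose(y_m1, W[:, :m + 1] @ coeffs[:m + 1])
# with all N terms we recover y exactly
assert np.allclose(W @ coeffs, y)
```

The already-computed coefficients never need to be recomputed when the approximation is refined; with a non-orthonormal basis, they would.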

For the simplest case of Hilbert spaces, namely $\mathbb{C}^N$, orthonormal bases are also the most intuitive since they contain exactly N mutually orthogonal vectors of unit norm. M is called a change of basis matrix. The previous equation shows that y is a linear combination of the columns of $M^H$, which, in turn, are of course the vectors $x_k$. This basis, however, is not an orthonormal basis. It can be transformed into an orthonormal basis by a standard Gram-Schmidt procedure; the basis vectors thus obtained are called Legendre polynomials. This is actually the classic Fourier basis for functions on an interval.

Please note that, here, as opposed to the previous examples, the number of basis vectors is actually infinite. The orthogonality of these basis vectors is easily verifiable; their completeness, however, is rather hard to prove and this, unfortunately, is very much the rule for all non-trivial, infinite-dimensional basis sets.

We are now in a position to formalize our intuitions so far, with respect to the equivalence between discrete-time signals and vector spaces, with a particularization for the three main classes of signals that we have introduced in the previous Chapter. Note that in the following, we will liberally interchange the notations x and x [ n ] to denote a sequence as a vector embedded in its appropriate Hilbert space. The correspondence between the class of finite-length, length- N signals and C N should be so immediate at this point that it does not need further comment.

As a reminder, the canonical basis for the space of length-N signals is the canonical basis for $\mathbb{C}^N$. The k-th canonical basis vector is often expressed in signal form as $\delta[n - k]$. As we have seen, N-periodic signals are equivalent to length-N signals. The space of N-periodic sequences is therefore isomorphic to $\mathbb{C}^N$. In particular, the sum of two sequences considered as vectors is the standard pointwise sum for the elements: $(x + y)[n] = x[n] + y[n]$.

The canonical basis for the space of N-periodic sequences is the canonical basis for $\mathbb{C}^N$, because of the isomorphism; in general, any basis for $\mathbb{C}^N$ is also a basis for the space of N-periodic sequences. Sometimes, however, we will also consider an explicitly periodized version of the basis. Such a sequence is called a pulse train. Note that here we are abandoning mathematical rigor, since the norm of each of these basis vectors is infinite; yet the pulse train, if handled with care, can be a useful tool in formal derivations. The space of square-summable sequences is the space of choice for all the theoretical derivations involving infinite sequences.

This is an infinite set, and actually an infinite set of linearly independent vectors, since no finite linear combination of distinct canonical vectors can be the zero signal. Note that, for an arbitrary signal x[n], the analysis formula gives $\langle \delta^{(k)}, x \rangle = x[k]$. Clearly, the space is closed under addition and scalar multiplication, and the canonical inner product is well behaved since all sequences have only a finite number of nonzero values. However, the space is not complete; to see this, one can exhibit a family of finite-support signals which converges to a limit with infinite support.

A comprehensive review of linear algebra, which contains all the concepts of Hilbert spaces but in finite dimensions, is the classic by G. For an introduction to Hilbert spaces, there are many mathematics books; we suggest N. As an alternative, a more intuitive and engineering-motivated approach is in the classic book by D. An operator is a transformation of a given signal and is indicated by the notation. For instance, the delay operator is indicated as.

A linear operator is one for which the following holds: $\mathcal{H}\{\alpha x + \beta y\} = \alpha \mathcal{H}\{x\} + \beta \mathcal{H}\{y\}$. In $\mathbb{C}^N$, we define the delay operator as the circular shift of a vector: $(Dx)[n] = x[(n - 1) \bmod N]$. Which operator do you think is associated to it? What does the operator do? Prove that any vector z ∈ S is uniquely represented in this basis. Hint: prove by contradiction. Consider the following change of basis matrix in $\mathbb{C}^8$, with respect to the standard orthonormal basis:
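The circular-shift delay operator is a one-liner in NumPy (a sketch added for illustration, using the convention that the last sample wraps around to the front):

```python
import numpy as np

# delay operator in C^N as a circular shift: (Dx)[n] = x[(n - 1) mod N]
x = np.arange(8)
Dx = np.roll(x, 1)
print(Dx)   # [7 0 1 2 3 4 5 6]

# applying the shift N times returns the original vector: D^N = I
assert np.array_equal(np.roll(Dx, 7), x)
```

The wrap-around is what distinguishes the delay in $\mathbb{C}^N$ from the delay of an infinite sequence: for finite-length signals, all shifts are circular.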

The basis described by H is called the Haar basis and it is one of the most celebrated cornerstones of a branch of signal processing called wavelet analysis. To get a feeling for its properties, consider the following set of numerical experiments (you can use Matlab or any other numerical package). Fourier theory has a long history, dating back to J. Fourier's original work on the analysis of heat conduction. Fourier theory is a branch of harmonic analysis and, in that sense, a topic in pure and applied mathematics.

At the same time, because of its usefulness in practical applications, Fourier analysis is a key tool in several engineering branches, and in signal processing in particular. Why is Fourier analysis so important? To understand this, let us take a short philosophical detour. Interesting signals are time-varying quantities: you can imagine, for instance, the voltage level at the output of a microphone or the measured level of the tide at a particular location; in all cases, the variation of a signal, over time, implies that a transfer of energy is happening somewhere, and ultimately this is what we want to study.

Now, a time-varying value which only increases over time is not only a physical impossibility but a recipe for disaster for whatever system is supposed to deal with it; fuses will blow, wires will melt and so on. Sinusoids have another remarkable property which justifies their ubiquitous presence. Indeed, any linear time-invariant transformation of a sinusoid is a sinusoid at the same frequency: we express this by saying that sinusoidal oscillations are eigenfunctions of linear time-invariant systems.
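The eigenfunction property can be checked directly (a NumPy sketch added for illustration; the two-tap moving average is our choice of a simple LTI system, not an example from the text):

```python
import numpy as np

# a simple LTI system: the two-tap moving average y[n] = (x[n] + x[n-1]) / 2
h = np.array([0.5, 0.5])
omega = 2 * np.pi / 16
n = np.arange(256)
x = np.exp(1j * omega * n)          # a complex sinusoid at frequency omega

y = np.convolve(x, h)[: len(n)]
# away from the start-up sample, the output is the input scaled by the
# (complex) frequency response H(e^{j omega}) -- same frequency, new amplitude/phase
H = h[0] + h[1] * np.exp(-1j * omega)
assert np.allclose(y[1:], H * x[1:])
```

The output differs from the input only by the complex factor H: the oscillation frequency is untouched, which is precisely what "eigenfunction" means here.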

This is a formidable tool for the analysis and design of signal processing structures, as we will see in much detail in the context of discrete-time filters. The purpose of the present Chapter is to introduce and analyze some key results on Fourier series and Fourier transforms in the context of discrete-time signal processing. It turns out that, as we mentioned in the previous Chapter, the Fourier transform of a signal is a change of basis in an appropriate Hilbert space.

While this notion constitutes an extremely useful unifying framework, we also point out the peculiarities of its specialization within the different classes of signals. In particular, for finite-length signals we highlight the eminently algebraic nature of the transform, which leads to efficient computational procedures; for infinite sequences, we will analyze some of its interesting mathematical subtleties.

The Fourier transform of a signal is an alternative representation of the data in the signal. While a signal lives in the time domain, its Fourier representation lives in the frequency domain. We can move back and forth at will from one domain to the other using the direct and inverse Fourier operators, since these operators are invertible. In this Chapter we study three types of Fourier transform which apply to the three main classes of signals that we have seen so far. The basic ingredient of all the Fourier transforms which follow is the discrete-time complex exponential; this is a sequence of the form $x[n] = A\, e^{j(\omega n + \phi)}$.

The power of a complex exponential is equal to the average energy over a period, i.e. $A^2$, irrespective of frequency. In the introduction, we hinted at the fact that Fourier analysis allows us to decompose a physical phenomenon into oscillatory components. However, it may seem odd that we have chosen to use complex oscillations for the analysis of real-world signals.

It may seem even odder that these oscillations can have a negative frequency and that, as we will soon see in the context of the DTFT, the spectrum extends to the negative frequency axis.

- Signal Processing for Communications.

The starting point in answering these legitimate questions is to recall that the use of complex exponentials is essentially a matter of convenience. One could develop a complete theory of frequency analysis for real signals using only the basic trigonometric functions. You may actually have noticed this if you are familiar with the Fourier Series of a real function; yet the notational overhead is undoubtedly heavy since it involves two separate sets of coefficients for the sine and cosine basis functions, plus a distinct term for the zero-order coefficient.

The use of complex exponentials elegantly unifies these separate series into a single complex-valued sequence. Yet, one may ask again, what does it mean for the spectrum of a musical sound to be complex? Simply put, the complex nature of the spectrum is a compact way of representing two concurrent pieces of information which uniquely define each spectral component: its magnitude and its phase. These two values form a two-element vector in $\mathbb{R}^2$ but, since $\mathbb{R}^2$ is isomorphic to $\mathbb{C}$, we use complex numbers for their mathematical convenience. We can visualize the evolution of a complex exponential over discrete time as a series of points on the unit circle in the complex plane.

If we decompose a real signal into complex exponentials, we will show that, for any given frequency value, the phases of the positive and negative components are always opposite in sign; as the two oscillations move in opposite directions along the unit circle, their complex part will always cancel out exactly, thus returning a purely real signal.
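This cancellation is easy to verify numerically (a NumPy sketch added for illustration; the frequency and phase values are arbitrary):

```python
import numpy as np

n = np.arange(16)
omega, phi = 2 * np.pi * 3 / 16, 0.7
# a real sinusoid as the sum of two conjugate complex exponentials,
# one at frequency +omega and one at -omega, with opposite phases
x_pos = 0.5 * np.exp(1j * (omega * n + phi))
x_neg = 0.5 * np.exp(-1j * (omega * n + phi))
x = x_pos + x_neg

# the imaginary parts cancel exactly, leaving a real cosine
assert np.allclose(x.imag, 0)
assert np.allclose(x.real, np.cos(omega * n + phi))
```

The positive- and negative-frequency components rotate in opposite directions on the unit circle, so their imaginary parts are mirror images at every time index.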

The final step in developing a comfortable feeling for complex oscillations comes from the realization that, in the synthesis of discrete-time signals, and especially in the case of communication systems, it is actually more convenient to work with complex-valued signals themselves. Although the transmitted signal of a device like an ADSL box is a real signal, the internal representation of the underlying sequences is complex; therefore complex oscillations become a necessity.

We start by considering a family of finite-length sinusoidal signals, indexed by an integer k, of the form $w_k[n] = e^{j\omega_k n}$, for $n = 0, \ldots, N-1$. To determine the allowable frequency values, note that, in order for $w_k[n]$ to contain a whole number of periods over N samples, it must conform to $\omega_k N = 2\pi k$ for integer k. In more compact notation, we can therefore state that $\omega_k = \frac{2\pi}{N} k$. In signal processing practice, however, it is customary to keep the normalization factor explicit in the change of basis formulas; this is mostly due to computational reasons, as we will see later, but, for the sake of consistency with the mainstream literature, we will also follow this convention.

In detail, we have the analysis formula $X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j\frac{2\pi}{N}nk}$ and the synthesis formula $x[n] = \frac{1}{N}\sum_{k=0}^{N-1} X[k]\, e^{j\frac{2\pi}{N}nk}$. The coefficients X[k] are referred to as the spectrum of the signal. While a phase offset does not influence the energy distribution in frequency, it does have a significant effect on the shape of the signal in the discrete-time domain, as we will shortly see in more detail.

Some examples for signals in C 64 are plotted in Figures 4. Figure 4. The magnitude DFT does not change, but the phase offset appears in the phase of the transform. As a consequence, all of the basis vectors are needed to reconstruct the signal. It can be shown with a few trigonometric manipulations that the DFT of a step signal is. This is the reason why, earlier, we stated that periodic sequences are the natural way to embed a finite-length signal into a sequence: their Fourier representation is formally identical.
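A simple numerical experiment shows the DFT at work (a NumPy sketch added for illustration; `np.fft.fft` implements the analysis formula without the 1/N factor, matching the convention above):

```python
import numpy as np

N, k0 = 64, 4
# DFT of a pure complex exponential at frequency 2*pi*k0/N:
# a single nonzero coefficient at index k0, of magnitude N
x = np.exp(2j * np.pi * k0 * np.arange(N) / N)
X = np.fft.fft(x)

assert np.isclose(abs(X[k0]), N)
mask = np.ones(N, dtype=bool)
mask[k0] = False
assert np.allclose(X[mask], 0)
```

The whole energy of the signal is concentrated in one spectral coefficient, which is exactly the orthogonality of the Fourier basis vectors at work.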

This is not surprising since (a) we have already established a correspondence between $\mathbb{C}^N$ and the space of N-periodic sequences and (b) we are actually expressing a length-N sequence as a combination of N-periodic basis signals. We now consider a Fourier representation for infinite non-periodic sequences. From a purely mathematical point of view, the Discrete-Time Fourier Transform of a sequence x[n] is defined as $X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n}$. While the DFT and DFS were signal transformations which involved only a finite number of quantities, both the infinite summation and the real-valued argument appearing in the DTFT can create an uneasiness which overshadows the conceptual similarities between the transforms.

Convergence in the mean square sense implies that, while the total energy of the error signal becomes zero, the pointwise values of the partial sum may never approach the values of the limit. One manifestation of this odd behavior is called the Gibbs phenomenon , which has important consequences in our approach to filter design, as we will see later. Its DTFT can be computed as the sum 4.

When, as is very often the case, the DTFT is complex-valued, the usual way to represent it graphically takes the magnitude and the phase separately into account. A way to gain some intuition about the structure of the DTFT formulas is to consider the DFS of periodic sequences with larger and larger periods. We want to show that, in the limit, we end up with the reconstruction formula of the DTFT.

Now, given an absolutely summable sequence x[n], we can always build an N-periodic sequence $\tilde{x}[n]$ by periodization: $\tilde{x}[n] = \sum_{m=-\infty}^{\infty} x[n - mN]$. Now consider the DFS of $\tilde{x}[n]$; to check that the construction is consistent, we can write the DFS reconstruction formula using the DFS values obtained from the periodized sequence. We now show that, if we are willing to sacrifice mathematical rigor, the DTFT can be cast in the same conceptual framework we used for the DFT and DFS, namely as a basis change in a vector space.

The following formulas are to be taken as nothing more than a set of purely symbolic derivations, since the mathematical hypotheses under which the results are well defined are far from obvious and are completely hidden by the formalism.


It is only fair to say, however, that the following expressions represent a very handy and intuitive toolbox to grasp the essence of the duality between the discrete-time and the frequency domains, and that they can be put to use very effectively to derive quick results when manipulating sequences. Consider, for instance, a family of parametric functions $r_k(t)$ which become narrower and taller as k grows while keeping unit area. For any continuous function f(t), we can write the integral of the product $r_k(t) f(t)$. Now, as k goes to infinity, this integral converges to f(0); hence we can say that the limit of the family of functions $r_k(t)$ converges to the Dirac delta.

### An Algebraic Approach

The physical interpretation of the Dirac delta is related to quantities expressed as continuous distributions, of which the most familiar example is probably the probability density function (pdf). These functions represent a value which makes physical sense only over an interval of nonzero measure; the punctual value of a distribution is only an abstraction. The Dirac delta is the operator that extracts this punctual value from a distribution, in a sense capturing the essence of considering smaller and smaller observation intervals.

The resulting object is called a pulse train, similarly to what we built for the case of N-periodic sequences. Now that we have the delta notation in place, we are ready to start. We can now show that, thanks to the delta formalism, the DTFT is the most general type of Fourier transform for discrete-time signals. Consider a length-N signal x[n] and its N DFT coefficients X[k]; consider also the sequences obtained from x[n] either by periodization or by building a finite-support sequence.

The computation of the DTFTs of these sequences highlights the relationships linking the three types of discrete-time transforms that we have seen so far. Periodic Sequences. We can try to write the analysis DTFT formula for the periodized sequence $\tilde{x}[n]$. Finite-Support Sequences. What the above expression means is that the DTFT of the finite-support sequence is again uniquely defined by the N DFT coefficients of the finite-length signal x[n], and it can be obtained by a type of Lagrangian interpolation. As in the previous case, the values of the DTFT at the roots of unity are equal to the DFT coefficients; note, however, that the transform of a finite-support sequence is very different from the DTFT of a periodized sequence.

The latter, in accordance with the definition of the Dirac delta, is defined only in the limit and for a finite set of frequencies; the former is just a smooth interpolation of the DFT. Symmetries and Structure. The DTFT of a time-reversed sequence is the transform with a reversed frequency axis: $\mathrm{DTFT}\{x[-n]\} = X(e^{-j\omega})$. Finally, if x[n] is real and symmetric, then the DTFT is real. Linearity and Shifts. The DTFT is a linear operator.

A shift in the discrete-time domain leads to multiplication by a phase term in the frequency domain: $\mathrm{DTFT}\{x[n-k]\} = e^{-j\omega k} X(e^{j\omega})$. Dually, multiplication by a complex exponential in the discrete-time domain leads to a shift in the frequency domain: $\mathrm{DTFT}\{e^{j\omega_0 n} x[n]\} = X(e^{j(\omega - \omega_0)})$; this last result is known as the modulation theorem. Energy Conservation. The DFS of a time-reversed periodic sequence is the time-reversed sequence of DFS coefficients. For real periodic sequences, the corresponding special symmetries hold. The DFS is a linear operator, since it can be expressed as a matrix-vector product.

We have already seen the energy conservation property in the context of basis expansion. The properties of the DFT are obviously the same as those for the DFS, given the formal equivalence of the transforms. The only detail is how to interpret shifts, index reversal and symmetries for finite, length- N vectors; this is easily solved by considering the fact that the equivalence DFT-DFS translates in the time domain to a homomorphism between a length- N signal and its associated N -periodic extension to an infinite sequence.

Mathematically, this means that all shifts and time reversals of a length-N signal are operated modulo N. Consider a length-N signal: a shift by k corresponds to a circular shift. The concept of symmetry can be reinterpreted as follows: a symmetric signal is equal to its time-reversed version; therefore, for a length-N signal to be symmetric, the signal must equal its time reversal modulo N. Of course, the same definition can be used for antisymmetric signals with just a change of sign. The following symmetries hold only for real signals.

The DFT is obviously a linear operator. A circular shift in the discrete-time domain leads to multiplication by a phase term in the frequency domain:. In the previous Sections, we have developed three frequency representations for the three main types of discrete-time signals; the derivation was eminently theoretical and concentrated mostly upon the mathematical properties of the transforms seen as a change of basis in Hilbert space.
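Both the circular-shift property and the Hermitian symmetry of real signals can be verified in a few lines (a NumPy sketch added for illustration; `np.roll` implements the circular shift):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8
x = rng.standard_normal(N)
X = np.fft.fft(x)

# circular shift by k multiplies the DFT by the phase term e^{-j 2 pi k m / N}
k = 3
X_shift = np.fft.fft(np.roll(x, k))
phase = np.exp(-2j * np.pi * k * np.arange(N) / N)
assert np.allclose(X_shift, X * phase)

# for a real signal the DFT is Hermitian-symmetric: X[N - m] = X[m]*
assert np.allclose(X[1:][::-1], np.conj(X[1:]))
```

Note that the shift leaves the magnitude spectrum untouched; only the phase records the displacement.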

In the following Sections we will see how to put the Fourier machinery to practical use. We have seen two fundamental ways to look at a signal: its time-domain representation, in which we consider the values of the signal as a function of discrete time, and its frequency-domain representation, in which we consider its energy and phase content as a function of digital frequency. The information contained in each of the two representations is exactly the same, as guaranteed by the invertibility of the Fourier transform; yet, from an analytical point of view, we can choose to concentrate on one domain or the other according to what we are specifically seeking.

Consider for instance a piece of music; such a signal contains two coexisting perceptual features, meter and key. The key, on the other hand, can be determined by looking at the pitch patterns of the played notes: since pitch is related to the frequency content of the sound, the natural domain of this feature is the frequency domain.

The DFT, on the other hand, is fundamentally a numerical tool, in that it defines a finite set of operations which can be computed in a finite amount of time; in fact, a very efficient algorithmic implementation of the DFT exists under the name of Fast Fourier Transform (FFT), which only requires a number of operations on the order of N log N for an N-point data vector. The DFT, as we know, only applies to finite-length signals, but this is actually acceptable since, in practice, all measured signals have finite support; in principle, therefore, the DFT suffices for the spectral characterization of real-world sequences.

Since the transform of a finite-length signal and its DTFT are related by the interpolation formula seen earlier, the DFT coefficients capture the entire spectral information. The first question that we ask ourselves is how to represent spectral data. Since the transform values are complex numbers, it is customary to separately plot their magnitude and their phase; more often than not, we will concentrate on the magnitude only, which is related to the energy distribution of the signal in the frequency domain. These values can be displayed as such (and we obtain a plain DFT plot) or they can be used to obtain the DTFT of the periodic or finite-support extension of the original signal.

Again, the DTFT of a finite-support extension is just a smooth interpolation of the original DFT points and no new information is added. The algorithm, in its different flavors, is so ubiquitous and so important that the acronym FFT is often used liberally to indicate the DFT or the DFS, which would be more appropriate since the underlying model is that of a periodic signal. In matrix form this is equivalent to decomposing W into the product of a series of matrices with mostly zero or unity elements.

FFT algorithms are tailored to the specific length of the input signal, and an arbitrary-length signal is usually brought to a supported length by zero padding, i.e. by appending zeros to the data vector. Now, the maximum resolution of an N-point DFT, i.e. the minimum separation between frequency bins, is $2\pi/N$. By extending the signal to a longer length M, we are indeed reducing the separation between frequency components, and it might therefore seem that we are increasing the spectral resolution. It is not so.

Details of the spectrum which were not apparent in an N-point DFT are still not apparent in a zero-padded version of the same. The spectrum is a complete, alternative representation of a signal; by analyzing the spectrum, one can obtain, at a glance, the fundamental information required to characterize and classify a signal in the frequency domain.
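The effect of zero padding can be demonstrated directly (a NumPy sketch added for illustration; the frequencies 0.20 and 0.22 are arbitrary choices too close to be resolved over 32 samples):

```python
import numpy as np

# two sinusoids too close to be resolved by a 32-point DFT
n = np.arange(32)
x = np.cos(2 * np.pi * 0.20 * n) + np.cos(2 * np.pi * 0.22 * n)

X32 = np.abs(np.fft.fft(x))
X256 = np.abs(np.fft.fft(x, 256))   # zero-padded to 256 points

# zero padding only samples the same underlying DTFT on a denser grid:
# the original DFT values reappear at every 8th bin of the padded transform
assert np.allclose(X256[::8], X32)
```

The padded transform is a smoother curve through the same points; no new spectral detail has been created, which is exactly the point made above.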

It is customary to broadly classify discrete-time signals into three classes according to where their spectral energy is concentrated: lowpass, highpass and bandpass signals. For real-valued signals, the magnitude spectrum is a symmetric function and the above classifications take this symmetry into account. For once, this is made explicit in Figures 4. Phase. As we have stated before, the Fourier representation allows us to think of any signal as the sum of the outputs of a potentially infinite number of sinusoidal generators.

While the magnitude of the spectrum defines the inherent power produced by each of the generators, its phase defines the relative alignment of the generated sinusoids. This alignment determines the shape of the signal in the discrete-time domain. To illustrate this with an example, consider a periodic signal synthesized from a few harmonic components. If the phase terms are uniformly zero, i.e. the generated sinusoids are all aligned, the components add constructively at the origin of each period. In addition, it should be noted that, with a zero phase term, the periodic signal is symmetric and that therefore the DFS coefficients are real.

While the DFS magnitude remains exactly the same, the resulting time-domain signal is the one depicted in Figure 4. Recall our example at the beginning of this Chapter, when we considered the time and frequency information contained in a piece of music. We stated that the melodic information is related to the frequency content of the signal; obviously this is only partially true, since the melody is determined not only by the pitch values but also by their duration and order. Now, if we take a global Fourier Transform of the entire musical piece we have a comprehensive representation of the frequency content of the piece: in the resulting spectrum there is information about the frequency of each played note.
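The experiment described above is easy to reproduce (a NumPy sketch added for illustration; the choice of harmonics and the conjugate-antisymmetric phase construction are ours, made so that both signals are real):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64
mag = np.zeros(N)
mag[[1, 3, 5]] = mag[[N - 1, N - 3, N - 5]] = N / 2   # a few harmonic components

# zero phase: all sinusoidal components aligned; the spectrum is real
x_zero = np.fft.ifft(mag).real

# random phase, kept conjugate-antisymmetric so the time signal stays real
phi = rng.uniform(-np.pi, np.pi, N // 2 - 1)
phase = np.concatenate(([0], phi, [0], -phi[::-1]))
x_rand = np.fft.ifft(mag * np.exp(1j * phase)).real

# identical magnitude spectra, yet very different time-domain shapes
assert np.allclose(np.abs(np.fft.fft(x_zero)), np.abs(np.fft.fft(x_rand)))
```

Plotting `x_zero` against `x_rand` shows two signals with the same energy distribution in frequency but entirely different waveforms, which is the point about phase and shape.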

This makes us wonder whether there exists a time-frequency representation of a signal, in which both time and frequency information are readily apparent. The simplest time-frequency transformation is called the spectrogram. The recipe involves splitting the signal into small consecutive (and possibly overlapping) length-N pieces and computing the DFT of each. What we obtain is a function S[k, m] of a discrete-time index and of a discrete frequency index. It is usually represented graphically as a plot in which the x-axis is the discrete-time index m, the y-axis is the discrete frequency index k, and the color encodes the magnitude of S[k, m], with darker colors for larger values.
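The recipe above translates directly into code (a NumPy sketch added for illustration; the function name, the `hop` parameter and the chirp test signal are our own choices):

```python
import numpy as np

def spectrogram(x, N, hop):
    """Magnitude of the DFT of successive length-N (possibly overlapping) pieces."""
    frames = [x[m:m + N] for m in range(0, len(x) - N + 1, hop)]
    return np.abs(np.fft.fft(frames, axis=1)).T   # rows: frequency k, columns: time m

# a chirp-like test signal whose frequency increases over time
n = np.arange(4096)
x = np.cos(2 * np.pi * (0.05 + 0.1 * n / len(n)) * n)
S = spectrogram(x, N=128, hop=64)
print(S.shape)   # (128, 63): 128 frequency bins, 63 half-overlapping windows
```

Displaying S as an image would show the energy ridge drifting upward in frequency over time, which a single global DFT of x cannot reveal.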

As an example of the insight we can gain from the spectrogram, consider analyzing the well-known Bolero by Ravel. After 13 seconds, the flute starts playing the theme; this is identifiable in the dark horizontal stripes which denote a high energy content around the frequencies which correspond to the pitches of the melody; with further analysis we could even try to identify the exact notes.

The clarity of this plot is due to the simple nature of the signal; if we now plot the spectrogram of the last 20 seconds of the piece, we obtain Figure 4. Here the orchestra is playing full blast, as indicated by the high-energy activity across the whole spectrum; we can only detect the onset of the rhythmic shouts that precede the final chord. We can therefore say that the time resolution of the spectrogram is N samples, since the value of the signal at time $n_0$ influences the DFT of the N-point window around $n_0$. It is rather easy to show that the amount of overlap does not change the situation.

In practice, we need to choose an optimal tradeoff taking the characteristics of the signal into consideration. The above problem, described for the case of the spectrogram, is actually a particular instance of a general uncertainty principle for time-frequency analysis.

The principle states that, independently of the analysis tools that we put in place, we can never hope to achieve arbitrarily good resolution in both time and frequency since there exists a lower bound greater than zero for the product of the localization measure in time and frequency. The absence of a physical dimension for time has the happy consequence that all discrete-time signal processing tools become indifferent to the underlying physical nature of the actual signals: stock exchange values or sampled orchestral music are just sequences of numbers. This is in stark contrast with the practice of analog signal processing in which a half-band filter made of capacitors, resistors and other electronic components must be redesigned for any new class of input signals.

This dimensionless abstraction, however, is not without its drawbacks from the point of view of hands-on intuition; after all, we are all very familiar with signals in the real world for which time is expressed in seconds and frequency is expressed in hertz. The precise, formal link between real-world signal and discrete-time signal processing is given by the Sampling Theorem, which we will study later. The fundamental idea, however, is that we can remove the abstract nature of a discrete-time signal and, correspondingly, of a dimensionless frequency by associating a time duration to the interval between successive discrete-time indices in the sequence.
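As a small worked example of this association (the sampling rate below is an assumed value, not one from the text): once the interval between samples is Ts seconds, each dimensionless DFT bin acquires a frequency in hertz.

```python
# If consecutive samples are Ts seconds apart (Fs = 1/Ts samples per second),
# the dimensionless DFT bin k of a length-N window maps to a physical frequency:
Fs = 8000.0   # assumed sampling rate in Hz (illustrative)
N = 256       # window length
k = 32        # DFT bin index
f_hz = k * Fs / N
print(f_hz)   # 1000.0 Hz
```

The same sequence of numbers, read at a different Fs, describes a different physical frequency; the discrete-time machinery itself is unchanged.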

In the space of length-N signals, we indicate the DFT of a signal x as X. Infinite sequences, on the other hand, must be handled with some care when it comes to their properties. We will see shortly that two of the most important properties of infinite sequences concern their summability: this can take the form of either absolute summability (the stronger condition) or square summability (the weaker condition, corresponding to finite energy).

Periodic Signals.

The tilde notation x̃[n] will be used whenever we need to explicitly stress a periodic behavior. Periodic Extensions. Periodic sequences are infinite in length, and yet their information is contained within a finite number of samples. In this sense, periodic sequences represent a first bridge between finite-length signals and infinite sequences. Note that now an arbitrary shift of the periodic sequence corresponds to the periodization of a circular shift of the original finite-length signal. Finite-Support Signals. Note that, although x̄[n] is an infinite sequence, the knowledge of M and of the N nonzero values of the sequence completely specifies the entire signal.

Note that here, in contrast to the periodic extension of x[n], we are actually adding arbitrary information in the form of the zero values outside of the support interval. This is not without consequences, as we will see in the following Chapters. In general, we will use the bar notation x̄[n] for sequences defined as the finite-support extension of a finite-length signal.
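The two extensions can be sketched as index maps (a minimal illustration; the sample values are arbitrary):

```python
import numpy as np

x = np.array([1, 2, 3, 4])          # a finite-length signal, N = 4

def periodize(x, n):
    """x~[n]: the N-periodic extension of x."""
    return x[n % len(x)]

def finite_support(x, n):
    """x-bar[n]: x inside its support (here starting at 0), zero elsewhere."""
    return x[n] if 0 <= n < len(x) else 0

print([periodize(x, n) for n in range(-2, 6)])       # wraps around
print([finite_support(x, n) for n in range(-2, 6)])  # zero outside [0, N)
```

The periodic version repeats the same four values forever, while the finite-support version adds the (arbitrary) information that the signal is zero elsewhere.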

In classic Western movies there is always a sequence showing a stagecoach leaving town. We can see the spoked wagon wheels starting to turn forward faster and faster, then stop, and then start to turn backwards. In fact, each frame in the movie is a snapshot of a spinning disk with increasing angular velocity. As the speed of the real wheel increases further, the wheel on film starts to move backwards, which corresponds to a negative digital frequency. We can distinguish three cases. In practice, the periodization of short sequences is an effective method to synthesize the sound of string instruments such as a guitar or a piano; used in conjunction with simple filters, the technique is known as the Karplus-Strong algorithm.
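A minimal sketch of the Karplus-Strong idea under common assumptions: a short random buffer is read out periodically while a two-tap averaging filter in the loop makes the "string" decay. The buffer length (which sets the pitch period) and the decay rule are illustrative choices, not taken from the text:

```python
import numpy as np

def karplus_strong(burst, n_samples):
    """Periodize a short buffer, smoothing adjacent samples in the feedback
    loop so the tone decays over time (sketch of the Karplus-Strong idea)."""
    buf = list(burst)
    out = []
    for _ in range(n_samples):
        s = buf.pop(0)
        out.append(s)
        buf.append(0.5 * (s + buf[0]))  # two-tap lowpass in the loop
    return np.array(out)

rng = np.random.default_rng(0)
pluck = rng.uniform(-1, 1, 50)        # initial "pluck": pitch period of 50 samples
y = karplus_strong(pluck, 2000)
print(len(y))
```

The first pass through the buffer reproduces the noise burst; each subsequent pass is a smoothed, quieter copy, which is why the output sounds like a decaying plucked string.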

The periodization formula leads to the following result. Other books of interest include those by B. Allen and D. For each of the following discrete-time signals, state whether the signal is periodic and, if so, specify the period. In the 17th century, algebra and geometry started to interact in a fruitful synergy which continues to the present day. This interplay spearheaded the notion of vector space, in which a geometrical point could be represented as an n-tuple of coordinates; this, in turn, readily evolved into the theory of linear algebra. Later, the concept proved useful in the opposite direction: many algebraic problems could benefit from our innate geometrical intuition once they were cast in vector form; starting from the easy three-dimensional visualization of concepts such as distance and orthogonality, more complex algebraic constructs could be brought within the realm of intuition.

The final leap of imagination came with the realization that the concept of vector space could be applied to much more abstract entities such as infinite-dimensional objects and functions. In so doing, however, spatial intuition can be of limited help and, for this reason, the notion of vector space had to be formalized in much more rigorous terms; we will see that the definition of Hilbert space is one such formalization. Most of the signal processing theory in this book can be usefully cast in terms of vector notation, and the advantages of this approach are exactly what we have just delineated.

First of all, the standard machinery of linear algebra becomes immediately available and applicable; this greatly simplifies the formalism used in the mathematical proofs which follow and, at the same time, it fosters a good intuition with respect to the underlying principles which are being put in place.

Furthermore, the vector notation creates a frame of thought which seamlessly links the more abstract results involving infinite sequences to the algorithmic reality involving finite-length signals. Finally, on the practical side, vector notation is the standard paradigm for numerical analysis packages such as Matlab; signal processing algorithms expressed in vector notation translate to working code with very little effort. In the previous Chapter, we established the basic notation for the different classes of discrete-time signals which we will encounter time and again in the rest of the book and we hinted at the fact that a tight correspondence can be established between the concept of signal and that of vector space.

In this Chapter, we pursue this link further, first by reviewing the familiar Euclidean space in finite dimensions and then by extending the concept of vector space to infinite-dimensional Hilbert spaces. Euclidean geometry is a straightforward formalization of our spatial sensory experience; hence its cornerstone role in developing a basic intuition for vector spaces. Even a more abstract concept such as that of a basis is quite easy to visualize: think of the standard coordinates of latitude, longitude and height, which correspond to the three orthogonal axes in R³.

Unfortunately, immediate spatial intuition fails us for higher dimensions (i.e. for N > 3). These notions, ultimately, will be generalized even further to more abstract types of vector spaces. For the moment, let us review the properties of R^N, the N-dimensional Euclidean space. Vectors and Notation. A point in R^N is specified by an N-tuple of coordinates, x = [x_0 x_1 ... x_{N−1}]^T. The generic n-th element in a vector x is indicated by the subscript notation x_n. The n-th element of the k-th vector in a set is indicated by the notation x_n^(k). Inner Product. The inner product between two vectors x, y ∈ R^N is defined as

    ⟨x, y⟩ = sum_{n=0}^{N−1} x_n y_n.

We say that x and y are orthogonal when their inner product is zero, ⟨x, y⟩ = 0. The norm of a vector is defined in terms of the inner product as ‖x‖ = ⟨x, x⟩^{1/2}. It is easy to visualize geometrically that the norm of a vector corresponds to its length, i.e. its distance from the origin. A remarkable property linking the inner product and the norm is the Cauchy-Schwarz inequality (the proof of which is nontrivial); given x, y ∈ R^N we can state that

    |⟨x, y⟩| ≤ ‖x‖ ‖y‖.
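As a quick numerical illustration of these definitions, using numpy arrays as vectors in R³ (the particular vectors are arbitrary):

```python
import numpy as np

x = np.array([3.0, 4.0, 0.0])
y = np.array([1.0, 1.0, 1.0])

inner = np.dot(x, y)                 # <x, y> = sum of x_n * y_n
norm_x = np.sqrt(np.dot(x, x))       # ||x|| = sqrt(<x, x>)
norm_y = np.sqrt(np.dot(y, y))

print(inner, norm_x)                              # 7.0 5.0
print(bool(abs(inner) <= norm_x * norm_y))        # Cauchy-Schwarz holds
print(bool(np.linalg.norm(x + y) <= norm_x + norm_y))  # triangle inequality holds
```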

The concept of norm is used to introduce the notion of Euclidean distance between two vectors x and y:

    d(x, y) = ‖x − y‖.

Given a set of M vectors from R^N, a key question is that of completeness: can any vector in R^N be written as a linear combination of vectors from the set? For this to be the case, the set must span R^N, and it can be shown that this is equivalent to saying that the set must contain at least N linearly independent vectors. A set of N linearly independent vectors for R^N is called a basis and, amongst bases, the ones with mutually orthogonal vectors of norm equal to one are called orthonormal bases.

The orthonormality of such a set is immediately apparent. The purpose of the previous Section was to briefly review the elementary notions and spatial intuitions of Euclidean geometry. A thorough study of vectors in R^N and C^N is the subject of linear algebra; yet the ideas of vectors, orthogonality and bases are much more general, the basic ingredients being an inner product and the associated square norm.

While the analogy between vectors in C^N and length-N signals is readily apparent, the question now hinges on how we are to proceed in order to generalize the above concepts to the class of infinite sequences. In fact, the proper generalization of C^N to an infinite number of dimensions takes the form of a particular vector space called a Hilbert space; the structure of this kind of vector space imposes a set of constraints on its elements so that divergence problems, such as the one we just mentioned, no longer bother us. When we embed infinite sequences into a Hilbert space, these constraints translate to the condition that the corresponding signals have finite energy, which is a mild and reasonable requirement.

Finally, it is important to remember that the notion of Hilbert space is applicable to much more general vector spaces than C N ; for instance, we can easily consider spaces of functions over an interval or over the real line. This generality is actually the cornerstone of a branch of mathematics called functional analysis. While we will not follow in great depth these kinds of generalizations, we will certainly point out a few of them along the way.

The space of square integrable functions, for instance, will turn out to be a marvelous tool a few Chapters from now when, finally, the link between continuous- and discrete-time signals will be explored in detail. A word of caution: we are now starting to operate in a world of complete abstraction. Here a vector is an entity per se and, while analogies and examples in terms of Euclidean geometry can be useful visually, they are by no means exhaustive.

In other words: vectors are no longer just N -tuples of numbers; they can be anything. This said, a Hilbert space can be defined in incremental steps starting from a general notion of vector space and by supplementing this space with two additional features: the existence of an inner product and the property of completeness.

Vector Space. Consider a set of vectors V and a set of scalars S, which can be either R or C for our purposes. Inner Product Space. What we have so far is the simplest type of vector space; the next ingredient which we consider is the inner product, which is essential to build a notion of distance between elements in a vector space.

A vector space with an inner product is called an inner product space. From this definition of the inner product, a series of additional definitions and properties can be derived: first of all, orthogonality between two vectors is defined with respect to the inner product, and we say that the non-zero vectors x and y are orthogonal, written x ⊥ y, if and only if

    ⟨x, y⟩ = 0.

From the definition of an inner product, we can define the norm of a vector as (we will omit from now on the subscript 2 from the norm symbol)

    ‖x‖ = ⟨x, x⟩^{1/2}.

In turn, the norm satisfies the Cauchy-Schwarz inequality

    |⟨x, y⟩| ≤ ‖x‖ ‖y‖.

The norm also satisfies the triangle inequality

    ‖x + y‖ ≤ ‖x‖ + ‖y‖.

For orthogonal vectors, the triangle inequality becomes the famous Pythagorean theorem:

    ‖x + y‖² = ‖x‖² + ‖y‖².

Hilbert Space. A vector space H(V, S) equipped with an inner product is called an inner product space. To obtain a Hilbert space, we need completeness. This is a slightly more technical notion, which essentially implies that convergent sequences of vectors in V have a limit that is also in V.

To gain intuition, think of the set of rational numbers Q versus the set of real numbers R. The set of rational numbers is incomplete, because there are convergent sequences in Q which converge to irrational numbers. The set of real numbers contains these irrational numbers, and is in that sense the completion of Q. Completeness is usually hard to prove in the case of infinite-dimensional spaces; in the following it will be tacitly assumed and the interested reader can easily find the relevant proofs in advanced analysis textbooks.

Finally, we will also only consider separable Hilbert spaces, which are the ones that admit countable orthonormal bases. Finite Euclidean Spaces. Polynomial Functions. This series can be shown to converge. Square Integrable Functions. Another interesting example of a functional Hilbert space is the space of square integrable functions over a finite interval. The inner product is a fundamental tool in a vector space since it allows us to introduce a notion of distance between vectors. This is a typical instance in which a geometric construct helps us to generalize a basic idea to much more abstract scenarios.

We know that the orthogonal projection defines the point along x which is closest to y, and therefore it indicates how well we can approximate y by a simple scaling of x. To illustrate this, it should be noted that

    ⟨x, y⟩ = ‖x‖ ‖y‖ cos θ,

where θ is the angle between the two vectors. Clearly, if the vectors are orthogonal, the cosine is zero and no approximation is possible.

Since the inner product is dependent on the angular separation between the vectors, it represents a first rough measure of similarity between x and y; in broad terms, it provides a measure of the difference in shape between vectors. As we have said before, discrete-time signals are vectors; the computation of their inner product will assume different names according to the processing context in which we find ourselves: it will be called filtering, when we are trying to approximate or modify a signal, or it will be called correlation, when we are trying to detect one particular signal amongst many.

Yet, in all cases, it will still be an inner product, i.e. a qualitative measure of similarity between vectors. In particular, the concept of orthogonality between signals implies that the signals are perfectly distinguishable or, in other words, that their shape is completely different. The need for a quantitative measure of similarity in some applications calls for the introduction of the Euclidean distance, which is derived from the inner product as

    d(x, y) = ‖x − y‖.

In particular, for C^N the Euclidean distance is defined by the expression

    d(x, y) = ( sum_{n=0}^{N−1} |x_n − y_n|² )^{1/2}.

In the practice of signal processing, the Euclidean distance is referred to as the root mean square error; this is a global, quantitative goodness-of-fit measure when trying to approximate signal y with x. Incidentally, there are other types of distance measures which do not rely on a notion of inner product; for example, in C^N we could define

    d_∞(x, y) = max_n |x_n − y_n|.

This distance is based on the supremum norm and is usually indicated by ‖·‖_∞; however, it can be shown that there is no inner product from which this norm can be derived and therefore no Hilbert space can be constructed where ‖·‖_∞ is the natural norm.
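A small numerical comparison of the two distance measures (the vectors below are arbitrary illustrations): the supremum norm only sees the single worst sample, while the Euclidean norm accumulates the error globally.

```python
import numpy as np

x = np.array([0.0, 0.0, 0.0, 0.0])
y = np.array([0.1, 0.1, 0.1, 2.0])

d_euclid = np.linalg.norm(x - y)     # square root of the summed squared errors
d_sup = np.max(np.abs(x - y))        # supremum (max) distance

print(round(float(d_euclid), 4), float(d_sup))
```

Here the sup distance is dominated entirely by the one large error, whereas the Euclidean distance also reflects the three small ones.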

Nonetheless, this norm will reappear later, in the context of optimal filter design. Now that we have defined the properties of Hilbert space, it is only natural to start looking at the consequent inner structure of such a space. The best way to do so is by introducing the concept of basis. All this is accomplished by a linear transformation called a change of basis ; to anticipate the topic of the next Chapter, we will see shortly that the Fourier transform is an instance of basis change.

Sometimes, we are interested in exploring in more detail a specific subset of a given vector space; this is accomplished via the concept of subspace. A subspace is, as the name implies, a restricted region of the global space, with the additional property that it is closed under the usual vector operations. This implies that, once in a subspace, we can operate freely without ever leaving its confines; just like a full-fledged space, a subspace has its own skeleton, i.e. its own bases.

The set of vectors W = {w^(k)} is called linearly independent if the following holds:

    sum_k α_k w^(k) = 0  if and only if  α_k = 0 for all k.

Sometimes, the set of vectors can be orthogonal but not normal, i.e. the vectors are mutually orthogonal but do not have unit norm. Alternatively, a linearly independent set of vectors can be orthonormalized via the Gram-Schmidt procedure, which can be found in any linear algebra textbook. One of the most important properties of finite-dimensional spaces is the following: in finite dimensions, once we find a full set of orthogonal vectors, we are sure that the set spans the space.
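A minimal sketch of the Gram-Schmidt procedure in numpy (the classical, non-pivoted variant; the input vectors are assumed linearly independent):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent set (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for b in basis:                 # subtract the projection onto each
            w -= np.dot(b, w) * b       # previously found basis vector
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

W = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                  np.array([1.0, 0.0, 1.0]),
                  np.array([0.0, 1.0, 1.0])])
print(bool(np.allclose(W @ W.T, np.eye(3))))   # orthonormal set
```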

Then the following properties (all of which are easily verified) hold. Analysis Formula. Best Approximations. Assume P is a subspace of V; if we try to approximate a vector y ∈ V by a linear combination of basis vectors from the subspace P, then we are led to the concepts of least squares approximation and orthogonal projection. First of all, we define the best linear approximation ŷ ∈ P of a general vector y ∈ V to be the approximation which minimizes the norm ‖y − ŷ‖. With W as our usual orthonormal basis for P, the projection is given by

    ŷ = sum_k ⟨w^(k), y⟩ w^(k).

This approximation with an orthonormal basis has a key property: it can be successively refined.

Assume you have the approximation with the first m terms of the orthonormal basis and you want the approximation with m + 1 terms: this is simply obtained by adding one more term to the sum, without recomputing the previous coefficients. While this seems obvious, it is actually a small miracle, since it does not hold for more general, non-orthonormal bases. For the simplest case of Hilbert spaces, namely C^N, orthonormal bases are also the most intuitive since they contain exactly N mutually orthogonal vectors of unit norm. M is called a change of basis matrix. The previous equation shows that y is a linear combination of the columns of M^H, which, in turn, are of course the vectors x^(k).
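The successive-refinement property can be illustrated numerically. The orthonormal basis of R⁴ below (a normalized Hadamard-type set) and the target vector are arbitrary choices; each added term can only shrink the approximation error, and the full expansion reproduces y exactly.

```python
import numpy as np

# an orthonormal basis of R^4 (rows), chosen for illustration
W = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]]) / 2.0
y = np.array([3.0, 1.0, 4.0, 1.0])

approx = np.zeros(4)
errors = []
for w in W:                              # refine one term at a time:
    approx = approx + np.dot(w, y) * w   # earlier coefficients are untouched
    errors.append(float(np.linalg.norm(y - approx)))

print(errors)   # non-increasing, ending at (numerically) zero
```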

This basis, however, is not an orthonormal basis. It can be transformed into an orthonormal basis by a standard Gram-Schmidt procedure; the basis vectors thus obtained are called Legendre polynomials. This is actually the classic Fourier basis for functions on an interval. Please note that here, as opposed to the previous examples, the number of basis vectors is actually infinite. The orthogonality of these basis vectors is easily verifiable; their completeness, however, is rather hard to prove and this, unfortunately, is very much the rule for all non-trivial, infinite-dimensional basis sets.

We are now in a position to formalize our intuitions so far, with respect to the equivalence between discrete-time signals and vector spaces, with a particularization for the three main classes of signals that we have introduced in the previous Chapter. Note that in the following, we will liberally interchange the notations x and x [ n ] to denote a sequence as a vector embedded in its appropriate Hilbert space.

The correspondence between the class of finite-length signals of length N and C^N should be so immediate at this point that it needs no further comment. As a reminder, the canonical basis for this space is the canonical basis for C^N; the k-th canonical basis vector is often expressed in signal form as δ[n − k]. As we have seen, N-periodic signals are equivalent to length-N signals. The space of N-periodic sequences is therefore isomorphic to C^N. In particular, the sum of two sequences considered as vectors is the standard pointwise sum for the elements. The canonical basis for the space of N-periodic sequences is the canonical basis for C^N, because of the isomorphism; in general, any basis for C^N is also a basis for the space of N-periodic sequences.

Sometimes, however, we will also consider an explicitly periodized version of the basis. Such a sequence is called a pulse train. Note that here we are abandoning mathematical rigor, since the norm of each of these basis vectors is infinite; yet the pulse train, if handled with care, can be a useful tool in formal derivations. As we have already pointed out, the space of square-summable sequences is the space of choice for all the theoretical derivations involving infinite sequences. The canonical basis is an infinite set, and actually an infinite set of linearly independent vectors.

Note that, for an arbitrary signal x[n], the analysis formula gives the corresponding expansion coefficients. Clearly, the space is closed under addition and scalar multiplication, and the canonical inner product is well behaved since all sequences have only a finite number of nonzero values. However, the space is not complete; to clarify this, consider the following family of signals. A comprehensive review of linear algebra, which contains all the concepts of Hilbert spaces but in finite dimensions, is the classic by G.

For an introduction to Hilbert spaces, there are many mathematics books; we suggest N. As an alternative, a more intuitive and engineering-motivated approach is found in the classic book by D. An operator is a transformation of a given signal and is indicated by the notation H{x[n]}. For instance, the delay operator is indicated as D{x[n]} = x[n − 1]. A linear operator is one for which the following holds:

    H{αx[n] + βy[n]} = αH{x[n]} + βH{y[n]}.

In C^N, we define the delay operator as the left circular shift of a vector.
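As a sketch, the circular delay operator in C^N can be realized with a rotation of the vector; here we assume the convention y[n] = x[(n − 1) mod N] (which shift direction is "left" depends on the chosen convention):

```python
import numpy as np

def delay(x, k=1):
    """Circular delay in C^N: y[n] = x[(n - k) mod N]."""
    return np.roll(x, k)

x = np.array([1, 2, 3, 4])
print(delay(x))                                        # [4 1 2 3]
print(bool(np.array_equal(delay(2 * x), 2 * delay(x))))  # homogeneity: linear operator
```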

Which operator do you think is associated to it? What does the operator do? Prove that any vector z ∈ S is uniquely represented in this basis. (Hint: prove it by contradiction.) Consider the following change of basis matrix in C^8, with respect to the standard orthonormal basis. The basis described by H is called the Haar basis and it is one of the most celebrated cornerstones of a branch of signal processing called wavelet analysis.
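The specific 8×8 matrix is not reproduced in this excerpt; as a hypothetical stand-in for experimentation, the following is one standard recursive (Kronecker) construction of a row-orthonormal Haar matrix, offered only as a plausible way to explore the basis numerically:

```python
import numpy as np

def haar_matrix(n):
    """One common construction of an n x n orthonormal Haar basis
    (n a power of two); rows are the basis vectors."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1, 1])               # averages at the coarser scale
    bot = np.kron(np.eye(n // 2), [1, -1]) # differences at the finest scale
    H = np.vstack([top, bot])
    return H / np.linalg.norm(H, axis=1, keepdims=True)  # unit-norm rows

H = haar_matrix(8)
print(bool(np.allclose(H @ H.T, np.eye(8))))   # rows form an orthonormal basis
```

Expanding test signals in this basis (H @ x) is a good way to see how it separates coarse averages from localized differences.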

To get a feeling for its properties, consider the following set of numerical experiments (you can use Matlab or any other numerical package). Fourier theory has a long history, dating back to J. Fourier theory is a branch of harmonic analysis and, in that sense, a topic in pure and applied mathematics. At the same time, because of its usefulness in practical applications, Fourier analysis is a key tool in several engineering branches, and in signal processing in particular. Why is Fourier analysis so important? To understand this, let us take a short philosophical detour.

Interesting signals are time-varying quantities: you can imagine, for instance, the voltage level at the output of a microphone or the measured level of the tide at a particular location; in all cases, the variation of a signal, over time, implies that a transfer of energy is happening somewhere, and ultimately this is what we want to study. Now, a time-varying value which only increases over time is not only a physical impossibility but a recipe for disaster for whatever system is supposed to deal with it; fuses will blow, wires will melt and so on.

Sinusoids have another remarkable property which justifies their ubiquitous presence. Indeed, any linear time-invariant transformation of a sinusoid is a sinusoid at the same frequency: we express this by saying that sinusoidal oscillations are eigenfunctions of linear time-invariant systems. This is a formidable tool for the analysis and design of signal processing structures, as we will see in much detail in the context of discrete-time filters.

The purpose of the present Chapter is to introduce and analyze some key results on Fourier series and Fourier transforms in the context of discrete-time signal processing. It turns out that, as we mentioned in the previous Chapter, the Fourier transform of a signal is a change of basis in an appropriate Hilbert space. While this notion constitutes an extremely useful unifying framework, we also point out the peculiarities of its specialization within the different classes of signals.

In particular, for finite-length signals we highlight the eminently algebraic nature of the transform, which leads to efficient computational procedures; for infinite sequences, we will analyze some of its interesting mathematical subtleties. The Fourier transform of a signal is an alternative representation of the data in the signal.

While a signal lives in the time domain, its Fourier representation lives in the frequency domain. We can move back and forth at will from one domain to the other using the direct and inverse Fourier operators, since these operators are invertible. In this Chapter we study three types of Fourier transform, which apply to the three main classes of signals that we have seen so far.

The basic ingredient of all the Fourier transforms which follow is the discrete-time complex exponential; this is a sequence of the form

    x[n] = A e^{j(ω₀n + φ)}.

The power of a complex exponential is equal to the average energy over a period, i.e. A², irrespective of frequency. In the introduction, we hinted at the fact that Fourier analysis allows us to decompose a physical phenomenon into oscillatory components.
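A quick numerical check of this statement (the amplitude, frequency and phase values are arbitrary choices):

```python
import numpy as np

A, omega, phi = 2.0, 2 * np.pi * 3 / 32, 0.5
n = np.arange(32)
x = A * np.exp(1j * (omega * n + phi))   # one period of the complex oscillation

power = np.mean(np.abs(x) ** 2)          # average energy per sample
print(power)   # 4.0, i.e. A**2, irrespective of omega and phi
```

Since |e^{jθ}| = 1 for every θ, each sample contributes exactly A² of energy, which is why the result does not depend on the frequency.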

However, it may seem odd that we have chosen to use complex oscillations for the analysis of real-world signals. It may seem even odder that these oscillations can have a negative frequency and that, as we will soon see in the context of the DTFT, the spectrum extends over to the negative axis. The starting point in answering these legitimate questions is to recall that the use of complex exponentials is essentially a matter of convenience.

One could develop a complete theory of frequency analysis for real signals using only the basic trigonometric functions. You may actually have noticed this if you are familiar with the Fourier Series of a real function; yet the notational overhead is undoubtedly heavy since it involves two separate sets of coefficients for the sine and cosine basis functions, plus a distinct term for the zero-order coefficient.

The use of complex exponentials elegantly unifies these separate series into a single complex-valued sequence. Yet, one may ask again, what does it mean for the spectrum of a musical sound to be complex? Simply put, the complex nature of the spectrum is a compact way of representing two concurrent pieces of information which uniquely define each spectral component: its frequency and its phase. These two values form a two-element vector in R 2 but, since R 2 is isomorphic to C , we use complex numbers for their mathematical convenience.

We can visualize its evolution over discrete time as a series of points on the unit circle in the complex plane. If we decompose a real signal into complex exponentials, we will show that, for any given frequency value, the phases of the positive and negative components are always opposite in sign; as the two oscillations move in opposite directions along the unit circle, their imaginary parts will always cancel out exactly, thus returning a purely real signal.
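This cancellation is easy to verify numerically (the frequency and phase below are arbitrary choices): adding a positive- and a negative-frequency exponential with opposite phases leaves a real cosine.

```python
import numpy as np

n = np.arange(16)
omega, phi = 2 * np.pi * 2 / 16, 0.7

# positive- and negative-frequency components with opposite phases
pos = 0.5 * np.exp(1j * (omega * n + phi))
neg = 0.5 * np.exp(-1j * (omega * n + phi))

x = pos + neg
print(bool(np.allclose(x.imag, 0)))                        # imaginary parts cancel
print(bool(np.allclose(x.real, np.cos(omega * n + phi))))  # a real cosine remains
```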

The final step in developing a comfortable feeling for complex oscillations comes from the realization that, in the synthesis of discrete-time signals (and especially in the case of communication systems), it is actually more convenient to work with complex-valued signals themselves. Although the transmitted signal of a device like an ADSL box is a real signal, the internal representation of the underlying sequences is complex; therefore complex oscillations become a necessity. We start by considering a family of finite-length sinusoidal signals, indexed by an integer k, of the form

    w_k[n] = e^{jω_k n},  n = 0, 1, ..., N − 1.

To determine these frequency values, note that, in order for w_k[n] to contain a whole number of periods over N samples, it must conform to

    ω_k = (2π/N)k,  k = 0, 1, ..., N − 1.

In more compact notation we can therefore state that

    w_k[n] = e^{j(2π/N)nk}.

In signal processing practice, however, it is customary to keep the normalization factor explicit in the change of basis formulas; this is mostly due to computational reasons, as we will see later, but, for the sake of consistency with the mainstream literature, we will also follow this convention.

In detail, we have the analysis formula

    X[k] = sum_{n=0}^{N−1} x[n] e^{−j(2π/N)nk}.

The coefficients X[k] are referred to as the spectrum of the signal. While this does not influence the energy distribution in frequency, it does have a significant effect on the shape of the signal in the discrete-time domain, as we will shortly see in more detail. Some examples for signals in C^64 are plotted in Figure 4. The magnitude DFT does not change, but the phase offset appears in the phase of the transform. As a consequence, all of the basis vectors are needed to reconstruct the signal.
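The analysis formula can be checked numerically against a library FFT; the direct matrix implementation below is an illustrative sketch, not an efficient algorithm:

```python
import numpy as np

def dft(x):
    """Direct implementation of the analysis formula
    X[k] = sum_n x[n] exp(-j 2 pi n k / N)."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # N x N change of basis matrix
    return W @ x

x = np.random.default_rng(1).standard_normal(8)
print(bool(np.allclose(dft(x), np.fft.fft(x))))   # matches the library FFT
```

Writing the transform as a matrix-vector product makes the change-of-basis interpretation explicit: the rows of W are (conjugated, unnormalized) basis vectors.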

It can be shown with a few trigonometric manipulations that the DFT of a step signal is as follows. This is the reason why, earlier, we stated that periodic sequences are the natural way to embed a finite-length signal into a sequence: their Fourier representations are formally identical. This is not surprising since (a) we have already established a correspondence between C^N and the space of N-periodic sequences and (b) we are actually expressing a length-N sequence as a combination of N-periodic basis signals.

We now consider a Fourier representation for infinite non-periodic sequences. From a purely mathematical point of view, the Discrete-Time Fourier Transform (DTFT) of a sequence x[n] is defined as

    X(e^{jω}) = sum_{n=−∞}^{∞} x[n] e^{−jωn}.

While the DFT and DFS were signal transformations which involved only a finite number of quantities, both the infinite summation and the real-valued argument appearing in the DTFT can create an uneasiness which overshadows the conceptual similarities between the transforms.
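Since the DTFT has a real-valued frequency argument, it cannot be stored as a finite vector, but for a finite-support sequence it can be evaluated numerically at any chosen set of frequencies. A minimal sketch (the test pulse and frequency grid are arbitrary choices):

```python
import numpy as np

def dtft(x, n0, omegas):
    """Evaluate X(e^{j omega}) = sum_n x[n] e^{-j omega n} for a
    finite-support sequence whose support starts at index n0."""
    n = n0 + np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * w * n)) for w in omegas])

x = np.ones(5)                       # a length-5 rectangular pulse from n = 0
w = np.linspace(-np.pi, np.pi, 7)
X = dtft(x, 0, w)
print(bool(np.allclose(X[3], 5.0)))  # at omega = 0 the DTFT sums the signal
```

Evaluating on a dense grid of ω in [−π, π) gives the familiar magnitude/phase plots of the transform.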

Convergence in the mean square sense implies that, while the total energy of the error signal becomes zero, the pointwise values of the partial sums may never approach the values of the limit. One manifestation of this odd behavior is called the Gibbs phenomenon, which has important consequences in our approach to filter design, as we will see later. Its DTFT can be computed directly from the defining sum. When, as is very often the case, the DTFT is complex-valued, the usual way to represent it graphically is to take the magnitude and the phase separately into account.

A way to gain some intuition about the structure of the DTFT formulas is to consider the DFS of periodic sequences with larger and larger periods. We want to show that, in the limit, we end up with the reconstruction formula of the DTFT. Now, given an absolutely summable sequence x[n], we can always build an N-periodic sequence x̃[n] as

    x̃[n] = sum_{i=−∞}^{∞} x[n + iN].

Now consider the DFS of x̃[n]. To check that this assertion is consistent, we can now write the DFS reconstruction formula using the DFS values given to us above.

We now show that, if we are willing to sacrifice mathematical rigor, the DTFT can be cast in the same conceptual framework we used for the DFT and DFS, namely as a basis change in a vector space. The following formulas are to be taken as nothing more than a set of purely symbolic derivations, since the mathematical hypotheses under which the results are well defined are far from obvious and are completely hidden by the formalism. It is only fair to say, however, that the following expressions represent a very handy and intuitive toolbox to grasp the essence of the duality between the discrete-time and the frequency domains and that they can be put to use very effectively to derive quick results when manipulating sequences.

Consider, for instance, the family of parametric functions r_k(t). For any continuous function f(t) we can consider the integral of the product r_k(t)f(t). Now, as k goes to infinity, the integral converges to f(0); hence we can say that the sequence of functions r_k(t) converges to the Dirac delta. The physical interpretation of the Dirac delta is related to quantities expressed as continuous distributions, for which the most familiar example is probably that of a probability density function (pdf).


These functions represent a value which makes physical sense only over an interval of nonzero measure; the punctual value of a distribution is only an abstraction. The Dirac delta is the operator that extracts this punctual value from a distribution, in a sense capturing the essence of considering smaller and smaller observation intervals. The resulting object is called a pulse train, similarly to what we built for the case of periodic sequences. Now that we have the delta notation in place, we are ready to start.

We can write
$$\mathrm{DTFT}\left\{e^{j\omega_0 n}\right\} = 2\pi \sum_{m=-\infty}^{\infty} \delta(\omega - \omega_0 - 2\pi m),$$
that is, a pulse train centered at $\omega_0$; indeed, plugging this expression into the DTFT reconstruction formula gives back $e^{j\omega_0 n}$. We can now show that, thanks to the delta formalism, the DTFT is the most general type of Fourier transform for discrete-time signals. Consider a length-$N$ signal $x[n]$ and its $N$ DFT coefficients $X[k]$; consider also the sequences obtained from $x[n]$ either by periodization or by building a finite-support sequence. The computation of the DTFTs of these sequences highlights the relationships linking the three types of discrete-time transforms that we have seen so far. Periodic Sequences. If we try to write the analysis DTFT formula for $\tilde{x}[n]$, the sum does not converge in the ordinary sense; using the DFS expansion of $\tilde{x}[n]$ and the transform of the complex exponential above, however, we obtain
$$\tilde{X}(e^{j\omega}) = \frac{2\pi}{N}\sum_{k=0}^{N-1} X[k] \sum_{m=-\infty}^{\infty} \delta\!\left(\omega - \frac{2\pi}{N}k - 2\pi m\right).$$
Finite-Support Sequences. For the finite-support sequence $\bar{x}[n]$ the DTFT sum is finite and, expressing $x[n]$ in terms of its DFT coefficients, we obtain
$$\bar{X}(e^{j\omega}) = \sum_{n=0}^{N-1} x[n]\,e^{-j\omega n} = \frac{1}{N}\sum_{k=0}^{N-1} X[k]\, \frac{\sin\!\left(\frac{N}{2}\left(\omega - \frac{2\pi}{N}k\right)\right)}{\sin\!\left(\frac{1}{2}\left(\omega - \frac{2\pi}{N}k\right)\right)}\, e^{-j\frac{N-1}{2}\left(\omega - \frac{2\pi}{N}k\right)}.$$

What the above expression means is that the DTFT of the finite-support sequence $\bar{x}[n]$ is again uniquely determined by the $N$ DFT coefficients of the finite-length signal $x[n]$, and that it can be obtained from them by a Lagrange-like interpolation.
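The interpolation can be verified numerically: the DTFT of a finite-support sequence at an arbitrary off-grid frequency equals the interpolation of its DFT coefficients through the periodic-sinc (Dirichlet) kernel. A sketch of our own; the random test vector and the chosen frequency are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
x = rng.standard_normal(N)
X = np.fft.fft(x)

w = 1.234                                   # an off-grid frequency
n = np.arange(N)
direct = np.sum(x * np.exp(-1j * w * n))    # DTFT by definition

# Interpolation of the DFT through the periodic-sinc (Dirichlet) kernel:
# D(theta) = exp(-j*theta*(N-1)/2) * sin(N*theta/2) / sin(theta/2)
theta = w - 2 * np.pi * np.arange(N) / N
D = np.exp(-1j * theta * (N - 1) / 2) * np.sin(N * theta / 2) / np.sin(theta / 2)
interp = np.sum(X * D) / N

print(abs(direct - interp))                 # essentially zero
```

The two values agree to machine precision, confirming that the $N$ DFT coefficients determine the whole transform.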

As in the previous case, the values of the DTFT at the roots of unity are equal to the DFT coefficients; note, however, that the transform of a finite-support sequence is very different from the DTFT of a periodized sequence. The latter, in accordance with the definition of the Dirac delta, is defined only in the limit and only at a finite set of frequencies; the former is just a smooth interpolation of the DFT values. Symmetries and Structure. The DTFT of a time-reversed sequence is
$$\mathrm{DTFT}\{x[-n]\} = X(e^{-j\omega}).$$
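The claim that the DTFT of a finite-support sequence coincides with the DFT coefficients at the roots of unity is a one-liner to check; a sketch of our own, with an arbitrary random test vector and a small helper for the DTFT of a finite-support sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
x = rng.standard_normal(N)

# DTFT of a finite-support sequence, evaluated by direct summation
def dtft(x, w):
    n = np.arange(len(x))
    return np.sum(x * np.exp(-1j * np.outer(w, n)), axis=1)

w_k = 2 * np.pi * np.arange(N) / N        # the N roots of unity
print(np.max(np.abs(dtft(x, w_k) - np.fft.fft(x))))   # essentially zero
```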


Finally, if $x[n]$ is real and symmetric ($x[n] = x[-n]$), then the DTFT is real. Linearity and Shifts. The DTFT is a linear operator:
$$\mathrm{DTFT}\{\alpha x[n] + \beta y[n]\} = \alpha X(e^{j\omega}) + \beta Y(e^{j\omega}).$$
A shift in the discrete-time domain leads to multiplication by a phase term in the frequency domain:
$$\mathrm{DTFT}\{x[n - n_0]\} = e^{-j\omega n_0}\,X(e^{j\omega});$$
dually, multiplication by a complex exponential in the time domain leads to a shift in the frequency domain:
$$\mathrm{DTFT}\{e^{j\omega_0 n}\,x[n]\} = X(e^{j(\omega - \omega_0)}).$$
This last result is known as the modulation theorem. Energy Conservation. The DTFT satisfies Parseval's identity:
$$\sum_{n=-\infty}^{\infty} |x[n]|^2 = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left|X(e^{j\omega})\right|^2 d\omega.$$
Analogous properties hold for the DFS. The DFS of a time-reversed sequence is
$$\mathrm{DFS}\{\tilde{x}[-n]\} = \tilde{X}[-k].$$
For real periodic sequences, the special symmetry $\tilde{X}[k] = \tilde{X}^{*}[-k]$ holds. The DFS is a linear operator, since it can be expressed as a matrix-vector product.
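Both the shift and the modulation theorems are easy to confirm numerically for a finite-support sequence; a sketch of our own, with arbitrary test values (the delay $n_0$, the frequencies and the random vector are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n0, w0 = 8, 3, 0.7
x = rng.standard_normal(N)
n = np.arange(N)

def dtft(x, w):                      # DTFT of a length-N finite-support sequence
    return np.sum(x * np.exp(-1j * w * n))

w = 0.456

# Time shift -> phase term: delay x by n0 samples (zero-pad in front)
x_shift = np.concatenate([np.zeros(n0), x])
lhs = np.sum(x_shift * np.exp(-1j * w * np.arange(N + n0)))
print(abs(lhs - np.exp(-1j * w * n0) * dtft(x, w)))    # essentially zero

# Modulation: multiplication by e^{j w0 n} shifts the spectrum by w0
lhs2 = dtft(x * np.exp(1j * w0 * n), w)
print(abs(lhs2 - dtft(x, w - w0)))                     # essentially zero
```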

We have already seen the energy conservation property in the context of basis expansions. The properties of the DFT are obviously the same as those of the DFS, given the formal equivalence of the two transforms. The only detail is how to interpret shifts, index reversal and symmetries for finite, length-$N$ vectors; this is easily solved by considering that the DFT-DFS equivalence translates, in the time domain, to a homomorphism between a length-$N$ signal and its associated $N$-periodic extension to an infinite sequence.
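For the DFT, energy conservation takes the form $\sum_n |x[n]|^2 = \frac{1}{N}\sum_k |X[k]|^2$ (the $1/N$ factor reflects NumPy's unnormalized FFT convention); a quick check of our own, with an arbitrary random vector:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 16
x = rng.standard_normal(N)
X = np.fft.fft(x)

# Energy conservation (Parseval): sum |x[n]|^2 == (1/N) sum |X[k]|^2
print(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2) / N)   # equal
```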
