Wireless Personal Communications 23: 93–104, 2002. © 2002 Kluwer Academic Publishers. Printed in the Netherlands.

Diversity Enhancement of Coded Spread Spectrum Video Watermarking

T. BRANDÃO, M.P. QUELUZ* and A. RODRIGUES

IT/IST, Technical University of Lisbon, Av. Rovisco Pais, 1049-001 Lisboa, Portugal
*E-mail: [email protected]

Abstract. This paper analyses the effect of signal combination techniques on video watermark detection. A spatial spread spectrum based watermarking technique is used as the embedding method, in combination with common error correction codes (BCH, Reed–Solomon with multilevel signaling, binary convolutional codes with Viterbi decoding). Besides an analytical evaluation of the bit error rate, the effectiveness of the channel coding and diversity techniques is also assessed experimentally under MPEG-2 video compression. Keywords: watermarking, spread-spectrum, signal combination techniques.

1. Introduction

With the evolution of 2G and the upcoming introduction of 3G networks, service providers will be able to expand the number of services offered and to support m-commerce over mobile networks. This will allow the exchange of digital content such as music, streaming video and e-books, but it has also become a major concern for content providers, due to the possibility of high quality copying in the digital domain. To address this problem it is necessary to develop digital rights management systems that are able to introduce copy control, media identification and tracing mechanisms. These goals can be achieved by using watermarking technology. This technology is particularly well suited to mobile systems, since mobile terminals provide reliable information about the identity of the user that can be used for watermarking and data identification purposes [1].

Most proposed watermarking methods use a spread spectrum approach [2]: a narrowband signal (the watermark information) has to be transmitted via a wideband channel that is subject to noise and distortion (the multimedia host data, e.g., video). Under this approach, digital watermarking can be treated as a communication problem. Some authors [3–5] have already shown that error protection techniques can be used advantageously in watermarking.

This paper considers a spatial spread spectrum based watermarking technique as the embedding method, in combination with common error correction codes (BCH, Reed–Solomon with multilevel signaling, binary convolutional codes with Viterbi decoding) applied to a set of video sequences. In order to improve the results, the use of diversity techniques in conjunction with channel coding is proposed. This concept, well known from digital communication theory, is implemented by simultaneously considering a group of consecutive frames in the extraction procedure. Analytical expressions and bounds for the bit error rate are compared with empirical results. The effectiveness of the channel coding and diversity techniques is also assessed under MPEG-2 video compression.


Figure 1. Watermark embedding/extraction schemes: (a) Embedding; (b) Extraction.

2. Watermark Embedding

The watermark embedding system is depicted in Figure 1(a). The mark consists of an Nb-bit sequence B = {b1, b2, ..., bNb} and is embedded in the luminance component of the image. Before embedding, the binary sequence B is mapped to a symbol sequence Bs, with length Ns. If M levels are used to perform multilevel signaling, then Ns = Nb / log2 M, resulting in M different symbols A1, ..., AM, each one conveying l = log2 M bits. The channel encoder performs error correction encoding over the symbol sequence Bs. Encoding is performed using either binary block codes (BCH codes) or binary convolutional codes, if M = 2, and non-binary block codes (RS codes), if M > 2. If the selected code has code rate cr, the encoded symbol sequence, Bcs, will have Nc symbols, with Nc = Ns/cr.

For M > 2 and following the approach proposed in [6], the symbol sequence Bcs is modulated using M bi-orthogonal sequences with zero mean and unit variance, which assigns a modulating sequence si, i ∈ {1, ..., Nc}, to symbol i. The use of M bi-orthogonal sequences requires M/2 orthogonal sequences, which are used to modulate symbols A1 to AM/2. The remaining symbols, AM/2+1 to AM, are modulated using the antipodal versions of the defined M/2 orthogonal sequences. For M = 2, two antipodal sequences are used to modulate the two different symbols. The modulating sequences si are then sent to a scrambler that maps each sequence to a sub-set of pixel positions. The mappings are non-overlapping and pseudo-randomly generated, being secret-key dependent; the inverse operation is only possible if the key is known. The symbol si(m, n) designates the element of the sequence si that was mapped to image position (m, n).
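As a minimal illustration of such a key-dependent, non-overlapping mapping, the following NumPy sketch assigns disjoint pseudo-random pixel sets to the modulating sequences. The function name and the use of a seeded permutation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def scramble_positions(height, width, n_symbols, seq_len, key):
    """Assign to each of the n_symbols modulating sequences a disjoint,
    pseudo-random (key-dependent) set of seq_len pixel positions."""
    rng = np.random.default_rng(key)                 # the secret key seeds the PRNG
    perm = rng.permutation(height * width)           # pseudo-random ordering of all pixels
    assert n_symbols * seq_len <= height * width     # mappings must not overlap
    positions = []
    for i in range(n_symbols):
        flat = perm[i * seq_len:(i + 1) * seq_len]   # non-overlapping sub-set for s_i
        positions.append(np.unravel_index(flat, (height, width)))
    return positions                                 # positions[i] -> (rows, cols) for s_i
```

Because the same key reproduces the same permutation, the unscrambler of Section 3 can regenerate exactly the same position sets without access to the original image.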

Diversity Enhancement of Coded Spread Spectrum Video Watermarking

95

After the spatial assignment, the values si(m, n) are further weighted by a local factor α(m, n), the purpose of which is to adapt the embedding to the human visual system. The watermark w is then defined as the superposition of all modulated, scrambled and weighted sequences si:

w(m, n) = \sum_{i=1}^{N_c} \alpha(m, n)\, s_i(m, n).   (1)
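A compact sketch of the superposition in Equation (1) and of the additive embedding step, assuming the scrambled positions, the modulating sequences and the perceptual weights α(m, n) are already available (illustrative code, not the authors' implementation):

```python
import numpy as np

def build_watermark(shape, positions, sequences, alpha):
    """Superpose the modulated, scrambled and perceptually weighted
    sequences s_i into a single watermark image w (Equation (1))."""
    w = np.zeros(shape)
    for (rows, cols), s_i in zip(positions, sequences):
        w[rows, cols] += alpha[rows, cols] * s_i     # alpha(m, n) * s_i(m, n)
    return w

def embed(luminance_x, w):
    """Additive embedding of the watermark into the luminance: Y = X + w."""
    return luminance_x + w
```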

To complete the process of watermark embedding, the watermark w is added to the image luminance component X, resulting in a watermarked luminance component Y. The procedure is extended to video sequences following a frame-based approach: the embedding separately processes each frame of the video stream and the same mark is embedded in each frame.

3. Watermark Extraction

Retrieving the watermark without any knowledge of the original image can be achieved using the system depicted in Figure 1(b). To reduce the major components of the image signal itself, a receiver filter (filter F) is used. As shown in [7], it may significantly improve the performance of the watermark extraction system. After this pre-processing step is completed, the filtered image passes through the unscrambler, which performs the inverse operation of the scrambler defined in the previous section. Using the same key as the embedding, the unscrambler generates the image positions corresponding to each embedded symbol. The demodulator consists of M/2 linear correlators, where the received signal is correlated with each orthogonal sequence. The correlation exhibiting the largest absolute value leads to the choice of two possible symbols: symbol Ax or its antipodal pair. The sign of the correlation completes this selection: if it is positive, symbol Ax is selected; otherwise the antipodal symbol is selected. To complete the watermark extraction algorithm, the received symbol sequence is decoded and mapped to a binary sequence. The resulting binary sequence, B̂, is the received watermark. For video sequences, the extraction algorithm operates over a group of J (diversity window) consecutive frames (see Section 5).

4. Performance in the Absence of Compression

4.1. Uncoded Case

Assuming that filter F guarantees a valid gaussian channel approach (F should be a whitening filter) and defining µ and σ as the expected value and the standard deviation of the correlators' output, respectively, an approximation for these values can be written as [6]:

\mu \approx \frac{DVH}{N_c}\, E[\alpha(m, n)],   (2)

\sigma \approx \sqrt{\frac{DVH}{N_c}\left(E[\hat{Y}_F^2(m, n)] + E[\alpha^2(m, n)]\right)},   (3)

where D is the density of watermark embedding (the ratio between marked image locations and the total number of image locations), Ŷ_F is the marked and filtered luminance component


of the image at the unscrambler output, Nc is the number of embedded symbols, H and V are, respectively, the horizontal and vertical image dimensions, and E[·] denotes expected value. For uncoded M-ary bi-orthogonal signaling, the symbol error probability PM is given by [8]:

P_M = 1 - \frac{1}{\sqrt{2\pi}} \int_{-\mu/\sigma}^{+\infty} e^{-\frac{v^2}{2}} \left[\operatorname{erf}\!\left(\frac{1}{\sqrt{2}}\left(v + \frac{\mu}{\sigma}\right)\right)\right]^{\frac{M}{2}-1} dv,   (4)

and the bit error probability Pb is bounded by:

\frac{P_M}{2} < P_b \le P_M.   (5)

In the binary case, Pb matches the upper bound (Pb = Q(µ/σ) = PM, with M = 2); with an increasing number of levels, this probability approaches the lower bound (large M leads to Pb ≈ PM/2).

4.2. Binary and Non-Binary Block Codes

Let us consider the case in which binary antipodal signaling (M = 2) is used in conjunction with a linear (n, k) binary block code with minimum distance dmin = 2t + 1, where t is the maximum number of errors corrected by the code, and a bit-by-bit hard decision. Assuming that the bit errors occur independently, the probability of a decoded message bit error is upper-bounded by [9]:

P_{db} \le \frac{1}{n} \sum_{i=t+1}^{n} \min(i+t,\, n) \binom{n}{i} P_b^{\,i} (1 - P_b)^{n-i},   (6)

where Pb stands for the channel bit error rate. This probability is given by (4) using M = 2 and µ/σ computed for Nc = Nb n/k embedded bits. The term min(i + t, n) in (6) guarantees that impossible occurrences of more than n bit errors per codeword are not considered. For non-binary (N, K) linear block codes, with symbol-by-symbol hard decision, and considering PM as the probability of symbol error in the channel (as defined in Equation (4), but now computed for Nc = Nb N/(K log2 M) inserted symbols), we get:

P_{db} \le \frac{2^{l-1}}{2^l - 1} \cdot \frac{1}{N} \sum_{i=t+1}^{N} \min(i+t,\, N) \binom{N}{i} P_M^{\,i} (1 - P_M)^{N-i}.   (7)
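The bounds (6) and (7) are straightforward to evaluate numerically once Pb or PM is known. A possible sketch (the parameters in the example call are illustrative only):

```python
from math import comb

def block_code_bit_error_bound(p, n, t, l=1):
    """Upper bound on the decoded message bit-error probability for a
    t-error-correcting (n, k) block code with hard decisions.
    l = 1 gives the binary bound (6); l = log2(M) > 1 gives the
    non-binary bound (7), with p interpreted as the symbol error rate P_M."""
    s = sum(min(i + t, n) * comb(n, i) * p**i * (1 - p)**(n - i)
            for i in range(t + 1, n + 1))
    scale = (2**(l - 1)) / (2**l - 1)   # equals 1 for l = 1 (binary case)
    return scale * s / n

# Example: a 127-symbol, t = 10 code at a channel BER of 1e-2 (illustrative values)
print(block_code_bit_error_bound(1e-2, 127, 10))
```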

4.3. Convolutional Codes

In this paper, a soft decision Viterbi algorithm was used for the encoder/decoder implementation. A convolutional code is usually characterized by its code rate cr and constraint length L. The minimum free distance df is also an important parameter in the definition of the code performance; it can be defined as the resulting minimum distance when the constraint length approaches infinity. Its calculation involves the generating function of the convolutional code, T(D, N). If transmission errors occur with equal probability and independently, the probability of a decoded message bit error is upper-bounded by [9]:

P_{db} < Q\!\left(\frac{\mu}{\sigma}\sqrt{d_f}\right) e^{\frac{d_f \mu^2}{2\sigma^2}} \left.\frac{\partial T(D, N)}{\partial N}\right|_{N=1,\; D = e^{-\frac{\mu^2}{2\sigma^2}}}.   (8)
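Bound (8) can be evaluated numerically if the coefficients of the series expansion of ∂T(D, N)/∂N are available for the chosen code. A hedged sketch follows; the dictionary of information-weight coefficients is a user-supplied input that must be taken from a published distance spectrum, and is not given here:

```python
from math import erfc, exp, sqrt

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def conv_soft_bit_error_bound(mu, sigma, d_free, info_weights):
    """Evaluate bound (8).  The derivative dT(D, N)/dN at N = 1 is taken as the
    series sum_d beta_d * D**d with D = exp(-mu**2 / (2 * sigma**2)).
    info_weights maps distance d -> beta_d (code dependent; supplied by the caller)."""
    D = exp(-mu**2 / (2.0 * sigma**2))
    dT_dN = sum(beta * D**d for d, beta in info_weights.items())
    return (qfunc(mu * sqrt(d_free) / sigma)
            * exp(d_free * mu**2 / (2.0 * sigma**2)) * dT_dN)
```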


5. Diversity Techniques

Video watermarking can be seen as a multi-channel system, since the same mark is embedded in each frame and each frame can be considered as an independent channel. In this sense, results from diversity theory can be applied, and watermark extraction may be improved by simultaneously considering a group of J (diversity window) consecutive frames. Three strategies of signal combination have been studied and implemented. They are discussed in the following, for the case M = 2.

5.1. Majority Logic (ML)

In this technique, the coded symbols (or the message symbols, if channel coding is not used) are independently extracted for each frame and the final symbol sequence is obtained by simple majority counting over the retrieved symbols. The resulting Nc-symbol sequence is then applied to the channel decoder. Let us consider that the bit error probability of the coded watermark, Pb, is the same for every frame. Then, assuming independence between errors in consecutive frames, the bit error probability Pbf obtained after applying ML over a group of J frames is given by:

P_{bf} = \sum_{i=\frac{J+1}{2}}^{J} \binom{J}{i} P_b^{\,i} (1 - P_b)^{J-i}.   (9)

If Pb ≪ 1, Equation (9) can be simplified by keeping only the first term of the sum:

P_{bf} \approx \binom{J}{\frac{J+1}{2}} P_b^{\frac{J+1}{2}}.   (10)
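Both the exact expression (9) and the approximation (10) can be checked with a few lines of code (the values in the example call are illustrative):

```python
from math import comb

def ml_bit_error_prob(p_b, J):
    """Exact bit error probability after majority-logic combining over
    J (odd) frames, assuming independent errors -- Equation (9)."""
    return sum(comb(J, i) * p_b**i * (1 - p_b)**(J - i)
               for i in range((J + 1) // 2, J + 1))

def ml_bit_error_approx(p_b, J):
    """First-term approximation (10), valid for p_b << 1."""
    k = (J + 1) // 2
    return comb(J, k) * p_b**k

# e.g. a channel bit error rate of 0.1 and a 5-frame diversity window
print(ml_bit_error_prob(0.1, 5), ml_bit_error_approx(0.1, 5))
```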

From Equation (10), we can conclude that in most cases Pbf ≪ Pb, which shows the interest of using time diversity based on majority logic.

5.2. Equal Gain Combining (EGC)

In this technique, for each coded symbol, the correlation outputs obtained in the individual frames are summed over the group of frames. The resulting correlation value is used for the transmitted symbol decision. Considering that ri is the correlator output for one symbol in frame i, the resulting correlation value, r′, for the same symbol, after combining J frames, is given by:

r' = \sum_{i=1}^{J} r_i.   (11)

Assuming once more a gaussian model for the channel, the signal at the output of the signal combiner will have a normal distribution with mean µ′ and variance σ′². Thus, the probability density function of r′, p(r′), will be given by:

p(r') = \frac{1}{\sqrt{2\pi}\,\sigma'}\, e^{-\frac{(r' - \mu')^2}{2\sigma'^2}},   (12)


where µ′ and σ′² are obtained as a function of the correlators' statistics for each frame i according to:

\mu' = \sum_{i=1}^{J} \mu_i, \qquad \sigma'^2 = \sum_{i=1}^{J} \sigma_i^2.   (13)

Hence, the bit error probability at the signal combining output, Pbf, is given by:

P_{bf} = Q\left(\frac{\mu'}{\sigma'}\right) = Q\left(\frac{\sum_{i=1}^{J} \mu_i}{\sqrt{\sum_{i=1}^{J} \sigma_i^2}}\right).   (14)

Assuming that the statistics (mean and variance) are the same for all frames (which is not exactly true for MPEG-2 compressed frames), i.e.:

\mu_1 = \mu_2 = \cdots = \mu_J = \mu,   (15)

\sigma_1^2 = \sigma_2^2 = \cdots = \sigma_J^2 = \sigma^2,   (16)

the bit error probability, Pbf, after combining the received signal over J frames, is given by:

P_{bf} = Q\left(\sqrt{J}\,\frac{\mu}{\sigma}\right).   (17)

As, from (4), Pb = Q(µ/σ), we will have in general Pbf ≪ Pb.
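A minimal sketch of the EGC combiner and of the resulting error probability (17), assuming identical per-frame statistics (illustrative code; the array layout is an assumption):

```python
import numpy as np
from math import erfc, sqrt

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def egc_combine(correlations):
    """Equal gain combining: sum the correlator outputs of each symbol
    over the J frames (Equation (11)); correlations has shape (J, Nc)."""
    return correlations.sum(axis=0)

def egc_bit_error_prob(mu, sigma, J):
    """Bit error probability after EGC over J frames with identical
    per-frame statistics (Equation (17))."""
    return qfunc(sqrt(J) * mu / sigma)
```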

5.3. Maximal Gain Combining (MGC)

In this strategy, the J signals are weighted proportionally to their estimated µ/σ² ratio and then summed. Under the assumption of independent channels with white additive gaussian noise, this maximizes the signal-to-noise ratio at the combining output [8]. The correlation value of the combined signal, for one given symbol, is then given by:

r' = \sum_{i=1}^{J} \frac{\hat{\mu}_i}{\hat{\sigma}_i^2}\, r_i,   (18)

where µ̂i and σ̂i are, respectively, the estimated mean and standard deviation of the correlators' output at frame i. The maximum likelihood estimates of these parameters are given by [10]:

\hat{\mu}_i = \frac{1}{N_c} \sum_{j=1}^{N_c} r_i^j\, b^j, \qquad \hat{\sigma}_i = \sqrt{\frac{1}{N_c} \sum_{j=1}^{N_c} \left(r_i^j\, b^j - \hat{\mu}_i\right)^2}, \qquad i = 1, 2, \ldots, J,   (19)

where the values b^j ∈ {−1, 1}, j = 1, ..., Nc, are the binary antipodal symbol representation of the embedded mark (in the developments of this section, only the M = 2 case is considered) and r_i^j is the correlator output for symbol j in frame i. As the receiver does not know the embedded symbols a priori, the b^j values required in (19) are


those obtained from the previous, already processed frames. For the first group of combined frames, the EGC technique is applied.

Figure 2. Error rate and µ̂/σ̂² evolution during the first three GOPs of the Stefan video sequence, after MPEG-2 video compression at 4 Mbit/s.

Figure 2 presents the evolution of µ̂i/σ̂i² and of the bit error rate for the first three Groups of Pictures (GOP) of a TV sequence, after MPEG-2 video compression at 4 Mbit/s. As expected, the highest values of µ̂i/σ̂i² are obtained for the lowest error rates (i.e., for the frames least distorted by compression). Assuming again a gaussian channel, the bit error probability for MGC, Pbf, is given by [10]:

P_{bf} = Q\left(\sqrt{\sum_{i=1}^{J} \left(\frac{\mu_i}{\sigma_i}\right)^2}\right).   (20)
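A sketch of the MGC combiner for the M = 2 case, with the decision-directed estimates of Equation (19) computed from previously decided symbols (illustrative NumPy code; the array shapes are assumptions):

```python
import numpy as np

def estimate_stats(correlations, b_hat):
    """Maximum-likelihood estimates (19) of the correlator mean and standard
    deviation in each frame, using the previously decided symbols b_hat in {-1, +1}.
    correlations has shape (J, Nc); b_hat has shape (Nc,)."""
    folded = correlations * b_hat                 # r_i^j * b^j removes the symbol sign
    mu_hat = folded.mean(axis=1)                  # one estimate per frame
    sigma_hat = np.sqrt(((folded - mu_hat[:, None]) ** 2).mean(axis=1))
    return mu_hat, sigma_hat

def mgc_combine(correlations, mu_hat, sigma_hat):
    """Maximal gain combining (18): weight each frame by mu_i / sigma_i**2."""
    weights = mu_hat / sigma_hat**2
    return (weights[:, None] * correlations).sum(axis=0)

def decide_bits(combined):
    """Binary antipodal decision on the combined correlations (M = 2)."""
    return np.where(combined >= 0, 1, -1)
```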

To get some insight into the behavior of this type of signal combination, let us assume that the correlators' statistics (mean and variance) are constant, µi = µ and σi = σ, except for a fraction ε (0 ≤ ε ≤ 1) of the frames, for which µi = γµ (0 ≤ γ ≤ 1). With the parameter γ, we intend to model the reduction (fade) of the watermark mean level, which may occur due to compression. In this case, the gain G in signal-to-noise ratio between the MGC and EGC combination techniques is given by [10]:

G = \frac{\sqrt{1 + (\gamma^2 - 1)\,\varepsilon}}{1 + (\gamma - 1)\,\varepsilon}.   (21)

Figure 3 presents the evolution of G as a function of γ and ε. From this figure, we conclude that the best performance of MGC (compared to EGC) will be achieved when a high fraction of the frames is severely degraded (ε → 1, γ → 0), as in high compression ratio scenarios.
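A quick numerical check of the gain (21), for instance in a scenario where most frames are strongly faded:

```python
from math import sqrt

def mgc_over_egc_gain(gamma, eps):
    """Gain G of MGC over EGC (Equation (21)) when a fraction eps of the
    frames has its watermark mean level faded by a factor gamma."""
    return sqrt(1.0 + (gamma**2 - 1.0) * eps) / (1.0 + (gamma - 1.0) * eps)

# Severe degradation of most frames (eps -> 1, gamma -> 0) favours MGC:
print(mgc_over_egc_gain(0.1, 0.9))   # noticeably greater than 1
```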


Figure 3. Qualitative (a) and quantitative (b) evolution of G, as a function of γ and ε.

Figure 4. (a) Table-tennis; (b) Stefan; (c) Mobile and calendar; (a), (b) and (c) are CCIR-601 video sequences, with 300 frames each.

6. Results

Figure 4 presents the video sequences used in the simulations. The bi-orthogonal sequences necessary for M-ary modulation were generated using Hadamard–Walsh functions. The perceptual factor α(m, n) in Equation (1) was obtained by filtering the image with a Laplacian high-pass filter and taking absolute values. The coefficients of the Laplacian filter were scaled by a factor β, which accounts for the watermark insertion strength. As the pre-detection filter (filter F in Figure 1(b)), a 3 × 3 cross-shaped high-pass filter has been used.

In the absence of compression, theoretical and experimental curves for the bit error rate (BER) were obtained as a function of the pulse size, defined as DHV/Nb, which accounts for the number of pixels used to embed one information bit. For the theoretical plots, the statistical parameters µ and σ were estimated by applying Equations (2) and (3). The plots depicted in Figure 5 compare the performance achieved with: multilevel signaling without channel coding and M = 2, 256; multilevel signaling with RS(14,8) codes and M = 256; binary signaling with BCH(127,64) codes; and convolutional coding with cr = 1/2 and L = 7 (Conv(2,7) in the plots). The soft decision of the Viterbi decoder uses 256 quantization levels.

Theoretical BER curves versus pulse size are represented in Figure 5(a). Each point on these curves was obtained using 100 different insertion keys. In all tests, the embedded mark has a length of Nb = 256 bits and is randomly generated. The parameter that regulates the watermark insertion strength (β) was set to 0.4, a value that guarantees the invisibility of the


Figure 5. BER for binary and non-binary codes, without video compression and J = 1: (a) Theoretical; (b) Empirical.

mark. For the pulse size range displayed in the figure, the performance achieved by the convolutional codes is remarkable, but the curve corresponding to RS(14,8) codes with M = 256 crosses it at a pulse size value of approximately 140 (BER ≈ 3 × 10−6). Binary block codes exhibit poor performance compared to these cases. Experimental curves are shown in Figure 5(b). The number of tests for the uncoded binary case was 250, while for the remaining cases it was 1000, due to an expected lower BER. Overall, there is a good match between the theoretical and experimental curves. The best performance is achieved by RS encoding, which does not exhibit errors for pulse size values greater than 110. For the binary cases, convolutional encoding also performs well, while BCH encoding again exhibits poor performance. Some instability occurs for the lowest BERs (below 10−4) due to the limited number of tests.

In Tables 1–3, results are presented for the percentage of frames (averaged over the three tested sequences and after MPEG-2 video compression at 2, 4 and 6 Mbit/s, respectively) in which the watermark was successfully retrieved, as a function of the number of signaling levels, error correction code, diversity technique (ML, EGC or MGC) and diversity window size (J). Each sequence was watermarked twice, using different embedding keys and randomly generated 64-bit watermarks. The insertion strength (β) was set to 0.2, a value that guarantees that the watermark is far below visibility. As can be observed from Tables 1, 2 and 3, an increase in the number of signaling levels brings an improvement in performance. The use of error correction is generally profitable, particularly in the case of soft decision convolutional coding or RS codes combined with multilevel signaling. Considering a group of consecutive frames significantly increases the detection rate, a result that is most evident for high compression ratios; the system performance also increases with J. As predicted from Figure 3, the best performance of the MGC combining technique is achieved for strong compression (MPEG-2 @ 2 Mbit/s). In this case, 100% success in the extraction rate implies the use of time diversity methods in conjunction with multilevel signaling and/or error correction codes.
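The perceptual mask and the pre-detection filter used in these experiments can be sketched as follows; the paper does not list the kernel coefficients, so the Laplacian and cross-shaped kernels below are assumptions for illustration only:

```python
import numpy as np
from scipy.ndimage import convolve

# A common 3x3 Laplacian kernel; the actual coefficients used in the paper
# are not specified, so this choice is an assumption.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def perceptual_mask(luminance, beta):
    """alpha(m, n): absolute value of the Laplacian-filtered image,
    scaled by the insertion strength beta (Section 6)."""
    return beta * np.abs(convolve(luminance.astype(float), LAPLACIAN, mode='nearest'))

# Illustrative 3x3 cross-shaped high-pass kernel for the pre-detection filter F;
# the coefficients below are an assumption, not taken from the paper.
CROSS_HP = np.array([[ 0, -1,  0],
                     [-1,  4, -1],
                     [ 0, -1,  0]], dtype=float) / 4.0

def predetection_filter(image):
    """Filter F of Figure 1(b): high-pass filtering to suppress the host image."""
    return convolve(image.astype(float), CROSS_HP, mode='nearest')
```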

Table 1. Percentage rates of watermark extraction success under MPEG-2 @ 2 Mbit/s (columns for J > 1 list ML / EGC / MGC).

Levels/code        | J = 1 | J = 3                | J = 5                 | J = 15                | J = 25
M = 2/uncoded      | 1.11  | 0.00 / 3.33 / 7.33   | 2.22 / 16.67 / 24.44  | 50.00 / 76.67 / 80.00 | 83.33 / 91.67 / 91.67
M = 16/uncoded     | 5.33  | 0.33 / 11.33 / 21.67 | 3.89 / 41.11 / 49.44  | 58.33 / 95.00 / 96.67 | 86.11 / 100.0 / 100.0
M = 256/uncoded    | 11.44 | 0.67 / 35.67 / 47.00 | 20.56 / 66.11 / 66.67 | 71.67 / 95.00 / 95.00 | 86.11 / 100.0 / 100.0
M = 2/BCH(127,64)  | 3.56  | 1.67 / 9.00 / 22.67  | 5.56 / 40.56 / 51.11  | 78.33 / 91.67 / 95.00 | 97.22 / 100.0 / 100.0
M = 16/RS(14,8)    | 7.22  | 0.33 / 15.33 / 26.67 | 12.22 / 50.56 / 58.89 | 70.00 / 95.00 / 95.00 | 94.44 / 100.0 / 100.0
M = 256/RS(14,8)   | 9.22  | 0.33 / 25.00 / 39.67 | 15.00 / 58.89 / 61.67 | 66.67 / 95.00 / 95.00 | 80.56 / 100.0 / 100.0
M = 2/conv. hard   | 4.78  | 1.67 / 12.00 / 26.67 | 12.22 / 43.33 / 55.00 | 83.33 / 93.33 / 93.33 | 100.0 / 100.0 / 100.0
M = 2/conv. soft   | 12.56 | – / 39.67 / 55.00    | – / 67.22 / 76.11     | – / 96.67 / 98.33     | – / 100.0 / 100.0

Table 2. Percentage rates of watermark extraction success under MPEG-2 @ 4 Mbit/s (columns for J > 1 list ML / EGC / MGC).

Levels/code        | J = 1 | J = 3                 | J = 5                 | J = 15                | J = 25
M = 2/uncoded      | 21.22 | 28.67 / 76.67 / 80.67 | 67.22 / 91.67 / 92.78 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
M = 16/uncoded     | 27.56 | 32.67 / 93.67 / 96.0  | 85.0 / 99.44 / 99.44  | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
M = 256/uncoded    | 36.44 | 59.67 / 99.0 / 99.67  | 93.89 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
M = 2/BCH(127,64)  | 26.89 | 78.67 / 92.33 / 93.67 | 93.89 / 98.89 / 99.44 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
M = 16/RS(14,8)    | 28.78 | 61.0 / 95.0 / 97.0    | 89.44 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
M = 256/RS(14,8)   | 31.22 | 61.67 / 99.0 / 99.33  | 91.11 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
M = 2/conv. hard   | 27.67 | 85.0 / 94.67 / 95.67  | 96.11 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
M = 2/conv. soft   | 38.89 | – / 99.67 / 99.67     | – / 100.0 / 100.0     | – / 100.0 / 100.0     | – / 100.0 / 100.0

Table 3. Percentage rates of watermark extraction success under MPEG-2 @ 6 Mbit/s (columns for J > 1 list ML / EGC / MGC).

Levels/code        | J = 1 | J = 3                 | J = 5                 | J = 15                | J = 25
M = 2/uncoded      | 37.33 | 92.0 / 99.33 / 98.0   | 99.44 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
M = 16/uncoded     | 66.33 | 93.67 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
M = 256/uncoded    | 90.0  | 99.0 / 100.0 / 100.0  | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
M = 2/BCH(127,64)  | 66.78 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
M = 16/RS(14,8)    | 79.89 | 99.0 / 100.0 / 100.0  | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
M = 256/RS(14,8)   | 89.11 | 99.33 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
M = 2/conv. hard   | 72.78 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
M = 2/conv. soft   | 94.89 | – / 100.0 / 100.0     | – / 100.0 / 100.0     | – / 100.0 / 100.0     | – / 100.0 / 100.0

7. Conclusions

In this paper it has been confirmed, theoretically and experimentally, that spread-spectrum based video watermarking can benefit from multilevel signaling and/or error correction coding. The best performance is attained with binary convolutional codes and non-binary block codes, a result that is also valid under MPEG-2 video compression. It has also been shown that, for video sequences under MPEG-2 compression, better performance can be achieved by retrieving the mark over a window of consecutive frames, using time diversity techniques. In general, the maximal gain combining method leads to the highest extraction success rates, a

result that is most evident for high compression scenarios. The equal gain combining method presents a good performance vs. simplicity tradeoff. The techniques used in this paper can easily be extended to lower bit rate transmission of video content based on the MPEG-4 video standard, for application in wireless systems with tighter bandwidth constraints.

References

1. F. Hartung and F. Ramme, "Digital Rights Management and Watermarking of Multimedia Content for M-Commerce Applications", IEEE Communications Magazine, Vol. 38, pp. 78–84, 2000.
2. I.J. Cox, J. Kilian, F.T. Leighton and T. Shamoon, "Secure Spread Spectrum Watermarking for Multimedia", IEEE Trans. Image Processing, Vol. 6, pp. 1673–1687, 1997.
3. J. Hernández, J.-F. Delaigle and B. Macq, "Improving Data Hiding by Using Convolutional Codes and Soft-Decision Decoding", in Proc. SPIE/IST: Security and Watermarking of Multimedia Contents II, San Jose, U.S.A., January 2000, pp. 24–47.
4. J. Hernández, J. Rodríguez and F. Pérez-González, "Improving the Performance of Spatial Watermarking of Images Using Channel Coding", Signal Processing, No. 80, pp. 1261–1279, 2000.
5. T. Brandão, M.P. Queluz and A. Rodrigues, "Performance Improvement of Spatial Watermarking through Efficient Non-Binary Channel Coding", in Proc. SPIE/IST: Security and Watermarking of Multimedia Contents III, San Jose, U.S.A., January 2001, pp. 651–662.
6. M. Kutter, "Performance Improvement of Spread Spectrum Based Image Watermarking Schemes through M-ary Modulation", in Andreas Pfitzmann (ed.), Information Hiding '99, Vol. LNCS 1768, pp. 237–252, 1999.
7. G. Depovere, T. Kalker and J.-P. Linnartz, "Improved Watermark Detection Reliability Using Filtering before Correlation", in Proceedings of ICIP, 1998, pp. 430–434.
8. J.G. Proakis, Digital Communications, McGraw-Hill Series in Electrical Engineering, 4th edition, 2001.
9. A.J. Viterbi and J.K. Omura, Principles of Digital Communication and Coding, McGraw-Hill, 1979.
10. T. Brandão, "Spread Spectrum Based Image Watermarking", M.Sc. Thesis, IST – Technical University of Lisbon, submitted in November 2001.

Tomás Brandão received his E.E. and M.Sc. degrees in electrical and computer engineering from the Instituto Superior Técnico, Technical University of Lisbon, Portugal, in 1999 and 2002, respectively. He is presently an assistant at the Technology Department of the Institute for Business and Labour Studies, Lisbon. His main interests include image analysis/processing, information theory and DSP design.


Maria Paula Queluz received her E.E. and M.Sc. degrees in electrical and computer engineering from the Instituto Superior Técnico (IST), Technical University of Lisbon, Portugal, in 1985 and 1989, respectively, and her Ph.D. degree from the Catholic University of Louvain, Belgium, in 1996. She was responsible for the IST participation in ACTS project – TALISMAN (“Tracing author’s rights by labeling image services and monitoring access network”). She is presently an assistant professor at the Technical University of Lisbon. Her scientific interests include image analysis/processing and copyright protection.

António Rodrigues received his B.Sc. and M.Sc. degrees in electrical and computer engineering from the Instituto Superior Técnico (IST), Technical University of Lisbon, Lisbon, Portugal, in 1985 and 1989, respectively, and his Ph.D. degree from the Catholic University of Louvain, Louvain-la-Neuve, Belgium, in 1997. Since 1985, he has been with the Department of Electrical and Computer Engineering, IST, where he is currently an assistant professor. His research interests include mobile and satellite communications, spread spectrum systems, modulation and coding.