Multimedia Forensics [1 ed.]
ISBN: 9811676208, 9789811676208, 9811676232, 9789811676239, 9789811676215

Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling over the Internet and the preferred means of communication for most users, it is also a preferred target of malicious attacks.


English, 494 pages, 2022



Table of contents:
Preface
Contents
Symbols
Notation
Part I Present and Challenges
1 What's in This Book and Why?
1.1 Introduction
1.2 Overviews
2 Media Forensics in the Age of Disinformation
2.1 Media and the Human Experience
2.2 The Threat to Democracy
2.3 New Technologies, New Threats
2.3.1 End-to-End Trainable Speech Synthesis
2.3.2 GAN-Based Codecs for Still and Moving Pictures
2.3.3 Improvements in Image Manipulation
2.3.4 Trillion-Param Models
2.3.5 Lottery Tickets and Compression in Generative Models
2.4 New Developments in the Private Sector
2.4.1 Image and Video
2.4.2 Language Models
2.5 Threats in the Wild
2.5.1 User-Generated Manipulations
2.5.2 Corporate Manipulation Services
2.5.3 Nation State Manipulation Examples
2.5.4 Use of AI Techniques for Deception 2019–2020
2.6 Threat Models
2.6.1 Carnegie Mellon BEND Framework
2.6.2 The ABC Framework
2.6.3 The AMITT Framework
2.6.4 The SCOTCH Framework
2.6.5 Deception Model Effects
2.6.6 4Ds
2.6.7 Advanced Persistent Manipulators
2.6.8 Scenarios for Financial Harm
2.7 Investments in Countering False Media
2.7.1 DARPA SEMAFOR
2.7.2 The Partnership on AI Steering Committee on Media Integrity Working Group
2.7.3 JPEG Committee
2.7.4 Content Authenticity Initiative (CAI)
2.7.5 Media Review
2.8 Excerpts on Susceptibility and Resilience to Media Manipulation
2.8.1 Susceptibility and Resilience
2.8.2 Case Studies: Threats and Actors
2.8.3 Dynamics of Exploitative Activities
2.8.4 Meta-Review
2.9 Conclusion
References
3 Computational Imaging
3.1 Introduction to Computational Imaging
3.2 Automation of Geometrically Correct Synthetic Blur
3.2.1 Primary Cue: Image Noise
3.2.2 Additional Photo Forensic Cues
3.2.3 Focus Manipulation Detection
3.2.4 Portrait Mode Detection Experiments
3.2.5 Conclusions on Detecting Geometrically Correct Synthetic Blur
3.3 Differences Between Optical and Digital Blur
3.3.1 Authentically Blurred Edges
3.3.2 Authentic Sharp Edge
3.3.3 Forged Blurred Edge
3.3.4 Forged Sharp Edge
3.3.5 Distinguishing IGHs of the Edge Types
3.3.6 Classifying IGHs
3.3.7 Splicing Logo Dataset
3.3.8 Experiments Differentiating Optical and Digital Blur
3.3.9 Conclusions: Differentiating Optical and Digital Blur
3.4 Additional Forensic Challenges from Computational Cameras
References
Part II Attribution
4 Sensor Fingerprints: Camera Identification and Beyond
4.1 Introduction
4.2 Sensor Noise Fingerprints
4.3 Camera Identification
4.4 Sensor Misalignment
4.5 Image Manipulation Localization
4.6 Counter-Forensics
4.7 Camera Fingerprints and Deep Learning
4.8 Public Datasets
4.9 Concluding Remarks
References
5 Source Camera Attribution from Videos
5.1 Introduction
5.2 Challenges in Attributing Videos
5.3 Attribution of Downsized Media
5.3.1 The Effect of In-Camera Downsizing on PRNU
5.3.2 Media with Mismatching Resolutions
5.4 Mitigation of Video Coding Artifacts
5.4.1 Video Coding from Attribution Perspective
5.4.2 Compensation of Loop Filtering
5.4.3 Coping with Quantization-Related Weakening of PRNU
5.5 Tackling Digital Stabilization
5.5.1 Inverting Frame Level Stabilization Transformations
5.5.2 Inverting Spatially Variant Stabilization Transformations
5.6 Datasets
5.7 Conclusions and Outlook
References
6 Camera Identification at Large Scale
6.1 Introduction
6.2 Naive Methods
6.2.1 Linear Search
6.2.2 Sequential Trimming
6.3 Efficient Pairwise Correlation
6.3.1 Search over Fingerprint Digests
6.3.2 Pixel Quantization
6.3.3 Downsizing
6.3.4 Dimension Reduction Using PCA and LDA
6.3.5 PRNU Compression via Random Projection
6.3.6 Preprocessing, Quantization, Coding
6.4 Decreasing the Number of Comparisons
6.4.1 Clustering by Cameras
6.4.2 Composite Fingerprints
6.5 Hybrid Methods
6.5.1 Search over Composite-Digest Search Tree
6.5.2 Search over Full Digest Search Tree
6.6 Conclusion
References
7 Source Camera Model Identification
7.1 Introduction
7.1.1 Image Acquisition Pipeline
7.1.2 Problem Formulation
7.2 Model-Based Approaches
7.2.1 Color Filter Array (CFA)
7.2.2 Lens Effects
7.2.3 Other Processing and Defects
7.3 Data-Driven Approaches
7.3.1 Hand-Crafted Features
7.3.2 Learned Features
7.4 Datasets and Benchmarks
7.4.1 Template Dataset
7.4.2 State-of-the-art Datasets
7.4.3 Benchmark Protocol
7.5 Case Studies
7.5.1 Experimental Setup
7.5.2 Comparison of Closed-Set Methods
7.5.3 Comparison of Open-Set Methods
7.6 Conclusions and Outlook
References
8 GAN Fingerprints in Face Image Synthesis
8.1 Introduction
8.2 Related Work
8.2.1 Generative Adversarial Networks
8.2.2 GAN Detection Techniques
8.3 GAN Fingerprint Removal: GANprintR
8.4 Databases
8.4.1 Real Face Images
8.4.2 Synthetic Face Images
8.5 Experimental Setup
8.5.1 Pre-processing
8.5.2 Facial Manipulation Detection Systems
8.5.3 Protocol
8.6 Experimental Results
8.6.1 Controlled Scenarios
8.6.2 In-the-Wild Scenarios
8.6.3 GAN-Fingerprint Removal
8.6.4 Impact of GANprintR on Other Fake Detectors
8.7 Conclusions and Outlook
References
Part III Integrity and Authenticity
9 Physical Integrity
9.1 Introduction
9.1.1 Journalistic Fact Checking
9.1.2 Physics-Based Methods in Multimedia Forensics
9.1.3 Outline of This Chapter
9.2 Physics-Based Models for Forensic Analysis
9.2.1 Geometry and Optics
9.2.2 Photometry and Reflectance
9.3 Algorithms for Physics-Based Forensic Analysis
9.3.1 Principal Points and Homographies
9.3.2 Photometric Methods
9.3.3 Point Light Sources and Line Constraints in the Projective Space
9.4 Discussion and Outlook
9.5 Picture Credits
References
10 Power Signature for Multimedia Forensics
10.1 Electric Network Frequency (ENF): An Environmental Signature for Multimedia Recordings
10.2 Technical Foundations of ENF-Based Forensics
10.2.1 Reference Signal Acquisition
10.2.2 ENF Signal Estimation
10.2.3 Higher Order Harmonics for ENF Estimation
10.3 ENF Characteristics and Embedding Conditions
10.3.1 Establishing Presence of ENF Traces
10.3.2 Modeling ENF Behavior
10.4 ENF Traces in the Visual Track
10.4.1 Mechanism of ENF Embedding in Videos and Images
10.4.2 ENF Extraction from the Visual Track
10.4.3 ENF Extraction from a Single Image
10.5 Key Applications in Forensics and Security
10.5.1 Joint Time–Location Authentication
10.5.2 Integrity Authentication
10.5.3 ENF-Based Localization
10.5.4 ENF-Based Camera Forensics
10.6 Anti-Forensics and Countermeasures
10.6.1 Anti-Forensics and Detection of Anti-Forensics
10.6.2 Game-Theoretic Analysis on ENF-Based Forensics
10.7 Applications Beyond Forensics and Security
10.7.1 Multimedia Synchronization
10.7.2 Time-Stamping Historical Recordings
10.7.3 Audio Restoration
10.8 Conclusions and Outlook
References
11 Data-Driven Digital Integrity Verification
11.1 Introduction
11.2 Forensics Clues
11.2.1 Camera-Based Artifacts
11.2.2 JPEG Artifacts
11.2.3 Editing Artifacts
11.3 Localization Versus Detection
11.3.1 Patch-Based Localization
11.3.2 Image-Based Localization
11.3.3 Detection
11.4 Architectural Solutions
11.4.1 Constrained Networks
11.4.2 Two-Branch Networks
11.4.3 Fully Convolutional Networks
11.4.4 Siamese Networks
11.5 Datasets
11.6 Major Challenges
11.7 Conclusions and Future Directions
References
12 DeepFake Detection
12.1 Introduction
12.2 DeepFake Video Generation
12.3 Current DeepFake Detection Methods
12.3.1 General Principles
12.3.2 Categorization Based on Methodology
12.3.3 Categorization Based on Input Types
12.3.4 Categorization Based on Output Types
12.3.5 The DeepFake-o-Meter Platform
12.3.6 Datasets
12.3.7 Challenges
12.4 Future Directions
12.5 Conclusion and Outlook
References
13 Video Frame Deletion and Duplication
13.1 Introduction
13.2 Related Work
13.2.1 Frame Deletion Detection
13.2.2 Frame Duplication Detection
13.3 Frame Deletion Detection
13.3.1 Baseline Approaches
13.3.2 C3D Network for Frame Deletion Detection
13.3.3 Experimental Result
13.4 Frame Duplication Detection
13.4.1 Coarse-Level Search for Duplicated Frame Sequences
13.4.2 Fine-Level Search for Duplicated Frames
13.4.3 Inconsistency Detector for Duplication Localization
13.4.4 Experimental Results
13.5 Conclusions and Discussion
References
14 Integrity Verification Through File Container Analysis
14.1 Introduction
14.1.1 Main Image File Format Specifications
14.1.2 Main Video File Format Specifications
14.2 Analysis of Image File Formats
14.2.1 Analysis of JPEG Tables and Image Resolution
14.2.2 Analysis of Exif Metadata Parameters
14.2.3 Analysis of the JPEG File Format
14.2.4 Automatic Analysis of JPEG Header Information
14.2.5 Methods for the Identification of Social Networks
14.3 Analysis of Video File Formats
14.3.1 Analysis of the Video File Structure
14.3.2 Automated Analysis of mp4-like Videos
14.3.3 Efficient Video Analysis
14.4 Concluding Remarks
References
15 Image Provenance Analysis
15.1 The Problem
15.1.1 The Provenance Framework
15.1.2 Previous Work
15.2 Content Retrieval
15.2.1 Approaches
15.2.2 Datasets and Evaluation
15.2.3 Results
15.3 Graph Construction
15.3.1 Approaches
15.3.2 Datasets and Evaluation
15.3.3 Results
15.4 Content Clustering
15.4.1 Approach
15.4.2 Datasets and Evaluation
15.4.3 Results
15.5 Open Issues and Research Directions
15.6 Summary
References
Part IV Counter-Forensics
16 Adversarial Examples in Image Forensics
16.1 Introduction
16.2 Adversarial Examples in a Nutshell
16.2.1 Problem Definition and Review of the Most Popular Attacks
16.2.2 Adversarial Examples in the Physical Domain
16.2.3 White Versus Black-Box Attacks
16.3 Adversarial Examples in Multimedia Forensics
16.3.1 Transferability of Adversarial Examples in Multimedia Forensics
16.3.2 Increased-Confidence Adversarial Examples with Improved Transferability
16.4 Defenses
16.4.1 Detect Then Defend
16.4.2 Adversarial Training
16.4.3 Detector Randomization
16.4.4 Multiple-Classifier Architectures
16.5 Final Remarks
References
17 Anti-Forensic Attacks Using Generative Adversarial Networks
17.1 Introduction
17.2 Background on GANs
17.2.1 GANs for Image Synthesis
17.3 Brief Overview of Relevant Anti-Forensic Attacks
17.3.1 What Are Anti-Forensic Attacks
17.3.2 Anti-Forensic Attack Objectives and Requirements
17.3.3 Traditional Anti-Forensic Attack Design Procedure and Shortcomings
17.3.4 Anti-Forensic Attacks on Parametric Forensic Models
17.3.5 Anti-Forensic Attacks on Deep Neural Networks
17.4 Using GANs to Make Anti-Forensic Attacks
17.4.1 How GANs Are Used to Construct Anti-Forensic Attacks
17.4.2 Overview of Existing GAN-Based Attacks
17.4.3 Differences Between GAN-Based Anti-Forensic Attacks and Adversarial Examples
17.4.4 Advantages of GAN-Based Anti-Forensic Attacks
17.5 Training Anti-Forensic GANs
17.5.1 Overview of Adversarial Training
17.5.2 Knowledge Levels of the Victim Classifier
17.5.3 White Box Attacks
17.5.4 Black Box Scenario
17.5.5 Zero Knowledge
17.6 Known Problems with GAN-Based Attacks & Future Directions
References

Advances in Computer Vision and Pattern Recognition

Husrev Taha Sencar · Luisa Verdoliva · Nasir Memon, Editors

Multimedia Forensics

Advances in Computer Vision and Pattern Recognition
Founding Editor: Sameer Singh

Series Editor: Sing Bing Kang, Zillow, Inc., Seattle, WA, USA
Advisory Editors: Horst Bischof, Graz University of Technology, Graz, Austria; Richard Bowden, University of Surrey, Guildford, Surrey, UK; Sven Dickinson, University of Toronto, Toronto, ON, Canada; Jiaya Jia, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong; Kyoung Mu Lee, Seoul National University, Seoul, Korea (Republic of); Zhouchen Lin, Peking University, Beijing, China; Yoichi Sato, University of Tokyo, Tokyo, Japan; Bernt Schiele, Max Planck Institute for Informatics, Saarbrücken, Saarland, Germany; Stan Sclaroff, Boston University, Boston, MA, USA

More information about this series at https://link.springer.com/bookseries/4205

Husrev Taha Sencar · Luisa Verdoliva · Nasir Memon Editors

Multimedia Forensics

Editors

Husrev Taha Sencar, Qatar Computing Research Institute, HBKU Research Complex, Education City, Doha, Qatar

Luisa Verdoliva, University of Naples Federico II, Napoli, Italy

Nasir Memon, Center for Cyber Security, New York University, Brooklyn, NY, USA

ISSN 2191-6586 ISSN 2191-6594 (electronic) Advances in Computer Vision and Pattern Recognition ISBN 978-981-16-7620-8 ISBN 978-981-16-7621-5 (eBook) https://doi.org/10.1007/978-981-16-7621-5 © The Editor(s) (if applicable) and The Author(s) 2022. This book is an open access publication. Open Access This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this book are included in the book’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

In 2013, we edited the first edition of this book to provide an overview of challenges and developments in digital image forensics. This was the first book on the topic providing a comprehensive review of prominent research themes as well as the perspectives of legal professionals and forensic practitioners. Since then, the research in this field has expanded dramatically.
On one hand, the volume of multimedia content generated and exchanged over the Internet has increased enormously. Images and videos represent an ever-increasing share of the data traveling on the net and the preferred communication means for most users, especially of the younger generations. Therefore, they are also the preferred target of malicious attacks. On the other hand, with the advent of deep learning, attacking multimedia is becoming easier and easier. Ten years ago, manipulating an image in a credible way required uncommon technical skills and significant effort. Manipulating a video, beyond trivial frame-level operations, was possible only for movie productions. This scenario has radically changed now, and multimedia manipulation is within anyone's reach. Sophisticated tools based on autoencoders or generative adversarial networks are freely available, as well as huge datasets to train them at will. Deepfakes have become commonplace and represent a real and growing menace to the integrity of information, with repercussions on various sectors of society, from the economy to journalism to politics. And these problems can only worsen as new and increasingly sophisticated forms of manipulation take shape.
Unsurprisingly, multimedia manipulation and forensics are becoming hot topics, not just for the research community but for society at large. Several non-profit organizations have been created to investigate these phenomena. Major IT companies, like Google and Facebook, have published large datasets of manipulated videos online to foster research on these topics. Funding agencies are promoting large research projects. Especially remarkable is the Media Forensics initiative (MediFor) launched by DARPA in 2016 to support leading research groups on media integrity all over the world, which generated important outcomes in terms of methods and reference datasets. This was followed in 2020 by the Semantic Forensics initiative (SemaFor), aiming at semantic-level detectors of fake media, working jointly on text, audio, images, and video to counter next-generation attacks.



The greatly increased focus on multimedia integrity has spurred an intense research effort in recent years, which involves not only the multimedia forensics community but the much larger field of computer vision. Indeed, deep learning takes the lion's share not only on the attacker's side but also on the defender's side. It provides the analyst with powerful new forensic tools for detection. Given a suitably large training set, deep learning architectures ensure a significant performance gain with respect to conventional methods and much higher robustness to post-processing and evasion techniques. In addition, deep learning can be used to address other fundamental problems, such as source identification. In fact, being able to track the provenance of a multimedia asset helps establish its integrity and, as sensors become more sophisticated and software-oriented, source identification tools must follow suit. Likewise, neural networks, especially generative adversarial networks, are natural candidates to perform valuable counter-forensics tasks. In short, deep learning is part of the problem but also part of the solution, and the research activity going on in this cat-and-mouse game seems to be growing continuously.
All this said, we felt this was the right time to create an updated version of that book, with largely renovated content, to cover the rapid advancements in multimedia forensics and provide an updated review of the most promising methods and state-of-the-art techniques. We organized the book into three main parts following a brief introduction on developments in imaging technologies. The first part provides an overview of key findings related to the attribution problem. The work pertaining to the integrity of images and videos is collated in the second part. The last part features work on combatting adversarial techniques that pose a significant challenge to forensic methods.

Husrev Taha Sencar, Doha, Qatar
Luisa Verdoliva, Napoli, Italy
Nasir Memon, Brooklyn, USA

Contents

Part I Present and Challenges
1 What's in This Book and Why? (Husrev Taha Sencar, Luisa Verdoliva, and Nasir Memon)
2 Media Forensics in the Age of Disinformation (Justin Hendrix and Dan Morozoff)
3 Computational Imaging (Scott McCloskey)

Part II Attribution
4 Sensor Fingerprints: Camera Identification and Beyond (Matthias Kirchner)
5 Source Camera Attribution from Videos (Husrev Taha Sencar)
6 Camera Identification at Large Scale (Samet Taspinar and Nasir Memon)
7 Source Camera Model Identification (Sara Mandelli, Nicolò Bonettini, and Paolo Bestagini)
8 GAN Fingerprints in Face Image Synthesis (João C. Neves, Ruben Tolosana, Ruben Vera-Rodriguez, Vasco Lopes, Hugo Proença, and Julian Fierrez)

Part III Integrity and Authenticity
9 Physical Integrity (Christian Riess)
10 Power Signature for Multimedia Forensics (Adi Hajj-Ahmad, Chau-Wai Wong, Jisoo Choi, and Min Wu)
11 Data-Driven Digital Integrity Verification (Davide Cozzolino, Giovanni Poggi, and Luisa Verdoliva)
12 DeepFake Detection (Siwei Lyu)
13 Video Frame Deletion and Duplication (Chengjiang Long, Arslan Basharat, and Anthony Hoogs)
14 Integrity Verification Through File Container Analysis (Alessandro Piva and Massimo Iuliani)
15 Image Provenance Analysis (Daniel Moreira, William Theisen, Walter Scheirer, Aparna Bharati, Joel Brogan, and Anderson Rocha)

Part IV Counter-Forensics
16 Adversarial Examples in Image Forensics (Mauro Barni, Wenjie Li, Benedetta Tondi, and Bowen Zhang)
17 Anti-Forensic Attacks Using Generative Adversarial Networks (Matthew C. Stamm and Xinwei Zhao)

Symbols

iid: Independent and identically distributed
PCE: Peak to Correlation Energy
PRNU: Photo-Response Non-Uniformity
r.v.: Random variable
Cov(·, ·): Covariance of two r.v.s
E(·): Expected value of a r.v.
ρ: Linear correlation value
f_X(x): Probability density function of a r.v. X
F_X(x): Cumulative distribution function of a r.v. X
μ_X, σ_X^2: Mean and variance of a r.v. X
DCT: Discrete cosine transform
DFT: Discrete Fourier transform
I: A multidimensional array that may represent an image or a video
I(m, n): The value of the (m, n)-th entry of image I
I(m, n, t): The value of the (m, n)-th entry of the t-th frame of video I
H_0: Null hypothesis
H_1: Alternative hypothesis
P_D: Probability of Detection
P_FA: Probability of False Alarm
P_M: Probability of Miss
P_CR: Probability of Correct Rejection
P_E: Probability of Error
TPR: True Positive Rate
FPR: False Positive Rate
FNR: False Negative Rate
T_ro: Read-out time of a rolling shutter camera
DNN: Deep Neural Network
R@k: Recall at top-k objects retrieved in a rank
VO: Vertex Overlap
EO: Edge Overlap
VEO: Graph overlap, a.k.a. vertex-edge overlap
PSNR: Peak Signal to Noise Ratio
MSE: Mean Squared Error
ASR: Attack Success Rate
CNN: Convolutional Neural Network
DL: Deep Learning
ML: Machine Learning

Notation

We use the following notational conventions.

• Concerning random variables:
  – Random variables by uppercase roman letters, e.g., X, Y.
  – Particular realizations of a random variable by the corresponding lowercase letters, e.g., x, y.
  – A sequence of n r.v.s and its particular realizations will be, respectively, denoted by X^n and x^n.
  – Predicted random variables by the hat sign, e.g., X̂.
  – The alphabet of a random variable will be indicated by the capital calligraphic letter corresponding to the r.v.
• Concerning matrices and vectors:
  – Matrices by boldface capital letters, e.g., A.
  – The entry in the i-th row and j-th column of a matrix A by A(i, j).
  – Column vectors by boldface lowercase letters, e.g., x.
  – <v, w>: scalar product between vectors v and w.
  – Operators are set in normal font, for example, the transpose A^T.
• Concerning sets:
  – Set names by uppercase calligraphic letters, i.e., A–Z.
  – Sets will be specified by listing their objects within curly brackets, i.e., {...}.
  – Standard number sets by Blackboard bold typeface, e.g., ℕ, ℤ, ℝ, etc.
• Concerning functions:
  – Functions are typeset as a single lowercase letter in math font, e.g., f(A).
  – Multi-letter functions, and in particular commonly known functions, are set in lowercase normal font, e.g., cos(x), corr(x, y), or j* = argmin_j Σ_{i=1}^{N} A(i, j).
  – To deal with specific functions from a family of functions (e.g., spherical harmonics basis functions that are indexed by two parameters), we use comma-separated subscripts, e.g., h_{i,j}(x).
  – Non-math annotations are set in normal font, e.g., f_special(A). The purpose is to disambiguate names from variables, as in the example f_i^special(A).
• Concerning images and videos:
  – I will be used exclusively to refer to an image or a sequence of images, and I_n with n ∈ ℕ for multiple images/videos.
  – I(i, j) will be used as a row-major pixel-wise image reference denoting the pixel value at row i, column j of I.
  – Similarly, I(i, j, t) denotes the value of the (i, j)-th pixel of the t-th frame of video I.
  – Image pair-wise adjacency matrices will be denoted by M for a single matrix and M_n with n ∈ ℕ for multiple matrices.
  – Row-major adjacency matrix element reference M(i, j) ∈ M for the element expressing the relation between the i-th and the j-th images of interest.
• Concerning graphs:
  – A generic graph by G for a single graph and G_n with n ∈ ℕ for multiple graphs.
  – G = (V, E) to express a graph's set of vertices V and set of edges E.
  – Vertex names by v_n ∈ V, with n ∈ ℕ.
  – Edge names by e_n ∈ E, with n ∈ ℕ; e = (v_i, v_j) to detail an edge's pair of vertices; if the edge is directed, v_i precedes v_j in the relation.
• In the context of machine learning and deep learning:
  – A generic dataset by D.
  – Training, validation, and test datasets, respectively, by D_tr, D_v, D_ts.
  – The i-th sample at the input of a DNN and its corresponding label (if indicated) by (x_i, y_i).
  – Loss function used for DNN training by L.
  – Sigmoid activation function by σ(·).
  – Output of a DNN by φ(x) when the input is equal to x.
  – Gradient of a function φ by ∇φ.
  – The Jacobian matrix of a multivalued function φ by J_φ.
  – The logit value (output of a CNN prior to the final softmax layer) at node i by z_i.
  – Calligraphic capital letters in function form, M(·), represent a model function (i.e., a neural network).
• Concerning norms:
  – L1, L2, squared L2, and L-infinity norms, respectively, by L_1, L_2, L_2^2, L_∞.
  – The p-norm by ‖·‖_p.

Part I

Present and Challenges

Chapter 1

What's in This Book and Why?

Husrev Taha Sencar, Luisa Verdoliva, and Nasir Memon

1.1 Introduction

Multimedia forensics is a societally important and technically challenging research area that will need significant effort for the foreseeable future. While the research community is growing and work like that in this book demonstrates significant progress, many challenges remain. We expect that forensically tackling ever-improving media acquisition and generation methods and countering the pace of change in media manipulation will continue to require significant technical breakthroughs in defensive technologies.
Whether performed at the individual level or the class level, attribution is at the heart of multimedia forensics. This form of passive forensic analysis exploits traces introduced by the acquisition or generation pipeline and subsequent editing. These traces are often subtle and invisible to humans but can currently be detected by analysis algorithms, providing mechanisms by which generation or manipulation can be automatically detected. Despite significant achievements in this area of research, advances in imaging technologies and newly introduced approaches to media synthesis have a strong potential to render existing forensic capabilities ineffective.
More critically, media manipulation has now become a pressing problem with broad implications. In the past, it required significant skill to create compelling manipulations because editing tools, such as Adobe Photoshop, required experienced users to alter images convincingly. Over the last several years, the rise of machine learning-based technologies has dramatically lowered the skill necessary to create




compelling manipulations. For example, Generative Adversarial Networks (GANs) can create photo-realistic faces with no skill required of the end user beyond the ability to refresh a web page. Deepfake algorithms, such as autoencoders used to swap faces in a video, can create manipulations much more easily than previous-generation video tools. While these machine learning techniques have many positive uses, they have also been misused for darker purposes such as to perpetrate fraud, to create false personas, and to attack personal reputations. The asymmetry in the operational setting of digital forensics, where attackers have access to the details and inner workings of the methods and tools involved and thereby have the freedom to tailor their actions, makes the task even more difficult. It will not be surprising to see these advanced capabilities being deployed to counter forensic methods. Clearly, these are all pressing and important problems for which technical solutions must play a role.
There are a number of significant challenges that must be addressed by the forensics community. Imaging technologies are constantly being improved by manufacturers at both the hardware and software levels with little public information about their specifics. This implies black-box access to these technologies and involves a significant reverse engineering effort on the part of researchers. With rapid advancements in computational imaging, this task will only get more strenuous. Further, new manipulation techniques are becoming available at a rapid pace, and each generation improves on the previous one. As a result, defenders must develop ways to limit the a priori knowledge and training samples required to counter an attack algorithm and should assume an open-world scenario where they might not have knowledge of all the attack algorithms. Meta-learning approaches for developing detection algorithms more quickly would also be highly beneficial. Manipulations in the wild are propagated via noisy channels, like social media, so there is a need for robust detection technologies that are not fooled by launderings such as simple compression or more sophisticated replay attacks. Finally, part of the task of forensics is unwinding how the media was manipulated and who manipulated it. Consequently, approaches to understanding the provenance of manipulated media are necessary.
All of these challenges motivated us to prepare this book on multimedia forensics. Our main objective was to discuss new research directions and also present some possible solutions to existing problems. In this regard, this book provides a timely vehicle for showcasing important research that has emerged in recent years addressing media attribution, authenticity verification, and counter-forensics. We hope our content also sparks new research efforts in the fight against forgeries and misinformation.

1.2 Overviews

Our book begins with an overview of the challenges posed by media manipulation as they stand today. In their chapter, Hendrix and Morozoff provide a comprehensive view of all aspects of the problem to better emphasize the increasing need for



advanced media forensics capabilities. This is followed by an examination of forensic challenges that arise from the increased adoption of computational imaging. The chapter by McCloskey assesses the ability to detect automated focus manipulations performed by computational cameras. For this, it looks at cues that can be used to distinguish optical blur from synthetic blur. The chapter also covers computational imaging research with potential future implications for forensics.
This chapter is followed by a series of chapters focusing on different dimensions of the attribution problem. Undoubtedly, one of the breakthrough discoveries in multimedia forensics is that the Photo-Response Non-Uniformity (PRNU) of an imaging sensor, which manifests as a unique and permanent pattern introduced into all media captured by the sensor, can be used for identification and verification of the source of digital media. Our book has three chapters covering different aspects of PRNU-based source camera attribution. The first chapter by Kirchner reviews the rich body of literature on camera identification from sensor noise fingerprints, with an emphasis on still images from digital cameras and the evolving challenges in this domain. The second chapter by Sencar examines the extension of these capabilities to the video domain and outlines recently developed methods for estimating the sensor's PRNU from videos. The third chapter on this topic by Taspinar et al. provides an overview of techniques proposed to perform source matching efficiently in the presence of a large collection of media.
Another vertical in the attribution domain is the source camera model identification problem. Assuming that a picture or video has been digitally acquired with a camera, the goal of this research direction is to identify the brand and model of the device used at acquisition time without relying on metadata. The chapter by Mandelli et al. investigates source camera model identification through pixel analysis and discusses a wide series of methodologies proposed to solve this problem.
The remarkable progress of deep learning, in particular Generative Adversarial Networks (GANs), has led to the generation of extremely realistic fake facial content. The potential for misuse of such capabilities opens up a new problem in forensics that focuses on the identification of GAN fingerprints in face image synthesis. Neves et al. provide an in-depth literature analysis of state-of-the-art detection approaches for face synthesis and manipulation, as well as of approaches for spoofing those detectors.
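To make the PRNU idea sketched above concrete, the following minimal illustration (our own sketch, not taken from the book's chapters) assumes grayscale images supplied as same-sized NumPy float arrays from a single camera and substitutes a simple Gaussian denoiser for the wavelet-based filter normally used in the PRNU literature:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img):
    # Noise residual W = I - denoise(I); a Gaussian filter stands in for the
    # wavelet-based denoiser commonly used in the PRNU literature.
    return img - gaussian_filter(img, sigma=1.0)

def estimate_fingerprint(images):
    # Maximum-likelihood-style estimate K ~ sum(W_i * I_i) / sum(I_i^2),
    # computed from several flat, well-lit images taken with the same camera.
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img in images:
        img = img.astype(np.float64)
        num += noise_residual(img) * img
        den += img ** 2
    return num / (den + 1e-8)

def fingerprint_correlation(query_img, fingerprint):
    # Normalized correlation between the query's noise residual and I * K.
    # Practical systems prefer the peak-to-correlation energy (PCE) statistic.
    img = query_img.astype(np.float64)
    a = noise_residual(img)
    b = img * fingerprint
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

A high correlation between a query image and a camera's estimated fingerprint supports the hypothesis that the image was captured by that camera; the chapters in Part II discuss the far more robust estimators and decision statistics used in practice.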



The long-lasting problem of image and video manipulation is revisited in the subsequent four chapters. However, distinct from conventional manipulation detection methods, Cozzolino et al. focus on the data-driven deep learning-based methods proposed in recent years. This chapter discusses in detail forensics traces these methods rely on, and describes architectural solutions used for detection and localization, together with the associated training strategies. The next chapter features the new class of forgery known as DeepFakes that involve impersonating audios and videos generated by deep neural networks. In this chapter, Lyu surveys the state-of-the-art DeepFake detection methods and evaluates solutions proposed to tackle this problem along with their pros and cons. In the third chapter of the topic, Long et al. provide an overview of the work related to video frame deletion and duplication and introduce deep learning-based approaches to tackle these two types of manipulations. A complementary integrity verification approach is presented next. Several work demonstrated that media manipulation not only leaves forensic traces in the audiovisual content but also in the file structure. The chapter authored by Piva covers proposed forensic methods devoted to the analysis of image and video file formats to validate multimedia integrity. The last chapter on authenticity verification introduces the problem of provenance analysis. By jointly examining multiple multimedia files, instead of making individual assessments, and evaluating pairwise relationships between them, the provenance of multimedia files is more reliably determined. The chapter by Moreira et al. addresses this problem by covering state-of-the-art analysis techniques. The last part of our book includes two chapters on counter-forensics complementing the advances brought by deep learning to forensics. The advances brought by deep learning have also significantly expanded the capabilities of anti-forensic attackers. The first chapter by Barni et al. focuses on adversarial examples in image forensics and how they can be used in creation of attacks. It also describes possible countermeasures against them, and discusses their effectiveness. The other chapter by Stamm et al. focuses on emerging threat posed by GAN-based anti-forensic attacks in creating realistic, but completely synthetic forensic traces.


Chapter 2

Media Forensics in the Age of Disinformation

Justin Hendrix and Dan Morozoff

2.1 Media and the Human Experience

Empiricism is the notion that knowledge originates from sensory experience. Implicit in this statement is the idea that we can trust our senses. But in today's world, much of the human experience is mediated through digital technologies. Our sensory experiences can no longer be trusted a priori. The evidence before us, what we see and hear and read, is, more often than not, manipulated. This presents a profound challenge to a species that for most of its existence has relied almost entirely on sensory experience to make decisions.
A group of researchers from multiple universities and multiple disciplines that study collective behavior recently assessed that the study of the impact of digital communications technologies on humanity should be understood as a "crisis discipline," alongside the study of matters such as climate change and medicine (Bak-Coleman et al. 2021). Noting that "our societies are increasingly instantiated in digital form," they argue that the potential of digital media to perturb the ways in which we organize ourselves, deal with conflict, and solve problems is profound:

    Our social adaptations evolved in the context of small hunter-gatherer groups solving local problems through vocalizations and gestures. Now we face complex global challenges from pandemics to climate change—and we communicate on dispersed networks connected by digital technologies such as smartphones and social media.

The evidence, they say, is clear: in this new information environment, misinformation, disinformation, and media manipulation pose grave threats, from the spread of conspiracy theories (Narayanan et al. 2018) and rejection of recommended public health interventions (Koltai 2020) to subversion of democratic processes (Atlantic Council 2020) and even genocide (Łubiński 2020).
Media manipulation is only one of a series of problems in the digital information ecosystem, but it is a profound one. For most of modern history, trust in and the verifiability of media (whether printed material, audio, or, more recently, images, video, and

The evidence, they say, is clear: in this new information environment, misinformation, disinformation, and media manipulation pose grave threats—from the spread of conspiracy theories (Narayanan et al. 2018), rejection of recommended public health interventions (Koltai 2020), subversion of democratic processes (Atlantic Council 2020), and even genocide (Łubi´nski 2020). Media manipulation is only one of a series of problems in the digital information ecosystem—but it is a profound one. For most of modern history, trust and verifiability of media—whether printed material, audio and more recently images, video, and J. Hendrix (B) · D. Morozoff New York University, Vidrovr Inc., New York, NY, USA e-mail: [email protected] © The Author(s) 2022 H. T. Sencar et al. (eds.), Multimedia Forensics, Advances in Computer Vision and Pattern Recognition, https://doi.org/10.1007/978-981-16-7621-5_2

7

8

J. Hendrix and D. Morozoff

other modalities—have been linked to one’s ability to access and coordinate information to previously gathered facts and beliefs. Consequently, in order to deceive someone, the deceiver had to rely on the obfuscation of a media object’s salient features and hope that the consumer’s time and ability to perceive these differences was not sufficient. This generalization extends from boasts and lies told in the schoolyard, to false narratives circulated by media outlets, to large-scale disinformation campaigns pursued by nations during military conflicts. A consistent property and limitation of deceptive practices is that their success is linked to two conditions. The first condition is that the ability to detect false information is linked to the amount of time we have to process and verify it before taking action on it. For instance, showing someone a brief picture of a photoshopped face is more likely to be taken as true if the viewer is given less time to examine it. The second condition is linked to the amount of time and effort required by the deceiver to create the manipulation. Hence, the impact and reach of a manipulation is limited by the resources and effort involved. A kid in school can deceive their friends with a photoshopped image, but it required Napoleon and the French Empire to flood the United Kingdom with fake pound notes during the Napoleonic wars and cause a currency crisis. A set of technological advancements over the last 30 years has led to a tectonic shift in this dynamic. With the adoption of Internet and social media in the 1990s and 2000s, respectively, the marginal costs required to deceive a victim have plummeted. Any person on the planet with a modicum of financial capital is able to reach billions of people with their message, effectively breaking the resource dependence of a deceiver. This condition is an unprecedented one in the history of the world. Furthermore, the maturation and spread of machine learning-powered media technologies has provided the ability to generate diverse media at scale, lowering the barriers to entry for all content creators. Today, the amount of work required to deliver mass content at Internet scale is exponentially decreasing. This is inevitably compounding the problem of trust online. As a result, our society is approaching a precipice of far-reaching consequences. Media manipulation is easy for anyone, even if they are without significant resources or technical skills. While conversely the amount of media available to people continues to explode, the amount of time dedicated to verifying a single piece of content is also falling. As such, if we are to have any hope of adapting our human sensemaking to this digital age, we will need media forensics incorporated into our digital information environment in multiple layers. We will require automated tools to augment our human abilities to make sense of what we read, hear, and see.

2.2 The Threat to Democracy One of the most pressing consequences of media manipulation is the threat to democracy. Concern over issues of trust and the role of information in democracies is severe. Globally, democracy has run into a difficult patch, with 2020 perhaps among its worst

2 Media Forensics in the Age of Disinformation

9

years in decades. The Economist Intelligence Unit’s Democracy Index found that as of 2020 only “8.4% of the world’s population live in a full democracy while more than a third live under authoritarian rule” (The Economist 2021). In the United States, the twin challenges of election disinformation and the fraught public discourse on the COVID-19 pandemic exposed weaknesses in the information ecosystem. Technology—once regarded as a boon to democracy—is now regarded by some as a threat to it. For instance, a 2019 survey of 1000 experts 2 by the Pew Research Center and Elon University’s Imagining the Internet Center found that, when pressed on the impact of the use of technology by citizens, civil society groups, and governments on core aspects of democracy and democratic representation, 49% of the “technology innovators, developers, business and policy leaders, researchers, and activists” surveyed said technology would “mostly weaken core aspects of democracy and democratic representation in the next decade,” while 33% said technology would mostly strengthen democracies. Perhaps the reason for such pessimism is that current interventions to manage trust in the information ecosystem are not taking place against a static backdrop. Social media has accompanied a decline of trust in institutions and the media generally, exacerbating a seemingly intractable set of social problems that create ripe conditions for the proliferation of disinformation and media manipulation. Yet, the decade ahead will see even more profound disruptions to the ways in which information is produced, distributed, sought, and consumed. A number of technological developments are driving the disruption, including the installation of 5G wireless networks, which will enable richer media experiences, such as AR/VR and having more devices trading information more frequently. The increasing advance of various computational techniques to generate, manipulate, and target content is also problematic. The pace of technological developments suggest that, by the end of the decade, synthetic media may radically change the information ecosystem and demand new architectures and systems to aid sensemaking. Consider a handful of recent developments: • Natural text to speech enables technologies like Google Assistant, but is also being used for voice phishing scams. • Fully generated realistic images from StyleGAN2—a neural network model—are being leveraged to make fake profiles in order to deceive on social media. • Deep Fake face swapping videos are now capable of fooling many people who do not pay close attention to the video. It is natural, in this environment, that the study of the threat itself is a thriving area of inquiry. Experts are busy compiling threats, theorizing threat models, studying the susceptibility of people to different types of manipulations, exploring what interventions build resilience to deception, helping to inform the debate about what laws and rules need to be in place, and offering input on the policies technology platforms should have in place to protect users from media manipulation. In this chapter, we will focus on the emerging threat landscape, in order to give the reader a sense of the importance of media forensics as a discipline. While the technical



work of discerning text, images, video, and audio and their myriad combinations may not seem to qualify as a “crisis discipline,” we believe that it is indeed.

2.3 New Technologies, New Threats

In this section, we describe selected research trends with high potential impact on the threat landscape of synthetic media generation. We hope to ground the reader in the art of the possible at the cutting edge of research and provide some intuition about where the field is headed. We briefly explain the key contributions and why we believe they are significant for students and practitioners of media forensics. This list is not exhaustive. These computational techniques will underpin synthetic media applications in the future.

2.3.1 End-to-End Trainable Speech Synthesis

Most speech generation systems include several processing steps that are customarily trained in isolation (e.g., text-to-spectrogram generators, vocoders). Moreover, while generative adversarial networks (GANs) have made tremendous progress on visual data, their application to human speech still remains limited. A recent paper addresses both of these issues with an end-to-end adversarial training protocol (Donahue et al. 2021). The paper describes a fully differentiable feedforward architecture for text-to-speech synthesis, which further reduces the amount of necessary training-time supervision by eliminating the need for linguistic feature alignment. This lowers the barrier for engineering novel speech synthesis systems, simplifying the data pre-processing needed to generate speech from textual inputs.
Another attack vector that has been undergoing rapid progress is voice cloning (Hao 2021). Voice cloning starts from a short audio recording of one person's voice and synthesizes a very similar voice that can be applied in various inauthentic capacities. A recent paper proposes a novel speech synthesis system, NAUTILUS, which brings text-to-speech and voice cloning under the same framework (Luong and Yamagishi 2020). This demonstrates the possibility that a single end-to-end model could be used for multiple speech synthesis tasks and could lead to the kind of unification recently seen in natural language models (e.g., BERT (Devlin et al. 2019), where a single architecture beats the state of the art on multiple benchmark problems).

2.3.2 GAN-Based Codecs for Still and Moving Pictures

Generative adversarial networks (GANs) have revolutionized image synthesis as they rapidly evolved from small-scale toy datasets to high-resolution photorealistic content.



They exploit the capacity of deep neural networks to learn complex manifolds and enable generative sampling of novel content. While the research started from unconditional, domain-specific generators (e.g., human faces), recent work focuses on conditional generation. This regime is of particular importance for students and practitioners of media forensics as it enables control of the generation process.
One area where this technology is underappreciated with regard to its misuse potential is lossy codecs for still and moving pictures. Visual content abounds with inconsequential background content or abstract textures (e.g., grass, water, clouds) which can be synthesized in high quality instead of being coded explicitly. This is particularly important in limited-bandwidth regimes that typically occur in mobile applications, social media, or video conferencing. Examples of recent techniques in this area include GAN-based video conferencing compression from Nvidia (Nvidia Inc 2020) and the first high-fidelity GAN-based image codec from Google (Mentzer et al. 2020).
This may have far-reaching consequences. Firstly, effortless manipulation of video conference streams may lead to novel man-in-the-middle attacks and further blur the boundary between acceptable and malicious alterations (e.g., while replaced backgrounds are often acceptable, altered faces or expressions are probably not). Secondly, the ability to synthesize an infinite number of content variations (e.g., decode the same image with different-looking trees) poses a threat to reverse image search, one of the very few reliable techniques available to fact checkers today.

2.3.3 Improvements in Image Manipulation

Progress in machine learning, computer vision, and graphics has been constantly lowering the barrier (i.e., skill, time, compute) to convincing image manipulation. This threat is well recognized, hence we simply report a novel manipulation technique recently developed by the University of Washington and Adobe (Zhang et al. 2020). The authors propose a method for object removal from photographs that automatically detects and removes the object's shadow. This may be another blow to image forensic techniques that look for (in)consistency of physical traces in photographs.

2.3.4 Trillion-Param Models

The historical trend of throwing more compute at a problem to see where the performance ceiling lies continues with large-scale language models (Radford and Narasimhan 2018; Kaplan et al. 2020). Following on the heels of the much-publicized GPT-3 (Brown et al. 2020), Google Brain recently published their new Switch Transformer model (Fedus et al. 2021), roughly a tenfold increase in parameter count over GPT-3. Interestingly, research trends related to increasing efficiency and reducing costs are paralleling the development of these large models.



With large sparse models, it is becoming apparent that lower precision (e.g., 16-bit) can be effectively used, combined with new chip architecture improvements targeting these lower bit precisions (Wang et al. 2018; Park and Yoo 2020). Furthermore, because of the scales of data needed to train these enormous models, sparsification techniques are further being explored (Jacobs et al. 1991; Jordan and Jacobs 1993). New frameworks are also emerging that facilitate distribution of ML models across many machines (e.g., Tensorflow Mesh that was used in Google’s Switch Transformers) (Shazeer et al. 2018). Finally, an important trend that is also stemming from this development is improvements in compression and distillation of models, a concept that was explored in our first symposium as a potential attack vector.
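The efficiency of sparse, trillion-parameter models such as the Switch Transformer mentioned above comes from routing each token to a single expert, so only a small fraction of the weights is touched per token. The following NumPy sketch is a toy illustration of that top-1 routing idea, not the published implementation; the "experts" here are plain weight matrices introduced only for demonstration.

```python
import numpy as np

def switch_layer(x, gate_w, experts_w):
    # Top-1 ("switch") routing: each token goes to the single expert with the
    # highest gate score, so only a small fraction of weights is active per token.
    #   x:         (n_tokens, d_model)
    #   gate_w:    (d_model, n_experts)
    #   experts_w: (n_experts, d_model, d_model)  -- toy single-matrix "experts"
    logits = x @ gate_w
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    choice = probs.argmax(axis=1)                 # expert index per token
    y = np.zeros_like(x)
    for e in range(experts_w.shape[0]):
        idx = np.where(choice == e)[0]
        if idx.size:
            # Scale by the gate probability, as mixture-of-experts layers do,
            # so the routing decision stays differentiable in a real model.
            y[idx] = (x[idx] @ experts_w[e]) * probs[idx, e:e + 1]
    return y

# Example: 4 tokens, model width 8, 3 experts.
rng = np.random.default_rng(0)
out = switch_layer(rng.normal(size=(4, 8)),
                   rng.normal(size=(8, 3)),
                   rng.normal(size=(3, 8, 8)))
```

Because per-token compute stays roughly constant while total parameter count grows with the number of experts, such architectures make the trillion-parameter regime practical.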

2.3.5 Lottery Tickets and Compression in Generative Models

Following in the footsteps of a landmark 2018 paper proposing the lottery ticket hypothesis (Frankle and Carbin 2019), work continues on discovering intelligent compression and distillation techniques to prune large models without losing accuracy. A recent paper presented at ICLR 2021 applies this hypothesis to GANs (Chen et al. 2021). There have also been interesting approaches investigating improvements in the training dynamics of subnetworks by leveraging differential solvers to stabilize GANs (Qin et al. 2020). The Holy Grail of predicting subnetworks or initialization weights still remains elusive, and so the trend of training ever larger parameter models, and then attempting to compress or distill them, will continue until the next ceiling is reached.
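The lottery ticket procedure referenced above (train, prune the smallest-magnitude weights, rewind the survivors to their initial values, repeat) can be sketched as follows. This is a schematic illustration only; `train_fn` is a hypothetical user-supplied training routine, not an API from the cited papers.

```python
import numpy as np

def iterative_magnitude_pruning(init_weights, train_fn, rounds=3, prune_frac=0.2):
    # Lottery-ticket style loop: train, prune the smallest-magnitude weights,
    # rewind the survivors to their original initialization, and repeat.
    # `train_fn(weights, mask)` is a hypothetical user-supplied routine that
    # trains the masked network and returns the trained weight dict.
    mask = {k: np.ones_like(w) for k, w in init_weights.items()}
    for _ in range(rounds):
        masked_init = {k: w * mask[k] for k, w in init_weights.items()}  # rewind step
        trained = train_fn(masked_init, mask)
        for k, w in trained.items():
            alive = np.abs(w[mask[k] == 1])
            if alive.size == 0:
                continue
            threshold = np.quantile(alive, prune_frac)   # prune the smallest fraction
            mask[k] = np.where((np.abs(w) <= threshold) & (mask[k] == 1), 0.0, mask[k])
    return {k: w * mask[k] for k, w in init_weights.items()}, mask
```

The returned mask identifies the surviving "winning ticket" subnetwork; applying the same recipe to GAN generators is what the work cited above explores.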

2.4 New Developments in the Private Sector

To understand how the generative threat landscape is evolving, it is important to track developments from industry that could be used in media manipulation. Here, we include research papers, libraries for developers, APIs for the general public, as well as polished products. The key players include the research and development divisions of large corporations, such as Google, Nvidia, and Adobe, but we also see relevant activities from the startup sector.

2.4.1 Image and Video

Nvidia Maxine (Nvidia Inc 2020): Nvidia released a fully accelerated platform software development kit for developers of video conferencing services to build and deploy AI-powered features that use state-of-the-art models in their cloud. A notable feature is GAN-based video compression, in which human faces are synthesized from facial landmarks and a single reference image. While this significantly



improves user experience, particularly for low-bandwidth connections, it opens up potential for abuse.

Smart Portraits and Neural Filters in Photoshop (Nvidia Inc 2021): Adobe introduced neural filters into their flagship photo editor, Photoshop. The addition includes Smart Portraits, a machine-learning-based face editing feature which brings the latest face synthesis capabilities to the masses. This development continues the trend of bringing advanced ML-powered image editing to mainstream products. Other notable examples of commercially available advanced AI editing tools include Anthropics' PortraitPro (Anthropics Inc 2021) and Skylum's Luminar (Skylum Inc 2021).

ImageGPT (Chen et al. 2020) and DALL-E (Ramesh et al. 2021): OpenAI is experimenting with extending their GPT-3 language model to other data modalities. ImageGPT is an autoregressive image synthesis model capable of convincing completion of image prompts.

Fig. 2.1 Synthesized images generated by DALL-E based on a prompt requesting “snail-like tanks”



DALL-E is an extension of this idea that combines visual and language tokens. It allows for image synthesis based on textual prompts. The model shows a remarkable capability in generating plausible novel content based on abstract concepts, e.g., "avocado chairs" or "snail tanks"; see Fig. 2.1.

Runway ML (RunwayML Inc 2021): Runway ML is a Brooklyn-based startup that provides an AI-powered tool for artists and creatives. Their software brings the most recent open-source ML models to the masses and allows for their adoption in image and video editing workflows.

Imaginaire (Nvidia Inc 2021): Nvidia released a PyTorch library that contains optimized implementations of several image and video synthesis methods. The tool is intended for developers and provides various models for supervised and unsupervised image-to-image and video-to-video translation.

2.4.2 Language Models

GPT-3 (Brown et al. 2020)—OpenAI developed a language model with 175 billion parameters. It performs well in few-shot learning scenarios, producing nuanced text and functioning code. Shortly after, the model was released via a web API for general-purpose text-to-text mapping.

Switch Transformers (Fedus et al. 2021)—Six months after the GPT-3 release, Google announced a record-breaking language model featuring over a trillion parameters. The model is described in a detailed technical manuscript and was published along with a Tensorflow Mesh implementation.

CLIP (Radford et al. 2021)—Released in early January 2021 by OpenAI, CLIP is an impressive zero-shot image classifier, pre-trained on 400 million text-to-image pairs. Given a list of descriptions, CLIP predicts which descriptor, or caption, a given image should be paired with.
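As an illustration of how such a model can be queried, the sketch below scores an image against a few candidate captions using OpenAI's open-source `clip` package (the "ViT-B/32" checkpoint is a published model name; the file path and captions are illustrative, and PyTorch and Pillow are assumed installed).

```python
import torch
import clip                     # OpenAI's open-source CLIP package
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Candidate captions for zero-shot matching (illustrative labels).
captions = [
    "a news photograph of a press conference",
    "a computer-generated portrait of a person",
    "a screenshot of a social media post",
]

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(captions).to(device)

with torch.no_grad():
    # CLIP scores every image-caption pair; softmax turns the scores into a distribution.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).squeeze(0)

for caption, p in zip(captions, probs.tolist()):
    print(f"{p:.3f}  {caption}")
```

The same pattern generalizes: any list of natural-language descriptions can serve as an ad hoc classifier without retraining, which is what makes the model attractive both to analysts and to potential manipulators.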

2.5 Threats in the Wild

In this section, we provide a summary of observed disinformation campaigns and scenarios that are relevant to students and practitioners of media forensics. We underscore new tactics and the evolving resources and capabilities of adversaries.

2.5.1 User-Generated Manipulations

Thus far, user-generated media aimed at manipulating public opinion have, by and large, not made use of sophisticated media manipulation capabilities beyond the image, video, and sound editing software available to the general public. Multiple politically charged manipulated videos emerged and gained millions of views during the second half of 2020. Documented cases related to the US elections include manipulation of the visual background of a speaker (Reuters Staff 2021c), adding manipulated audio (Weigel 2021), cropping of footage (Reuters Staff 2021a), manipulative editing of a video, and an attempt to spread the false claim that an original video was a "deep fake" production (Reuters Staff 2021b). In all of the cases mentioned, the videos gained millions of views within hours on Twitter, YouTube, Facebook, Instagram, TikTok, and other platforms before they were taken down or flagged as manipulated. Often the videos emerged from an anonymous account on social media but were shared widely, in part by prominent politicians and campaign accounts on Twitter, Facebook, and other platforms. Similar incidents emerged in India (e.g., an edited version of a video documenting police brutality) (Times of India Anam Ajmal 2021).

Today, any user can employ AI-generated images to create a fake persona. As recently reported in the New York Times (Hill and White 2021), Generated Photos (Generated Photos Inc 2021) can create "unique, worry-free" fake persons for "$2.99 or 1,000 people for $1000." Similarly, Rosebud AI openly advertises on Twitter a service for creating human animation (Rosebud AI Inc 2021).

Takedowns of far-right extremist and conspiracy-related accounts by several platforms, including Twitter, Facebook, and Instagram, together with WhatsApp's announcement of changes to its privacy policies, resulted in reports of migration to alternative platforms and chat services. Though it is still too early to assess the long-term impact of this change, it has the potential to affect the consumption and distribution of media, particularly among audiences already keen to proliferate conspiracy theories and disinformation.

2.5.2 Corporate Manipulation Services

In recent years, investigations by journalists, researchers, and social media platforms have documented several information operations conducted by state actors via marketing and communications firms around the world. The states involved included, among others, Saudi Arabia, the UAE, and Russia (CNN Clarissa Ward et al. 2021). Some researchers argue that outsourcing information operations has become "the new normal," and so far the operations that were exposed did not spread original synthetic media (Grossman and Ramali 2021). For countries seeking the benefits of inauthentic content but unable to develop such a capability internally, there are plenty of manipulation companies for hire. Regimes that lack the technological resources to build a manipulation capability no longer need to worry, as an entire industry of Advanced Persistent Manipulators (APM) (GMFUS Clint Watts 2021) has emerged in the private sector to meet the demand. Many advertising agencies, able to produce manipulated content with ease, are hired by governments for public relations purposes and blur the line between authentic and inauthentic content.


The use of a third party by state actors tends to be limited to a targeted campaign. In 2019, reports by Twitter linked campaigns run by the Saudi firm Smaat to the Saudi government (Twitter Inc 2021). The firm used automated tools to amplify pro-Saudi messages through networks of thousands of accounts. The FBI has accused the firm's CEO of acting as a Saudi government agent in an espionage case involving two former Twitter employees (Paul 2021). In 2020, a CNN investigation exposed an effort outsourced by Russia, through Ghana and Nigeria, to influence U.S. politics. The operation targeted African American demographics in the United States using accounts on Facebook, Twitter, and Instagram, and was executed mainly by hand by a dozen troll-farm employees in West Africa (CNN Clarissa Ward et al. 2021).

2.5.3 Nation State Manipulation Examples

For governments seeking to utilize, acquire, or develop synthetic media tools, the resources and technological capability needed to create and harness these tools are significant, hence the rise of the third-party actors described in Sect. 2.5.2. Below is a sampling of recent confirmed or presumed nation state manipulations.

People's Republic of China
Over the last two years, Western social media platforms have encountered a sizeable amount of inauthentic imagery and video attributed to, or believed to stem from, China (The Associated Press ERIKA KINETZ 2021; Hatmaker 2021). To what degree these attributions are directly connected to the Chinese government is unknown, but the incentives suggest that the Chinese Communist Party (CCP), which controls the state and the country's information environment, creates or condones the deployment of these malign media manipulations.

Australia has made the most public stand against Chinese manipulations. In November 2020, a Chinese foreign ministry official and leading CCP Twitter provocateur posted a fake image showing an Australian soldier with a bloody knife next to a child (BBC Staff 2021). The image sought to inflame tensions over Afghanistan, where Australian soldiers had recently been found to have killed Afghan prisoners between 2009 and 2013. Aside from this highly public incident and the Australian condemnation that followed, signs of Chinese use of inauthentic media continue to mount. Throughout the fall of 2019 and 2020, Chinese-language YouTube news channels proliferated. These new programs, created in high volume and distributed at vast scale, periodically mix real information and real people with selectively sprinkled inauthentic content, much of which is difficult to assess as intended manipulation or as a narrated recreation of actual or perceived events. Graphika has reported on these "political spam networks" that distribute pro-Chinese propaganda, first targeting the Hong Kong protests in 2019 (Nimmo et al. 2021) and more recently adding English-language YouTube videos focusing on US policy in 2020 (Nimmo et al. 2021). Reports showed the use of inauthentic or misattributed content in both visual and audio forms: accounts used AI-generated profile images and dubbed voice-overs.

A similar network of inauthentic accounts was spotted immediately following the 2021 US presidential inauguration. Users with deep fake profile photos, which bear a notable similarity to AI-generated photos on thispersondoesnotexist.com, tweeted verbatim excerpts lifted from the Chinese Ministry of Foreign Affairs, see Fig. 2.2. These alleged Chinese accounts were also enmeshed in a publicly discussed campaign, reported by The Guardian, amplifying a debunked ballot video that surfaced amid voter fraud allegations surrounding the U.S. presidential election. The accounts' use of inauthentic faces (Fig. 2.3), while somewhat sloppy, does offer a way to avoid the detection that repeatedly catches stolen real photos when they are run through image or facial recognition software.

Fig. 2.2 Screenshots of posts from Facebook and Twitter

Fig. 2.3 Three examples of Deep Fake profile photos from a network of 13 pro-China inauthentic accounts on Twitter. Screengrabs of photos directly from Twitter
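The China examples above note that stolen real photos are easily flagged by image-matching software, whereas freshly generated GAN faces (Fig. 2.3) have no prior copies to match against. Below is a minimal sketch of that kind of matching via perceptual hashing, assuming the third-party `ImageHash` and `Pillow` packages are installed; the file names and distance threshold are illustrative.

```python
from PIL import Image
import imagehash   # perceptual hashing library (pip install ImageHash)

def looks_like_known_photo(candidate_path: str, known_paths: list[str],
                           max_distance: int = 8) -> bool:
    """Return True if the candidate image is a near-duplicate of any known photo.

    Perceptual hashes change little under resizing or recompression, so a reused
    (stolen) profile photo is usually caught, while a freshly GAN-generated face
    has no match in any reference set, which is exactly why operators favor it.
    """
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    for path in known_paths:
        # Subtracting two hashes yields their Hamming distance.
        if candidate_hash - imagehash.phash(Image.open(path)) <= max_distance:
            return True
    return False

# Illustrative usage against a small reference set of previously seen photos:
# print(looks_like_known_photo("profile.jpg", ["seen/a.jpg", "seen/b.jpg"]))
```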

France
Individuals associated with the French military have been linked to a 2020 information operation aimed at discrediting a similar Russian IO that employed trolls to spread disinformation in the Central African Republic and other African countries (Graphika and The Stanford Internet Observatory 2021). The French operation used fake media to create Facebook accounts. It is an interesting example of a tactic that employs an inauthentic campaign to fight disinformation.

Russia/IRA
The Russian-sponsored Internet Research Agency was found to have created a small number of fake Facebook, Twitter, and LinkedIn accounts that amplified a website posing as an independent news outlet, peacedata.net (Nimmo et al. 2021). While IRA troll accounts have been prolific for years, this is the first observed instance of their use of AI-generated images. Fake profiles were created to lend credibility to peacedata.net, which hired unwitting freelance writers to provide left-leaning content for audiences, mostly in the US and UK, as part of a calculated, albeit ineffective, disruption campaign.

2.5.4 Use of AI Techniques for Deception 2019–2020

During 2019 and 2020, at the height of the COVID-19 pandemic, nefarious generative AI techniques gained popularity. The following are examples of various modalities employed for the purposes of deception and information operations (IO). Ultimately, we also observe that commercial developments are rapidly advancing the capability for video and image manipulation. While the resources required to train GANs and other models will remain a prohibitive factor in the near future, improvements in image codecs, larger parameter counts, lower numerical precision, and compression/distillation create additional threat opportunities. The threat landscape will change rapidly in the next 3 to 5 years, requiring media forensics capabilities to advance quickly to keep pace with adversaries.

GAN-Generated Images
The most prevalent documented use of AI capabilities in IO so far has been the use of freely available tools to create fake personas online. Over the past year, the use of GAN technology has become increasingly common in operations linked to state actors or to organizations driven by political motivation, see Fig. 2.3. The technology was mostly applied to generating profile pictures for inauthentic users on social media, relying on openly available tools such as thispersondoesnotexist.com and Generated Photos (Generated Photos Inc 2021). Several operations used GANs to create hundreds to thousands of accounts that spammed platforms with content and comments in an attempt to amplify messages. Some of these operations were attributed to Chinese actors and to PR firms. Other operations limited the use of GAN-generated images to a handful of more "believable" fake personas that appeared as creators of content. At least one of those efforts was attributed to Russia's IRA. In at least one case, a LinkedIn account with a GAN-generated image was used as part of a suspected espionage effort to contact US national security experts (Associated Press 2021).

Synthetic Audio
In recent years, at least three attempts, one of them successful, to defraud companies and individuals using synthetic audio have been documented. Two cases involved the use of synthetic audio to impersonate a company CEO in a phone call. In 2019, a British energy company transferred over $200,000 following a fraudulent phone call that used synthetic audio to impersonate the parent company's CEO requesting the transfer (Wall Street Journal Catherine Stupp 2021). A similar attempt to defraud a US-based company failed in 2020 (Nisos 2021). Symantec reported at least three cases in 2019 of synthetic audio being deployed to defraud private companies (The Washington Post Drew Harwell 2021).

Deception Using GPT-3
While user-generated manipulations aimed at influencing public opinion have so far avoided advanced technology, there was at least one attempt to use GPT-3 to generate automated text, by an anonymous user/bot on Reddit (The Next Web Thomas Macaulay 2021). The effort appears to have abused a third-party app's access to the OpenAI API and posted hundreds of comments. In some cases, the bot published conspiracy theories and made false statements. Additionally, recent reports noted that GPT-3 has an "anti-Muslim" bias (The Next Web Tristan Greene 2021). This points to the potential damage that could be caused by apps using the technology, even without malicious intent.

IO and Fraud Using AI Techniques
Over a dozen cases of synthetic media used for information operations and other criminal activities were recorded during the past couple of years. In December 2019, social media platforms noted the first use of AI techniques in a case of "coordinated inauthentic behavior," which was attributed to The BL, a company with ties to the publisher of The Epoch Times, according to Facebook. The company created hundreds of fake Facebook accounts using GANs to generate synthetic profile pictures. The use of GANs to create false portrait pictures of non-existent personas recurred in more than a handful of IOs discovered during 2020, including operations attributed by social media platforms to Russia's IRA, to China, and to a US-based PR firm.

2.6 Threat Models

Just as the state of the art in technologies for media manipulation has advanced, so has theorizing around threat models. Conceptual models and frameworks used to understand and analyze threats and to describe adversary behavior are evolving alongside what is known of the threat landscape. It is useful for students of media forensics to understand these frameworks generally, as they are used in the field by analysts, technologists, and others who work to understand and mitigate manipulation and deception.

It is useful to apply different types of frameworks and threat models to different kinds of threats. There is a continuum between the sub-disciplines and sub-concerns that make up media and information manipulation, and it is possible to try to put too much under the same umbrella. For instance, the processes and models used to analyze COVID-19 misinformation are very different from those used to tackle covert influence operations, which tend to be state-sponsored, or to understand attacks that are primarily financially motivated and conducted by criminal groups. It is important to think about which models apply in which circumstances, and which disciplines can contribute to which models. The examples below are not a comprehensive selection of threat models and frameworks; they are illustrative.

Fig. 2.4 BEND framework


2.6.1 Carnegie Mellon BEND Framework

Developed by Dr. Kathleen Carley, the BEND framework is a conceptual system that seeks to make sense of influence campaigns and their strategies and tactics. Carley notes that the "BEND framework is the product of years of research on disinformation and other forms of communication based influence campaigns, and communication objectives of Russia and other adversarial communities, including terror groups such as ISIS that began in late December of 2013" (https://link.springer.com/article/10.1007/s10588-020-09322-9). Each letter in the acronym BEND refers to a set of influence maneuvers; "B," for example, stands for the "Back, Build, Bridge, and Boost" maneuvers.

2.6.2 The ABC Framework

Initially developed by Camille François, then Chief Innovation Officer at Graphika, the ABC framework offers a mechanism to understand "three key vectors characteristic of viral deception" in order to guide potential remedies (François 2020). In this framework:
• A is for Manipulative Actors. In François's framework, manipulative actors engage knowingly and with clear intent in their deceptions.
• B is for Deceptive Behavior. Behavior encompasses all of the techniques and tools a manipulative actor may use, from engaging a troll farm or a bot army to producing manipulated media.
• C is for Harmful Content. Ultimately, it is the message itself that defines a campaign and often its efficacy.

Fig. 2.5 ABCDE framework



Later, Alexandre Alaphilippe, Executive Director of the EU DisinfoLab, proposed adding a D for Distribution, to capture the importance of the means of distribution to the overall manipulation strategy (Brookings Inst Alexandre Alaphilippe 2021). Yet another letter joined the acronym courtesy of James Pamment in the fall of 2020, when Pamment proposed an E for evaluating the Effect of the manipulation campaign (Fig. 2.5).

2.6.3 The AMITT Framework

The Misinfosec and Credibility Coalition communities, led by Sara-Jayne Terp, Christopher Walker, John Gray, and Dr. Pablo Breuer, developed the AMITT (Adversarial Misinformation and Influence Tactics and Techniques) frameworks to codify the behaviors (tactics and techniques) of disinformation threats and responders, and to enable rapid alerts, risk assessment, and analysis. AMITT proposes diagramming, and treating equally, both threat actor behaviors and response behaviors by the target (Gray and Terp 2019). The AMITT TTP framework, which catalogs tactics, techniques, and procedures, is the disinformation counterpart of the MITRE ATT&CK model, a standard threat model in information security. The AMITT STIX model is the disinformation counterpart of STIX, an information security data-sharing standard.

2.6.4 The SCOTCH Framework

In order to enable analysts "to comprehensively and rapidly characterize an adversarial operation or campaign," Dr. Sam Blazek developed the SCOTCH framework (Blazek 2021) (Fig. 2.6).
• S—Source—the actor who is responsible for the campaign.
• C—Channel—the platform and its affordances and features that deliver the campaign.
• O—Objective—the objective of the actor.
• T—Target—the intended audience for the campaign.
• C—Composition—the language, content, or message of the campaign.
• H—Hook—the psychological phenomena being exploited by the campaign.

Blazek offers this example of how SCOTCH might describe a campaign: At the campaign level, a hypothetical example of a SCOTCH characterization is: Non-State Actors used Facebook, Twitter, and Low-Quality Media Outlets in an attempt to Undermine the Integrity of the 2020 Presidential Election among Conservative American Audiences using Fake Images and Videos and capturing attention by Hashtag Hijacking and Posting in Shared-Interest Facebook Groups via Cutouts (Fig. 2.7).


Fig. 2.6 AMITT framework

Fig. 2.7 Deception model effects



2.6.5 Deception Model Effects

Still other frameworks are used to explain how disinformation diffuses through a population. Mathematicians have employed game theory to treat information as a quantity that diffuses over networks according to certain dynamics. For instance, Carlo Kopp, Kevin Korb, and Bruce Mills model cooperation and diffusion in populations exposed to "fake news" (Kopp et al. 2018); their model "depicts the relationships between the deception models and the components of system they are employed to compromise."

2.6.6 4Ds

Some frameworks focus on the goals of manipulators. Ben Nimmo, then at the Central European Policy Institute and now at Facebook, where he leads global threat intelligence and strategy against influence operations, developed the 4Ds to describe the tactics of Russian disinformation campaigns (StopFake.org 2021). "They can be summed up in four words: dismiss, distort, distract, dismay," wrote Nimmo (Fig. 2.8).
• Dismiss may involve denying allegations made against Russia or a Russian actor.
• Distort includes outright lies, fabrications, or obfuscations of reality.
• Distract includes methods of turning attention away from one phenomenon to another.
• Dismay is a method of getting others to regard Russia as too significant a threat to confront.

2.6.7 Advanced Persistent Manipulators

Clint Watts, a former FBI counterterrorism official, similarly thinks about the goals of threat actors and describes nation states and other well-resourced manipulators as Advanced Persistent Manipulators (APMs). APMs use combinations of influence techniques and operate across the full spectrum of social media platforms, using the unique attributes of each to achieve their objectives. They have sufficient resources and talent to sustain their campaigns, and the most sophisticated and troublesome ones can create or acquire the most advanced technology. APMs can harness, aggregate, and nimbly parse user data, and they can reconnoiter new adaptive techniques to skirt account and content controls. Finally, they know and operate within terms of service, and they take advantage of free-speech provisions. Adaptive APM behavior will therefore continue to challenge the ability of social media platforms to thwart corrosive manipulation without harming their own business models (Fig. 2.8).


Fig. 2.8 Advanced persistent manipulators framework

2.6.8 Scenarios for Financial Harm

Jon Bateman, a fellow in the Cyber Policy Initiative of the Technology and International Affairs Program at the Carnegie Endowment for International Peace, shows how threat assessments can be applied to specific industries or targets in his work on the threats of media manipulation to the financial system (Carnegie Endowment John Batemant 2020). In the absence of hard data, he argues, a close analysis of potential scenarios can help to better gauge the problem: the paper lays out ten scenarios illustrating how criminals and other bad actors could abuse synthetic media technology to inflict financial harm on a broad swath of targets. Based on today's synthetic media technology and the realities of financial crime, the scenarios explore whether and how synthetic media could alter the threat landscape.

2.7 Investments in Countering False Media

Just as there is substantial effort to theorize and implement threat models for understanding media manipulation, there is a growing effort in civil society, government, and the private sector to contend with disinformation. A large volume of work across multiple disciplines categorizes both the threats and the key indicators of population susceptibility to disinformation, including new work on resilience, the dynamics of exploitative activities, and case studies of incidents and actors, including those not primarily motivated by malicious intent. This literature combines with a growing list of government, civil society, media, and industry interventions aimed at mitigating the harms of disinformation.

Fig. 2.9 Synthetic media scenarios and financial harm (Carnegie Endowment for International Peace JON BATEMAN 2020)

There are now hundreds of groups and organizations in the United States and around the world working on various approaches and problem sets to monitor and counter disinformation in democracies, see Fig. 2.10. This emerging infrastructure presents an opportunity to create connective tissue between these efforts and to provide common tools and platforms that enable their work. Consider three examples:
1. The Disinfo Defense League: A project of the Media Democracy Fund, the DDL is a consortium of more than 200 grassroots organizations across the U.S. that helps mobilize minority communities to counter disinformation campaigns (ABC News Fergal Gallagher 2021).
2. The Beacon Project: Established by the nonprofit IRI, the project combats state-sponsored influence operations, in particular ones from Russia. It identifies and exposes disinformation and works with local partners to facilitate a response (Beacon Project 2021).
3. CogSec Collab: The Cognitive Security Collaborative maps and brings together information security researchers, data scientists, and other subject-matter experts to create and improve resources for the protection and defense of the cognitive domain (Cogsec Collab 2021).

At the same time, there are significant efforts underway to contend with the implications of these developments in synthetic media and the threat vectors they create.

Fig. 2.10 There are now hundreds of civil society, government, and private sector groups addressing disinformation across the globe (Cogsec Collab 2021)


2.7.1 DARPA SEMAFOR

The Semantic Forensics (SemaFor) program of the Defense Advanced Research Projects Agency (DARPA) represents a substantial research and development effort to create forensic tools and techniques that build a semantic understanding of content and identify synthetic media across multiple modalities (DARPA 2021).

2.7.2 The Partnership on AI Steering Committee on Media Integrity Working Group

The Partnership on AI has hosted an industry-led effort to create similar detectors (Humprecht et al. 2020). Technology platforms and universities continue to develop new tools and techniques to determine provenance and to identify synthetic media and other forms of digital media of questionable veracity. The Working Group brings together organizations spanning civil society, technology companies, media organizations, and academic institutions, and is focused on a specific set of activities and projects directed at strengthening the research landscape around new technical capabilities in media production and detection, and at increasing coordination across the organizations implicated by these developments.

2.7.3 JPEG Committee

In October 2020, the JPEG committee released a draft document initiating a discussion to explore "if a JPEG standard can facilitate a secure and reliable annotation of media modifications, both in good faith and malicious usage scenarios" (Joint Bi level Image Experts Group and Joint Photographic Experts Group 2021). The committee noted the need to consider revising standardization practices to address pressing challenges, including the use of AI techniques ("Deep Fakes") and the use of authentic media out of context for the purposes of spreading misinformation and forgeries. The paper notes two areas of focus: (1) keeping a record of modifications of a file within its metadata and (2) ensuring the record is secured and accessible so that provenance is traceable.

2.7.4 Content Authenticity Initiative (CAI)

The CAI was founded in 2019 and is led by Adobe, Twitter, and The New York Times, with the goal of establishing standards that allow better evaluation of content. In August 2020, the forum published its white paper "Setting the Standard for Digital Content Attribution," highlighting the importance of transparency around the provenance of content (Adobe, New York Times, Twitter, The Content Authenticity Initiative 2021). The CAI will "provide a layer of robust, tamper-evident attribution and history data built upon XMP, Schema.org, and other metadata standards that goes far beyond common uses today."
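To give a flavor of what "tamper-evident" attribution means in practice, the sketch below binds an asset's content hash and an edit-history claim to a signature; if either the pixels or the recorded history change, verification fails. This is a simplified illustration using Python's standard library, not the CAI/C2PA specification or metadata format, and the key, file names, and history entries are illustrative.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"   # illustrative; real systems use asymmetric keys

def make_claim(asset_bytes: bytes, history: list[str]) -> dict:
    """Create a provenance claim binding the content hash to an edit history."""
    claim = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "history": history,                       # e.g., ["captured", "cropped", "resized"]
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_claim(asset_bytes: bytes, claim: dict) -> bool:
    """Check that neither the asset nor its recorded history has been altered."""
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, claim["signature"])
            and hashlib.sha256(asset_bytes).hexdigest() == claim["asset_sha256"])

# Illustrative usage:
# photo = open("photo.jpg", "rb").read()
# claim = make_claim(photo, ["captured", "color-corrected"])
# assert verify_claim(photo, claim)
# assert not verify_claim(photo + b"tamper", claim)
```

A production system would use certificate-backed asymmetric signatures embedded in the asset's metadata rather than a shared secret; the sketch only conveys the tamper-evidence idea.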

2.7.5 Media Review

Media Review is an initiative led by the Duke Reporters' Lab to provide the fact-checking community with a schema for evaluating videos and images by tagging different types of manipulation (Schema.org 2021). Media Review is based on the ClaimReview project and is currently a pending schema.

2.8 Excerpts on Susceptibility and Resilience to Media Manipulation

Like other topics in forensics, media manipulation is directly tied to psychology. Though a thorough treatment of this topic is outside the scope of this chapter, we would like to expose the reader to some foundational work in these areas of research. The following section is a collection of critical ideas from different areas of psychological mis/disinformation research. The purpose is to convey powerful ideas in the original authors' own words. We hope that readers will select those of interest and investigate those works further.

2.8.1 Susceptibility and Resilience

Below is a sample of recent studies on various aspects of exploitation via disinformation, summarized and quoted. We categorize these papers by overarching topic, but note that many bear relevant findings across multiple topics.

Resilience to online disinformation: A framework for cross-national comparative research (Humprecht et al. 2020)
This paper develops a cross-country framework for analyzing disinformation effects and population susceptibility and resilience. Three country clusters are identified: media-supportive and politically consensus-driven (e.g., Canada, many Western European democracies); polarized (e.g., Italy, Spain, Greece); and low-trust, politicized, and fragmented (e.g., the United States). Countries in the first, media-supportive cluster are identified as most resilient to disinformation, and the United States is identified as most vulnerable owing to its "high levels of populist communication, polarization, and low levels of trust in the news media."

Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking (Pennycook and Rand 2020)
The tendency to ascribe profundity to randomly generated sentences – pseudo-profound bullshit receptivity – correlates positively with perceptions of fake news accuracy, and negatively with the ability to differentiate between fake and real news (media truth discernment). Relatedly, individuals who overclaim their level of knowledge also judge fake news to be more accurate. The authors also extend previous research indicating that analytic thinking correlates negatively with perceived accuracy by showing that this relationship is not moderated by the presence/absence of the headline's source (which has no effect on accuracy), or by familiarity with the headlines (which correlates positively with perceived accuracy of fake and real news).

Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines (Bago et al. 2020)
The Motivated System 2 Reasoning (MS2R) account posits that deliberation causes people to fall for fake news, because reasoning facilitates identity-protective cognition and is therefore used to rationalize content that is consistent with one's political ideology. The classical account of reasoning instead posits that people ineffectively discern between true and false news headlines when they fail to deliberate (and instead rely on intuition). The authors' data suggest that, in the context of fake news, deliberation facilitates accurate belief formation and not partisan bias.

Political psychology in the digital (mis)information age: A model of news belief and sharing (Van Bavel et al. 2020)
This paper reviews complex psychological risk factors associated with belief in misinformation, such as partisan bias, polarization, political ideology, cognitive style, memory, morality, and emotion. The research analyzes the implications and risks of various solutions, such as fact-checking, "pre-bunking," reflective thinking, deplatforming of bad actors, and media literacy.

Misinformation and morality: Encountering fake-news headlines makes them seem less unethical to publish and share (Effron and Raj 2020)
Experimental results and a pilot study indicate that "repeatedly encountering misinformation makes it seem less unethical to spread – regardless of whether one believes it. Seeing a fake-news headline one or four times reduced how unethical participants thought it was to publish and share that headline when they saw it again – even when it was clearly labelled false and participants disbelieved it, and even after statistically accounting for judgments of how likeable and popular it was. In turn, perceiving it as less unethical predicted stronger inclinations to express approval of it online. People were also more likely to actually share repeated (vs. new) headlines in an experimental setting. We speculate that repeating blatant misinformation may reduce the moral condemnation it receives by making it feel intuitively true, and we discuss other potential mechanisms."

When is Disinformation (In)Credible? Experimental Findings on Message Characteristics and Individual Differences (Schaewitz et al. 2020)
"...we conducted an experiment (N = 294) to investigate effects of message factors and individual differences on individuals' credibility and accuracy perceptions of disinformation as well as on their likelihood of sharing them. Results suggest that message factors, such as the source, inconsistencies, subjectivity, sensationalism, or manipulated images, seem less important for users' evaluations of disinformation articles than individual differences. While need for cognition seems most relevant for accuracy perceptions, the opinion toward the news topic seems most crucial for whether people believe in the news and share it online."

Long-term effectiveness of inoculation against misinformation: Three longitudinal experiments (Maertens et al. 2020)
"In 3 experiments (N = 151, N = 194, N = 170), participants played either Bad News (inoculation group) or Tetris (gamified control group) and rated the reliability of news headlines that either used a misinformation technique or not. We found that participants rate fake news as significantly less reliable after the intervention. In Experiment 1, we assessed participants at regular intervals to explore the longevity of this effect and found that the inoculation effect remains stable for at least 3 months. In Experiment 2, we sought to replicate these findings without regular testing and found significant decay over a 2-month time period so that the long-term inoculation effect was no longer significant. In Experiment 3, we replicated the inoculation effect and investigated whether long-term effects could be due to item-response memorization or the fake-to-real ratio of items presented, but found that this is not the case."

2.8.2 Case Studies: Threats and Actors

Below is a collection of exposed current threats and actors utilizing manipulations of varying sophistication.

The QAnon Conspiracy Theory: A Security Threat in the Making? (Combating Terrorism Center Amarnath Amarasingam, Marc-André Argentino 2021)
Analysis of QAnon shows that, while disorganized, it is a significant public security threat due to historical violent events associated with the group, ease of access for recruitment, and a "crowdsourcing" effect wherein followers "take interpretation and action into their own hands." The origins and development of the group are analyzed, including a review of adjacent organizations such as Omega Kingdom Ministries. Followers' behavior is analyzed and five criminal cases are reviewed.

Political Deepfake Videos Misinform the Public, But No More than Other Fake Media (Barari et al. 2021)
"We demonstrate that political misinformation in the form of videos synthesized by deep learning ("deepfakes") can convince the American public of scandals that never occurred at alarming rates – nearly 50% of a representative sample – but no more so than equivalent misinformation conveyed through existing news formats like textual headlines or audio recordings. Similarly, we confirm that motivated reasoning about the deepfake target's identity (e.g., partisanship or gender) plays a key role in facilitating persuasion, but, again, no more so than via existing news formats. ...Finally, a series of randomized interventions reveal that brief but specific informational treatments about deepfakes only sometimes attenuate deepfakes' effects and in relatively small scale. Above all else, broad literacy in politics and digital technology most strongly increases discernment between deepfakes and authentic videos of political elites."

Deepfakes and disinformation: exploring the impact of synthetic political video on deception, uncertainty, and trust in news (Vaccari and Chadwick 2020)
An experiment using deepfakes to measure their value as a deception tool and their impact on public trust: "While we do not find evidence that deceptive political deepfakes misled our participants, they left many of them uncertain about the truthfulness of their content. And, in turn, we show that uncertainty of this kind results in lower levels of trust in news on social media."

Coordinating a Multi-Platform Disinformation Campaign: Internet Research Agency Activity on Three US Social Media Platforms, 2015–2017 (Lukito 2020)
An analysis of the use of multiple modes as a technique to develop and deploy information attacks. From the abstract: "The following study explores IRA activity on three social media platforms, Facebook, Twitter, and Reddit, to understand how activities on these sites were temporally coordinated. Using a VAR analysis with Granger Causality tests, results show that IRA Reddit activity granger caused IRA Twitter activity within a one-week lag. One explanation may be that the Internet Research Agency is trial ballooning on one platform (i.e., Reddit) to figure out which messages are optimal to distribute on other social media (i.e., Twitter)."

The disconcerting potential of online disinformation: Persuasive effects of astroturfing comments and three strategies for inoculation against them (Zerback et al. 2021)
This paper examines online astroturfing as part of Russian electoral disinformation efforts, finding that online astroturfing was able to change opinions and increase uncertainty even when the audience was "inoculated" against disinformation prior to exposure.

2.8.3 Dynamics of Exploitative Activities

Below are some excerpts from the literature covering the dynamics of IO activities and their relationships to disinformation.

Cultural Convergence: Insights into the behavior of misinformation networks on Twitter (McQuillan et al. 2020)
"We use network mapping to detect accounts creating content surrounding COVID-19, then Latent Dirichlet Allocation to extract topics, and bridging centrality to identify topical and non-topical bridges, before examining the distribution of each topic and bridge over time and applying Jensen-Shannon divergence of topic distributions to show communities that are converging in their topical narratives." The primary topic under discussion within the data was COVID-19, and the report includes a detailed analysis of "cultural bridging" activities among and between communities with conspiratorial views of the pandemic.

An automated pipeline for the discovery of conspiracy and conspiracy theory narrative frameworks: Bridgegate, Pizzagate and storytelling on the web (Tangherlini et al. 2020)
"Predicating our work on narrative theory, we present an automated pipeline for the discovery and description of the generative narrative frameworks of conspiracy theories that circulate on social media, and actual conspiracies reported in the news media. We base this work on two separate comprehensive repositories of blog posts and news articles describing the well-known conspiracy theory Pizzagate from 2016, and the New Jersey political conspiracy Bridgegate from 2013. ...We show how the Pizzagate framework relies on the conspiracy theorists' interpretation of "hidden knowledge" to link otherwise unlinked domains of human interaction, and hypothesize that this multi-domain focus is an important feature of conspiracy theories. We contrast this to the single domain focus of an actual conspiracy. While Pizzagate relies on the alignment of multiple domains, Bridgegate remains firmly rooted in the single domain of New Jersey politics. We hypothesize that the narrative framework of a conspiracy theory might stabilize quickly in contrast to the narrative framework of an actual conspiracy, which might develop more slowly as revelations come to light."

Fake news early detection: A theory-driven model (Zhou et al. 2020)
"The method investigates news content at various levels: lexicon-level, syntax-level, semantic-level, and discourse-level. We represent news at each level, relying on well-established theories in social and forensic psychology. Fake news detection is then conducted within a supervised machine learning framework. As an interdisciplinary research, our work explores potential fake news patterns, enhances the interpretability in fake news feature engineering, and studies the relationships among fake news, deception/disinformation, and clickbaits. Experiments conducted on two real-world datasets indicate the proposed method can outperform the state-of-the-art and enable fake news early detection when there is limited content information." "...Among content-based models, we observe that (3) the proposed model performs comparatively well in predicting fake news with limited prior knowledge. We also observe that (4) similar to deception, fake news differs in content style, quality, and sentiment from the truth, while it carries similar levels of cognitive and perceptual information compared to the truth. (5) Similar to clickbaits, fake news headlines present higher sensationalism and lower newsworthiness while their readability characteristics are complex and difficult to be directly concluded. In addition, fake news (6) is often matched with shorter words and longer sentences."

A survey of fake news: Fundamental theories, detection methods, and opportunities (Zhou and Zafarani 2020)
"By involving dissemination information (i.e., social context) in fake news detection, propagation-based methods are more robust against writing style manipulation by malicious entities. However, propagation-based fake news detection is inefficient for fake news early detection ... as it is difficult for propagation-based models to detect fake news before it has been disseminated, or to perform well when limited news dissemination information is available. Furthermore, mining news propagation and news writing style allow one to assess news intention. As discussed, the intuition is that (1) news created with a malicious intent, that is, to mislead and deceive the public, aims to be "more persuasive" compared to those not having such aims, and (2) malicious users often play a part in the propagation of fake news to enhance its social influence." This paper also emphasizes the need for explainable detection as a matter of public interest.

A Signal Detection Approach to Understanding the Identification of Fake News (Batailler et al. 2020)
"The current article discusses the value of Signal Detection Theory (SDT) in disentangling two distinct aspects in the identification of fake news: (1) ability to accurately distinguish between real news and fake news and (2) response biases to judge news as real versus fake regardless of news veracity. The value of SDT for understanding the determinants of fake news beliefs is illustrated with reanalyses of existing data sets, providing more nuanced insights into how partisan bias, cognitive reflection, and prior exposure influence the identification of fake news."

Sharing of fake news on social media: Application of the honeycomb framework and the third-person effect hypothesis (Talwar et al. 2020)
This publication uses the "honeycomb framework" of Kietzmann, Hermkens, McCarthy, and Silvestre (Kietzmann et al. 2011) to explain sharing behaviors related to fake news. Qualitative evaluation supports the explanatory value of the honeycomb framework, and the authors identify a number of quantitative associations between demographics and sharing characteristics.

Modeling echo chambers and polarization dynamics in social networks (Baumann et al. 2020)
"We propose a model that introduces the dynamics of radicalization, as a reinforcing mechanism driving the evolution to extreme opinions from moderate initial conditions. Inspired by empirical findings on social interaction dynamics, we consider agents characterized by heterogeneous activities and homophily. We show that the transition between a global consensus and emerging radicalized states is mostly governed by social influence and by the controversialness of the topic discussed. Compared with empirical data of polarized debates on Twitter, the model qualitatively reproduces the observed relation between users' engagement and opinions, as well as opinion segregation in the interaction network. Our findings shed light on the mechanisms that may lie at the core of the emergence of echo chambers and polarization in social media."

(Mis)representing Ideology on Twitter: How Social Influence Shapes Online Political Expression (Guess 2021)
Comparing users' tweets, and those of the accounts they follow, with self-reported political affiliations from surveys reveals a distinction between self-reported political slant and the views expressed in tweets. This raises the notion that public political expression may be distinct from individual affiliations, and that an individual's tweets may not be an accurate proxy for their beliefs. The authors call for a re-examination of the potential social causes of, and influences upon, public political expression. From the abstract: "we find evidence consistent with the hypothesis that users' public expression is powerfully shaped by their followers, independent of the political ideology they report identifying with in attitudinal surveys. Finally, we find that users' ideological expression is more extreme when they perceive their Twitter networks to be relatively like-minded."

2.8.4 Meta-Review

A modern review of the literature surrounding disinformation may be found below.

A systematic literature review on disinformation: Toward a unified taxonomical framework (Kapantai et al. 2021)
"Our online information landscape is characterized by a variety of different types of false information. There is no commonly agreed typology framework, specific categorization criteria, and explicit definitions as a basis to assist the further investigation of the area. Our work is focused on filling this need. Our contribution is twofold. First, we collect the various implicit and explicit disinformation typologies proposed by scholars. We consolidate the findings following certain design principles to articulate an all-inclusive disinformation typology. Second, we propose three independent dimensions with controlled values per dimension as categorization criteria for all types of disinformation. The taxonomy can promote and support further multidisciplinary research to analyze the special characteristics of the identified disinformation types."

2.9 Conclusion

While media forensics as a field is generally occupied by engineers and computer scientists, it intersects with a variety of other disciplines, and it is crucial to people's ability to form consensus and collaboratively solve problems, from climate change to global health. John Locke, an English philosopher and influential thinker on empiricism and the Enlightenment, once wrote that "the improvement of understanding is for two ends: first, our own increase of knowledge; secondly, to enable us to deliver that knowledge to others." Indeed, your study of media forensics takes Locke's formulation a step further. The effort you put into understanding this field will not only increase your own knowledge; it will allow you to build an information ecosystem that delivers that knowledge to others—the knowledge necessary to fuel human discourse. The work is urgent.


Acknowledgements This chapter relies on contributions from Sam Blazek, Adi Cohen, Stef Daley, Pawel Korus, and Clint Watts.

References ABC News Fergal Gallagher. Minority communities fighting back against disinformation ahead of election. https://www.goodmorningamerica.com/news/story/minority-communities-fightingback-disinformation-ahead-election-73794172 Adobe, New York Times, Twitter, The Content Authenticity Initiative. The content authenticity initiative. https://documentcloud.adobe.com/link/track?uri=urn:aaid:scds:US:2c6361d5b8da-4aca-89bd-1ed66cd22d19 Anthropics Inc. PortraitPro. https://www.anthropics.com/portraitpro/ Associated Press, Experts: spy used AI-generated face to connect with targets. https://apnews.com/ article/bc2f19097a4c4fffaa00de6770b8a60d Atlantic Council (2020) The long fuse: Misinformation and the 2020 election. https://www. atlanticcouncil.org/event/the-long-fuse-eip-report/ Bago B, Rand DG, Pennycook G (2020) Fake news, fast and slow: deliberation reduces belief in false (but not true) news headlines. J Exp Psychol Gen 149(8):1608–1613 Bak-Coleman JB, Alfano M, Barfuss W, Bergstrom CT, Centeno MA, Couzin ID, Donges JF, Galesic M, Gersick AS, Jacquet J, Kao AB, Moran RE, Romanczuk P, Rubenstein DI, Tombak KJ, Van Bavel JJ, Weber EU (2021) Stewardship of global collective behavior. Proc Natl Acad Sci 118(27) Barari S, Lucas C, Munger K (2021) Political deepfakes are as credible as other fake media and (sometimes) real media Batailler C, Brannon S, Teas P, Gawronski B (2020) A signal detection approach to understanding the identification of fake news. Perspect Psychol Sci 10 Baumann F, Lorenz-Spreen P, Sokolov IM, Starnini M (2020) Modeling echo chambers and polarization dynamics in social networks. Phys Rev Lett 124:048301 BBC Staff. Australia demands China apologise for posting ‘repugnant’ fake image. https://www. bbc.com/news/world-australia-55126569 Beacon Project. The Beacon Project. https://www.iribeaconproject.org/who-we-are/mission Blazek S, SCOTCH: a framework for rapidly assessing influence operations. https://www. atlanticcouncil.org/blogs/geotech-cues/scotch-a-framework-for-rapidly-assessing-influenceoperations/ Brookings Inst Alexandre Alaphilippe. Adding a ‘D’ to the ABC disinformation framework . https:// www.brookings.edu/techstream/adding-a-d-to-the-abc-disinformation-framework/ Brown TB, Mann B, Ryder N, Subbiah M, Kaplan J, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, Agarwal S, Herbert-Voss A, Krueger G, Henighan T, Child R, Ramesh A, Ziegler DM, Wu J, Winter C, Hesse C, Chen M, Sigler E, Litwin M, Gray S, Chess B, Clark J, Berner C, McCandlish S, Radford A, Sutskever I, Amodei D (2020) Language models are few-shot learners Carnegie Endowment for International Peace JON BATEMAN. Deepfakes and synthetic media in the financial system: assessing threat scenarios. https://carnegieendowment.org/2020/07/08/ deepfakes-and-synthetic-media-in-financial-system-assessing-threat-scenarios-pub-82237 Carnegie Endowment John Batemant (2020) Deepfakes and synthetic media in the financial system: assessing threat scenarios. https://carnegieendowment.org/2020/07/08/deepfakes-andsynthetic-media-in-financial-system-assessing-threat-scenarios-pub-82237 Chen M, Radford A, Child R, Wu J, Jun H, Luan D, Sutskever I (2020) Generative pretraining from pixels. In: Daumé III H, Singh A (eds) Proceedings of the 37th international conference on machine learning, vol 119. Proceedings of machine learning research, pp 1691–1703. PMLR, 13–18 Chen X, Zhang Z, Sui Y, Chen T (2021) Gans can play lottery tickets too


CNN Clarissa Ward et al (2020) Russian election meddling is back – via Ghana and Nigeria – and in your feeds. https://www.cnn.com/2020/03/12/world/russia-ghana-troll-farms-2020ward/index.html Cogsec Collab, Global disinformation groups. https://datastudio.google.com/u/0/reporting/ a8491164-6aa8-45d0-b609-c70339689127/page/ierzB Cogsec Collab. Cogsec Collab. https://cogsec-collab.org/ Combating Terrorism Center Amarnath Amarasingam, Marc-André Argentino. The QAnon conspiracy theory: a security threat in the making? https://ctc.usma.edu/the-qanon-conspiracy-theorya-security-threat-in-the-making/ DARPA. DARPA SEMAFOR. https://www.darpa.mil/news-events/2021-03-02 Devlin J, Chang M-W, Lee K, Toutanova K (2019) Bert: pre-training of deep bidirectional transformers for language understanding Donahue J, Dieleman S, Bi´nkowski M, Elsen E, Simonyan K (2021) End-to-end adversarial textto-speech Effron DA, Raj M (2020) Misinformation and morality: encountering fake-news headlines makes them seem less unethical to publish and share. Psychol Sci 31(1):75–87 PMID: 31751517 Fedus W, Zoph B, Shazeer N (2021) Switch transformers: scaling to trillion parameter models with simple and efficient sparsity François C (2020) Actors, behaviors, content: a disinformation abc. Algorithms Frankle J, Carbin M (2019) The lottery ticket hypothesis: finding sparse, trainable neural networks Generated Photos Inc. Generated photos. https://generated.photos/ GMFUS Clint Watts. Advanced persistent manipulators, part one: the threat to the social media industry. https://securingdemocracy.gmfus.org/advanced-persistent-manipulators-part-one-thethreat-to-the-social-media-industry/ Graphika and The Stanford Internet Observatory. More-troll kombat. https://public-assets.graphika. com/reports/graphika_stanford_report_more_troll_kombat.pdf Gray JF, Terp S-J (2019) Misinformation: We’re Four Steps Behind Its Creators. https://cyber. harvard.edu/sites/default/files/2019-11/Comparative%20Approaches%20to%20Disinformation %20-%20John%20Gray%20Abstract.pdf Grossman S, Ramali LK, Outsourcing disinformation. https://www.lawfareblog.com/outsourcingdisinformation Guess AM, (Mis)representing ideology on Twitter: how social influence shapesonline political expression. https://www.uzh.ch/cmsssl/ikmz/dam/jcr:995dbede-c863-4931-9ba8bc0722b6cb59/20201116_guess.pdf Hao K (2021) AI voice actors sound more human than ever—and they’re ready to hire. https:// www.technologyreview.com/2021/07/09/1028140/ai-voice-actors-sound-human/ Hill K, White NYTMJ (2020) Designed to deceive: do these people look real to you? https://www. nytimes.com/interactive/2020/11/21/science/artificial-intelligence-fake-people-faces.html Humprecht E, Esser F, Van Aelst P (2020) Resilience to online disinformation: a framework for cross-national comparative research. Int J Press/Polit 25(3):493–516 Humprecht E, Esser F, Van Aelst P (2020) Resilience to online disinformation: a framework for cross-national comparative research. Int J Press/Polit 25(3):493–516 Jacobs RA, Jordan MI, Nowlan SJ, Hinton GE (1991) Adaptive mixtures of local experts. Neural Comput 3(1):79–87 Joint Bi level Image Experts Group and Joint Photographic Experts Group. JPEG fake media: context use cases and requirements. http://ds.jpeg.org/documents/wg1n89043-REQ-JPEG_Fake_ Media_Context_Use_Cases_and_Requirements_v0_1.pdf Jordan MI, Jacobs RA (1993) Hierarchical mixtures of experts and the em algorithm. 
In: Proceedings of 1993 international conference on neural networks (IJCNN-93-Nagoya, Japan), vol 2, pp 1339– 1344 Kapantai E, Christopoulou A, Berberidis C, Peristeras V (2021) A systematic literature review on disinformation: toward a unified taxonomical framework. New Media Soc 23(5):1301–1326


Kaplan J, McCandlish S, Henighan T, Brown TB, Chess B, Child R, Gray S, Radford A, Wu J, Amodei D (2020) Scaling laws for neural language models Kietzmann JH, Hermkens K, McCarthy IP, Silvestre BS (2011) Social media? get serious! understanding the functional building blocks of social media. Bus Horiz 54(3):241–251. SPECIAL ISSUE: SOCIAL MEDIA Koltai K (2020) Vaccine information seeking and sharing: how private facebook groups contributed to the anti-vaccine movement online. IN: AoIR selected papers of internet research, 2020, Oct Kopp C, Korb KB, Mills BI (2018) Information-theoretic models of deception: modelling cooperation and diffusion in populations exposed to “fake news”. PLOS ONE 13(11):1–35 Łubi´nski P (2020) Social media incitement to genocide (in:) the concept of genocide in international criminal law developments after Lemkin, p 306 Lukito J (2020) Coordinating a multi-platform disinformation campaign: internet research agency activity on three u.s. social media platforms, 2015 to 2017. Polit Commun 37(2):238–255 Luong H-T, Yamagishi J (2020) Nautilus: a versatile voice cloning system Maertens R, Roozenbeek J, Basol M, van der Linden S (2020) Long-term effectiveness of inoculation against misinformation: three longitudinal experiments. J Exp Psychol Appl 27:10 McQuillan L, McAweeney E, Bargar A, Ruch A (2020) Insights into the behavior of misinformation networks on twitter, Cultural convergence Mentzer F, Toderici G, Tschannen M, Agustsson E (2020) High-fidelity generative image compression Narayanan V, Barash V, Kelly J, Kollanyi B, Neudert L-M, Howard PN (2018) Polarization, partisanship and junk news consumption over social media in the us Nimmo B, François C, Shawn Eib C, Ronzaud L, Graphika, IRA again: unlucky thirteen. https:// public-assets.graphika.com/reports/graphika_report_ira_again_unlucky_thirteen.pdf Nimmo B, Shawn Eib C, Tamora L, Graphika, Cross-Platform Spam Network Targeted Hong Kong Protests. https://public-assets.graphika.com/reports/graphika_report_spamouflage.pdf Nimmo B, Shawn Eib C, Tamora L, Graphika, Spamouflage goes to America. https://public-assets. graphika.com/reports/graphika_report_spamouflage_goes_to_america.pdf Nisos, The rise of synthetic audio deepfakes. https://www.nisos.com/technical-blogs/rise_ synthetic_audio_deepfakes Nvidia Inc (2020) Taking it to the MAX: adobe photoshop gets new NVIDIA AI-powered neural filters. https://blogs.nvidia.com/blog/2020/10/20/adobe-max-ai/ Nvidia Inc. Imaginaire. https://github.com/NVlabs/imaginaire Nvidia Inc. Nvidia Maxine. https://developer.nvidia.com/MAXINE Park E, Yoo S (2020) Profit: a novel training method for sub-4-bit mobilenet models Paul RK, Twitter suspends accounts linked to Saudi spying case. https://www.reuters.com/article/ us-twitter-saudi-idUSKBN1YO1JT Pennycook G, Rand DG (2020) Who falls for fake news? the roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. J Personal 88(2):185–200 Qin C, Wu Y, Springenberg JT, Brock A, Donahue J, Lillicrap TP, Kohli P (2020) Training generative adversarial networks by solving ordinary differential equations Radford A, Kim JW, Hallacy C, Ramesh A, Goh G, Agarwal S, Sastry G, Askell A, Mishkin P, Clark J, Krueger G, Sutskever I (2021) Learning transferable visual models from natural language supervision Radford A, Narasimhan K (2018) Improving language understanding by generative pre-training Ramesh A, Pavlov M, Goh G, Gray S, Voss C, Radford A, Chen M, Sutskever I (2021) Zero-shot text-to-image generation Reuters Staff. 
Fact check: Clip of Biden taken out of context to portray him as plotting a voter fraud scheme. https://www.reuters.com/article/uk-fact-check-biden-voter-protection-not/factcheck-clip-of-biden-taken-out-of-context-to-portray-him-as-plotting-a-voter-fraud-schemeidUSKBN27E2VH

2 Media Forensics in the Age of Disinformation

39

Reuters Staff. Fact check: Donald Trump concession video not a ‘confirmed deepfake’. https://www.reuters.com/article/uk-factcheck-trump-consession-video-deep/fact-checkdonald-trump-concession-video-not-a-confirmed-deepfake-idUSKBN29G2NL Reuters Staff. Fact check: Video does not show Biden saying ‘Hello Minnesota’ in Florida rally. https://www.reuters.com/article/uk-factcheck-altered-sign-biden-mn/fact-checkvideo-does-not-show-biden-saying-hello-minnesota-in-florida-rally-idUSKBN27H1RZ Rosebud AI Inc. Rosebud AI. https://www.rosebud.ai/ RunwayML Inc. RunwayML. https://runwayml.com/ Schaewitz L, Kluck JP, Klösters L, Krämer NC (2020) When is disinformation (in)credible? experimental findings on message characteristics and individual differences. Mass Commun Soc 23(4):484–509 Schema.org. Media review. https://schema.org/MediaReview Shazeer N, Cheng Y, Parmar N, Tran D, Vaswani A, Koanantakool P, Hawkins P, Lee H, Hong M, Young C, Sepassi R, Hechtman B (2018) Mesh-tensorflow: deep learning for supercomputers Skylum Inc. Luminar. https://skylum.com/luminar StopFake.org. Anatomy of an info-war: how Russia’s propaganda machine works, and how to counter it. https://www.stopfake.org/en/anatomy-of-an-info-war-how-russia-s-propagandamachine-works-and-how-to-counter-it/ Talwar S, Dhir A, Singh D, Virk GS, Salo J (2020) Sharing of fake news on social media: Application of the honeycomb framework and the third-person effect hypothesis. J Retail Consum Serv 57:102197 Tangherlini TR, Shahsavari S, Shahbazi B, Ebrahimzadeh E, Roychowdhury V (2020) An automated pipeline for the discovery of conspiracy and conspiracy theory narrative frameworks: bridgegate, pizzagate and storytelling on the web. PLOS ONE 15(6):1–39 Techcrunch Taylor Hatmaker. Chinese propaganda network on Facebook used AI-generated faces. https://techcrunch.com/2020/09/22/facebook-gans-takes-down-networks-of-fake-accountsoriginating-in-china-and-the-philippines/ The Associated Press ERIKA KINETZ. Army of fake fans boosts China’s messaging on Twitter. https://apnews.com/article/asia-pacific-china-europe-middle-east-government-andpolitics-62b13895aa6665ae4d887dcc8d196dfc The Economist. Global democracy has a very bad year. https://www.economist.com/graphic-detail/ 2021/02/02/global-democracy-has-a-very-bad-year The Next Web Thomas Macaulay. Someone let a GPT-3 bot loose on Reddit – it didn’t end well. https://thenextweb.com/neural/2020/10/07/someone-let-a-gpt-3-bot-loose-on-redditit-didnt-end-well/ The Next Web Tristan Greene. GPT-3 is the world’s most powerful bigotry generator. What should we do about it? https://thenextweb.com/neural/2021/01/19/gpt-3-is-the-worlds-most-powerfulbigotry-generator-what-should-we-do-about-it/ The Washington Post Drew Harwell, An artificial-intelligence first: voice-mimicking software reportedly used in a major theft. https://www.washingtonpost.com/technology/2019/09/04/anartificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/ Times of India Anam Ajmal. 1st in India: Twitter tags BJP IT cell chief’s tweet as ‘manipulated media’. https://timesofindia.indiatimes.com/india/1st-in-india-twitter-tags-bjp-itcell-chiefs-tweet-as-manipulated-media/articleshow/79538441.cms Twitter Inc. New disclosures to our archive of state-backed information operations. 
https:// blog.twitter.com/en_us/topics/company/2019/new-disclosures-to-our-archive-of-state-backedinformation-operations.html Vaccari C, Chadwick A (2020) Deepfakes and disinformation: exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Soc Media + Soc 6(1):2056305120903408 Van Bavel JJ, Harris EA, Pärnamets P, Rathje S, Doell K, Tucker JA (2020) Political psychology in the digital (mis)information age: a model of news belief and sharing

40

J. Hendrix and D. Morozoff

Wall Street Journal Catherine Stupp. Fraudsters used AI to mimic CEO’s voice in unusual cybercrime case. https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusualcybercrime-case-11567157402 Wang N, Choi J, Brand D, Chen C-Y, Gopalakrishnan K (2018) Training deep neural networks with 8-bit floating point numbers Washington Post David Weigel. Twitter flags GOP video after activist’s computerized voice was manipulated. https://www.washingtonpost.com/politics/2020/08/30/ady-barkan-scalisetwitter-video/ Zerback T, Töpfl F, Knöpfle M (2021) The disconcerting potential of online disinformation: persuasive effects of astroturfing comments and three strategies for inoculation against them. New Media Soc 23(5):1080–1098 Zhang E, Martin-Brualla R, Kontkanen J, Curless B (2020) No shadow left behind: removing objects and their shadows using approximate lighting and geometry Zhou X, Jain A, Phoha VV, Zafarani R (2020) Fake news early detection: a theory-driven model. Digit Threat Res Pract 1(2) Zhou X, Zafarani R (2020) A survey of fake news: fundamental theories, detection methods, and opportunities. ACM Comput Surv 53(5)

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Chapter 3

Computational Imaging

Scott McCloskey

Since the advent of smartphones, photography is increasingly being done with small, portable, multi-function devices. Relative to the purpose-built cameras that dominated previous eras, smartphone cameras must overcome challenges related to their small form factor. Smartphone cameras have small apertures that produce a wide depth of field, small sensors with rolling shutters that lead to motion artifacts, and small form factors which lead to more camera shake during exposure. Along with these challenges, smartphone cameras have the advantage of tight integration with additional sensors and the availability of significant computational resources. For these reasons, the field of computational imaging has advanced significantly in recent years, with academic groups and researchers from smartphone manufacturers helping these devices become more capable replacements for purpose-built cameras.

3.1 Introduction to Computational Imaging

Computational imaging (or computational photography) approaches are characterized by the co-optimization of what and how the sensor captures light and how that signal is processed in software. Computational imaging approaches now commonly available on most smartphones include panoramic stitching, multi-frame high dynamic range (HDR) imaging, 'portrait mode' for low depth of field, and multi-frame low-light imaging ('night sight' or 'night mode'). As image quality is viewed as a differentiating feature between competing smartphone brands, there has been tremendous progress improving subjective image quality, accompanied by a lack of transparency due to the proprietary nature of the work.


Like the smartphone market more generally, computational imaging approaches used therein change very quickly from one generation of the phone to the next. This combination of different modes, and the proprietary and rapid nature of changes, all pose challenges for forensics practitioners. This chapter investigates some of the forensics challenges that arise from the increased adoption of computational imaging and assesses our ability to detect automated focus manipulations performed by computational cameras. We then look more generally at a cue to help distinguish optical blur from synthetic blur. The chapter concludes with a look at some early computational imaging research that may impact forensics in the near future.

This chapter will not focus on the definitional and policy issues related to computational imagery, believing that these issues are best addressed within the context of a specific forensic application. The use of an automated, aesthetically driven computational imaging mode does not necessarily imply nefarious intent on the part of the photographer in all cases, but may in some. This suggests that broad-scale screening for manipulated imagery (e.g., on social media platforms) might not target portrait mode images for detection and further scrutiny, but in the context of an insurance claim or a court proceeding, higher scrutiny may be necessary.

As with the increase in the number, complexity, and ease of use of software packages for image manipulation, a key forensic challenge presented by computational imaging is the degree to which it democratizes the creation of manipulated media. Prior to these developments, the creation of a convincing manipulation required a knowledgeable user dedicating a significant amount of time and effort with only partially automated tools. Much like Kodak cameras greatly simplified consumer photography with the motto "You Press the Button, We Do the Rest," computational cameras allow users to create, with the push of a button, imagery that's inconsistent with the classically understood physical limitations of the camera. Realizing that most people won't carry around a heavy camera with a large lens, a common goal of computational photography is to replicate the aesthetics of Digital Single Lens Reflex (DSLR) imagery using the much smaller and lighter sensors and optics used in mobile phones. From this, one of the significant achievements of computational imaging is the ability to replicate the shallow depth of field of a large aperture DSLR lens on a smartphone with a tiny aperture.

3.2 Automation of Geometrically Correct Synthetic Blur

Optical blur is a perceptual cue to depth (Pentland 1987), a limiting factor in the performance of computer vision systems (Bourlai 2016), and an aesthetic tool that photographers use to separate foreground and background parts of a scene. Because smartphone sensors and apertures are so small, images taken by mobile phones often appear to be in focus everywhere, including background elements that distract a viewer's attention. To draw viewers' perceptual attention to a particular object and avoid distractions from background objects, smartphones now include 'portrait mode', which automatically manipulates the sensor image via the application of spatially varying blur.


Fig. 3.1 (Left) For a close-in scene captured by a smartphone with a small aperture, all parts of the image will appear in focus. (Right) In order to mimic the aesthetically pleasing properties of a DSLR with a wide aperture, computational cameras now include a ‘portrait mode’ that blurs the background so the foreground stands out better

mode’ which automatically manipulates the sensor image via the application of spatially varying blur. Figure 3.1 shows an example of the all-in-focus natively captured by the sensor and the resulting portrait mode image. Because there is a geometric relationship between optical blur and depth in the scene, the creation of geometrically correct synthetic blur requires knowledge of the 3D scene structure. Consistent with the spirit of computational cameras, portrait modes enable this with a combination of hardware and software. Hardware acquires the scene in 3D using either stereo cameras or sensors with the angular sensitivity of a light field camera (Ng 2006). Image processing software uses this 3D information to infer and apply the correct amount of blur for each part of the scene. The result is automated, geometrically correct optical blur that looks very convincing. A natural question, then, is whether or not we can accurately differentiate ‘portrait mode’-based imagery from genuine optical blur.

3.2.1 Primary Cue: Image Noise

Regardless of how the local blurring is implemented, the key difference between optical blur and portrait mode-type processing can be found in image noise. When blur happens optically, before photons reach the sensor, only small signal-dependent noise impacts are observed. When blur is applied algorithmically to an already digitized image, however, the smoothing or filtering operation also implicitly de-noises the image. Since the amount of de-noising is proportional to the amount of local smoothing or blurring, differences in the amount of algorithmic local blur can be detected via inconsistencies between the local intensity and noise level. Two regions of the image having approximately the same intensity should also have approximately the same level of noise.


If one region is blurred more than the other, or one is blurred while the other is not, an inconsistency is introduced between the intensities and local noise levels.

For our noise analysis, we extend the combined noise models of Tsin et al. (2001) and Liu et al. (2006). Ideally, a pixel produces a number of electrons E_num proportional to the average irradiance from the object being imaged. However, shot noise N_S is a result of the quantum nature of light and captures the uncertainty in the number of electrons stored at a collection site; N_S can be modeled as Poisson noise. Additionally, site-to-site non-uniformities called fixed pattern noise K are a multiplicative factor impacting the number of electrons; K can be characterized as having mean 1 and a small spatial variance σ²_K over all of the collection sites. Thermal energy in silicon generates free electrons which contribute dark current to the image; this is modeled as an additive factor N_DC, modeled as Gaussian noise. The on-chip output amplifier sequentially transforms the charge collected at each site into a measurable voltage with a scale A, and the amplifier generates zero-mean read-out noise N_R with variance σ²_R. Demosaicing is applied in color cameras to interpolate two of the three colors at each pixel, and it introduces an error that is sometimes modeled as noise. After this, the camera response function (CRF) f(·) maps this voltage via a non-linear transform to improve perceptual image quality. Lastly, the analog-to-digital converter (ADC) approximates the analog voltage as an integer multiple of a quantization step q. The quantization noise can be modeled as the addition of a noise source N_Q. With these noise sources in mind, we can describe a digitized 2D image as follows:

\[
D(x, y) = f\bigl( \bigl( K(x, y)\, E_{num}(x, y) + N_{DC}(x, y) + N_S(x, y) + N_R(x, y) \bigr) A \bigr) + N_Q(x, y)
\tag{3.1}
\]

The variance of the noise is given by

\[
\sigma_N^2(x, y) = f'^2 A^2 \bigl( K(x, y)\, E_{num}(x, y) + E[N_{DC}(x, y)] + \sigma_R^2 \bigr) + \frac{q^2}{12}
\tag{3.2}
\]

where E[·] is the expectation function. This equation tells us two things which are typically overlooked in the more simplistic model of noise as an additive Gaussian source:

1. The noise variance's relationship with intensity reveals the shape of the CRF's derivative f'.
2. Noise has a signal-dependent aspect to it, as evidenced by the E_num term.

An important corollary to this is that different levels of noise in regions of an image having different intensities is not per se an indicator of manipulation, though it has been taken as one in past work (Mahdian and Saic 2009). We show in our experiments that, while the noise inconsistency cue from Mahdian and Saic (2009) has some predictive power in detecting manipulations, a proper accounting for signal-dependent noise via its relationship with image intensity significantly improves accuracy.
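To make the signal dependence of Eq. (3.2) concrete, the following minimal numpy sketch simulates the imaging model of Eq. (3.1) for flat patches at a few exposure levels; the gamma-type CRF and all noise parameters are illustrative assumptions rather than a calibration of any real camera:

```python
import numpy as np

rng = np.random.default_rng(0)

def capture(e_num, A=1.0 / 4096, gamma=2.2, sigma_k=0.01, dark=2.0,
            sigma_r=1.5, q=1.0 / 255):
    """Digitize a map of expected electron counts following Eq. (3.1)."""
    K = 1.0 + sigma_k * rng.standard_normal(e_num.shape)   # fixed pattern noise
    n_s = rng.poisson(e_num) - e_num                        # shot noise (zero mean)
    n_dc = rng.normal(dark, np.sqrt(dark), e_num.shape)     # dark current
    n_r = rng.normal(0.0, sigma_r, e_num.shape)             # read-out noise
    v = (K * e_num + n_dc + n_s + n_r) * A                  # amplifier scaling
    D = np.clip(v, 0.0, 1.0) ** (1.0 / gamma)               # assumed CRF f(.)
    return np.round(D / q) * q                              # ADC quantization

# The measured noise is signal dependent: shot noise grows with exposure while
# the local slope of the CRF reshapes it, exactly as captured by Eq. (3.2).
for electrons in (200.0, 800.0, 2400.0):
    patch = capture(np.full((256, 256), electrons))
    print(f"mean intensity {patch.mean():.3f}   noise std {patch.std():.4f}")
```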


Measuring noise in an image is, of course, ill-posed and is equivalent to the long-standing image de-noising problem. For this reason, we leverage three different approximations of local noise, measured over approximately uniform image regions: intensity variance, intensity gradient magnitude, and the noise feature of Mahdian and Saic (2009) (abbreviated NOI). Each of these is related to the image intensity of the corresponding region via a 2D histogram. This step translates subtle statistical relationships in the image to shape features in the 2D histograms which can be classified by a neural network. As we show in the experiments, our detection performance on histogram features significantly improves on that of popular approaches applied directly to the pixels of the image.
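As a sketch of how such intensity-conditioned noise statistics can be collected, the block-based helper below (a hypothetical function; block size, uniformity test, and bin ranges are arbitrary choices, and superpixels would replace the square blocks in practice) builds one such bivariate histogram from a grayscale image:

```python
import numpy as np

def intensity_noise_histogram(img, block=16, bins=(32, 32)):
    """Bivariate histogram of mean intensity vs. a local noise proxy.

    img: 2D grayscale array in [0, 1]. The proxy used here is the per-block
    standard deviation; gradient magnitude or the NOI feature of Mahdian and
    Saic (2009) could be substituted in the same way.
    """
    h, w = img.shape
    means, noises = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = img[y:y + block, x:x + block]
            # Restrict to approximately uniform regions so the statistic
            # reflects noise rather than scene texture.
            if patch.max() - patch.min() < 0.1:
                means.append(patch.mean())
                noises.append(patch.std())
    hist, _, _ = np.histogram2d(means, noises, bins=bins,
                                range=[[0, 1], [0, 0.1]])
    return hist

# Regions blurred in software show lower noise at the same intensity, which
# shifts mass in this histogram relative to an authentic image.
```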

3.2.2 Additional Photo Forensic Cues

One of the key challenges in forensic analysis of images 'in the wild' is that compression and other post-processing may overwhelm subtle forgery cues. Indeed, noise features are inherently sensitive to compression which, like blur, smooths the image. In order to improve detection performance in such challenging cases, we incorporate additional forensic cues which improve our method's robustness. Some portrait mode implementations appear to operate on a JPEG image as input, meaning that the outputs exhibit cues related to double JPEG compression. As such, there is a range of different cues that can reveal manipulations in a subset of the data.

3.2.2.1 Demosaicing Artifacts

Forensic researchers have shown that differences between demosaicing algorithms and differences between the physical color filter arrays bonded to the sensor can be detected from the image. Since focus manipulations are applied on the demosaiced images, the local smoothing operations will alter these subtle Color Filter Array (CFA) demosaicing artifacts. In particular, the lack of CFA artifacts or the detection of weak, spatially varying CFA artifacts indicates the presence of global or local tampering, respectively. Following the method of Dirik and Memon (2009), we take the demosaicing scheme f_d to be bilinear interpolation. We divide the image into W × W sub-blocks and only compute the demosaicing feature at the non-smooth blocks of pixels. Denote each non-smooth block as B_i, where i = 1, ..., m_B, and m_B is the number of non-smooth blocks in the image. The re-interpolation of the i-th sub-block with the k-th CFA pattern θ_k is B̂_{i,k} = f_d(B_i, θ_k), for k = 1, ..., 4. The MSE error matrix E_i(k, c), c ∈ {R, G, B}, between the blocks B_i and B̂_{i,k} is computed in non-smooth regions all over the image. Therefore, we define the metric estimating the uniformity of the normalized green-channel errors as


\[
E_i(k, c) = \frac{1}{W \times W} \sum_{x=1}^{W} \sum_{y=1}^{W} \bigl( B_i(x, y, c) - \hat{B}_{i,k}(x, y, c) \bigr)^2
\]
\[
E_i^{(2)}(k, c) = 100 \times \frac{E_i(k, c)}{\sum_{l=1}^{4} E_i(l, c)}
\]
\[
F = \operatorname{median}_i \sum_{k=1}^{4} \bigl| E_i^{(2)}(k, 2) - 25 \bigr|
\tag{3.3}
\]
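A rough sketch of the re-interpolation error computation is given below; the normalized-convolution approximation of bilinear demosaicing, the candidate pattern dictionary, and the function names are simplifying assumptions rather than the exact f_d used above:

```python
import numpy as np
from scipy.ndimage import convolve

# Four candidate 2x2 Bayer layouts, indexed k = 1..4 (illustrative ordering).
BAYER = {1: "RGGB", 2: "GRBG", 3: "GBRG", 4: "BGGR"}

def cfa_mask(pattern, shape):
    """Per-channel binary masks of which pixels the CFA samples."""
    m = np.zeros((3,) + shape)
    lut = {"R": 0, "G": 1, "B": 2}
    for dy in range(2):
        for dx in range(2):
            m[lut[pattern[2 * dy + dx]], dy::2, dx::2] = 1.0
    return m

def reinterpolation_error(block, pattern):
    """Per-channel MSE between a block and its re-interpolation under the
    given CFA pattern, i.e. an approximation of E_i(k, c) above."""
    kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float)
    mask = cfa_mask(pattern, block.shape[:2])
    errs = []
    for c in range(3):
        sampled = block[:, :, c] * mask[c]
        # Normalized convolution as a crude stand-in for bilinear demosaicing.
        est = convolve(sampled, kernel) / np.maximum(convolve(mask[c], kernel), 1e-8)
        errs.append(np.mean((block[:, :, c] - est) ** 2))
    return np.array(errs)

# Minimal usage on a random block (no CFA trace, so all four errors are similar);
# for an authentic demosaiced block one pattern stands out, and local blurring
# makes the four errors nearly uniform, which is the behavior F picks up.
block = np.random.default_rng(3).random((16, 16, 3))
print(reinterpolation_error(block, BAYER[1]))
```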

3.2.2.2 JPEG Artifact

In some portrait mode implementations, such as the iPhone, the option to save both an original and a portrait mode image of the same scene suggests that post-processing is applied after JPEG compression. Importantly, both the original JPEG image and the processed version are saved in the JPEG format without resizing. Hence, Discrete Cosine Transform (DCT) coefficients representing un-modified areas will undergo two consecutive JPEG compressions and exhibit double quantization (DQ) artifacts, used extensively in the forensics literature. DCT coefficients of locally blurred areas, on the other hand, will result from non-consecutive compressions and will present weaker artifacts. We follow the work of Bianchi et al. (2011) and use Bayesian inference to assign to each DCT coefficient a probability of being doubly quantized. Accumulated over each 8 × 8 block of pixels, the DQ probability map allows us to distinguish original areas (having high DQ probability) from tampered areas (having low DQ probability). The probability of a block being tampered can be estimated as

\[
p = 1 \Big/ \prod_{i \,\mid\, m_i \neq 0} \Bigl[ \bigl( R(m_i) - L(m_i) \bigr) * k_g(m_i) + 1 \Bigr]
\tag{3.4}
\]
\[
R(m) = Q_1 \left( \left\lfloor \frac{Q_2}{Q_1} \left( m - \frac{b}{Q_2} + \frac{1}{2} \right) \right\rfloor + \frac{1}{2} \right), \qquad
L(m) = Q_1 \left( \left\lceil \frac{Q_2}{Q_1} \left( m - \frac{b}{Q_2} - \frac{1}{2} \right) \right\rceil - \frac{1}{2} \right)
\]

where m is the value of the DCT coefficient; k_g(·) is a Gaussian kernel with standard deviation σ_e/Q_2; Q_1 and Q_2 are the quantization steps used in the first and second compression, respectively; b is the bias; and u is the unquantized DC coefficient.
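The double-quantization effect itself is easy to reproduce; the toy simulation below (a Laplacian coefficient model and the particular Q_1, Q_2 are arbitrary choices for illustration, not the Bayesian detector above) shows the periodic histogram artifact that separates doubly from singly compressed coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.laplace(scale=8.0, size=200_000)                  # unquantized DCT coefficients

Q1, Q2 = 6, 4
single = Q2 * np.round(u / Q2)                            # tampered: one compression
double = Q2 * np.round((Q1 * np.round(u / Q1)) / Q2)      # original: two compressions

for name, m in [("single", single), ("double", double)]:
    vals, counts = np.unique(np.round(m / Q2).astype(int), return_counts=True)
    center = np.abs(vals) <= 5
    print(name, dict(zip(vals[center].tolist(), counts[center].tolist())))
# The doubly quantized histogram shows periodic peaks and empty bins; bins that
# should be (nearly) empty but are populated point to singly compressed, i.e.
# locally modified, content.
```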


3.2.3 Focus Manipulation Detection

To summarize the analysis above, we adopt five types of features: color variance (VAR), image gradient (GRAD), double quantization (ADQ) (Bianchi et al. 2011), color filter artifacts (CFA) (Dirik and Memon 2009), and noise inconsistencies (NOI) (Mahdian and Saic 2009) for refocusing detection. Each of these features is computed densely at each location in the image, and Fig. 3.2 illustrates the magnitude of these features in a feature map for an authentic image (top row) and a portrait mode image (middle row). Though there are notable differences between the feature maps in these two rows, there is no clear indication of a manipulation except, perhaps, the ADQ feature. And, as mentioned above, the ADQ cue is fragile because it depends on whether blurring is applied after an initial compression. As mentioned in Sect. 3.2.1, the noise cues are signal-dependent in the sense that blurring introduces an inconsistency between intensity and noise levels. To illustrate this, Fig. 3.2's third row shows scatter plots of the relationship between intensity (on the horizontal axis) and the various features (on the vertical axis). In these plots, particularly the columns related to noise (Variance, Gradient, and NOI), the distinction between the statistics of the authentic image (blue symbols) and the manipulated image (red symbols) becomes quite clear. Noise is reduced across most of the manipulated image, though the un-modified foreground region (the red bowl) maintains relatively higher noise because it is not blurred. Note also that the noise levels across the manipulated image are actually more consistent than in the authentic image, showing that previous noise-based forensics (Mahdian and Saic 2009) are ineffective.

Fig. 3.2 Feature maps and histogram for authentic and manipulated images. On the first row are the authentic image feature maps; the second row shows the corresponding maps for the manipulated image. We show scatter plots relating the features to intensity in the third row, where blue sample points correspond to the authentic image, and red corresponds to a manipulated DoF image (which was taken with an iPhone)


Fig. 3.3 Manipulated refocusing image detection pipeline. The example shown is an iPhone7plus portrait mode image

Figure 3.3 shows our portrait mode detection pipeline, which incorporates these five features. In order to capture the relationship between individual features and the underlying image intensity, we employ an intensity versus feature bivariate histogram, which we call the focus manipulation inconsistency histogram (FMIH). We use the FMIH of all five features for defocus forgery detection, each of which is analyzed by a neural network called FMIHNet. These five classification results are combined by a majority voting scheme to determine a final classification label. After densely computing the VAR, GRAD, ADQ, CFA, and NOI features for each input image (shown in the first five columns of the second row of Fig. 3.3), we partition the input image into superpixels and, for each superpixel i_sp, we compute the mean F(i_sp) of each feature measure and its mean intensity. Finally, we generate the FMIH for each of the five features, shown in the five columns of the third row of Fig. 3.3. Note that the FMIH are flipped vertically with respect to the scatter plots shown in Fig. 3.2. A comparison of the FMIHs extracted from the same scene captured with different cameras is shown in Fig. 3.4.


Fig. 3.4 Extracted FMIHs for the five feature measures with images captured using Canon60D, iPhone7Plus, and HuaweiMate9 cameras

3.2.3.1 Network Architectures

We have designed FMIHNet, illustrated in Fig. 3.5, for the five histogram features. Our network is a VGG-style network (Simonyan and Zisserman 2014) consisting of convolutional (CONV) layers with small receptive fields (3 × 3). During training, the input to our FMIHNet is a fixed-size 101 × 202 FMIH. The FMIHNet is a fusion of two relatively deep sub-networks: FMIHNet1 with 20 CONV layers for VAR and CFA features, and FMIHNet2 with 30 CONV layers for GRAD, ADQ, and NOI features. The CONV stride is fixed to 1 pixel; the spatial padding of the input features is set to 24 pixels to preserve the spatial resolution.

Fig. 3.5 Network architecture: FMIHNet1 for Var and CFA features; FMIHNet2 for Grad, ADQ, and NOI features


Spatial pooling is carried out by five max-pooling layers, performed over a 2 × 2 pixel window, with stride 2. A stack of CONV layers followed by one Fully Connected (FC) layer performs two-way classification. The final layer is the soft-max layer. All hidden layers have rectification (ReLU) non-linearity. There are two reasons for the small 3 × 3 receptive fields: first, incorporating multiple non-linear rectification layers instead of a single one makes the decision function more discriminative; secondly, this reduces the number of parameters. This can be seen as imposing a regularization on a later CONV layer by forcing it to have a decomposition through the 3 × 3 filters. Because most of the values in our FMIH are zeros (i.e., most cells in the 2D histogram are empty) and because we only have two output classes (authentic and portrait mode), more FC layers seem to degrade the training performance.
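For concreteness, a greatly shrunken stand-in for this kind of network is sketched below in PyTorch; the layer counts, channel widths, and pooling schedule are placeholders and do not reproduce the 20- and 30-layer FMIHNet sub-networks described above:

```python
import torch
import torch.nn as nn

class TinyFMIHNet(nn.Module):
    """Simplified FMIH classifier: stacks of 3x3 convolutions with ReLU,
    2x2 max pooling, and a final linear layer producing two logits
    (authentic vs. portrait mode; softmax is applied in the loss)."""

    def __init__(self, channels=(16, 32, 64)):
        super().__init__()
        layers, in_ch = [], 1
        for out_ch in channels:
            layers += [nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1),
                       nn.ReLU(inplace=True),
                       nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(in_ch, 2))

    def forward(self, x):                      # x: (N, 1, 101, 202) FMIHs
        return self.classifier(self.features(x))

net = TinyFMIHNet()
logits = net(torch.randn(4, 1, 101, 202))      # four dummy histograms
print(logits.shape)                            # torch.Size([4, 2])
```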

3.2.4 Portrait Mode Detection Experiments

Having introduced a new method to detect focus manipulations, we first demonstrate that our method can accurately identify manipulated images even if they are geometrically correct. Here, we also show that our method is more accurate than both past forensic methods (Bianchi et al. 2011; Dirik and Memon 2009; Mahdian and Saic 2009) and the modern vision baseline of CNN classification applied directly to the image pixels. Second, having claimed that the photometric relationship of noise cues with the image intensity is important, we will show that our FMIH histograms are a more useful representation of these cues. To demonstrate our performance on the hard cases of geometrically correct focus manipulations, we have built a focus manipulation dataset (FMD) of images captured with a Canon 60D DSLR and two smartphones having dual lens camera-enabled portrait modes: the iPhone7Plus and the Huawei Mate9. Images from the DSLR represent real shallow DoF images, having been taken with focal lengths in the range 17–70 mm and f-numbers in the range f/2.8–f/5.6. The iPhone was used to capture aligned pairs of authentic and manipulated images using portrait mode. The Mate9 was also used to capture authentic/manipulated image pairs, but these are only approximately aligned due to its inability to save the image both before and after portrait mode editing. We use 1320 such images for training and 840 images for testing. The training set consists of 660 authentic images (220 from each of the three cameras) and 660 manipulated images (330 from each of iPhone7Plus and Huawei Mate9). The test set consists of 420 authentic images (140 from each of the three cameras) and 420 manipulated images (210 from each of iPhone7Plus and HuaweiMate9). Figure 3.4 shows five sample images from FMD and illustrates the perceptual realism of the manipulated images. The first row of Table 3.1 quantifies this performance and shows that a range of CNN models (AlexNet, CaffeNet, VGG16, and VGG19) have classification accuracies in the range of 76–78%.


Table 3.1 Image classification accuracy on FMD

Data       AlexNet  CaffeNet  VGG16  VGG19
Image      0.760    0.782     0.771  0.784
VAR map    0.688    0.725     0.714  0.726
GRAD map   0.733    0.767     0.740  0.769
ADQ map    0.735    0.759     0.736  0.740
CFA map    0.761    0.788     0.777  0.785
NOI map    0.707    0.765     0.745  0.760

Table 3.2 FMIH classification accuracy on FMD

Data   SVM    AlexNet  CaffeNet  VGG16  VGG19  LeNet  FMIHNet
VAR    0.829  0.480    0.480     0.475  0.512  0.635  0.850
GRAD   0.909  0.503    0.500     0.481  0.486  0.846  0.954
ADQ    0.909  0.496    0.520     0.503  0.511  0.844  0.946
CFA    0.882  0.510    0.520     0.481  0.510  0.871  0.919
NOI    0.858  0.497    0.506     0.520  0.530  0.779  0.967
Vote   0.942  –        –         –      –      0.888  0.982

Since our method uses five different feature maps which can easily be interpreted as images, the remaining rows of Table 3.1 show the classification accuracies of the same CNN models applied to these feature maps. The accuracies are slightly lower than for the image-based classification. In Sect. 3.2.1, we claimed that a proper accounting for signal-dependent noise via our FMIH histograms improves upon the performance of the underlying features. This is demonstrated by comparing the image- and feature map-based classification performance of Table 3.1 with the FMIH-based classification performance shown in Table 3.2. Using FMIH, even the relatively simple SVMs and LeNet CNNs deliver classification accuracies in the 80–90% range. Our FMIHNet architecture produces significantly better results than these, with our method's voting output having a classification accuracy of 98%.

3.2.5 Conclusions on Detecting Geometrically Correct Synthetic Blur

We have presented a novel framework to detect focus manipulations, which represent an increasingly difficult and important forensics challenge in light of the availability of new camera hardware. Our approach exploits photometric histogram features, with a particular emphasis on noise, whose shapes are altered by the manipulation process. We have adopted a deep learning approach that classifies these 2D histograms separately and then votes for a final classification.


To evaluate this, we have produced a new focus manipulation dataset with images from a Canon60D DSLR, iPhone7Plus, and HuaweiMate9. This dataset includes manipulations, particularly from the iPhone portrait mode, that are geometrically correct due to the use of dual lens capture devices. Despite the challenge of detecting manipulations that are geometrically correct, our method's accuracy is 98%, significantly better than image-based detection with a range of CNNs, and better than prior forensics methods. While these experiments make it clear that the detection of imagery generated by first-generation portrait modes is reliable, it's less clear that this detection capability will hold up as algorithmic and hardware improvements are made to mobile phones. This raises the question of how detectable digital blurring (i.e., blurring performed in software) is compared to optical blur.

3.3 Differences Between Optical and Digital Blur

As mentioned in the noise analysis earlier in this chapter, the shape of the Camera Response Function (CRF) shows up in image statistics. The non-linearity of the CRF impacts more than just noise and, in particular, its effect on motion deblurring is now well understood (Chen et al. 2012; Tai et al. 2013). While that work showed how an unknown CRF is a noise source in blur estimation, we will demonstrate that it is a key signal helping to distinguish authentic edges from software-blurred gradients, particularly at splicing boundaries. This allows us to identify forgeries that involve artificially blurred edges, in addition to ones with artificially sharp transitions. We first consider blurred edges: for images captured by a real camera, the blur is applied to scene irradiance as the sensor is exposed, and it is then mapped to image intensity via the CRF. There are two operations involved, CRF mapping and optical/manual blur convolution. By contrast, in a forged image, the irradiance of a sharp edge is mapped to intensity first and then blurred by the manual blur point spread function (PSF). The key to distinguishing these is that the CRF is a non-linear operator, so the two operations do not commute, i.e., applying the PSF before the CRF leads to a different result than applying the CRF first, even if the underlying signal is the same. The key difference is that the profile of an authentic blurred edge (PSF applied before the CRF) is asymmetric w.r.t. the center location of the edge, whereas an artificially blurred edge is symmetric, as illustrated in Fig. 3.6. This is because, due to its non-linearity, the slope of the CRF is different across the range of intensities. CRFs typically have a larger gradient at low intensities, a smaller gradient at high intensities, and an approximately constant slope in the mid-tones. In order to capture this, we use the statistical bivariate histogram of pixel intensity versus gradient magnitude, which we call the intensity gradient histogram (IGH), and use it as the feature for synthetic blur detection around edges in an image. The IGH captures key information about the CRF without having to explicitly estimate it and, as we show later, its shape differs between natural and artificial edge profiles in a way that can be detected with existing CNN architectures.



Fig. 3.6 Authentic blur (blue), authentic sharp (cyan), forgery blur (red) and forgery sharp (magenta) edge profiles (left), gradients (center), and IGHs (right)
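A short 1D simulation of this ordering argument is given below; the gamma-type stand-in for the CRF and the edge levels are arbitrary assumptions, and the check simply compares symmetric sums around the edge center:

```python
import numpy as np

def gaussian_kernel(sigma=2.0, radius=8):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

crf = lambda r: r ** (1 / 2.2)                       # stand-in for f(.)
edge = np.where(np.arange(64) < 32, 0.1, 0.7)        # step edge in irradiance
k = gaussian_kernel()

authentic = crf(np.convolve(edge, k, mode="same"))   # blur first, then CRF
forged = np.convolve(crf(edge), k, mode="same")      # CRF first, then blur

# For a profile symmetric about the edge center, p[31 - j] + p[32 + j] is
# constant in j; the CRF applied after blurring breaks that symmetry.
for name, p in [("authentic", authentic), ("forged", forged)]:
    sums = [p[31 - j] + p[32 + j] for j in (0, 2, 4, 6)]
    print(name, np.round(sums, 3))
```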

Before starting the theoretical analysis, we clarify our notation. For simplicity, we study the role of operation ordering on a 1D edge. We denote the CRF, assumed to be a non-linear, monotonically increasing function, as f(r), normalized to satisfy f(0) = 0 and f(1) = 1. The inverse CRF is denoted g(R) = f⁻¹(R), satisfying g(0) = 0 and f(g(1)) = 1. Here r represents irradiance and R intensity. We assume that the optical blur PSF is a Gaussian function, having confirmed that the blur tools in popular image manipulation software packages use a Gaussian kernel,

\[
K_g(x) = \frac{1}{\sigma \sqrt{2\pi}} \, e^{-\frac{x^2}{2\sigma^2}}
\tag{3.5}
\]

When the edge in irradiance space r is a step edge, like the one shown in green in Fig. 3.6,

α > 0 determines the fingerprint strength. Attacks of this type have been demonstrated to be effective, in the sense that they can successfully mislead a camera identification algorithm in the form of Eq. (4.5). The attack's success generally depends on a good choice of α: values that are too low mean that the bogus image J may not be assigned to Alice's camera; too strong an embedding will make the image appear suspicious (Goljan et al. 2011; Marra et al. 2014). In practical scenarios, Eve may have to apply further processing to make her forgery more compelling, e.g., removing the genuine camera fingerprint, synthesizing color filter interpolation artifacts (Kirchner and Böhme 2009), and removing or adding traces of JPEG compression (Stamm and Liu 2011). Under realistic assumptions, it is virtually impossible to prevent Eve from forcing a high similarity score in Eq. (4.5). All is not lost, however. Alice can utilize that noise residuals computed with practical denoising filters are prone to contain remnants of image content. The key observation here is that the similarity between a noise residual W_I from an image I taken with Alice's camera and the noise residual W_J due to a common attack-induced PRNU term will be further increased by some shared residual image content, if I contributed to Eve's fingerprint estimate K̂_E. The so-called triangle test (Goljan et al. 2011) picks up on this observation by also considering the correlation between Alice's own fingerprint and both W_I and W_J to determine whether the similarity between W_I and W_J is suspiciously large. A pooled version of the test establishes whether any images in a given set of Alice's images have contributed to K̂_E (Barni et al. 2018), without determining which. Observe that in either case Alice may have to examine the entirety of images ever made public by her. On Eve's side, efforts to create a fingerprint estimate K̂_E in a procedure that deliberately suppresses telltale remnants of image content can thwart the triangle test's success (Caldelli et al. 2011; Rao et al. 2013; Barni et al. 2018), thereby only setting the stage for the next iteration in the cat-and-mouse game between attacks and defenses. The potentially high computational (and logistical) burden and the security concerns around the triangle test can be evaded in a more constrained scenario.


Specifically, assume that Eve targets the forgery of an uncompressed image but only has access to images shared in a lossy compression format when estimating K̂_E. Here, it can be sufficient to test for the portion of the camera fingerprint that is fragile to lossy compression to establish that Eve's image J does not contain a complete fingerprint (Quiring and Kirchner 2015). This works because the PRNU in uncompressed images is relatively uniform across the full frequency spectrum, whereas lossy compression mainly removes high-frequency information from images. As long as Alice's public images underwent moderately strong compression, such as JPEG at a quality factor of about 85, no practicable remedies for Eve to recover the critically missing portion of her fingerprint estimate are known at the time of this writing (Quiring et al. 2019).
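Returning to the fingerprint-copy attack discussed above, the toy simulation below embeds a bogus fingerprint into an innocuous image and measures a residual-to-fingerprint correlation before and after; the multiplicative embedding J = I(1 + αK̂_E), the Gaussian-filter residual, and all parameter values are illustrative stand-ins for the embedding and detector described in this chapter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)

K_E = 0.01 * rng.standard_normal((256, 256))       # Eve's fingerprint estimate
I = gaussian_filter(rng.random((256, 256)), 4)     # innocuous content, no PRNU
alpha = 0.05                                       # embedding strength

def residual(img):
    """Crude content suppression: image minus a low-pass version of itself."""
    return img - gaussian_filter(img, 1.5)

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

J = I * (1.0 + alpha * K_E)                        # bogus image carrying K_E
print("before attack:", round(ncc(residual(I), I * K_E), 4))
print("after attack: ", round(ncc(residual(J), J * K_E), 4))
# Too small an alpha fails to trigger the detector; too large a value leaves
# visible noise, which is exactly the trade-off discussed above.
```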

4.7 Camera Fingerprints and Deep Learning

The previous sections have hopefully given the reader the impression that research around PRNU-based camera fingerprints is very much alive and thriving, and that new (and old) challenges continue to spark the imagination of academics and practitioners alike. Different from the broader domain of media forensics, which is now routinely drawing on deep learning solutions, only a handful of works have made attempts to apply ideas from this rapidly evolving field to the set of problems typically discussed in the context of device-specific (PRNU-based) camera fingerprints. We can only surmise that this is in part due to the robust theoretical foundations that have defined the field and that have ultimately led to the wide acceptance of PRNU-based camera fingerprints in practical forensic casework, law enforcement, and beyond. Data-driven "black-box" solutions may thus appear superfluous to many. However, one of the strengths that deep learning techniques can bring to the rigid framework of PRNU-based camera identification is in fact their very nature: they are data-driven.

The detector in Sect. 4.3 was originally derived under a specific set of assumptions with respect to the imaging model in Eq. (4.1) and the noise residuals in Eq. (4.3). There is good reason to assume that real images will deviate from these simplified models to some degree. First and foremost, noise residuals from a single image will always suffer from significant and non-trivial distortion, if we accept that content suppression is an ill-posed problem in the absence of viable image models (and possibly even of the noise characteristics itself (Masciopinto and Pérez-González 2018)). This opens the door to potential improvements from data-driven approaches, which ultimately do not care about modeling assumptions but rather learn (hopefully) relevant insights from the training data directly.

One such approach specifically focuses on the extraction of a camera signature from a single image I at test time (Kirchner and Johnson 2019). Instead of relying on a "blind" denoising procedure as is conventionally the case, a convolutional neural network (CNN) can serve as a flexible non-linear optimization tool that learns how to obtain a better approximation of K. Specifically, the network is trained to extract a noise pattern K̃ that minimizes ‖K̂ − K̃‖₂², as the pre-computed estimate K̂ is the best available approximation of the actual PRNU signal under the given imaging model.


Fig. 4.4 Camera identification from noise signals computed with a deep-learning based fingerprint extractor (coined "SPN-CNN") (Kirchner and Johnson 2019). The ROC curves were obtained by thresholding the normalized correlation ρ_ncc between the extracted SPN-CNN noise patterns K̃ and "conventional" camera fingerprints K̂, for patches of size 100 × 100 (blue) and 50 × 50 (orange). Device labels correspond to devices in the VISION (Shullani et al. 2017) (top) and Dresden Image Databases (Gloe and Böhme 2010) (bottom). Curves for the standard Wavelet denoiser and an off-the-shelf DnCNN denoiser (Zhang et al. 2017) are included for comparison

The trained network replaces the denoiser F(·) at test time, and the fingerprint similarity is evaluated directly for K̂ instead of IK̂ in Eq. (4.5). Notably, this breaks with the tradition of employing the very same denoiser for both fingerprint estimation and detection. Empirical results suggest that the resulting noise signals have clear benefits over conventional noise residuals when used for camera identification, as showcased in Fig. 4.4. A drawback of this approach is that it calls for extraction CNNs trained separately for each relevant camera (Kirchner and Johnson 2019).

The similarity function employed by the detector in Eq. (4.5) is another target for potential improvement. If model assumptions do not hold, the normalized cross-correlation may no longer be a good approximation of the optimal detector (Fridrich 2013), and a data-driven approach may be able to reach more conclusive decisions. Along those lines, a Siamese network structure trained to compare spatially aligned patches from a fingerprint estimate K̂ and the noise residual W from a probe image I was reported to greatly outperform a conventional PCE-based detector across a range of image sizes (Mandelli et al. 2020). Different from the noise extraction approach by Kirchner and Johnson (2019), experimental results suggest that no device-specific training is necessary. Both approaches have not yet been subjected to more realistic settings that include sensor misalignment.

While the two works discussed above focus on specific building blocks in the established camera identification pipeline, we are not currently aware of an end-to-end deep learning solution that achieves source attribution at the level of individual devices at scale. There is early evidence, however, to suggest that camera model traces derived from a deep model can be a beneficial addition to conventional camera fingerprints, especially when the size of the analyzed images is small (Cozzolino et al. 2020).
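The following condensed PyTorch sketch conveys the training objective behind such a per-camera extractor; the tiny architecture, random placeholder data, and optimizer settings are assumptions for illustration and do not correspond to the actual SPN-CNN:

```python
import torch
import torch.nn as nn

# Toy extractor: regress single-image patches toward a pre-computed
# fingerprint estimate K_hat (random tensors stand in for real data).
extractor = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(extractor.parameters(), lr=1e-3)

K_hat = torch.randn(1, 1, 64, 64) * 0.01     # placeholder fingerprint estimate
images = torch.rand(16, 1, 64, 64)           # placeholder training patches

for step in range(100):
    batch = images[torch.randint(0, len(images), (4,))]
    K_tilde = extractor(batch)               # extracted noise patterns
    loss = ((K_tilde - K_hat) ** 2).mean()   # || K_hat - K_tilde ||_2^2 objective
    opt.zero_grad()
    loss.backward()
    opt.step()

# At test time, the output for a probe image is compared against K_hat
# directly, replacing the conventional denoising-based residual.
```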


CNNs have also been utilized in the context of counter-forensics, where the twofold objective of minimizing fingerprint similarity while maximizing image quality almost naturally invites data-driven optimization solutions to the problem of side-informed fingerprint removal. An early proposal trains an auto-encoder inspired anonymization operator by encoding the stated objectives in a two-part cost function (Bonettini et al. 2018); a toy sketch of such a two-part objective follows at the end of this section. The solution, which has to be retrained for each new image, relies on a learnable denoising filter as part of the network, which is used to extract noise residuals from the "anonymized" images during training. The trained network is highly effective at test time as long as the detector uses the same denoising function, but performance dwindles when a different denoising filter, such as the standard Wavelet-based approach, is used instead. This limits the practical applicability of the fingerprint removal technique for the time being. Overall, it seems almost inevitable that deep learning will take on a more prominent role across a wide range of problems in the field of (PRNU-based) device identification. We have included these first early works here mainly to highlight the trend, and we invite the reader to view them as important stepping stones for future developments to come.
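A toy version of the two-part anonymization objective referenced above, with a fixed blur-based residual extractor in place of the learnable denoiser and arbitrary loss weights, can be written as a direct per-image optimization:

```python
import torch

def residual(x):
    """Fixed stand-in residual extractor: image minus a local average."""
    blur = torch.nn.functional.avg_pool2d(x, 3, stride=1, padding=1)
    return x - blur

K = 0.01 * torch.randn(1, 1, 128, 128)          # known (side-informed) fingerprint
I = torch.rand(1, 1, 128, 128)
J = (I * (1 + 50 * K)).detach()                 # toy image with an exaggerated fingerprint

delta = torch.zeros_like(J, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
for step in range(300):
    out = J + delta
    r = residual(out)
    corr = (r * (out * K)).mean() / (r.std() * (out * K).std() + 1e-8)
    # Two-part cost: fingerprint-similarity term plus image-fidelity term.
    loss = corr ** 2 + 10.0 * (delta ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final correlation term:", float(corr))
```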

4.8 Public Datasets

Various publicly available datasets have been compiled over time to advance research on sensor-based device identification, see Table 4.1. Not surprisingly, the evolution of datasets since the release of the trailblazing Dresden Image Database (Gloe and Böhme 2010) mirrors the general consumer shift from dedicated digital cameras to mobile devices. There are now also several diverse video datasets with a good coverage across different devices available. In general, a defining quality of a good dataset for source device identification is not only the number of available images per unique device, but also whether multiple instances of the same device model are represented. This is crucial for studying the effects of model-specific artifacts on false alarms, which may remain unidentified when only one device per camera model is present. Other factors worth considering may be whether the image/video capturing protocol controlled for certain exposure parameters, or whether all devices were used to capture the same set (or type) of scenes.

Although it has become ever easier to gather suitably large amounts of data straight from some of the popular online media sharing platforms, dedicated custom-made research datasets offer the benefit of a well-documented provenance while fostering the reproducibility of research and mitigating copyright concerns. In addition, many of these datasets include, by design, a set of flatfield images to facilitate the computation of high-quality sensor fingerprints. However, as we have seen throughout this chapter, a common challenge in this domain is to keep pace with the latest technological developments on the side of camera manufacturers. This translates into a continued need for updated datasets to maintain practical relevance and timeliness. Results obtained on an older dataset may not hold up well on data from newer devices due to novel sensor features or acquisition pipelines.


Table 4.1 Public datasets with a special focus on source device identification. The table lists the number of available images and videos per dataset, the number of unique devices and camera models covered, as well as the types of cameras included (C: consumer, D: DSLR, M: mobile device)

Dataset                                  Year  Images  Videos  Devices  Models  Cameras
Dresden (Gloe and Böhme 2010)*           2010  14k+    –       73       25      CD
RAISE (Dang-Nguyen et al. 2015)          2015  8k+     –       3        3       D
VISION (Shullani et al. 2017)**          2017  11k+    0.6k    35       30      M
HDR (Shaya et al. 2018)                  2018  5k+     –       23       22      M
SOCRatES (Galdi et al. 2019)             2019  9k+     1k      103      60      M
video-ACID (Hosler et al. 2019)          2019  –       12k+    46       36      CDM
NYUAD-MMD (Taspinar et al. 2020)         2020  6k+     0.3k    78       62      M
Daxing (Tian et al. 2019)                2020  43k+    1k+     90       22      M
Warwick (Quan et al. 2020)*              2020  58k+    –       14       11      CD
Forchheim (Hadwiger and Riess 2020)**    2020  3k+     –       27       25      M

* Includes multiple images of the same scene with varying exposure settings
** Provides auxiliary data from sharing the base data on various social network sites

A good example is the recent release of datasets with a specific focus on high dynamic range (HDR) imaging (Shaya et al. 2018; Quan et al. 2020), or the provision of annotations for videos that underwent electronic image stabilization (Shullani et al. 2017). With the vast majority of media now shared in significantly reduced resolution through online social network platforms or messaging apps, some of the more recent datasets also consider such common modern-day post-processing operations explicitly (Shullani et al. 2017; Hadwiger and Riess 2020).

4.9 Concluding Remarks

Camera-specific sensor noise fingerprints are a pillar of media forensics, and they are unrivaled when it comes to establishing source device attribution. While our focus in this chapter has been on still camera images (video data is covered in Chap. 5 of this book), virtually all imaging sensors introduce the same kind of noise fingerprints as we discussed here. For example, line sensors in flatbed scanners received early attention (Gloe et al. 2007; Khanna et al. 2007), and recent years have also seen biometric sensors move into the focus (Bartlow et al. 2009; Kauba et al. 2017; Ivanov and Baras 2017, 2019, i.a.).


Although keeping track of advances by device manufacturers and novel imaging pipelines is crucial for maintaining this status, an active research community has so far always been able to adapt to new challenges. The field has come a long way over the past 15 years, and new developments such as the cautious cross-over into the world of deep learning promise a continued potential for fruitful exploration. New perspectives and insights may also arise from applications outside the realm of media forensics. For example, with smartphones as ubiquitous companions in our everyday life, proposals to utilize camera fingerprints as building blocks for multi-factor authentication let users actively provide their device's sensor fingerprint via a captured image to be granted access to a web server (Valsesia et al. 2017; Quiring et al. 2019; Maier et al. 2020, i.a.). This poses a whole range of interesting practical challenges on its own, but it also invites a broader discussion about what camera fingerprints mean to an image-saturated world. At the minimum, concerns over image anonymity must be taken seriously in situations that call for it, and so we unequivocally see counter-forensics as part of the bigger picture as well.

Acknowledgements Parts of this work were supported by AFRL and DARPA under Contract No. FA8750-16-C-0166. Any findings and conclusions or recommendations expressed in this material are solely the responsibility of the authors and do not necessarily represent the official views of AFRL, DARPA, or the U.S. Government.

References Al-Ani M, Khelifi F (2017) On the SPN estimation in image forensics: a systematic empirical evaluation. IEEE Trans Inf Forensics Secur 12(5):1067–1081 Altinisik E, Sencar HT (2021) Source camera verification for strongly stabilized videos. IEEE Trans Inf Forensics Secur 16:643–657 Altinisik E, Tasdemir K, Sencar HT (2020) Mitigation of H.264 and H.265 video compression for reliable PRNU estimation. IEEE Trans Inf Forensics Secur 15:1557–1571 Amerini I, Caldelli R, Cappellini V, Picchioni F, Piva A (2009) Analysis of denoising filters for photo response non uniformity noise extraction in source camera identification. In: 16th international conference on digital signal processing, DSP 2009, Santorini, Greece, July 5-7, 2009. IEEE, pp 1–7 Amerini I, Caldelli R, Crescenzi P, del Mastio A, Marino A (2014) Blind image clustering based on the normalized cuts criterion for camera identification. Signal Process Image Commun 29(8):831– 843 Amerini I, Caldelli R, Del Mastio A, Di Fuccia A, Molinari C, Rizzo AP (2017) Dealing with video source identification in social networks. Signal Process Image Commun 57:1–7 Artusi A, Richter T, Ebrahimi T, Mantiuk RK (2017) High dynamic range imaging technology [lecture notes]. IEEE Signal Process Mag 34(5):165–172 Avidan S, Shamir A (2007) Seam carving for content-aware image resizing. ACM Trans Graph 26(3):10


Baracchi D, Iuliani M, Nencini AG, Piva A (2020) Facing image source attribution on iphone X. In: Zhao X, Shi Y-Q, Piva A, Kim HJ (eds) Digital forensics and watermarking - 19th international workshop, IWDW 2020, Melbourne, VIC, Australia, November 25–27, 2020, Revised Selected Papers, vol 12617. Lecture notes in computer science. Springer, pp 196–207 Barni M, Nakano-Miyatake M, Santoyo-Garcia H, Tondi B (2018) Countering the pooled triangle test for prnu-based camera identification. In: 2018 IEEE international workshop on information forensics and security, WIFS 2018, Hong Kong, China, December 11-13, 2018. IEEE, pp 1–8 Barni M, Santoyo-Garcia H, Tondi B (2018) An improved statistic for the pooled triangle test against prnu-copy attack. IEEE Signal Process Lett 25(10):1435–1439 Bartlow N, Kalka ND, Cukic B, Ross A (2009) Identifying sensors from fingerprint images. In: IEEE conference on computer vision and pattern recognition, CVPR workshops 2009, Miami, FL, USA, 20–25 June 2009. IEEE Computer Society, pp 78–84 Bayram S, Sencar HT, Memon ND (2013) Seam-carving based anonymization against image & video source attribution. In: 15th IEEE international workshop on multimedia signal processing, MMSP 2013, Pula, Sardinia, Italy, September 30 - Oct. 2, 2013. IEEE, pp 272–277 Bayram S, Sencar HT, Memon ND (2012) Efficient sensor fingerprint matching through fingerprint binarization. IEEE Trans Inf Forensics Secur 7(4):1404–1413 Bayram S, Sencar HT, Memon ND (2015) Sensor fingerprint identification through composite fingerprints and group testing. IEEE Trans Inf Forensics Secur 10(3):597–612 Bloy Greg J (2008) Blind camera fingerprinting and image clustering. IEEE Trans Pattern Anal Mach Intell 30(3):532–534 Böhme R, Kirchner M (2013) Counter-forensics: attacking image forensics. In: Sencar HT, Memon N (eds) Digital image forensics: there is more to a picture than meets the eye. Springer, pp 327–366 Bonettini N, Bondi L, Mandelli S, Bestagini P, Tubaro S, Guera D (2018) Fooling PRNU-based detectors through convolutional neural networks. In: 26th European signal processing conference, EUSIPCO 2018, Roma, Italy, September 3-7, 2018. IEEE, pp 957–961 Caldelli R, Amerini I, Novi A (2011) An analysis on attacker actions in fingerprint-copy attack in source camera identification. In: 2011 ieee international workshop on information forensics and security, WIFS 2011, Iguacu Falls, Brazil, November 29 - December 2, 2011. IEEE Computer Society, pp 1–6 Chakraborty S (2020) A CNN-based correlation predictor for prnu-based image manipulation localization. In: Alattar AM, Memon ND, Sharma G (eds) Media watermarking, security, and forensics 2020, Burlingame, CA, USA, 27-29 January 2020 Chakraborty S, Kirchner M (2017) PRNU-based image manipulation localization with discriminative random fields. In: Alattar AM, Memon ND (eds) Media Watermarking, Security, and Forensics 2017, Burlingame, CA, USA, 29 January 2017 - 2 February 2017, pages 113–120. Ingenta, 2017 Chen M, Fridrich JJ, Goljan M, Lukáš J (2008) Determining image origin and integrity using sensor noise. IEEE Trans Inf Forensics Secur 3(1):74–90 Chierchia G, Cozzolino D, Poggi G, Sansone C, Verdoliva L (2014) Guided filtering for PRNUbased localization of small-size image forgeries. In: IEEE international conference on acoustics, speech and signal processing, ICASSP 2014, Florence, Italy, May 4-9, 2014. IEEE, pp 6231–6235 Chierchia G, Parrilli S, Poggi G, Verdoliva L, Sansone C (2011) PRNU-based detection of smallsize image forgeries. 
In: 17th international conference on digital signal processing, DSP 2011, Corfu, Greece, July 6-8, 2011. IEEE, pp 1–6 Chierchia G, Poggi G, Sansone C, Verdoliva L (2014) A Bayesian-MRF approach for PRNU-based image forgery detection. IEEE Trans Inf Forensics Secur 9(4):554–567 Chuang W-H, Su H, Wu M (2011) Exploring compression effects for improved source camera identification using strongly compressed video. In: Macq B, Schelkens P (eds) 18th IEEE international conference on image processing, ICIP 2011, Brussels, Belgium, September 11-14, 2011. IEEE, pp 1953–1956


Barnes C, Shechtman E, Finkelstein A, Goldman DB (2009) PatchMatch: a randomized correspondence algorithm for structural image editing. ACM Trans Graph 28(3):24
Cortiana A, Conotter V, Boato G, De Natale FGB (2011) Performance comparison of denoising filters for source camera identification. In: Memon ND, Dittmann J, Alattar AM, Delp EJ III (eds) Media forensics and security III, San Francisco Airport, CA, USA, January 24-26, 2011, Proceedings, SPIE Proceedings, vol 7880. SPIE, p 788007
Cozzolino D, Marra F, Gragnaniello D, Poggi G, Verdoliva L (2020) Combining PRNU and noiseprint for robust and efficient device source identification. EURASIP J Inf Secur 2020:1
Dang-Nguyen D-T, Pasquini C, Conotter V, Boato G (2015) RAISE: a raw images dataset for digital image forensics. In: Ooi WT, Feng W-c, Liu F (eds) Proceedings of the 6th ACM multimedia systems conference, MMSys 2015, Portland, OR, USA, March 18-20, 2015. ACM, pp 219–224
Dirik AE, Sencar HT, Memon ND (2014) Analysis of seam-carving-based anonymization of images against PRNU noise pattern-based source attribution. IEEE Trans Inf Forensics Secur 9(12):2277–2290
Entrieri J, Kirchner M (2016) Patch-based desynchronization of digital camera sensor fingerprints. In: Alattar AM, Memon ND (eds) Media watermarking, security, and forensics 2016, San Francisco, California, USA, February 14-18, 2016. Ingenta, pp 1–9
Fridrich J (2013) Sensor defects in digital image forensic. In: Sencar HT, Memon N (eds) Digital image forensics: there is more to a picture than meets the eye. Springer, Berlin, pp 179–218
Galdi C, Hartung F, Dugelay J-L (2019) SOCRatES: a database of realistic data for source camera recognition on smartphones. In: De Marsico M, di Baja GS, Fred ALN (eds) Proceedings of the 8th international conference on pattern recognition applications and methods, ICPRAM 2019, Prague, Czech Republic, February 19-21, 2019. SciTePress, pp 648–655
Gloe T, Böhme R (2010) The Dresden image database for benchmarking digital image forensics. J Digit Forensic Pract 3(2–4):150–159
Gloe T, Franz E, Winkler A (2007) Forensics for flatbed scanners. In: Delp EJ III, Wong PW (eds) Security, steganography, and watermarking of multimedia contents IX, San Jose, CA, USA, January 28, 2007, SPIE Proceedings, vol 6505. SPIE, p 65051I
Gloe T, Kirchner M, Winkler A, Böhme R (2007) Can we trust digital image forensics? In: Lienhart R, Prasad AR, Hanjalic A, Choi S, Bailey BP, Sebe N (eds) Proceedings of the 15th international conference on multimedia 2007, Augsburg, Germany, September 24-29, 2007. ACM, pp 78–86
Gloe T, Pfennig S, Kirchner M (2012) Unexpected artefacts in PRNU-based camera identification: a 'Dresden image database' case-study. In: Li C-T, Dittmann J, Katzenbeisser S, Craver S (eds) Multimedia and security workshop, MM&Sec 2012, Coventry, United Kingdom, September 6-7, 2012. ACM, pp 109–114
Goljan M, Chen M, Comesaña P, Fridrich JJ (2016) Effect of compression on sensor-fingerprint based camera identification. In: Alattar AM, Memon ND (eds) Media watermarking, security, and forensics 2016, San Francisco, California, USA, February 14-18, 2016. Ingenta, pp 1–10
Goljan M, Chen M, Fridrich JJ (2007) Identifying common source digital camera from image pairs. In: Proceedings of the international conference on image processing, ICIP 2007, September 16-19, 2007, San Antonio, Texas, USA. IEEE, pp 125–128
Goljan M, Fridrich JJ, Filler T (2009) Large scale test of sensor fingerprint camera identification. In: Delp EJ, Dittmann J, Memon ND, Wong PW (eds) Media forensics and security I, part of the IS&T-SPIE electronic imaging symposium, San Jose, CA, USA, January 19-21, 2009, Proceedings, SPIE Proceedings, vol 7254. SPIE, p 72540I
Goljan M, Fridrich JJ, Filler T (2010) Managing a large database of camera fingerprints. In: Memon ND, Dittmann J, Alattar AM, Delp EJ (eds) Media forensics and security II, part of the IS&T-SPIE electronic imaging symposium, San Jose, CA, USA, January 18-20, 2010, Proceedings, SPIE Proceedings, vol 7541. SPIE, p 754108
Goljan M, Fridrich JJ (2008) Camera identification from cropped and scaled images. In: Delp EJ III, Wong PW, Dittmann J, Memon ND (eds) Security, forensics, steganography, and watermarking of multimedia contents X, San Jose, CA, USA, January 27, 2008, SPIE Proceedings, vol 6819. SPIE, p 68190E
Goljan M, Fridrich JJ (2012) Sensor-fingerprint based identification of images corrected for lens distortion. In: Memon ND, Alattar AM, Delp EJ III (eds) Media watermarking, security, and forensics 2012, Burlingame, CA, USA, January 22, 2012, Proceedings, SPIE Proceedings, vol 8303. SPIE, p 83030H
Goljan M, Fridrich JJ (2014) Estimation of lens distortion correction from single images. In: Alattar AM, Memon ND, Heitzenrater C (eds) Media watermarking, security, and forensics 2014, San Francisco, CA, USA, February 2, 2014, Proceedings, SPIE Proceedings, vol 9028. SPIE, p 90280N
Goljan M, Fridrich JJ, Chen M (2011) Defending against fingerprint-copy attack in sensor-based camera identification. IEEE Trans Inf Forensics Secur 6(1):227–236
Hadwiger B, Riess C (2021) The Forchheim image database for camera identification in the wild. In: Del Bimbo A, Cucchiara R, Sclaroff S, Farinella GM, Mei T, Bertini M, Escalante HJ, Vezzani R (eds) Pattern recognition. ICPR international workshops and challenges - virtual event, January 10-15, 2021, Proceedings, Part VI, Lecture Notes in Computer Science, vol 12666. Springer, pp 500–515
Hosler BC, Zhao X, Mayer O, Chen C, Shackleford JA, Stamm MC (2019) The video authentication and camera identification database: a new database for video forensics. IEEE Access 7:76937–76948
Hosseini MDM, Goljan M (2019) Camera identification from HDR images. In: Cogranne R, Verdoliva L, Lyu S, Troncoso-Pastoriza JR, Zhang X (eds) Proceedings of the ACM workshop on information hiding and multimedia security, IH&MMSec 2019, Paris, France, July 3-5, 2019. ACM, pp 69–76
Iuliani M, Fontani M, Piva A (2019) Hybrid reference-based video source identification. Sensors 19(3):649
Iuliani M, Fontani M, Piva A (2021) A leak in PRNU based source identification - questioning fingerprint uniqueness. IEEE Access 9:52455–52463
Ivanov VI, Baras JS (2017) Authentication of swipe fingerprint scanners. IEEE Trans Inf Forensics Secur 12(9):2212–2226
Ivanov VI, Baras JS (2019) Authentication of area fingerprint scanners. Pattern Recognit 94:230–249
Joshi S, Korus P, Khanna N, Memon ND (2020) Empirical evaluation of PRNU fingerprint variation for mismatched imaging pipelines. In: 12th IEEE international workshop on information forensics and security, WIFS 2020, New York City, NY, USA, December 6-11, 2020. IEEE, pp 1–6
Kang X, Li Y, Qu Z, Huang J (2012) Enhancing source camera identification performance with a camera reference phase sensor pattern noise. IEEE Trans Inf Forensics Secur 7(2):393–402
Karaküçük A, Dirik AE (2015) Adaptive photo-response non-uniformity noise removal against image source attribution. Digit Investig 12:66–76
Karaküçük A, Dirik AE (2019) PRNU based source camera attribution for image sets anonymized with patch-match algorithm. Digit Investig 30:43–51
Kauba C, Debiasi L, Uhl A (2017) Identifying the origin of iris images based on fusion of local image descriptors and PRNU based techniques. In: 2017 IEEE international joint conference on biometrics, IJCB 2017, Denver, CO, USA, October 1-4, 2017. IEEE, pp 294–301
Khanna N, Mikkilineni AK, Chiu GT-C, Allebach JP, Delp EJ (2007) Scanner identification using sensor pattern noise. In: Delp EJ III, Wong PW (eds) Security, steganography, and watermarking of multimedia contents IX, San Jose, CA, USA, January 28, 2007, SPIE Proceedings, vol 6505. SPIE, p 65051K
Kirchner M, Böhme R (2009) Synthesis of color filter array pattern in digital images. In: Delp EJ, Dittmann J, Memon ND, Wong PW (eds) Media forensics and security I, part of the IS&T-SPIE electronic imaging symposium, San Jose, CA, USA, January 19-21, 2009, Proceedings, SPIE Proceedings, vol 7254. SPIE, p 72540K
Kirchner M, Johnson C (2019) SPN-CNN: boosting sensor-based source camera attribution with deep learning. In: IEEE international workshop on information forensics and security, WIFS 2019, Delft, The Netherlands, December 9-12, 2019. IEEE, pp 1–6
Knight S, Moschou S, Sorell M (2009) Analysis of sensor photo response non-uniformity in RAW images. In: Sorell M (ed) Forensics in telecommunications, information and multimedia, second international conference, e-forensics 2009, Adelaide, Australia, January 19-21, 2009, Revised Selected Papers, Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 8. Springer, pp 130–141
Korus P, Huang J (2017) Multi-scale analysis strategies in PRNU-based tampering localization. IEEE Trans Inf Forensics Secur 12(4):809–824
Li C-T (2010) Source camera identification using enhanced sensor pattern noise. IEEE Trans Inf Forensics Secur 5(2):280–287
Li C-T (2010) Unsupervised classification of digital images using enhanced sensor pattern noise. In: International symposium on circuits and systems (ISCAS 2010), May 30 - June 2, 2010. IEEE, Paris, France, pp 3429–3432
Lin X, Li C-T (2016) Preprocessing reference sensor pattern noise via spectrum equalization. IEEE Trans Inf Forensics Secur 11(1):126–140
Lin X, Li C-T (2020) PRNU-based content forgery localization augmented with image segmentation. IEEE Access 8:222645–222659
Lukáš J, Fridrich JJ, Goljan M (2005) Determining digital image origin using sensor imperfections. In: Said A, Apostolopoulos JG (eds) Electronic imaging: image and video communications and processing 2005, San Jose, California, USA, 16-20 January 2005, SPIE Proceedings, vol 5685. SPIE
Lukáš J, Fridrich JJ, Goljan M (2006) Digital camera identification from sensor pattern noise. IEEE Trans Inf Forensics Secur 1(2):205–214
Maier D, Erb H, Mullan P, Haupert V (2020) Camera fingerprinting authentication revisited. In: 23rd international symposium on research in attacks, intrusions and defenses (RAID 2020), pp 31–46
Mandelli S, Bestagini P, Verdoliva L, Tubaro S (2020) Facing device attribution problem for stabilized video sequences. IEEE Trans Inf Forensics Secur 15:14–27
Mandelli S, Bondi L, Lameri S, Lipari V, Bestagini P, Tubaro S (2017) Inpainting-based camera anonymization. In: 2017 IEEE international conference on image processing, ICIP 2017, Beijing, China, September 17-20, 2017. IEEE, pp 1522–1526
Mandelli S, Cozzolino D, Bestagini P, Verdoliva L, Tubaro S (2020) CNN-based fast source device identification. IEEE Signal Process Lett 27:1285–1289
Marra F, Poggi G, Sansone C, Verdoliva L (2017) Blind PRNU-based image clustering for source identification. IEEE Trans Inf Forensics Secur 12(9):2197–2211
Marra F, Roli F, Cozzolino D, Sansone C, Verdoliva L (2014) Attacking the triangle test in sensor-based camera identification. In: 2014 IEEE international conference on image processing, ICIP 2014, Paris, France, October 27-30, 2014. IEEE, pp 5307–5311
Masciopinto M, Pérez-González F (2018) Putting the PRNU model in reverse gear: findings with synthetic signals. In: 26th European signal processing conference, EUSIPCO 2018, Roma, Italy, September 3-7, 2018. IEEE, pp 1352–1356
Meij C, Geradts Z (2018) Source camera identification using photo response non-uniformity on WhatsApp. Digit Investig 24:142–154
Nagaraja S, Schaffer P, Aouada D (2011) Who clicks there!: anonymising the photographer in a camera saturated society. In: Chen Y, Vaidya J (eds) Proceedings of the 10th annual ACM workshop on privacy in the electronic society, WPES 2011, Chicago, IL, USA, October 17, 2011. ACM, pp 13–22
Quan Y, Li C-T, Zhou Y, Li L (2020) Warwick image forensics dataset for device fingerprinting in multimedia forensics. In: IEEE international conference on multimedia and expo, ICME 2020, London, UK, July 6-10, 2020. IEEE, pp 1–6
Quan Y, Li C-T (2021) On addressing the impact of ISO speed upon PRNU and forgery detection. IEEE Trans Inf Forensics Secur 16:190–202
Quiring E, Kirchner M (2015) Fragile sensor fingerprint camera identification. In: 2015 IEEE international workshop on information forensics and security, WIFS 2015, Roma, Italy, November 16-19, 2015. IEEE, pp 1–6
Quiring E, Kirchner M, Rieck K (2019) On the security and applicability of fragile camera fingerprints. In: Sako K, Schneider SA, Ryan PYA (eds) Computer security - ESORICS 2019 - 24th European symposium on research in computer security, Luxembourg, September 23-27, 2019, Proceedings, Part I, Lecture Notes in Computer Science, vol 11735. Springer, pp 450–470
Rao Q, Li H, Luo W, Huang J (2013) Anti-forensics of the triangle test by random fingerprint-copy attack. In: Computational visual media conference
Rosenfeld K, Sencar HT (2009) A study of the robustness of PRNU-based camera identification. In: Delp EJ, Dittmann J, Memon ND, Wong PW (eds) Media forensics and security I, part of the IS&T-SPIE electronic imaging symposium, San Jose, CA, USA, January 19-21, 2009, Proceedings, SPIE Proceedings, vol 7254. SPIE, p 72540M
Shaya OA, Yang P, Ni R, Zhao Y, Piva A (2018) A new dataset for source identification of high dynamic range images. Sensors 18(11):3801
Shullani D, Fontani M, Iuliani M, Shaya OA, Piva A (2017) VISION: a video and image dataset for source identification. EURASIP J Inf Secur 2017:15
Stamm MC, Liu KJR (2011) Anti-forensics of digital image compression. IEEE Trans Inf Forensics Secur 6(3–2):1050–1065
Tandogan SE, Altinisik E, Sarimurat S, Sencar HT (2019) Tackling in-camera downsizing for reliable camera ID verification. In: Alattar AM, Memon ND, Sharma G (eds) Media watermarking, security, and forensics 2019, Burlingame, CA, USA, 13-17 January 2019. Ingenta
Taspinar S, Mohanty M, Memon ND (2016) Source camera attribution using stabilized video. In: IEEE international workshop on information forensics and security, WIFS 2016, Abu Dhabi, United Arab Emirates, December 4-7, 2016. IEEE, pp 1–6
Taspinar S, Mohanty M, Memon ND (2017) PRNU-based camera attribution from multiple seam-carved images. IEEE Trans Inf Forensics Secur 12(12):3065–3080
Taspinar S, Mohanty M, Memon ND (2020) Camera fingerprint extraction via spatial domain averaged frames. IEEE Trans Inf Forensics Secur 15:3270–3282
Taspinar S, Mohanty M, Memon ND (2020) Camera identification of multi-format devices. Pattern Recognit Lett 140:288–294
Tian H, Xiao Y, Cao G, Zhang Y, Xu Z, Zhao Y (2019) Daxing smartphone identification dataset. IEEE Access 7:101046–101053
Valsesia D, Coluccia G, Bianchi T, Magli E (2015) Compressed fingerprint matching and camera identification via random projections. IEEE Trans Inf Forensics Secur 10(7):1472–1485
Valsesia D, Coluccia G, Bianchi T, Magli E (2015) Large-scale image retrieval based on compressed camera identification. IEEE Trans Multim 17(9):1439–1449
Valsesia D, Coluccia G, Bianchi T, Magli E (2017) User authentication via PRNU-based physical unclonable functions. IEEE Trans Inf Forensics Secur 12(8):1941–1956
van Houten W, Geradts ZJMH (2009) Source video camera identification for multiply compressed videos originating from YouTube. Digit Investig 6(1–2):48–60
Zeng H, Chen J, Kang X, Zeng W (2015) Removing camera fingerprint to disguise photograph source. In: 2015 IEEE international conference on image processing, ICIP 2015, Quebec City, QC, Canada, September 27-30, 2015. IEEE, pp 1687–1691
Zhang W, Tang X, Yang Z, Niu S (2019) Multi-scale segmentation strategies in PRNU-based image tampering localization. Multim Tools Appl 78(14):20113–20132
Zhang K, Zuo W, Chen Y, Meng D, Zhang L (2017) Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans Image Process 26(7):3142–3155

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Chapter 5

Source Camera Attribution from Videos

Husrev Taha Sencar

Photo-response non-uniformity (PRNU) is an intrinsic characteristic of a digital imaging sensor, which manifests as a unique and permanent pattern introduced to all media captured by the sensor. The PRNU of a sensor has been proven to be a viable identifier for source attribution and has been successfully utilized for identification and verification of the source of digital media. In the past decade, various approaches have been proposed for the reliable estimation, compact representation, and faster matching of PRNU patterns. These studies, however, mainly featured photographic images, and the extension of source camera attribution capabilities to the video domain remains a problem. This is primarily because the steps involved in the generation of a video are much more disruptive to the PRNU pattern, and therefore, its estimation from videos involves several additional challenges. This chapter examines these challenges and outlines recently developed methods for estimating the sensor’s PRNU from videos.

5.1 Introduction

The PRNU is caused by variations in the size and material properties of the photosensitive elements that comprise a sensor. These variations affect the response of each picture element under the same amount of illumination. Therefore, the process of identification boils down to quantifying the sensitivity of each picture element. This is realized through an estimation procedure using a set of pictures acquired by the sensor (Chen et al. 2008).
To determine whether a media is captured by a given sensor, the estimated sensitivity profile, i.e., the PRNU pattern, from the media in question is compared to a reference PRNU pattern obtained in advance using a correlation-based measure, most typically the peak-to-correlation energy (PCE) (Vijaya Kumar 1990).¹ In essence, the reliability and accuracy of a source attribution decision strongly depend on how well the PRNU-bearing raw signal at the sensor output survives the several steps of in-camera processing it has to pass through before the media is generated. These processing steps have a disruptive effect on the inherent PRNU pattern.

The imaging sub-systems used by digital cameras largely remain proprietary to the manufacturer; therefore, it is quite difficult to access their details. At a higher level, however, the imaging pipeline in a camera includes various stages, such as acquisition, pre-processing, color-processing, post-processing, and image and video coding. Figure 5.1 shows the basic processing steps involved in capturing a video, most of which are also utilized during the acquisition of a photograph.

When generating a video, an indispensable processing step is the downsizing of the full-frame sensor output to reduce the amount of data that needs processing. This may be realized at acquisition by sub-sampling pixel readout data (through binning (Zhang et al. 2018) or skipping (Guo et al. 2018) pixels), as well as during color-processing by downsampling color-interpolated image data. Another key processing step employed by modern-day cameras at the post-processing stage is image stabilization. Since many videos are not captured while the camera is in a fixed position, all modern cameras perform image stabilization to correct perspective distortions caused by handheld shooting or other vibrations. Stabilization can be followed by other post-processing steps, such as noise reduction and dynamic range adjustment, which may also include transformations, resulting in further weakening of the PRNU pattern. Finally, the sequence of post-processed pictures is encoded into a standard video format for effective storage and transfer.

Video generation thus involves at least three additional steps compared to the generation of photos: downsizing, stabilization, and video coding. When combined, these operations have a significant adverse impact on PRNU estimation in two main respects. The first relates to the geometric transformations applied during downsizing and stabilization, and the second concerns information loss largely caused by resolution reduction and compression. These detrimental effects can be further compounded by out-of-camera processing. Video editing tools today provide a much wider variety of sophisticated processing options than cameras, as they are less bound by computational resources and real-time constraints. Hence, the estimation of the sensor PRNU from videos requires a thorough understanding of the implications of these operations and the development of effective mitigations against them.

¹ Details on the estimation of the PRNU pattern from photographic images can be found in the previous two chapters of this book.
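
For concreteness, the following minimal sketch (Python/NumPy) illustrates the correlation-based matching step described above: a circular cross-correlation surface between a query noise residual and a reference fingerprint is computed in the Fourier domain, and the PCE is taken as the squared peak divided by the average squared correlation outside a small window around the peak. The function names, the exclusion window size, and the use of circular correlation are illustrative assumptions rather than the exact procedure of any particular implementation.

import numpy as np

def correlation_surface(residual, fingerprint):
    # Circular cross-correlation via the FFT; both inputs are 2-D arrays
    # of the same size and are zero-meaned before correlation.
    R = np.fft.fft2(residual - residual.mean())
    F = np.fft.fft2(fingerprint - fingerprint.mean())
    return np.real(np.fft.ifft2(R * np.conj(F)))

def pce(residual, fingerprint, exclude=5):
    # Peak-to-correlation energy: squared correlation peak divided by the
    # average squared correlation outside a (2*exclude+1)^2 neighborhood
    # around the peak location.
    cc = correlation_surface(residual, fingerprint)
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    rows = np.arange(peak[0] - exclude, peak[0] + exclude + 1) % cc.shape[0]
    cols = np.arange(peak[1] - exclude, peak[1] + exclude + 1) % cc.shape[1]
    mask = np.ones(cc.shape, dtype=bool)
    mask[np.ix_(rows, cols)] = False
    return cc[peak] ** 2 / np.mean(cc[mask] ** 2)

# A match is typically declared when the PCE exceeds a preset threshold,
# e.g., the value of 60 commonly used for photographic images.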

Fig. 5.1 The processing steps involved in the imaging pipeline of a camera when generating a video. The highlighted boxes are specific to video capture

5.2 Challenges in Attributing Videos

The now well-established PRNU estimation approach was devised considering in-camera processing relevant to the generation of photographic images. There are, however, several obstacles that prevent its direct application to the sequence of pictures that comprise a video. These difficulties can be summarized as follows.

Video frames have a lower resolution than the full-sensor resolution typically used for acquiring photos. Therefore, a reference PRNU pattern estimated from photos provides a more comprehensive characteristic of the sensor. Hence, the use of a photo-based reference pattern for attribution requires the mismatch with the dimensions of video frames to be taken into account. Alternatively, one might consider creating a separate reference PRNU pattern at all frame sizes. This, however, may not always be possible, as videos can be captured at many resolutions (Tandogan et al. 2019). Moreover, in the case of smartphones, the recording resolution may vary from one camera app to another based on the preference of developers among a set of supported resolutions. Overall, the downsizing operation in a camera involves various proprietary hardware and software mechanisms that combine cropping and resizing. Therefore, source attribution from videos requires the determination of these device-dependent downsizing parameters. When they are not known a priori, PRNU estimation also has to incorporate a search for these parameters.

The other challenge concerns the fact that observed PCE values in matching PRNU patterns are much lower for videos than for photos. This decrease in PCE values is primarily caused by the downsizing operation and video compression. More critically, low PCE values are encountered even when the PRNU pattern is estimated from multiple frames of the video whose source is in question, i.e., reference-to-reference matching. When performing reference-to-reference matching, downsizing can be ignored as a factor as long as the resizing factor is higher than 1/6 (Tandogan et al. 2019), and compression is the main concern.
In this setting, at medium-to-low compression levels (2 Mbps to 600 Kbps coding bitrates), the average PCE value drops significantly as compression becomes more severe, with PCE values falling from around 2000 to 40 (Altinisik et al. 2020). Alternatively, when PRNU patterns from video frames are individually matched with the reference pattern (i.e., frame-to-reference matching), even downsizing by a factor of two causes a significant reduction in measured PCE values (Bondi et al. 2019). Frame-to-reference matching at HD-sized frames has been found to yield PCE values mostly around 20 (Altinisik and Sencar 2021). Correspondingly, a further issue concerns the difficulty of setting a decision threshold for matching. Large-scale tests performed on photographic images show that setting the PCE threshold to 60 yields extremely few false matches while the correct-match rate remains quite high. In contrast, as demonstrated in the results of earlier works, where decision thresholds of 40–100 (Iuliani et al. 2019) and 60 (Mandelli et al. 2020) are utilized when performing frame-to-reference matching, such threshold values on video frames yield much lower attribution rates.

Another major challenge to video source attribution is posed by image stabilization. When performed electronically, stabilization requires determining and counteracting undesired camera motion. This involves the application of frame-level or sub-frame-level geometric transformations to align the content in successive pictures of a video. From the PRNU estimation standpoint, this requires registration of PRNU patterns by inverting stabilization transformations in a blind manner. This both increases the computational complexity of PRNU estimation and reduces accuracy due to an increase in false-positive matches generated during the search for unknown transformation parameters.

Although in-camera processing steps introduce artifacts that obstruct correct attribution, they can also be utilized to perform more reliable matching. For example, biases introduced to the PRNU estimate by the demosaicing operation and blockiness caused by compression are known to introduce periodic structures onto the estimated PRNU pattern. These artifacts can essentially be treated as pilot signals that carry clues about the processing history of a media after acquisition. In the case of photos, the linear pattern associated with the demosaicing operation has been shown to be effective in determining the amount of shift, rotation, and translation, albeit with weaker presence in newer cameras (Goljan 2018). In the case of videos, however, the linear pattern is observed to be even weaker, most likely due to the application of in-camera downsizing and more aggressive compression of video frames compared to photos. Therefore, it cannot be reliably utilized in identifying global or local transformations. In a similar manner, since video coding uses variable block sizes determined adaptively during encoding, as opposed to the fixed block size used by still image coding, the blockiness artifact is also not useful in reducing the computational complexity of determining the subsequent processing.

Obviously, the most important factor that underlies these challenges is the inability to access details of the in-camera processing steps due to strong competition between manufacturers. In the absence of such knowledge, and with continued advances in camera technology, research in this area has to treat in-camera processing largely as a black box and propose PRNU estimation methods that can cope with this ambiguity. As a result, the estimation of the PRNU from videos is likely to incur higher computational complexity and yield lower attribution accuracy.

5.3 Attribution of Downsized Media

Cameras capture images and videos at a variety of resolutions. This is typically done to accommodate different viewing options and support different display and print qualities. However, downsizing becomes necessary when taking a burst of shots or capturing a video in order to reduce the amount of data that needs processing. Furthermore, to perform complex operations like image stabilization, the sensor output is typically cropped and only the center portion is converted into an image.

The downsizing operation can be realized through different mechanisms. In the most common case, cameras capture data at full-sensor resolution, which is converted into an image and then resampled in software to the target resolution. An alternative or precursor to sub-sampling is on-sensor binning or averaging, which effectively obtains larger pixels. This may be performed either at the hardware level during acquisition, by electrically connecting neighboring pixels, or by averaging pixel values digitally immediately after analog-to-digital conversion. Therefore, as a result of resizing, pixel binning, and sensor cropping, photos and videos captured at different resolutions are likely to yield non-matching PRNUs.

How a camera downsizes the full-frame sensor output when capturing a photo or video is implemented as part of the in-camera processing pipeline and is proprietary to every camera maker. Therefore, the viable approach to understanding downsizing behavior is to acquire media at different resolutions and compare the resulting PRNUs to determine the best method for matching PRNU patterns at different resolutions. Given that smartphones and tablets have become the de facto camera today, an important concern is that the default camera interface, i.e., the native camera app, only exposes certain default settings and does not reveal all possible options that may potentially be selected by other camera apps utilizing the built-in camera hardware in a device. To address this limitation, Tandogan et al. (2019) used a custom camera app to discover the features of 21 Android-based smartphone cameras. This approach also allowed direct control over various camera settings, such as the application of electronic zoom, deployment of video stabilization, compression, and use of high dynamic range (HDR) imaging, which may interfere with the sought-after downsizing behavior. Most notably, it was determined that cameras can capture photos at 15–30 resolutions and videos at 10–20 resolutions, with newer cameras offering more options. A great majority of these resolutions were in 4:3 and 16:9 aspect ratios. To determine the active sensor pixels used during video acquisition, videos were captured at all supported frame resolutions and a photo was taken while recording continued.
This setting led to the finding that each photo is taken at the highest resolution of the same aspect ratio, which possibly indicates that frames are downscaled by a fixed factor before being encoded into a video. The maximum resolution of a video is typically much smaller than the full-sensor size; therefore, a larger amount of cropping and scaling has to be performed on video frames. Hence, the two most important questions related to in-camera downsizing are about its impact on the reliability of PRNU matching and how matching should be performed when two PRNU patterns are of different sizes.
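
As a schematic illustration of the on-sensor binning/averaging mechanism mentioned above, the toy function below averages non-overlapping 2 × 2 pixel neighborhoods of a full-resolution frame, which halves each spatial dimension and also averages out part of the per-pixel PRNU signal. This is only a simplified model of in-camera binning, whose actual implementation is proprietary and may differ between devices.

import numpy as np

def bin_2x2(frame):
    # Average non-overlapping 2x2 blocks; each spatial dimension is halved.
    h = frame.shape[0] // 2 * 2
    w = frame.shape[1] // 2 * 2
    f = frame[:h, :w].astype(np.float64)
    return (f[0::2, 0::2] + f[0::2, 1::2] + f[1::2, 0::2] + f[1::2, 1::2]) / 4.0

# A reference fingerprint estimated from full-resolution photos would have to be
# brought to the same geometry (cropped and/or scaled) before it can be compared
# with frames produced this way.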

5.3.1 The Effect of In-Camera Downsizing on PRNU

Investigating the effects of downsizing on camera ID verification, Tandogan et al. (2019) determined that an important consideration is knowledge of the specifics of the downsizing operation. When downscaling is performed using the well-known bilinear, bicubic, or Lanczos interpolation filters, all resizing factors higher than 1/12 (which reduces the number of pixels in the original image by up to a factor of 12²) are found to yield PCE values higher than the commonly used decision threshold of 60. However, when there is a mismatch between the method used for downscaling the reference PRNU pattern and the downsizing method deployed by a camera, the matching performance drops very fast. In this setting, even at a scaling factor of 1/4, estimated PRNUs could not be matched. Hence, it can be deduced that when the underlying method is known, downsizing does not pose a significant obstacle to source attribution despite the significant data loss. When it is not known, however, in-camera downsizing is disruptive to successful attribution even at relatively low downsizing factors.

5.3.2 Media with Mismatching Resolutions

The ability to reliably match PRNU patterns extracted from media at different resolutions depends on knowledge of how downsizing is performed. In essence, this reduces to determining the amount of cropping and scaling applied to the media at the lower resolution. Although implementation details of these operations may remain unknown, the area and position of cropping and an approximate scaling factor can be identified through a search. An important concern is whether the search for downsizing parameters should be performed in the spatial domain or the PRNU domain. Since downsizing is performed in the spatial domain, the search ideally has to be performed in the spatial domain, where identified parameters are validated based on the match of the estimated PRNU with the reference PRNU. In this search setting, each trial of the parameters in the spatial domain has to be followed by PRNU estimation. As downsizing parameters are determined through a brute-force search and the search space for the parameters can be large, this may be computationally expensive.
This is indeed a greater concern when it comes to inverting stabilization transformations, because the search has to be performed for each frame individually rather than once per camera model. Hence, an alternative approach is to perform the search directly on the PRNU patterns, removing the need for repetitive PRNU estimation. However, since the PRNU estimation process is not linear in nature, this change in the search domain is likely to introduce a degradation in performance. To quantify this, Altinisik et al. (2021) applied random transformations to a set of video frames, estimated PRNU patterns, inverted the transformation, and evaluated the match with the corresponding reference patterns. The resulting PCE values, when compared to performing transformation and inversion in the spatial domain (to also take into account the effect of transformation-related interpolation), showed that the search for parameters in the PRNU domain yields mostly acceptable results.

Source attribution under geometric transformations has been studied previously. Considering scaled and cropped photographic images, Goljan et al. (2008) demonstrated that the alignment between PRNU patterns can be re-established through a search of transformation parameters. For this, the PRNU pattern obtained from the image in question is upsampled in discrete steps and matched with the reference PRNU at all shifts. The parameter values that yield the highest PCE are identified as the correct scaling factor and cropping position. In a similar manner, a PRNU pattern extracted from a higher resolution video can be compared with PRNU patterns extracted from low-resolution videos. With this goal, Tandogan et al. (2019), Iuliani et al. (2019), and Taspinar et al. (2020) examined in-camera downsizing behavior to enable reliable matching of PRNU patterns at different resolutions. Essentially, to determine how scaling and cropping are performed, the high-resolution PRNU pattern is downscaled and a search is performed to determine cropping boundaries by computing the normalized cross-correlation (NCC) between the two patterns. This process is repeated by downsizing the high-resolution pattern in decrements and performing the search until the alignment is established. Alternatively, Bellavia et al. (2021) proposed a content alignment approach. That is, instead of aligning PRNU patterns extracted from photos and pictures of a video captured independently, they used pictures and videos of a static scene and estimated the scaling and cropping parameters through keypoint registration.

The presence of electronic stabilization poses two complications to this process. First, since stabilization transformations vary from frame to frame, the above search may at best (i.e., only when the stabilization transformation is a frame-level affine transformation) identify the combined effect, and the downsizing-related cropping and scaling can only be approximately determined. Second, the search has to be performed at the frame level (frame-to-reference matching), where the PRNU strength will be significantly weaker than when using a reference pattern obtained from multiple frames (reference-to-reference matching). Therefore, the process will be more error-prone.
Ultimately, since downsizing-related parameters (i.e., scaling ratio and cropping boundaries) vary at the camera model level, the above approach can be used to generate a dictionary of parameters that specifies how in-camera downsizing is performed for each model, using videos acquired under controlled conditions to prevent interference from stabilization.
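
The decrement-wise scale-and-crop search described above can be sketched as follows. The scale grid, the resampling routine, and the global (rather than per-window) normalization of the correlation are simplifications made for brevity, so the snippet should be read as an outline of the general procedure rather than a reimplementation of any of the cited methods.

import numpy as np
from scipy.ndimage import zoom
from scipy.signal import fftconvolve

def scale_crop_search(ref_hi, fp_lo, scales):
    # Downscale the high-resolution reference fingerprint by each candidate
    # factor, slide the low-resolution fingerprint over it, and keep the
    # scale and offset giving the highest (approximately normalized)
    # cross-correlation value.
    best = {"ncc": -np.inf, "scale": None, "offset": None}
    q = fp_lo - fp_lo.mean()
    for s in scales:
        ref_s = zoom(ref_hi, s)  # candidate downscaling of the reference
        if ref_s.shape[0] < q.shape[0] or ref_s.shape[1] < q.shape[1]:
            continue
        r = ref_s - ref_s.mean()
        corr = fftconvolve(r, q[::-1, ::-1], mode="valid")  # cross-correlation
        corr /= np.linalg.norm(r) * np.linalg.norm(q) + 1e-12
        idx = np.unravel_index(np.argmax(corr), corr.shape)
        if corr[idx] > best["ncc"]:
            best = {"ncc": float(corr[idx]), "scale": float(s), "offset": idx}
    return best

# Example usage: search scaling factors in small decrements, e.g.,
# scale_crop_search(ref_hi, fp_lo, scales=np.arange(1.0, 0.3, -0.01))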

5.4 Mitigation of Video Coding Artifacts

As a raw video is a sequence of images, its size is impractically large to store or transfer. Video compression exploits the fact that frames in a video sequence are highly correlated in time and aims to reduce the spatial and temporal redundancy so that as few bits as possible are used to represent the video sequence. Consequently, compression-related information loss causes much more severe artifacts in videos compared to those in photos. Currently, the most prevalent video compression standards are H.264/AVC and its recent descendant H.265/HEVC.

The ability to cope with video coding artifacts crucially requires addressing two problems. The first relates to the disruptive effects of a filtering operation, the loop filter, deployed by video codecs to suppress artifacts arising from the block-based operation of video coding standards. Proposed solutions have either tried to compensate for such effects in post-processing or to eliminate them by intervening in the decoding process. The second problem concerns compression-related information loss. Several approaches have been proposed to combat the adverse effects of compression by incorporating frame- and block-level encoding information into the PRNU estimation.

5.4.1 Video Coding from Attribution Perspective

The widely used PRNU estimation method tailored for photographic images in fact takes compression artifacts into account. To combat them, the Fourier domain representation of the PRNU pattern is Wiener filtered to remove all structural patterns such as those caused by blockiness. However, video coding has several important differences from still image coding (Richardson 2011; Sze et al. 2014), and the PRNU estimation approach for videos should take these differences into account.

Block-based operation: Both the H.264 and H.265 video compression standards operate by dividing a picture into smaller blocks. Each coding standard has different naming and size limits for those blocks. In H.264, the largest blocks are called macroblocks and they are 16 × 16 pixels in size. A macroblock can be further divided into smaller sub-macroblock regions that are as small as 4 × 4 pixels. The H.265 format is an improved version of H.264 and is designed with parallel processing in mind. The sizes of container blocks in H.265 can vary from 64 × 64 to 16 × 16 pixels. These container blocks can be recursively divided into smaller blocks (down to the size of 8 × 8 pixels), which may be further divided into sub-blocks of 4 × 4 pixels during prediction. For the sake of brevity, we refer to all block types as macroblocks even though the term macroblock no longer exists in the H.265 standard.

Frame types: A frame in an H.264 or H.265 video can have three types: I, P, and B. An I frame is self-contained and temporally independent. In this sense, it can be likened to a JPEG image, although the two are quite different in many aspects, including variable block sizes, the use of prediction, an integer transformation derived from a form of the discrete cosine transform (DCT), variable quantization, and the deployment of a loop filter.
The macroblocks in a P frame can use macroblocks of previous I or P frames to find the best prediction. Encoding of a B frame provides the same flexibility; moreover, future I and P frames can also be utilized when predicting B frame macroblocks. A video is composed of a sequence of frames that may exhibit a fixed or varying pattern of I, P, and B frames.

Block types: Similar to frames, macroblocks can be categorized into three types, namely I, P, and B. An I macroblock is intra-coded with no dependence on future or previous frames, and I frames contain only this type of block. A P macroblock can be predicted from a previous P or I frame, and P frames can contain both types of blocks. Finally, a B macroblock can be predicted using one or two frames of I or P types, and a B frame can contain all three types of blocks. If a block happens to have the same motion vectors as its preceding block and few or no residuals, it is skipped, since it can be reconstructed at the decoder side from the preceding block information. Such blocks inherit the properties of their preceding blocks and are called skipped blocks. Skipped blocks may appear in P and B frames only.

Rate-distortion optimization: During encoding, each picture block is predicted either from the neighboring blocks of the same frame (intra-prediction) or from the blocks of past or future frames (inter-prediction). Therefore, each coded block contains the position information of the reference picture block (motion vector) and the difference between the current block and the reference block (residual). This residual matrix is first transformed to the frequency domain and then quantized. This is also the stage where information loss takes place. There is ultimately a trade-off between the amount of data used for representing a picture (data rate) and the quality of the reconstructed picture. Finding an optimum balance has long been a research topic in video coding. The approach taken by the H.264 standard is based on rate-distortion optimization (RDO). According to the RDO algorithm, the relationship between the rate and the distortion is defined as

    J = D + λR,    (5.1)

where D is the distortion introduced to a picture block, R is the number of bits required for its coding, λ is a Lagrangian multiplier computed as a function of the quantization parameter QP, and J is the rate-distortion (RD) cost to be minimized. The value of J obviously depends on several coding parameters, such as the block type (I, P, or B), the intra-prediction mode (samples used for extrapolation), the number of pairs of motion vectors, the sub-block size (4 × 4, 8 × 4, etc.), and the QP. Each parameter combination is called a coding mode of the block, and the encoder picks the mode m that yields the minimum J, as given in

    m = argmin_i J_i,    (5.2)

where the index i ranges over all possible coding modes. It must be noted that the selection of the optimum mode in Eq. (5.2) is performed on the encoder side. That is, the encoder knows all variables involved in the computation of Eq. (5.1). The decoder, in contrast, only has knowledge of the block type, the rate (R), and the λ value, and is oblivious to the D and J values. Overall, due to the RDO, at one end of the distortion spectrum are the intra-coded blocks, which undergo light distortion and yield the highest number of bits, and at the other end are the skipped blocks, which have the same motion vectors and parameters (i.e., the quantization parameter and the quantized transform coefficients) as their spatially preceding blocks, thereby requiring only a few bits to encode at the expense of a higher distortion.

Quantization: In video coding, the transformation and quantization steps are designed to minimize computational complexity so that they can be performed using limited-precision integer arithmetic. Therefore, an integer transform is used, which is then merged with quantization in a single step so that it can be realized efficiently in hardware. As a result of this optimization, the actual quantization step size cannot be selected by the user but is controlled indirectly as a function of the above-stated quantization parameter, QP. In practice, the user mostly decides on the bitrate of the coded video, and the encoder varies the quantization parameter so that the intended rate can be achieved. Thus, the quantization parameter may change several times when coding a video. Moreover, a uniform quantizer step is applied to every coefficient in a 4 × 4 or 8 × 8 block by default, although frequency-dependent quantization can also be performed depending on the selected compression profile.

Loop filtering: The block-wise quantization performed during encoding yields a blockiness effect on the output pictures. Both coding standards deploy an in-loop filtering operation to suppress this artifact by applying a spatially variant low-pass filter to smooth the block boundaries. The strength of the filter and the number of pixels affected by filtering depend on several factors, such as being at the boundary of a macroblock, the amount of quantization applied, and the gradient of image samples across the boundary. It must be noted that the loop filter is also deployed during encoding to ensure synchronization between encoder and decoder. In essence, the decoder reconstructs each macroblock by adding a residual signal to a reference block identified by the encoder during prediction. In the decoder, however, all the previously reconstructed blocks will have been filtered. To avoid such a discrepancy, the encoder also performs prediction assuming filtered block data rather than using the original macroblocks. The only exception is intra-frame prediction, where extrapolated neighboring pixels are taken from an unfiltered block.

Overall, the reliability of a PRNU pattern estimated from a picture of a video depends on the amount of distortion introduced during coding. This distortion is mainly induced by the RDO algorithm at the block level and by the loop filtering at block boundaries, depending on the strength of compression. Therefore, the disruptive effects of these operations must be countered.
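
To make the mode decision of Eqs. (5.1) and (5.2) concrete, the toy snippet below evaluates the rate-distortion cost for a handful of hypothetical candidate modes and picks the cheapest one; the mode names, distortion values, rates, and λ are invented purely for illustration.

# Hypothetical candidate coding modes for one block: (name, distortion D, rate R in bits).
candidates = [
    ("intra_4x4", 2.0, 220),    # light distortion, many bits
    ("inter_16x16", 5.0, 60),   # moderate distortion, fewer bits
    ("skip", 9.0, 1),           # highest distortion, almost no bits
]
lam = 0.05  # Lagrangian multiplier; in a real encoder it is derived from the QP

# The encoder keeps the mode minimizing J = D + lambda * R (Eqs. 5.1-5.2).
best_mode = min(candidates, key=lambda mode: mode[1] + lam * mode[2])
print(best_mode[0])  # which mode wins depends on lambda, i.e., on the QP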

5.4.2 Compensation of Loop Filtering

The strength of the blocking effect at the block boundaries of a video frame is crucially determined by the level of compression. In high-quality videos, this artifact will be less apparent, so from a PRNU estimation point of view the presence of loop filtering will be of less concern. For low-quality videos, however, the application of loop filtering will remove a significant portion of the PRNU pattern, thereby making source attribution less reliable (Tan et al. 2016).

Earlier research explored this problem by focusing on the removal of blocking and ringing noise (Chen et al. 2007). Assuming most codecs use fixed 16 × 16 sized macroblocks, a frequency domain method was proposed to remove PRNU components that exhibit periodicity. To better suppress compression artifacts, Hyun et al. (2012) proposed an alternative approach utilizing a minimum average correlation energy (MACE) filter, which essentially minimizes the average correlation plane energy, thereby yielding higher PCE values. By applying a MACE filter to the estimated pattern, a 10% improvement in identification accuracy is obtained compared to the results of Chen et al. (2007). Later, Altinisik et al. (2018) took a proactive approach that intervenes in the decoding process rather than performing post-estimation processing.

A simplified decoding schema for the H.264 and H.265 codecs is given in Fig. 5.2. In this model, the coded bit sequence is run through entropy decoding, dequantization, and inverse transformation steps before the residual block and motion vectors are obtained. Finally, video frames are reconstructed through the recursive motion compensation loop. The loop filtering is the last processing step before the visual presentation of a frame.

Fig. 5.2 Simplified schema of decoding for H.264 and H.265 codecs. In the block diagram, for H.264, the filtering block represents the deblocking loop filter and for H.265, it involves a similar deblocking filter cascaded with the sample adaptive offset filter

Although the loop filter primarily belongs to the decoder side, it is also deployed on the encoder side to remain compatible with the decoder. On the encoder side, a decoded picture buffer is used for storing reference frames, and those pictures are all loop filtered. Inter-prediction is carried out on loop-filtered pictures. It can be seen in Fig. 5.2 that unfiltered blocks are used for intra-prediction. Since I type macroblocks, which appear in all frames, are coded using intra-frame prediction, bypassing the loop filter during decoding will not introduce any complications for them. However, for P and B type macroblocks, since the encoder performs prediction assuming filtered blocks, simply removing the loop filter at the decoder will not yield a correct reconstruction. To compensate for this behavior, which affects P and B frames, the decoding process must be modified to reconstruct both filtered and non-filtered versions of each macroblock. Filtered macroblocks must be used for reconstructing future macroblocks, and non-filtered ones need to be used for PRNU estimation.²

It must be noted that due to this modification, video frames will now exhibit blocking artifacts. However, since (sub-)macroblock sizes are not deterministic and the partitioning of macroblocks into smaller blocks is mainly decided by the RDO algorithm, PRNU estimates containing blocking artifacts are very unlikely to exhibit a structure that is persistent across many consecutive frames. Hence, estimation of a reference PRNU pattern from several video frames will suppress these blocking artifacts through averaging.

² An implementation of loop filter compensation for the H.264 and H.265 decoding modules in the FFMPEG libraries can be obtained at https://github.com/VideoPRNUExtractor/LoopFilterCompensation.

5.4.3 Coping with Quantization-Related Weakening of PRNU

The quantization operation applied to the prediction error of picture blocks is the key contributing factor to the weakening of the underlying PRNU pattern. A relative comparison between the effects of video and still image compression on PRNU estimation is provided in Altinisik et al. (2020) in terms of the change in the mean squared error (MSE) and PCE values computed between a set of original and compressed images. Table 5.1 shows the compression parameter pairs, i.e., QP for H.264 and QF for JPEG, which yield similar MSE and PCE values. It can be seen that H.264 coding, in comparison to JPEG coding, maintains a relatively high image quality while rapidly weakening the inherent PRNU pattern. Essentially, these measurements show that what would be considered a medium level of video compression is equivalent to heavy JPEG compression, which indicates why the conventional method of PRNU estimation is not very effective on videos.

Table 5.1 A comparative evaluation of JPEG and H.264 encoding in terms of MSE and PCE

  Equalized MSE           Equalized PCE
  QF (JPEG)  QP (H.264)   QF (JPEG)  QP (H.264)
  92         5            92         5
  92         10           90         10
  91         15           84         15
  90         18           77         18
  89         21           66         21
  87         24           50         24
  82         27           33         27
  77         30           22         30
  67         35           10         35
  61         40           3          40
  57         45           1          45
  57         50           1          50

Several studies have proposed ways to tackle this problem. In van Houten and Geradts (2009), the authors studied the effectiveness of the PRNU estimation and detection process on singly and multiply compressed videos. For this purpose, videos captured by webcams were compressed using a variety of codecs at varying resolutions and matched with reference PRNU patterns estimated from raw videos. The results of that study highlight the dependence of successful attribution on knowledge of the encoding settings and the non-linear relationship between quality and the detection statistic.

Chuang et al. (2011) explored the use of different types of frames in PRNU estimation, found that I frames yield more reliable PRNU patterns than predicted P and B frames, and suggested giving a higher weight to the contribution of I frames during estimation. In Altinisik et al. (2018), macroblock-level compression information is incorporated into the estimation by masking out heavily compressed blocks. A similar masking approach was proposed by Kouokam et al. (2019), in which blocks that lack high-frequency content in the prediction error are eliminated from PRNU estimation. Later, Altinisik et al. (2020) examined the relationship between the quantization parameter of a block and the reliability of the PRNU pattern and introduced a block-level weighting scheme that adjusts the contribution of each block accordingly. To better utilize block-level parameters, the same authors in Altinisik et al. (2021) considered different PRNU weighting schemes based on the encoding block type and the coding rate of a block to provide a more reliable estimate of the PRNU pattern.

Since encoding is performed at the block level, its weakening effect on the PRNU pattern must also be compensated at the block level, essentially by weighting the contribution of each block in accordance with the level of distortion it has undergone. It must be emphasized that the block-based PRNU weighting approach disregards the picture content and only utilizes decoding parameters. Hence, other approaches proposed to enhance the strength of the PRNU by suppressing content interference (Li 2010; Kang et al. 2012; Lin and Li 2016) and improving the denoising performance (Al-Ani et al. 2015; Zeng and Kang 2016; Kirchner and Johnson 2019) are complementary to it.

The PRNU weighting approach can be incorporated into the conventional PRNU estimation method (Chen et al. 2008) by simply introducing a block-wise mask for each picture. When obtaining a reference pattern from an unstabilized (or very lightly stabilized) video, this can be formulated as

    K = ( Σ_{i=1}^{N} I_i × W_i × M_i ) / ( Σ_{i=1}^{N} I_i² × M_i ),    (5.3)

where K is the sensor-specific PRNU factor computed using a set of video pictures I_1, …, I_N; W_i represents the noise residue obtained after denoising picture I_i; and M_i is a mask that appropriately weighs the contribution of each block based on block-level parameters. It must be noted that the weights that comprise the mask can take values higher than one; therefore, to prevent a blow-up, the mask term is also included in the denominator as a normalization factor. When the video is stabilized, however, Eq. (5.3) cannot be used as is, since each picture would have been subjected to a different stabilization transformation. Therefore, each frame first needs to be inverse transformed, and Eq. (5.3) must be evaluated only after applying the identified inverse transformation to both I_i and the mask M_i.

Ultimately, the distortion induced to a block as a result of the RDO serves as the best estimator for the strength of the PRNU. This distortion term, however, is unknown at the decoder. Therefore, when devising a PRNU weighting scheme, the available block-level coding parameters must be used. This essentially leads to several weighting schemes based on masking out skipped blocks, the quantization strength, and the coding bitrate of a block. Developing a formulation that relates these block-level parameters directly to the strength of the PRNU is in general analytically intractable. Therefore, one needs to rely on measurements obtained by changing coding parameters in a controlled manner and determining how the accuracy of attribution based on the estimated PRNU pattern is affected.

Eliminating Skipped Blocks: Skipped blocks do not carry any block residual information. Hence, they provide larger bitrate savings at the expense of a higher distortion when compared to other block types. As a consequence, the PRNU pattern extracted from skipped blocks will be the least reliable. Nevertheless, the distortion introduced by substituting a skipped block with a reference block cannot be arbitrarily high, as it is bounded by the RDO algorithm, Eq. (5.1). This weighting scheme, therefore, deploys a binary mask M where all pixel locations corresponding to a skipped block are set to zero and those corresponding to coded blocks are assigned a value of one. A major concern with this scheme is that at lower bitrates, skipped blocks are expected to be encountered much more frequently than other types of blocks, thereby potentially leaving very few blocks from which to reliably estimate a PRNU pattern. The rate of skipped blocks in a video, i.e., the ratio of the number of skipped blocks to the total number of blocks, as a function of the coding bitrate is investigated in Altinisik et al. (2021). Accordingly, it is determined that for videos with a bitrate lower than 1 Mbps, more than 70% of the blocks are skipped during coding. This finding indicates that unless the video is extremely short in duration or encoded at a very low bitrate, elimination of skipped blocks will not significantly impair the PRNU estimation.

Quantization-Based PRNU Weighting: Quantization of the prediction residue associated with a block is the main reason behind the weakening of the PRNU pattern during compression. To exploit this, Altinisik et al. (2020) used the strength of quantization as the basis of a weighting scheme and empirically obtained a relation between the quantization parameter QP and the PCE, which reflects the reliability of the estimated PRNU pattern.
The underlying relationship is obtained by re-encoding a set of high-quality videos at all possible (i.e., 51) QP levels. Then, average PCE values between video pictures coded at different QP values and the camera reference pattern are computed separately for each video. To suppress the camera-dependent variation in PCE values, the obtained average PCE values are normalized with respect to the PCE value obtained for a fixed QP value. The resulting normalized average PCE values are further averaged across all videos to obtain a camera-independent relationship between PCE and QP values. Finally, the obtained QP-PCE relationship is converted into a weighting function by taking into account the formulation of the PCE metric, as shown in Fig. 5.3 (red curve). Correspondingly, the frame-level mask M in Eq. (5.3) is obtained by filling all pixel locations corresponding to a block with a particular QP value with the respective values on this curve.

Fig. 5.3 Quantization-based weighting functions for PRNU blocks depending on QP values when skipped blocks are included (red curve) and eliminated (blue curve)

Quantization-Based PRNU Weighting Without Skipped Blocks: Another weighting scheme, which follows from the previous two, is the quantization-based weighting of non-skipped blocks. Repeating the same evaluation process for quantization-based PRNU weighting while excluding skipped blocks, the weighting function given in Fig. 5.3 (blue curve) is obtained (Altinisik et al. 2021). As can be seen, both curves exhibit similar characteristics, although in this case the weights vary over a smaller range. This indicates that the strength of the PRNU estimated across coded blocks is less variable, mainly because eliminating skipped blocks results in higher PCE values at higher QP values, where compression is more severe and skipped blocks are more frequent. Therefore, when the resulting QP-PCE values are normalized with respect to a fixed PCE value, a more compact weight function is obtained. This new weight function is used in a similar way when creating a mask for each video picture, with one difference: for skipped blocks, the corresponding mask values are set to zero regardless of the QP value of the block.

Coding Rate (λR)-Based PRNU Weighting: Although the distortion introduced to a block can be inferred from its type or from the strength of the quantization it has undergone (QP), the correct evaluation of the reliability of the PRNU ultimately requires incorporating the RDO algorithm. Since the terms D and J in Eq. (5.1) are unknown at the decoder, Altinisik et al. (2021) considered using λR, which can be determined from the number of bits spent for coding a block and the block's QP. Similar to the previous schemes, the relationship between λR and PCE is empirically determined by re-encoding raw-quality videos at 42 different bitrates varying between 200 Kbps and 32 Mbps, and obtaining a camera-independent weighting function, displayed in Fig. 5.4, that is used to create a mask for weighting PRNU blocks.


Fig. 5.4 The weighting function based on λR obtained using pictures of 20 re-encoded videos at 42 different bitrates

Fig. 5.5 Average PCE values for 47 high-quality videos encoded at 11 different bitrates ranging from (a) 600 kbps to 1200 kbps and (b) 1500 kbps to 4000 kbps

Comparison: Tests performed on a set of videos coded at different bitrates showed that block-based PRNU weighting schemes yield a better estimate of the PRNU, both in terms of the average improvement in PCE values and in the number of videos that yield a PCE value above 60, when compared to the conventional method, even after performing loop filter compensation (Altinisik et al. 2021). It can be observed that the improvement in PCE values due to PRNU weighting becomes more visible at bitrates higher than 900 Kbps, as shown in Fig. 5.5. Overall, at all bitrates, the rate associated with a block is determined to be a more reliable estimator for the strength of the extracted PRNU pattern.


Fig. 5.6 ROC curves obtained by matching PRNU patterns obtained from each test video using different estimation methods with matching and non-matching reference PRNU patterns

Similarly, the performance of the QP-based weighting scheme consistently improves when skipped blocks are eliminated from the estimation process. It was also found that at very high compression rates (at coding rates less than 0.024 bits per pixel per frame), elimination of skipped blocks performs comparably to a coding rate-based weighting scheme. This finding essentially indicates that at high compression rates, PRNU patterns extracted from coded blocks are all equally weakened, and a block-based weighting approach does not provide an additional gain. Tests also showed that the improvement in performance due to block-based PRNU weighting does not come at the expense of an increased number of false-positive matches, as demonstrated by the resulting ROC curves presented in Fig. 5.6. The gap between the two curves that correspond to the conventional method (tailored for photographic images) and the rate-based PRNU weighting scheme also reveals the importance of incorporating video coding parameters into PRNU estimation.

5.5 Tackling Digital Stabilization

Most cameras deploy digital stabilization while recording a video. With digital stabilization, frames captured by the sensor are moved and warped to align with one another through in-camera processing. Stabilization can also be applied externally, with greater freedom in processing, on a computer using one of several video editing tools or in the cloud, as is done by YouTube. In all cases, however, the application of such frame level processing introduces asynchronicity among the PRNU patterns of consecutive frames of a video. Attribution of digitally stabilized videos, thus, requires an understanding of the specifics of how stabilization is performed. This, however, is a challenging task as the inner workings and technical details of the processing steps of camera pipelines are usually not revealed. Nevertheless, at a high level, the three main steps of digital stabilization involve camera motion estimation, motion smoothing, and alignment of video frames according to the corrected camera motion.


Motion estimation is performed either by describing the geometric relationship between consecutive frames through a parametric model or by tracking key feature points across frames to obtain feature trajectories (Xu et al. 2012; Grundmann et al. 2011). With sensor-rich devices such as smartphones and tablets, data from motion sensors are also utilized to improve the estimation accuracy (Thivent et al. 2018). This is followed by the application of a smoothing operation to the estimated camera motion or the obtained feature trajectories to eliminate the unwanted motion. Finally, each frame is warped according to the smoothed motion parameters to generate the stabilized video. The most critical factor in stabilization is whether the camera motion is represented by a two-dimensional (2D) or three-dimensional (3D) model. Early methods mainly relied on the 2D motion model, which involves the application of full-frame 2D transformations, such as affine or projective models, to each frame during stabilization. Although this motion model is effective for scenes far away from the camera, where parallax is not a concern, it cannot be generalized to more complicated scenes captured under spatially variant camera motion. To overcome the limitations of 2D modeling, more sophisticated methods have considered 3D motion models. However, due to difficulties in 3D reconstruction, which requires depth information, these methods introduce simplifications to the 3D structure and rely heavily on the accuracy of feature tracking (Liu et al. 2009, 2014; Kopf 2016; Wang et al. 2018). Most critically, these methods apply spatially variant warping to video frames in a way that protects the content from distortions introduced by such local transformations. In essence, digital image stabilization tries to align content in successive pictures through geometric registration, and the details of how this is performed depend on the complexity of camera motion during capture. This may vary from the application of a simple Euclidean transformation (scale, rotation, and shift applied individually or in combination) to a spatially varying warping transformation that compensates for any type of perspective distortion. As these transformations are applied on a per-frame basis and the variance of camera motion is high enough to easily remove pixel-to-pixel correspondences among frames, alignment or averaging of frame level PRNU patterns will not be very effective in estimating a reference PRNU pattern. Therefore, performing source attribution on a stabilized video requires blindly determining and inverting the transformations applied to each frame. At the simplest level, an affine motion model can be assumed. However, the deficiency of affine transformation-based stabilization solutions has long been a motivation for research in the field of image stabilization. Furthermore, real-time implementation of stabilization methods has always been a concern. With the continuous development of smartphones and each year's models packing more computing power, it is hard to make an authoritative statement on the state of stabilization algorithms. In order to develop a better understanding, we performed a test using the Apple iMovie video editing tool that runs on Mac OS computers and iOS mobile devices. With this objective, we shot a video by panning the camera around a still indoor scene while stabilization and electronic zoom were turned off.
The video was then stabilized by iMovie at a 10% stabilization setting, which determines the maximum amount of cropping that can be applied to each frame during alignment.


Fig. 5.7 KLT feature point displacements in two sample frames in a video stabilized using the iMovie editing tool at 0.1 stabilization setting. Displacements are measured in reference to stabilized frames

To evaluate the nature of warping applied to video frames, we extracted Kanade– Lucas–Tomasi (KLT) reference feature points, which are frequently used to estimate the motion of key points (Tomasi and Kanade 1991). Then, displacements of KLT points in pre- and post-stabilized video frames were measured. Figure 5.7 shows the optical flows estimated using KLT points for two sample frames. It can be seen that KLT points in a given locality move similarly, mostly inwards due to cropping and scaling. However, a single global transformation that will cover the movement of all points seems unlikely. In fact, our attempts to determine a single warp transformation to map key points in successive frames failed with only 4–5, out of the typical 50, points resulting in a match. This finding overall supports the assumption that digital image stabilization solutions available in today’s cameras may be deploying sophisticated methods that go beyond frame level affine transformations. To take into account more complex stabilization transformations, projective transformations can be utilized. A projective transformation can be represented by a 3 × 3 matrix with 8 free parameters that specify the amount of rotation, scaling, translation, and projection applied to a point in a two-dimensional space. In this sense, affine transformations form a subset of all such transformations without the two-parameter projection vector. Since a transformation is performed by multiplying the coordinate vector of a point with the transformation matrix, its inversion requires determination of all these parameters. This problem is further exacerbated when transformations are applied in a spatially variant manner. Since in that case different parts of a picture will have undergone different transformations, transform inversion needs to be performed at the block level where encountered PCE values will be much lower.
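A lightweight version of this check is sketched below, assuming OpenCV is available: KLT-style feature points are tracked from a frame of the unstabilized capture to its stabilized counterpart, a single projective transformation is fitted with RANSAC, and a low inlier ratio is taken as evidence that no single frame-level warp explains the observed displacements. File names, thresholds, and the 50-point budget are illustrative.

```python
import cv2
import numpy as np

def global_warp_inlier_ratio(frame_raw, frame_stab, max_pts=50):
    """Track Shi-Tomasi/KLT points between a raw and a stabilized frame and
    test how well a single homography explains their displacements."""
    g0 = cv2.cvtColor(frame_raw, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame_stab, cv2.COLOR_BGR2GRAY)

    p0 = cv2.goodFeaturesToTrack(g0, maxCorners=max_pts, qualityLevel=0.01, minDistance=30)
    if p0 is None:
        return 0.0
    p1, status, _ = cv2.calcOpticalFlowPyrLK(g0, g1, p0, None)

    good0 = p0[status.ravel() == 1].reshape(-1, 2)
    good1 = p1[status.ravel() == 1].reshape(-1, 2)
    if len(good0) < 4:
        return 0.0

    # Fit one projective transformation to all tracked points
    H, inliers = cv2.findHomography(good0, good1, cv2.RANSAC, ransacReprojThreshold=2.0)
    if inliers is None:
        return 0.0
    return float(inliers.sum()) / len(good0)

# Example: a ratio well below 1 hints at spatially variant warping
# ratio = global_warp_inlier_ratio(cv2.imread("raw.png"), cv2.imread("stab.png"))
```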

5.5.1 Inverting Frame Level Stabilization Transformations

Attribution for images subjected to geometric transformations has been studied earlier in a broader context.


Considering scaled and cropped photographic images, Goljan and Fridrich (2008) proposed a brute-force search for the geometric transform parameters to re-establish alignment between PRNU patterns. To achieve this, the PRNU pattern obtained from the image in question is upsampled in discrete steps and matched with the reference PRNU at all shifts. The values that yield the highest PCE are identified as the correct scaling factor and cropping position. More relevantly, by focusing on panoramic images, Karaküçük et al. (2015) investigated source attribution under more complex geometric transformations. Their work showed the feasibility of estimating inverse transform parameters for projective transformations. In the case of stabilized videos, Taspinar et al. (2016) proposed determining the presence of stabilization in a video by extracting reference PRNU patterns from the beginning and end of the video and testing whether the two patterns match. If stabilization is detected, one of the I frames is designated as a reference and an attempt is made to align the other I frames to it through a search over inverse affine transformations to correct for the applied shift and rotation. The pattern obtained from the aligned I frames is then matched with a reference PRNU pattern obtained from a non-stabilized video by performing another search. Iuliani et al. (2019) introduced another source verification method, similar to Taspinar et al. (2016), that also considers mismatches in resolution, thereby effectively identifying stabilization and downsizing transformations in a combined manner. To perform source verification, 5–10 I frames are extracted and the corresponding PRNU patterns are aligned with the reference PRNU pattern by searching for the correct amount of scale, shift, and cropping applied to each frame. Those frames that yield a matching statistic above some predetermined PCE value are combined together to create an aligned PRNU pattern. Tests performed on videos captured by 8 cameras in the VISION dataset, using the first five frames of each video, revealed that 86% of videos captured by cameras that support stabilization in the reduced dataset can be correctly attributed to their source with no false positives. This method was also shown to be effective on a subset of videos downloaded from YouTube, with an overall accuracy of 87.3%. To extract a reference PRNU pattern from a stabilized video, Mandelli et al. (2020) presented a method that models stabilization transformations as affine transformations. In this approach, PRNU estimates obtained from each frame are matched with other frames in a pair-wise manner to identify those translated with respect to each other. Then the largest group of frames that yield a sufficient match are combined together to obtain an interim reference PRNU pattern, and the remaining frames are aligned with respect to this pattern. Alternatively, when the reference PRNU pattern at a different resolution is already known, it is used as a reference and the PRNU patterns of all other frames are matched against it by searching for transformation parameters. For verification, five I frames extracted from the stabilized video are matched to this reference PRNU pattern. If the resulting PCE value for at least one of the frames is observed to be higher than the threshold, a match is assumed to be achieved.
The results obtained on the VISION dataset show that the method is effective in successfully attributing 71% and 77% of the videos captured by cameras that support stabilization, with a 1% false-positive rate, when 5 and 10 I frames are used, respectively.


Alternatively, if the reference PRNU pattern is extracted from photos rather than flat videos, under the same conditions, attribution rates are found to increase to 87% for 5 frames and to 91% for 10 frames. A note concerning the use of videos in the VISION dataset is the lack of labels indicating which videos are stabilized. The above-mentioned studies (i.e., Iuliani et al. 2019 and Mandelli et al. 2020) considered all videos captured by cameras that support stabilization as stabilized and reported performance results accordingly. However, the number of weakly and strongly stabilized videos in the VISION dataset is smaller than the overall number of 257 videos captured by such cameras. In fact, it is demonstrated in Altinisik and Sencar (2021) that 105 of those videos are not stabilized or are only very lightly stabilized (as they can be attributed using the conventional method after compensation of the loop filtering), 108 are stabilized using frame level affine transformations, and the remaining 44 are more strongly stabilized. Hence, by including the 105 videos in their evaluation, these approaches inadvertently yielded higher attribution performance for stabilized videos.
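As a rough illustration of the search that underlies these frame-level methods, the sketch below scans a grid of candidate scaling factors and relies on a PCE computed over the full cross-correlation surface, so that the shift is recovered implicitly. The scale grid, the simple PCE implementation, and the threshold of 60 are simplifying assumptions and not the exact procedures of the cited works.

```python
import numpy as np
import cv2

def pce(corr_surface, peak_exclude=5):
    """Peak-to-correlation-energy of a 2-D correlation surface."""
    peak_idx = np.unravel_index(np.argmax(np.abs(corr_surface)), corr_surface.shape)
    peak = corr_surface[peak_idx]
    mask = np.ones_like(corr_surface, dtype=bool)
    r, c = peak_idx
    mask[max(0, r - peak_exclude):r + peak_exclude + 1,
         max(0, c - peak_exclude):c + peak_exclude + 1] = False
    return (peak ** 2) / np.mean(corr_surface[mask] ** 2)

def cross_corr(a, b):
    """Circular cross-correlation via FFT (both arrays zero-mean, same size)."""
    A = np.fft.fft2(a - a.mean())
    B = np.fft.fft2(b - b.mean())
    return np.real(np.fft.ifft2(A * np.conj(B)))

def search_scale(frame_prnu, ref_prnu, scales=np.arange(1.0, 1.3, 0.01), thr=60.0):
    """Upsample the frame PRNU over a grid of candidate scales and keep the
    scale whose correlation with the reference yields the best PCE."""
    fp = frame_prnu.astype(np.float32)
    H, W = ref_prnu.shape
    best = (None, 0.0)
    for s in scales:
        up = cv2.resize(fp, None, fx=s, fy=s, interpolation=cv2.INTER_LINEAR)[:H, :W]
        if up.shape != ref_prnu.shape:
            continue                         # candidate scale too small to cover the reference
        score = pce(cross_corr(up, ref_prnu))
        if score > best[1]:
            best = (s, score)
    return best if best[1] > thr else (None, best[1])
```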

5.5.2 Inverting Spatially Variant Stabilization Transformations

This form of stabilization may cause different parts of a frame to undergo different warpings. Therefore, tackling it requires inverting the stabilization transformation while being restricted to operate at the sub-frame level. To address this problem, Altinisik et al. (2021) proposed a source verification method that allows an efficient search over a large range of projective transformations by operating at the block level. Considering different block sizes, they determined that blocks smaller than 500 × 500 pixels yield extremely low PCE values and cannot be used to reliably identify transformation parameters. Therefore, in their search, a block of a video frame is inverse transformed repetitively, and the transformation that yields the highest PCE between the inverse transformed block and the reference PRNU pattern is identified. During the search, rather than changing transformation parameters blindly, which may lead to unlikely transformations and necessitate interpolation to convert noninteger coordinates to integer values, only transformations that move the corner vertices of the block within a search window are considered. It is assumed that the coordinates of each corner of a selected block may move independently within a window of 15 × 15 pixels, i.e., spanning a range of ±7 pixels in both coordinates with respect to the original position. Considering the possibility that a block might have also been subjected to a translation (e.g., introduced by cropping) not contained within the searched space of transformations, each inverse transformed block is also searched within a shift range of ±50 pixels in all directions in the reference pattern. To accelerate the search, instead of performing a pure random search over the entire transformation space, the method adopts a three-level hierarchical grid search. With this approach, the search space over rotation, scale, and projection is coarsely sampled.

110

H. T. Sencar

In the first level, each corner coordinate is moved by ±4 pixels (in all directions) over a coarse grid to identify the five transformations (out of 9^4 possibilities) that yield the highest PCE values. A higher resolution search is then performed by the same process over the neighboring areas of the identified transformations on a finer grid, by changing the corner coordinates of the transformed blocks by ±2 pixels and, again, retaining only the five transformations producing the highest values. Finally, in the third level, the coarse transformations determined in the previous level are further refined by considering all neighboring pixel coordinates (within a ±1 range) to identify the most likely transformations needed for inverting the warping transformation due to stabilization. This overall reduced the number of transformations from 15^8 to 11 × 9^4 possibilities. A pictorial depiction of the grid partitioning of the transformation space is shown in Fig. 5.8. The search complexity is further lowered by introducing a two-level grid search approach that provides an order of magnitude decrease in computation, from 11 × 9^4 to 11 × 9^3 transformations, without a significant change in the matching performance. The major concern with increasing the degree of freedom when inverting the stabilization transformation is the large number of transformations that must be considered to correctly identify each transformation. Essentially, this causes an increase in false-positive matches, as multiple inverse transformed versions of a PRNU block (or frame) are generated and matched with the reference pattern. Therefore, there is a need for measures to eliminate spurious matches arising from incorrect transformations. With this objective, Altinisik and Sencar (2021) also imposed block and sub-block level constraints to validate the correctness of identified inverse transformations and demonstrated their effectiveness. The results of this approach are used to verify the source of stabilized videos in three datasets. Results showed that the introduced method is able to correctly attribute 23–30% of the stabilized videos in the three datasets that cannot be attributed using earlier proposed approaches (which assume an affine motion model for stabilization), without any false attributions, while utilizing only 10 frames from each video.
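A stripped-down, single-level version of this corner-perturbation search is sketched below: the four corners of a block are each perturbed over a coarse 3 × 3 grid (9^4 candidate warps), the corresponding inverse projective warp is applied, and the candidates with the highest similarity to the reference pattern are retained for refinement at finer levels. Plain normalized correlation is used here in place of a shift search with PCE, and the step sizes are illustrative.

```python
import itertools
import numpy as np
import cv2

def norm_corr(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def coarse_corner_search(block_prnu, ref_block, step=4, keep=5):
    """One level of the corner-perturbation search: each corner of the block is
    moved by {-step, 0, +step} pixels in x and y (9 positions per corner)."""
    block_prnu = block_prnu.astype(np.float32)
    h, w = block_prnu.shape
    corners = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    offsets = [(dx, dy) for dx in (-step, 0, step) for dy in (-step, 0, step)]
    candidates = []
    for combo in itertools.product(offsets, repeat=4):       # 9^4 candidate warps
        src = corners + np.float32(combo)
        Hmat = cv2.getPerspectiveTransform(src, corners)      # candidate inverse warp
        undone = cv2.warpPerspective(block_prnu, Hmat, (w, h))
        candidates.append((norm_corr(undone, ref_block), Hmat))
    candidates.sort(key=lambda t: t[0], reverse=True)
    return candidates[:keep]   # finer levels would refine these top candidates
```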

Fig. 5.8 Three-level hierarchical grid partitioning of transformation space. Coarse grid points that yield high PCE values are more finely partitioned for subsequent search. The arrows show a sample trace of search steps to identify a likely transformation point for one of the corner vertices of a selected block

5 Source Camera Attribution from Videos

111

Table 5.2 Characteristics of datasets to study stabilization

Dataset        # of cameras   # of unstabilized videos   # of weakly stabilized videos   # of strongly stabilized videos
VISION         14             105                        108                             44
iPhone SE-XR   8              0                          19                              22
APS            7              0                          8                               15


5.6 Datasets

There is a lack of large-scale datasets to enable the testing and development of video source attribution methods. This is mainly because there are very few resources on the Internet that provide original videos compared to photographic images. In almost all public video sharing sites, video data are re-encoded to reduce storage and transmission requirements. Furthermore, video metadata typically exclude source camera information, adding another layer of complication to data collection from open sources. The largest available dataset is the VISION dataset, which includes a collection of photos and videos captured by 35 different camera models (Shullani et al. 2017). However, this dataset is not comprehensive enough to cover the diversity of compression and stabilization behavior of available cameras. To tackle the compression problem, Altinisik et al. (2021) and Tandogan et al. (2019) utilized a set of raw-quality videos captured by 20 smartphone cameras through a custom camera app under controlled conditions. These videos were then externally compressed under different settings.3 To study the effects of stabilization, Altinisik and Sencar (2021) introduced two other datasets in addition to the VISION dataset. One of these includes media captured by two iPhone camera models (the iPhone SE-XR dataset), and the other includes a set of videos externally stabilized using the Adobe Premiere Pro video processing tool (the APS dataset).4 Table 5.2 provides the stabilization characteristics of the videos in those datasets.

3 These videos are available at https://github.com/VideoPRNUExtractor.
4 The iPhone SE-XR and the APS datasets are also available at the above link.


5.7 Conclusions and Outlook

The generation of a video involves several processing steps that are growing in both sophistication and number. These processing steps are known to be very disruptive to PRNU estimation, and there is little public information about their specifics. Therefore, developing a PRNU estimation method for videos that is as accurate and efficient as it is for photographic images is an extremely challenging task. Current research focusing on this problem has mainly attempted to mitigate downsizing, stabilization, and video coding operations. Overall, the reported results of these studies have shown that it is difficult to generate a reference PRNU pattern from a video, specifically for stabilized and low-to-medium compressed videos. Therefore, almost all approaches to attribution of stabilized videos assume a source verification setting where a given video is matched against a known camera. That is, a reference PRNU pattern of the sensor is assumed to be available. In a source verification setting, with an increase in the number of frames in a video, the likelihood of correctly attributing the video also increases. However, this does not remove the need to devise a PRNU estimation method. To emphasize this, we tested how the conventional PRNU estimation method (developed for photos, without the Wiener filtering component) performs when the source of all frames comprising the strongly stabilized videos in the three public datasets mentioned above is verified using the corresponding camera reference PRNU patterns. These results showed that out of more than 80K frames tested, only two frames yielded a PCE value above the commonly accepted threshold of 60 (with PCE values of 78 and 491), while the great majority yielded PCE values less than 10. Ultimately, the reliable attribution of videos depends on the severity of information loss mainly caused by video coding and on the ability to invert geometric transformations mostly stemming from stabilization. For highly compressed videos, with bitrates lower than 900 kbps, the PRNU is significantly weakened. Similarly, the use of smaller blocks or low-resolution frames, with sizes below 500 × 500 pixels, does not yield reliable PRNU patterns. Thus, PCE values corresponding to true and false matching cases largely overlap. Nonetheless, the most challenging aspect of source attribution remains stabilization. Since most modern smartphone cameras activate stabilization when the camera is set to video mode, before video recording starts, its disruptive effects should be considered to be always present. Overall, coping with stabilization induces a significant computational overhead on PRNU estimation, which is mainly determined by the assumed degree of freedom in the search for the applied transformation parameters. More critically, this warping inversion step amplifies the number of false attributions, which forces the use of higher threshold values for correct matching, thereby further reducing the accuracy. This underscores the need to develop, as a research priority, methods for an effective search of the transformation space to determine the correct transformation, as well as transformation validation mechanisms to eliminate false-positive matches.


Overall, the PRNU-based source attribution of photos and videos is an important capability in the arsenal of media forensics tools. Despite significant work in this field, reliable attribution of a video to its source camera remains an open problem. Therefore, further research is needed to increase the practical applicability of this capability.

References Al-Ani M, Khelifi F, Lawgaly A, Bouridane A (2015) A novel image filtering approach for sensor fingerprint estimation in source camera identification. In: 12th IEEE international conference on advanced video and signal based surveillance, AVSS 2015, Karlsruhe, Germany, August 25–28, 2015. IEEE Computer Society, pp 1–5 Altinisik E, Sencar HT (2021) Source camera verification for strongly stabilized videos. IEEE Trans Inf Forensics Secur 16:643–657 Altinisik E, Tasdemir K, Sencar HT (2018) Extracting PRNU noise from H.264 coded videos. In: 26th european signal processing conference, EUSIPCO 2018, Roma, Italy, September 3-7, 2018. IEEE, pp 1367–1371 Altinisik E, Tasdemir K, Sencar HT (2020) Mitigation of H.264 and H.265 video compression for reliable PRNU estimation. IEEE Trans Inf Forensics Secur 15:1557–1571 Altinisik E, Tasdemir K, Sencar HT (2021) Prnu estimation from encoded videos using blockbasedweighting. In: Media watermarking, security, and forensics. Society for Imaging Science and Technology Bellavia F, Fanfani M, Colombo C, Piva A (2021) Experiencing with electronic image stabilization and PRNU through scene content image registration. Pattern Recognit Lett 145:8–15 Bondi L, Bestagini P, Pérez-González F, Tubaro S (2019) Improving PRNU compression through preprocessing, quantization, and coding. IEEE Trans Inf Forensics Secur 14(3):608–620 Chen M, Fridrich JJ, Goljan M, Lukás J (2008) Determining image origin and integrity using sensor noise. IEEE Trans Inf Forensics Secur 3(1):74–90 Chen M, Fridrich JJ, Goljan M, Lukás J (2007) Source digital camcorder identification using sensor photo response non-uniformity. In: Delp III EJ, Wong PW (eds) Security, steganography, and watermarking of multimedia contents IX, San Jose, CA, USA, January 28, 2007, SPIE proceedings, vol 6505. SPIE, p 65051G Chuang W-H, Su H, Wu M (2011) Exploring compression effects for improved source camera identification using strongly compressed video. In: Macq B, Schelkens P (eds) 18th IEEE international conference on image processing, ICIP 2011, Brussels, Belgium, September 11–14, 2011. IEEE, pp 1953–1956 Goljan M (2018) Blind detection of image rotation and angle estimation. In: Alattar AM, Memon ND, Sharma G (eds) Media watermarking, security, and forensics 2018, Burlingame, CA, USA, 28 January 2018–1 February 2018. Ingenta Goljan M, Fridrich JJ (2008) Camera identification from cropped and scaled images. In: Delp III EJ, Wong PW, Dittmann J, Memon ND (eds) Security, forensics, steganography, and watermarking of multimedia contents X, San Jose, CA, USA, January 27, 2008, SPIE proceedings, vol 6819. SPIE, pp 68190E Grundmann M, Kwatra V, Essa IA (2011) Auto-directed video stabilization with robust L1 optimal camera paths. In: The 24th IEEE conference on computer vision and pattern recognition, CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011. IEEE Computer Society, pp 225–232 Guo J, Gu H, Potkonjak M (2018) Efficient image sensor subsampling for dnn-based image classification. In: Proceedings of the international symposium on low power electronics and design, ISLPED 2018, Seattle, WA, USA, July 23-25, 2018. ACM, pp 40:1–40:6


Hyun D-K, Choi C-H, Lee H-K (2012) Camcorder identification for heavily compressed low resolution videos. In: Computer science and convergence. Springer, pp 695–701 Iuliani M, Fontani M, Shullani D, Piva A (2019) Hybrid reference-based video source identification. Sensors 19(3):649 Kang X, Li Y, Zhenhua Q, Huang J (2012) Enhancing source camera identification performance with a camera reference phase sensor pattern noise. IEEE Trans Inf Forensics Secur 7(2):393–402 Karaküçük A, Emir Dirik A, Sencar HT, Memon ND (2015) Recent advances in counter PRNU based source attribution and beyond. In: Alattar AM, Memon ND, Heitzenrater C (eds) Media watermarking, security, and forensics 2015, San Francisco, CA, USA, February 9–11, 2015, Proceedings, SPIE proceedings, vol 9409. SPIE, p 94090N Kirchner M, Johnson C (2019) SPN-CNN: boosting sensor-based source camera attribution with deep learning. In: IEEE international workshop on information forensics and security, WIFS 2019, Delft, The Netherlands, December 9–12, 2019. IEEE, pp 1–6 Kopf J (2016) 360◦ video stabilization. ACM Trans Graph 35(6):195:1–195:9 Kouokam EK, Dirik AE (2019) Prnu-based source device attribution for youtube videos. Digit Investig 29:91–100 Li C-T (2010) Source camera identification using enhanced sensor pattern noise. IEEE Trans Inf Forensics Secur 5(2):280–287 Lin X, Li C-T (2016) Enhancing sensor pattern noise via filtering distortion removal. IEEE Signal Process Lett 23(3):381–385 Liu F, Gleicher M, Jin H, Agarwala A (2009) Content-preserving warps for 3d video stabilization. ACM Trans Graph 28(3):44 Liu S, Yuan L, Tan P, Sun J (2014) Steadyflow: spatially smooth optical flow for video stabilization. In: 2014 IEEE conference on computer vision and pattern recognition, CVPR 2014, Columbus, OH, USA, June 23–28, 2014. IEEE Computer Society, pp 4209–4216 Mandelli S, Bestagini P, Verdoliva L, Tubaro S (2020) Facing device attribution problem for stabilized video sequences. IEEE Trans Inf Forensics Secur 15:14–27 Richardson IE (2011) The H. 264 advanced video compression standard. Wiley Shullani D, Fontani M, Iuliani M, Al Shaya O, Piva A (2017) VISION: a video and image dataset for source identification. EURASIP J Inf Secur 2017:15 Sze V, Budagavi M, Sullivan GJ (2014) High efficiency video coding (hevc). In: Integrated circuit and systems, algorithms and architectures, vol 39. Springer, p 40 Tan TK, Weerakkody R, Mrak M, Ramzan N, Baroncini V, Ohm J-R, Sullivan GJ (2016) Video quality evaluation methodology and verification testing of HEVC compression performance. IEEE Trans Circuits Syst Video Technol 26(1):76–90 Tandogan SE, Altinisik E, Sarimurat S, Sencar HT (2019) Tackling in-camera downsizing for reliable camera ID verification. In: Alattar AM, Memon ND, Sharma G (eds) Media watermarking, security, and forensics 2019, Burlingame, CA, USA, 13-17 January 2019. Ingenta Taspinar S, Mohanty M, Memon ND (2020) Camera identification of multi-format devices. Pattern Recognit Lett 140:288–294 Taspinar S, Mohanty M, Memon ND (2016) Source camera attribution using stabilized video. In: IEEE international workshop on information forensics and security, WIFS 2016, Abu Dhabi, United Arab Emirates, December 4–7, 2016. IEEE, pp 1–6 Thivent DJ, Williams GE, Zhou J, Baer RL, Toft R, Beysserie SX (2018) Combined optical and electronic image stabilization, May 22. US Patent 9,979,889 Tomasi C, Kanade T (1991) Detection and tracking of point. Technical report, features. 
Technical Report CMU-CS-91-132, Carnegie, Mellon University van Houten W, Geradts ZJMH (2009) Source video camera identification for multiply compressed videos originating from youtube. Digit Investig 6(1–2):48–60 Vijaya Kumar BVK, Laurence Hassebrook (1990) Performance measures for correlation filters. Appl Opt 29(20):2997–3006 Wang Z, Zhang L, Huang H (2018) High-quality real-time video stabilization using trajectory smoothing and mesh-based warping. IEEE Access 6:25157–25166


Xu J, Chang H, Yang S, Wang M (2012) Fast feature-based video stabilization without accumulative global motion estimation. IEEE Trans Consumer Electron 58(3):993–999 Zeng H, Kang X (2016) Fast source camera identification using content adaptive guided image filter. J Forensic Sci 61(2):520–526 Zhang J, Jia J, Sheng A, Hirakawa K (2018) Pixel binning for high dynamic range color image sensor using square sampling lattice. IEEE Trans Image Process 27(5):2229–2241

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Chapter 6

Camera Identification at Large Scale

Samet Taspinar and Nasir Memon

Chapters 3 and 4 have shown that PRNU is a very effective solution for source camera verification, i.e., linking a visual object to its source camera (Lukas et al. 2006). However, in order to determine the source camera, there has to be a suspect camera in the possession of a forensic analyst, which is often not the case. In the source camera identification problem, on the other hand, the goal is to match a PRNU fingerprint against a large database. This capability is needed when one needs to attribute one or more images from an unknown camera by searching a large image repository for other images that may have been taken by the same camera. For example, consider that a legal entity acquires some illegal visual objects (such as child pornography) without any suspect camera available. Consider also that the owner of the camera shares images or videos on social media such as Facebook, Flickr, or YouTube. In such a scenario, it becomes crucial to find a way to link the illegal content to those social media accounts. Given that these social media platforms contain billions of accounts, camera identification at large scale becomes a very challenging task. This chapter will focus on how databases that support camera identification can be created and on the methodologies used to search query images against those databases. Along with that, the time complexities, strengths, and drawbacks of different methods will also be presented.

S. Taspinar Overjet, Boston, MA, USA e-mail: [email protected] N. Memon (B) New York University Tandon School of Engineering, New York, NY, USA e-mail: [email protected] © The Author(s) 2022 H. T. Sencar et al. (eds.), Multimedia Forensics, Advances in Computer Vision and Pattern Recognition, https://doi.org/10.1007/978-981-16-7621-5_6


6.1 Introduction

As established in Chaps. 3 and 4, once a fingerprint, K, is obtained from a camera, the PRNU noise obtained from other query visual objects can be tested against the fingerprint to determine if they were captured by the same camera. Each of these comparisons is a verification task (i.e., 1-to-1). In the identification task, on the other hand, the source of a query image or video is searched within a fingerprint collection. The collection could be compiled from social media such as Facebook, Flickr, and YouTube. Visual objects from each camera in such a collection can be clustered together and their fingerprints extracted to create a known camera fingerprint database. This chapter will discuss identification at a large scale, which can be seen as a sequence of multiple verification tasks (i.e., 1-to-many). Specifically, we will focus on structuring large databases of images or camera fingerprints and on the search efficiency of different methods using such databases. Notice that the search can be done in two main ways: querying one or more images from a camera against a database of camera fingerprints, or querying a fingerprint against a database of images. This chapter will focus on the first case, where we will assume there is only a single query image whose source camera is in question. However, the workflow is the same for the latter case as well. Over the past decade, various approaches have been proposed to speed up camera identification using large databases of camera fingerprints. These methods can be grouped under two categories: (i) techniques for decreasing the cost of pairwise comparisons and (ii) techniques for decreasing the number of comparisons made. Along with these, a third category aims at combining the strengths of the two approaches to create a superior method. We will assume that neither the query images nor the images used for computing the fingerprint are geometrically transformed unless otherwise stated. As described in Chap. 3, some techniques can be used to reverse geometric transformations before searching the database. However, not all algorithms presented in this chapter can handle such cases, and geometric transformations may cause the methodology to fail. We will also assume that there is no camera brand/model information available, as metadata is typically easy to forge and often does not exist when visual objects are obtained from social media. However, when reliable metadata is available, it can further speed up the search by restricting it to the relevant models in the obvious manner.

6.2 Naive Methods

This section will present the most basic methods for source camera identification. Consider that images or videos from N known cameras are obtained. Their fingerprints are extracted as a one-time operation and stored in a fingerprint dataset, D = {K_1, K_2, ..., K_N}.


This step is common to all the methods presented in this chapter as well as to other camera identification methods.

6.2.1 Linear Search

The most straightforward search method for source camera identification is linear search (brute force). A query visual object with n pixels, whose fingerprint estimate is K_q, is compared with all the fingerprints until (i) it matches one of them (i.e., at least one H1 case) or (ii) no more fingerprints are left (i.e., all comparisons are H0 cases):

H0: K_i ≠ K_q (non-matching fingerprints)
H1: K_i = K_q (matching fingerprints)        (6.1)

So the complexity of the linear search is O(n N ) for a single query image. Since modern cameras typically have more than 10 M pixels, and the number of cameras in a comprehensive database could be many millions, the computational cost of this method becomes exceedingly large.
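For concreteness, the brute-force search can be written down in a few lines, as sketched below; the correlation routine and the decision threshold are illustrative stand-ins rather than the exact detector and threshold used in practice.

```python
import numpy as np

def normalized_corr(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float(np.dot(a.ravel(), b.ravel()) /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def linear_search(query_fp, fingerprint_db, threshold=0.01):
    """Compare the query fingerprint against every stored K_i until a match is found.

    fingerprint_db: list of (camera_id, fingerprint) pairs, all with n pixels
    Returns the first matching camera_id, or None (all comparisons are H0).
    """
    for cam_id, K in fingerprint_db:          # O(nN) in total
        if normalized_corr(query_fp, K) > threshold:
            return cam_id                      # H1: matching fingerprint
    return None
```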

6.2.2 Sequential Trimming

Another simple method is to trim all fingerprints to a fixed length so that they all have the same length k, where k ≤ n (Goljan et al. 2010). The main benefits of this method are that it speeds up the search by a factor of n/k and that memory and disk usage drop by the same ratio. Given a camera fingerprint, K_i, and the fingerprint of a query object, K_q, their correlation c_q is independent of the number of pixels remaining after trimming (i.e., k). However, as k drops, the probability of false alarm, P_FA, increases, and the probability of detection, P_D, decreases for a fixed decision threshold (Fridrich and Goljan 2010). Therefore, the sequential trimming method may offer only a limited speedup if the goal is to retain performance. Moreover, this method still requires the query fingerprint to be correlated with N fingerprints in the database (i.e., the cost is linear in the number of cameras in the database). However, the computational complexity drops to O(kN) as the fingerprint is trimmed from n to k elements.


6.3 Efficient Pairwise Correlation

As presented in Chap. 3, the complexity of a single correlation is proportional to the number of pixels in a fingerprint pair. Since modern cameras contain millions of pixels and a large database can contain billions of images, using naive search methods is prohibitively expensive for source camera identification. The previous section shows that the time complexity of a brute-force search is proportional to the number of fingerprints and the number of pixels in each fingerprint (i.e., O(n × N)). Similarly, sequential trimming can only speed up the search a few times. Hence, further improvements for identification are required. This section presents the first approach mentioned in Sect. 6.1, which decreases the time complexity of a pairwise correlation.

6.3.1 Search over Fingerprint Digests

The first method that addresses the above problem is searching over fingerprint digests (Goljan et al. 2010). This method reduces the dimensionality of each fingerprint by selecting the k largest (in terms of magnitude) fingerprint elements, along with their spatial locations, from a fingerprint F.

Fingerprint Digest: Given a fingerprint, F, with n fingerprint elements, the proposed method picks the top k largest of them as well as their indices. Suppose that the values of these k elements are V_F = {v_1, v_2, ..., v_k} and their locations are L_F = {l_1, l_2, ..., l_k} for 1 ≤ l_i ≤ n, where V_F = F[L_F]. The digest of each fingerprint can be extracted and stored along with the original fingerprint. An important aspect is determining the number of pixels, k, in a digest. Picking a large number of pixels will not yield a high speedup, whereas a low number will result in many false positives.

Search step: Suppose that the fingerprint of the query image is X and it is correlated with one of the digests in the database, V_F, whose indices are L_F. In the first step, the digest of X is estimated as

V_X = X[L_F]        (6.2)

Then, corr(V_F, V_X) is obtained. If the correlation is lower than a preset threshold, τ, then X and F are said to be from different sources. Otherwise, the original fingerprints X and F are correlated to decide if they originate from the same source camera. Overall, this method speeds up the search by approximately n/k times, where k << n.

6.3.2 Binary Quantization

Another way of reducing the cost of a pairwise comparison is to quantize each fingerprint element to a single bit based on its sign, i.e.,

X_i^B = 1 if X_i > 0, and X_i^B = 0 otherwise        (6.3)

where X_i^B is the ith element in binary form for i ∈ {1, ..., n}. This way, the efficiency of pairwise fingerprint matching can be improved by a factor of b, the number of bits used to represent each fingerprint element. The original work (Lukas et al. 2006) uses 64-bit double precision for each fingerprint element. Therefore, the binary quantization method can save storage usage and computation cost by a factor of 64 as compared to Lukas et al. (2006). However, binary quantization comes at the expense of a decrease in correct detection. Alternatively, one can use the quantization approach with more bits instead of a single bit. This way, the detection performance can be preserved. When the bit depth of two fingerprints is reduced from the 64-bit to the 8-bit format, the pairwise correlation changes marginally (i.e., less than 10^-4), but a nearly 8 times speedup can be achieved, as the complexity of the Pearson correlation is proportional to the number of bits.
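A minimal sketch of such a binarized matcher, assuming a sign-based quantizer and a bit-agreement score in place of the correlation of the original work, is given below.

```python
import numpy as np

def binarize_fingerprint(K):
    """Quantize each fingerprint element to one bit based on its sign and pack
    the result so that 64 elements fit into a single machine word."""
    bits = (K.ravel() > 0).astype(np.uint8)   # X_i^B = 1 if X_i > 0, else 0
    return np.packbits(bits)

def binary_similarity(packed_a, packed_b, n):
    """Fraction of positions where two packed binary fingerprints agree."""
    disagreements = np.unpackbits(np.bitwise_xor(packed_a, packed_b))[:n].sum()
    return 1.0 - disagreements / n

# Two fingerprints from the same sensor should agree on clearly more than half
# of the bits, while unrelated fingerprints hover around 0.5.
```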

6.3.3 Downsizing

Another simple yet effective method is creating multiple downsized versions of a fingerprint, where, along with the original fingerprint, L other copies of the fingerprint are stored (Yaqub et al. 2018; Tandogan et al. 2019). In these methods, a camera fingerprint whose resolution is r × c is downsized by a scaling factor s in the first level. Then, in the next level, the scaled fingerprint is further downsized by s, and so on. Hence, the resolution of the fingerprint at the Lth level becomes (r/s^L) × (c/s^L). Since multiple copies of a fingerprint are stored in this method, the storage usage increases by (r × c)/s^2 + (r × c)/s^4 + ... + (r × c)/s^(2L). The authors show that the optimal value for s is 2, as it causes the least interpolation artifact, and that L is 3 levels. Moreover, Lanczos-3 interpolation results in the best accuracy, as it best approximates the sinc kernel (Lehman et al. 1999). In these settings, the storage requirement increases by ≈33%.


Matching: Given a query fingerprint and a dataset of N fingerprints (for each of which 3 other resized versions are stored), a pairwise comparison can be done as follows. The query fingerprint is resized to the L3 size (i.e., by 1/8) and compared with the L3 version of the ith fingerprint. If the comparison of the L3 query and reference fingerprint pair is below a preset threshold, the query image is deemed not to have been captured by the ith camera, and the query fingerprint is then compared with the (i + 1)th reference fingerprint. Otherwise, the L2 fingerprints are compared. The same process is continued for the other levels as well. The preset PCE threshold is set to 60, as presented in Goljan et al. (2009). The results show that when cropping is not involved, this method can achieve up to a 53 times speedup with a small drop in the true positive rate.
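The coarse-to-fine matching described above can be sketched as follows, assuming each database entry stores the fingerprint together with its three Lanczos-downscaled copies and that pce_fn is any routine returning the PCE between two equally sized fingerprints; the structure of the database entries is an assumption made for illustration.

```python
import cv2
import numpy as np

def build_pyramid(K, levels=3, s=2):
    """Store the fingerprint together with L successively downscaled copies."""
    pyr = [K.astype(np.float32)]
    for _ in range(levels):
        prev = pyr[-1]
        pyr.append(cv2.resize(prev, (prev.shape[1] // s, prev.shape[0] // s),
                              interpolation=cv2.INTER_LANCZOS4))
    return pyr  # pyr[0] is full resolution, pyr[3] is the 1/8-scale copy

def coarse_to_fine_match(query_pyr, ref_pyr, pce_fn, threshold=60.0):
    """Compare from the smallest level upward; bail out early on a miss."""
    for level in range(len(ref_pyr) - 1, -1, -1):
        if pce_fn(query_pyr[level], ref_pyr[level]) < threshold:
            return False       # cheap rejection at a low-resolution level
    return True                # survived every level, including full resolution
```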

6.3.4 Dimension Reduction Using PCA and LDA

It is known that sensor noise is high-dimensional data (i.e., more than millions of pixels). This data contains redundant and interfering components caused by demosaicing, JPEG compression, and other in-camera image processing operations. Because of these artifacts, the probability of a match may decrease, and matching slows down significantly. Therefore, removing these redundant or interfering signals helps improve the matching accuracy and the efficiency of pairwise comparison. Li et al. propose to use Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) to compress camera fingerprints (Li et al. 2018). The proposed method first uses PCA-based denoising (Zhang et al. 2009) to create a better representation of camera fingerprints. Then LDA is utilized to decrease the dimensionality further as well as to further improve matching accuracy.

Fingerprint Extraction: Suppose that there are n images, {I_1, ..., I_n}, captured by a camera. These images are cropped from their centers such that each becomes N × N in resolution. First, the PRNU noise, x_i, is extracted from these images using one of the extraction methods (Lukas et al. 2006; Dabov et al. 2009). In this step, the noise signals are flattened to create column vectors. Hence, the size of x_i becomes N^2 × 1. They are then horizontally stacked to create the training matrix, X, as follows:

X = [x_1, x_2, ..., x_n]        (6.4)


Feature extraction via PCA: The mean of X, μ, becomes

μ = (1/n) Σ_{i=1}^{n} x_i

The training set is normalized by subtracting the mean from each image, x̄_i = x_i − μ. Hence, the normalized training set X̄ becomes

X̄ = [x̄_1, x̄_2, ..., x̄_n]        (6.5)

The covariance matrix, Σ, of X̄ can be calculated as Σ = E[X̄X̄^T] ≈ (1/n)X̄X̄^T. PCA is performed for finding orthonormal vectors, u_k, and their eigenvalues, λ_k. The dimensionality of X̄ is very high (i.e., N^2 × n). Hence the computational complexity of PCA is prohibitively high. Instead of direct computation, singular value decomposition (SVD) is applied, which is a close approximation of PCA. The SVD can be written as

Σ = ΦΛΦ^T        (6.6)

where Φ = [φ_1, φ_2, ..., φ_m] is the m × m orthonormal eigenvector matrix and Λ = diag{λ_1, λ_2, ..., λ_n} contains the eigenvalues. Along with creating decorrelated eigenvectors, PCA allows the dimensionality to be reduced. Using the first d of the m eigenvectors, the transformation matrix M_pca = [φ_1, φ_2, ..., φ_d] is formed for d < m. Then the transformed dataset, Ŷ, becomes Ŷ = M_pca^T X̄, whose size is d × n. Generally speaking, most of the discriminative information of the SPN signal will concentrate in the first few eigenvectors. In this research, the authors kept only the d eigenvalues needed to preserve 99% of the variance, which is used to create a compact version of the PRNU noise. Moreover, thanks to this dimension reduction, the "contamination" caused by other in-camera operations can be decreased. The authors call this the "PCA feature".

Feature extraction via LDA: LDA is used for two purposes: (i) dimension reduction and (ii) increasing the distance between different classes. Since the training set is already labeled, LDA can be used to reduce the dimension and increase the separability of different classes. This can be done by finding a transformation matrix, M_lda, that maximizes the ratio of the determinant of the between-class scatter matrix, S_b, to that of the within-class scatter matrix, S_w:

M_lda = argmax_J |J^T S_b J| / |J^T S_w J|        (6.7)

where S_w = Σ_{j=1}^{c} Σ_{i=1}^{L_j} (y_i − μ_j)(y_i − μ_j)^T. There are c distinct classes, the jth class has L_j samples, y_i is the ith sample of class j, and μ_j is the mean of class j.


On the other hand, S_b is defined as S_b = (1/c) Σ_{j=1}^{c} Σ_{i=1}^{L_j} (μ_j − μ)(μ_j − μ)^T, where μ is the average of all samples. Using the "LDA feature" extractor, M_lda, a (c − 1)-dimensional vector can be obtained as

z = M_lda^T [(M_pca^d)^T X] = ((M_pca^d)(M_lda))^T X        (6.8)

So, the PCA and LDA operations can be combined into a single operation, i.e., M_e = (M_pca^d)(M_lda). Typically, (c − 1) is lower than d, which can further compress the training data.

Reference Fingerprint Estimation: Consider the jth camera, c_j, which has captured images I_1, I_2, ..., I_{L_j}, whose PRNU noise patterns are x_1, x_2, ..., x_{L_j}, respectively. The reference fingerprint for c_j can be obtained using (6.8), i.e., the LDA features are extracted as z_j = M_e^T X.

Source Matching: The authors present two different methodologies, Algorithms 1 and 2. These algorithms make use of y^d (PCA-SPN) and z (LDA-SPN), respectively. Both of these compressed fingerprints have much lower dimensionality than the original fingerprint, x. The source camera identification process is straightforward for both algorithms. A "one-time" fingerprint compression step is used to set up the databases. The PCA-SPN of the query fingerprint, y_q^d, is correlated with each fingerprint in the database. Similarly, for LDA-SPN, z_q is correlated with each fingerprint in the more compressed dataset. Decision thresholds τ_y and τ_z can be set based on the desired false acceptance rate (Fridrich and Goljan 2010).

6.3.5 PRNU Compression via Random Projection

Valsesia et al. (2015a, b) utilized the idea of applying random projections (RP) followed by binary quantization to reduce the fingerprint dimension for camera identification on large databases.

Dimension Reduction: Given a dataset, D, which contains N n-dimensional fingerprints, the goal is to reduce each fingerprint to m dimensions with slight or no information loss, where m << n.
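A rough sketch of this idea is given below, using a dense Gaussian projection matrix purely for illustration; for fingerprints with millions of elements such a dense matrix is impractical, and more memory-efficient projection constructions are needed in practice.

```python
import numpy as np

def make_projection(n, m, seed=0):
    """Random projection matrix; a shared seed lets enrollment and query sides
    regenerate the same matrix instead of storing it.  Dense Gaussian matrices
    are only feasible for small n and are used here for illustration."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((m, n)).astype(np.float32) / np.sqrt(m)

def compress_fingerprint(K, P):
    """Project the n-dimensional fingerprint to m dimensions, then binarize."""
    y = P @ K.ravel().astype(np.float32)
    return np.packbits((y > 0).astype(np.uint8))  # 1 bit per projected dimension

# Matching is then carried out on the short binary codes, e.g., with a
# Hamming-agreement measure like the one sketched earlier in this chapter.
```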

6.3.5 PRNU Compression via Random Projection Valsesia et al. (2015a, b) utilized the idea of applying random projections (RP) followed by binary quantization to reduce fingerprint dimension for camera identification on large databases. Dimension Reduction Given a dataset, D which contains N n-dimensional fingerprints, the goal is to reduce each fingerprint to m-dimensions with slight or no information loss where m γ | H0 ). Experiments carried out on the Spanish databases AHUMADA and GAUDI resulted in an EER of 6%, and experiments carried out on Brazilian databases Carioca 1 and Carioca 2 resulted in an EER of 7%. Esquef et al. (2014), the authors proposed a novel edit detection method for forensic audio analysis that includes modification on their approach in Rodríguez et al. (2010). In this version of the approach, the detection criterion is based on unlikely variations of the ENF magnitude. A data-driven magnitude threshold is chosen in a hypothesis framework similar to the one in Rodríguez et al. (2010). The authors conduct a qualitative evaluation of the influences of the edit duration and location as well as noise contamination on the detection ability. This new approach achieved a 4% EER on the Carioca 1 database versus 7% EER on the same database for the previous approach of Rodríguez et al. (2010). The authors report that their results indicate that amplitude clipping and additive broadband noise severely affect the performance of the proposed method, and further research is needed to improve the detection performance in more challenging scenarios. The authors further extended their work in Esquef et al. (2015). Here, they modified the detection criteria by taking advantage of the typical pattern of ENF variations elicited by audio edits. In addition to the threshold-based detection strategy of Esquef et al. (2014), a verification of the pattern of the anomalous ENF variations is carried


Fig. 10.13 Flowchart of the proposed ENF-based authentication system in Hua et al. (2016)

The authors compared this newly proposed approach with that of Esquef et al. (2014) and demonstrated experimentally that the new approach is more reliable, as it tends to yield lower EERs, such as a reduction from 4% to 2% EER on the Carioca 1 database. Experiments carried out on speech databases degraded with broadband noise also support the claim that the modified approach can achieve better results than the original one. Other groups have also addressed and extended the explorations into ENF-based integrity authentication and tampering detection. We mention some highlights in what follows. In Fuentes et al. (2016), the authors proposed a phase locked loop (PLL)-based method for determining audio authenticity. They use a voltage-controlled oscillator (VCO) in a PLL configuration that produces a synthetic signal similar to that of a preprocessed ENF-containing signal. Some corrections are made to the VCO signal to make it closer to the ENF, but if the ENF has strong phase variations, there will remain large differences between the ENF signal and the VCO signal, signifying tampering. An automatic threshold-based decision on the audio authenticity is made by quantifying the frequency variations of the VCO signal. The experimental results in this work show that the performance of the proposed approach is on par with that of previous works (Rodríguez et al. 2010; Esquef et al. 2014), achieving, for instance, 2% EER on the Carioca 1 database.


In Hua et al. (2016), the authors present a solution for ENF-based audio authentication that jointly performs timestamp verification (if tampering is not detected) and detection of the tampering type and tampering region (if tampering is detected). A high-level description of this system is shown in Fig. 10.13. The authentication tool here is an absolute error map (AEM) between the ENF signal under study and reference ENF signals from a database. The AEM is, quantitatively, a matrix containing all the raw information of absolute errors when the extracted ENF signal is matched to each possible shift of a reference signal. The AEM-based solutions rely coherently on the authentic and trustworthy portions of the ENF signal under study to make their decisions on authenticity. If the signal is deemed authentic, the system returns the timestamp at which a match is found. Otherwise, the authors propose two variant approaches that analyze the AEM to find the tampering regions and characterize the tampering type (insertion, deletion, or splicing). The authors frame their work here as a proof-of-concept study, demonstrating the effectiveness of their proposal through synthetic performance analysis and experimental results. In Reis et al. (2016, 2017), the authors propose their ESPRIT-Hilbert-based tampering detection scheme with SVM classifier (SPHINS), which uses an ESPRIT-Hilbert ENF estimator in conjunction with an outlier detector based on the sample kurtosis of the estimated ENF. The computed kurtosis values are vectorized and applied to an SVM classifier to indicate the presence of tampering. They report a 4% EER performance on the clean Carioca 1 database and that their proposed approach gives improved results, as compared to those in Rodríguez et al. (2010) and Esquef et al. (2014), for low SNR regimes and in scenarios with nonlinear digital saturation, when applied to the Carioca 1 database. In Lin and Kang (2017), the authors propose an approach where they apply a wavelet filter to an extracted ENF signal to reveal the detailed ENF fluctuations and then carry out autoregressive (AR) modeling on the resulting signal. The AR coefficients are used as input features for an SVM system for tampering detection. They report that, as compared to Esquef et al. (2015), their approach can achieve improved performance in noisy conditions and can provide robustness against MP3 compression. In the same vein as ENF-based integrity authentication, Su et al. (2013) and Lin et al. (2016) address the issue of recaptured audio signals. The authors in Su et al. (2013) demonstrate that recaptured recordings may contain two sets of ENF traces, one from the original time of the recording and the other due to the recapturing process. They tackle the more challenging scenario in which the two traces overlap in frequency by proposing a decorrelation-based algorithm to extract the ENF traces with the help of the power reference signal. Lin et al. (2016) employ a convolutional neural network (CNN) to decide if a certain recording is recaptured or original. Their deep neural network relies on spectral features based on the nominal ENF and its harmonics. Their paper goes into further depth, examining the effect of the analysis window on the approach's performance and visualizing the intermediate feature maps to gain insight into what the CNN learns and how it makes its decisions.
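The absolute error map at the core of Hua et al. (2016) can be sketched directly, as shown below: every admissible alignment of the query ENF against a reference produces one row of absolute errors, the row with the smallest total error gives the candidate timestamp, and unusually large entries within that row point to potentially tampered regions. The error threshold is a placeholder.

```python
import numpy as np

def absolute_error_map(query_enf, ref_enf):
    """Rows = candidate alignments of the query against the reference ENF."""
    n, m = len(query_enf), len(ref_enf)
    shifts = m - n + 1
    aem = np.empty((shifts, n))
    for s in range(shifts):
        aem[s] = np.abs(query_enf - ref_enf[s:s + n])
    return aem

def match_and_localize(aem, err_thr=0.05):
    """Pick the best alignment; large per-sample errors there hint at tampering."""
    best = int(np.argmin(aem.sum(axis=1)))
    suspect = np.where(aem[best] > err_thr)[0]
    return best, suspect   # candidate timestamp offset and suspect sample indices
```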

10.5.3 ENF-Based Localization

The ENF traces captured by audio and video signals can be used to infer information about the location in which the media recording was taken. In this section, we discuss approaches to use the ENF to do inter-grid localization, which entails inferring the grid in which the media recording was made, and intra-grid localization, which entails pinpointing the location-of-recording of the media signal within a grid.

10.5.3.1 Inter-Grid Localization

This section describes a system that seeks to identify the grid in which an ENF-containing recording was made, without the use of concurrent power references (Hajj-Ahmad et al. 2013, 2015). Such a system can be very important for multimedia forensics and security. It can pave the way to identifying the origins of videos such as those of terrorist attacks, ransom demands, and child exploitation. It can also reduce the computational complexity of, and thereby facilitate, other ENF-based forensic applications, such as the time-of-recording authentication application of Sect. 10.5.1. For instance, if a forensic analyst is given a media recording with unknown time and location information, inferring the grid-of-origin of the recording would help narrow down the set of power reference recordings against which the media ENF pattern needs to be compared.

Upon examining ENF signals from different power grids, one notices differences in the nature and manner of the ENF variations (Hajj-Ahmad et al. 2015). These differences are generally attributed to the control mechanisms used to regulate the frequency around the nominal value and to the size of the power grid. Generally speaking, the larger the power grid, the smaller the frequency variations. Figure 10.14 shows the 1-min average frequency during a 48-hour period at five different locations in five different grids. As can be seen in this figure, Spain, which is part of the large continental European grid, shows a small range of frequency values as well as fast variations. The smallest grids, those of Singapore and Great Britain, show relatively large variations in the frequency values (Bollen and Gu 2006).

Given an ENF-containing media recording, a forensic analyst can process the captured ENF signal to extract statistical features that facilitate the identification of the grid in which the recording was made. A machine learning system can be built that learns the characteristics of ENF signals from different grids and uses them to classify ENF signals in terms of their grids-of-origin. Hajj-Ahmad et al. (2015) have developed such a machine learning system. In that implementation, ENF signal segments are extracted from recordings that are 8 min long each. Statistical, wavelet-based, and linear-prediction-related features are extracted from these segments and used to train a multiclass SVM system. The authors make a distinction between "clean" ENF data extracted from power recordings and "noisy" ENF data extracted from audio recordings.

Fig. 10.14 Frequency variations measured in Sweden (top left), in Spain (top center), on the Chinese east coast (top right), in Singapore (bottom left), and in Great Britain (bottom right) (Bollen and Gu 2006)

Various combinations of these two sets of data are used in different training scenarios. Overall, the authors were able to achieve an average accuracy of 88.4% in identifying ENF signals extracted from power recordings from eleven candidate power grids, and an average accuracy of 84.3% in identifying ENF signals extracted from audio recordings from eight candidate grids. In addition, the authors explored multi-conditional systems that can adapt to cases where the noise conditions of the training and testing data differ. This approach improved the identification accuracy for noisy ENF signals extracted from audio recordings by 28% when the training dataset is limited to clean ENF signals extracted from power recordings.

This ENF-based application was the focus of the 2016 Signal Processing Cup (SP Cup), an international undergraduate competition overseen by the IEEE Signal Processing Society. The competition engaged participants from nearly 30 countries: 334 students formed the 52 teams that registered for the competition, and more than 200 students on 33 teams turned in the required submissions by the open-competition deadline in January 2016. The top three teams from the open competition attended the final stage of the competition at ICASSP 2016 in Shanghai, China, to present their final work (Wu et al. 2016). Most student participants in the SP Cup have uploaded their final reports to SigPort (Signal Processing 2021). A number of them have also further developed their work and published it externally. A common improvement on the SVM approach originally proposed in Hajj-Ahmad et al. (2015) was to make the classification system a multi-stage one, with earlier stages distinguishing between a 50 Hz and a 60 Hz grid, or between an audio signal and a power signal. An example of such a system is proposed in Suresha et al. (2017).
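
A minimal sketch of this feature-plus-classifier pipeline is given below, assuming NumPy, PyWavelets, and scikit-learn are available; the feature set is deliberately simplified and only loosely mirrors the statistical and wavelet features of Hajj-Ahmad et al. (2015), and all variable names are hypothetical.

```python
import numpy as np
import pywt                      # PyWavelets, assumed available
from sklearn.svm import SVC

def grid_features(enf_segment):
    """Simplified statistical and wavelet features for one ENF segment."""
    x = np.asarray(enf_segment, dtype=float)
    d = np.diff(x)
    # Energies of the wavelet approximation/detail bands, a coarse proxy
    # for the wavelet-based features used in the original work.
    coeffs = pywt.wavedec(x, 'db4', level=3)
    wavelet_energy = [float(np.sum(c ** 2)) for c in coeffs]
    return np.array([x.mean(), x.std(), x.max() - x.min(),
                     d.std(), np.abs(d).mean()] + wavelet_energy)

def train_grid_classifier(segments, grid_labels):
    """Multiclass SVM mapping ENF segments to their grid-of-origin."""
    feats = np.stack([grid_features(s) for s in segments])
    clf = SVC(kernel='rbf', C=10.0, gamma='scale')
    return clf.fit(feats, grid_labels)
```

A multi-stage variant in the spirit of Suresha et al. (2017) would simply place separate 50 Hz/60 Hz and audio/power classifiers before this final grid classifier.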

In Šarić et al. (2016), a team from Serbia examined different machine learning algorithms that can be used to achieve inter-grid localization, including K-nearest neighbors, random forests, SVM, linear perceptron, and neural networks. In their setup, the classifier that achieved the highest overall accuracy was the random forest. They also explored augmenting the features of Hajj-Ahmad et al. (2015) with additional ones related to the extrema and rising edges of the ENF signal, which yielded performance improvements between 3% and 19%. Aside from SP-Cup-related works, Jeon et al. (2018) demonstrated how the ENF map they proposed can be utilized for inter-grid location identification as part of their LISTEN framework. Their experimental setup included identifying the grid-of-origin of audio streams extracted from Skype and Torfan across seven power grids, achieving classification accuracies in the 85–90% range for audio segments of 10–40 min in length. Their proposed system also tackles intra-grid localization, which we discuss next.

10.5.3.2 Intra-Grid Localization

Though the ENF variations at different points in the same grid are very similar, research has shown that there can be discernible differences between them. These differences can be due to the local load characteristics of a given city and the time needed for a response to changes in load demand and supply to propagate to other parts of the grid (Bollen and Gu 2006). System disturbances, such as short circuits, line switching, and generator disconnections, can also contribute to these differing ENF values (Elmesalawy and Eissa 2014). When a significant change in the load occurs somewhere in the grid, it has a localized effect on the ENF in that area. The change then propagates across the grid, typically at a speed of approximately 500 miles per second (Tsai et al. 2007). In Hajj-Ahmad et al. (2012), the authors conjecture that small and large changes in the load may cause location-specific signatures in local ENF patterns. Following this conjecture, such differences may be exploited to pinpoint the location-of-recording of a media signal within a grid. Due to the finite propagation speed of frequency disturbances across the grid, the ENF signal is anticipated to be more similar for locations close to each other than for locations that are farther apart.

As an experiment, concurrent recordings of power signals were made in three cities in the Eastern North American grid, shown in Fig. 10.15a. Examining the ENF signals extracted from the different power recordings, the authors found that they follow a similar general trend but exhibit possible microscopic differences. To better capture these differences, the extracted ENF signals are passed through a highpass filter, and the correlation coefficient between the highpassed ENF signals is used as a metric of location similarity between recordings. Figure 10.15b shows the correlation coefficients between various 500 s ENF segments from different locations at the same time. We can see that the closer two cities-of-origin are, the more similar their ENF signals, and the farther apart they are, the less similar their ENF signals.

Fig. 10.15 a Locations of power recordings. b Pairwise correlation coefficients between 500 s long, highpass-filtered ENF segments extracted from power recordings made in the locations shown in a (Hajj-Ahmad et al. 2012)

From here, one can start to think about using the ENF signal as a location stamp and about exploiting the relation between pairwise ENF similarity and the distance between recording locations. A highpass-filtered ENF query can be compared to highpass-filtered ENF signals from known city anchors in a grid as a means of pinpointing the location-of-recording of the query. The authors further examined the use of the ENF signal as an intra-grid location stamp in Garg et al. (2013), where they proposed a half-plane intersection method to estimate the location of ENF-containing recordings and found that the localization accuracy improves as the number of locations used as anchor nodes increases. This study conducted experiments on power ENF signals. With audio and video signals, the situation is more challenging because of the noisy nature of the embedded ENF traces. The location-specific signatures are best captured using instantaneous frequencies estimated at 1 s temporal resolution, and reliable ENF signal extraction at such a high temporal resolution is an ongoing research problem. Nevertheless, there is potential for using ENF signals as a location stamp.

Jeon et al. (2018) include intra-grid localization among the capabilities of their LISTEN framework for inferring the location-of-recording of a target recording, which relies on an ENF map built using ENF traces collected from online streaming sources. After narrowing down the power grid to which a recording belongs through inter-grid localization, their intra-grid localization approach calculates the Euclidean distance between a time-series sequence of interpolated signals in the chosen power grid and the target recording's ENF signal. The approach then narrows down the location of recording to a specific region within the grid.

In Chai et al. (2016), Yao et al. (2017), Cui et al. (2018), the authors rely on the FNET/GridEye system, described in Sect. 10.2.1, to achieve intra-grid localization. In these works, the authors address ENF-based intra-grid localization for ENF signals extracted from clean power signals, without using concurrent ENF power references.
Such intra-grid localization is made possible by exploiting geolocation-specific temporal variations induced by electromechanical propagation, nonlinear load, and recurrent local disturbances (Yao et al. 2017). In Cui et al. (2018), the authors apply features similar to those proposed in intra-grid localization approaches and rely on a random forest classifier for training. In Yao et al. (2017), the authors apply wavelet analysis to ENF signals and feed the detail signals into a neural network for training. The results show that the system works better at larger geographical scales and when the training and testing ENF-containing recordings were made closer together in time.
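
The pairwise similarity metric described in this section can be sketched as follows; the filter order and cutoff are illustrative choices rather than the values used in the cited works, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def location_similarity(enf_a, enf_b, fs=1.0, cutoff=0.02):
    """Correlation coefficient between highpass-filtered ENF signals.

    fs is the rate of the ENF estimates (e.g., one per second); the
    highpass filter removes the slowly varying trend shared by the whole
    grid, so only the location-specific fluctuations are compared."""
    b, a = butter(4, cutoff / (fs / 2), btype='highpass')
    ha, hb = filtfilt(b, a, enf_a), filtfilt(b, a, enf_b)
    return float(np.corrcoef(ha, hb)[0, 1])
```

Higher values indicate recordings made closer to each other, which is the relation exploited when comparing a query against known city anchors.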

10.5.4 ENF-Based Camera Forensics

An additional ENF-based application that has been proposed uses the ENF traces captured in a video recording to characterize the camera that was used to produce the video (Hajj-Ahmad et al. 2016). This is done within a nonintrusive framework that only analyzes the video at hand to extract a characterizing inner parameter of the camera used. The focus was on CMOS cameras equipped with rolling shutters, and the parameter estimated was the read-out time Tro, which is the time it takes for the camera to acquire the rows of a single frame. Tro is typically not listed in a camera's user manual and is usually less than the frame period, which equals the reciprocal of the video's frame rate.

This work was inspired by prior work on flicker-based video forensics that addresses issues in the entertainment industry pertaining to movie piracy-related investigations (Baudry et al. 2014; Hajj-Ahmad 2015). The focus of that work was on pirated videos produced by camcording video content displayed on an LCD screen. Such pirated videos commonly exhibit an artifact called the flicker signal, which results from the interplay between the backlight of the LCD screen and the recording mechanism of the video camera. In Hajj-Ahmad (2015), this flicker signal is exploited to characterize the LCD screen and the camera producing the pirated video by estimating the frequency of the screen's backlight signal and the camera's read-out time Tro.

Both the flicker signal and the ENF signal are signatures that can be intrinsically embedded in a video due to the camera's recording mechanism and the presence of a signal in the recording environment. For the flicker signal, the environmental signal is the backlight signal of the LCD screen, while for the ENF signal, it is the electric lighting signal in the recording environment. In Hajj-Ahmad et al. (2016), the authors leverage the similarities between the two signals to adapt the flicker-based approach to an ENF-based approach targeted at characterizing the camera producing the ENF-containing video. The authors carried out an experiment involving ENF-containing videos produced using five different cameras, and the results showed that the proposed approach achieves high accuracy in estimating the discriminating Tro parameter, with a relative estimation error within 1.5%.

Vatansever et al. (2019) further extend this application by proposing an approach that can operate on videos that the method in Hajj-Ahmad et al. (2016) cannot address, namely, cases where the main ENF component is aliased to 0 Hz because the nominal ENF is a multiple of the video camera's frame rate, e.g., a 25 fps camera capturing a 100 Hz ENF. The approach examines the two strongest frequency components at which the ENF appears, following a model proposed in the same paper, and deduces the read-out time Tro from the ratio of the strengths of these two components.
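
The aliasing behavior that motivates this extension follows from the standard frequency-folding relation; the small check below (plain Python) shows why a 100 Hz ENF component disappears at 25 fps but remains visible at other frame rates.

```python
def aliased_frequency(f_signal, f_sample):
    """Apparent frequency of a tone at f_signal after sampling at f_sample
    (standard folding/aliasing relation)."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

print(aliased_frequency(100.0, 25.0))   # 0.0  -> ENF component hidden at DC
print(aliased_frequency(100.0, 30.0))   # 10.0 -> ENF component visible at 10 Hz
```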

10.6 Anti-Forensics and Countermeasures

ENF signal-based forensic investigations, such as the time–location authentication, integrity authentication, and recording localization discussed in Sect. 10.5, rely on the assumption that the ENF signal buried in a hosting signal has not been maliciously altered. However, adversaries may perform anti-forensic operations to mislead forensic investigations. In this section, we examine the interplay between forensic analysts and adversaries to better understand the strengths and limitations of ENF-based forensics.

10.6.1 Anti-Forensics and Detection of Anti-Forensics

Anti-forensic operations by adversaries aim at altering the ENF traces to invalidate or mislead the ENF analysis. One intuitive approach is to superimpose an alternative ENF signal without removing the native ENF signal. This approach can confuse the forensic analyst when the two ENF traces overlap and have comparable strengths, in which case it is impossible to separate the native ENF signal without side information (Su et al. 2013; Zhu et al. 2020). A more sophisticated approach is to remove the ENF traces, either in the frequency domain, by using a bandstop filter around the nominal frequency (Chuang et al. 2012; Chuang 2013), or in the time domain, by subtracting a time signal synthesized via frequency modulation based on accurate estimates of the ENF signal, such as those from the power mains (Su et al. 2013). The most malicious approach is to mislead the forensic analyst by replacing the native ENF signal with an alternative one (Chuang et al. 2012; Chuang 2013). This can be done by first removing the native signal and then adding an alternative ENF signal.

Forensic analysts are motivated to devise ways to detect the use of anti-forensic operations (Chuang et al. 2012; Chuang 2013), so that a forged signal can be rejected or recovered to allow forensic analysis. In case an anti-forensic operation for the ENF is carried out merely around the fundamental frequency, the consistency of the ENF components from multiple harmonic frequency bands can be used to detect such an operation. An abrupt discontinuity in the spectrogram may be another indicator of an anti-forensic operation.

In Su et al. (2013), the authors formulated a composite hypothesis testing problem to detect the operation of superimposing an alternative ENF signal.

Adversaries may respond to forensic analysts by improving their anti-forensic operations (Chuang et al. 2012; Chuang 2013). Instead of replacing only the ENF component at the fundamental frequency, adversaries can attack all harmonic bands as well. The abrupt-discontinuity check on the spectrogram can be circumvented by envelope adjustment via the Hilbert transform. This dynamic interplay continues, with each side evolving its actions in response to the other's (Chuang et al. 2012; Chuang 2013), and the actions tend to become more complex as the interplay goes on.
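
The frequency-domain removal operation described above can be sketched with a narrow notch (bandstop) filter; the nominal frequency and quality factor below are illustrative, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_enf_component(audio, fs, nominal=60.0, quality=30.0):
    """Anti-forensic removal of the ENF component near the nominal
    frequency using a narrow notch filter (illustrative parameters)."""
    b, a = iirnotch(nominal, quality, fs)
    return filtfilt(b, a, audio)

# A naive attacker may notch only the fundamental; a forensic analyst can
# then compare the ENF estimated around 60 Hz with the ENF estimated around
# its harmonics (120 Hz, 180 Hz, ...) and flag the recording when the
# traces are inconsistent, as discussed above.
```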

10.6.2 Game-Theoretic Analysis on ENF-Based Forensics

Chuang et al. (2012), Chuang (2013) quantitatively analyzed the dynamic interplay between forensic analysts and adversaries using a game-theoretic framework. A representative but simplified scenario involving different actions was quantitatively evaluated. The optimal strategies in terms of Nash equilibrium, namely, the state in which no player can increase his or her own benefit (formally, the utility) via unilateral strategy changes, were derived. The optimal strategies and the resulting forensic and anti-forensic performances at the equilibrium can lead to a comprehensive understanding of the capability of ENF-based forensics in a specific scenario.
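
As a purely illustrative example of the equilibrium concept (the payoffs below are made up and are not the utilities analyzed in Chuang et al. (2012), Chuang (2013)), the following snippet enumerates the pure-strategy Nash equilibria of a small two-player game between an adversary and a forensic analyst.

```python
import numpy as np

# Hypothetical payoffs (higher is better for each player).
# Adversary strategies (rows): 0 = no attack, 1 = replace the ENF signal.
# Analyst strategies (cols):   0 = basic check, 1 = harmonic-consistency check.
adversary = np.array([[0.0, 0.0],
                      [0.8, 0.2]])
analyst = np.array([[0.9, 0.7],
                    [0.1, 0.6]])

def pure_nash(u_row, u_col):
    """Cells where neither player gains by a unilateral deviation."""
    return [(i, j)
            for i in range(u_row.shape[0])
            for j in range(u_row.shape[1])
            if u_row[i, j] >= u_row[:, j].max()
            and u_col[i, j] >= u_col[i, :].max()]

print(pure_nash(adversary, analyst))   # [(1, 1)] for these toy payoffs
```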

10.7 Applications Beyond Forensics and Security

The ENF signal, as an intrinsic signature of multimedia recordings, gives rise to applications beyond forensics and security. For example, an ENF signal's unique fluctuations can be used as a time–location signature to allow the synchronization of multiple pieces of media signals that overlap in time. This allows the synchronized signal pieces to jointly reveal more information than the individual pieces (Su et al. 2014b, c; Oard et al. 2014). In another example, the slowly varying trend of the ENF signal can serve as an anchor for the detection of abnormally recorded segments within a recording (Chang and Huang 2010, 2011) and can thus allow for the restoration of tape recordings suffering from irregular motor speed (Su 2014).

10.7.1 Multimedia Synchronization

Conventional multimedia synchronization approaches rely on passive/active calibration of the timestamps or on identifying common contextual information from two or more recordings.

Voice and music are examples of contextual information that can be used for audio synchronization. Overlapping visual scenes, even recorded from different viewing angles, can be exploited for video synchronization. ENF signals naturally embedded in the audio and visual tracks of recordings can complement conventional synchronization approaches, as they do not rely on common contextual information. ENF-based synchronization can be categorized into two major scenarios: one where the multiple recordings to be synchronized overlap in time, and one where they do not.

If multiple recordings from the same power grid are recorded with time overlap, then they can be readily synchronized by the ENF without using other information. In Su et al. (2014c), Oard et al. (2014), Su et al. (2014b), the authors use the ENF signals extracted from audio tracks for synchronization. For videos containing both visual and audio tracks, it is possible to use either modality as the source of ENF traces for synchronization. Su et al. (2014b) demonstrate using the ENF signals from the visual track for synchronization, which is important in scenarios such as video surveillance, where an audio track may not be available. In Golokolenko and Schuller (2017), the authors show that the idea can be extended to synchronizing the audio streams from wireless/low-cost USB sound cards in a microphone array. Sound cards have their own respective sample buffers, which fill up at different rates, leading to synchronization mismatches between the streams of different sound cards. Conventional approaches to address this synchronization require specialized hardware, which is not needed by a synchronization approach based on the captured ENF traces.

If recordings were not captured with time overlap, then reference ENF databases are needed to help determine the absolute recording time of each piece. This falls back to the time-of-recording authentication scenario discussed in Sect. 10.5.1: each recording determines its timestamp by matching against one or multiple reference databases, depending on whether the source location of the recording is known.
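
A minimal sketch of ENF-based alignment for two time-overlapping recordings follows; it simply picks the lag that maximizes the correlation of the two extracted ENF sequences, which is a simplification of the matching used in the cited works, and NumPy is assumed.

```python
import numpy as np

def enf_sync_offset(enf_a, enf_b, max_lag):
    """Lag (in ENF samples, e.g., seconds) by which recording B should be
    shifted to align with recording A, chosen as the lag that maximizes
    the correlation coefficient over the overlapping portion."""
    best_lag, best_rho = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        a, b = (enf_a[lag:], enf_b) if lag >= 0 else (enf_a, enf_b[-lag:])
        n = min(len(a), len(b))
        if n < 2:
            continue
        rho = np.corrcoef(a[:n], b[:n])[0, 1]
        if rho > best_rho:
            best_lag, best_rho = lag, rho
    return best_lag, best_rho
```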

10.7.2 Time-Stamping Historical Recordings

ENF traces can help archivists by providing a time-synchronized exhibit of multimedia files. Many twentieth-century recordings are important cultural heritage records, but some may lack necessary metadata, such as the date and the exact time of recording. Su et al. (2013), Su et al. (2014c), Oard et al. (2014) found ENF traces in the 1960s phone conversation recordings of President Kennedy in the White House, and in the multi-channel recordings of the 1970 NASA Apollo 13 mission. In Su et al. (2014c), the ENF was used to align two recordings of around 4 hours in length from the Apollo 13 mission, and the result was confirmed by the audio contents of the two recordings.

A set of time-synchronized exhibits of multimedia files with reliable recording-time metadata and high-SNR ENF traces can be used to reconstruct a reference ENF database for various purposes (Oard et al. 2014). Such a reconstructed ENF database can be valuable for applying ENF analysis to sensing data collected in the past, before power signals recorded for reference purposes became available.

10.7.3 Audio Restoration

An audio signal digitized from an analog tape has its content frequency "drifted" off the original value due to an inconsistent rolling speed between the recorder and the digitizer. The ENF signal, although varying over time, has a general trend that remains around the nominal frequency. When ENF traces are embedded in a tape recording, this consistent trend can serve as an anchor for correcting the abnormal speed of the recording (Su 2014). Figure 10.16a shows the drift of the general trend of the ENF, followed by an abrupt jump, in a recording from the NASA Apollo 11 mission. The abnormal speed can be corrected by temporally stretching or compressing the frame segments of the audio signal using techniques such as multi-rate conversion and interpolation (Su 2014). The spectrogram of the resulting corrected signal is shown in Fig. 10.16b, in which the trend of the ENF is constant, without an abrupt jump.
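
A simplified version of this correction can be sketched as follows: each audio frame is stretched or compressed by the ratio between its estimated ENF and the nominal value, so that the apparent ENF trend returns to the nominal frequency. SciPy is assumed, and the published method (Su 2014) uses more careful multi-rate processing than this per-frame resampling.

```python
import numpy as np
from scipy.signal import resample

def correct_speed(frames, enf_per_frame, nominal=60.0):
    """Per-frame speed correction: stretching a frame (ratio > 1, apparent
    ENF above nominal) lowers its frequencies back toward the nominal
    value, and compressing a frame does the opposite."""
    corrected = []
    for frame, f_est in zip(frames, enf_per_frame):
        ratio = f_est / nominal
        new_len = max(1, int(round(len(frame) * ratio)))
        corrected.append(resample(frame, new_len))
    return np.concatenate(corrected)
```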

Fig. 10.16 The spectrogram of an Apollo mission control recording a before and b after speed correction on the digitized signal (Su 2014)

10.8 Conclusions and Outlook

This chapter has provided an overview of the ENF signal, an environmental signature that can be captured by multimedia recordings made in locations where there is electrical activity.
We first examined the embedding mechanism of ENF traces, followed by how ENF signals can be extracted from audio and video recordings. We noted that, in order to reliably extract ENF signals from visual recordings, it is often helpful to exploit the rolling-shutter mechanism commonly used by cameras with CMOS image sensor arrays. We then systematically reviewed how the ENF signature can be used for time and/or location authentication, integrity authentication, and localization of media recordings. We also touched on anti-forensics, in which adversaries try to mislead forensic analysts, and discussed countermeasures. Applications beyond forensics, including multimedia synchronization and audio restoration, were also discussed.

Looking into the future, we notice many new problems arising naturally as technology continues to evolve. For example, given the popularity of short video clips that last for less than 30 s, ENF analysis tools need to accommodate short recordings. The technical challenge lies in making more efficient use of the ENF information than traditional ENF analysis tools do, as those tools focus on recordings that are minutes or even hours long. Another technical direction is to push the limit of ENF extraction algorithms from videos toward images. Is it possible to extract enough ENF information from a sequence of fewer than ten images captured over a few seconds or minutes to infer the time and/or location? For the case of ENF extraction from a single rolling-shutter image, is it possible to know more than whether the image was captured at a 50 or 60 Hz location?

Exploring new use cases of ENF analysis based on its intrinsic embedding mechanism is also a natural extension. For example, deepfake video detection may be a potential avenue for ENF analysis. ENF can be naturally embedded, through the flickering of indoor lighting, into the facial region and the background of a video recording. A consistency test may be designed to complement detection algorithms based on other visual/statistical cues.

References Baudry S, Chupeau B, Vito MD, Doërr G (2014) Modeling the flicker effect in camcorded videos to improve watermark robustness. In: IEEE international workshop on information forensics and security (WIFS), pp 42–47 Bollen M, Gu I (2006) Signal processing of power quality disturbances. Wiley-IEEE Press Brixen EB (2007a) Techniques for the authentication of digital audio recordings. In: Audio engineering society convention 122 Brixen EB (2007b) Further investigation into the ENF criterion for forensic authentication. In: Audio engineering society convention 123 Brixen EB (2008) ENF; quantification of the magnetic field. In: AES international conference: audio forensics-theory and practice Bykhovsky D, Cohen A (2013) Electrical network frequency (ENF) maximum-likelihood estimation via a multitone harmonic model. IEEE Trans Inf Forensics Secur 8(5):744–753 Catalin G (2009) Applications of ENF analysis in forensic authentication of digital audio and video recordings. J Audio Eng Soc 57(9):643–661

Chai J, Liu F, Yuan Z, Conners RW, Liu Y (2013) Source of ENF in battery-powered digital recordings. In: Audio engineering society convention 135 Chai J, Zhao J, Guo J, Liu Y (2016) Application of wide area power system measurement for digital authentication. In: IEEE/PES transmission and distribution conference and exposition, pp 1–5 Chang F-C, Huang H-C (2010) Electrical network frequency as a tool for audio concealment process. In: International conference on intelligent information hiding and multimedia signal processing, IIH-MSP ’10, Washington, DC, USA. IEEE Computer Society, pp 175–178 Chang F-C, Huang H-C (2011) A study on ENF discontinuity detection techniques. In: International conference on intelligent information hiding and multimedia signal processing, IIH-MSP ’11, Washington, DC, USA. IEEE Computer Society, pp 9–12 Choi J, Wong C-W (2019) ENF signal extraction for rolling-shutter videos using periodic zeropadding. In: IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 2667–2671 Chuang W-H, Garg R, Wu M (2012) How secure are power network signature based time stamps? In: ACM conference on computer and communications security, CCS ’12, New York, NY, USA, 2012. ACM, pp 428–438 Chuang W-H, Garg R, Wu M (2013) Anti-forensics and countermeasures of electrical network frequency analysis. IEEE Trans Inf Forensics Sec 8(12):2073–2088 Cooper AJ (2008) The electric network frequency (ENF) as an aid to authenticating forensic digital audio recordings an automated approach. In: AES international conference: audio forensicstheory and practice Cooper Alan J (2011) Further considerations for the analysis of ENF data for forensic audio and video applications. Int J Speech Lang Law 18(1):99–120 Cooper AJ (2009a) Digital audio recordings analysis: the electric network frequency (ENF) criterion. Int J Speech Lang Law 16(2):193–218 Cooper AJ (2009b) An automated approach to the electric network frequency (ENF) criterion: theory and practice. Int J Speech Lang Law 16(2):193–218 Cui Y, Liu Y, Fuhr P, Morales-Rodriguez M (2018) Exploiting spatial signatures of power ENF signal for measurement source authentication. In: IEEE international symposium on technologies for homeland security (HST), pp 1–6 Dosiek L (2015) Extracting electrical network frequency from digital recordings using frequency demodulation. IEEE Signal Process Lett 22(6):691–695 Eissa MM, Elmesalawy MM, Liu Y, Gabbar H (2012) Wide area synchronized frequency measurement system architecture with secure communication for 500kV/220kV Egyptian grid. In: IEEE international conference on smart grid engineering (SGE), p 1 Elmesalawy MM, Eissa MM (2014) New forensic ENF reference database for media recording authentication based on harmony search technique using GIS and wide area frequency measurements. IEEE Trans Inf Forensics Secur 9(4):633–644 Esquef PA, Apolinario JA, Biscainho LWP (2014) Edit detection in speech recordings via instantaneous electric network frequency variations. IEEE Trans Inf Forensics Secur 9(12):2314–2326 Esquef PAA, Apolinário JA, Biscainho LWP (2015) Improved edit detection in speech via ENF patterns. In: IEEE international workshop on information forensics and security (WIFS), pp 1–6 Fechner N, Kirchner M (2014) The humming hum: Background noise as a carrier of ENF artifacts in mobile device audio recordings. In: Eighth international conference on IT security incident management IT forensics (IMF), pp 3–13 FNET Server Web Display (2021) http://fnetpublic.utk.edu, Accessed Jun. 
2021 Fuentes M, Zinemanas P, Cancela P, Apolinário JA (2016) Detection of ENF discontinuities using PLL for audio authenticity. In: IEEE latin American symposium on circuits systems (LASCAS), pp 79–82 Fu L, Markham PN, Conners RW, Liu Y (2013) An improved discrete Fourier transform-based algorithm for electric network frequency extraction. IEEE Trans Inf Forensics Secur 8(7):1173– 1181

Garg R, Hajj-Ahmad A, Wu M (2013) Geo-location estimation from electrical network frequency signals. In: IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 2862–2866 Garg R, Varna AL, Hajj-Ahmad A, Wu M (2013) ‘Seeing’ ENF: Power-signature-based timestamp for digital multimedia via optical sensing and signal processing. IEEE Trans Inf Forensics Secur 8(9):1417–1432 Garg R, Varna AL, Wu M (2011) ‘Seeing’ ENF: Natural time stamp for digital video via optical sensing and signal processing. In: ACM international conference on multimedia, MM ’11, New York, NY, USA. ACM, pp 23–32 Garg R, Varna AL, Wu M (2012) Modeling and analysis of electric network frequency signal for timestamp verification. In: IEEE international workshop on information forensics and security (WIFS), pp 67–72 Glentis GO, Jakobsson A (2011) Time-recursive IAA spectral estimation. IEEE Signal Process Lett 18(2):111–114 Golokolenko O, Schuller G (2017) Investigation of electric network frequency for synchronization of low cost and wireless sound cards. In: European signal processing conference (EUSIPCO), pp 693–697 Grigoras C (2005) Digital audio recordings analysis: the electric network frequency (ENF) criterion. Int J Speech Lang Law 12(1):63–76 Grigoras C (2007) Applications of ENF criterion in forensic audio, video, computer and telecommunication analysis. Forensic Sci Int 167(2–3):136–145 Gu J, Hitomi Y, Mitsunaga T, Nayar S (2010) Coded rolling shutter photography: Flexible spacetime sampling. In: IEEE international conference on computational photography (ICCP), pp 1–8, Cambridge, MA Gupta S, Cho S, Kuo JCC (2012) Current developments and future trends in audio authentication. IEEE MultiMed 19(1):50–59 Hajj-Ahmad A, Berkovich A, Wu M (2016) Exploiting power signatures for camera forensics. IEEE Signal Process Lett 23(5):713–717 Hajj-Ahmad A, Wong C-W, Gambino S, Zhu Q, Yu M, Wu M (2019) Factors affecting ENF capture in audio. IEEE Trans Inf Forensics Secur 14(2):277–288 Hajj-Ahmad A, Baudry S, Chupeau B, Doërr G (2015) Flicker forensics for pirate device identification. In: ACM workshop on information hiding and multimedia security, IH&MMSec ’15, New York, NY, USA. ACM, pp 75–84 Hajj-Ahmad A, Garg R, Wu M (2012) Instantaneous frequency estimation and localization for ENF signals. In: APSIPA annual summit and conference, pp 1–10 Hajj-Ahmad A, Garg R, Wu M (2013) ENF based location classification of sensor recordings. In: IEEE international workshop on information forensics and security (WIFS), pp 138–143 Hajj-Ahmad A, Garg R, Wu M (2013) Spectrum combining for ENF signal estimation. IEEE Signal Process Lett 20(9):885–888 Hajj-Ahmad A, Garg R, Wu M (2015) ENF-based region-of-recording identification for media signals. IEEE Trans Inf Forensics Secur 10(6):1125–1136 Hua G (2018) Error analysis of forensic ENF matching. In: IEEE international workshop on information forensics and security (WIFS), pp 1–7 Hua G, Zhang Y, Goh J, Thing VLL (2016) Audio authentication by exploring the absolute-errormap of ENF signals. IEEE Trans Inf Forensics Secur 11(5):1003–1016 Hua G, Goh J, Thing VLL (2014) A dynamic matching algorithm for audio timestamp identification using the ENF criterion. IEEE Trans Inf Forensics Secur 9(7):1045–1055 Huijbregtse M, Geradts Z (2009) Using the ENF criterion for determining the time of recording of short digital audio recordings. In: Geradts ZJMH, Franke KY, Veenman CJ (eds) Computational Forensics, Lecture Notes in Computer Science, vol 5718. 
Springer Berlin Heidelberg, pp 116–124

Jeon Y, Kim M, Kim H, Kim H, Huh JH, Yoon JW (2018) I’m listening to your location! inferring user location with acoustic side channels. In: World Wide Web conference, WWW ’18, pages 339– 348, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee Kajstura M, Trawinska A, Hebenstreit J (2005) Application of the electric network frequency (ENF) criterion: a case of a digital recording. Forensic Sci Int 155(2–3):165–171 Karantaidis G, Kotropoulos C (2018) Assessing spectral estimation methods for electric network frequency extraction. In: Pan-Hellenic conference on informatics, PCI ’18, New York, NY, USA. ACM, pp 202–207 Kim H, Jeon Y, Yoon JW (2017) Construction of a national scale ENF map using online multimedia data. In: ACM on conference on information and knowledge management, CIKM ’17, New York, NY, USA. ACM, pp 19–28 Lin X, Liu J, Kang X (2016) Audio recapture detection with convolutional neural networks. IEEE Trans Multimed 18(8):1480–1487 Lin X, Kang X (2017) Supervised audio tampering detection using an autoregressive model. In: IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 2142– 2146 Lin X, Kang X (2018) Robust electric network frequency estimation with rank reduction and linear prediction. ACM Trans Multimed Comput Commun Appl 14(4):84:1–84:13 Liu Y, Yuan Z, Markham PN, Conners RW, Liu Y (2011) Wide-area frequency as a criterion for digital audio recording authentication. In: Power and energy society general meeting, pp 1–7 Liu Y, Yuan Z, Markham PN, Conners RW, Liu Y (2012) Application of power system frequency for digital audio authentication. IEEE Trans Power Deliv 27(4):1820–1828 Manolakis DG, Ingle VK, Kogon SM(2000) Statistical and adaptive signal processing. McGrawHill, Inc Myriam D-C, Sylvain M (2000) High-precision fourier analysis of sounds using signal derivatives. J Audio Eng Soc 48(7/8):654–667 Nicolalde DP, Apolinario J (2009) Evaluating digital audio authenticity with spectral distances and ENF phase change. In: IEEE international conference on acoustics, speech and signal processing (ICASSP), Washington, DC, USA. IEEE Computer Society, pp 1417–1420 Oard DW, Wu M, Kraus K, Hajj-Ahmad A, Su H, Garg R (2014) It’s about time: Projecting temporal metadata for historically significant recordings. In: iConference, Berlin, Germany, pp 635–642 Ojowu O, Karlsson J, Li J, Liu Y (2012) ENF extraction from digital recordings using adaptive techniques and frequency tracking. IEEE Trans Inf Forensics Secur 7(4):1330–1338 Phadke AG, Thorp JS, Adamiak MG (1983) A new measurement technique for tracking voltage phasors, local system frequency, and rate of change of frequency. IEEE Trans Power Appar Syst PAS-102(5):1025–1038 Pop G, Draghicescu D, Burileanu D, Cucu H, Burileanu C (2017) Fast method for ENF database build and search. In: International conference on speech technology and human-computer dialogue (SpeD), pp 1–6 Ralph S (1986) Multiple emitter location and signal parameter estimation. IEEE Trans Antennas Propag 34(3):276–280 Reis PMGI, da Costa JPCL, Miranda RK, Del Galdo G (2016) Audio authentication using the kurtosis of ESPRIT based ENF estimates. In: International conference on signal processing and communication systems (ICSPCS), pp 1–6 Reis PMGI, da Costa JPCL, Miranda RK, Del Galdo G (2017) ESPRIT-Hilbert-based audio tampering detection with SVM classifier for forensic analysis via electrical network frequency. 
IEEE Trans Inf Forensics Secur 12(4):853–864 Rodríguez DPN, Antonio Apolinário J, Wagner Pereira Biscainho L (2010) Audio authenticity: Detecting ENF discontinuity with high precision phase analysis. IEEE Trans Inf Forensics and Secur 5(3):534–543 Rodriguez DP, Apolinario JA, Biscainho LWP (2013) Audio authenticity based on the discontinuity of ENF higher harmonics. In: Signal processing conference (EUSIPCO), pp 1–5

Roy R, Kailath T (1989) ESPRIT-estimation of signal parameters via rotational invariance techniques. IEEE Trans Acoust Speech Signal Process 37(7):984–995 Sanders RW (2008) Digital audio authenticity using the electric network frequency. In: AES international conference: audio forensics-theory and practice Sanders R, Popolo PS (2008) Extraction of electric network frequency signals from recordings made in a controlled magnetic field. In: Audio engineering society convention 125 Šari´c Z, Žuni´c A, Zrniéc T, Kneževi´c M, Despotovi´c D, Deli´c T (2016) Improving location of recording classification using electric network frequency (ENF) analysis. In: International symposium on intelligent systems and informatics (SISY), pp 51–56 Signal Processing (SP) Cup Project Reports (2021) https://sigport.org/events/sp-cup-projectreports, Accessed Jun. 2021 Su H (2014) Temporal and spatial alignment of multimedia signals. PhD thesis, University of Maryland, College Park Su H, Garg R, Hajj-Ahmad A, Wu M (2013) ENF analysis on recaptured audio recordings. In: IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 3018–3022 Su H, Hajj-Ahmad A, Garg R, Wu M (2014a) Exploiting rolling shutter for ENF signal extraction from video. In: IEEE international conference on image processing (ICIP), pp 5367–5371 Su H, Hajj-Ahmad A, Wong C-W, Garg R, Wu M (2014b) ENF signal induced by power grid: A new modality for video synchronization. In: ACM international workshop on immersive media experiences, ImmersiveMe ’14, New York, NY, USA, 2014. ACM, pp 13–18 Su H, Hajj-Ahmad A, Wu M, Oard DW (2014c) Exploring the use of ENF for multimedia synchronization. In: IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 4613–4617 Suresha PB, Nagesh S, Roshan PS, Gaonkar PA, Meenakshi GN, Ghosh PK (2017) A high resolution ENF based multi-stage classifier for location forensics of media recordings. In: National conference on communications (NCC), pp 1–6 Szeliski R (2010)Computer vision: algorithms and applications, chapter 2.2. Springer Top P, Bell MR, Coyle E, Wasynczuk O (2012) Observing the power grid: Working toward a more intelligent, efficient, and reliable smart grid with increasing user visibility. IEEE Signal Process Mag 29(5):24–32 Tsai S-J, Zhang L, Phadke AG, Liu Y, Ingram MR, Bell SC, Grant IS, Bradshaw DT, Lubkeman D, Tang L (2007) Frequency sensitivity and electromechanical propagation simulation study in large power systems. IEEE Trans Circuits Syst I: Regular Papers 54(8):1819–1828 Vaidyanathan PP (2006) Multirate systems and filter banks. Pearson Education India Vatansever S, Dirik AE, Memon N (2017) Detecting the presence of ENF signal in digital videos: a superpixel-based approach. IEEE Signal Process Lett 24(10):1463–1467 Vatansever S, Dirik AE, Memon N (2019) Analysis of rolling shutter effect on ENF based video forensics. IEEE Trans Inf Forensics Secur 1 Wong C-W, Hajj-Ahmad A, Wu M (2018) Invisible geo-location signature in a single image. In: IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 1987– 1991 Wu M, Hajj-Ahmad A, Kirchner M, Ren Y, Zhang C, Campisi P (2016) Location signatures that you don’t see: highlights from the IEEE Signal Processing Cup 2016 student competition [SP Education]. IEEE Signal Process Mag 33(5):149–156 Yao W, Zhao J, Till MJ, You S, Liu Y, Cui Y, Liu Y (2017) Source location identification of distribution-level electric network frequency signals at multiple geographic scales. 
IEEE Access 5:11166–11175 Zhang Y, Markham P, Xia T, Chen L, Ye Y, Wu Z, Yuan Z, Wang L, Bank J, Burgett J, Conners RW, Liu Y (2010) Wide-area frequency monitoring network (FNET) architecture and applications. IEEE Trans Smart Grid 1(2):159–167 Zhong Z, Xu C, Billian BJ, Zhang L, Tsai SS, Conners RW, Centeno VA, Phadke AG, Liu Y (2005) Power system frequency monitoring network (FNET) implementation. IEEE Trans Power Syst 20(4):1914–1921

Zhu Q, Chen M, Wong C-W, Wu M (2018) Adaptive multi-trace carving based on dynamic programming. In: Asilomar conference on signals, systems, and computers, pp 1716–1720 Zhu Q, Chen M, Wong C-W, Wu M (2020) Adaptive multi-trace carving for robust frequency tracking in forensic applications. IEEE Trans Inf Forensics Secur 16:1174–1189

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Chapter 11

Data-Driven Digital Integrity Verification

Davide Cozzolino, Giovanni Poggi, and Luisa Verdoliva

Images and videos are by now a dominant part of the information flowing on the Internet and the preferred communication means for younger generations. Besides providing information, they elicit emotional responses that are much stronger than those elicited by text. It is probably for these reasons that the advent of AI-powered deepfakes, realistic and relatively easy to generate, has raised great concern among governments and ordinary people alike. Yet, the art of image and video manipulation is much older than that. Armed with conventional media editing tools, a skilled attacker can create highly realistic fake images and videos, so-called "cheapfakes". However, despite the nickname, there is nothing cheap about cheapfakes. They require significant domain knowledge to be crafted, and their detection can be much more challenging than the detection of AI-generated material. In this chapter, we focus on the detection of cheapfakes, that is, image and video manipulations carried out with conventional means. However, we skip conventional detection methods, already thoroughly described in many reviews, and focus on the data-driven, deep learning-based methods proposed in recent years. We look at manipulation detection methods from various perspectives. First, we analyze in detail the forensic traces they rely on, and then the main architectural solutions proposed for detection and localization, together with the associated training strategies. Finally, major challenges and future directions are highlighted.

D. Cozzolino (B) · G. Poggi · L. Verdoliva
University Federico II of Naples, Naples, Italy

© The Author(s) 2022
H. T. Sencar et al. (eds.), Multimedia Forensics, Advances in Computer Vision and Pattern Recognition, https://doi.org/10.1007/978-981-16-7621-5_11

11.1 Introduction

In the last few years, multimedia forensics has been drawing ever-increasing attention in the scientific community and beyond. Indeed, with the editing software tools available nowadays, modifying images or videos has become extremely easy.

In the wrong hands, and used for the wrong purposes, this capability may pose major threats to both individuals and society as a whole. In this chapter, we focus on conventional manipulations, also known as cheapfakes, as opposed to deepfakes, which are generated with deep learning tools (Paris and Donovan 2019). Generating a cheapfake does not require artificial-intelligence-based technology, yet a skilled attacker can craft very realistic fakes, which can have disruptive consequences. In Fig. 11.1, we show some examples of very common manipulations: inserting an object copied from a different image (splicing), replicating an object coming from the same image (copy-move), or removing an object by extending the background (inpainting). Usually, to better fit the object into the scene, suitable post-processing operations are also applied, such as resizing, rotation, boundary smoothing, or color adjustment.

A large number of methods have been proposed for forgery detection and localization, both for images and videos (Farid 2016; Korus 2017; Verdoliva 2020). Here, we focus on methods that verify the digital (as opposed to physical or semantic) integrity of the media asset, that is, that discover the occurrence of a manipulation by detecting the pixel-level inconsistencies it causes. It is worth emphasizing that even well-crafted manipulations, which leave no visible artifacts on the image, always modify its statistics, leaving traces that can be exploited by pixel-level analysis tools. In fact, the image formation process inside a camera comprises a number of operations, both hardware and software, specific to each camera, which leave distinctive marks on each acquired image (Fig. 11.2). For example, camera models use widely different demosaicing algorithms, as well as different quantization tables for JPEG compression.

Fig. 11.1 Examples of image manipulations carried out using conventional media editing tools. From left to right: splicing (alien material inserted in the image), copy-move (an object has been cloned), inpainting (objects removed using background patches)

Fig. 11.2 In-camera processing pipeline. An image is captured using a complex acquisition system. The optical filter reduces undesired light components and the lenses focus the light on the sensor, where the red-green-blue (RGB) components are extracted by means of a color filter array (CFA). Then, a sequence of internal processing steps follow, including lens distortion correction, enhancement, demosaicing, and finally compression

Their traces allow one to identify the camera model that acquired a given image. So, when parts of different images are spliced together, a spatial anomaly in these traces can be detected, revealing the manipulation. Likewise, the post-camera editing process may introduce specific artifacts of its own, as well as disrupt camera-specific patterns. Therefore, by emphasizing possible deviations from the expected behavior, one can establish with good confidence whether the digital integrity has been violated.

In the multimedia forensics literature, there has been intense work on detecting any possible type of manipulation, global or local, irrespective of its purpose. Several methods detect a large array of image manipulations, like resampling, median filtering, and contrast enhancement. However, these operations can also be carried out only to improve the image appearance or for other legitimate purposes, with no malicious intent. For this reason, in this chapter, we will focus on detecting local, semantically relevant manipulations, like copy-moves, inpainting, and compositions, that can modify the meaning of the image.

Early digital integrity methods followed a model-based approach, relying on mathematical or statistical models of the data and the attacks. More recent methods, instead, are for the most part data-driven and based on deep learning: they exploit the availability of large amounts of data to learn how to detect specific or generic forensic clues. These can be further classified based on whether they output an image-level detection score, indicating the probability that the image has been manipulated, or a pixel-level localization mask, providing a score for each pixel. In addition, we will distinguish methods based on the training strategy they apply:
• two-class learning, where training is carried out on datasets of labeled pristine and manipulated data;
• one-class learning, where only real data are used during training and manipulations are regarded as anomalies.
In the rest of the chapter, we first analyze in detail the forensic clues exploited in the various approaches, then review the main architectural solutions proposed for detection and localization, together with their training strategies and the most popular datasets proposed in this field.

Finally, we draw conclusions and highlight major challenges and future directions.

11.2 Forensics Clues

Learning-based detection methods are typically designed on the basis of the specific forensic artifacts they look for. For example, some proposed convolutional neural networks (CNNs) base their decision on low-level features (e.g., camera-based), while others focus on higher level features (e.g., edge boundaries in a composition), and still others on features of both types. Therefore, this section will review the most interesting clues that can be used for digital integrity verification.

11.2.1 Camera-Based Artifacts

One powerful approach is to exploit the many artifacts that are introduced during the image acquisition process (Fig. 11.2). Such artifacts are specific to the individual camera (the device) or to the camera model, and therefore represent a form of signature superimposed on the image, which can be exploited for forensic purposes. As an example, if part of the image is replaced with content taken from another source, the added material will lack the original camera artifacts, allowing detection of the attack. Some of the most relevant artifacts are related to the sensor, the lens, the color filter array, the demosaicing algorithm, and the JPEG quantization tables. In all cases, since the artifacts of interest are weak, the image content (the scene) represents a strong undesired disturbance. Therefore, it is customary to work on so-called noise residuals, obtained by suppressing the high-level content. To this end, one can estimate the "true" image by means of a denoising algorithm and subtract it from the original image, or use high-pass filters in the spatial or transform domain (Lyu and Farid 2005; He et al. 2012).

A powerful camera-related artifact is the photo-response non-uniformity noise (PRNU), successfully used not only for forgery detection but also for source identification (Chen et al. 2008). The PRNU pattern is due to unavoidable imperfections in the sensor manufacturing process. It is unique to each individual camera, stable in time, and present in all acquired images; therefore, it can be considered a device fingerprint. Using such a fingerprint, manipulations of any type can be detected. In fact, whenever the image is modified, the PRNU pattern is corrupted or even fully removed, a strong clue of a possible manipulation (Lukàš et al. 2006). The main drawbacks of PRNU-based methods are (i) the need for images taken by the camera under test to obtain a good estimate of the reference pattern and (ii) the need for a PRNU pattern spatially aligned with the image under test.
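
The noise residuals mentioned above can be computed, in their simplest form, as the difference between the image and a denoised version of it; the sketch below uses a median filter as a stand-in denoiser (the cited methods use wavelet-based denoisers or fixed high-pass filters), and SciPy is assumed.

```python
import numpy as np
from scipy.ndimage import median_filter

def noise_residual(image):
    """Suppress scene content and keep the weak high-frequency traces by
    subtracting a denoised estimate of the image (median filter used here
    as a simple stand-in for the denoisers cited in the text)."""
    img = np.asarray(image, dtype=np.float32)
    return img - median_filter(img, size=3)
```

PRNU correlation tests and most learned detectors operate on residuals of this kind rather than on the raw pixels.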

Fig. 11.3 Training a noiseprint extraction network. Pairs of patches feed the two branches of the Siamese network that extract the corresponding noiseprint patches. The output distance must be minimized for patches coming from the same camera and spatial location, maximized for the others

In Cozzolino and Verdoliva (2020), a learning-based strategy is proposed to extract a stronger camera fingerprint, the image noiseprint. Like the PRNU pattern, it behaves as a fingerprint embedded in all acquired images. Unlike the PRNU, it depends on all the camera-model artifacts that contribute to the digital history of an image. Noiseprints are extracted by means of a CNN trained to capture and emphasize all camera-model artifacts while suppressing the high-level semantic content. To this end, the CNN is trained in a Siamese configuration, with two replicas of the network, identical in architecture and weights, on two parallel branches. The network is fed with pairs of patches drawn from a large number of pristine images of various camera models. When the patches on the two branches are aligned (same camera model and same spatial position, see Fig. 11.3), they can be expected to contain much the same artifacts. Therefore, the output of each branch can be used as a reference for the input of the other one, bypassing the need for clean examples. Eventually, the network learns to extract the desired noiseprint and can be used with no further supervision on images captured by any camera model, both inside and outside the training set.

Of course, the extracted noiseprints will always contain traces of the imperfectly rejected high-level image content, acting as noise for forensic purposes. However, by exploiting all sorts of camera-related artifacts, noiseprints enable more powerful and reliable forgery detection procedures than PRNU patterns. On the downside, the noiseprint does not allow one to distinguish between two devices of the same camera model. Therefore, it should not be regarded as a substitute for the PRNU, but rather as a complementary tool. In Fig. 11.4, we show some examples of noiseprints extracted from different fake images, with the corresponding heat maps obtained by feature clustering. In the first case, the noiseprint clearly shows the 8 × 8 grid of the JPEG format. Sometimes, traces left by image manipulations on the extracted noiseprint are so strong as to allow easy localization even by direct inspection. The approach can also be easily extended to videos (Cozzolino et al. 2019), but larger datasets are necessary to make up for the reduced quality of the sources due to heavy compression. Interestingly, noiseprints can also be used in a supervised setting (Cozzolino and Verdoliva 2018).
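
The Siamese training idea can be illustrated with a generic contrastive objective, sketched below in PyTorch; this is not the exact loss used by Cozzolino and Verdoliva (2020), and `net` stands for any fully convolutional extractor with shared weights on the two branches.

```python
import torch
import torch.nn.functional as F

def siamese_loss(net, patch_a, patch_b, same_source, margin=1.0):
    """Generic contrastive objective for noiseprint-style training: pull
    the two outputs together when the patches share camera model and
    spatial position (same_source = 1), push them apart otherwise."""
    na, nb = net(patch_a), net(patch_b)             # extracted fingerprints
    d = F.mse_loss(na, nb, reduction='none').flatten(1).mean(dim=1)
    pos = same_source * d                           # same source: small distance
    neg = (1.0 - same_source) * F.relu(margin - d)  # different: enforce a margin
    return (pos + neg).mean()
```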

Fig. 11.4 Examples of manipulated images (top) with their corresponding noiseprints (middle) and heat maps obtained by a proper clustering (bottom)

Fig. 11.5 Noiseprint in supervised modality. The localization procedure is the classic pipeline used with PRNU-based methods: the noiseprint is extracted from a set of pristine images taken by the same camera as the image under analysis; their average represents a clean reference, namely, a reliable estimate of the camera noiseprint that can be used for image forgery localization

Given a suitable training set of images, a reliable estimate of the noiseprint is built, in which high-level scene leakages are mostly removed, and is used in a PRNU-like fashion to discover anomalies in the image under analysis (see Fig. 11.5). As we mentioned before, many different artifacts arise from the in-camera processing pipeline, related to the sensor, the lens, and especially the internal processing algorithms. Of course, one can also focus on just one of these artifacts to develop a manipulation detector. Indeed, this has been the dominant approach in the past, with many model-based methods proposed to exploit very specific features, leveraging deep domain knowledge. In particular, there has been intense research on the CFA-demosaicing combination. Most digital cameras use a periodic color filter array, so that each individual sensor element records light only in a certain range of wavelengths (i.e., red,


Fig. 11.6 Histograms of the DCT coefficients clearly show the statistical differences between images subject to a single or double JPEG compression

green, blue). The missing color information is then interpolated from surrounding pixels in the demosaicing process, which introduces a subtle periodic correlation pattern in all acquired images. Both the CFA configuration and the demosaicing algorithm are specific to each camera model, leading to different correlation patterns which can be exploited for forensic purposes. For example, when a region taken from a photo of a different camera model is spliced into the image, its periodic pattern will appear anomalous. Most of the methods proposed in this context are model-based (Popescu and Farid 2005; Cao and Kot 2009; Ferrara et al. 2012); however, recently a convolutional neural network specifically tailored to the detection of mosaic artifacts has been proposed (Bammey et al. 2020). The network is trained only on authentic images and includes 3 × 3 convolutions with a dilation of 2, so as to examine pixels that all belong to the same color channel and detect possible inconsistencies. In fact, a change of the mosaic causes forged blocks to be detected at positions that are inconsistent modulo (2, 2).

11.2.2 JPEG Artifacts

Many algorithms in image forensics exploit traces left by the compression process. Some methods look for anomalies in the statistical distribution of the original DCT coefficients, assumed to comply with Benford's law (Fu et al. 2007). However, the most popular approach is to detect traces of double compression (Bianchi and Piva 2012). In fact, when a JPEG-compressed image undergoes a local manipulation and is then compressed again, double compression (or double quantization, DQ) artifacts appear all over the image except, with very high probability, in the forged area. This phenomenon is also exploited in the well-known Error Level Analysis (ELA), widespread among practitioners for its simplicity. The effects of double compression are especially visible in the distributions of the DCT coefficients which, after the second compression, show characteristic periodic peaks and valleys (Fig. 11.6). Based on this observation, Wang and Zhang (2016) performs splicing detection by means of a CNN trained on histograms of DCT coefficients. The histograms are computed either from single-compressed patches (forged areas) or double-compressed ones (pristine areas), with quality factors in


the range 60–95. To reduce the dimensionality of the feature vector, only a small interval near the peak of each histogram is chosen to represent the whole histogram. Following this seminal paper, Amerini et al. (2017) and Park et al. (2018) also use histograms of DCT coefficients as input, while Kwon et al. (2021) uses one-hot coded DCT coefficients, as already proposed in Yousfi and Fridrich (2020) for steganalysis. The quantization tables can also carry useful information, and are used as additional input in Park et al. (2018); Kwon et al. (2021). Leveraging domain-specific prior knowledge, the approach proposed in Barni et al. (2017) works on noise residuals rather than on image pixels, and uses the first layers of the CNN to extract histogram-related features. In Park et al. (2018), instead, the quantization tables from the JPEG header are used as input together with the histogram features. It is worth observing that methods based on deep learning provide good results also in cases where conventional methods largely fail, such as when test images are compressed with a quality factor (QF) never seen in training or when the second QF is larger than the first one. Double compression detectors have also been proposed for H.264 video analysis, with a two-stream neural network that analyzes intra-coded and predictive frames separately (Nam et al. 2019).
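To make this input representation concrete, the sketch below computes per-frequency histograms of block-wise DCT coefficients, the kind of feature vector fed to the CNNs mentioned above. It is a simplified illustration assuming a grayscale image already aligned with the 8 × 8 JPEG grid; the bin range and the selected frequencies are arbitrary choices, not those of any specific paper.

import numpy as np
from scipy.fft import dctn

def dct_histograms(gray, bins=np.arange(-20.5, 21.5), n_freqs=9):
    # Block-wise 8x8 DCT (as in JPEG), then one histogram of rounded
    # coefficient values per low-frequency position; single and double
    # compression leave different periodic patterns in these histograms.
    gray = gray.astype(np.float64) - 128.0
    H, W = (gray.shape[0] // 8) * 8, (gray.shape[1] // 8) * 8
    gray = gray[:H, :W]
    blocks = gray.reshape(H // 8, 8, W // 8, 8).transpose(0, 2, 1, 3).reshape(-1, 8, 8)
    coeffs = np.stack([dctn(b, norm='ortho') for b in blocks])   # (N, 8, 8)
    # a few low-frequency positions in zig-zag-like order (DC skipped)
    positions = [(0, 1), (1, 0), (2, 0), (1, 1), (0, 2),
                 (0, 3), (1, 2), (2, 1), (3, 0)][:n_freqs]
    feats = [np.histogram(np.round(coeffs[:, i, j]), bins=bins)[0] for i, j in positions]
    return np.concatenate(feats).astype(np.float32)   # input vector for a classifier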

11.2.3 Editing Artifacts

The editing process often generates a trail of precious traces, beyond the artifacts related to re-compression. Indeed, when a new object is inserted in an image, several post-processing steps are typically required to fit it smoothly into the new context. These include geometric transformations, like rotation and scaling, contrast adjustment, and blurring, used to smooth the object-background boundaries. Both rotation and resampling, for example, require some form of interpolation, which in turn generates periodic artifacts. Therefore, some methods (Kirchner 2008) look for traces of resampling as a proxy for possible forgeries. In this section, we describe the editing-based traces most widely used and most discriminative in forensic algorithms.

Copy-moves

A very common manipulation consists of replicating or hiding objects. Of course, the presence of identical regions is a strong hint of forgery, but clones are often modified to disguise traces, and near-identical natural objects also exist, which complicates the forensic analysis. Studies on copy-move detection date back to 2003, with the seminal work of Fridrich (2003). Since then, a large literature has grown on this topic, proposing solutions that allow for copy-move detection even in the presence of rotation, resizing, and other geometric distortions (Christlein et al. 2012). Methods based on keypoints are very efficient, while block-based dense methods (Cozzolino et al. 2015) are more accurate and also deal with removals.
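A minimal keypoint-based sketch of the classic approach is given below: the image is matched against itself, and mutually similar yet spatially distant keypoints are retained as copy-move candidates. This is only an illustration in the spirit of keypoint methods such as Amerini et al. (2011), which actually relies on SIFT; here OpenCV ORB features are used for brevity, and the geometric verification and dense post-processing that real detectors require are omitted.

import cv2
import numpy as np

def copy_move_matches(gray, min_dist=40, ratio=0.6):
    # Detect keypoints, match the image against itself, and keep pairs of
    # near-identical, distant keypoints as hints of a copy-move.
    orb = cv2.ORB_create(nfeatures=5000)
    kps, desc = orb.detectAndCompute(gray, None)
    if desc is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(desc, desc, k=3)   # first match is the keypoint itself
    pairs = []
    for m in matches:
        if len(m) < 3:
            continue
        _, best, second = m                       # skip the self-match
        if best.distance < ratio * second.distance:
            p1 = np.array(kps[best.queryIdx].pt)
            p2 = np.array(kps[best.trainIdx].pt)
            if np.linalg.norm(p1 - p2) > min_dist:
                pairs.append((tuple(p1), tuple(p2)))
    return pairs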


Fig. 11.7 Examples of additive copy-moves. From left to right: manipulated images and corresponding ground truths, binary localization maps obtained using a dense approach (Cozzolino et al. 2015) and those obtained by means of a deep-learning-based one (Chen et al. 2020). The latter method allows telling apart the source and copied areas

A first solution relying on deep learning has been proposed in Wu et al. (2018), where an end-to-end CNN-based architecture jointly optimizes the three main steps of a classic copy-move pipeline, that is, feature extraction, matching, and post-processing. To improve performance across different scenarios, multiscale feature analysis and hierarchical feature mapping are employed in Zhong and Pun (2019). Overall, deep-learning-based methods appear to outperform conventional methods on low-resolution (i.e., small-size) images and can be designed to distinguish the copy from the original region. In fact, copy-move detection methods typically generate a map where both the original object and its clone are highlighted, but do not establish which is which. The problem of source-target disambiguation is considered in Wu et al. (2018); Barni et al. (2019); Chen et al. (2020). The main idea, originally proposed in Wu et al. (2018), is to use one CNN for manipulation detection and another one for similarity computation. A suitable fusion of these two outputs then helps to distinguish the source region from its copies (Fig. 11.7).

Inpainting

Inpainting is a very effective image manipulation approach that can be used to hide objects or people. The main classic approaches for its detection rely on methods inspired by copy-move detection, given that in the widespread exemplar-based inpainting, multiple small regions are copied from all over the image and combined together to cover the object. Patch-based inpainting is addressed in Zhu et al. (2018) with an encoder/decoder architecture which provides a localization map as output. In Li and Huang (2019), instead, a fully convolutional network is used to localize the inpainted area in the residual domain. The CNN does not look for any specific editing artifact caused by inpainting; it is only trained on a large number of examples of real and manipulated images, which makes detection very effective for the analyzed inpainting methods.


Splicings

Other approaches focus on the anomalies which appear at the boundaries of objects when a composition is performed, due to the inconsistencies between regions drawn from different sources. For example, Salloum et al. (2018) uses a multi-task fully convolutional network comprising a specific branch for detecting the boundaries between inserted regions and background and another one for analyzing the surface of the manipulation. Both in this work and in Chen et al. (2021), training also exploits the edge map extracted from the ground truth. In other papers (Liu et al. 2018; Shi et al. 2018; Zhang et al. 2018; Phan-Xuan et al. 2019), instead, training is performed on patches that are either authentic or composed of pixels belonging to the two classes. Therefore, the CNN is forced to look for the boundaries of the manipulation.

Face Warping

In Wang et al. (2019), a CNN-based solution is proposed to detect artifacts introduced by a specific Photoshop tool, Face-Aware Liquify, which performs image warping of human faces. The approach is very successful for this task because the network is trained on manipulated images automatically generated by the very same tool. To deal with more challenging situations and increase robustness, data augmentation is applied by including resizing, JPEG compression, and various types of histogram editing.

11.3 Localization Versus Detection

Integrity verification can be carried out at two different levels: image level (detection) or pixel level (localization). In the first case, a global integrity score is provided, which indicates the likelihood that a manipulation occurred anywhere in the image. The score can be a real number, typically an estimate of the probability of manipulation, or a binary decision (yes/no). Localization, instead, builds upon the hypothesis that a detector has already decided on the presence of a manipulation and tries to establish, for each individual pixel, whether it has been manipulated. Therefore, the output has the same size as the image and, again, can be real-valued, corresponding to a probability map or, more generally, a heat map, or binary, corresponding to a decision map (see Fig. 11.8). It is worth noting that, in practice, most localization methods also try to detect the presence of a manipulation and do not make any assumption on the fake/pristine nature of the image. Many methods focus on localization, others on detection, or on both tasks. A brief summary of the main characteristics of these approaches is presented in Table 11.1. In this section, we analyze the major directions adopted in the literature to address these problems by using deep-learning-based solutions.


Fig. 11.8 The output of a method for digital integrity verification can be either a (binary or continuous) global score related to the probability that the image has been tampered with, or a localization map, where the score is computed at the pixel level

11.3.1 Patch-Based Localization

The images to analyze are usually much larger than the input layer of neural networks. To avoid resizing the input image, which may destroy precious evidence, several approaches propose to work on small patches, with size typically spanning from 64×64 to 256×256 pixels, and analyze the whole image patch by patch. Localization can be performed in different ways based on the learning strategy, as described in the following.

Two-class Learning

Supervised learning relies on the availability of a large dataset of both real and manipulated patches. At test time, each patch is classified and localization is carried out by a sliding-window analysis of the whole image. Most of the early methods are designed in this way. For example, to detect traces of double compression, in Park et al. (2018) a dataset of single-compressed and double-compressed JPEG blocks was generated using a very large number of quantization tables. Clearly, the more diverse and varied the dataset, the more effective the training will be, with a strong impact on the final performance. Beyond double compression, one may be interested in detecting traces of other types of processing, like blurring or resampling. In this case, one only has to generate a large number of patches of the corresponding kind. The main issue is the variety of parameters involved, which makes it difficult to cover the whole space. Patch-based learning is also used to detect object insertion in an image, based on the presence of boundary artifacts. In this case, manipulated patches include pixels coming from two different sources (the host image and the alien object), with a simple edge separating the two regions (Zhang et al. 2018; Liu et al. 2018).
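The sliding-window analysis itself can be sketched as follows. This is a minimal illustration that assumes a hypothetical, already trained PyTorch patch classifier patch_model returning a single manipulation logit per patch; patch size and stride are arbitrary choices.

import numpy as np
import torch

def sliding_window_heatmap(image, patch_model, patch=64, stride=32):
    # Score every patch with a two-class CNN and accumulate the scores
    # into a heat map of the same size as the image (HxWx3, uint8).
    h, w = image.shape[:2]
    heat = np.zeros((h, w), dtype=np.float32)
    count = np.zeros((h, w), dtype=np.float32)
    patch_model.eval()
    with torch.no_grad():
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                crop = image[y:y + patch, x:x + patch]
                t = torch.from_numpy(crop).float().permute(2, 0, 1).unsqueeze(0) / 255.0
                prob = torch.sigmoid(patch_model(t)).item()   # P(manipulated)
                heat[y:y + patch, x:x + patch] += prob
                count[y:y + patch, x:x + patch] += 1.0
    return heat / np.maximum(count, 1.0)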

Table 11.1 An overview of the main approaches proposed in the literature, grouped by strategy (localization or detection; patch-based or image-based; two-class or one-class learning) and by the type of artifact they aim to detect

(1) Localization, patch-based, two-class learning:
JPEG: Wang and Zhang (2016); Barni et al. (2017); Amerini et al. (2017); Park et al. (2018)
Splicing: Liu et al. (2018); Shi et al. (2018); Zhang et al. (2018); Phan-Xuan et al. (2019)

(2) Localization, patch-based, one-class learning:
Camera-based: Cozzolino and Verdoliva (2016); Bondi et al. (2017); Self-consistency, Huh et al. (2018); RFM, Yao et al. (2020); Forensicgraph, Mayer and Stamm (2019); Noiseprint, Cozzolino and Verdoliva (2020)
Demosaicing: Adaptive-CFA, Bammey et al. (2020)
JPEG: Niu et al. (2021)

(3) Localization, image-based, segmentation-like:
Copy-move: Wu et al. (2018); BusterNet, Wu et al. (2018); CMSD/STRD, Chen et al. (2020); DOA-GAN, Islam et al. (2020)
Splicing: MFCN, Salloum et al. (2018); RRU-Net, Bi et al. (2019); D-Unet, Bi et al. (2020); H-LSTM, Bappy et al. (2019); LSTM-Skip, Mazaheri et al. (2019); MAGritte, Kniaz et al. (2019); DU-DC-EC Net, Zhang and Ni (2020); MVSS-Net, Chen et al. (2021); MS-CRF-Att, Rao et al. (2021)
Inpainting: Zhu et al. (2018); Li and Huang (2019)
Editing: ManTra-Net, Wu et al. (2019); SPAN, Hu et al. (2020); GSCNet, Shi et al. (2020); PSCC-Net, Liu et al. (2021)
JPEG: CAT-Net, Kwon et al. (2021)

(4) Localization, image-based, object detection-like:
Editing: RGB-N, Zhou et al. (2018); Const. R-CNN, Yang et al. (2020)

(5) Detection:
Splicing: SRM-CNN, Rao and Ni (2016)
Editing: E2E, Marra et al. (2019)


One-class Learning

Several methods in forensics look at the problem from an anomaly detection perspective, developing a model of the pristine data and looking for anomalies with respect to this model, which can suggest the presence of a manipulation. The idea is to extract, during training, a specific type of forensic feature whose distribution depends strongly on the source image or class of images. Then, at test time, these features are extracted from all patches of the image (with a suitable stride) and clustered based on their statistics. If two distinct clusters emerge, the smaller one is regarded as anomalous, likely due to a local manipulation, and the corresponding patches provide a map of the tampered area. The most successful features used for this task are low-level ones, like those related to the CFA pattern (Bammey et al. 2020), compression traces (Niu et al. 2021), or all the in-camera processing operations (Cozzolino and Verdoliva 2016; Bondi et al. 2017; Cozzolino and Verdoliva 2020). For example, some features allow one to classify different camera models with high reliability. In the presence of a splicing, the features observed in the pristine and the forged regions will be markedly different, indicating that different image parts were acquired by different camera models (Bondi et al. 2017; Yao et al. 2020). Autoencoders can be very useful to implement a one-class approach. In fact, a network trained on a large number of patches extracted from untampered images eventually learns to reproduce pristine images with good accuracy. On the contrary, anomalies will give rise to large reconstruction errors, pointing to a possible manipulation. Noise inconsistencies are used in Cozzolino and Verdoliva (2016) to localize splicings without requiring a training step. At test time, handcrafted features are extracted from the patches of the image and an iterative algorithm based on feature labeling is used to detect the anomalous region. The same approach has been extended to videos in D’Avino et al. (2017), using an LSTM recurrent network to account for temporal dependencies. In Bammey et al. (2020), instead, the idea is to learn locally the possible configurations of the CFA mosaic. Whenever such a pattern is not found in the sliding-window analysis at test time, an anomaly is identified, and hence a possible manipulation. A similar approach can be followed with reference to the JPEG quality factor: in this case, if an inconsistency is found in the image, it means that the anomalous region was compressed with a different compression level than the rest of the image, strong evidence of local modifications (Niu et al. 2021).
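The clustering step can be sketched as follows, assuming that per-patch forensic features (for example, camera-model embeddings) have already been extracted and arranged on a regular grid. The two-cluster K-means and the smaller-cluster heuristic are a simplified stand-in for the iterative labeling schemes used in the cited works.

import numpy as np
from sklearn.cluster import KMeans

def anomaly_map_from_features(features, grid_shape):
    # features: (N, D) array, one forensic feature vector per patch,
    # laid out on a grid of shape grid_shape = (rows, cols).
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    # The smaller cluster is assumed to be anomalous (possible manipulation).
    anomalous = int(np.bincount(labels).argmin())
    return (labels == anomalous).reshape(grid_shape)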

11.3.2 Image-Based Localization

The majority of approaches that carry out localization rely on fully convolutional networks and leverage architectures originally developed for image segmentation, regarding the problem itself as a special form of segmentation between pristine and manipulated regions. A few methods, instead, take inspiration from object detection


and make decisions on regions extracted from a preliminary region proposal phase. In the following, we review the main ideas.

Segmentation-like Approach

These methods take the whole image as input and produce a localization map of the same size. Therefore, they avoid the problem of post-processing a large number of patch-level results in some sensible way. On the downside, they need a large collection of fake and real images for training, as well as the associated pixel-level ground truth, two rather challenging requirements. Actually, many methods are still trained on patches and still work on patches at test time, but all the pieces of information are processed jointly in the network to provide an image-level output, just as happens with denoising networks. For these methods, training can be performed at the patch level by considering many different types of local manipulations (Wu et al. 2019; Liu et al. 2021) or by including boundaries between the pristine and forged areas (Salloum et al. 2018; Rao et al. 2021; Zhang and Ni 2020). Many other solutions, instead, use the information from the whole image by resizing it to a fixed dimension (Wu et al. 2018; Bappy et al. 2019; Mazaheri et al. 2019; Bi et al. 2019, 2020; Kniaz et al. 2019; Shi et al. 2020; Chen et al. 2020; Hu et al. 2020; Islam et al. 2020; Chen et al. 2021). This approach is conceptually simple but causes a significant loss of information: the image is reduced to the size of the network input, so the fine-grained pixel-level dependencies typical of in-camera and out-camera artifacts are destroyed; moreover, local manipulations may become extremely small after resizing and be easily neglected in the overall loss. To increase the number of training samples and generalize across a large variety of possible manipulations, an adversarial strategy is also adopted in Zhou et al. (2019). The loss functions used in these works are also typically inspired by semantic segmentation, such as pixel-wise cross-entropy (Bi et al. 2019, 2020; Hu et al. 2020), weighted cross-entropy (Bappy et al. 2019; Rao et al. 2021; Salloum et al. 2018; Zhang and Ni 2020), and the Dice loss (Chen et al. 2021). Since the manipulated region is often much smaller than the whole image, Li and Huang (2019) proposes to adopt the focal loss, which adds a modulating factor to the cross-entropy term and can thus address the class imbalance problem.
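For reference, a minimal PyTorch version of the pixel-wise focal loss mentioned above is sketched below; the hyperparameter values are common defaults, not necessarily those used in Li and Huang (2019).

import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0, alpha=0.25):
    # logits, target: (B, 1, H, W); target is the binary forgery mask.
    # The (1 - p_t)^gamma factor down-weights easy (mostly pristine) pixels.
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction='none')
    p = torch.sigmoid(logits)
    p_t = p * target + (1 - p) * (1 - target)
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()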

Object Detection-Like Approach

A different strategy is to identify only the bounding box that contains the manipulation, adopting approaches developed for the object detection task. This path is first followed in Zhou et al. (2018), relying on the Region Proposal Network (RPN) (Ren et al. 2017), very popular in object detection. The RPN is adapted to propose regions with potential manipulations, using features extracted both from the RGB channels and from the noise residual as input. The approach proposed in Yang et al. (2020), instead,


adopts Mask R-CNN (He et al. 2020) and includes an attention region proposal network to identify the manipulated region.

11.3.3 Detection

Despite the obvious importance of detection, which should take place before localization is even attempted, this specific problem has received limited attention in the literature. A large number of localization methods have been used to perform detection through some suitable post-processing of the localization heat map aimed at extracting a global score. In some cases, the average or the maximum value of the heat map is used as the decision statistic (Huh et al. 2018; Wu et al. 2019; Rao et al. 2021). However, localization methods often perform clustering or segmentation in the target image, and therefore they tend to find a forged area even in pristine images, generating a large number of false alarms. Other methods propose ad hoc information fusion strategies. Specifically, in Rao and Ni (2016); Boroumand and Fridrich (2018), training is carried out at the patch level; then, statistics-based information is aggregated over the whole image to form a feature vector used as the input of a classifier, either a support vector machine or a neural network. Unfortunately, a patch-level analysis does not allow taking into account both local information (through textural analyses) and global information over the whole image (through contextual analyses) at the same time. That is, by focusing on a local analysis, the big picture may be lost. Based on these considerations, a different direction is followed in Marra et al. (2019). The network takes as input the whole image without any resizing, so as not to lose precious traces of manipulation hidden in its fine-grained structure. All patches are analyzed jointly, through a suitable aggregation, to make the final image-level decision on the presence of a manipulation. More importantly, in the training phase as well, only image-level labels are used to update the network weights, and only image-level information back-propagates down to the early patch-level feature extraction layers. To this end, a gradient checkpointing technique is used, which keeps the memory requirements manageable at the cost of a limited increase in computation. This approach allows for the joint optimization of patch-level feature extraction and image-level decision. The first layers extract features that are instrumental to carry out a global analysis and the last layers exploit this information to highlight local anomalies that


Fig. 11.9 Example of localization analysis based on the detection approach proposed in Marra et al. (2019). From left to right: test image, corresponding activation map, and ROI-based localization result with the ground-truth box (green) and the automatically detected ones (magenta). Detection scores are shown in the corner of each box

would not appear at the patch level. In Marra et al. (2019), this end-to-end approach is shown to also allow a good interpretation of results by means of activation maps and a good localization of forgeries (see Fig. 11.9).
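For comparison with the post-processing strategies mentioned at the beginning of this section, the sketch below collapses a localization heat map into a global detection score using a few common heuristics; the percentile-based variant is just one reasonable option, not a method taken from the cited papers.

import numpy as np

def detection_score(heatmap, strategy='top_percentile', q=99.0):
    # Collapse a pixel-level heat map into a single image-level score.
    if strategy == 'mean':
        return float(heatmap.mean())
    if strategy == 'max':
        return float(heatmap.max())
    # Mean of the pixels above the q-th percentile: less sensitive to
    # isolated spurious peaks than the plain maximum.
    thr = np.percentile(heatmap, q)
    return float(heatmap[heatmap >= thr].mean())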

11.4 Architectural Solutions

In this section, we describe the most popular and interesting deep-learning-based architectural solutions proposed in digital forensics.

11.4.1 Constrained Networks

To exploit low-level features related to camera-based artifacts, one needs to suppress the scene content. This can be done by pre-processing the input image, through a preliminary denoising step, or by means of suitable high-pass filters, like the popular spatial rich models (SRM) initially proposed in the context of steganalysis (Fridrich and Kodovsky 2012) and applied successfully in image forensics (Cozzolino et al. 2014). Several CNN architectures try to include this pre-processing by means of a constrained first layer which performs the desired high-pass filtering. For example, in Rao and Ni (2016), a set of fixed high-pass filters inspired by the spatial rich model is used to compute residuals of the input patches, while in Bayar and Stamm (2016) the high-pass filters of the first convolutional layer are learnt under the following constraints:

\begin{cases} w_k(0, 0) = -1 \\ \sum_{i,j} w_k(i, j) = 0 \end{cases} \qquad (11.1)


where w_k(i, j) is the weight of the k-th filter at position (i, j), and (0, 0) indicates the central position. This solution is also pursued in other works, such as Wu et al. (2019); Yang et al. (2020); Chen et al. (2021), while many others use fixed filtering (Li and Huang 2019; Barni et al. 2017; Zhou et al. 2018) or adopt a combination of fixed and trainable high-pass filters (Wu et al. 2019; Zhou et al. 2018; Bi et al. 2020; Zhang and Ni 2020). A slightly different perspective is considered in Cozzolino et al. (2017), where a CNN architecture is built to replicate exactly the co-occurrence-based methods of Fridrich and Kodovsky (2012). All the phases of this procedure, i.e., extraction of noise residuals, scalar quantization, computation of co-occurrences, and histogram construction, are implemented using standard convolutional or pooling layers. Once the equivalence is established, the network is modified to enable fine-tuning and further improve performance. Since the resulting network has a lightweight structure, fine-tuning can be carried out using a small training set, limiting computation time.
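A sketch of such a constrained layer is given below: after every optimizer step, the filters of the first layer are projected back onto the constraint set of Eq. (11.1). This is a simplified illustration of the idea in Bayar and Stamm (2016), not their exact training procedure.

import torch
import torch.nn as nn

class ConstrainedConv2d(nn.Conv2d):
    # High-pass "prediction error" filters: the central weight is fixed to -1
    # and the remaining weights of each filter sum to +1, so that the whole
    # filter sums to zero, as in Eq. (11.1).
    def project(self):
        with torch.no_grad():
            w = self.weight                       # (out_ch, in_ch, k, k)
            k = w.shape[-1] // 2
            w[:, :, k, k] = 0.0
            w /= w.sum(dim=(2, 3), keepdim=True) + 1e-12
            w[:, :, k, k] = -1.0

layer = ConstrainedConv2d(1, 3, kernel_size=5, padding=2, bias=False)
layer.project()                                   # re-apply after every optimizer step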

11.4.2 Two-Branch Networks

In order to exploit different types of clues at the same time, it is possible to adopt a CNN architecture that accepts multiple inputs (Zhou et al. 2018; Shi et al. 2018; Bi et al. 2020; Kwon et al. 2021). For example, a two-branch approach is proposed in Zhou et al. (2018) in order to take into account both low-level features extracted from noise residuals and high-level features extracted from RGB images. In Amerini et al. (2017); Kwon et al. (2021), instead, information extracted from the spatial and DCT domains is analyzed, so as to explicitly take compression artifacts into account as well. Two-branch solutions can also be used to produce multiple outputs, as done in Salloum et al. (2018), where the fully convolutional network has two output branches: one provides the localization map of the splicing while the other provides a map containing only the edges of the forged region. In the context of copy-move detection, the two-branch structure is used to address separately the problems of detecting duplicates and disambiguating between the original object and its clones (Wu et al. 2018; Barni et al. 2019).
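A minimal two-branch sketch in the spirit of Zhou et al. (2018) is given below: one branch sees the RGB patch, the other a high-pass noise residual, and their features are fused for the final decision. The backbone, the fixed Laplacian-like filter, and the fusion head are placeholders, not the architecture actually used in that work.

import torch
import torch.nn as nn

def small_cnn(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())

class TwoBranchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_branch = small_cnn(3)       # semantic / high-level cues
        self.res_branch = small_cnn(3)       # noise-residual / low-level cues
        self.head = nn.Linear(128, 1)        # fused decision (logit)
        # simple fixed high-pass filter applied per channel
        hp = torch.tensor([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
        self.register_buffer('hp', hp.view(1, 1, 3, 3).repeat(3, 1, 1, 1))

    def forward(self, x):
        residual = nn.functional.conv2d(x, self.hp, padding=1, groups=3)
        feat = torch.cat([self.rgb_branch(x), self.res_branch(residual)], dim=1)
        return self.head(feat)

net = TwoBranchNet()
logit = net(torch.randn(2, 3, 64, 64))       # toy input batch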

11.4.3 Fully Convolutional Networks

Fully convolutional networks (FCNs) can be extremely useful since they preserve spatial dependencies and generate an output of the same size as the input image. Therefore, the network can be trained to output a heat map, which enables tampering localization. Various architectures have proven particularly useful for forensic applications. Some methods (Bi et al. 2019; Shi et al. 2020; Zhang and Ni 2020) rely on a plain U-Net architecture (Ronneberger et al. 2015), one of the most successful


networks for semantic segmentation, initially proposed for biomedical applications. A more innovative variation of FCNs is proposed in Bappy et al. (2019); Mazaheri et al. (2019), where the architecture includes a long short-term memory (LSTM) module. The blocks of the image are processed sequentially by the LSTM. In this way, the network can model the spatial relationships between neighboring blocks, which facilitates detecting manipulated blocks as those that break the natural statistics of authentic ones. Other papers include spatial pyramid mechanisms in their architectures, in order to capture relationships at different scales of the image (Bi et al. 2020; Wu et al. 2019; Hu et al. 2020; Liu et al. 2021; Chen et al. 2021). Both LSTM modules and spatial pyramids aim at overcoming the intrinsic limits of networks based only on convolutions, where, due to the limited overall receptive field, the decision on a pixel depends only on the information in a small region around it. With the same aim, it is also possible to use attention mechanisms (Chen et al. 2008, 2021; Liu et al. 2021; Rao et al. 2021), as in Shi et al. (2020), where channel attention relies on the Gram matrix to measure the correlation between any two feature maps.

11.4.4 Siamese Networks

Siamese networks can be extremely useful for several forensic purposes. In fact, by leveraging the Siamese configuration, with two identical networks working in parallel, one can focus on the similarities and differences among patches, making up for the absence of ground-truth data. This approach is applied in Mayer and Stamm (2018), where a constrained network is used to extract camera-model features, while another network is trained to learn the similarity between pairs of such features. This strategy is further developed in Mayer and Stamm (2019), where a graph-based representation is introduced to better identify the forensic relationships among all patches within an image. A Siamese network is also used in Huh et al. (2018) to establish whether image patches were captured with different imaging pipelines. The proposed approach is self-supervised and localizes image splicings by predicting the consistency of EXIF attributes between pairs of patches, in order to establish whether they came from a single coherent image. Once trained on pristine images, each characterized by its EXIF header, the network can be used on any new image without further supervision. As already seen in Sect. 11.2.1, a Siamese approach can also help to extract a camera-model fingerprint in which artifacts related to the camera model are emphasized (Cozzolino and Verdoliva 2020, 2018; Cozzolino et al. 2019).


11.5 Datasets


For two-class learning-based approaches, having good data for training is of paramount importance. By “good”, we mean abundant (for data-hungry modern CNNs), representative (of the many possible manipulations encountered in the wild), and well curated (e.g., balanced, free from biases). Moreover, to assess the performance of new proposals, it is important to compare results on multiple datasets with different features. The research community has made considerable efforts through the years to release a number of datasets with a wide array of image manipulations. However, not all of them possess the right properties to support, by themselves, the development of learning-based methods. In fact, an all too popular approach is to split a single dataset into training, validation, and test sets, carrying out training and experiments on this single source, a practice that may easily induce some form of polarization or overfitting if the dataset is not built with great care. In Fig. 11.10, we show the most widespread datasets proposed since 2011. During this short time span, the size of the datasets has rapidly increased, by roughly two orders of magnitude, together with the variety of manipulations (see Table 11.2), although the capacity to fool an observer has not grown at a similar pace. In this section, the most widespread datasets are described and their features are briefly discussed. The Columbia dataset (Hsu and Chang 2006) is one of the first datasets proposed in the literature. It has been used extensively; however, it includes only unrealistic forgeries without any semantic meaning. A more realistic dataset with splicings is DSO-1 (Carvalho et al. 2013), which in turn has the limitation that all images have the


Fig. 11.10 An overview of the most used datasets in the current literature for image manipulation detection and localization. We can observe that more recent ones are much larger, while the level of realism has not really improved over time. Note that the size of the circles corresponds to the number of samples in the dataset


Table 11.2 List of datasets including generic image manipulations. Some of them are targeted at specific manipulations, while others are more varied and contain different types of forgeries

Dataset            Reference                    Manipulations           # pristine  # forged
Columbia gray      Ng and Chang (2004)          Splicing (unrealistic)  933         912
Columbia color     Hsu and Chang (2006)         Splicing (unrealistic)  182         180
CASIA v1           Dong et al. (2013)           Splicing, copy-move     800         921
CASIA v2           Dong et al. (2013)           Splicing, copy-move     7,200       5,123
MICC F220          Amerini et al. (2011)        Copy-move               110         110
MICC F2000         Amerini et al. (2011)        Copy-move               1,300       700
VIPP               Bianchi and Piva (2012)      Splicing                68          69
FAU                Christlein et al. (2012)     Copy-move               48          48
DSO-1              Carvalho et al. (2013)       Splicing                100         100
CoMoFoD            Tralic et al. (2013)         Copy-move               260         260
Wild Web           Zampoglou et al. (2015)      Real-world cases        90          9,657
GRIP               Cozzolino et al. (2015)      Copy-move               80          80
RTD (Korus)        Korus and Huang (2016)       Various                 220         220
COVERAGE           Wen et al. (2016)            Copy-move               100         100
NC2016             Guan et al. (2019)           Various                 560         564
NC2017             Guan et al. (2019)           Various                 2,667       1,410
FaceSwap           Zhou et al. (2017)           Face swapping           1,758       1,927
MFC2018            Guan et al. (2019)           Various                 14,156      3,265
PS-Battles         Heller et al. (2018)         Various                 11,142      102,028
MFC2019            MFC (2019)                   Various                 10,279      5,750
DEFACTO            Mahfoudi et al. (2019)       Various                 –           229,000
FantasticReality   Kniaz et al. (2019)          Splicing                16,592      19,423
SMIFD-500          Rahman et al. (2019)         Various                 250         250
IMD2020            Novozámský et al. (2020)     Various                 414         2,010
same resolution, are stored in uncompressed format, and come with no information on how many cameras were used. A very large dataset of face swapping has been introduced in Zhou et al. (2017); it contains around 4,000 real and manipulated images, with fakes generated by two different algorithms. Several datasets have been designed for copy-move forgery detection (Christlein et al. 2012; Amerini et al. 2011; Tralic et al. 2013; Cozzolino et al. 2015; Wen et al. 2016), one of the most studied forms of manipulation. Some of them modify the copied object in multiple ways, including rotation, resizing, and changes of illumination, to stress the capabilities of copy-move methods.


Many other datasets include various types of manipulations so as to present a more realistic scenario. The CASIA dataset (Dong et al. 2013) includes both splicings and copy-moves, and inserted objects are post-processed to better fit the scene. Nonetheless, it exhibits a strong polarization, as highlighted in Cattaneo and Roscigno (2014): authentic images can be easily separated from tampered ones since they were compressed with different quality factors. Forgeries of various nature are also present in the Realistic Tampering Dataset proposed in Korus and Huang (2016), which comprises only uncompressed, but very realistic, manipulations. The Wild Web Dataset is instead a collection of cases from the Internet (Zampoglou et al. 2017). The U.S. National Institute of Standards and Technology (NIST) has released several large datasets (Guan et al. 2019) aimed at testing forensic algorithms in increasingly realistic and challenging conditions. The first one, NC2016, is quite limited and presents some redundancies, such as the same images repeated multiple times in different conditions. While this can be of interest to study the behavior of some algorithms, it prevents the dataset from being split into training, validation, and test sets without overfitting. Much larger and more challenging datasets have been proposed by NIST in subsequent years: NC2017, MFC2018, and MFC2019. These datasets are very large and present very different types of manipulations, resolutions, formats, compression levels, and acquisition devices. Some very large datasets have been released recently. DEFACTO (Mahfoudi et al. 2019) comprises over 200,000 images with a wide variety of realistic manipulations, both conventional, like splicings, copy-moves, and removals, and more advanced, like face morphing. The PS-Battles Dataset, instead, provides for each original image a varying number of manipulated versions (Heller et al. 2018), for a grand total of 102,028 images. However, it does not provide ground truths, and the originals themselves are not always pristine. SMIFD-500 (Social Media Image Forgery Detection Database) (Rahman et al. 2019) is a set of 500 images collected from popular social media, while FantasticReality (Kniaz et al. 2019) is a dataset of splicing manipulations split into two parts: “Rough” and “Realistic”. The “Rough” part contains 8k splicings with obvious artifacts, while the “Realistic” part provides 8k splicings that were retouched manually to obtain a realistic output. A dataset of manipulated images has also been built in Novozámský et al. (2020). It features 35,000 images collected from 2,322 different camera models and manipulated in various ways, e.g., copy-paste, splicing, retouching, and inpainting. Moreover, it also includes a set of 2,000 realistic fake images downloaded from the Internet along with the corresponding real images. Since some methods work on anomaly detection and are trained only on pristine data, well-curated datasets comprising a large number of real images are also of interest. The Dresden image database (Gloe and Böhme 2010) contains over 14,000 JPEG images from 73 digital cameras of 25 different models. To allow studying the peculiarities of different models and devices, the same scene is captured by a large number of cameras. Raw images can be found both in Dresden and in the RAISE dataset (Dang-Nguyen et al. 2015), composed of 8,156 images taken by 3 different camera models.
A dataset comprising both SDR (standard dynamic range) and HDR (high dynamic range) images has been presented in Shaya et al. (2018).


A total of 5,415 images were captured by 23 mobile devices in different scenarios under controlled acquisition conditions. The VISION dataset (Shullani et al. 2017), instead, includes 34,427 images and 1,914 videos from 35 devices of 11 major brands. They appear both in their original format and after uploading/downloading on various platforms (Facebook, YouTube, WhatsApp), so as to allow studies on data retrieved from social networks. Another mixed dataset, SOCRATES, is proposed in Galdi et al. (2017). It contains 6,200 images and 680 videos captured using 67 smartphones of 14 brands and 42 models.

11.6 Major Challenges

In this section, we highlight some major challenges of current deep-learning-based approaches.

• Localization versus Detection. To limit computational complexity and memory usage, deep networks work on relatively small input patches. Therefore, to process large images, one must either resize them or analyze them patch-wise. The first solution does not fit forensic needs, as it destroys precious statistical evidence related to local textures. On the other hand, a patch-wise analysis may miss image-level phenomena. A feature extractor trained on patches can only learn good features for local decisions, which are not necessarily good for making image-level decisions. Most deep-learning-based methods sweep the detection problem under the rug and focus on localization. Detection is then addressed as an afterthought through some form of processing of the localization map. Not surprisingly, the resulting detection performance is often very poor. Table 11.3 compares the localization and detection performance of some patch-wise methods. We present results in terms of Area Under the Curve (AUC) for detection and the Matthews Correlation Coefficient (MCC) for localization. MCC is robust to unbalanced classes (the forgery is often small compared to the whole image) and its absolute value belongs to the range [0, 1]. It represents the cross-correlation coefficient between the decision map and the ground truth, computed as

MCC = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}

where
– TP (true positives): number of positive pixels declared positive;
– TN (true negatives): number of negative pixels declared negative;
– FP (false positives): number of negative pixels declared positive;
– FN (false negatives): number of positive pixels declared negative.

We can note that, while the localization results are relatively good, detection results are much worse and often no better than coin flipping. A more intense effort on the detection problem seems necessary.


Table 11.3 Localization (MCC) and detection (AUC) performance compared. With a few exceptions, the detection performance is very poor (AUC = 0.5 amounts to coin flipping)

MCC/AUC                                     DSO-1        VIPP         RTD          PS-Battles
                                            (100/100)    (62/62)      (220/220)    (80/80)
RRU-Net Bi et al. (2019)                    0.250/0.522  0.288/0.538  0.240/0.524  0.303/0.583
Const. R-CNN Yang et al. (2020)             0.355/0.575  0.382/0.480  0.370/0.522  0.423/0.517
EXIF-SC Huh et al. (2018)                   0.529/0.763  0.402/0.633  0.278/0.541  0.345/0.567
SPAN Hu et al. (2020)                       0.318/0.697  0.336/0.556  0.260/0.564  0.320/0.583
Adaptive-CFA Bammey et al. (2020)           0.121/0.380  0.130/0.498  0.127/0.472  0.094/0.526
Noiseprint Cozzolino and Verdoliva (2020)   0.758/0.865  0.532/0.568  0.345/0.531  0.476/0.495

• Robustness. Forensic analyses typically rely on the weak traces associated with the digital history of the target image or video. Such traces are further weakened, and tend to disappear altogether, in the presence of other quality-impairing processing steps. As an example, consider the image of Fig. 11.11, with a large splicing localized with high accuracy. After compression or resizing, the localization map becomes fuzzier and the decision unreliable. Likewise, in the copy-move images of Fig. 11.12, the attribution of the two copies becomes incorrect after heavy resizing. On the other hand, most images and videos of interest are found on social networks, where they are routinely heavily resized and recompressed for storage efficiency, with non-deterministic parameters. Therefore, robustness is a central problem that all multimedia forensic methods should confront from the beginning.

• Generalization. Carrying out meaningful training is a hard task in image forensics; in fact, it is very easy to introduce some sort of bias. For example, one may train and test the network on images acquired by only a limited number of devices, thereby identifying the provenance source instead of the artifact trace, or one may use different JPEG compression pipelines for real and fake images, giving rise to different digital histories. A network will no doubt learn all these features if they help discriminate between pristine and tampered images, and may neglect more general but weaker traces of manipulation. This will lead to very good performance on a training-aligned test set, and very bad performance on independent tests. Therefore, it is important to avoid biases to the extent possible, and even more important to test the trained networks on multiple and independent test sets.

• Semantic Detection. Developing further on detection, a major problem is telling apart innocent manipulations from malicious ones. An image may be manipulated for many reasons, very frequently only to improve its appearance or to better fit it into the context of interest. These images should not be flagged, as they are not really of interest and their number would easily overwhelm any subsequent analysis system. A good forensic algorithm should be able to single out only malicious attacks, based on the type of manipulation, which often concerns only a small part of the image, and especially based on the context, including all other related media and textual information.



Fig. 11.11 Results of splicing localization in the presence of different levels of compression and resizing. On the original image, the MCC is 0.998; it decreases to 0.985, 0.679, 0.412, and 0.419 for JPEG quality factors of 90, 80, 70, and 60, respectively, and to 0.994, 0.991, 0.948, and 0.517 for resizing factors of 0.90, 0.75, 0.50, and 0.40, respectively


Fig. 11.12 Results of copy-move detection and attribution in the presence of resizing

11.7 Conclusions and Future Directions

Manipulating images and videos is becoming simpler and simpler, and fake media are becoming widespread, with the well-known associated risks. Therefore, the development of reliable multimedia forensic tools is more urgent than ever. In recent years, the main focus of research has been on deepfakes: images and videos (often of persons) fully generated by means of AI tools. Yet, detecting computer-generated material seems to be relatively simple, for the time being. These images and videos lack some characteristic features of real media assets (think of device and model fingerprints) and exhibit instead traces of their own generation process (e.g., GAN fingerprints). Moreover, they can be generated in large numbers, providing a virtually unlimited training set for deep learning methods to learn from. Cheapfakes, instead, though crafted with conventional methods, more easily keep evading the scrutiny of forensic tools. For them too, detectors based on deep learning can be expected to provide improved performance. However, creating large unbiased datasets with manipulated images representative of the wide variety of attacks is not simple. In addition, the manipulations themselves rely on real (as opposed to generated) data, which is hardly detected as anomalous. Finally, manipulated images are often resized and recompressed, further weakening the feeble forensic


Fig. 11.13 Examples of conventional manipulations carried out using deep learning methods: splicing (Tan et al. 2018), copy-move (Thies et al. 2019), inpainting (Zhu et al. 2021)

traces a detector can rely on. All this said, the current state of deep-learning-based methods for cheapfake detection can be considered promising, and further improvements will certainly come. However, new challenges loom on the horizon. The acquisition and processing technology evolves rapidly, deleting or weakening well-known forensic traces; just think of the effects of video stabilization and new shooting modes on PRNU-based methods (Mandelli et al. 2020; Baracchi et al. 2020). Of course, new traces will appear, but research may struggle to keep up with this fast pace. Likewise, manipulation methods also evolve, see Fig. 11.13. For example, the inpainting process has been significantly improved thanks to deep learning, with recent methods filling image regions with newly generated, semantically consistent content. In addition, methods have already been proposed to conceal the traces left by various types of attacks. Along this line, another peculiar problem of deep-learning-based methods is their vulnerability to adversarial attacks: the class (pristine/fake) assigned to an asset can be easily changed by means of a number of simple methods, without significantly impairing its visual quality. As a final remark, we underline the need for interpretable tools. Deep learning methods, despite their excellent performance, act as black boxes, providing little or no hint on why certain decisions are made. Of course, such decisions may be easily questioned and may hardly stand in a court of justice. Improving explainability, though still largely unexplored, is therefore a major topic for current and future work in this field.

Acknowledgements This material is based on research sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under agreement number FA8750-20-2-1004. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA and AFRL or the U.S. Government. This work is also supported by the PREMIER project, funded by


the Italian Ministry of Education, University, and Research within the PRIN 2017 program and by a Google gift.

References Al Shaya O, Yang P, Ni R, Zhao Y, Piva A (2018) A new dataset for source identification of high dynamic range images. Sensors 18 Amerini I, Ballan L, Caldelli R, Del Bimbo A, Serra G (2011) A SIFT-Based forensic method for copy-move attack detection and transformation recovery. IEEE Trans Inf Forensics Secur 6(3):1099–1110 Amerini I, Uricchio T, Ballan L, Caldelli R (2017) Localization of JPEG double compression through multi-domain convolutional neural networks. In: IEEE computer vision and pattern recognition (CVPR) workshops Bammey Q, von Gioi R, Morel J-M (2020) An adaptive neural network for unsupervised mosaic consistency analysis in image forensics. In: IEEE conference on computer vision and pattern recognition (CVPR), pp 14182–14192 Bappy JH, Simons C, Nataraj L, Manjunath BS, Roy-Chowdhury AK (2019) Hybrid LSTM and encoder-decoder architecture for detection of image forgeries. IEEE Trans Image Process 28(7):3286–3300 Baracchi D, Iuliani M, Nencini A, Piva A (2020) Facing image source attribution on iPhone X. In: International workshop on digital forensics and watermarking (IWDW) Barni M, Bondi L, Bonettini N, Bestagini P, Costanzo A, Maggini M, Tondi B, Tubaro S (2017) Aligned and non-aligned double JPEG detection using convolutional neural networks. J Vis Commun Image Represent 49:153–163 Barni M, Phan Q-T, Tondi B (2021) Copy move source-target disambiguation through multi-branch CNNS. IEEE Trans Inf Forensics Secur 16:1825–1840 Bayar B, Stamm MC (2016) A deep learning approach to universal image manipulation detection using a new convolutional layer. In: ACM workshop on information hiding and multimedia security Bianchi T, Piva A (2012) Image forgery localization via block-grained analysis of JPEG artifacts. IEEE Trans. Inf. Forensics Secur 7(3):1003–1017 Bi X, Wei Y, Xiao B, Li W (2019) RRU-Net: the ringed residual U-Net for image splicing forgery detection. In: IEEE computer vision and pattern recognition (CVPR) workshops Bi X, Liu Y, Xiao B, Li W, Pun C-M, Wang G, Gao X (2020) D-Unet: a dual-encoder u-net for image splicing forgery detection and localization. arXiv:2012.01821 Bondi L, Lameri S, Güera D, Bestagini P, Delp EJ, Tubaro S (2017) Tampering detection and localization through clustering of camera-based CNN features. In: IEEE computer vision and pattern recognition (CVPR) workshops Boroumand M, Fridrich J (2018) Deep learning for detecting processing history of images. In: IS&T electronic imaging: media watermarking, security, and forensics Cao H, Kot AC (2009) Accurate detection of demosaicing regularity for digital image forensics. IEEE Trans Inf Forensics Secur 5:899–910 Cattaneo G, Roscigno G (2014) A possible pitfall in the experimental analysis of tampering detection algorithms. In: International conference on network-based information systems, pp 279–286 Chen M, Fridrich J, Goljan M, Lukàš J (2008) Determining image origin and integrity using sensor noise. IEEE Trans Inf Forensics Secury 3(4):74–90 Chen X, Dong C, Ji J, Cao J, Li X (2021) Image manipulation detection by multi-view multi-scale supervision. arXiv:2104.06832 Chen B, Tan W, Coatrieux G, Zheng Y, Shi YQ (2020) A serial image copy-move forgery localization scheme with source/target distinguishment. IEEE Trans. Multimed


Christlein V, Riess C, Jordan J, Angelopoulou E (2012) An evaluation of popular copy-move forgery detection approaches. IEEE Trans Inf Forensics Secur 7(6):1841–1854 Cozzolino D, Verdoliva L (2020) Noiseprint: a CNN-based camera model fingerprint. IEEE Trans Inf Forensics Secur 15(1):14–27 Cozzolino D, Poggi G, Verdoliva L (2015) Efficient dense-field copy-move forgery detection. IEEE Trans Inf Forensics Secur 10(11):2284–2297 Cozzolino D, Gragnaniello D, Verdoliva L (2014) Image forgery detection through residual-based local descriptors and block-matching. In: IEEE International Conference on Image Processing (ICIP), pp 5297–5301 Cozzolino D, Poggi G, Verdoliva L (2017) Recasting residual-based local descriptors as convolutional neural networks: an application to image forgery detection. In: ACM workshop on information hiding and multimedia security, pp 1–6 Cozzolino D, Poggi G, Verdoliva L (2019) Extracting camera-based fingerprints for video forensics. In: IEEE computer vision and pattern recognition (CVPR) workshops, pp 130–137 Cozzolino D, Verdoliva L (2016) Single-image splicing localization through autoencoder-based anomaly detection. In: IEEE workshop on information forensics and security (WIFS), pp 1–6 Cozzolino D, Verdoliva L (2018) Camera-based image forgery localization using convolutional neural networks. In: European signal processing conference (EUSIPCO), Sep 2018 Dang-Nguyen DT, Pasquini C, Conotter V, Boato G (2015) RAISE: a raw images dataset for digital image forensics. In: 6th ACM multimedia systems conference, pp 219–1224 D’Avino D, Cozzolino D, Poggi G, Verdoliva L (2017) Autoencoder with recurrent neural networks for video forgery detection. In: IS&T international symposium on electronic imaging: media watermarking, security, and forensics de Carvalho T, Riess C, Angelopoulou E, Pedrini H, Rocha A (2013) Exposing digital image forgeries by illumination color classification. IEEE Trans Inf Forensics Secur 8(7):1182–1194 Dong J, Wang W, Tan T (2013) CASIA image tampering detection evaluation database. In: IEEE China summit and international conference on signal and information processing, pp 422–426 Farid H (2016) Photo forensics. The MIT Press Ferrara P, Bianchi T, De Rosa A, Piva A (2012) Image forgery localization via fine-grained analysis of CFA artifacts. IEEE Trans Inf Forensics Secur 7(5):1566–1577 Fridrich J, Kodovsky J (2012) Rich models for steganalysis of digital images. IEEE Trans Inf Forensics Secur 7:868–882 Fridrich J, Soukal D, Lukáš J (2003) Detection of copy-move forgery in digital images. In: Proceedings of the 3rd digital forensic research workshop Fu D, Shi YQ, Su W (2007) A generalized Benford’s law for JPEG coefficients and its applications in image forensics. In: Proceedings of the SPIE, Security, Steganography, and Watermarking of Multimedia Contents IX Galdi C, Hartung F, Dugelay J-L (2017) Videos versus still images: asymmetric sensor pattern noise comparison on mobile phones. Media watermarking, security and forensics. In: IS&T EI Gloe T, Böhme R (2010) The ‘Dresden Image Database’ for benchmarking digital image forensics. In: Proceedings of the 25th annual ACM symposium on applied computing, vol 2, pp 1585–1591, Mar 2010 Grgic S, Tralic D, Zupancic I, Grgic M (2013) CoMoFoD-New database for copy-move forgery detection. 
In: Proceedings of the 55th international symposium ELMAR, pp 49–54 Guan H, Kozak M, Robertson E, Lee Y, Yates AN, Delgado A, Zhou D, Kheyrkhah T, Smith J, Fiscus J (2019) MFC datasets: large-scale benchmark datasets for media forensic challenge evaluation. In: IEEE winter conference on applications of computer vision (WACV) workshops, pp 63–72 He Z, Lu W, Sun W, Huang J (2012) Digital image splicing detection based on Markov features in DCT and DWT domain. Pattern Recogn 45:4292–4299 He K, Gkioxari G, Dollár P, Girshick R (2020) Mask R-CNN. IEEE Trans Pattern Anal Mach Intell 42(2):386–397 Heller S, Rossetto L, Schuldt H (2018) The ps-battles dataset-an image collection for image manipulation detection. arXiv:1804.04866

11 Data-Driven Digital Integrity Verification

309

Hsu Y-F, Chang S-F (2006) Detecting image splicing using geometry invariants and camera characteristics consistency. In: IEEE international conference on multimedia and expo (ICME), pp 549–552 Huh M, Liu A, Owens A, Efros AA (2018) Fighting fake news: image splice detection via learned self-consistency. In: European conference on computer vision (ECCV) Hu X, Zhang Z, Jiang Z, Chaudhuri S, Yang Z, Nevatia R (2020) SPAN: spatial pyramid attention network for image manipulation localization. In: European conference on computer vision (ECCV), pp 312–328 Islam A, Long C, Basharat A, Hoogs A (2020) DOA-GAN: dual-order attentive generative adversarial network for image copy-move forgery detection and localization. In: IEEE conference on computer vision and pattern recognition (CVPR), pp 4675–4684 Kirchner M (2008) Fast and reliable resampling detection by spectral analysis of fixed linear predictor residue. In: 10th ACM workshop on multimedia and security, pp 11–20 Kniaz VV, Knyaz V, Remondino F (2019) The point where reality meets fantasy: Mixed adversarial generators for image splice detection. Adv Neural Inf Process Syst (NIPS) 32:215–226 Korus P (2017) Digital image integrity—a survey of protection and verification techniques. Digit Signal Process 71:1–26 Korus P, Huang J (2016) Evaluation of random field models in multi-modal unsupervised tampering localization. In: IEEE international workshop on information forensics and security, pp 1–6, Dec 2016 Kwon M-J, Yu I-J, Nam S-H, Lee H-K (2021) CAT-Net: compression artifact tracing network for detection and localization of image splicing. In: IEEE winter conference on applications of computer vision (WACV), pp 375–384, Jan 2021 Li H, Huang J (2019) Localization of deep inpainting using high-pass fully convolutional network. In: IEEE international conference on computer Vision (ICCV), pp 8301–8310 Liu Y, Guan Q, Zhao X, Cao Y (2018) Image forgery localization based on multi-scale convolutional neural networks. In: ACM workshop on information hiding and multimedia security Liu X, Liu Y, Chen J, Liu X (2021) PSCC-Net: progressive spatio-channel correlation network for image manipulation detection and localization. arXiv:2103.10596 Lukàš J, Fridrich J, Goljan M (2006) Detecting digital image forgeries using sensor pattern noise. In: Proceedings of the SPIE, pp 362–372 Lyu S, Farid H (2005) How realistic is photorealistic? IEEE Trans Signal Process 53(2):845–850 Mahfoudi G, Tajini B, Retraint F, Morain-Nicolier F, Dugelay JL, France B, Pic M (2019) DEFACTO: image and face manipulation dataset. In: European signal processing conference Mandelli S, Bestagini P, Verdoliva L, Tubaro S (2020) Facing device attribution problem for stabilized video sequences. IEEE Trans Inf Forensics Secur 15(1):14–27 Marra F, Gragnaniello D, Verdoliva L, Poggi G (2020) A full-image full-resolution end-to-endtrainable CNN framework for image forgery detection. IEEE Access 8:133488–133502 Mayer O, Stamm MC (2020) Exposing fake images with forensic similarity graphs. IEEE J Sel Top Signal Process 14(5):1049–1064 Mayer O, Stamm MC (2018) Learned forensic source similarity for unknown camera models. In: IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 2012– 2016, Apr 2018 Mazaheri G, Chowdhury Mithun N, Bappy JH, Roy-Chowdhury AK (2019) A skip connection architecture for localization of image manipulations. In: IEEE computer vision and pattern recognition (CVPR) workshops MFC2019. 
https://www.nist.gov/itl/iad/mig/media-forensics-challenge-2019-0 Nam S-H, Park J, Kim D, Yu I-J, Kim T-Y, Lee H-K (2019) Two-Stream network for detecting double compression of H.264 videos. In: IEEE international conference on image processing (ICCV) Ng T-T, Chang S-F (2004) A data set of authentic and spliced image blocks. Tech. Rep. TR2032004-3, Columbia University

310

D. Cozzolino et al.

Niu Y, Tondi B, Zhao Y, Ni R, Barni M (2021) Image splicing detection, localization and attribution via JPEG primary quantization matrix estimation and clustering. arXiv:2102.01439 Novozámský A, Mahdian B, Saic S (2020) IMD2020: a large-scale annotated dataset tailored for detecting manipulated images. In: IEEE winter conference on applications of computer vision (WACV) workshops Paris B, Donovan J (2019) Deepfakes and cheap fakes. Data & Society, USA Park J, Cho D, Ahn W, Lee H-K (2018) Double JPEG detection in mixed JPEG quality factors using deep convolutional neural network. In: European conference on computer vision (ECCV) Phan-Xuan H, Le-Tien T, Nguyen-Chinh T, Do-Tieu T, Nguyen-Van Q, Nguyen-Thanh T (2019) Preserving spatial information to enhance performance of image forgery classification. In: International conference on advanced technologies for communications (ATC), pp 50–55 Popescu AC, Farid H (2005) Exposing digital forgeries in color filter array interpolated images. IEEE Trans Signal Process 53(10):3948–3959 Rahman M, Tajrin J, Hasnat A, Uzzaman N, Atiqur Rahaman R (2019) SMIFD: novel social media image forgery detection database. In: International conference on computer and information technology (ICCIT), pp 1–6 Rao Y, Ni J (2016) A deep learning approach to detection of splicing and copy-move forgeries in images. In: IEEE international workshop on information forensics and security (WIFS), pp 1–6 Rao Y, Ni J, Xie H (2021) Multi-semantic CRF-based attention model for image forgery detection and localization. Signal Process 183:108051 Ren S, He K, Girshick R, Sun J (2017) Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell 39(6):1137–1149 Ronneberger O, Fischer P, Brox T (2015) U-net: Convolutional networks for biomedical image segmentation. In: Medical image computing and computer-assisted intervention (MICCAI), pp 234–241 Salloum R, Ren Y, Jay Kuo CC (2018) Image splicing localization using a multi-task fully convolutional network (MFCN). J Vis Commun Image Represent 201–209 Shi Z, Shen X, Kang H, Lv Y (2018) Image manipulation detection and localization based on the dual-domain convolutional neural networks. IEEE Access 6:76437–76453 Shi Z, Shen X, Chen H, Lyu Y (2020) Global semantic consistency network for image manipulation detection. IEEE Signal Process Lett 27:1755–1759 Shullani D, Fontani M, Iuliani M, Al Shaya O, Piva A (2017) VISION: a video and image dataset for source identification. EURASIP J Inf Secur 1–16 Tan F, Bernier C, Cohen B, Ordonez V, Barnes C (2018) Where and who? Automatic semantic-aware person composition. In: IEEE winter conference on applications of computer vision (WACV), pp 1519–1528 Thies J, Zollhöfer M, Nießner M (2019) Deferred neural rendering: image synthesis using neural textures. ACM Trans Grap (TOG) 38(4):1–12 Verdoliva L (2020) Media forensics and DeepFakes: an overview. IEEE J Sel Top Signal Process 14(5):910–932 Wang S-Y, Wang O, Owens A, Zhang R, Efros AA (2019) Detecting photoshopped faces by scripting photoshop. In: International conference on computer vision (ICCV) Wang Q, Zhang R (2016) Double JPEG compression forensics based on a convolutional neural network. EURASIP J Inf Secur 1–12 Wen B, Zhu Y, Subramanian R, Ng T-T, Shen X, Winkler S (2016) COVERAGE—a novel database for copy-move forgery detection. 
In: IEEE international conference on image processing (ICIP), pp 161–165 Wu Y, Abd-Almageed W, Natarajan P (2018) BusterNet: detecting copy-move image forgery with source/target localization. In: European conference on computer vision (ECCV), pp 170–186 Wu Y, Abd-Almageed W, Natarajan P (2018) Image copy-move forgery detection via an end-to-end deep neural network. In: IEEE winter conference on applications of computer vision (WACV)

11 Data-Driven Digital Integrity Verification

311

Wu Y, AbdAlmageed W, Natarajan P (2019) ManTra-Net: manipulation tracing network for detection and localization of image forgeries with anomalous features. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Yang C, Li H, Lin F, Jiang B, Zhao H (2020) Constrained R-CNN: a general image manipulation detection model. In: IEEE international conference on multimedia and expo (ICME), pp 1–6 Yao H, Xu M, Qiao T, Wu Y, Zheng N (2020) Image forgery detection and localization via a reliability fusion map. Sensors 20(22):6668 Yousfi Y, Fridrich J (2020) An intriguing struggle of CNNs in JPEG Steganalysis and the OneHot solution. IEEE Signal Process. Lett. 27:830–834 Zampoglou M, Papadopoulos S, Kompatsiaris Y (2017) Large-scale evaluation of splicing localization algorithms for web images. Multimed Tools Appl 76(4):4801–4834 Zampoglou M, Papadopoulos S, Kompatsiaris Y (2015) Detecting image splicing in the wild (web). In: IEEE international conference on multimedia and expo (ICME) workshops Zhang R, Ni J (2020) A dense u-net with cross-layer intersection for detection and localization of image forgery. In: IEEE conference on acoustics, speech and signal processing (ICASSP), pp 2982–2986 Zhang Z, Zhang Y, Zhou Z, Luo J (2018) Boundary-based Image Forgery Detection by Fast Shallow CNN. In: IEEE international conference on pattern recognition (ICPR) Zhong J-L, Pun C-M (2019) An end-to-end dense-inceptionnet for image copy-move forgery detection. IEEE Trans Inf Forensics Secur 15:2134–2146 Zhou P, Chen B-C, Han X, Najibi M, Shrivastava A, Lim S-N, Davis L (2020) Generate, segment, and refine: towards generic manipulation segmentation 34:13058–13065, Apr 2020 Zhou P, Han X, Morariu V, Davis L (2017) Two-stream neural networks for tampered face detection. In: IEEE computer vision and pattern recognition (CVPR) workshops, pp 1831–1839 Zhou P, Han X, Morariu V, Davis L (2018) Learning rich features for image manipulation detection. In: IEEE conference on computer vision and pattern recognition (CVPR) Zhu X, Qian Y, Zhao X, Sun B, Sun Y (2018) A deep learning approach to patch-based image inpainting forensics. Signal Process: Image Commun 67:90–99 Zhu M, He D, Li X, Li C, Li F, Liu X, Ding E, Zhang Z (2021) Image inpainting by end-to-end cascaded refinement with mask awareness. IEEE Trans Image Process 30:4855–4866

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Chapter 12

DeepFake Detection

Siwei Lyu

One particularly disconcerting form of disinformation is impersonating audio and video backed by advanced AI technologies, in particular deep neural networks (DNNs). These media forgeries are commonly known as DeepFakes. AI-based tools are making it easier and faster than ever to create compelling fakes that are challenging to spot. While there are interesting and creative applications of this technology, it can also be weaponized with negative consequences. In this chapter, we survey the state-of-the-art DeepFake detection methods. We introduce the technical challenges in DeepFake detection and how researchers formulate solutions to tackle this problem. We discuss the pros and cons, as well as the potential pitfalls and drawbacks, of each type of solution. Notwithstanding this progress, there are a number of critical problems that are yet to be resolved by existing DeepFake detection methods. We also highlight a few of these challenges and discuss the research opportunities in this direction.

12.1 Introduction

Falsified images and videos created by AI algorithms, more commonly known as DeepFakes, are a recent twist to the disconcerting problem of online disinformation. Although the fabrication and manipulation of digital images and videos are not new (Farid 2012), the rapid development of deep neural networks (DNNs) in recent years has made the process of creating convincing fake images and videos increasingly easier and faster. DeepFake videos first caught the public's attention in late 2017, when a Reddit account of the same name began posting synthetic pornographic videos generated using a DNN-based face-swapping algorithm. Subsequently, the technologies behind DeepFakes have been mainstreamed through readily available software on GitHub.1 There are also emerging online services2 and start-up companies that have commercialized tools to generate DeepFake videos on demand.3 Currently, there are three major types of DeepFake videos.
• Head puppetry entails synthesizing a video of a target person's whole head and upper shoulders from a video of a source person's head, so that the synthesized target appears to behave the same way as the source.
• Face swapping involves generating a video of the target with the faces replaced by synthesized faces of the source while keeping the same facial expressions.
• Lip syncing creates a falsified video by manipulating only the lip region, so that the target appears to say something that s/he does not say in reality.

Fig. 12.1 Examples of DeepFake videos: (top) head puppetry, (middle) face swapping, and (bottom) lip syncing

Figure 12.1 shows example frames of each type of DeepFake video listed above. While there are interesting and creative applications of DeepFake videos, due to the strong association of faces with the identity of an individual, they can also be weaponized.

1 E.g., FakeApp (FakeApp 2020), DFaker (DFaker github 2019), faceswap-GAN (faceswap-GAN github 2019), faceswap (faceswap github 2019), and DeepFaceLab (DeepFaceLab github 2020).
2 E.g., https://deepfakesweb.com.
3 E.g., Synthesia (https://www.synthesia.io/) and Canny AI (https://www.cannyai.com/).


Well-crafted DeepFake videos can create illusions of a person's presence and activities that did not occur in reality, which can lead to serious political, social, financial, and legal consequences (Chesney and Citron 2019). The potential threats range from revenge pornographic videos of a victim whose face is synthesized and spliced in, to realistic-looking videos of state leaders seeming to make inflammatory comments they never actually made, a high-level executive commenting on her company's performance to influence the global stock market, or an online sex predator masquerading visually as a family member or a friend in a video chat. Since the first known case of DeepFake videos was reported in December 2017,4 the mounting concerns over the negative impacts of DeepFakes have spawned an increasing interest in DeepFake detection in the Multimedia Forensics research community, and the first dedicated DeepFake detection algorithm was developed by my group in June 2018 (Li et al. 2018). Subsequently, there have been avid developments, with many DeepFake detection methods proposed in the past few years, e.g., Li et al. (2018, 2019a, b), Li and Lyu (2019), Yang et al. (2019), Matern et al. (2019), Ciftci et al. (2020), Afchar et al. (2018), Güera and Delp (2018a, b), McCloskey and Albright (2018), Sabir et al. (2019), Rössler et al. (2019), Nguyen et al. (2019a, b, c), Nataraj et al. (2019), Xuan et al. (2019), Jeon et al. (2020), Mo et al. (2018), Liu et al. (2020), Fernando et al. (2019), Shruti et al. (2020), Koopman et al. (2018), Amerini et al. (2019), Chen and Yang (2020), Wang et al. (2019a), Guarnera et al. (2020), Durall et al. (2020), Frank et al. (2020), Ciftci and Demir (2019), Stehouwer et al. (2019), Bonettini et al. (2020), Khalid and Woo (2020). Correspondingly, there have also been several large-scale benchmark datasets to evaluate DeepFake detection performance (Yang et al. 2019; Korshunov and Marcel 2018; Rössler et al. 2019; Dufour et al. 2019; Dolhansky et al. 2019; Ciftci and Demir 2019; Jiang et al. 2020; Wang et al. 2019b; Khodabakhsh et al. 2018; Stehouwer et al. 2019). DeepFake detection has also been supported by government funding agencies and private companies alike.5 A climax of these efforts is the first DeepFake Detection Challenge, held from late 2019 to early 2020 (Deepfake detection challenge 2021). Overall, the DeepFake Detection Challenge reflects the state of the art, with the winning solutions being a tour de force of advanced DNNs (an average precision of 82.56% by the top performer). These provide us with effective tools to expose DeepFakes that are automated and mass-produced by AI algorithms. However, we need to be cautious in reading these results. Although the organizers have made their best effort to simulate situations in which DeepFake videos are deployed in real life, there is still a significant discrepancy between performance on the evaluation dataset and on more realistic data: when tested on unseen videos, the top performer's accuracy dropped to 65.18%. In addition, all solutions are based on clever designs of DNNs and data augmentations, but provide little insight beyond "black box"-type classification. Furthermore, these detection results may not fully reflect the actual detection performance on a single DeepFake video, especially one that has been manually processed and perfected after being generated by the AI algorithms. Such "crafted" DeepFake videos are more likely to cause real damage, and careful manual post-processing can reduce or remove the artifacts that the detection algorithms predicate on.

In this chapter, we survey the state-of-the-art DeepFake detection methods. We introduce the technical challenges in DeepFake detection and how researchers formulate solutions to tackle this problem. We discuss the pros and cons, as well as the potential pitfalls and drawbacks, of each type of solution. We also provide an overview of research efforts for DeepFake detection and a systematic comparison of existing datasets, methods, and performances. Notwithstanding this progress, there are a number of critical problems that are yet to be resolved by existing DeepFake detection methods. We also highlight a few of these challenges and discuss the research opportunities in this direction.

4 https://www.vice.com/en/article/gydydm/gal-gadot-fake-ai-porn.
5 Detection of DeepFakes is one of the goals of the DARPA MediFor (2016–2020) program, which also sponsored the NIST MFC 2018 and 2020 Synthetic Data Detection Challenge (NIST MFC 2018), and of the SemaFor (2020–2024) program. The Global DeepFake Detection Challenge (Deepfake detection challenge 2021) in 2020 was sponsored by leading tech companies including Facebook, Amazon, and Microsoft.

12.2 DeepFake Video Generation

Although in recent years there have been many sophisticated algorithms for generating realistic synthetic face videos (Bitouk et al. 2008; Dale et al. 2011; Suwajanakorn et al. 2015, 2017; Thies et al. 2016; Korshunova et al. 2017; Pham et al. 2018; Karras et al. 2018a, 2019; Kim et al. 2018; Chan et al. 2019), most of them have not been mainstreamed as open-source software tools that anyone can use. Instead, it is a much simpler method, based on neural image style transfer (Liu et al. 2017), that has become the tool of choice for creating DeepFake videos at scale, with several independent open-source implementations. We refer to this method as the basic DeepFake maker; it underlies many DeepFake videos circulated on the Internet and included in existing datasets.

The overall pipeline of the basic DeepFake maker is shown in Fig. 12.2 (left). From an input video, faces of the target are detected, from which facial landmarks are further extracted. The landmarks are used to align the faces to a standard configuration (Kazemi and Sullivan 2014). The aligned faces are then cropped and fed to an auto-encoder (Kingma and Welling 2014) to synthesize faces of the donor with the same facial expressions as the original target's faces.

The auto-encoder is usually formed by two convolutional neural networks (CNNs), i.e., the encoder and the decoder. The encoder E converts the input target's face to a vector known as the code. To ensure that the encoder captures identity-independent attributes such as facial expressions, a single encoder is shared regardless of the identities of the subjects. On the other hand, each identity has a dedicated decoder Di, which generates a face of the corresponding subject from the code. The encoder and decoders are trained in tandem using uncorresponded face sets of multiple subjects in an unsupervised manner, Fig. 12.2 (right). Specifically, an encoder-decoder pair is formed alternately from E and Di for the input faces of each subject, and their parameters are optimized to minimize the reconstruction error (the ℓ1 difference between the input and reconstructed faces). The parameter updates are performed with back-propagation until convergence. The synthesized faces are then warped back to the configuration of the original target's faces and trimmed with a mask derived from the facial landmarks. The last step involves smoothing the boundaries between the synthesized regions and the original video frames. The whole process is automatic and runs with little manual intervention.

Fig. 12.2 Synthesis (left) and training (right) of the basic DeepFake maker algorithm. See text for more details
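To make the shared-encoder, per-identity-decoder scheme described above concrete, the following is a minimal PyTorch sketch of its training step. The layer sizes, the 64x64 face crops, and the faces_by_id data structure are illustrative assumptions and do not reproduce the exact architecture of any particular DeepFake tool.

```python
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Shared encoder: maps an aligned 64x64 face crop to an identity-independent code."""

    def __init__(self, code_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),     # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, code_dim),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Per-identity decoder: reconstructs a face of one subject from the code."""

    def __init__(self, code_dim=512):
        super().__init__()
        self.fc = nn.Linear(code_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, code):
        h = self.fc(code).view(-1, 256, 8, 8)
        return self.net(h)


def train_step(encoder, decoders, faces_by_id, optimizer):
    """One alternating update: the faces of every subject go through the shared
    encoder and that subject's own decoder; the l1 reconstruction error is
    minimized by back-propagation. faces_by_id maps a subject id to a batch of
    aligned face crops of shape (B, 3, 64, 64) with values in [0, 1]."""
    loss = 0.0
    for subject_id, faces in faces_by_id.items():
        recon = decoders[subject_id](encoder(faces))
        loss = loss + torch.mean(torch.abs(recon - faces))  # l1 difference
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)


# Usage sketch: one shared encoder, one decoder per subject; the optimizer
# covers the encoder and all decoders.
# enc = Encoder()
# decs = {"target": Decoder(), "donor": Decoder()}
# params = list(enc.parameters()) + [p for d in decs.values() for p in d.parameters()]
# opt = torch.optim.Adam(params, lr=5e-5)
```

At synthesis time, the target's aligned face is passed through the shared encoder and decoded with the donor's decoder, which yields the donor's identity rendered with the target's expression; the result is then warped and blended back into the frame as described above.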

12.3 Current DeepFake Detection Methods

In this section, we provide a brief overview of current DeepFake detection methods. As this is an actively researched area with a rapidly growing literature, we cannot promise comprehensive coverage of all existing methods. Furthermore, as the focus here is on DeepFake videos, we exclude detection methods for GAN-generated images. Our objective is to summarize the existing detection methods at a more abstract level, pointing out some common characteristics and challenges that can benefit the future development of more effective detection methods.

12.3.1 General Principles

As detection of DeepFakes is a problem in Digital Media Forensics, it complies with the field's three general principles.
• Principle 1: Manipulation operations leave traces in the falsified media.


Digitally manipulated or synthesized media (including DeepFakes) are created by a process other than a capturing device recording an event that actually occurs in the physical world. This fundamental difference in the creation process will be reflected in the resulting falsified media, albeit at different scales depending on the amount of change to the original media. Based on this principle, we can conclude that DeepFakes are detectable.
• Principle 2: A single forensic measure can be circumvented.
Any forensic detection method differentiates real and falsified media based on certain characteristics of the media signals. However, the same characteristics can be exploited to evade that very detection, either by hiding the traces or by disrupting them. This principle is the premise for anti-forensic measures against DeepFake detection.
• Principle 3: There is an intention behind every falsified media item.
A falsified media item found in the wild is made for a reason. It could be a satire or a prank, but it could also be a malicious attack on the victim's reputation and credibility. Understanding the motivation behind a DeepFake can provide richer information for its detection, as well as help prevent and mitigate the damage.

12.3.2 Categorization Based on Methodology

We categorize existing DeepFake detection methods into three types. The first two work by seeking specific artifacts in DeepFake videos that distinguish them from real videos (Principle 1 of Sect. 12.3.1).

12.3.2.1 Signal Feature-Based Methods

The signal feature-based methods (e.g., Afchar et al. 2018; Güera and Delp 2018b; McCloskey and Albright 2018; Li and Lyu 2019) look for abnormalities at the signal level, treating a video as a sequence of frames f(x, y, t) together with a synchronized audio signal a(t) if an audio track exists. Such abnormalities are often caused by various steps in the processing pipeline of DeepFake generation. For instance, our recent work (Li and Lyu 2019) exploits the signal artifacts introduced by the resizing and interpolation operations during the post-processing of DeepFake generation. Similarly, the work in Li et al. (2019a) focuses on the boundary where the synthesized face regions are blended into the original frame. In Frank et al. (2020), the authors observe that synthesized faces often exhibit abnormalities at high frequencies due to the up-sampling operation, and propose a simple detection method in the frequency domain based on this observation. In Durall et al. (2020), the detection method uses a classical frequency domain analysis followed by an SVM classifier. The work in Guarnera et al. (2020) uses the EM algorithm to extract a set of local features in the frequency domain that are used to distinguish frames of DeepFakes from those of real videos.
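To illustrate the flavor of these frequency-domain approaches, the following sketch computes a 1D azimuthally averaged power spectrum of a grayscale face crop and feeds it to an SVM. It is a simplified stand-in inspired by Durall et al. (2020) and Frank et al. (2020), not a reproduction of their exact feature designs; the bin count and classifier settings are arbitrary assumptions.

```python
import numpy as np
from sklearn.svm import SVC


def azimuthal_spectrum(gray, n_bins=64):
    """1D azimuthally averaged power spectrum of a grayscale image; the
    high-frequency bins tend to behave differently for up-sampled or
    GAN-synthesized faces than for camera-captured ones."""
    f = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(f) ** 2
    h, w = gray.shape
    y, x = np.indices((h, w))
    r = np.sqrt((y - h // 2) ** 2 + (x - w // 2) ** 2)
    bins = np.minimum((r / r.max() * n_bins).astype(int), n_bins - 1)
    spectrum = np.array([power[bins == b].mean() for b in range(n_bins)])
    return np.log1p(spectrum)


def fit_detector(face_crops, labels):
    """face_crops: list of 2D grayscale arrays; labels: 1 = DeepFake, 0 = real."""
    feats = np.stack([azimuthal_spectrum(c) for c in face_crops])
    return SVC(kernel="rbf", probability=True).fit(feats, labels)
```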


The main advantage of signal feature-based methods is that the abnormalities are usually fundamental to the generation process, and fixing them may require significant changes to the underlying DNN model or post-processing steps. On the other hand, the reliance on signal features also means that these methods are susceptible to disturbances of the signals, such as interpolation (rotation, up-sizing), down-sampling (down-sizing), additive noise, blurring, and compression.

12.3.2.2 Physical/Physiological-Based Methods

The physical/physiological-based methods (e.g., Li et al. 2018; Yang et al. 2019; Matern et al. 2019; Ciftci et al. 2020; Hu et al. 2020) expose DeepFake videos based on their violations of the fundamental laws of physics or of human physiology. The DNN models that synthesize human faces have no direct knowledge of the physiological traits of human faces or of the physical laws of the surrounding environment; such information can only be incorporated into the models indirectly (and inefficiently) through training data. This lack of knowledge can lead to detectable traits that are also intuitive to humans. For instance, the first dedicated DeepFake detection method (Li et al. 2018) works by detecting the inconsistency or lack of realistic eye blinking in DeepFake videos. This is due to the training data for the DeepFake generation model, which are often obtained from the Internet as portrait images of a subject. As portrait images dominantly show the subject with open eyes, DNN generation models trained on them cannot reproduce the closed eyes needed for realistic eye blinking. The work of Yang et al. (2019) uses the physical inconsistencies in the head poses of DeepFake videos caused by splicing the synthesized face region into the original frames. The work of Matern et al. (2019) summarizes various appearance artifacts that can be observed in DeepFake videos, such as inconsistent eye colors, missing reflections, and fuzzy details in the eye and teeth areas, and then proposes a set of simple features for DeepFake detection. The work in Ciftci et al. (2020) introduces a DeepFake detection method based on biological signals extracted from facial regions in portrait videos that are invisible to human eyes, such as the slight color changes due to the blood flow caused by heart beats. These biological signals are used to assess spatial coherence and temporal consistency.

The main advantage of physical/physiological-based methods is their superior intuitiveness and explainability. This is especially important when the detection results are used by practitioners such as journalists. On the other hand, physical/physiological-based detection methods are limited by the effectiveness and robustness of the underlying Computer Vision algorithms, and their strong reliance on semantic cues also limits their applicability.
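As a simplified illustration of the physiological idea behind the eye-blinking cue of Li et al. (2018), the following sketch computes an eye aspect ratio from facial landmarks and flags videos whose subjects never appear to blink. The published method uses a learned temporal model rather than this hand-crafted threshold, so the snippet should be read as a didactic approximation; the landmark ordering and the 0.2 threshold are assumptions.

```python
import numpy as np


def eye_aspect_ratio(eye):
    """eye: (6, 2) landmark coordinates around one eye, ordered as in the
    common 68-point landmark convention (corner, top x2, corner, bottom x2)."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)


def closed_eye_fraction(ear_per_frame, closed_thresh=0.2):
    """Fraction of frames with the eye apparently closed. A long talking-head
    video whose subject never blinks (fraction ~ 0) is suspicious under this cue."""
    ear = np.asarray(ear_per_frame, dtype=float)
    return float((ear < closed_thresh).mean())
```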

12.3.2.3 Data-Driven Methods

The third category of detection methods comprises the data-driven methods (e.g., Sabir et al. 2019; Rössler et al. 2019; Nguyen et al. 2019a, b, c; Nataraj et al. 2019; Xuan et al. 2019; Wang et al. 2019b; Jeon et al. 2020; Mo et al. 2018; Liu et al. 2020; Li et al. 2019a, b; Güera and Delp 2018a; Fernando et al. 2019; Shruti et al. 2020; Koopman et al. 2018; Amerini et al. 2019; Chen and Yang 2020; Wang et al. 2019a; Guarnera et al. 2020; Durall et al. 2020; Frank et al. 2020; Ciftci and Demir 2019; Stehouwer et al. 2019; Bonettini et al. 2020; Khalid and Woo 2020), which do not target specific features but use videos labeled as real or DeepFake to train ML models (oftentimes DNNs) that can differentiate DeepFake videos from real ones. Note that feature-based methods can also use DNN classifiers; what makes the data-driven methods different is that the cues for classifying the two types of videos are implicit and found by the ML models. As such, the success of data-driven methods largely depends on the quality and diversity of the training data, as well as on the design of the ML models. Early data-driven DeepFake detection methods reuse standard DNN models (e.g., VGG-Net, Do et al. 2018; XceptionNet, Rössler et al. 2019) designed for other Computer Vision tasks such as object detection and recognition. There also exist methods that use more specific or novel network architectures. MesoNet (Afchar et al. 2018) is an early detection method that uses a customized DNN model to detect DeepFakes and interprets the detection results by correlating them with mesoscopic-level image features. The DeepFake detection method of Liu et al. (2020) uses GramNet to capture differences in global image texture; it adds "Gram blocks" to a conventional CNN model that calculate Gram matrices at different levels of the network. The detection methods in Nguyen et al. (2019c, b) use capsule networks and show that they can achieve similar performance with fewer parameters than CNN models. It should be mentioned that, to date, the state-of-the-art performance in DeepFake detection is obtained with data-driven methods based on large-scale DeepFake datasets and innovative model designs and training procedures. For instance, the top performer of the DeepFake Detection Challenge6 is based on an ensemble of seven different EfficientNet models (Tan and Le 2019). The runner-up method7 also uses an ensemble that includes EfficientNet and XceptionNet models, and additionally introduces a new data augmentation method (WSDAN) to anticipate the different configurations of faces.

6 https://github.com/selimsef/dfdc_deepfake_challenge.
7 https://github.com/cuihaoleo/kaggle-dfdc.
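The core recipe shared by many data-driven detectors can be summarized in a few lines: fine-tune a standard backbone as a binary real-versus-DeepFake frame classifier on aligned face crops. The sketch below uses torchvision's EfficientNet-B0 purely as an illustrative choice (the string weights argument assumes torchvision 0.13 or later); it is far smaller than the ensembles used by the challenge winners.

```python
import torch
import torch.nn as nn
from torchvision import models


def build_frame_classifier():
    # Pretrained ImageNet backbone, re-headed for binary real/DeepFake output.
    net = models.efficientnet_b0(weights="IMAGENET1K_V1")
    net.classifier[1] = nn.Linear(net.classifier[1].in_features, 2)
    return net


def train_epoch(net, loader, optimizer, device="cuda"):
    """loader yields (frames, labels): frames are aligned face crops,
    labels are 0 for real and 1 for DeepFake."""
    criterion = nn.CrossEntropyLoss()
    net.to(device).train()
    for frames, labels in loader:
        frames, labels = frames.to(device), labels.to(device)
        loss = criterion(net(frames), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```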


12.3.3 Categorization Based on Input Types

Another way to categorize existing DeepFake detection methods is by the type of their inputs. Most existing DeepFake detection methods are based on binary classification at the frame level, i.e., determining the likelihood that an individual frame is real or DeepFake. Although simple and easy to implement, frame-based detection methods have two issues. First, the temporal consistency among frames is not explicitly considered, even though (i) many DeepFake videos exhibit temporal artifacts and (ii) real or DeepFake frames tend to appear in continuous intervals. Second, an extra step is needed when a video-level integrity score is required: the scores over individual frames must be aggregated using some aggregation rule, common choices of which include the average, the maximum, or the average of the top range (see the sketch below). Temporal methods (e.g., Sabir et al. 2019; Amerini et al. 2019; Koopman et al. 2018), on the other hand, take whole frame sequences as input and use the temporal correlation between frames as an intrinsic feature. Temporal methods often use sequential models such as RNNs as the basic model structure and directly output a video-level prediction. The audio-visual detection methods also use the audio track as an input and detect DeepFakes based on asynchrony between the audio and the frames.
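The frame-to-video aggregation rules mentioned above (average, maximum, average of the top range) can be written compactly; the following sketch is illustrative, and the default top fraction of 10% is an arbitrary assumption.

```python
import numpy as np


def aggregate(frame_scores, rule="mean", top_frac=0.1):
    """Combine per-frame DeepFake scores into one video-level score."""
    s = np.asarray(frame_scores, dtype=float)
    if rule == "mean":
        return float(s.mean())
    if rule == "max":
        return float(s.max())
    if rule == "top_mean":  # average of the highest-scoring fraction of frames
        k = max(1, int(len(s) * top_frac))
        return float(np.sort(s)[-k:].mean())
    raise ValueError(f"unknown aggregation rule: {rule}")
```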

12.3.4 Categorization Based on Output Types

Regardless of the underlying methodology or type of input, the majority of existing DeepFake detection methods formulate the problem as binary classification, which returns a label for each input video signifying whether the input is real or a DeepFake. Often, the predicted labels are accompanied by a confidence score, a real value in [0, 1], which may be interpreted as the probability of the input belonging to one of the classes (i.e., real or DeepFake). A few methods extend this to a multi-class classification problem, where the labels also reflect different types of DeepFake generation models. There are also methods that solve the localization problem (e.g., Li et al. 2019b; Huang et al. 2020), further identifying the spatial area (in the form of bounding boxes or masked-out regions) and the time interval of DeepFake treatments.

12.3.5 The DeepFake-o-Meter Platform

Unfortunately, the plethora of DeepFake detection algorithms has not been taken full advantage of. On the one hand, differences in training datasets, hardware, and learning architectures across research publications make rigorous comparisons of different detection algorithms challenging. On the other hand, the cumbersome process of downloading, configuring, and installing individual detection algorithms denies most users access to the state-of-the-art DeepFake detection methods. To this end, we have developed DeepFake-o-meter (http://zinc.cse.buffalo.edu/ubmdfl/deepfake-o-meter/), an open platform for DeepFake detection. For developers of DeepFake detection algorithms, it provides an API architecture to wrap individual algorithms and run them on a third-party remote server. For researchers, it is an evaluation/benchmarking platform to compare multiple algorithms on the same input. For users, it provides a convenient portal to use multiple state-of-the-art detection algorithms. Currently, we have incorporated 11 state-of-the-art DeepFake image and video detection methods. A sample analysis result is shown in Fig. 12.3.

Fig. 12.3 Detection results on DeepFake-o-meter of a state-of-the-art DeepFake detection method (Li and Lyu 2019) over a fake video on youtube.com. The lower integrity score (range in [0, 1]) suggests a video frame is more likely to have been generated using DeepFake algorithms
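The following is a purely hypothetical sketch of what wrapping heterogeneous detectors behind one common interface could look like; it is not the actual DeepFake-o-meter API, and every class and function name in it is invented for illustration only.

```python
from abc import ABC, abstractmethod
from typing import Dict, List


class DeepFakeDetector(ABC):
    """Hypothetical wrapper interface: each third-party method implements it so
    that a platform can run every detector on the same uploaded video."""

    name: str = "base"

    @abstractmethod
    def score_frames(self, video_path: str) -> List[float]:
        """Return one fake-likelihood score per analyzed frame."""


def run_all(detectors: List[DeepFakeDetector], video_path: str) -> Dict[str, float]:
    """Collect a video-level score (here, the mean frame score) from every detector."""
    results = {}
    for det in detectors:
        scores = det.score_frames(video_path)
        results[det.name] = sum(scores) / max(len(scores), 1)
    return results
```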

12.3.6 Datasets

The availability of large-scale datasets of DeepFake videos is an enabling factor for the development of DeepFake detection methods. The first DeepFake dataset, UADFV (Yang et al. 2019), had only 49 DeepFake videos with visible artifacts when it was released in June 2018. Subsequently, more DeepFake datasets have been proposed with increasing quantity and quality. Several examples of synthesized DeepFake video frames are shown in Fig. 12.4.
• UADFV (Yang et al. 2019): This dataset contains 49 real videos downloaded from YouTube, which were used to create 49 DeepFake videos.
• DeepfakeTIMIT (Korshunov and Marcel 2018): The original videos in this dataset are from the VidTIMIT database, from which a total of 640 DeepFake videos were generated.
• FaceForensics++ (Rössler et al. 2019): FaceForensics++ is a forensics dataset consisting of 1,000 original video sequences that have been manipulated with four automated face manipulation methods: DeepFakes, Face2Face, FaceSwap, and NeuralTextures.


Fig. 12.4 Example frames of DeepFake videos from the Celeb-DF dataset (Li et al. 2020). The leftmost column corresponds to frames from the original videos, and the other columns are frames from DeepFake videos with swapped, synthesized faces


• DFD: The Google/Jigsaw DeepFake detection dataset has 3,068 DeepFake videos generated from 363 original videos of 28 consented individuals of various genders, ages, and ethnic groups.
• DFDC (Dolhansky et al. 2019): The Facebook DeepFake Detection Challenge (DFDC) dataset is part of the DeepFake Detection Challenge; it has 4,113 DeepFake videos created from 1,131 original videos of 66 consented individuals of various genders, ages, and ethnic groups.
• Celeb-DF (Li et al. 2020): The Celeb-DF dataset contains real and DeepFake synthesized videos with visual quality on par with those circulated online. It includes 590 original videos collected from YouTube, with subjects of different ages, ethnic groups, and genders, and 5,639 corresponding DeepFake videos.
• DeeperForensics-1.0 (Jiang et al. 2020): DeeperForensics-1.0 consists of 10,000 DeepFake videos generated by a new DNN-based face swapping framework.
• DFFD (Stehouwer et al. 2019): The DFFD dataset contains 3,000 videos of four types of digital manipulation: identity swap, expression swap, attribute manipulation, and entirely synthesized faces.

12.3.7 Challenges

Although impressive progress has been made in the performance of DeepFake video detection, several concerns about the current detection methods suggest caution.

Performance Evaluation. Currently, the problem of detecting DeepFake videos is commonly formulated, solved, and evaluated as a binary classification problem, where each video is categorized as real or DeepFake. Such a dichotomy is easy to set up in controlled experiments, where we develop and test DeepFake detection algorithms using videos that are either pristine or made with DeepFake generation algorithms. However, the picture is murkier when the detection method is deployed in the real world. For instance, videos can be fabricated or manipulated in ways other than DeepFakes, so not being detected as a DeepFake does not necessarily mean the video is a real one. Also, a DeepFake video may be subject to other types of manipulations, and a single label may not comprehensively reflect this. Furthermore, in a video with multiple subjects' faces, only one or a few may be generated with DeepFake, and only for a fraction of the frames. So the binary classification scheme needs to be extended to multi-class, multi-label, and local classification/detection to fully handle the complexities of real-world media forgeries.

Explainability of Detection Results. Current DeepFake detection methods are mostly designed to perform batch analysis over a large collection of videos. However, when the detection methods are used in the field by journalists or law enforcement, we usually need to analyze only a small number of videos. A numerical score corresponding to the likelihood of a video being generated using a synthesis algorithm is not as useful to practitioners if it is not corroborated with proper reasoning behind the score. In such scenarios, it is very typical to request a justification for the numerical score before the analysis can be accepted for publishing or used in court. However, many data-driven DeepFake detection methods, especially those based on deep neural networks, lack explainability due to the black-box nature of the DNN models.

Generalization Across Datasets. As a DNN-based DeepFake detection method needs to be trained on a specific training dataset, a lack of generalization has been observed for DeepFake videos created using models not represented in the training dataset. This is different from overfitting, as the learned model may perform well on testing videos that are created with the same model but not used in training. The problem is caused by "domain shifting" when the trained detection method is applied to DeepFake videos generated using different generation models. A simple solution is to enlarge the training set to represent more diverse generation models, but a more flexible approach is needed to scale up to previously unseen models.

Social Media Laundering. A large fraction of online videos are now spread through social networks, e.g., Facebook, Instagram, and Twitter. To save network bandwidth and to protect users' privacy, these videos are usually stripped of metadata, down-sized, and then heavily compressed before they are uploaded to the social platforms. These operations, commonly known as social media laundering, are detrimental to recovering traces of the underlying manipulation and at the same time increase false positive detections, i.e., classifying a real video as a DeepFake. So far, most data-driven DeepFake detection methods that use signal-level features are strongly affected by social media laundering. A practical measure to improve the robustness of DeepFake detection methods to social media laundering is to actively incorporate simulations of such effects in the training data (a sketch of such a simulation is given at the end of this section), and also to enhance evaluation datasets to include performance on social media laundered videos, both real and synthesized.

Anti-forensics. With the increasing effectiveness of DeepFake detection methods, we also anticipate the development of corresponding anti-forensic measures, which take advantage of the vulnerabilities of current DeepFake detection methods to conceal revealing traces of DeepFake videos. The data-driven DNN-based DeepFake detection methods are particularly susceptible to anti-forensic attacks due to the known vulnerability of general deep neural network classification models (see Principle 2 of Sect. 12.3.1). Indeed, recent years have witnessed a rapid development of anti-forensic methods based on adversarial attacks on DNN models targeting DeepFake detectors (Huang et al. 2020; Carlini and Farid 2020; Gandhi and Jain 2020; Neekhara 2020). Anti-forensic measures can also be developed in the other direction, to disguise a real video as a DeepFake by adding the simulated signal-level features used by current detection algorithms, a situation we term a fake DeepFake. DeepFake detection methods must therefore improve to handle such intentional and adversarial attacks.
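The training-time simulation of social media laundering suggested above can be sketched as a simple data augmentation; the scale range and JPEG quality range below are illustrative assumptions, since real platforms apply their own, undocumented processing.

```python
import io
import random
from PIL import Image


def simulate_laundering(img: Image.Image) -> Image.Image:
    """Roughly mimic what sharing platforms do to an uploaded frame:
    drop metadata, downsize, and re-compress with a low JPEG quality."""
    img = img.convert("RGB")
    w, h = img.size
    scale = random.uniform(0.4, 0.8)                       # random down-sizing
    small = img.resize((max(1, int(w * scale)), max(1, int(h * scale))))
    buf = io.BytesIO()
    small.save(buf, format="JPEG", quality=random.randint(35, 75))  # heavy compression
    buf.seek(0)
    return Image.open(buf).convert("RGB")                  # metadata is not preserved
```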


12.4 Future Directions

Besides continuing to improve on the aforementioned limitations, we also envision a few important directions for DeepFake detection methods that will receive more attention in the coming years.

Other Forms of DeepFake Videos. Although face swapping is currently the most widely known form of DeepFake video, it is by no means the most effective. In particular, for the purpose of impersonating someone, face-swapping DeepFake videos have several limitations. Psychological studies (Sinha et al. 2009) show that human face recognition relies largely on information gleaned from face shape and hairstyle. As such, to create a convincing impersonating effect, the person whose face is to be replaced (the target) has to have a face shape and hairstyle similar to those of the person whose face is used for swapping (the donor). Second, as the synthesized faces need to be spliced into the original video frame, the inconsistencies between the synthesized region and the rest of the original frame can be severe and difficult to conceal. In these respects, other forms of DeepFake videos, namely head puppetry and lip syncing, are more effective and thus should become the focus of subsequent research in DeepFake detection. Methods for whole-face synthesis and reenactment have experienced fast development in recent years. Although there have not been as many easy-to-use and free open-source software tools for generating these types of DeepFake videos as for face-swapping videos, the continuing sophistication of the generation algorithms will change this situation in the near future. Because the synthesized region differs from that of face-swapping DeepFake videos (the whole face for head puppetry and the lip area for lip syncing), detection methods designed around artifacts specific to face swapping are unlikely to be effective for these videos. Correspondingly, we should develop detection methods that are effective for these types of DeepFake videos.

Audio DeepFakes. AI-based impersonation is not limited to imagery; recent AI-based content generation is leading to the creation of highly realistic synthesized audio (Ping et al. 2017; Gu and Kang 2018; AlBadawy and Lyu 2020). Using synthesized audio of the impersonated target can make DeepFake videos significantly more convincing and compounds their negative impact. As audio signals are 1D signals with a very different nature from images and videos, different methods need to be developed to specifically target such forgeries. This problem has recently drawn attention in the speech processing community, with part of the most recent Global ASVspoofing Challenge (https://www.asvspoof.org/) dedicated to AI-driven voice conversion detection, and a few dedicated methods for audio DeepFake detection, e.g., AlBadawy et al. (2019), have also appeared recently. In the coming years, we expect more developments in these areas, in particular methods that can leverage both the visual and the audio features of fake videos.

Intent Inference. Even though the potential negative impacts of DeepFake videos are tremendous, in reality the majority of DeepFake videos are not created with a malicious intent. Many DeepFake videos currently circulated online are of a pranksome, humorous, or satirical nature. As such, it is important to expose the underlying intent of a DeepFake in the context of legal or journalistic investigation (Principle 3 in Sect. 12.3.1). Inferring intention may require more semantic and contextual understanding of the content; few forensic methods are designed to answer this question, but this is certainly a direction that future forensic methods will focus on.

Human Performance. Although the potential negative impacts of online DeepFake videos are widely recognized, there is currently a lack of formal and quantitative study of the perceptual and psychological factors underlying their deceptiveness. Interesting questions, such as whether there exists an uncanny valley8 for DeepFake videos, what the just noticeable difference between high-quality DeepFake videos and real videos is to human eyes, or what types or aspects of DeepFake videos are more effective in deceiving viewers, have yet to be answered. Pursuing these questions calls for close collaboration among researchers in digital media forensics and in perceptual and social psychology. There is no doubt that such studies are invaluable to research in detection techniques as well as to a better understanding of the social impact that DeepFake videos can cause.

8 The uncanny valley in this context refers to the phenomenon whereby a DeepFake generated face bearing a near-identical resemblance to a human being arouses a sense of unease or revulsion in the viewers.

Protection Measures. Given the speed and reach of the propagation of online media, even the best current forensic techniques will largely operate in a postmortem fashion, applicable only after AI-synthesized fake face images or videos emerge. We aim to develop proactive approaches that protect individuals from becoming the victims of such attacks and that complement the forensic tools. One such method we have recently studied (Li et al. 2019c; Sun et al. 2020) is to add specially designed patterns known as adversarial perturbations, which are imperceptible to human eyes but can result in detection failures. The rationale is as follows. High-quality AI face synthesis models need a large number of training face images, typically in the range of thousands and sometimes even millions, collected using automatic face detection methods, i.e., the face sets. Adversarial perturbations "pollute" a face set so that it contains few actual faces and many non-faces, which have low or no utility as training data for AI face synthesis models. The proposed adversarial perturbation generation method can be implemented as a service of photo/video sharing platforms before a user's personal images or videos are uploaded, or as a standalone tool that the user can apply to process the images and videos before they are uploaded online.
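To illustrate the idea behind such protective perturbations, the following is a toy, single-step (FGSM-style) sketch that nudges an image so that a differentiable face classifier is pushed toward a miss. It is only a conceptual approximation; the methods of Li et al. (2019c) and Sun et al. (2020) are considerably more elaborate, and the face_model, label convention, and epsilon budget here are assumptions.

```python
import torch
import torch.nn.functional as F


def protective_perturbation(image, face_model, face_label=1, eps=2.0 / 255):
    """image: (1, 3, H, W) tensor in [0, 1]; face_model: a differentiable model
    whose logits score whether a face is present (face_label indexes the 'face'
    class). A single FGSM step increases the loss of the 'face' prediction,
    nudging automatic face harvesting to fail while keeping the change within
    an imperceptible budget eps."""
    x = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(face_model(x), torch.tensor([face_label], device=x.device))
    loss.backward()
    perturbed = x + eps * x.grad.sign()  # ascend the loss: one FGSM step
    return perturbed.clamp(0.0, 1.0).detach()
```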

12.5 Conclusion and Outlook

We predict that several future technological developments will further improve the visual quality and generation efficiency of fake videos. Firstly, one critical disadvantage of the current DeepFake generation methods is that they cannot produce good details such as skin and facial hair. This is due to the loss of information in the encoding step of generation. However, this can be improved by incorporating GAN models (Goodfellow et al. 2014), which have demonstrated strong performance in recovering facial details in recent works (Karras et al. 2018b, 2019, 2020). Secondly, the synthesized videos can be made more realistic if they are accompanied by realistic voices, combining video and audio synthesis in one tool. In the face of this, the overall running efficiency, detection accuracy, and, more importantly, false positive rate of detection methods have to be improved for wide practical adoption. The detection methods also need to become more robust to real-life post-processing steps, social media laundering, and counter-forensic technologies. There is a perpetual competition of technology, know-how, and skills between the forgery makers and digital media forensics researchers. The future will be the judge of the predictions we make in this work.

References Afchar D, Nozick V, Yamagishi J, Echizen I (2018) Mesonet: a compact facial video forgery detection network. In: WIFS AlBadawy E, Lyu S (2020) Voice conversion using speech-to-speech neuro-style transfer. In: Interspeech, Shanghai, China AlBadawy E, Lyu S, Farid H (2019) Detecting ai-synthesized speech using bispectral analysis. In: Workshop on media forensics (in conjunction with CVPR), Long Beach, CA, United States Amerini I, Galteri L, Caldelli R, Del Bimbo A (2019) Deepfake video detection through optical flow based cnn. In: Proceedings of the IEEE international conference on computer vision workshops, pp 0-0 Bitouk D, Kumar N, Dhillon S, Belhumeur P, Nayar SK (2008) Face swapping: automatically replacing faces in photographs. ACM Trans Graph (TOG) Bonettini N, Cannas ED, Mandelli S, Bondi L, Bestagini P, Tubaro S (2020) Video face manipulation detection through ensemble of cnns. arXiv:2004.07676 Carlini N, Farid H (2020) Evading deepfake-image detectors with white- and black-box attacks. arXiv:2004.00622 Chan C, Ginosar S, Zhou T, Efros AA (2019) Everybody dance now. In: ICCV Chen Z, Yang H (2020) Manipulated face detector: joint spatial and frequency domain attention network. arXiv:2005.02958 Chesney R, Citron DK (2019) Deep Fakes: a looming challenge for privacy, democracy, and national security. In: 107 California Law Review (2019, Forthcoming); U of Texas Law, Public Law Research Paper No. 692; U of Maryland Legal Studies Research Paper No. 2018-21 Ciftci UA, Demir I (2019) Fakecatcher: detection of synthetic portrait videos using biological signals. arXiv:1901.02212 Ciftci UA, Demir I, Yin L (2020) How do the hearts of deep fakes beat? Deep fake source detection via interpreting residuals with biological signals. In: IEEE/IAPR international joint conference on biometrics (IJCB) Dale K, Sunkavalli K, Johnson MK, Vlasic D, Matusik W, Pfister H (2011) Video face replacement. ACM Trans Graph (TOG) DeepFaceLab github. https://github.com/iperov/DeepFaceLab. Accessed 4 July 2020 Deepfake detection challenge. https://deepfakedetectionchallenge.ai DFaker github. https://github.com/dfaker/df. Accessed 4 Nov 2019 Do N-T, Na I-S, Kim S-H (2018) Forensics face detection from gans using convolutional neural network Dolhansky B, Howes R, Pflaum B, Baram N, Ferrer CC (2019) The deepfake detection challenge (DFDC) preview dataset. arXiv:1910.08854


Dufour N, Gully A, Karlsson P, Vorbyov AV, Leung T, Childs J, Bregler C (2019) Deepfakes detection dataset by google & jigsaw Durall R, Keuper M, Keuper J (2020) Watch your up-convolution: Cnn based generative deep neural networks are failing to reproduce spectral distributions. arXiv:2003.01826 faceswap github. https://github.com/deepfakes/faceswap. Accessed 4 Nov 2019 faceswap-GAN github. https://github.com/shaoanlu/faceswap-GAN. Accessed 4 Nov 2019 FakeApp. https://www.malavida.com/en/soft/fakeapp/. Accessed 4 July 2020 Farid H (2012) Digital image forensics. MIT Press, Cambridge Fernando T, Fookes C, Denman S, Sridharan S (2019) Exploiting human social cognition for the detection of fake and fraudulent faces via memory networks. Computer vision and pattern recognition. arXiv:1911.07844 Frank J, Eisenhofer T, Schönherr L, Fischer A, Kolossa D, Holz T (2020) Leveraging frequency analysis for deep fake image recognition. arXiv:2003.08685 Gandhi A, Jain S (2020) Adversarial perturbations fool deepfake detectors. arXiv:2003.10596 Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. In: NeurIPS Guarnera L, Battiato S, Giudice O (2020) Deepfake detection by analyzing convolutional traces. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops Güera D, Delp EJ (2018a) Deepfake video detection using recurrent neural networks. In: 2018 15th IEEE international conference on advanced video and signal based surveillance (AVSS). IEEE, pp 1–6 Güera D, Delp EJ (2018b) Deepfake video detection using recurrent neural networks. In: AVSS Gu Y, Kang Y (2018) Multi-task WaveNet: a multi-task generative model for statistical parametric speech synthesis without fundamental frequency conditions. In: Interspeech, Hyderabad, India Huang Y, Juefei-Xu F, Wang R, Xie X, Ma L, Li J, Miao W, Liu Y, Pu G (2020) Fakelocator: robust localization of gan-based face manipulations via semantic segmentation networks with bells and whistles. Computer vision and pattern recognition. arXiv:2001.09598 Hu S, Li Y, Lyu S (2020) Exposing GAN-generated faces using inconsistent corneal specular highlights. arXiv:2009.11924 Jeon H, Bang Y, Woo SS (2020) Fdftnet: facing off fake images using fake detection fine-tuning network. Computer vision and pattern recognition. arXiv:2001.01265 Jiang L, Wu W, Li R, Qian C, Loy CC (2020) Deeperforensics-1.0: a large-scale dataset for real-world face forgery detection. arXiv:2001.03024 Karras T, Aila T, Laine S, Lehtinen J (2018a) Progressive growing of GANs for improved quality, stability, and variation. In: ICLR Karras T, Aila T, Laine S, Lehtinen J (2018b) Progressive growing of GANs for improved quality, stability, and variation. In: International conference on learning representations (ICLR) Karras T, Laine S, Aila T (2019) A style-based generator architecture for generative adversarial networks. In: CVPR Karras T, Laine S, Aittala M, Hellsten J, Lehtinen J, Aila T (2020) Analyzing and improving the image quality of stylegan. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 8110–8119 Kazemi V, Sullivan J (2014) One millisecond face alignment with an ensemble of regression trees. In: CVPR Khalid H, Woo SS (2020) Oc-fakedect: classifying deepfakes using one-class variational autoencoder.
In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR) workshops Khodabakhsh A, Ramachandra R, Raja KB, Wasnik P, Busch C (2018) Fake face detection methods: can they be generalized?, pp 1–6 Kim H, Garrido P, Tewari A, Xu W, Thies J, Nießner N, Pérez P, Richardt C, Zollhöfer M, Theobalt C (2018) Deep video portraits. ACM Trans Graph (TOG) Kingma DP, Welling M (2014) Auto-encoding variational bayes. In: ICLR

330

S. Lyu

Koopman M, Rodriguez AM, Geradts Z (2018) Detection of deepfake video manipulation. In: The 20th Irish machine vision and image processing conference (IMVIP), pp 133–136 Korshunova I, Shi W, Dambre J, Theis L (2017) Fast face-swap using convolutional neural networks. In: ICCV Korshunov P, Marcel S (2018) Deepfakes: a new threat to face recognition? Assessment and detection. arXiv:1812.08685 Li Y, Lyu S (2019) Exposing deepfake videos by detecting face warping artifacts. In: IEEE conference on computer vision and pattern recognition workshops (CVPRW) Li Y, Chang M-C, Lyu S (2018) In Ictu Oculi: exposing AI generated fake face videos by detecting eye blinking. In: IEEE international workshop on information forensics and security (WIFS) Li L, Bao J, Zhang T, Yang H, Chen D, Wen F, Guo B (2019a) Face x-ray for more general face forgery detection. arXiv:1912.13458 Li J, Shen T, Zhang W, Ren H, Zeng D, Mei T (2019b) Zooming into face forensics: a pixel-level analysis. Computer vision and pattern recognition. arXiv:1912.05790 Li Y, Yang X, Wu B, Lyu S (2019c) Hiding faces in plain sight: disrupting ai face synthesis with adversarial perturbations. arXiv:1906.09288 Li Y, Sun P, Qi H, Lyu S (2020) Celeb-DF: a Large-scale challenging dataset for DeepFake forensics. In: IEEE conference on computer vision and patten recognition (CVPR), Seattle, WA, United States Liu M-Y, Breuel T, Kautz J (2017) Unsupervised image-to-image translation networks. In: NeurIPS Liu Z, Qi X, Jia J, Torr P (2020) Global texture enhancement for fake face detection in the wild. arXiv:2002.00133 Matern F, Riess C, Stamminger M (2019) Exploiting visual artifacts to expose deepfakes and face manipulations. In: IEEE winter applications of computer vision workshops (WACVW) McCloskey S, Albright M (2018) Detecting gan-generated imagery using color cues. arXiv:1812.08247 Mo H, Chen B, Luo W (2018) Fake faces identification via convolutional neural network. In: Proceedings of the 6th ACM workshop on information hiding and multimedia security, pp 43–47 Nataraj L, Mohammed TM, Manjunath BS, Chandrasekaran S, Flenner A, Bappy JH, RoyChowdhury AK (2019) Detecting gan generated fake images using co-occurrence matrices. Electron Imag (2019)5:532–1 Neekhara P (2020) Adversarial deepfakes: evaluating vulnerability of deepfake detectors to adversarial examples. arXiv:2002.12749 Nguyen HH, Fang F, Yamagishi J, Echizen I (2019a) Multi-task learning for detecting and segmenting manipulated facial images and videos. In: IEEE international conference on biometrics: theory, applications and systems (BTAS) Nguyen HH, Yamagishi J, Echizen I (2019b) Capsule-forensics: using capsule networks to detect forged images and videos. In: ICASSP 2019-2019 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, pp 2307–2311 Nguyen HH, Yamagishi J, Echizen I (2019c) Use of a capsule network to detect fake images and videos. arXiv:1910.12467 NIST MFC 2018 Synthetic Data Detection Challenge. https://www.nist.gov/itl/iad/mig/mediaforensics-challenge-2019-0 Pham HX, Wang Y, Pavlovic V (2018) Generative adversarial talking head: bringing portraits to life with a weakly supervised neural network. arXiv:1803.07716 Ping W, Peng K, Gibiansky A, Arik SO, Kannan A, Narang S, Raiman J, Miller J (2017) Deep voice 3: 2000-speaker neural text-to-speech. arXiv:1710.07654 Rössler A, Cozzolino D, Verdoliva L, Riess C, Thies J, Nießner M (2019) FaceForensics++: learning to detect manipulated facial images. 
In: ICCV Sabir E, Cheng J, Jaiswal A, AbdAlmageed W, Masi I, Natarajan P (2019) Recurrent convolutional strategies for face manipulation detection in videos. Interfaces (GUI) 3:1 Shruti Agarwal HF, El-Gaaly T, Lim S-N (2020) Detecting deep-fake videos from appearance and behavior shruti. arXiv:2004.14491

12 DeepFake Detection

331

Sinha P, Balas B, Ostrovsky Y, Russell R (2009) Face recognition by humans: 20 results all computer vision researchers should know about. https://www.cs.utexas.edu/users/grauman/courses/ spring2007/395T/papers/sinha_20results.pdf Stehouwer J, Dang H, Liu F, Liu X, Jain AK (2019) On the detection of digital face manipulation. Computer vision and pattern recognition. arXiv:1910.01717 Sun P, Li Y, Qi H, Lyu S (2020) Landmark breaker: obstructing deepfake by disturbing landmark extraction. In: IEEE workshop on information forensics and security (WIFS), New York, NY, United States Suwajanakorn S, Seitz SM, Kemelmacher-Shlizerman I (2015) What makes tom hanks look like tom hanks. In: ICCV Suwajanakorn S, Seitz SM, Kemelmachershlizerman I (2017) Synthesizing obama: learning lip sync from audio. ACM Trans Graph 36(4):95 Tan M, Le Q (2019) EfficientNet: rethinking model scaling for convolutional neural networks. In: Chaudhuri K, Salakhutdinov R (eds) Proceedings of the 36th international conference on machine learning, vol 97 of Proceedings of machine learning research, Long Beach, California, USA, 09–15 Jun 2019. PMLR, pp 6105–6114 Thies J, Zollhofer M, Stamminger M, Theobalt C, Niessner M (2016) Face2Face: real-time face capture and reenactment of rgb videos. In: IEEE conference on computer vision and pattern recognition (CVPR) Wang R, Juefeixu F, Ma L, Xie X, Huang Y, Wang J, Liu Y (2019a) Fakespotter: a simple yet robust baseline for spotting ai-synthesized fake faces. Cryptography cryptography and security. arXiv:1909.06122 Wang S, Wang O, Zhang R, Owens A, Efros AA (2019b) Cnn-generated images are surprisingly easy to spot... for now. Computer vision and pattern recognition. arXiv:1912.11035 Xuan X, Peng B, Wang W, Dong J (2019) On the generalization of gan image forensics. In: Chinese conference on biometric recognition. Springer, pp 134–141 Yang X, Li Y, Lyu S (2019) Exposing deep fakes using inconsistent head poses. In: ICASSP

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Chapter 13

Video Frame Deletion and Duplication

Chengjiang Long, Arslan Basharat, and Anthony Hoogs

Videos can be manipulated in a number of different ways, including object addition or removal, deepfake synthesis, and temporal removal or duplication of parts of the video. In this chapter, we provide an overview of previous work related to video frame deletion and duplication and dive into the details of two deep-learning-based approaches for detecting and localizing frame deletion (Chengjiang et al. 2017) and duplication (Chengjiang et al. 2019) manipulations. This gives the reader a brief overview of the related research and the details of two deep-learning-based forensic methods that defend against temporal video manipulations.

13.1 Introduction

Digital video forgery (Sowmya and Chennamma 2015) refers to the intentional modification of a digital video for fabrication. A common digital video forgery technique is temporal manipulation, which includes frame sequence manipulations such as dropping, insertion, reordering and looping. Because only the temporal aspect of the video is altered, the manipulation is not detectable by single-image forensic techniques; there is therefore a need for digital forensics methods that perform temporal analysis of videos to detect such manipulations.

In this chapter, we will first focus on the problem of video frame deletion detection in a given, possibly manipulated, video without the original video. As illustrated in Fig. 13.1, we define a frame drop to be a removal of any number of consecutive


Fig. 13.1 Illustration of the frame-drop detection challenge. Assuming that there are three consecutive frame sequences (marked in red, green and blue, respectively) in an original video, the manipulated video is obtained after removing the green frame sequence. Our goal is to identify the location of the frame drop, at the end of the red frame sequence and the beginning of the blue frame sequence

frames within a video shot.¹ In our work (Chengjiang et al. 2017) to address this problem, we only consider videos with a single shot to avoid confusion between frame drops and shot breaks. Single-shot videos are prevalent from various sources, such as mobile phones, car dashboard cameras or body-worn cameras. To the best of our knowledge, only a small amount of recent work (Thakur et al. 2016) has explored automatically detecting dropped frames without a reference video. In digital forgery detection, we cannot assume a reference video, unlike related techniques that detect frame drops for quality assurance. Wolf (2009) proposed a frame-by-frame motion energy cue, defined on the temporal information difference sequence, for finding dropped or repeated frames when the frame-to-frame changes are slight. Unlike Wolf's work, we detect the locations where frames are dropped in a manipulated video without comparison to the original video. Recently, Thakur et al. (2016) proposed an SVM-based method to classify videos as tampered or non-tampered.

In this work, we explore the authentication (Valentina et al. 2012; Wang and Farid 2007) of the scene or camera to determine if a video has one or more frame drops without a reference or original video. We expect such authentication to exploit underlying spatio-temporal relationships across the video, so that it is robust to digital-level attacks and conveys a consistency indicator across the frame sequences. We rely on the assumption that consecutive frames are consistent with each other and that this consistency is destroyed if there is a temporal manipulation. To authenticate a video, two-frame techniques such as color histograms, motion energy (Stephen 2009) and optical flow (Chao et al. 2012; Wang et al. 2014) have been used. By only using two frames, these techniques cannot generalize to work both on videos with rapid scene changes (often from fast camera

¹ A shot is a consecutive sequence of frames captured between the start and stop operations of a single video camera.


Fig. 13.2 An illustration of frame duplication manipulation in a video. Assume an original video has three sets of frames indicated here by red, green and blue rectangles. A manipulated video can be generated by inserting a second copy of the red set in the middle of the green and the blue sets. Our goal is to detect both instances of the red set as duplicated and also determine that the second instance is the one that’s forged

motion) and on videos with subtle scene changes, such as static-camera surveillance videos.

In the past few years, deep learning algorithms have made significant breakthroughs, especially in the image domain (Krizhevsky et al. 2012). The features computed by these algorithms have been used for image matching and classification (Zhang et al. 2014; Zhou et al. 2014). In this chapter, we evaluate approaches that use these features for dropped-frame detection with two to three frames. However, such image-based deep features still do not model motion effectively. Inspired by Tran et al.'s C3D network (Tran et al. 2015), which is able to extract powerful spatio-temporal features for action recognition, we propose a C3D-based network for detecting frame drops, as illustrated in Fig. 13.3.

Four aspects distinguish our C3D-based network approach (Chengjiang et al. 2017) from Tran et al.'s work: (1) our task is to check whether frames were dropped between the 8th and the 9th frame, which makes the center of the 16-frame video clip more informative than its two ends; (2) the output of the network has two branches, corresponding to "frame drop" and "no frame drop" between the 8th and the 9th frame; (3) unlike most approaches, which use the output scores from the network as confidence scores directly, we define a confidence score with a peak detection step and a scale term based on the output score curves; and (4) the network is able not only to predict whether a video has frame dropping but also to detect the exact location where the frame dropping occurs.

To summarize, the contributions of our work (Chengjiang et al. 2017) are:

• A 3D convolutional network for frame-drop detection, with a confidence score defined through a peak detection step and a scale term based on the output score curves. It is able to identify whether frame dropping exists and even determine its exact location without any information from the reference/original video.

• For performance comparison, a series of baselines, including cue-based algorithms (color histogram, motion energy and optical flow) and learning-based algorithms (an SVM and convolutional neural networks (CNNs) using two or three frames as input).


• Experimental results on both the Yahoo Flickr Creative Commons 100 Million (YFCC100m) dataset and the Nimble Challenge 2017 dataset, which clearly demonstrate the efficacy of the proposed C3D-based network.

An increasingly large volume of digital video content is becoming available in our daily lives through the internet due to the rapid growth of sophisticated, mobile and low-cost video recorders. These videos are often edited and altered for various purposes using image and video editing tools that have become readily available. Manipulations or forgeries can be done for nefarious purposes to either hide or duplicate an event or content in the original video. Frame duplication refers to a video manipulation where a copy of a sequence of frames is inserted into the same video, either replacing previous frames or as additional frames. Figure 13.2 provides an example of frame duplication where, in the manipulated video, the red frame sequence from the original video is inserted between the green and the blue frame sequences. As a real-world example, frame duplication forgery could be used to hide an individual leaving a building in a surveillance video. If such a manipulated video were part of a criminal investigation, the investigators could be misled without effective forensics tools.

Videos can thus be manipulated by duplicating a sequence of consecutive frames with the goal of concealing or imitating specific content in the same video. In this chapter, we also describe a coarse-to-fine framework based on deep convolutional neural networks to automatically detect and localize such frame duplication (Chengjiang et al. 2019). First, an I3D network finds coarse-level matches between candidate duplicated frame sequences and the corresponding selected original frame sequences. Then a Siamese network based on the ResNet architecture identifies fine-level correspondences between an individual duplicated frame and the corresponding selected frame. We also propose a robust statistical approach to compute a video-level score indicating the likelihood of manipulation or forgery. Additionally, to provide manipulation localization information, we develop an inconsistency detector based on the I3D network to distinguish the duplicated frames from the selected original frames. Quantitative evaluation on two challenging video forgery datasets clearly demonstrates that this approach performs significantly better than four state-of-the-art methods.

It is very important to develop robust video forensic techniques to catch videos with increasingly sophisticated forgeries. Video forensics techniques (Milani et al. 2012; Wang and Farid 2007) aim to extract and exploit features from videos that can distinguish forgeries from original, authentic videos. Like other areas in information security, the sophistication of attacks and forgeries continues to increase for images and videos, requiring continued improvement in forensic techniques. Robust detection and localization of duplicated parts of a video can be a very useful forensic tool for those tasked with authenticating large volumes of video content. In recent years, multiple digital video forgery detection approaches have been employed to solve this challenging problem. Wang and Farid (2007) proposed a frame duplication detection algorithm that takes the correlation coefficient as a measure of similarity. However, such an algorithm results in a heavy computational load due to a


large number of correlation calculations. Lin et al. (2012) proposed to use histogram differences (HD) instead of correlation coefficients as the detection features. The drawback is that HD features are not strongly robust against common video operations or attacks. Hu et al. (2012) propose to detect duplicated frames using video sub-sequence fingerprints extracted from DCT coefficients. Yang et al. (2016) propose an effective similarity-analysis-based method that is implemented in two stages, where the features are obtained via SVD. Ulutas et al. propose to use a bag-of-words (BoW) model (Ulutas et al. 2018) and binary features (Ulutas et al. 2017) for frame duplication detection. Although deep learning solutions, especially those based on convolutional neural networks, have demonstrated promising performance in solving many challenging vision problems such as large-scale image recognition (Kaiming et al. 2016; Stock and Cisse 2018), object detection (Shaoqing et al. 2015; Yuhua et al. 2018; Tang et al. 2018) and visual captioning (Venugopalan et al. 2015; Aneja et al. 2018; Huanyu et al. 2018), no deep learning solutions had been developed for this specific task at the time, which motivated us to fill this gap.

In Chengjiang et al. (2019) we describe a coarse-to-fine deep learning framework, called C2F-DCNN, for frame duplication detection and localization in forged videos. As illustrated in Fig. 13.4, we first utilize an I3D network (Carreira and Zisserman 2017) to obtain the candidate duplicate sequences at a coarse level; this helps narrow the search quickly through longer videos. Next, at a finer level, we apply a Siamese network composed of two ResNet networks (Kaiming et al. 2016) to further confirm duplication at the frame level and obtain accurate corresponding pairs of duplicated and selected original frames. Finally, the duplicated frame range is distinguished from the corresponding selected original frame range by our inconsistency detector, which is designed as an I3D network with a 16-frame input video clip. Unlike other methods, we consider the consistency between two consecutive frames of a 16-frame video clip in which these two frames are at the center, i.e., the 8th and 9th frames. This is aimed at capturing the temporal context for matching a range of frames for duplication. Inspired by Long et al. (2017), we design an inconsistency detector based on the I3D network that covers three categories, i.e., "none", "frame drop" and "shot break", which indicate that between the 8th and 9th frame there is, respectively, no manipulation, frame removal within one shot, or a shot boundary transition. We are therefore able to use the output scores of the learned I3D network to formulate a confidence score of inconsistency between any two consecutive frames and distinguish the duplicated frame range from the selected original frame range, even in videos with multiple shots. We also propose a heuristic strategy to produce a video-level frame duplication likelihood score. This is built upon measures such as the number of possibly duplicated frames, the minimum distance between duplicated frames and selected frames, and the temporal gap between the duplicated frames and the selected original frames.

To summarize, the contributions of this approach (Chengjiang et al. 2019) are as follows:

• A novel coarse-to-fine deep learning framework for frame duplication detection and localization in forged videos. This framework features fine-tuned I3D networks


and the ResNet Siamese network, providing a robust yet efficient approach for processing large volumes of video data.

• An inconsistency detector based on a fine-tuned I3D network that covers three categories to distinguish the duplicated frame range from the selected original frame range.

• A heuristic formulation of a video-level detection score, which leads to a significant improvement in detection benchmark performance.

• Performance evaluation on two video forgery datasets, with experimental results that strongly demonstrate the effectiveness of the proposed method.

13.2 Related Work

13.2.1 Frame Deletion Detection

The most closely related prior work can be roughly split into two categories: video inter-frame forgery identification and shot boundary detection.

Video inter-frame forgery identification. Video inter-frame forgery involves frame insertion and frame deletion. Wang et al. proposed an SVM method (Wang et al. 2014) based on the assumption that optical flow is consistent in an original video, while in forgeries this consistency is destroyed. Chao's optical flow method (Chao et al. 2012) provides different detection schemes for inter-frame forgery, based on the observation of the subtle differences between frame insertion and deletion. Besides optical flow, Wang et al. (2014) also extracted the consistency of correlation coefficients of gray values as distinguishing features to classify original videos and forgeries. Zheng et al. (2014) proposed a novel feature called the block-wise brightness variance descriptor (BBVD) for fast detection of video inter-frame forgery. Different from this inter-frame forgery identification work, our proposed C3D-based network (Chengjiang et al. 2017) explores powerful spatio-temporal relationships to authenticate the scene or camera in a video for frame-drop detection.

Shot boundary detection. There is a large body of work on the shot boundary detection problem (Smeaton et al. 2010), whose task is to detect the boundaries that separate multiple shots within a video. The TREC video retrieval evaluation (TRECVID) is an important benchmark for the automatic shot boundary detection challenge, and different research groups from across the world have worked to determine the best approaches to shot boundary detection using a common dataset and common scoring metrics. Instead of detecting where two shots are concatenated, we focus on detecting a frame drop within a single shot.


Fig. 13.3 The pipeline of the C3D-based method. At the training stage, the C3D-based network takes 16-frame video clips extracted from the video dataset as input and produces two outputs, i.e., "frame drop" (indicated with "+") or "no frame drop" (indicated with "–"). At the testing stage, we decompose a testing video into a sequence of continuous 16-frame clips and feed them into the learned C3D-based network to obtain the output scores. Based on the score curves, we use a peak detection step and introduce a scale term to define the confidence scores used to detect whether dropped frames exist, per frame clip or per video. The network model consists of 66 million parameters with a 3 × 3 × 3 filter size at all convolutional layers

13.2.2 Frame Duplication Detection

The research related to frame duplication can be broadly divided into inter-frame forgery, copy-move forgery and convolutional neural networks.

Inter-frame forgery refers to frame deletion and frame duplication. For the features used in inter-frame forgery detection, either spatially or temporally, keypoints are extracted from nearby patches detected over distinctive scales. Keypoint-based methodologies can be further subdivided into direction-based (Douze et al. 2008; Le et al. 2010), keyframe-based matching (Law-To et al. 2006) and visual-words-based (Sowmya and Chennamma 2015) approaches. In particular, keyframe-based features have been shown to perform well for near-duplicate video and image identification (Law-To et al. 2006). In addition to keypoint-based features, Wu et al. (2014) propose a velocity-field-consistency-based approach to detect inter-frame forgery. This method is able to distinguish the forgery types, identify the tampered video and locate the manipulated positions in forged videos. Wang et al. (2014) propose to make full use of the consistency of the correlation coefficients of gray values to classify original videos and inter-frame forgeries. They also propose an optical flow method (Wang et al. 2014) based on the assumption that optical flow is consistent in an original video, while in forgeries this consistency is destroyed. The optical flow is extracted as a distinguishing feature and fed to a support vector machine (SVM) classifier to recognize frame insertion and frame deletion forgeries.


Fig. 13.4 The C2F-DCNN framework for frame duplication detection and localization. Given a testing video, we first run the I3D network (Carreira and Zisserman 2017) to extract deep spatio-temporal features and build the coarse sequence-to-sequence distance matrix to determine the frame sequences that are likely to contain frame duplication. For the likely duplicated sequences, a ResNet-based Siamese network further confirms frame duplication at the frame level. For the videos with duplication detected, temporal localization is determined with an I3D-based inconsistency detector that distinguishes the duplicated frames from the selected frames

Huang et al. (2018) proposed a fusion of audio forensics detection methods for video inter-frame forgery. Zhao et al. (2018) developed a similarity-analysis-based method to detect inter-frame forgery in a video shot. In this method, the HSV color histogram is calculated to detect and locate tampered frames in the shot, and then SURF feature extraction and FLANN (Fast Library for Approximate Nearest Neighbors) matching are used for further confirmation.

Copy-move forgery is created by copying and pasting content within the same frame, and potentially post-processing it (Christlein et al. 2012; D'Amiano et al. 2019). Wang et al. (2009) propose a dimensionality-reduction approach through principal component analysis (PCA) on the different pieces. Mohamadian et al. (2013) develop a singular value decomposition (SVD) based method in which the image is divided into many small overlapping blocks and SVD is then applied to extract the copied regions. Yang et al. (2018) proposed a copy-move forgery detection method based on a modified SIFT-based detector. Wang et al. (2018) presented a novel block-based robust copy-move forgery detection approach using invariant quaternion exponent moments. D'Amiano et al. (2019) proposed a dense-field method with a video-oriented version of PatchMatch for the detection and localization of copy-move video forgeries.

Convolutional neural networks (CNNs) have been demonstrated to learn rich, robust and powerful features for large-scale video classification (Karpathy et al. 2014). Various 3D CNN architectures (Tran et al. 2015; Carreira and Zisserman 2017; Hara et al. 2018; Xie et al. 2018) have been proposed to explore spatio-temporal contextual relations between consecutive frames for representation learning. Unlike the existing methods for inter-frame forgery and copy-move forgery, which mainly use hand-crafted features or bag-of-words, we take advantage of convolutional neural networks to extract spatial and temporal features for frame duplication detection and localization.


13.3 Frame Deletion Detection

There is limited work exploring the frame deletion (or frame dropping) detection problem without a reference or original video. Therefore, we first introduce a series of baselines, including cue-based and learning-based methods, and then introduce our proposed C3D-based CNN.

13.3.1 Baseline Approaches

We studied three cue-based baseline algorithms from the literature, i.e., (1) color histogram, (2) optical flow (Wang et al. 2014; Chao et al. 2012) and (3) motion energy (Stephen 2009):

• Color histogram. We calculate histograms on all three R, G and B channels. Whether there are frames dropped between two consecutive frames is detected by thresholding the L2 distance between the color histograms of the two adjacent frames.

• Optical flow. We calculate the optical flow (Wang et al. 2014; Chao et al. 2012) from two adjacent frames with the Lucas-Kanade method. Whether there are frames dropped between the current frame and the next frame is detected by thresholding the L2 distance between the average motion direction from the previous frame to the current frame and the average motion direction from the current frame to the next frame.

• Motion energy. Motion energy is the temporal information (TI) difference sequence (Stephen 2009), i.e., the difference of the Y channel in the YCrCb color space. Whether there are frames dropped between the current frame and the next frame is detected by thresholding the motion energy between the two frames.

Note that each algorithm mentioned above compares two consecutive frames and estimates whether there are missing frames between them. We also developed four learning-based baseline algorithms:

• SVM. We train an SVM model to predict whether there are frames dropped between two adjacent frames. The feature vector is the concatenation of the absolute difference of the color histograms and the two-dimensional absolute difference of the optical flow directions. The optical flow dimensionality is much smaller than that of the color histogram, so we give it a higher weight.

• Pairwise Siamese Network. We train a Siamese CNN that determines whether the two input frames are consecutive or whether there is a frame drop between them. Each CNN consists of two convolutional layers and three fully connected layers, trained with a contrastive loss.

• Triplet Siamese Network. We extend the pairwise Siamese network to use three consecutive frames. Unlike the pairwise Siamese network, the triplet Siamese


Table 13.1 A list of related algorithms for temporal video manipulation detection. The first three algorithms are cue-based and require no training. The rest are learning-based algorithms, including the traditional SVM, the popular CNNs and the method we proposed in Chengjiang et al. (2017)

Method | Brief description | Learning?
Color histogram | RGB 3-channel histograms + L2 distance | No
Optical flow | Optical flow (Wang et al. 2014; Chao et al. 2012) with the Lucas-Kanade method + L2 distance | No
Motion energy | Based on the temporal information difference sequence (Stephen 2009) | No
SVM | 770-D feature vector (3 × 256-D RGB histogram + 2-D optical flow) | Yes
Pairwise Siamese Network | Siamese network architecture (2 conv layers + 3 fc layers + contrastive loss) | Yes
Triplet Siamese Network | Siamese network architecture (Alexnet-variant + Euclidean & contrastive loss) | Yes
Alexnet (Krizhevsky et al. 2012) Network | Alexnet-variant network architecture | Yes
C3D-based Network (Chengjiang et al. 2017) | C3D-variant network architecture + confidence score | Yes

network consists of three Alexnets (Krizhevsky et al. 2012), merging their outputs with a Euclidean loss between the previous frame and the current frame and a contrastive loss between the current frame and the next frame.

• Alexnet-variant Network. The input frames are converted to gray-scale and stacked into the RGB channels of a single input.

To facilitate the comparison of the competing algorithms, we summarize the above descriptions in Table 13.1; a minimal sketch of the simplest cue-based baseline follows.
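To make the cue-based baselines concrete, the following minimal Python sketch (our own illustration, not the authors' released code) computes the color-histogram cue: the L2 distance between concatenated RGB histograms of adjacent frames, which is then thresholded to flag candidate frame drops. The `frames` input is assumed to be a list of H × W × 3 uint8 arrays.

    import numpy as np

    def rgb_histogram(frame, bins=256):
        # Concatenated per-channel histograms of an RGB frame (H x W x 3, uint8).
        hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
                 for c in range(3)]
        return np.concatenate(hists).astype(np.float64)

    def histogram_cue_scores(frames):
        # L2 distance between color histograms of adjacent frames; a large score
        # at index i suggests a possible frame drop between frames i and i + 1.
        return np.array([np.linalg.norm(rgb_histogram(a) - rgb_histogram(b))
                         for a, b in zip(frames[:-1], frames[1:])])

    # Candidate frame drops are the gaps whose score exceeds a chosen threshold:
    # drops = np.where(histogram_cue_scores(frames) > threshold)[0]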

13.3.2 C3D Network for Frame Deletion Detection

The baseline CNN algorithms we investigated lacked a strong temporal feature suitable for capturing the signature of frame drops; these algorithms only used features from two to three frames that were computed independently. The C3D network was originally designed for action recognition; however, we found that the spatio-temporal signature


produced by the 3D convolution is also very effective in capturing frame-drop signatures. The pipeline of our proposed method is shown in Fig. 13.3. There are three modifications with respect to the original C3D network. First, the C3D network takes clips of 16 frames, so we check the center of the clip (between frames 8 and 9) for frame drops to give equal context on both sides of the drop; this is done by formulating our training data so that frame drops only occur at the center. Second, we have a binary output associated with "frames dropped" and "no frames dropped" between the 8th and 9th frames. Lastly, we further refine the per-frame network output scores into a confidence score using peak detection and temporal scaling to suppress noisy detections. With the refined confidence scores we are not only able to identify whether the video has frame drops but also to localize them by applying the network to the video in a sliding-window fashion.
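As a sketch of how a learned network can be applied in this sliding-window fashion, the Python fragment below scores every frame gap by centering a 16-frame clip on it. Here `model` is a placeholder for any callable that maps a clip to a frame-drop score, and the exact boundary handling is our own simplification rather than the original implementation.

    import numpy as np

    def sliding_window_scores(frames, model, clip_len=16):
        # Score the gap between frame i and frame i + 1 using the clip in which
        # frame i is the 8th frame, i.e. frames[i - 7 : i + 9] for clip_len = 16.
        half = clip_len // 2
        scores = np.zeros(len(frames) - 1)
        for i in range(half - 1, len(frames) - half):
            clip = frames[i - half + 1 : i + half + 1]
            scores[i] = model(clip)  # assumed to return the "frame drop" score
        return scores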

13.3.2.1 Data Preparation

To obtain the training data, we used 2,394 iPhone 4 consumer videos from the World Dataset made available on the DARPA Media Forensics (MediFor) program for research. We pruned the videos so that all remaining videos were 1–3 min long, which left 314 videos; we randomly selected 264 of these for training and the remaining 50 for validation. We developed a tool that randomly drops fixed-length frame sequences from videos. It picks a random number of frame drops and random frame offsets in the video for each removal. The frame drops do not overlap, and the tool forces 20 frames to be kept around each drop. In our experiments, we manipulate each video many different times to create more data. We vary the fixed frame drop length to see how it affects detection; we used 0.5 s, 1 s, 2 s, 5 s and 10 s as five different frame drop durations. We used the videos with these drop durations to train a general C3D-based network for frame drop detection.
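The following Python sketch illustrates the kind of randomized manipulation tool described above. The sampling details (rejection sampling, the cap on the number of drops) are our own assumptions; only the stated constraints, i.e., non-overlapping drops with at least 20 frames kept around each drop, come from the text.

    import random

    def sample_frame_drops(num_frames, drop_len, max_drops=5, guard=20, seed=None):
        # Pick non-overlapping [start, end) intervals of length drop_len to remove,
        # keeping at least `guard` frames around each drop.
        rng = random.Random(seed)
        drops = []
        for _ in range(rng.randint(1, max_drops)):
            for _attempt in range(100):  # simple rejection sampling
                start = rng.randrange(guard, num_frames - drop_len - guard)
                s, e = start, start + drop_len
                if all(e + guard <= s2 or e2 + guard <= s for s2, e2 in drops):
                    drops.append((s, e))
                    break
        return sorted(drops)

    def apply_frame_drops(frames, drops):
        # Return the manipulated frame list with the sampled intervals removed.
        removed = {i for s, e in drops for i in range(s, e)}
        return [f for i, f in enumerate(frames) if i not in removed]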

13.3.2.2 Training

We use momentum μ = 0.9, γ = 0.0001 and set the power to 0.075. We start training at a base learning rate of α = 0.001 with "inv" as the learning rate policy. We set the batch size to 15 and use the model at the 206,000th iteration for testing, which achieves about 98.2% validation accuracy.
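For reference, assuming the standard Caffe-style "inv" schedule, the learning rate decays as a power of the iteration count; a minimal sketch with the hyperparameters quoted above:

    def inv_learning_rate(iteration, base_lr=0.001, gamma=0.0001, power=0.075):
        # Caffe-style "inv" policy: lr = base_lr * (1 + gamma * iter) ** (-power)
        return base_lr * (1.0 + gamma * iteration) ** (-power)

    # e.g. the learning rate at the iteration of the model used for testing
    print(inv_learning_rate(206000))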

13.3.2.3 Testing

The proposed C3D-based network is able to identify the temporal removal manipulation due to dropped frames in a video and also to localize one or more frame drops within the video. We observe that some videos captured by moving digital cameras may exhibit multiple abrupt changes due to quick camera motion, zooming in/out, etc.,


which can be deceiving to the C3D-based network and can result in false frame-drop detections. In order to reduce such false alarms and increase the generalization ability of our proposed network, we refine the raw network output scores into confidence scores using peak detection and a scale term based on the output score variation:

1. We first detect the peaks on the output score curve obtained from the C3D-based network for each video. Among all the peaks, we keep only the top 2% and ignore the rest. Then we slide a time window and count the number of kept peaks (denoted n_p) appearing in the window centered on the ith frame (denoted W(i)). If this number is greater than one, i.e., there are other peaks in the neighborhood, the output score f(i) is penalized; the value is penalized more if many high peaks are detected. The intuition is that we want to reduce false alarms when multiple peaks occur close together simply because the camera is moving or zooming in/out.

2. We also introduce a scale term Δ(i), defined as the difference between the median score and the minimum score within the time window W(i), to control the influence of camera motion.

Based on the above statement, we can obtain the confidence score for the ith frame as

f_{conf}(i) = \begin{cases} f(i) - \lambda\,\Delta(i), & \text{if } n_p < 2 \\ \big(f(i) - \lambda\,\Delta(i)\big)/n_p, & \text{otherwise} \end{cases}    (13.1)

where

W(i) = \left\{ i - \frac{w}{2}, \ldots, i + \frac{w}{2} \right\}.    (13.2)

Note that λ in Eq. 13.1 is a parameter that controls how much the scale term affects the confidence score, and w in Eq. 13.2 is the width of the time window. For per-frame testing of the ith frame, we first form a 16-frame video clip with the ith frame as its 8th frame, and then compute the confidence score f_{conf}(i). If f_{conf}(i) > Threshold, we predict that there are dropped frames between the ith frame and the (i + 1)th frame. For per-video testing, we treat the task as binary classification with a confidence measure per video. To keep it simple, we use max_i f_{conf}(i) across all frames as the video-level confidence: if max_i f_{conf}(i) > Threshold, there is a temporal removal within the video; otherwise, the video is predicted to have no temporal removal. The results reported in this work do not fix any Threshold, as we report ROC curves.
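A minimal Python sketch of this score refinement (Eqs. 13.1 and 13.2) is shown below; the peak detection (simple local maxima, keeping the top 2%) and the boundary handling are our own simplifying assumptions rather than the exact original implementation.

    import numpy as np

    def refine_scores(f, w=16, lam=0.22, top_frac=0.02):
        # f: raw per-frame "frame drop" scores from the network.
        f = np.asarray(f, dtype=np.float64)
        n = len(f)
        peaks = [i for i in range(1, n - 1) if f[i] >= f[i - 1] and f[i] >= f[i + 1]]
        peaks.sort(key=lambda i: f[i], reverse=True)
        kept = set(peaks[:max(1, int(top_frac * len(peaks)))])  # top 2% of peaks

        conf = np.zeros(n)
        half = w // 2
        for i in range(n):
            lo, hi = max(0, i - half), min(n, i + half + 1)   # window W(i)
            window = f[lo:hi]
            delta = np.median(window) - window.min()          # scale term
            n_p = sum(1 for p in kept if lo <= p < hi)        # kept peaks in W(i)
            value = f[i] - lam * delta
            conf[i] = value if n_p < 2 else value / n_p       # Eq. 13.1
        return conf

    # A video is flagged as containing a frame drop if conf.max() exceeds a threshold.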

13.3.3 Experimental Results

We conducted the experiments on a Linux machine with an Intel(R) Xeon(R) CPU E5-2687 0 @ 3.10 GHz, 32 GB of system memory and an NVIDIA GTX 1080 graphics card


Fig. 13.5 Performance comparison on the YFCC100m dataset against seven baseline approaches, using per-frame ROCs for five different drop durations (a–e); (f) is the frame-level ROC for all five drop durations combined

(Pascal). We report our results as ROC curves based on the output score f_{conf}(i), with accuracy as an additional metric. We present the ROC curves with both the false positive rate and the false alarm rate per minute, to demonstrate the level of usefulness for a user who might have to adjudicate each detection reported by the algorithm. We present ROC curves for per-frame analysis where ground truth data is available, and for per-video analysis otherwise. To demonstrate the effectiveness of the proposed approach, we ran experiments on the YFCC100m dataset² and the Nimble Challenge 2017 (Development 2 Beta 1) dataset.³
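As a usage sketch, ROC curves and AUC values of the kind reported below can be computed from such confidence scores with scikit-learn; the variable names here are illustrative only.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    def video_level_roc(video_scores, video_labels):
        # video_scores: max per-frame confidence per video; video_labels: 1 if the
        # video contains a frame drop, 0 otherwise.
        y_true = np.asarray(video_labels)
        y_score = np.asarray(video_scores)
        fpr, tpr, _ = roc_curve(y_true, y_score)
        return fpr, tpr, roc_auc_score(y_true, y_score)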

13.3.3.1 YFCC100m Dataset

We downloaded 53 videos tagged with iPhone from the Yahoo Flickr Creative Commons 100 Million (YFCC100m) dataset and manually verified that they are single-shot videos. To create ground truth, we used our automatic randomized frame-dropping tool to generate the manipulated videos. For each video we generated manipulated

² YFCC100m dataset: http://www.yfcc100m.org.
³ Nimble Challenge 2017 dataset: https://www.nist.gov/itl/iad/mig/nimble-challenge-2017-evaluation.


Table 13.2 Detailed results of our proposed C3D-based network. #pos and #neg are the numbers of positive and negative testing 16-frame video clips, respectively. Acc_pos and Acc_neg are the corresponding accuracies, and Acc is the total accuracy. All accuracies are in %

Duration (s) | #pos : #neg | Acc_pos | Acc_neg | Acc
0.5 | 2816:416633 | 98.40 | 98.15 | 98.16
1 | 2333:390019 | 99.49 | 98.18 | 98.18
2 | 2816:416633 | 99.57 | 98.11 | 98.12
5 | 1225:282355 | 99.70 | 98.17 | 98.18
10 | 770:239210 | 100.00 | 98.12 | 98.13

videos with frame drops of 0.5, 1, 2, 5 or 10 s at random locations. For each video and each drop duration, we randomly generate 10 manipulated videos. In this way we collect 53 × 5 × 10 = 2650 manipulated videos as the testing dataset.

For each drop duration, we run all the competing algorithms in Table 13.1 on the 530 videos with the parameter setting w = 16, λ = 0.22. The experimental results are summarized in the ROC curves for the five different drop durations in Fig. 13.5. One can note that (1) the traditional SVM outperforms the three simple cue-based algorithms; (2) the four convolutional neural network algorithms perform much better than the traditional SVM and all the cue-based algorithms; (3) among all the CNN-based networks, the triplet Siamese network and the Alexnet-variant network perform similarly and better than the pairwise Siamese network; and (4) our proposed C3D-based network performs the best. This provides empirical support for the hypothesis that the proposed C3D-based method is able to take advantage of both temporal and spatial correlations, while the other CNN-based networks only exploit the spatial information in individual frames.

To better understand the C3D-based network, we provide more experimental details in Table 13.2. As the drop duration increases, the numbers of positive and negative testing instances both decrease while the positive accuracy keeps increasing. As one might expect, the shorter the frame drop duration, the more difficult it is to detect.

We also merge the results of the C3D-based network for the five different drop durations in Fig. 13.5 to plot a unified ROC curve. For comparison, we also plot another ROC curve that uses the raw output scores to detect whether frame drops exist within a testing video. As we can see in Fig. 13.5(f), using the output score from the C3D-based network we can still achieve a very good AUC of 0.9983854. This can be explained by the fact that the raw phone videos from the YFCC100m dataset have little rapid motion, no zooming in/out and no other video manipulations. Also, the manipulated videos are generated in the same way as the training manipulated videos, with the same five drop durations. Since there is no overlap in video content between the training videos and the testing videos, such good performance demonstrates the power and the generalization


Fig. 13.6 Visualization of two successful examples (one true positive and one true negative) and two failure examples (one false positive and one false negative) from the YFCC100m dataset. The red dashed line indicates the location between the 8th frame and the 9th frame where we test for a frame drop. The red arrows point to the corresponding frame on the confidence score plots

ability of the trained network. Although using the output score directly achieves a very good AUC, using the confidence score defined in Eq. 13.1 still improves the AUC from 0.9983854 to 0.9992465. This demonstrates the effectiveness of our confidence score defined with the peak detection step and the scale term.

We visualize both success and failure cases of our proposed C3D-based network in Fig. 13.6. Looking at the successful cases, "frame drop" is identified correctly in the 16-frame video clip in Fig. 13.6(a) because a man stands at one side in the 8th frame and has suddenly moved to the other side in the 9th frame, and the video clip in Fig. 13.6(b) is correctly predicted as "no frame drop" since a child follows his father in all 16 frames and the 8th and 9th frames are consistent with each other. Regarding the failure cases, in Fig. 13.6(c) there is no frame drop but a "frame drop" is still reported between the 8th and 9th frames because the camera shakes during the capture of the street scene. Conversely, the "frame drop" in the clip shown in Fig. 13.6(d) cannot be detected between the 8th and 9th frames, since the scene inside the bus shows almost no visible change between these two frames.


Fig. 13.7 The ROC curve of our proposed C3D-based network on the Nimble Challenge 2017 dataset

Note that our training stage is carried out offline. Here we only report the runtime of the testing stage under our experimental environment: each 16-frame testing video clip takes about 2 s, so a one-minute video at 30 FPS requires about 50 min to test over the entire frame sequence.

13.3.3.2 Nimble Challenge 2017 Dataset

In order to check whether our proposed C3D-based network is able to identify a testing video with an unknown, arbitrary drop duration, we also conducted experiments on the Nimble Challenge 2017 dataset, specifically the NC2017-Dev2Beta1 version, which contains 209 probe videos with various video manipulations. Among these, six videos are manipulated with "TemporalRemove", which we regard as "frame dropping". We therefore run our proposed C3D-based network as a binary classifier to classify all 209 videos into two groups, i.e., "frame dropping" and "no frame dropping", at the video level. In this experiment, the parameters are set to w = 500, λ = 1.25.

We first plot the output scores from the C3D-based network and the confidence score for each of the six videos labeled with "TemporalRemove" in Fig. 13.9. It is clear that the video named "d3c6bf5f224070f1df74a63c232e360b.mp4" has the lowest confidence score, which is below zero. To explain this case, we further checked the content of the video, as shown in Fig. 13.8. This video is hard even for a human to identify as "TemporalRemove", since it was taken by a static camera and only the lady's mouth and head change very slightly across the whole video from beginning to end. As we trained purely on iPhone videos, our trained network was biased


Fig. 13.8 The entire frame sequence of the 34-second video "d3c6bf5f224070f1df74a63c232e360b.mp4", which has 1047 frames and was captured by a static camera. We observe that only the lady's mouth and head change very slightly across the video from beginning to end

Fig. 13.9 The illustration of output scores from the C3D-based network and their confidence scores for six videos labeled with “TemporalRemove” from the Nimble Challenge 2017 dataset. The blue curve is the output score, the red “+” marks the detected peaks, and the red confidence score is used to determine whether the video can be predicted as a video with “frame drops”

toward videos with camera motion. With a larger dataset of static-camera videos, we could train separate networks for static and dynamic cameras to address this problem.

We plot the ROC curve in Fig. 13.7. As we can see, the AUC of the C3D-based network with confidence scores reaches 0.96, while the AUC of the C3D-based network using the output scores directly is only 0.86. The insight behind this significant improvement is that the testing videos include rapid camera motion, zooming in and out, and other types of video manipulations, and our confidence score, defined with the peak detection step and the scale term to penalize multiple closely spaced peaks and large scale values, significantly reduces the false alarms. Such a significant improvement of 0.11 AUC strongly demonstrates the effectiveness of our proposed method.


13.4 Frame Duplication Detection

As shown in Fig. 13.4, given a probe video, our proposed C2F-DCNN framework is designed to detect and localize frame duplication manipulation. An I3D network is used to produce a sequence-to-sequence distance matrix and determine the candidate frame sequences at the coarse-search stage. A Siamese network is then applied in a fine-level search to verify whether frame duplications exist. After this, an inconsistency detector is applied to distinguish duplicated frames from selected frames. All of these steps are described below in detail.

13.4.1 Coarse-Level Search for Duplicated Frame Sequences

In order to efficiently narrow the search space, we start by finding possible duplicate sets of frames throughout the video using a robust CNN representation. We split a video into overlapping frame sequences, where each sequence has 64 frames and consecutive sequences overlap by 16 frames. We choose the I3D network (Carreira and Zisserman 2017) instead of the C3D network (Tran et al. 2015) for the following reasons: (1) it inflates 2D ConvNets into 3D, turning typically N × N square filters into N × N × N cubic ones; (2) it bootstraps the 3D filters from 2D filters, so parameters can be bootstrapped from pre-trained ImageNet models; and (3) it paces receptive field growth in space, time and network depth.

In this work, we apply the pre-trained, off-the-shelf I3D network to extract a 1024-dimensional feature vector for each k = 64 frame sequence, since the input for the standard I3D network is 64 RGB frames and 64 optical flow frames. We observed that a lot of time was being spent on this pre-processing. To reduce the testing runtime, we compute the full k RGB and k flow items only for the first sequence; for each subsequent frame sequence, we copy (k − 1) RGB and (k − 1) flow items from the previous video clip and only calculate the last RGB and flow item. This significantly improves testing efficiency.

Based on the sequence features, we calculate the sequence-to-sequence distance matrix over the whole video using the L2 distance. If a distance is smaller than a threshold T_1, this indicates that the two frame sequences are likely duplicated, and we take them as two candidate frame sequences for further confirmation during the subsequent fine-level search.
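A minimal sketch of this coarse matching step, assuming the per-sequence I3D features have already been extracted into an (S, 1024) array, is given below; the thresholding mirrors the description above, but the implementation itself is our own illustration.

    import numpy as np

    def sequence_distance_matrix(features):
        # Pairwise L2 distances between per-sequence I3D feature vectors (S x 1024).
        diff = features[:, None, :] - features[None, :, :]
        return np.linalg.norm(diff, axis=-1)

    def coarse_candidates(features, t1):
        # Sequence pairs whose feature distance falls below the coarse threshold T1.
        dist = sequence_distance_matrix(features)
        s = len(features)
        return [(i, j) for i in range(s) for j in range(i + 1, s) if dist[i, j] < t1]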

13.4.2 Fine-Level Search for Duplicated Frames

For the candidate frame sequences detected by the coarse stage described in Sect. 13.4.1, we evaluate the distance between all pairs of frames across the two sequences, i.e., between a duplicated frame and the corresponding selected original frame.


Fig. 13.10 A sample distance matrix based on the frame-to-frame distances computed by the Siamese network between a pair of frame sequences. The symbols shown on the line segment with low distance are used to compute the video-level confidence score for frame duplication detection

For this purpose we propose a Siamese neural network architecture, which learns to differentiate between the two frames in a provided pair. It consists of two identical networks sharing exactly the same parameters, each taking one of the two input frames. A contrastive loss function is applied to the last layers to calculate the distance between the pair. In principle, we can choose any neural network to extract features for each frame; in this work, we choose the ResNet network (Kaiming et al. 2016) with 152 layers given its demonstrated robustness. We connect two ResNets in the Siamese architecture with a contrastive loss function, and each loss value, associated with the distance between a pair of frames, is entered into the frame-to-frame distance matrix, in which distances are normalized to the range [0, 1]. A distance smaller than a threshold T_2 indicates that the two frames are likely duplicated. For videos that have multiple consecutive frames duplicated, we expect to see a line of low values parallel to the diagonal in the visualization of the distance matrix, as plotted in Fig. 13.10.

It is worth mentioning that we provide both frame-level and video-level scores to evaluate the likelihood of frame duplication. For the frame-level score, we can use the value in the frame-to-frame distance matrix directly. For the video-level score, we propose a heuristic strategy to formulate the confidence value. We first find the minimal distance d_{min} = d(i_{min}, j_{min}), where (i_{min}, j_{min}) = \arg\min_{0 \le i < j \le n} d(i, j) and d(i, j) is the frame-to-frame distance matrix. Then a search is performed in two directions to find the number of consecutive duplicated frames:

k_1 = \mathop{\arg\max}_{k:\, k \le i_{min}} \; \big| d(i_{min} - k,\, j_{min} - k) - d_{min} \big| \le \epsilon    (13.3)

and

k_2 = \mathop{\arg\max}_{k:\, k \le n - j_{min}} \; \big| d(i_{min} + k,\, j_{min} + k) - d_{min} \big| \le \epsilon,    (13.4)

where \epsilon = 0.01, and the length of the interval with duplicated frames can be defined as

l = k_1 + k_2 + 1.    (13.5)

Finally, we can formulate the video-level confidence score as

F_{video} = -\frac{d_{min}}{l \times (j_{min} - i_{min})}.    (13.6)

The intuition here is that a more likely frame duplication is indicated by a smaller value of dmin , a longer interval of duplicated frames and a larger temporal gap between the selected original frames and the duplicated frames.
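The following Python sketch implements the video-level score of Eqs. 13.3–13.6 directly on a frame-to-frame distance matrix d; the loop-based extension of the diagonal run is our own straightforward reading of the equations.

    import numpy as np

    def video_level_score(d, eps=0.01):
        # d: (n x n) frame-to-frame distance matrix; only pairs with i < j are used.
        n = d.shape[0]
        masked = d.astype(np.float64).copy()
        masked[np.tril_indices(n)] = np.inf
        i_min, j_min = np.unravel_index(np.argmin(masked), masked.shape)
        d_min = d[i_min, j_min]

        # Extend the low-distance diagonal run in both directions (Eqs. 13.3, 13.4).
        k1 = 0
        while k1 + 1 <= i_min and abs(d[i_min - k1 - 1, j_min - k1 - 1] - d_min) <= eps:
            k1 += 1
        k2 = 0
        while j_min + k2 + 1 < n and abs(d[i_min + k2 + 1, j_min + k2 + 1] - d_min) <= eps:
            k2 += 1

        l = k1 + k2 + 1                              # Eq. 13.5
        return -d_min / (l * (j_min - i_min))        # Eq. 13.6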

13.4.3 Inconsistency Detector for Duplication Localization

We observe that duplicated frames inserted into the source video usually yield artifacts due to temporal inconsistency at both the beginning and the end of the inserted interval. To automatically distinguish the duplicated frames from the selected frames, we make use of both spatial and temporal information by training an inconsistency detector to locate this temporal discrepancy. For this purpose, we build upon our work discussed above, Long et al. (2017), which proposed a C3D-based network for frame-drop detection and only works for single-shot videos. Instead of using only an RGB stream as input, we replace the C3D network with an I3D network to also incorporate the optical flow stream. It is also worth mentioning that, unlike the I3D network used in Sect. 13.4.1, the input to the I3D network here is a 16-frame temporal interval, taken at every frame in a sliding window, with RGB and optical flow data. The temporal classification provides insight into the temporal consistency between the 8th and the 9th frame within the 16-frame interval. In order to handle multiple shots in a video with hard cuts, we extend the binary classifier to three classes: "none" (no temporal inconsistency indicating manipulation), "frame drop" (frames removed within a single shot) and "shot break" or "break" (a temporal boundary or transition between two video shots). Note that the training data with shot breaks are obtained from the TRECVID 2007 dataset (Kawai et al. 2007), and we only use hard-cut shot breaks, since a soft cut changes gradually and retains strong consistency between any two consecutive frames. The confusion matrix in Fig. 13.11 illustrates the high effectiveness of the proposed I3D-based inconsistency detector.

Based on the output scores for the three categories from the I3D network, i.e., S^{none}_{I3D}(i), S^{drop}_{I3D}(i) and S^{break}_{I3D}(i), we formulate the confidence score of inconsistency as

S(i) = S^{drop}_{I3D}(i) + S^{break}_{I3D}(i) - \lambda\, S^{none}_{I3D}(i),    (13.7)

where λ is the weight parameter, and for the results presented here, we use λ = 0.1. We assume the selected original frames have a higher temporal consistency with


Fig. 13.11 The confusion matrix for the three classes of temporal inconsistency within a video, used by the I3D-based inconsistency detector. We expect a high likelihood of the "drop" class at the two ends of the duplicated frame sequence and a high "none" likelihood at the ends of the selected original frame sequence

Fig. 13.12 Illustration of distinguishing duplicated frames from the selected frames. The index ranges for the red frame sequence and the green sequence are [72, 191] and [290, 409], respectively. s1 and s2 are the corresponding inconsistency scores for the red sequence and green sequence, respectively. Obviously, s1 > s2 , which indicates that the red sequence is duplicated frames as expected

frames before and after them than the duplicated frames do, because the insertion of duplicated frames usually causes a sharp inconsistency at the beginning and the end of the duplicated interval, as illustrated in Fig. 13.12. Given a pair of frame sequences that are potentially duplicated, [i, i + l] and [j, j + l], we compare two scores,

s_1 = \sum_{k=-wind}^{wind} \big( S(i - 1 + k) + S(i + l + k) \big)    (13.8)

and

s_2 = \sum_{k=-wind}^{wind} \big( S(j - 1 + k) + S(j + l + k) \big),    (13.9)

where wind is the window size. We check the inconsistency at both the beginning and the end of the sequence. In this work, we set wind = 3 to avoid failure cases where a few start or end frames are detected incorrectly. If s_1 > s_2, then the duplicated frame segment is [i, i + l]. Otherwise, the duplicated frame segment


Fig. 13.13 Illustration of frame-to-frame distance between duplicated frames and the selected frames

is [ j, j + l]. As shown in Fig. 13.12, our modified I3D network is able to measure the consistency between consecutive frames.
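To make the scoring procedure concrete, the following sketch (Python with NumPy) shows how the per-window inconsistency scores of Eq. 13.7 and the window sums of Eqs. 13.8 and 13.9 could be combined to decide which of two candidate segments is the duplicated one. The I3D class scores are assumed to be given as arrays (one value per 16-frame window position i); the function names, the boundary handling, and the calling convention are ours and purely illustrative.

```python
import numpy as np

def inconsistency_scores(s_drop, s_break, s_none, lam=0.1):
    """Per-position inconsistency score S(i) of Eq. 13.7.

    s_drop, s_break, s_none: arrays of I3D class scores, one value per
    16-frame sliding-window position i.
    """
    return s_drop + s_break - lam * s_none

def segment_inconsistency(S, start, length, wind=3):
    """Window sum of Eqs. 13.8/13.9 around the two ends of segment [start, start+length]."""
    n = len(S)
    total = 0.0
    for k in range(-wind, wind + 1):
        for idx in (start - 1 + k, start + length + k):
            if 0 <= idx < n:  # boundary handling added for the sketch
                total += S[idx]
    return total

def which_is_duplicated(S, i, j, length, wind=3):
    """Return the start index of the segment judged to be the duplication."""
    s1 = segment_inconsistency(S, i, length, wind)
    s2 = segment_inconsistency(S, j, length, wind)
    return i if s1 > s2 else j
```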

13.4.4 Experimental Results

We evaluate our proposed C2F-DCNN method on a self-collected video dataset and the Media Forensics Challenge 2018 (MFC18)4 dataset (Guan et al. 2019). Our self-collected video dataset is obtained by automatically adding frame duplication manipulations to the 12 raw static-camera videos from the VIRAT dataset (Oh et al. 2011) and 17 dynamic iPhone 4 videos. The duration of each video ranges from 47 seconds to 3 minutes. In order to generate test videos with frame duplication, we randomly select frame sequences with durations of 0.5, 1, 2, 5 and 10 s, and then re-insert them into the same source videos. We use the x264 video codec and a frame rate of 30 fps to generate these manipulated videos. Note that we avoid any temporal overlap between the selected original frames and the duplicated frames in all generated videos. Since we have the frame-level ground truth, we can use it for frame-level performance evaluation. The MFC18 dataset consists of two subsets, a Dev dataset and an Eval dataset, which we denote as the MFC18-Dev dataset and the MFC18-Eval dataset, respectively. There are 231 videos in the MFC18-Dev dataset and 1036 videos in the MFC18-Eval dataset. The duration of each video ranges from 2 seconds to 3 minutes. The frame rate of most of the videos is 29–30 fps, a smaller number of videos are at 10 or 60 fps, and only five videos in the MFC18-Eval dataset exceed 240 fps. We exclude these five videos and another two videos which have fewer than 17

4 https://www.nist.gov/itl/iad/mig/media-forensics-challenge-2018.


frames from the MFC18-Eval dataset because the input to the I3D network should have at least 17 frames. We use the remaining 1029 videos in the MFC18-Eval dataset to conduct the video-level performance evaluation. The detection task is to detect whether or not a video has been manipulated with frame duplication, while the localization task is to localize the indices of the duplicated frames. For the measurement metrics, we use the area under the ROC curve (AUC) for the detection task, and the Matthews correlation coefficient

MCC = (TP × TN - FP × FN) / \sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}

for localization evaluation, where TP, FP, TN and FN refer to the numbers of true positive, false positive, true negative and false negative frames, respectively. See Guan et al. (2019) for further details on the metrics.
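For reference, both metrics can be computed directly from frame-level ground truth and system output; a minimal sketch using scikit-learn (our tooling choice here, not necessarily the one used in the official evaluation, with toy example arrays) is:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, matthews_corrcoef

# y_true: 1 for frames that are duplicated, 0 otherwise (frame-level ground truth)
# y_score: per-frame detection scores; y_pred: thresholded binary decisions
y_true = np.array([0, 0, 1, 1, 1, 0, 0])
y_score = np.array([0.1, 0.2, 0.8, 0.7, 0.9, 0.3, 0.2])
y_pred = (y_score > 0.5).astype(int)

auc = roc_auc_score(y_true, y_score)     # detection: area under the ROC curve
mcc = matthews_corrcoef(y_true, y_pred)  # localization: Matthews correlation coefficient
print(f"AUC = {auc:.4f}, MCC = {mcc:.4f}")
```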

13.4.4.1 Frame-Level Analysis on Self-collected Dataset

To better verify the effectiveness of the deep learning solution for frame-duplication detection on the self-collected dataset, we consider four baselines: Lin et al.'s method (Guo-Shiang and Jie-Fan 2012) that uses histogram differences as detection features, Yang et al.'s method (Yang et al. 2016) that is an effective similarity-analysis-based method with SVD features, Ulutas et al.'s method (Ulutas et al. 2017) based on binary features, and another method by the same authors (Ulutas et al. 2018) that uses bag-of-words with 130-dimensional SIFT descriptors. Different from our proposed C2F-DCNN method, all of these methods use traditional feature extraction without deep learning. Note that the manipulated videos are generated by us, hence both the selected original frames and the duplicated frames are known to us. We treat these experiments as a white-box attack and evaluate the performance of frame-to-frame distance measurements. We run the proposed C2F-DCNN approach and the above-mentioned four state-of-the-art approaches on our self-collected dataset, and the results are summarized in Table 13.3. As we can see, due to the x264 codec, the contents of the duplicated frames have been altered, so that matching a duplicated frame to its corresponding selected frame is very challenging. In this case, our C2F-DCNN method still outperforms the four previous methods. To help the reader better understand the comparison, we provide a visualization of the normalized distances between the selected frames and the duplicated frames in Fig. 13.13. We can see that our C2F-DCNN performs the best for both sample videos, especially with respect to the ability to distinguish the temporal boundary between duplicated and non-duplicated frames. All these observations demonstrate the effectiveness of this deep learning approach for frame duplication detection.


Table 13.3 The AUC performance of frame-to-frame distance measurements for frame duplication detection on our self-collected video dataset (unit: %)

Method | iPhone 4 videos | VIRAT videos
Lin 2012 (Guo-Shiang and Jie-Fan 2012) | 80.81 | 80.75
Yang 2016 (Yang et al. 2016) | 73.79 | 82.13
Ulutas 2017 (Ulutas et al. 2017) | 70.46 | 81.32
Ulutas 2018 (Ulutas et al. 2018) | 73.25 | 69.10
C2F-DCNN | 81.46 | 84.05

Fig. 13.14 The ROC curves for video-level frame duplication detection on the MFC18-Dev dataset

Fig. 13.15 The ROC curves for video-level frame duplication detection on the MFC18-Eval dataset

13.4.4.2 Video-Level Analysis on the MFC18 Dataset

It is worth mentioning that the duplicated videos in the MFC18 dataset usually include multiple manipulations, which at times makes the content of the selected original frames and of the duplicated frames differ. Therefore, the testing videos in both the MFC18-Dev and the MFC18-Eval datasets are very challenging. Since we are not aware of the details of how all the testing videos were generated, we treat this dataset as a black-box attack and evaluate video-level detection and localization performance on it.


Table 13.4 The MCC metric in the [-1.0, 1.0] range for video temporal localization on the MFC18 dataset. Our approach generates the best MCC score, where 1.0 is perfect

Method | MFC18-Dev | MFC18-Eval
Lin 2012 (Guo-Shiang and Jie-Fan 2012) | 0.2277 | 0.1681
Yang 2016 (Yang et al. 2016) | 0.1449 | 0.1548
Ulutas 2017 (Ulutas et al. 2017) | 0.2810 | 0.3147
Ulutas 2018 (Ulutas et al. 2018) | 0.0115 | 0.0391
C2F-DCNN w/ ResNet | 0.4618 | 0.3234
C2F-DCNN w/ C3D | 0.6028 | 0.3488
C2F-DCNN w/ I3D | 0.6612 | 0.3606

Table 13.5 The video temporal localization performance on the MFC18 dataset. Note that √, × and ⊗ indicate correct cases, incorrect cases and ambiguously incorrect cases, respectively, and #(.) indicates the number of cases of each kind

Dataset | #(√) | #(×) | #(⊗)
MFC18-Dev | 14 | 6 | 1
MFC18-Eval | 33 | 38 | 15

We compare the proposed C2F-DCNN method and the above-mentioned four state-of-the-art methods, i.e., Lin 2012 (Guo-Shiang and Jie-Fan 2012), Yang 2016 (Yang et al. 2016), Ulutas 2017 (Ulutas et al. 2017) and Ulutas 2018 (Ulutas et al. 2018), on these two datasets. We use the negative minimum distance (i.e., −d_min) as the default video-level scoring method to generate a video-level score for each competing method, including ours. "C2F-DCNN+confscore" denotes our best configuration, C2F-DCNN combined with the proposed video-level confidence score defined in Eq. 13.6. In contrast, "C2F-DCNN" alone uses only −d_min as the confidence score. The comparative manipulated-video detection results are summarized in Figs. 13.14 and 13.15. A few observations are worth pointing out: (1) C2F-DCNN always outperforms the four previous methods for video-level frame duplication detection when the video-level score is the negative minimum distance; (2) with "+confscore", our "C2F-DCNN+confscore" method achieves a significant boost in AUC compared to the baseline score of −d_min, reaching a high correct detection rate at a low false alarm rate; and (3) the proposed "C2F-DCNN+confscore" method achieves very high AUC scores on the two benchmark datasets: 99.66% on MFC18-Dev, and 98.02% on MFC18-Eval. We also performed a quantitative analysis of the temporal localization within a manipulated video with frame duplication. For comparison with the four previous methods, we use the feature distance between any two consecutive frames.


Fig. 13.16 The visualization of confusion bars in video temporal localization. For each subfigure, the top (purple) bar is the ground truth indicating duplication, the middle (pink) bar is the system output from the proposed method, and the bottom bar is the confusion computed from the truth and the system output above. Note that TN, FN, FP, TP and "OptOut" in the confusion are marked in white, blue, red, green and yellow/black, respectively. a and b–d are correct results, which include completely correct cases and partially correct cases. e and f show the failure cases

For the proposed C2F-DCNN approach, the best configuration "C2F-DCNN w/ I3D" uses the I3D network as the inconsistency detector. We also provide two baseline variants by replacing the I3D inconsistency detector with either the ResNet feature distance S_Res(i) only ("C2F-DCNN w/ ResNet") or the C3D network's score S_{C3D}^{drop}(i) - \lambda S_{C3D}^{none}(i) from Chengjiang et al. (2017) ("C2F-DCNN w/ C3D"). The temporal localization results are summarized in Table 13.4, from which we can observe that our deep learning solutions, "C2F-DCNN w/ ResNet", "C2F-DCNN w/ C3D" and "C2F-DCNN w/ I3D", work better than the four previous methods, with "C2F-DCNN w/ I3D" performing the best. These observations suggest that the 3D convolutional kernel is able to measure the inconsistency between consecutive frames, and that the RGB and optical flow data streams are complementary and further improve the performance. To better understand the video temporal localization measurement, we plot the confusion bars on the video timeline based on the truth and the corresponding system output under different scenarios, as shown in Fig. 13.16. We would like to emphasize that no algorithm is able to distinguish duplicated frames from selected frames for the ambiguously incorrect cases indicated as ⊗ in Table 13.5, because such videos


often break the assumption of temporal consistency and in many cases the duplicated frames are difficult to identify by the naked eye.

13.5 Conclusions and Discussion

We presented a C3D-based network with a confidence score, defined with a peak detection step and a scale term, for frame dropping detection. The method we proposed in Chengjiang et al. (2017) flexibly explores the underlying spatio-temporal relationship across one-shot videos. Empirically, it is not only able to robustly identify manipulations of the temporal removal type but also to detect the exact location where the frame dropping occurred. Our future work includes revising the frame dropping strategy used to build the training video collection so that it is more realistic, evaluating an LSTM-based network for quicker runtime, and addressing other types of video manipulation detection, such as shot boundaries and duplication in looping cases. Multiple factors make frame duplication detection and localization increasingly challenging in video forgeries. These factors include high frame rates, multiple manipulations (e.g., "SelectCutFrames", "TimeAlterationWarp", "AntiForensicCopyExif", "RemoveCamFingerprintPRNU"5) applied before and after, and the gap between the selected frames and the duplicated frames. In particular, a zero gap between the selected frames and the duplicated frames renders the manipulation undetectable, because the inconsistency which should exist at the end of the duplicated frames does not appear in the video temporal context. Regarding the runtime, the I3D network for inconsistency detection is the most expensive component in our framework, but we only apply it to the candidate frames that are likely to contain frame duplication manipulations detected in the coarse-search stage. For each testing video clip of 16-frame length, it takes about 2 s with our learned I3D network. For a one-minute video at 30 fps, it requires less than 5 min to complete the testing throughout all the frame sequences. The coarse-to-fine deep learning approach is designed for frame duplication detection at both the frame level and the video level, as well as for video temporal localization. This work also included a heuristic strategy to formulate the video-level confidence score, as well as an I3D network-based inconsistency detector to distinguish the duplicated frames from the selected frames. The experimental results have demonstrated the robustness and effectiveness of the method. Our future work includes continuing to extend multi-stream 3D neural networks to frame drop, frame duplication, and other video manipulation tasks like looping detection, handling frame-rate variations, training on multiple manipulations, and investigating the effects of various video codecs on algorithm accuracy.

5 These manipulation operations are defined in the MFC18 dataset.


Acknowledgements This work was supported by DARPA under Contract No. FA8750-16-C-0166. Any findings and conclusions or recommendations expressed in this material are solely the responsibility of the authors and do not necessarily represent the official views of DARPA or the U.S. Government. Approved for Public Release, Distribution Unlimited.

References Aneja J, Deshpande A, Schwing AG (2018) Convolutional image captioning. In: The IEEE conference on computer vision and pattern recognition (CVPR) Awad G, Le DD, Ngo CW, Nguyen VT, Quénot G, Snoek C, Satoh SI (2010) National institute of informatics, Japan at trecvid 2010. In: TRECVID Carreira J, Zisserman A (2017) Quo vadis, action recognition? a new model and the kinetics dataset. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR). IEEE, pp 4724– 4733 Chao J, Jiang X, Sun T (2012) A novel video inter-frame forgery model detection scheme based on optical flow consistency. In: International workshop on digital watermarking, pp 267–281 Chen Y, Li W, Sakaridis C, Dai D, Van Gool L (2018) Domain adaptive faster r-cnn for object detection in the wild. In: The IEEE conference on computer vision and pattern recognition (CVPR) Christlein V, Riess C, Jordan J, Riess C, Angelopoulou E (2012) An evaluation of popular copymove forgery detection approaches. arXiv:1208.3665 Conotter V, O’Brien JF, Farid H (2013) Exposing digital forgeries in ballistic motion. IEEE Trans Inf Forensics Secur 7(1):283–296 D’Amiano L, Cozzolino D, Poggi G, Verdoliva L (2019) A patchmatch-based dense-field algorithm for video copy-move detection and localization. IEEE Trans Circuits Syst Video Technol 29(3):669–682 Douze M, Gaidon A, Jegou H, Marszalek M, Schmid C (2008) Inria-lear’s video copy detection system. In: TRECVID 2008 workshop participants notebook papers. MD, USA, Gaithersburg Guan H, Kozak M, Robertson E, Lee Y, Yates AN, Delgado A, Zhou D, Kheyrkhah T, Smith J, Fiscus J (2019) MFC datasets: large-scale benchmark datasets for media forensic challenge evaluation. In: 2019 IEEE winter applications of computer vision workshops (WACVW). IEEE, pp 63–72 Hara K, Kataoka H, Satoh Y (2018) Can spatiotemporal 3D CNNs retrace the history of 2D CNNs and imagenet? In: The IEEE conference on computer vision and pattern recognition (CVPR) He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: The IEEE conference on computer vision and pattern recognition (CVPR) Huang T, Zhang X, Huang W, Lin L, Weifeng S (2018) A multi-channel approach through fusion of audio for detecting video inter-frame forgery. Comput Secur 77:412–426 Karpathy A, Toderici G, Shetty S, Leung T, Sukthankar R, Fei-Fei L (2014) Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1725–1732 Kawai Y, Sumiyoshi H, Yagi N (2007) Shot boundary detection at TRECVID 2007. In: TRECVID 2007 workshop participants notebook papers. MD, USA, Gaithersburg Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems, vol 25, pp 1097–1105 Law-To J, Buisson O, Gouet-Brunet V, Boujemaa N (2006) Robust voting algorithm based on labels of behavior for video copy detection. In: Proceedings of the 14th ACM international conference on multimedia. ACM, pp 835–844 Lin G-S, Chang J-F (2012) Detection of frame duplication forgery in videos based on spatial and temporal analysis. Int J Pattern Recognit Artif Intell 26(07):1250017


Long C, Basharat A, Hoogs A (2019) A coarse-to-fine deep convolutional neural network framework for frame duplication detection and localization in forged videos. In: IEEE international conference on computer vision and pattern recognition workshop (CVPR-W) on media forensics Long C, Smith E, Basharat A, Hoogs A (2017) A C3D-based convolutional neural network for frame dropping detection in a single video shot. In: IEEE international conference on computer vision and pattern recognition workshop (CVPR-W) on media forensics Milani S, Fontani M, Bestagini P, Barni M, Piva A, Tagliasacchi M, Tubaro S (2012) An overview on video forensics. APSIPA Trans Signal Inf Process 1 Mohamadian Z, Pouyan AA (2013) Detection of duplication forgery in digital images in uniform and non-uniform regions. In: 2013 UKSim 15th international conference on computer modelling and simulation (UKSim). IEEE, pp 455–460 Oh S, Hoogs A, Perera A, Cuntoor N, Chen CC, Lee JT, Mukherjee S, Aggarwal JK, Lee H, Davis L, et al (2011) A large-scale benchmark dataset for event recognition in surveillance video. In: 2011 IEEE conference on computer vision and pattern recognition (CVPR). IEEE, pp 3153–3160 Ren S, He K, Girshick R, Sun J (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Cortes C, Lawrence ND, Lee DD, Sugiyama M, Garnett R (eds) Advances in neural information processing systems (NIPS), pp 91–99 Smeaton AF, Over P, Doherty AR (2010) Video shot boundary detection: seven years of trecvid activity. CVIU 114(4):411–418 Sowmya KN, Chennamma HR (2015) A survey on video forgery detection. Int J Comput Eng Appl 9(2):17–27 Stock P, Cisse M (2018) Convnets and imagenet beyond accuracy: understanding mistakes and uncovering biases. In: The European conference on computer vision (ECCV) Tang P, Wang X, Wang A, Yan Y, Liu W, Huang J, Yuille A (2018) Weakly supervised region proposal network and object detection. In: The European conference on computer vision (ECCV) Thakur MK, Saxena V, Gupta JP (2016) Learning based no reference algorithm for dropped frame identification in uncompressed video, pp 451–459 Thakur MK, Saxena V, Gupta JP (2016) Learning based no reference algorithm for dropped frame identification in uncompressed video. In: Information systems design and intelligent applications: proceedings of third international conference INDIA 2016, vol 3, pp 451–459 Tran D, Bourdev L, Fergus R, Torresani L, Paluri M (2015) Learning spatiotemporal features with 3d convolutional networks. In: 2015 IEEE international conference on computer vision (ICCV). IEEE, pp 4489–4497 Ulutas G, Ustubioglu B, Ulutas M, Nabiyev V (2017) Frame duplication/mirroring detection method with binary features. IET Image Process 11(5):333–342 Ulutas G, Ustubioglu B, Ulutas M, Nabiyev VV (2018) Frame duplication detection based on bow model. Multimed Syst 1–19 Venugopalan S, Xu H, Donahue J, Rohrbach M, Mooney R, Saenko K (2015) Translating videos to natural language using deep recurrent neural networks. In: North American chapter of the association for computational linguistics—human language technologies (NAACL-HLT) Wang J, Liu G, Zhang Z, Dai Y, Wang Z (2009) Fast and robust forensics for image region-duplication forgery. Acta Autom Sin 35(12):1488–1495 Wang Q, Li Z, Zhang Z, Ma Q (2014) Video inter-frame forgery identification based on optical flow consistency. 
Sens Transducers 166(3):229 Wang Q, Li Z, Zhang Z, Ma Q (2014) Video inter-frame forgery identification based on consistency of correlation coefficients of gray values. J Comput Commun 2(04):51 Wang X, Liu Y, Huan X, Wang P, Yang H (2018) Robust copy-move forgery detection using quaternion exponent moments. Pattern Anal Appl 21(2):451–467 Wang W, Farid H (2007) Exposing digital forgeries in video by detecting duplication. In: Proceedings of the 9th workshop on multimedia & security. ACM, pp 35–42 Wolf S (2009) A no reference (NR) and reduced reference (RR) metric for detecting dropped video frames. In: National telecommunications and information administration (NTIA)


Wu Y, Jiang X, Sun T, Wang W (2014) Exposing video inter-frame forgery based on velocity field consistency. In: 2014 IEEE International Conference on acoustics, speech and signal processing (ICASSP). IEEE, pp 2674–2678 Xie S, Sun C, Huang J, Tu Z, Murphy K (2018) Rethinking spatiotemporal feature learning: speedaccuracy trade-offs in video classification. In: The European conference on computer vision (ECCV) Yang J, Huang T, Lichao S (2016) Using similarity analysis to detect frame duplication forgery in videos. Multimed Tools Appl 75(4):1793–1811 Yang B, Sun X, Guo H, Xia Z, Chen X (2018) A copy-move forgery detection method based on CMFD-SIFT. Multimed Tools Appl 77(1):837–855 Yongjian H, Li C-T, Wang Y, Liu B (2012) An improved fingerprinting algorithm for detection of video frame duplication forgery. Int J Digit Crime Forensics (IJDCF) 4(3):20–32 Yu H, Cheng S, Ni B, Wang M, Zhang J, Yang X (2018) Fine-grained video captioning for sports narrative. In: The IEEE conference on computer vision and pattern recognition (CVPR) Zhang N, Paluri M, Ranzato MA, Darrell T, Bourdev L (2014) Pose aligned networks for deep attribute modeling. In: CVPR. Panda Zhao DN, Wang RK, Lu ZM (2018) Inter-frame passive-blind forgery detection for video shot based on similarity analysis. Multimed Tools Appl 1–20 Zheng L, Sun T, Shi YQ (2014) Inter-frame video forgery detection based on block-wise brightness variance descriptor. In: International workshop on digital watermarking, pp 18–30 Zhou B, Lapedriza A, Xiao J, Torralba A, Oliva A (2014) Learning deep features for scene recognition using places database. In: NIPS

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Chapter 14

Integrity Verification Through File Container Analysis

Alessandro Piva and Massimo Iuliani

In the previous chapters, multimedia forensics techniques based on the analysis of the data stream, i.e., the audio-visual signal, aimed at detecting artifacts and inconsistencies in the (statistics of the) content were presented. Recent research highlighted that useful forensic traces are also left in the file structure, thus offering the opportunity to understand a file’s life-cycle without looking at the content itself. This Chapter is then devoted to the description of the main forensic methods for the analysis of image and video file formats.

14.1 Introduction

Most forensic techniques look for traces within the content itself; these are, however, mostly ineffective in some scenarios, for example, when dealing with strongly compressed or low-resolution images and videos. Recently, it has been shown that the image or video file container also retains traces of the content's history. The analysis of the file format and metadata allows one to determine their compatibility, completeness, and consistency with respect to the expected history of the media. This analysis has proved highly promising, since the image and video compression standards leave some freedom in the generation of the file container. This allows forensic practitioners to identify container signatures related to a specific brand, model, social media platform, and so on. Furthermore, the cost of analyzing a file header is usually negligible. This is even more relevant when dealing with digital videos, where the stream

A. Piva (B) · M. Iuliani Department of Information Engineering, University of Florence, Via di S. Marta, 3, 50139 Florence, Italy e-mail: [email protected] © The Author(s) 2022 H. T. Sencar et al. (eds.), Multimedia Forensics, Advances in Computer Vision and Pattern Recognition, https://doi.org/10.1007/978-981-16-7621-5_14


analysis can quickly become unfeasible. Indeed, recent methods show that videos can be analyzed in seconds, independently of their length and resolution. It is also worth mentioning that most of the achieved results indicate that the most interesting container signatures usually have a low intra-variability, suggesting that this analysis can be promising even when only a few training data are available. On the other side, the main drawback of these techniques is that they only allow forgery detection, while localization is usually beyond their capabilities. Furthermore, the design of practical media container parsers is not to be underestimated, since it requires a deep comprehension of image and video formats, and the parser should be updated when new models are introduced in the market or unknown elements are added to the media header. Finally, we are currently not aware of any publicly available software that would allow users to consistently forge such information without advanced programming skills. This makes the creation of plausible forgeries undoubtedly a highly non-trivial task and thereby reemphasizes that file characteristics and metadata must not be dismissed as an unreliable source of evidence for the purpose of file authentication. The Chapter is organised as follows: in Sects. 14.1.1 and 14.1.2 the main image and video file format specifications are summarized. Then, in Sects. 14.2 and 14.3 the main methods for the analysis of image and video file formats are described. Finally, in Sect. 14.4 the main findings of this research area are provided, along with some possible future works.

14.1.1 Main Image File Format Specifications

Multimedia technologies allow digital images to be generated in several formats. DSLR cameras and professional acquisition devices can store images in uncompressed formats (e.g., DNG, CR2, BMP). When necessary, lossless compression schemes (e.g., PNG) can be applied to reduce the image size without impacting its quality. Furthermore, lossy compression schemes (e.g., JPEG, HEIF) are available to strongly reduce image size with minimal impact on visual quality. Nowadays, most of the devices on the market and social media platforms generate and store JPEG images. For this reason, most of the forensic literature focuses on this image type. The JPEG standard itself JPEG (1992) defines the pixel data encoding and decoding, while the full-featured file format implementation is defined in the JPEG Interchange Format (JIF) ISO/IEC (1992). This format is quite complicated, and some simpler alternatives are generally preferred for handling and encapsulating JPEG-encoded images: the JPEG File Interchange Format (JFIF) Eric Hamilton (2004) and the Exchangeable Image File Format (Exif) JEITA (2002), both built on JIF. Each of the formats defines the basic structure of JPEG files as a set of marker segments, either mandatory or optional. A marker id indicates the beginning of each marker segment. Marker ids are denoted by 16-bit values starting with 0xFF. Marker segments can encapsulate either data compression parameters (e.g., in DQT, DHT, or SOF) or application-specific


metadata, like thumbnails or EXIF metadata (e.g., in APP0 (JFIF) or APP1 (EXIF)). Additionally, some markers consist of the marker id only and indicate state transitions necessary to parse the file format. For example, SOI and EOI indicate the start and end of a JPEG file, requiring all other markers to be placed between these two mandatory markers. The JIF standard also allows application markers (APPn), populated with entries in the form of key-value pairs. The values can vary from human-readable strings to complex structures like binary data. The APP0 segment defines pixel densities and pixel ratios, and an optional thumbnail of the actual image can be placed here. In APP1, the EXIF standard enables cameras to save a vast range of optional information (EXIF tags) related to the camera's photographic settings and to when and where the image was taken. The information is split into five main image file directories (IFDs): (i) Primary; (ii) Exif; (iii) Interoperability; (iv) Thumbnail; and (v) GPS. However, the content of each IFD is customizable by camera manufacturers, and the EXIF standard also allows for the creation of additional IFDs. Other metadata are customizable within the file header, such as XMP and IPTC, which provide additional information like copyright or the image's editing history. Furthermore, image processing software can introduce a wide variety of marker segment sequences. These differences in the file formats, and the loose standardization of several sequences, may expose forensic traces within the image file container Thomas Gloe (2012). Indeed, not all segments are mandatory and different combinations thereof can occur. Some segments can appear multiple times (e.g., quantization tables can be encapsulated either in one single or in multiple separate DQT segments). Furthermore, the sequence of segments is generally not fixed (with some exceptions) and customizable. For example, JPEG thumbnails exist in different segments and employ their own complete sequence of marker segments. Finally, arbitrary data can exist after EOI. In the next sections, we report the main technologies and the results achieved based on the technical analysis of these formats.
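As a concrete illustration of the marker structure just described, the short Python sketch below walks a JPEG file and records the ids of the segments it encounters; this kind of marker-segment sequence is exactly the container signature exploited by the methods discussed later in this Chapter. It is illustrative only: it handles the common layout, performs no error recovery, and is not a full JIF parser.

```python
import struct

# Markers that carry no length field (standalone): SOI, EOI, RSTn
STANDALONE = {0xD8, 0xD9} | set(range(0xD0, 0xD8))

def jpeg_marker_sequence(path):
    """Return the list of marker ids (e.g. ['FFD8', 'FFE1', ...]) of a JPEG file."""
    with open(path, "rb") as f:
        data = f.read()
    markers, pos = [], 0
    while pos + 1 < len(data):
        if data[pos] != 0xFF:
            break
        marker = data[pos + 1]
        markers.append(f"FF{marker:02X}")
        if marker in STANDALONE:
            pos += 2
            if marker == 0xD9:        # EOI: stop (arbitrary trailing data may follow)
                break
        elif marker == 0xDA:          # SOS: entropy-coded scan follows, skip to EOI
            eoi = data.find(b"\xff\xd9", pos)
            pos = eoi if eoi != -1 else len(data)
        else:                         # segment with a 2-byte big-endian length field
            (length,) = struct.unpack(">H", data[pos + 2:pos + 4])
            pos += 2 + length
    return markers

# Example: print(jpeg_marker_sequence("photo.jpg"))
```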

14.1.2 Main Video File Format Specifications

When a camera acquires a digital video, image sequence processing and audio sequence processing are performed in parallel. After compression and synchronization, the streams are encapsulated in a multimedia container, simply called a video container from now on. There are several compression standards; however, H.264/AVC or MPEG-4 Part 10 International Telecommunication Union (2016), and H.265/HEVC or MPEG-H Part 2 International Telecommunication Union (2018) are the most relevant, since most mobile devices implement them. Video and audio tracks, plus additional information (such as metadata and sync information), are then encapsulated in the video container based on specific format standards. In the following, we describe the two most adopted formats.


MP4-like videos Most smartphones and compact cameras output videos in mp4, mov, or 3gp format. This video packaging refers to the same standard, ISO/IEC 14496 Part 12 ISO/IEC (2008), that defines the main features of the MP4 File Format ISO/IEC (2003) and the MOV File Format Apple Computer, Inc (2001). The ISO Base format is characterized by a sequence of atoms or boxes. A unique 4-byte code identifies each node (atom), and each atom consists of a header and a data box, and possibly nested atoms. As an example, we report in Fig. 14.1 a typical structure of an MP4-like container. The file type box, ftyp, is a four-letter code that is used to identify the type of encoding, the compatibility, or the intended usage of a media file. According to the latest ISO standards, it is considered a semi-mandatory atom, i.e., the ISO expects it to be present and explicit as soon as possible in the file container. In the example given in Fig. 14.1 (a), the fields of the ftyp descriptor state that the video file is MP4-like and compliant with the MP4 Base Media v1 [ISO 14496-12:2003] (here isom) and the 3GPP Media (.3GP) Release 4 (here 3gp4) specifications.1 The movie box, moov, is a nested atom containing the metadata needed to decode the data stream, which is embedded in the following mdat atom.

Fig. 14.1 Representation of a fragment of an MP4-like video container of an original video acquired by a Samsung Galaxy S3

1 The reader can refer to http://www.ftyps.com/ for further details.

It is important to note that moov


may contain multiple trak box instances, as shown in Fig. 14.1. The trak atom is mandatory, and the number of its instances depends on the number of streams included in the file; for example, if the video contains a visual stream and an audio stream, there will be two independent trak atoms. A more detailed description of these structures can be found in Carlos Quinto Huamán et al. (2020).

AVI videos Audio video interleave (AVI) is a container format developed by Microsoft in 1992 for Windows software (Microsoft, Avi riff file, 1992). AVI files are based on the RIFF (resource interchange file format) document format, consisting of a RIFF header followed by zero or more lists and chunks. A chunk is defined as ckID, ckSize, and ckData, where ckID identifies the data contained in the chunk, ckSize is a 4-byte value giving the size of the data in ckData, and ckData is zero or more bytes of data. A list consists of LIST, listSize, listType, and listData, where LIST is the literal code LIST, listSize is a 4-byte value giving the size of the list, listType is a 32-bit unsigned integer, and listData consists of chunks or lists, in any order. An AVI file might also include an index chunk, which gives the location of the data chunks within the file. All AVI files must keep these three components in the proper sequence: the hdrl list, which defines the format of the data and is the first required LIST chunk; the movi list, which contains the data for the AVI sequence; and the idx1 list, which contains the index. Depending on the specific camera or phone model, additional lists and JUNK chunks may exist between the hdrl and movi lists. These optional data segments are either used for padding or to store metadata. The idx1 chunk indexes the data chunks and their location in the file.
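A minimal sketch of how the atom structure of an MP4-like file can be walked is given below (Python, reading only the 32-bit box sizes; 64-bit largesize boxes and other corner cases are ignored, and the set of nested box types is not exhaustive). The resulting list of (depth, type, size) tuples is the kind of container structure that the methods described in Sect. 14.3 compare against a reference.

```python
import struct

# Container boxes whose payload is itself a sequence of boxes (not exhaustive)
NESTED = {b"moov", b"trak", b"mdia", b"minf", b"stbl", b"udta", b"edts", b"dinf"}

def parse_boxes(data, offset=0, end=None, depth=0, out=None):
    """Recursively list (depth, type, size) for the atoms of an MP4-like file."""
    out = [] if out is None else out
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, btype = struct.unpack(">I4s", data[offset:offset + 8])
        if size == 0:                 # box extends to the end of the enclosing container
            size = end - offset
        if size < 8:                  # 64-bit sizes and malformed boxes are not handled here
            break
        out.append((depth, btype.decode("latin-1"), size))
        if btype in NESTED:
            parse_boxes(data, offset + 8, offset + size, depth + 1, out)
        offset += size
    return out

with open("video.mp4", "rb") as f:
    for depth, btype, size in parse_boxes(f.read()):
        print("  " * depth + f"{btype} ({size} bytes)")
```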

14.2 Analysis of Image File Formats

The first studies in the image domain considered JPEG quantization tables and image resolution values Hany Farid (2006); Jesse D. Kornblum (2008); H. Farid (2008). Just with these few features, the authors demonstrated that it is possible to link a probe image to a set of devices or editing software. Then, in Kee Eric (2011), the authors increased the set of features by also considering JPEG coding data and Exif metadata. These studies provided better results in terms of integrity verification and device identification. The main drawback of such approaches is that a user can easily edit the metadata information, limiting these methods' reliability. Later studies demonstrated that the file structure also contains many traces of the content's history. These data are also more reliable, being much more challenging for a user to extract and modify than metadata. Furthermore, available editing software and metadata editors do not allow the modification of such low-level information Thomas Gloe (2012). However, like the previous ones, this method is based on a manual


analysis of the extracted features, to be compared with a set of collected signatures. In Mullan Patrick et al. (2019), the authors present the first automatic method for characterizing the source device, based on the analysis of features extracted from the JPEG header information. Finally, another branch of research demonstrated that the JPEG file format can be exploited to understand whether a probe image has been exchanged through a social network service. In the following, we describe the primary studies of this domain.

14.2.1 Analysis of JPEG Tables and Image Resolution

In Hany Farid (2006), the author shows, for the first time, that manufacturers typically configure the compression of devices differently according to their own needs and preferences. This difference, primarily manifested in the JPEG quantization table, can be used to identify the device that acquired a probe image. To carry out this analysis, the author collects a single image, at the highest quality, from each of 204 digital cameras. Then, the JPEG quantization table is extracted, and 62 cameras turn out to show a unique table. The remaining cameras fall into equivalence classes of sizes between 2 and 28 (i.e., between 2 and 28 devices share the same quantization tables). Usually, cameras that share the same tables belong to the same manufacturer; occasionally, though, different makes and models share the same table. These results highlight that JPEG quantization tables can partially characterize the source device. Thus, they effectively narrow the source of an image, if not to a single camera make and model, at least to a small set of possible cameras (on average, this set size is 1.43). This study was also performed by saving the images with five Photoshop versions (at that time, versions CS2, CS, 7, 4, and 3) at each of the 13 available compression levels. As expected, the quantization tables at each compression level were different from one another. Moreover, at each compression level, the tables were the same for all five versions of Photoshop. More importantly, none of these tables can be found in the images belonging to the 204 digital cameras. These findings suggest the possibility of linking an image to specific editing software, or at least to a set of possible editing tools. Similar results have been presented in Jesse D. Kornblum (2008). The authors examined images from devices (a Motorola KRZR K1m, a Canon PowerShot 540, a FujiFilm Finepix A200, a Konica Minolta Dimage Xg, and a Nikon Coolpix 7900) and images edited by several software programs such as libjpeg, Microsoft Paint, the Gimp, Adobe Photoshop, and Irfanview. The analysis carried out on this dataset shows that, although some cameras always use the same quantization tables, most of them use a different set of quantization tables in each image. Moreover, these tables usually differ according to the source or editing software. In H. Farid (2008), the author expands the earlier work by considering a much larger dataset of images and adding the image resolution to the quantization table as discriminating features for source identification.


The author analyzes over 330 thousand Flickr images from 859 different camera models from 48 manufacturers. Quantization tables and resolutions yielded 10153 different image classes. The resolution and quantization table were then combined to narrow the search criteria for identifying the source camera. The analysis revealed that 2704 out of the 10153 entries (26.6%) have a unique paired resolution and quantization table. In these cases, the features can correctly discriminate the source device. Moreover, 37.2% of the classes have at most two matches, and 44.1% have at most three matches. These results show that the combination of JPEG quantization table and image resolution allows matching the source of an image to a single camera make and model, or at least to a small set of possible devices.
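With today's tooling, the two features used in these studies are easy to read out of a JPEG file; a minimal sketch using Pillow (our choice of library here, assuming a JPEG input) is:

```python
from PIL import Image

img = Image.open("photo.jpg")
width, height = img.size        # image resolution
qtables = img.quantization      # dict: table id -> list of 64 quantization values (JPEG only)

# A simple signature for narrowing down the source: resolution plus flattened tables
signature = (width, height, tuple(qtables.get(0, ())), tuple(qtables.get(1, ())))
print(signature[:2], len(signature[2]), len(signature[3]))
```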

14.2.2 Analysis of Exif Metadata Parameters

In Kee Eric (2011), the authors expand the previous analysis by creating a larger image signature for identifying the source camera with greater accuracy than using quantization tables and resolution only. This approach considers a set of features of the JPEG format, namely properties of the run-length encoding employed by the JPEG standard and some Exif header format characteristics. In particular, the signature is composed of three kinds of features: image parameters, thumbnail parameters, and Exif metadata parameters. The image parameters consist of the image size, the quantization tables, and the Huffman codes; the Y, Cb, and Cr 8 × 8 quantization tables are represented as a vector of 192 values; the Huffman codes are specified as six vectors (one for the dc DCT coefficients and one for the ac DCT coefficients of each of the three channels Y, Cb, and Cr) storing the number of codes of length from 1 to 15. This part of the signature is then composed of 284 values: 2 image dimensions, 192 quantization values, and 90 Huffman codes. The same components are extracted from the image thumbnail, usually embedded in the JPEG header. Thus, the thumbnail parameters are another 284 values: 2 thumbnail dimensions, 192 quantization values, and 90 Huffman codes. A compact representation of the EXIF metadata is finally obtained by counting the number of entries in each of the five image file directories (IFDs) that compose the EXIF metadata, as well as the total number of any additional IFDs and the total number of entries in each of these. It is worth noticing that some manufacturers adopt a metadata representation that does not conform to the EXIF standard, yielding errors when parsing the metadata. The authors also consider these errors as discriminating features, so the total number of parser errors, as specified by the EXIF standard, is also stored. Thus, they consider 8 values from the metadata: 5 entry counts from the standard IFDs, 1 for the number of additional IFDs, 1 for the number of entries in these additional IFDs, and 1 for the number of parser errors. The image signature is then composed of 284 header features extracted from the full resolution image, 284 header features from the thumbnail image, and another 8 from the EXIF metadata, for a total of 576 features. These signatures are extracted


Table 14.1 The percentage of camera configurations with an equivalence class size from 1 to 5. Each row corresponds to different subsets of the complete signature

Equivalence class size | 1 | 2 | 3 | 4 | 5
Image | 12.9% | 7.9% | 6.2% | 6.6% | 3.4%
Thumbnail | 1.1% | 1.1% | 1.0% | 1.1% | 0.7%
EXIF | 8.8% | 5.4% | 4.2% | 3.2% | 2.6%
Image+thumbnail | 24.9% | 15.3% | 11.3% | 7.9% | 3.7%
All | 69.1% | 12.8% | 5.7% | 4.0% | 2.9%

from the image files and analyzed to verify if they can assign the image to a given camera make and model. Tests are carried out on over 1.3 million Flickr images, acquired by 773 camera and cell phone models belonging to 33 different manufacturers. These images span 9163 distinct pairings of camera make, model, and signature, referred to as camera configurations. It is worth noting that each camera model can produce different signatures based on the acquisition resolutions and quality settings. Indeed, in the collected dataset, an average of 12 different signatures is found for each camera make and model. The camera signature analysis on the above collection shows that the image, thumbnail, and EXIF parameters are not distinctive individually. However, when combined, they provide a highly discriminative signature. This result indicates that the three classes of parameters are not highly correlated, and hence their combination improves overall distinctiveness, as shown in Table 14.1. Finally, the signature was tested on images edited by Adobe Photoshop (versions 3, 4, 7, CS, CS2, CS3, CS4, CS5 at all qualities), by exploiting, in this case, image and thumbnail quantization tables and Huffman codes only. No overlaps were found between any Photoshop version/quality and camera manufacturer. The Photoshop signatures, each residing in an equivalence class of size 1, are unique. This means that any photo-editing with Photoshop can be easily and unambiguously detected.
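The EXIF part of this signature (counting the entries of each IFD) can be approximated with off-the-shelf libraries; a rough sketch with piexif is given below. Note that piexif only exposes the standard IFDs, not additional manufacturer-specific IFDs or parser errors, so this covers only part of the 8 metadata features described above.

```python
import piexif

exif = piexif.load("photo.jpg")   # dict keyed by IFD name ("0th", "Exif", "GPS", "Interop", "1st", ...)

# Number of entries in each of the standard IFDs used by the signature
ifd_names = ["0th", "Exif", "Interop", "1st", "GPS"]
entry_counts = [len(exif.get(name) or {}) for name in ifd_names]
print(dict(zip(ifd_names, entry_counts)))
```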

14.2.3 Analysis of the JPEG File Format

In Thomas Gloe (2012), the author makes a step forward with respect to the previous studies, analyzing the JPEG file format structure to identify new characteristics that are more discriminating than JPEG metadata only. In this study, a subset of images extracted from the Dresden Dataset (Thomas Gloe and Rainer Böhme 2014) was used. In particular, 4666 images belonging to the JPEG scenes (a part of the dataset built specifically to study model-specific JPEG compression algorithms) and 16956 images of natural scenes were considered. Images were captured with one


device of each available camera model, while iterating over all combinations of compression, image size, and flash settings. Overall, they considered images acquired by 73 devices from 26 camera models of 14 different brands. A part of these authentic images was re-saved with each of the eight investigated image processing tools (ExifTool, Gimp, IrfanView, Jhead, cjpeg, Paint.NET, PaintShop, and Photoshop), with all available combinations of JPEG compression settings. In total, they obtained and analyzed more than 32000 images. The study focuses firstly on analyzing the sequence and occurrence of marker segments in the JPEG data structures. The main findings of the author are that:

1. some optional segments can occur in different combinations;
2. segments can appear multiple times;
3. the sequence of segments is generally not fixed, with the exception of some required combinations;
4. JPEG thumbnails exist in different segments and employ their own complete sequence of marker segments;
5. arbitrary data can exist after the end of image (EOI) marker of the main image.

The analysis of the markers allows pristine and edited images to be distinguished correctly. Furthermore, none of the investigated software tools recreates the sequence of markers consistently. The author also considered the sequence of EXIF data structures that store camera properties and image acquisition settings. Interestingly, differences were found between digital cameras, image processing software, and metadata editors. In particular, the following four characteristics appear to be relevant:

1. the byte order is different between camera models (and the default setting employed by ExifTool);
2. sequences of image file directories (IFDs) and corresponding entries (including tag, data type, and often also offsets) appear constant in images acquired with the same model and differ between different sources;
3. some manufacturers use different data types for the same entry;
4. raw values of rational types differ between different models while resulting in the same interpreted values (e.g., 200/10 or 20/1).

These findings allow us to conclude that the analysis of EXIF data structures can be useful to discriminate between authentic and edited images.
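Operationally, the marker-based check amounts to comparing the probe's marker sequence against reference sequences collected from the claimed source. A toy comparison, reusing the hypothetical jpeg_marker_sequence helper sketched in Sect. 14.1.1 and an entirely illustrative reference sequence, could look like this:

```python
# reference_db maps a camera model to the set of marker sequences observed in its images.
# The sequence below is a made-up example, not a real device signature.
reference_db = {
    "CameraModelA": {("FFD8", "FFE1", "FFDB", "FFC0", "FFC4", "FFDA", "FFD9")},
}

def is_consistent_with(path, model, db):
    """True if the probe's marker sequence matches one observed for the claimed model."""
    probe = tuple(jpeg_marker_sequence(path))
    return probe in db.get(model, set())
```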

14.2.4 Automatic Analysis of JPEG Header Information

All previous works analyzed images acquired by digital cameras, whereas today, most of the visual content is generated by smartphones. These two classes of devices are somewhat different. While traditional digital cameras typically have a fixed firmware, modern smartphones keep updating their acquisition software and operating system,


making it harder to achieve a unique characterization of device models based on the image file format. In Mullan Patrick et al. (2019), the authors performed a large-scale study on Apple smartphones to characterize the source device based on JPEG header information. The results confirm that identifying the hardware is much harder for smartphones than for traditional cameras, while the operating system version and selected apps can be identified. The authors exploit Flickr images following the rules adopted in Kee Eric (2011). Starting from a crawl of one million images, they filtered the downloaded media to remove edited images, obtaining in the end a dataset of 432305 images. All images belong to Apple devices, including all models from the iPhone 4 to the iPhone X and from the iPad Air to the iPad Air 2, with operating system versions from iOS 5 to iOS 12. They consider two families of features: the number of entries in the image file directories (IFDs) and the quantization tables. The selected directories include the standard IFDs "ExifIFD", "IFD0", "IFD1", "GPS", and the special directories "ICC_Profile" and "MakerNotes". The Y and Cb quantization tables are concatenated, and their hash is computed, obtaining a single alphanumeric value, called QCTable. Differently from Kee Eric (2011), the Huffman tables were not considered since, in the collected database, they are identical in the vast majority of cases. For similar reasons, they also discarded the sequence of JPEG syntax elements studied by Thomas Gloe (2012). First, the authors verified how the header information of Apple smartphones changes over time. They showed that image metadata change more often than compression matrices, and that different hardware platforms with the same operating system exhibit very similar metadata information. These findings highlight that it is more complicated to analyse smartphones than digital cameras. After that, the authors design an algorithm for automatically determining the smartphone hardware model from the header information. This is the first algorithm designed to perform automatic source identification on images through file format information. The classification is performed with a random forest that uses the number of entries per directory and the quantization tables, described above, as features. Firstly, a random forest classifier was trained to identify seven different iPhone models marketed in 2017. Secondly, the same algorithm was trained to verify its ability to identify the operating system from header information, considering all versions from iOS 4 to iOS 11. In both cases, experiments were performed by considering as input features: (i) the Exif directories only, (ii) the quantization tables only, (iii) metadata and quantization features together. We report the achieved results in Table 14.2. It is worth noticing that using only the quantization tables gives the lowest results, whereas the highest accuracy is reached with all the considered features. Moreover, it is easier to discriminate among operating systems than among hardware models.
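A minimal sketch of this kind of classifier, using scikit-learn, is given below. The feature extraction (per-IFD entry counts and a hashed quantization table, as described above) is assumed to have been done beforehand; the file names and the feature layout in the comment are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# X: one row per image, e.g. [#IFD0, #IFD1, #ExifIFD, #GPS, #ICC, #MakerNotes, qtable_hash_as_int]
# y: device model (or iOS version) labels
X = np.load("header_features.npy")   # hypothetical pre-extracted features
y = np.load("labels.npy")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```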


Table 14.2 Overall accuracy of model and iOS version classification when using EXIF directories (EXIF-D), quantization matrices (QT), or both

 | EXIF-D | QT | Both
iPhone model classification | 61.9% | 35.4% | 65.6%
iOS version classification | 80.4% | 33.7% | 81.7%

14.2.5 Methods for the Identification of Social Networks

Several works investigated how JPEG file format modifications can be exploited to understand whether a probe image has been exchanged through a social network service. Indeed, a social network usually recompresses, resizes, and alters image metadata. Castiglione et al. (2011) made the first study on the image resizing, compression, renaming, and metadata modifications introduced by Facebook, Badoo, and Google+. Giudice et al. (Oliver Giudice et al. 2017) built a dataset of images from different devices, including digital cameras and smartphones with Android and iOS operating systems. Then, they exchanged all the images through ten different social networks. The considered features form a 44-dimensional vector composed of the pixel width and height, an array containing the Exif metadata, the number of JPEG markers, the first 32 coefficients of the luminance quantization table, and the first 8 coefficients of the chrominance quantization table. These features are fed to an automatic detector based on two K-NN classifiers and a decision tree fitted on the built dataset. The method can predict the social network the image belongs to and the client application with an accuracy of 96% and 97.69%, respectively. Summarizing, the authors noticed that the modifications left in the JPEG file depend on both the platform (server-side) and the application used for the upload (client-side). In Phan et al. (2019), the previous method was extended by merging JPEG metadata information (including quantization tables, Huffman coding tables, and image size) with content-related features, namely the histograms of DCT coefficients. A CNN is trained on these features to detect multiple exchanges of an image through Facebook, Flickr, and Twitter. The adopted dataset consists of two parts: R-SMUD (RAISE Social Multiple Up-Download), generated from the RAISE dataset Duc-Tien Dang-Nguyen et al. (2015), containing images shared over the three social networks; and V-SMUD (VISION Social Multiple Up-Download), obtained from a part of the VISION dataset Dasara Shullani et al. (2017), shared three times via the three social networks, following different testing configurations. The experiments demonstrated that the JPEG metadata information improves the performance with respect to using content-based information only. When considering


single-shared contents, the accuracy improves from 85.63% to 99.87% on R-SMUD and is in both cases 100.00% on V-SMUD. When considering images shared both once and twice, the accuracy improves from 43.24% to 65.91% on R-SMUD and from 58.82% to 77.12% on V-SMUD. When also dealing with images shared three times, they achieve an accuracy of 36.18% on R-SMUD and 49.72% on V-SMUD.
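The exact layout of the 44-dimensional descriptor of Oliver Giudice et al. (2017) is not reproduced here, but a rough sketch of the kind of header features it combines (image size, number of markers, truncated quantization tables) can be put together with Pillow and the hypothetical marker parser sketched in Sect. 14.1.1; the Exif-related entries of the original vector are omitted and all names below are illustrative.

```python
from PIL import Image

def header_feature_vector(path):
    """Illustrative header-based feature vector for social-network identification."""
    img = Image.open(path)
    width, height = img.size
    qt = img.quantization                        # JPEG quantization tables
    luma = (qt.get(0, []) + [0] * 32)[:32]       # first 32 luminance coefficients (zero-padded)
    chroma = (qt.get(1, []) + [0] * 8)[:8]       # first 8 chrominance coefficients (zero-padded)
    n_markers = len(jpeg_marker_sequence(path))  # hypothetical helper from Sect. 14.1.1
    return [width, height, n_markers] + luma + chroma
```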

14.3 Analysis of Video File Formats

Gloe et al. proposed the first in-depth analysis of video file containers Gloe Thomas et al. (2014). The authors observe that videos from different camera and phone models expose different container formats and compression. This allows a forensic practitioner to extract this information manually and, based on a comparison with a reference, expose forensic findings. In Jieun Song et al. (2016) the authors studied 296 video files in AVI format acquired with 43 video event data recorders (VEDRs), and their manipulated versions produced with 5 different video editing software tools. All videos were classified according to a sequence of field data structure types. This study showed that the field data structure represents a valid feature set to determine whether video editing software was used to process a probe video. However, these methods require the manual analysis of the video container information, thus needing high effort and strong programming skills. Some attempts were proposed to identify the container signatures of the most relevant brands and social media platforms Carlos Quinto Huamán et al. (2020). Another path tried to automate the forensic analysis of video file structures by verifying their multimedia stream descriptors with simple binary classifiers through a supervised approach David Güera et al. (2019). In Iuliani Massimo (2019), the authors propose a way to formalize the MP4-like (.mp4, .mov, .3gp) video file structure. A probe video is converted into an XML tree-based structure; then, the comparison between the parsed data and a reference dataset addresses both integrity verification and brand classification. A similar approach has also been proposed in Erik Gelbing (2021), but the experiments have been applied only to the scenario of brand classification. Both Iuliani Massimo (2019) and David Güera et al. (2019) allow the integrity assessment of a digital video with negligible computational costs. A further improvement was finally proposed in Yang Pengpeng et al. (2020), where decision trees were employed to further reduce the computational effort to a fixed amount of time, independently of the reference dataset's size. These latest works also introduced an open-world scenario where the tested device was not available in the training set. The issue is also addressed in Raquel Ramos López et al. (2020), where information extracted from the video container was used to cluster sets of videos by data source, i.e., brand, model, or social media. In the next sections, we briefly summarize the main works: (i) the first relevant paper on video containers Gloe Thomas et al. (2014), (ii) the first video container formalization and its usage Iuliani Massimo (2019), and (iii) the most recent work for efficient video file format analysis Yang Pengpeng et al. (2020).
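As an illustration of the tree-based representation just mentioned, the recursive atom parser sketched in Sect. 14.1.2 can be turned into an XML tree with the Python standard library; this is our own sketch of the idea, not the implementation of Iuliani Massimo (2019).

```python
import xml.etree.ElementTree as ET

def boxes_to_xml(box_list):
    """Convert a flat (depth, type, size) listing of atoms into a nested XML tree."""
    root = ET.Element("container")
    stack = [(-1, root)]                  # (depth, element) pairs; root sits at depth -1
    for depth, btype, size in box_list:
        while stack and stack[-1][0] >= depth:
            stack.pop()                   # climb back up to the parent of this box
        node = ET.SubElement(stack[-1][1], "box", name=btype, size=str(size))
        stack.append((depth, node))
    return ET.ElementTree(root)

# Usage sketch, assuming parse_boxes from Sect. 14.1.2:
# tree = boxes_to_xml(parse_boxes(open("video.mp4", "rb").read()))
# tree.write("container.xml")
```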


14.3.1 Analysis of the Video File Structure

In Gloe et al. (2014), the authors identify manufacturer- and model-specific video file format characteristics and show how processing software further modifies those features. In particular, they consider 19 digital cameras and 14 mobile phones belonging to different models, with 3 to 14 videos per device, covering all available video quality settings (e.g., frame size and frame rate). Table 14.3 reports the list of these devices.

Table 14.3 The list of devices analyzed in the study

  Make            Model                                    Container
  Digital camera models
  Agfa            DC-504, DC-733s, DC-830i, Sensor530s     AVI
  Agfa            Sensor505-X                              AVI
  Canon           S45, S70, A640, Ixus IIs                 AVI
  Canon           EOS-7D                                   MOV
  Casio           EX-M2                                    AVI
  Kodak           M1063                                    MOV
  Minolta         DiMAGE Z1                                MOV
  Nikon           CoolPix S3300                            AVI
  Pentax          Optio A40                                AVI
  Pentax          Optio W60                                AVI
  Praktica        DC2070                                   MOV
  Ricoh           GX100                                    AVI
  Samsung         NV15                                     AVI
  Mobile phone models
  Apple           iPhone 4                                 MOV
  Benq Siemens    S88                                      3GP
  BlackBerry      8310                                     3GP
  Google          Nexus 7                                  3GP
  LG              KU990                                    3GP, AVI
  Motorola        MileStone                                3GP
  Nokia           6710                                     3GP, MP4
  Nokia           E61i                                     3GP, MP4
  Nokia           E65                                      3GP, MP4
  Nokia           X3-00                                    3GP
  Palm            Pre                                      MP4
  Samsung         GT-5500i                                 MP4
  Samsung         SGH-D600                                 MP4
  Sony Ericsson   K800i                                    3GP


Most of these digital cameras store videos in AVI format; some use Apple QuickTime MOV containers and compress video data using motion JPEG (MJPEG), and only three use more efficient codecs (DivX, Xvid, or H.264). All the mobile phones store video data in MOV-based container formats (MOV, 3GP, MP4), except for the LG KU990 camera phone, which supports AVI. Conversely, the mobile phones adopt H.263, H.264, MPEG-4, or DivX compression, and not MJPEG. A subset of the camera models has also been used to generate cut short videos, 10 seconds long, through video editing tools (FFmpeg, Avidemux, FreeVideoDub, VirtualDub, Yamb, and Adobe Premiere). Except for Adobe Premiere, all of them allow lossless video editing, i.e., the edited files are saved without recompressing the original stream. The authors then analyzed the extracted data. This analysis reveals considerable differences between device models and software vendors, and some general findings are summarized here. The authors carry out a first analysis by grouping the devices according to the file structure, i.e., AVI or MP4. Original AVI videos do not share the same components: both the content and the format of specific chunks may vary, depending on the particular camera model. Their edited versions, produced with all the software tools, even when lossless processing is applied, exhibit distinct traces in the file structure, which do not match any camera's characteristics in the dataset. It is then possible to use these differences to spot whether a video file was edited with these tools, by comparing a questioned file's structure with a device's reference container. Original MP4 videos, due to the far more complex file format, exhibit an even greater degree of source-dependent internal variation than AVI files. However, none of the cut videos produced through the editing tools is compatible with any source device considered in the study. Regarding original MJPEG-compressed video frames, different camera models adopt different JPEG marker segment sequences when building the compressed frames. Moreover, content-adaptive quantization tables are generally used within a video sequence, so that 2914 unique JPEG luminance quantization tables were found even in this small dataset. On these contents, lossless video editing leaves compression settings unaltered, but it introduces very distinctive artifacts in the container files' structure. These findings confirm that videos from different digital cameras and mobile phones employ different container formats and compression settings. It is worth noticing that this kind of study is not straightforward, since the authors had to write customized file parsers to extract the file format information from the videos. On the other hand, this difficulty is also advantageous, since we are currently unaware of any publicly available software that consistently allows users to forge such information without advanced programming skills. This fact makes the creation of realistic forgeries a highly non-trivial undertaking and thereby reemphasizes that file characteristics and metadata must not be dismissed as unreliable sources of evidence for file authentication.


14.3.2 Automated Analysis of MP4-like Videos

The previous method is subject to two main limitations: first, being manual, it is time-consuming; second, it is prone to error, since the forensic practitioner has to subjectively assess the value of each finding. In Iuliani et al. (2019), an automated method was proposed for the unsupervised analysis of video file containers, aimed at assessing video integrity and classifying the source device brand. The authors rely on the MP4 Parser library (Apache 2021) to automatically extract a video container and store it in an XML file. The video container is represented as a labeled tree where internal nodes are labeled by atom names (e.g., moov-2) and leaves are labeled by field-value attributes (e.g., @stuff: MovieBox[]). To take into account the order of the atoms, each XML node is identified by the 4-byte code of the corresponding atom, along with an index that represents its relative position with respect to the other siblings at a certain level.

Technical Description. A video file container, extracted from a video X, can be represented as an ordered collection of atoms a_1, ..., a_n, possibly nested. Each atom can be described in terms of a set of field-value attributes, as a_i = {ω_1(a_i), ..., ω_{m_i}(a_i)}. By combining the two previous descriptions, the video container can be characterized by the set of field-value attributes X = {ω_1, ..., ω_m}, each with its associated path p_X(ω), that is, the ordered list of atoms to be crossed to reach ω in X starting from the root. As an example, consider the video X whose fragments are reported in Fig. 14.1. The path to reach the field-value ω = @timescale: 1000 in X is the sequence of atoms (ftyp-1, moov-2, mvhd-1). In this sense, we state that p_X(ω) = p_X'(ω') if the same ordered list of atoms is crossed to reach the two field-values in the respective trees (which is not the case in the previous example). In summary, the video container structure is completely described by a list of m field-value attributes X = {ω_1, ..., ω_m} and their corresponding paths p_X(ω_1), ..., p_X(ω_m). From now on, we will denote with X = {X_1, ..., X_N} the world set of digital videos, and with C = {C_1, ..., C_s} the set of disjoint possible origins, e.g., device Huawei P9, iPhone 6s, and so on. When a video is processed in any way (with respect to its native form), its integrity is compromised (SWGDE-SWGIT 2017). More generally, given a query video X and a native reference video X' coming from the same supposed device model, their container structure dissimilarities can be exploited to expose evidence of integrity violation, as follows. Given two containers X = (ω_1, ..., ω_m) and X' = (ω'_1, ..., ω'_m) with the same cardinality m (if the two containers have a different cardinality, the smaller one is padded with empty field-values to reach the same length m), we define a similarity core function between two field-values as


 

$$S(\omega_i, \omega'_j) = \begin{cases} 1 & \text{if } \omega_i = \omega'_j \text{ and } p_X(\omega_i) = p_{X'}(\omega'_j) \\ 0 & \text{otherwise} \end{cases} \qquad (14.1)$$

We can easily extend the measure to the comparison between a single field-value ω_i ∈ X and the whole X' as

$$\mathbb{1}_{X'}(\omega_i) = \begin{cases} 1 & \text{if } \exists\, \omega'_j \in X' : S(\omega_i, \omega'_j) = 1 \\ 0 & \text{otherwise} \end{cases} \qquad (14.2)$$

Then, the dissimilarity between X and X' can be computed as the mismatching percentage of all field-values, i.e.,

$$mm(X, X') = 1 - \frac{1}{m} \sum_{i=1}^{m} \mathbb{1}_{X'}(\omega_i) \qquad (14.3)$$

and, to preserve symmetry, the degree of dissimilarity between X and X' can be computed as

$$D(X, X') = \frac{mm(X, X') + mm(X', X)}{2}. \qquad (14.4)$$
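As a minimal sketch of Eqs. (14.1)-(14.4) (our illustration, not the authors' implementation), each container can be modeled as a set of (path, field, value) triples, so that the joint equality test of Eq. (14.1) reduces to set membership:

```python
def mismatch(X, X_ref):
    """mm(X, X'): fraction of field-values of X not found (with the same path) in X'."""
    ref = set(X_ref)                      # (path, field, value) triples of the reference
    hits = sum(1 for omega in X if omega in ref)
    return 1.0 - hits / max(len(X), 1)

def dissimilarity(X, X_ref):
    """D(X, X'): symmetric degree of dissimilarity, in [0, 1]."""
    return 0.5 * (mismatch(X, X_ref) + mismatch(X_ref, X))

# Toy usage with hypothetical field-values:
X1 = {(("moov-2", "mvhd-1"), "@timescale", "1000"),
      (("ftyp-1",), "@majorBrand", "isom")}
X2 = {(("moov-2", "mvhd-1"), "@timescale", "600"),
      (("ftyp-1",), "@majorBrand", "isom")}
print(dissimilarity(X1, X2))  # 0.5: one of the two field-values differs
```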





Based on its definition, D(X, X') ∈ [0, 1], with D(X, X') = 0 when X = X' and D(X, X') = 1 when the two containers have no elements in common.

Experiments. The method is tested on 31 portable devices from the VISION dataset (Shullani et al. 2017), leading to an available collection of 578 videos in their native format, plus the corresponding social versions (YouTube and WhatsApp are considered). The metric in Eq. (14.4) is applied to determine whether a video's integrity is preserved or violated. The authors consider four different scenarios of integrity violation (see the paper for the implementation details):

• exchange through WhatsApp;
• exchange through YouTube;
• cut without re-encoding through FFmpeg;
• date editing through ExifTool.

For each of the four cases, we consider the set of videos X_1, ..., X_{N_{C_i}} available for each device C_i, and we compute the intra-class dissimilarities between two native videos X_i, X_j belonging to the same model, D_{i,j} = D(X_i, X_j), ∀ i ≠ j, based on Eq. (14.4). For simplicity, we denote with D_oo this set of dissimilarities. Then, we consider the corresponding inter-class dissimilarities D^t_{i,j} = D(X_i, X^t_j), ∀ i ≠ j, between a native video X_i and its corrupted version X^t_j obtained with the tool t (WhatsApp, YouTube, FFmpeg, or ExifTool).



Table 14.4 AUC values computed for all devices for each of the four attacks, and when the native video is compared to videos from other devices. In the case of ExifTool and Other Devices, the achieved min and max AUC values are reported. For all other cases, an AUC equal to 1 is always obtained

  Case   WhatsApp   YouTube   FFmpeg   ExifTool
  AUC    1.00       1.00      1.00     [0.64, 1.00]

We denote with D^t_oa this set of dissimilarities. By applying this procedure to all the considered devices, we collected 2890 samples for both D_oo and each of the four D^t_oa. In most of the performed tests, the maximum value achieved for D_oo is lower than the minimum value of D^t_oa, thus indicating that the two classes can be separated. In Table 14.4 we report the AUCs for each of the considered cases. Noticeably, for the integrity violations through FFmpeg, YouTube, and WhatsApp, the AUC is 1 for all devices. These values clearly show that the exchange through WhatsApp and YouTube strongly compromises the container structure, and thus it is easy to detect the corresponding integrity violation. Interestingly, cutting the video using FFmpeg, without any re-encoding, also results in a substantial container modification that can be detected. On the contrary, the date change with ExifTool induced, on some containers, a modification that is comparable with the observed intra-variability of native videos; in these cases, the method cannot detect it. Fourteen devices still achieve unitary AUC, eight more devices yield an AUC greater than 0.8, and the remaining nine devices yield an AUC below 0.8, with the lowest value of 0.64 obtained for D02 and D20 (Apple iPhone 4s and iPad mini). To summarize, for the video integrity verification tests, the method obtains perfect discrimination for videos altered by social networks or FFmpeg, while for ExifTool it achieves an AUC greater than 0.82 on 70% of the considered devices. It is also worth noticing that analyzing a video query requires less than a second. More detailed results are reported in the corresponding paper. The authors also show that this container description can be embedded in a likelihood ratio framework to determine which atoms and field-values are highly discriminative for specific classes. In this way, they also show how to classify the brand of the originating device (Iuliani et al. 2019).
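The threshold-free separability reported above can be measured, for instance, with a standard ROC AUC over the pooled dissimilarities; a small sketch with placeholder values follows (this is our illustration of the evaluation protocol, not the authors' code):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

D_oo = np.array([0.02, 0.05, 0.04])   # placeholder native-native dissimilarities
D_oa = np.array([0.35, 0.60, 0.48])   # placeholder native-attacked dissimilarities

scores = np.concatenate([D_oo, D_oa])
y_true = np.concatenate([np.zeros(len(D_oo)), np.ones(len(D_oa))])
print(roc_auc_score(y_true, scores))  # 1.0 when the two sets are fully separable
```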

14.3.3 Efficient Video Analysis

The method described in Iuliani et al. (2019), although effective, has a linear computational cost, since it requires checking the dissimilarity of the probe video against all available reference containers. As a consequence, increasing the reference dataset size leads to a higher computational effort.


In this section, we summarize the method proposed in Yang et al. (2020), which provides an efficient way to analyze video file containers independently of the reference dataset's size. The method, called EVA from now on, is based on decision trees (Quinlan 1986), a non-parametric learning method used for classification problems in many signal processing fields. Their key feature is breaking down a complex decision-making process into a collection of more straightforward decisions. The process is extremely efficient, since a decision can be taken by checking a small number of features. Furthermore, EVA allows both to characterize the identified manipulations and to provide an explanation for the outcome. The method is also enriched with a likelihood ratio framework designed to automatically clean up the container elements that only contribute to source intra-variability (for the sake of brevity, we do not report these details here). In short, EVA can identify the manipulating software (e.g., Adobe Premiere, ffmpeg, ...) and provide additional information related to the original content history, such as the source device operating system.

Technical Description. Similarly to the previous case, we consider a video container X as a labeled tree where internal nodes and leaves correspond, respectively, to atoms and field-value attributes. It can be characterized by a set of symbols {s_1, ..., s_m}, where s_i can be: (i) the path from the root to any field (value excluded), also called a field-symbol; or (ii) the path from the root to any field-value (value included), also called a value-symbol. An example of this representation is the following (note that @ identifies atom parameters; the root prefix is used for visualization purposes only and is not part of the container data):

s_1 = [ftyp/@majorBrand]
s_2 = [ftyp/@majorBrand/isom]
...
s_i = [moov/mvhd/@duration]
s_{i+1} = [moov/mvhd/@duration/73432]
...

Overall, we denote with Σ the set of all unique symbols s_1, ..., s_M available in the world set of digital video containers X = {X_1, ..., X_N}. Similarly, C = {C_1, ..., C_s} denotes a set of possible origins (e.g., Huawei P9, Apple iPhone 6s). Given a container X, the different structure of its symbols {s_1, ..., s_m} can be exploited to assign the video to a specific class C_u. For this purpose, binary decision trees (Safavian and Landgrebe 1991) are employed to build a set of hierarchical decisions. In each internal tree node, the input data is tested against a specific condition; the test outcome is used to select a child node as the next step in the decision process. Leaf nodes represent decisions taken by the algorithm.



More specifically, EVA adopts the growing-pruning-based Classification And Regression Trees (CART) (Breiman 2017). Given the number of unique symbols |Σ| = M, a video container X is converted into a vector of integers X → (x_1, ..., x_M), where x_i is the number of times that s_i occurs in X. This approach is inspired by the bag-of-words representation (Schütze et al. 2008), which is used to reduce variable-length documents to a fixed-length vectorial representation. For the sake of example, we consider two classes: C_u, iOS devices, native videos; and C_v, iOS devices, dates modified through ExifTool (see Sect. 14.3.3 for details). With these settings, the decision tree highlights that the usage of ExifTool can be decided simply by looking, for instance, at the presence of moov/udta/XMP_/@stuff.

Experiments. Tests were performed on 34 smartphones of 10 different brands. Over a thousand tampered videos were tested, considering both automated (ffmpeg, ExifTool) and manual editing operations (Kdenlive, Avidemux, and Adobe Premiere). The manipulations mainly include cutting with and without re-encoding, speed up, and slow motion. The obtained videos (both native and manipulated) were also exchanged through YouTube, Facebook, Weibo, and TikTok. All tests were performed by adopting an exhaustive leave-one-out cross-validation strategy: the dataset is partitioned into 34 subsets, each containing pristine, manipulated, and social-exchanged videos belonging to a specific device. In this way, the test accuracy collected after each iteration is computed on videos belonging to an unseen device. When compared to the state of the art, EVA exposes competitive effectiveness at the lowest computational effort (see Table 14.5). In comparison with Güera et al. (2019), it achieves higher accuracy. We can reasonably attribute this to their use of a smaller feature space; indeed, only a subset of the available pieces of information is extracted, without considering their position within the video container. On the contrary, EVA features also include the path from the root to the value, thus providing a higher discriminating power. Indeed, this approach distinguishes between two videos where the same information is stored in different atoms.
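To illustrate the bag-of-symbols idea described above, the following sketch (assumed toy data and helper names, not the EVA release) converts containers into symbol-count vectors and fits a CART-style decision tree with scikit-learn:

```python
from sklearn.tree import DecisionTreeClassifier

def to_count_vector(symbols, vocabulary):
    """x_i = number of times vocabulary symbol i occurs in the container."""
    return [symbols.count(s) for s in vocabulary]

# Hypothetical training data: lists of symbols per container, with labels.
containers = [
    ["ftyp/@majorBrand", "ftyp/@majorBrand/isom", "moov/mvhd/@duration"],    # native
    ["ftyp/@majorBrand", "ftyp/@majorBrand/isom", "moov/udta/XMP_/@stuff"],  # ExifTool
]
labels = ["native", "exiftool"]

vocabulary = sorted({s for c in containers for s in c})
X = [to_count_vector(c, vocabulary) for c in containers]

clf = DecisionTreeClassifier()  # scikit-learn implements an optimized CART variant
clf.fit(X, labels)

probe = to_count_vector(["ftyp/@majorBrand", "moov/udta/XMP_/@stuff"], vocabulary)
print(clf.predict([probe]))     # ['exiftool']
```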

Table 14.5 Comparison of our method with the state of the art. Values of accuracy and time are averaged over the 34 folds

  Method                   Balanced accuracy   Training time   Test time
  Güera et al. (2019)      0.67                347 s           < 1 s
  Iuliani et al. (2019)    0.85                N/A             8 s
  EVA                      0.98                31 s            < 1 s


When compared with Iuliani et al. (2019), EVA obtains better classification performance at a lower computational cost. In Iuliani et al. (2019), O(N) comparisons are required, since all the N reference-set examples must be compared with the tested video; on the contrary, the cost of a decision tree analysis is O(1), since the output is reached in a constant number of steps. Furthermore, EVA allows a simple explanation of the outcome. For the sake of example, we report in Fig. 14.2a a sample tree from the integrity verification experiments: the decision is taken by up to four checks, just based on the presence of the symbols ftyp/@minorVersion = 0, uuid/@userType, moov/udta/XMP_, and moov/udta/auth. Thus, a single decision tree can handle both easy- and hard-to-classify cases at the same time.

Manipulation Characterization. EVA is also capable of identifying the manipulating software and the operating system of the originating device. More specifically, three main questions were posed:

A. Software identification: Is the proposed method capable of identifying the software used to manipulate a video? If yes, is it possible to identify the operating system of the original video?
B. Integrity verification on social media: Given a video from a social media platform (YouTube, Facebook, TikTok, or Weibo), can we determine whether the original video was pristine or tampered?
C. Blind scenario: Given a video that may or may not have been exchanged through a social media platform, is it possible to retrieve some information on the video origin?

In the software identification scenario, the experiment comprises the following classes: "native" (136 videos), "Avidemux" (136 videos), "Exiftool" (136 videos), "ffmpeg" (680 videos), "Kdenlive" (136 videos), and "Premiere" (136 videos). As shown in Table 14.6, EVA obtains a balanced global accuracy of 97.6%, with slightly lower accuracy in identifying ffmpeg with respect to the other tools. These wrong classifications are reasonable because the ffmpeg library is used by other software and, internally, by Android devices. The algorithm maintains a high discriminative power even when trained to identify the originating operating system (Android vs. iOS). In a few cases, EVA wrongly classifies only the source's operating system; this mistake happens specifically in the case of Kdenlive and Adobe Premiere, while these programs themselves are always correctly identified. This result indicates that the container structure of videos saved by Kdenlive and Adobe Premiere is probably rebuilt in a software-specific way. More detailed performance figures are reported in the related paper. With regard to integrity verification on social media, EVA was also tested on YouTube, Facebook, TikTok, and Weibo videos to determine whether they were pristine or manipulated before the upload (Table 14.7). Results highlight poor performance, due to the social media transcoding process that flattens the containers almost independently of the video origin.


Fig. 14.2 Pictorial representation of some of the generated decision trees


Table 14.6 Confusion matrix under the software identification scenario for native contents (Na), Avidemux (Av), Exiftool (Ex), ffmpeg (Ff), Kdenlive (Kd), Premiere (Pr). Rows correspond to the true class and columns to the predicted class

        Na     Av     Ex     Ff     Kd     Pr
  Na    0.97   –      0.03   –      –      –
  Av    –      1.00   –      –      –      –
  Ex    0.01   –      0.99   –      –      –
  Ff    –      0.01   –      0.90   0.09   –
  Kd    –      –      –      –      1.00   –
  Pr    –      –      –      –      –      1.00

Table 14.7 Performance achieved for integrity verification on social media contents. For each social network we report the obtained accuracy, true negative rate (TNR), and true positive rate (TPR). All these performance measures are balanced

             Accuracy   TNR    TPR
  Facebook   0.76       0.40   0.86
  TikTok     0.80       0.51   0.75
  Weibo      0.79       0.45   0.82
  YouTube    0.60       0.36   0.74

For example, after YouTube transcoding, videos produced by Avidemux and by ExifTool are shown to have exactly the same container representation. We do not know how the considered platforms process the videos, due to the lack of public documentation, but we can assume that uploaded videos undergo custom and possibly multiple processing steps. Indeed, social media videos need to be viewable on a great range of platforms and thus need to be transcoded to multiple video codecs and adapted to various resolutions and bitrates. It therefore seems plausible that those operations discard most of the original container structure. Finally, in the blind scenario, the authors show that the prior information (whether the video belongs to social media or not) can be removed without degrading detection performance. In summary, EVA can be considered an efficient forensic method for checking video integrity. If a manipulation is detected, EVA can also identify the editing software and, in most cases, the video source device's operating system.

14.4 Concluding Remarks

In this chapter, we described how features extracted from image and video file containers can be exploited for integrity verification. In some cases, these features also provide insights into the probe's previous history.


In several cases, they allow the characterization of the source device's brand or model. Similarly, they allow deriving that the content was exchanged through a particular social network. This kind of information shows several advantages with respect to content-based traces. Firstly, it is usually much faster to extract and analyze than content-based traces. It also requires an almost fixed computational effort, independent of the media size and resolution (this characteristic is particularly relevant for videos). Secondly, it is also much more challenging to perform anti-forensic operations on these features, since the structure of an image or video file is quite complex; thus, it is not straightforward to mimic a native file container. There are, however, several drawbacks to be taken into account. First of all, current editing tools and metadata editors cannot access such low-level information, so a proper parser must be designed to extract the information stored in the file container. Moreover, these features' effectiveness strongly depends on the availability of updated reference datasets of native and manipulated contents. Finally, these features cannot discriminate the full history of a media item, since some processing operations, like uploading to a social network, tend to remove the traces left by previous steps. It is also worth mentioning that the forensic analysis of social media content is fragile, since the providers' settings keep changing; this fact makes collected data and achieved results outdated in a short time. Furthermore, collecting large datasets can be challenging, depending on the social media provider's upload and download protocols. Future work should be devoted to the fusion of content-based and format-based features, since it is expected that the joint exploitation of these two domains can provide better performance, as witnessed in Phan et al. (2019), where images exchanged on multiple social networks were studied.

References Apache. Java mp4 parser. http://www.github.com/sannies/mp4parser Apple Computer, Inc. Quicktime file format. 2001 Carlos Quinto Huamán, Ana Lucila Sandoval Orozco, and Luis Javier García Villalba. Authentication and integrity of smartphone videos through multimedia container structure analysis. Future Generation Computer Systems, 108:15–33, 2020 A. Castiglione, G. Cattaneo, and A. De Santis. A forensic analysis of images on online social networks. In 2011 Third International Conference on Intelligent Networking and Collaborative Systems, pages 679–684, 2011 Dasara Shullani, Marco Fontani, Massimo Iuliani, Omar Al Shaya, and Alessandro Piva. VISION: a video and image dataset for source identification. EURASIP Journal on Information Security, 2017(1):15, 2017 David Güera, Sriram Baireddy, Paolo Bestagini, Stefano Tubaro, and Edward J Delp. We need no pixels: Video manipulation detection using stream descriptors. In International Conference on Machine Learning (ICML), Synthetic Realities: Deep Learning for Detecting AudioVisual Fakes Workshop, 2019 Duc-Tien Dang-Nguyen, Cecilia Pasquini, Valentina Conotter, and Giulia Boato. Raise: A raw images dataset for digital image forensics. In Proceedings of the 6th ACM multimedia systems conference, pages 219–224, 2015 Eric Hamilton. Jpeg file interchange format. 2004


Eric Kee, Johnson Micah K, Hany Farid (2011) Digital image authentication from JPEG headers. IEEE Trans. Inf. Forensics Secur. 6(3–2):1066–1075 Erik Gelbing, Leon Würsching, Sascha Zmudzinski, and Martin Steinebach. Video source identification from mp4 data based on field values in atom/box attributes. In Proceedings of Media Watermarking, Security, and Forensics 2021, 2021 H. Farid. Digital image ballistics from jpeg quantization: a followup study. [Technical Report TR2008-638] Department of Computer Science, Dartmouth College, Hanover, NH, USA, 2008 Hany Farid. Digital image ballistics from jpeg quantization. [Technical Report TR9-1-2006] Department of Computer Science, Dartmouth College, Hanover, NH, USA, 2006 Hinrich Schütze, Christopher D Manning, and Prabhakar Raghavan. Introduction to information retrieval, volume 39. Cambridge University Press Cambridge, 2008 International Telecommunication Union. Advanced video coding for generic audiovisual services h.264. https://www.itu.int/rec/T-REC-H.264/, 2016 International Telecommunication Union. High efficiency video coding. https://www.itu.int/rec/TREC-H.265, 2018 ISO/IEC 10918-1. Information technology. digital compression and coding of continuous-tone still images. ITU-T Recommendation T.81, 1992 ISO/IEC 14496. Information technology. coding of audio-visual objects, part 12: Iso base media file format, 3rd ed. 2008 ISO/IEC 14496. Information technology. coding of audio-visual objects, part 14: Mp4 file format. 2003 JEITA CP-3451. Exchangeable image file format for digital still cameras: Exif version 2.2. 2002 Jesse D. Kornblum. Using jpeg quantization tables to identify imagery processed by software. Digital Investigation, 5:S21–S25, 2008. The Proceedings of the Eighth Annual DFRWS Conference Jieun Song, Kiryong Lee, Wan Yeon Lee, and Heejo Lee. Integrity verification of the ordered data structures in manipulated video content. Digital Investigation, 18:1–7, 2016 JPEG: Still image data compression standard. Springer Science & Business Media, 1992 Leo Breiman. Classification and regression trees. Routledge, 2017 Massimo Iuliani, Dasara Shullani, Marco Fontani, Saverio Meucci, Alessandro Piva (2019) A video forensic framework for the unsupervised analysis of mp4-like file container. IEEE Trans. Inf. Forensics Secur. 14(3):635–645 Microsoft. Avi riff file. https://docs.microsoft.com/en-us/windows/win32/directshow/avi-riff-filereference, 1992 Oliver Giudice, Antonino Paratore, Marco Moltisanti, and Sebastiano Battiato. A classification engine for image ballistics of social data. In Sebastiano Battiato, Giovanni Gallo, Raimondo Schettini, and Filippo Stanco, editors, Image Analysis and Processing - ICIAP 2017, pages 625– 636, Cham, 2017. Springer International Publishing Patrick Mullan, Christian Riess, Felix Freiling (2019) Forensic source identification using jpeg image headers: The case of smartphones. Digital Investigation 28:S68–S76 Q. Phan, G. Boato, R. Caldelli, and I. Amerini, Tracking multiple image sharing on social networks. In: 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2019), pages 8266–8270, 2019 Pengpeng Yang, Daniele Baracchi, Massimo Iuliani, Dasara Shullani, Rongrong Ni, Yao Zhao, Alessandro Piva (2020) Efficient video integrity analysis through container characterization. IEEE J. Sel. Top. Signal Process. 14(5):947–954 Raquel Ramos López, Elena Almaraz Luengo, Ana Lucila Sandoval Orozco, and Luis Javier GarcíaVillalba. 
Digital video source identification based on container’s structure analysis. IEEE Access, 8:36363–36375, 2020 J. Ross Quinlan. Induction of decision trees. Machine learning, 1(1):81–106, 1986 S Rasoul Safavian and David Landgrebe. A survey of decision tree classifier methodology. IEEE transactions on systems, man, and cybernetics, 21(3):660–674, 1991 SWGDE-SWGIT. SWGDE best practices for maintaining the integrity of imagery, Version: 1.0, July 2017


Thomas Gloe. Forensic analysis of ordered data structures on the example of JPEG files. In 2012 IEEE International Workshop on Information Forensics and Security, WIFS 2012, Costa Adeje, Tenerife, Spain, December 2-5, 2012, pages 139–144. IEEE, 2012 Thomas Gloe, and Rainer Bähme (2010) The Dresden image database for benchmarking digital image forensics. J Digit Forensic Pract 3(2–4):150–159 Thomas Gloe, André Fischer, Matthias Kirchner (2014) Forensic analysis of video file formats. Digit. Investig. 11(1):S68–S76

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Chapter 15

Image Provenance Analysis

Daniel Moreira, William Theisen, Walter Scheirer, Aparna Bharati, Joel Brogan, and Anderson Rocha

The literature of multimedia forensics is mainly dedicated to the analysis of single assets (such as sole image or video files), aiming at individually assessing their authenticity. Different from this, image provenance analysis is devoted to the joint examination of multiple assets, intending to ascertain their history of edits, by evaluating pairwise relationships. Each relationship, thus, expresses the probability of one asset giving rise to the other, through either global or local operations, such as data compression, resizing, color-space modifications, content blurring, and content splicing. The principled combination of these relationships unveils the provenance of the assets, also constituting an important forensic tool for authenticity verification. This chapter introduces the problem of provenance analysis, discussing its importance and delving into the state-of-the-art techniques to solve it.

15.1 The Problem

Consider a questioned media asset, namely a query (such as a digital image whose authenticity is suspect), and a large corpus of media assets (such as the Internet). Provenance analysis comprises the problem of (i) finding, within the available corpus, the assets that directly and transitively share content with the query, as well as of (ii) establishing the derivation and content-donation processes that explain the existence of the query.


Fig. 15.1 Images that became viral on the web in the last decade, all with unknown sources. In a, the claimed world’s first dab, supposedly captured during WWII. The dabbing soldier was highlighted, to make him more noticeable. In b, the claimed proof of an unlikely friendship between the Notorious B.I.G., on the left, and Kurt Cobain, on the right. In c, a photo of a supposed NATO meeting aimed at supporting a particular political narrative

Fig. 15.2 A reverse image search of Fig. 15.1a leads to the retrieval of these two images, among others. Figure 15.1a is probably the result of cropping (a), which in turn is a color transformation of (b). The dabbing soldier was highlighted in a, to make him more noticeable. Image (b) is, thus, the source, comprising a behind-the-scenes picture from Dunkirk (2017)

Take, for example, the three queries depicted in Fig. 15.1, which became viral images in the last decade. Reasons for their virality range from the popularity of harmless pop-culture jokes and historical oddities (such as in the case of Fig. 15.1a, b) to interest in more critical political narratives and agendas (such as in Fig. 15.1c). Provenance analysis offers a principled and automated framework to debunk such media types by retrieving and associating other assets that help to elucidate their authenticity. To get a glimpse of the expected outcome of performing provenance analysis, one could manually submit each one of the three queries depicted in Fig. 15.1 to a reverse image search engine, such as TinEye (2021), and try to select and associate the retrieved images to the queries by hand. This process is illustrated through Figs. 15.2, 15.3, and 15.4.


Fig. 15.3 A reverse image search of Fig. 15.1b leads to the retrieval of these two images, among others. Figure 15.1b is probably a composition of (a) and (b), where a donates the background, while b donates the Notorious B.I.G. on his car’s seat. Cropping and color corrections are also performed to complete the forgery

Fig. 15.4 A typical reverse image search of Fig. 15.1c leads to the retrieval of a but not b. Image b was obtained through one of the content retrieval techniques presented in Sect. 15.2 (context incorporation). Figure 15.1c is, thus, a composition of a and b, where a donates the background and b donates Vladimir Putin. Cropping and color corrections are also performed to complete the forgery

Based on Fig. 15.2, for instance, one can figure out that the claimed world's first dab, supposedly captured during WWII, is a crop of Fig. 15.2a, which in turn is a color modified version of Fig. 15.2b, a well-known picture of the cast of the Hollywood movie Dunkirk (2017). Figure 15.3, in turn, helps to reveal that Fig. 15.1b is actually a forgery (composition), where Fig. 15.3a probably serves as the background for the splicing of the Notorious B.I.G. on his car's seat, taken from Fig. 15.3b. In a similar fashion, Fig. 15.4 points out that Fig. 15.1c is also a composition, this time using Fig. 15.4a as background and Fig. 15.4b as the donor of the portrayed individual. In this particular case, Fig. 15.4b is not easily found by a typical reverse image search. To find it, we had to perform context incorporation, a content retrieval strategy adapted to provenance analysis that is explained in Sect. 15.2. In the era of misinformation and "fake news", there is a symptomatic crisis of trust in the media assets shared online. People are aware of editing software, with which even unskilled users can quickly fabricate and manipulate content. Although many of these manipulations have benign purposes (no, there is nothing wrong with the silly memes you share), some content is generated with malicious intent (general public deception and propaganda), and some modifications may undermine the ownership of the media assets.


Under this scenario, provenance analysis reveals itself as a convenient tool to expose the provenance of the assets, aiding in the verification of their authenticity, protecting their ownership, and restoring credibility. In the face of the massive amount of data produced and shared online, though, there is no room for performing provenance analysis manually, as formerly described. Besides the need for particular adaptations such as context incorporation (see Sect. 15.2), provenance analysis must also be performed at scale, automatically and efficiently. By combining ideas from image processing, computer vision, graph theory, and multimedia forensics, provenance analysis constitutes an interesting interdisciplinary endeavor, which we delve into in detail in the following sections.

15.1.1 The Provenance Framework

Provenance analysis can be executed at scale and fully automated by following a basic framework that involves two stages. Such a framework is depicted in Fig. 15.5. As one might observe, the first stage is always related to the activity of content retrieval, which takes a questioned media asset (a.k.a. the query) and a corpus of media assets, and retrieves a selection of assets of interest that are related to the query. Figure 15.6 depicts the expected outcome of content retrieval for Fig. 15.1b as the query. In the case of provenance analysis, the content retrieval activity must retrieve not only the objects that directly share content with the query but also those that do so transitively. Take, for example, images 1, 2, and 3 within Fig. 15.6, which all share some visual elements with the query.

Fig. 15.5 The provenance framework. Provenance analysis is usually executed in two stages. Starting from a given query and a corpus of media assets, the first stage is always related to the content retrieval activity, which is herein explained in Sect. 15.2. The first stage’s output is a list of assets of interest (content list), which is fed to the second stage. The second stage, in turn, may either comprise the activity of graph construction (discussed within Sect. 15.3) or the activity of content clustering (discussed within Sect. 15.4). While the former activity aims at organizing the retrieved assets in a provenance graph, the latter focuses on establishing meaningful asset clusters


Fig. 15.6 Content retrieval example for a given query. The desired output of content retrieval for provenance analysis is a list of media assets that share content with the query, either directly (such as objects 1, 2, and 3) or transitively (object 4, through object 1). Methods to perform content retrieval are discussed in Sect. 15.2

Image 4, however, has nothing in common with the query. Yet its retrieval is still desirable because it shares visual content with image 1 (the head of Tupac Shakur, on the right), and hence it is related to the query transitively. Techniques to perform content retrieval for the provenance analysis of images are presented and discussed in Sect. 15.2. Once a list of related media assets is available, the provenance framework's typical execution moves forward to the second stage, which has two alternate modes. The first, provenance graph construction, aims at computing the directed acyclic graph whose nodes individually represent the query and the related media assets, and whose edges express the edit and content-donation story (e.g., cropping, blurring, splicing, etc.) between pairs of assets, linking seminal to derived elements. It thus embodies the provenance of the objects it contains. Figure 15.7 provides an example of a provenance graph, constructed for the media assets depicted in Fig. 15.6. As shown, the query is probably a crop of image 1, which is a composition: it uses image 2 as background and splices objects from images 3 and 4 to complete the forgery. Methods to construct provenance graphs from sets of images are presented in Sect. 15.3. The provenance framework may be changed by replacing the second stage of graph construction with a content clustering approach. This setup is sound in the study of contemporary communication on the Internet, such as the exchange of memes and the reproduction of viral movements. Dabbing (see Fig. 15.1a), for instance, is an example of this phenomenon. In these cases, the users' intent is not limited to retrieving near-duplicate variants or compositions that make use of a given query. They are also interested in the retrieval and grouping of semantically similar objects, which may greatly vary in appearance, to elucidate the provenance of a trend. This situation is represented in Fig. 15.8. Since graph construction may not be the best approach to organize semantically similar media assets obtained during content retrieval, content clustering reveals itself as an interesting option, as we discuss in Sect. 15.4.


Fig. 15.7 Provenance graph example. This graph is constructed using the media assets retrieved in the example given through Fig. 15.6. In a nutshell, it expresses that the query is probably a crop of image 1, which in turn is a composition forged by combining the contents of images 2, 3, and 4. Methods to perform provenance graph construction are presented in Sect. 15.3

Fig. 15.8 Content retrieval of semantically similar assets. Some applications might aim at identifying particular behaviors or gestures that are depicted in a meme-style viral query. That might be the case for someone trying to understand the trend of dabbing, which is reproduced in all of the images above. In such a case, provenance content retrieval might still be helpful to fetch related objects. Graph construction, however, may be rendered useless due to the dominant presence of semantically similar objects rather than near-duplicates or compositions. In such a situation, we propose using content clustering as the second stage of the provenance framework, which we discuss in Sect. 15.4

15.1.2 Previous Work

Early Work: De Rosa et al. (2010) have mentioned the importance of considering groups of images instead of single images while performing media forensics. Starting with a set of images of interest, they have proposed to express pairwise image dependencies through the analysis of the mutual information between every pair of images. By combining these dependencies into a single correlation adjacency matrix for the entire image set, they have suggested the generation of a dependency graph, whose edges should be individually evaluated as being in accordance with a particular set of image transformation assumptions.


These assumptions should presume a known set of operations, such as rotation, scaling, and JPEG compression. Edges that do not fit these assumptions should be removed. Similarly, Kennedy and Chang (2008) have explored solutions to uncover the processes through which near-duplicate images have been copied or manipulated. By relying on the detection of a closed set of image manipulation operations (such as cropping, text overlaying, and color changing), they have proposed a system to construct visual migration maps: graph data structures devised to express the parent-child derivation operations between pairs of images, being equivalent to the dependency graphs proposed in De Rosa et al. (2010). Image Phylogeny Trees: Rather than modeling an exhaustive set of possible operations between near-duplicate images, Dias et al. (2012) designed and adopted a robust image similarity function to compute a pairwise image similarity matrix M. For that, they have introduced an image similarity calculation protocol to generate M, which is widely used across the literature, including provenance analysis. This method is detailed in Sect. 15.3. To obtain a meaningful image phylogeny tree from M, representing the evolution of the near-duplicate images of interest, they have introduced oriented Kruskal, a variation of Kruskal's algorithm that extracts an oriented optimum spanning tree from M. As expected, phylogeny trees are analogous to the aforementioned dependency graphs and visual migration maps. In subsequent work, Dias et al. (2013) have reported a large set of experiments with their methodology in the face of a family of six possible image operations, namely scaling, warping, cropping, brightness and contrast changes, and lossy compression. Moreover, Dias et al. (2013) have also explored the replacement of oriented Kruskal with other phylogeny tree-building methods, such as oriented Prim and Edmonds' optimum branching. Melloni et al. (2014) have contributed to the topic of image phylogeny tree reconstruction by investigating ways to combine different image similarity metrics. Bestagini et al. (2016) have focused on the clues left by local image operations (such as object splicing, object removal, and logo insertion) to reconstruct the phylogeny trees of near-duplicates. More recently, Zhu and Shen (2019) have proposed heuristics to improve phylogeny trees by correcting local image inheritance relationship edges. Castelletto et al. (2020), in turn, have advanced the state of the art by training a denoising convolutional autoencoder that takes an image similarity adjacency matrix as input and returns an optimum spanning tree as the desired output. Image Phylogeny Forests: All the techniques mentioned so far were conceived to handle near-duplicates. Aiming at providing a solution to deal with semantically similar images, Dias et al. (2013) have extended the oriented Kruskal method proposed in Dias et al. (2012) to what they named automatic oriented Kruskal. This technique is an algorithm to compute a family of disjoint phylogeny trees (hence a phylogeny forest) from a given set of near-duplicate and semantically similar images, such that each disjoint tree describes the relationships of a particular group of near-duplicates.
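Before continuing with phylogeny forests, the core tree-building step shared by these methods can be sketched with off-the-shelf tools: given an asymmetric pairwise similarity matrix, a maximum spanning arborescence (Edmonds' optimum branching) yields an oriented tree. This is only an illustration of the general idea, not the oriented Kruskal or optimum branching implementations of the cited works, and the similarity values below are made up.

```python
import networkx as nx
import numpy as np

def phylogeny_tree(sim):
    """sim[i, j]: similarity supporting 'image i generated image j' (i != j)."""
    n = sim.shape[0]
    g = nx.DiGraph()
    for i in range(n):
        for j in range(n):
            if i != j:
                g.add_edge(i, j, weight=float(sim[i, j]))
    # Edmonds' algorithm: the spanning arborescence of maximum total weight.
    return nx.maximum_spanning_arborescence(g)

# Toy usage with a made-up 3-image similarity matrix.
sim = np.array([[0.0, 0.9, 0.8],
                [0.5, 0.0, 0.7],
                [0.4, 0.6, 0.0]])
tree = phylogeny_tree(sim)
print(sorted(tree.edges()))  # [(0, 1), (0, 2)]: image 0 is the inferred root
```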


In the same direction, Costa et al. (2014) have provided two extensions to the optimum branching algorithm proposed in Dias et al. (2013), namely automatic optimum branching and extended automatic optimum branching. Both solutions are based on the automatic calculation of branching cut-off execution points. Alternatively, Oikawa et al. (2015) have proposed the use of clustering techniques for finding the various disjoint phylogeny trees. Images coming from the same source (near-duplicates) should be placed in the same cluster, while semantically similar images should be placed in different clusters. Milani et al. (2016), in turn, have suggested relying on the estimation of the geometric localization of captured viewpoints within the images as a manner to distinguish near-duplicates (which should share viewpoints) from semantically similar objects (which should present different viewpoints). Lastly, Costa et al. (2017) have introduced solutions to improve the creation of the pairwise image similarity matrices, even in the presence of semantically similar images and regardless of the graph algorithm used to construct the phylogeny trees. Multiple Parenting: The previously mentioned phylogeny work did not address the critical scenario of image compositions, in which objects from one image are spliced into another. Aiming at dealing with these cases, de Oliveira et al. (2015) have modeled every composition as the outcome of two parents (one donor, which provides the spliced object, and one host, which provides the composition background). Extended automatic optimum branching, proposed in Costa et al. (2014), should then be applied for the reconstruction of ideally three phylogeny trees: one for the near-duplicates of the donor, one for the near-duplicates of the host, and one for the near-duplicates of the composite. Other Types of Media: Besides processing still images, some works in the literature have addressed the phylogeny reconstruction of assets belonging to other types of media, such as video (see Dias et al. 2011; Lameri et al. 2014; Costa et al. 2015, 2016; Milani et al. 2017) and even audio (see Verde et al. 2017). In particular, Oikawa et al. (2016) have investigated the role of similarity computation between digital objects (such as images, video, and audio) in multimedia phylogeny. Provenance Analysis: The herein-mentioned literature of media phylogeny has made use of diverse metrics, individually focused on retrieving either the root, the leaves, or the ancestors of a node within a reconstructed phylogeny tree, or on evaluating the tree as a whole. Moreover, the datasets used in the experiments presented different types of limitations, such as either containing only images in JPEG format or lacking compositions with more than two sources (cf. object 1 depicted in Fig. 15.7). Aware of these limitations and aiming to foster more research on the topic of multi-asset forensic analysis, the American Defense Advanced Research Projects Agency (DARPA) and the National Institute of Standards and Technology (NIST) have joined forces to introduce new terminology, metrics, and datasets (all presented herein, in the following sections), within the context of the Media Forensics (MediFor) project (see Turek 2021). Therefore, they coined the term Provenance Analysis to express a broader notion of phylogeny reconstruction, in the sense of including not only the task of reconstructing the derivation stories of the assets of interest but also the fundamental step of retrieving these assets (as we discuss in Sect. 15.2).


Furthermore, DARPA and NIST have suggested the use of directed acyclic graphs (a.k.a. provenance graphs), instead of groups of trees, to better represent the derivation story of the assets.

15.2 Content Retrieval

With the amount of content on the Internet being so vast, performing almost any type of computationally expensive analysis across the web's entirety, or even across smaller subsets for that matter, is simply intractable. Therefore, when setting out to tackle the task of provenance analysis, a solution must start with an algorithm for retrieving a reasonably sized subset of relevant data from a larger corpus of media. With this in mind, effective strategies for content retrieval become an integral module within the provenance analysis framework. This section focuses specifically on image retrieval algorithms that provide results contingent on one or multiple images given as queries to the system. This kind of image retrieval is commonly known as reverse image search or, more technically, Content-Based Image Retrieval (CBIR). For provenance analysis, an appropriate CBIR algorithm should produce a corpus subset that contains a rich collection of images with relationships relevant to the provenance of the query image.

15.2.1 Approaches

Typical CBIR: Typical CBIR solutions employ multi-level representations of the processed images to reduce the semantic gap between the image pixel values (at the low level) and the system user's retrieval intent (at the high level, see Liu et al. 2007). Having provenance analysis in mind, the primary intent is to trace connections between images that mutually share visual content. For instance, when performing the query from Fig. 15.6, one would want a system to retrieve images that contain identical or near-identical structural elements (such as the corresponding images, respectively, depicting the Notorious B.I.G. and Kurt Cobain, both used to generate the composite query, but not other images of unrelated people sitting in cars). Considering that, we can describe an initial concrete example of a CBIR system that is not entirely suited to provenance analysis and then gradually provide methods to make it more suitable to the problem at hand. A typical and moderately useful CBIR system for provenance analysis may rely on local features (a.k.a. keypoints) to obtain a low-level representation of the image content. Hence, it may consist of four steps, namely: (1) feature extraction, (2) feature compression, (3) feature retrieval, and (4) result ranking.


Fig. 15.9 SURF keypoints extracted from a given image. Each yellow circle represents a keypoint location, whose pixel values generate an n-dimensional feature vector. In a, a regular extraction of SURF keypoints, as defined by Bay et al. (2008). In b, a modified keypoint extraction dubbed distributed keypoints, proposed by Moreira et al. (2018), whose intention is also to describe homogeneous regions, such as the wrist skin and clapboard in the background. Images adapted from Moreira et al. (2018)

Feature extraction comprises the task of computing thousands of n-dimensional representations for each image, ranging from classical handcrafted technologies, such as the Scale Invariant Feature Transform (SIFT, see Lowe 2004) and Speeded-up Robust Features (SURF, see Bay et al. 2008), to neural-network-learned methods, such as the Learned Invariant Feature Transform (LIFT, see Yi et al. 2016) and Deep Local Features (DELF, see Noh et al. 2017). Figure 15.9a depicts an example of SURF keypoints extracted from a target image. Although the feature-based representation of images drastically reduces the amount of space needed to store their content, there is still a necessity for reducing their size, a task performed during the second CBIR step of feature compression. Take, for example, a CBIR system that extracts 1,000 64-dimensional floating-point SURF features from each image. Using 4 bytes for each dimension, each image occupies a total of 4 × 64 × 1000 = 256,000 bytes (or 256 kB) of memory space. While that may seem relatively low from a single-image standpoint, consider an image database containing ten million images (which is far from being an unrealistic number, considering the scale of the Internet). This would mean the need for an image index on the order of 2.5 terabytes. Instead, we recommend utilizing Optimized Product Quantization (OPQ, see Ge et al. 2013) to reduce the size of each feature vector. In summary, OPQ learns grids of bins arranged along different axes within the feature vector space. These bin grids are rotated and scaled within the feature vector space to optimally distribute the feature vectors extracted from an example training set. This provides a significant reduction in the number of bits required to describe a vector while keeping relatively high fidelity. For the sake of illustration, a simplified two-dimensional example of OPQ bins is provided in Fig. 15.10. The third CBIR step (feature retrieval) aims at using the local features to index and compare, within the optimized n-dimensional space they constitute and through Euclidean distance or a similar method, pairs of image localities. Inverted File Indices (IVF, see Baeza-Yates and Ribeiro-Neto 1999) are the index structure commonly used in the feature retrieval step.


Fig. 15.10 A simplified 2D representation of OPQ. A set of points in the feature space a is redescribed by rotating and translating grids of m bins to high-density areas (b), in red. The grid intervals and axis are learned separately for each dimension. Each dimension of a feature point can then be described using only m bits

Utilizing a feature vector binning scheme, such as that of OPQ, an index is generated such that it stores a list of the features contained within each bin. Each stored feature, in turn, points back to the image it was extracted from (hence the inverted terminology). To retrieve the nearest neighbors of a given query feature, the only calculation required is to determine the query feature's respective bin within the IVF. Once this bin is known, the IVF can return the list of feature vectors in that bin and in the nearest surrounding neighbor bins. Given that each feature vector points back to its source image, one can trace back the database images that are similar to the query. This simple yet powerful method provides easy scalability and distributability to index search. This is the main storage and retrieval structure used within powerful state-of-the-art search frameworks, such as the open-source Facebook Artificial Intelligence Similarity Search (FAISS) library introduced by Johnson et al. (2019). Lastly, the result ranking step takes care of polling the feature-wise most similar database images to the query. The simplest metric with which one can rank the relatedness of the query image with the other images is feature voting (see Pinto et al. 2017). To perform it, one must iterate through each query feature and its retrieved nearest neighbor features. Then, by checking which database image each returned feature belongs to, one must accumulate a tally of how many features are matched to the query for each image. This final tally is then utilized as the votes for each database image, and these images are ranked (or ordered) accordingly. An example of this method can be seen in Fig. 15.11. With these four steps implemented, one already has access to a rudimentary image search system capable of retrieving both near-duplicates and semantically similar images. While operational, this system is not particularly powerful when tasked with finding images with nuanced inter-image relationships relevant to provenance analysis. In particular, images that share only small amounts of content with the query image will not receive many votes in the matching process and may not be retrieved as a relevant result. Instead, this system can be utilized as a foundation for a retrieval system more fine-tuned to the task of image provenance analysis. In this regard, four methods to improve a typical CBIR solution are discussed in the following.
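Before moving to those improvements, the four steps just described can be strung together with off-the-shelf components. The sketch below (our own illustration; the file names, index string, and parameters are placeholders, and it is not the code of the cited systems) extracts SIFT descriptors with OpenCV, indexes them with a FAISS OPQ+IVF+PQ index, and ranks gallery images by feature voting.

```python
import cv2
import faiss
import numpy as np

def extract_descriptors(path, n_features=1000):
    """Detect local keypoints and return their descriptors (float32, N x 128)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create(nfeatures=n_features)
    _, desc = sift.detectAndCompute(img, None)
    return desc if desc is not None else np.zeros((0, 128), np.float32)

gallery_paths = ["img_0.jpg", "img_1.jpg", "img_2.jpg"]  # placeholder gallery

# Stack gallery descriptors and remember which image each one came from.
descs, owners = [], []
for img_id, p in enumerate(gallery_paths):
    d = extract_descriptors(p)
    descs.append(d)
    owners.extend([img_id] * len(d))
descs = np.vstack(descs).astype(np.float32)
owners = np.asarray(owners)

# OPQ rotation + inverted file + product quantization, as in the described pipeline.
index = faiss.index_factory(descs.shape[1], "OPQ16,IVF256,PQ16")
index.train(descs)
index.add(descs)
faiss.extract_index_ivf(index).nprobe = 8  # visit a few neighboring bins per query

# Feature voting: each query descriptor votes for the images owning its nearest neighbors.
query_desc = extract_descriptors("query.jpg")
_, nn_ids = index.search(query_desc, 5)
valid = nn_ids.ravel()
valid = valid[valid >= 0]                  # drop empty results (returned as -1)
votes = np.bincount(owners[valid], minlength=len(gallery_paths))
ranking = np.argsort(-votes)               # gallery images ordered by vote count
```

The index string is only one possible choice; in practice, the number of IVF lists and PQ sub-quantizers would be tuned to the gallery size and memory budget.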


Fig. 15.11 A 2D simplified representation of IVF and feature voting. Local features are extracted from three images, representing a small gallery. The feature vector space is partitioned into a grid, and gallery features are indexed within this grid. Local features are then extracted from the query image of Reddit Superman (whose chest symbol and underwear are modified versions of the yawning cat’s mouth depicted within the first item of the gallery). Each local query feature is fed to the index, and nearest neighbors are retrieved. Colored lines represent how each local feature matches the IVF features and subsequently map back to individual gallery features. We see how these feature matches can be used in a voting scheme to rank image similarities in the bottom left

Distributed Keypoints: To take the best advantage of these concepts, we must ensure that the extracted keypoints for the local features used within the indexing and retrieval pipeline do a good job describing the entire image. Most keypoint detectors return points that lie on corners, edges, or other high-entropy patches containing visual information. This, unfortunately, means that some areas within an image may be given too many keypoints and may be over-described, while other areas may be entirely left out, receiving no keypoints at all. An area with no keypoints and, thus, no representation within the database image index has no chance of being correctly retrieved if a query with similar features comes along. To mitigate over-description and under-description of image areas, Moreira et al. (2018) proposed an additional step in the keypoint detection process, which is the avoidance and removal of keypoints that present too much overlap with others. Such elements are then replaced with keypoints coming from lower-entropy image regions, allowing for a more distributed content description. Figure 15.9b depicts an example of applying this strategy, in comparison with the regular SURF extraction approach (a simplified sketch of this filtering step is given below). Context Incorporation: One of the main reasons a typical CBIR solution performs poorly in an image provenance scenario is the nature behind many image manipulations within provenance cases. These manipulations often consist of composite images with small objects coming from donor images. An example of these types of relationships is shown in Fig. 15.12.
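As a rough illustration of the keypoint-distribution idea described above, the sketch below over-detects keypoints and then greedily drops those that overlap too much with already-kept ones, so that lower-entropy regions also receive coverage. OpenCV's SIFT is used here as a stand-in for SURF, and the overlap rule and parameter values are assumptions rather than the exact criterion of Moreira et al. (2018).

```python
# Sketch of keypoint thinning for a more distributed image description.
import cv2
import numpy as np

def distributed_keypoints(gray, max_kp=5000, overlap=0.5):
    detector = cv2.SIFT_create(nfeatures=4 * max_kp)      # over-detect, then thin out
    keypoints = sorted(detector.detect(gray, None),
                       key=lambda k: k.response, reverse=True)
    kept = []
    for kp in keypoints:
        x, y, r = kp.pt[0], kp.pt[1], kp.size / 2.0
        too_close = any(
            np.hypot(x - q.pt[0], y - q.pt[1]) < overlap * (r + q.size / 2.0)
            for q in kept
        )
        if not too_close:
            kept.append(kp)                                # keep only well-spread keypoints
        if len(kept) == max_kp:
            break
    return kept

gray = (np.random.rand(480, 640) * 255).astype(np.uint8)  # stand-in grayscale image
print(len(distributed_keypoints(gray, max_kp=500)))
```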


Fig. 15.12 An example of composite from the r/photoshopbattles subreddit. The relations of the composite with the images donating small objects, such as the hummingbird and the squirrel, challenge the retrieval capabilities of a typical CBIR solution. Context incorporation comes in handy in these situations. Image adapted from Joel Brogan et al. (2021)

These types of image relationships do not lend themselves to a naive feature voting strategy, as the size of the donated objects is often too small to garner enough local feature matches to impart a high vote score with the query image. We can augment the typical CBIR pipeline with an additional step, namely context incorporation, to solve this problem. Context incorporation takes into consideration the top N retrieved images for a given query, to accurately localize areas within the query and the retrieved images that differ from each other (most likely due to a composite or manipulation), to generate attention masks, and to re-extract features over only the distinct regions of the query, for a second retrieval execution. By using only these features, the idea is that the additional retrieval and voting steps will be driven towards finding the images that have donated small regions to the query due to the absence of distractors. Figure 15.13 depicts an example where context incorporation is crucial to obtain the image that has donated Vladimir Putin (Fig. 15.13d) to a questioned query (Fig. 15.13a), in the first positions of the result rank. Different approaches for performing context incorporation, including attention mask generation, were proposed and benchmarked by Brogan et al. (2017), while an end-to-end CBIR pipeline employing such a strategy was discussed by Pinto et al. (2017). Iterative Filtering: Another requirement of content retrieval within provenance analysis is the recovery of images directly related to the query and transitively related to it. That is the case of image 4 depicted within Fig. 15.6, which does not share content


Fig. 15.13 Context incorporation example. In a, the query image. In b, the most similar retrieved near-duplicate (top 1 image) through a typical CBIR solution. In c, the attention mask highlighting the different areas between a and b, after proper content registration. In d, the top 1 retrieved image after using c as a new query (a.k.a., context incorporation). The final result rank may be a combination of the two ranks after using a and c as a query, respectively. Image d is only properly retrieved, from the standpoint of provenance analysis, thanks to the execution of context incorporation

directly with the query, but is related to it through image 1. Indeed, any other near-duplicates of image 4 should ideally be retrieved by a flawless provenance-aware CBIR solution. Aiming at also retrieving content transitively related to the query, Moreira et al. (2018) introduced iterative filtering. After retrieving the first rank of images, the results are iteratively refined by suppressing near-duplicates of the query and promoting non-near-duplicates as new queries to the next retrieval iteration. This process is executed a number of times, leading to a set of image ranks for each iteration, which are then combined into a single one, at the end of the process. Object-Level Retrieval: Despite the modifications above to improve typical CBIR solutions towards provenance analysis, local-feature-based retrieval systems do not inherently incorporate structural aspects into the description and matching process. For instance, any retrieved image with a high match vote score could still, in fact, be completely dissimilar to the query image. That happens because the matching process does not take into account the position of local features with respect to each other within an image. As a consequence, unwanted database images that contain features individually similar to the query's features, but in a different pattern, may still be ranked highly.


Fig. 15.14 An example of how OS2OS matching is accomplished. In a, a query image has the local feature keypoints mapped relative to a computed centroid location. In b, matching features from a database image are projected to an estimated centroid location using the vectors shown in a. In c, the subsequent vote accumulation matrix is created. In d, the accumulation matrix is overlaid on the database image. The red area containing many votes shows that the query and database images most likely share an object in that location

To avoid this behavior, Brogan et al. (2021) modified the retrieval pipeline to account for the structural layout of local features. To do so, they proposed a method to leverage the scale and orientation components calculated as part of the SURF keypoint extraction mechanism and to perform a transform estimation similar to the generalized Hough voting (see Ballard 1981), relative to a computed centroid. Because each feature’s scale and orientation are known, for both the query’s and database images’ features, database features can be projected to the query keypoint layout space. Areas that contain similar structures accumulate votes in a given location on a Hough accumulator matrix. By clustering highly active accumulation areas, one is able to quickly determine local areas shared between images that have structural consistency with each other. A novel ranking algorithm then leverages these areas within the accumulator to subsequently score the relationship between images. This entire process is called “objects in scenes to objects in scenes” (OS2OS) matching. An example of how the voting accumulation works is depicted in Fig. 15.14.
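The sketch below illustrates, in simplified form, the centroid-voting mechanism that OS2OS builds upon: every query-to-database keypoint match casts a vote for where the query object's centroid would fall in the database image, and cells that accumulate many votes suggest a shared object at that location. The keypoint representation, the per-match scale and rotation handling, and the cell size are assumptions for illustration; this is not the full method of Brogan et al. (2021).

```python
# Simplified Hough-style centroid voting in the spirit of OS2OS matching.
import numpy as np

def centroid_vote_map(query_kps, db_kps, matches, query_centroid, db_shape, cell=16):
    """query_kps/db_kps: sequences of (x, y, scale, angle_rad); matches: (query_idx, db_idx) pairs."""
    acc = np.zeros((db_shape[0] // cell + 1, db_shape[1] // cell + 1))
    for qi, di in matches:
        qx, qy, qs, qa = query_kps[qi]
        dx, dy, ds, da = db_kps[di]
        off = np.array([query_centroid[0] - qx, query_centroid[1] - qy])  # keypoint -> centroid
        s, rot = ds / qs, da - qa                 # relative scale and rotation of the match
        R = np.array([[np.cos(rot), -np.sin(rot)],
                      [np.sin(rot),  np.cos(rot)]])
        vote = np.array([dx, dy]) + s * (R @ off)  # predicted centroid in the database image
        col, row = int(vote[0] // cell), int(vote[1] // cell)
        if 0 <= row < acc.shape[0] and 0 <= col < acc.shape[1]:
            acc[row, col] += 1
    return acc   # cells with many votes indicate a structurally consistent shared region
```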


15.2.2 Datasets and Evaluation

The literature for provenance analysis has reported results of content retrieval over two major datasets, namely the Nimble Challenge (NC17, 2017) and the Media Forensics Challenge (MFC18, 2018) datasets. NC17 (2017): On the occasion of promoting the Nimble Challenge 2017, NIST released an image dataset containing a development partition (Dev1-Beta4) specifically curated to support both provenance analysis research tasks, namely provenance content retrieval (named provenance filtering by the agency) and provenance graph construction. This partition contains 65 image queries and 11,040 images either related to the queries through provenance graphs, or completely unrelated material (named distractors). The provenance graphs were manually created by image editing experts and include operations such as splicing, removal, cropping, scaling, blurring, and color transformations. As a consequence, the partition offers content retrieval ground truth composed of 65 expected image ranks. Aiming to increase the challenge offered by this partition, Moreira et al. (2018) extended the set of distractors by adding one million unrelated images, which were randomly sampled from EvalVer1, another partition released by NIST as part of the 2017 challenge. We rely on this configuration to provide some results of content retrieval and explain how the different provenance CBIR add-ons explained in Sect. 15.2.1 contribute to solving the problem at hand. MFC18 (2018): Similar to the 2017 challenge, NIST released another image dataset in 2018, with a partition (Eval-Ver1-Part1) also useful for provenance content retrieval. Used to officially evaluate the participants of the MediFor program (see Turek 2021), this set contains 3,300 query images and over one million images, including content related to the queries and distractors. Many of these queries are composites, with the expected content retrieval image ranks provided as ground truth. Moreover, this dataset also provides ground-truth annotations as to whether a related image contributes only a particular small object to the query (such as in the case of image 4 donating Tupac Shakur's head to image 1, within Fig. 15.7), instead of an entire large background. These cases are particularly helpful to assess the advantages of using the object-level retrieval approach presented in Sect. 15.2.1 in comparison to the other methods. As suggested in the protocol introduced by NIST (2017), the metric used to evaluate the performance of a provenance content retrieval solution is the CBIR recall of the images belonging to the ground-truth rank, at three specific cut-off points, namely (i) R@50 (i.e., the percentage of ground-truth expected images that are retrieved among the top 50 assets returned by the content retrieval solution), (ii) R@100 (i.e., the percentage of ground-truth images retrieved among the top 100 assets returned by the solution), and (iii) R@200 (i.e., the percentage of ground-truth images retrieved among the top 200 assets returned by the solution). Since the recall expresses the percentage of relevant images being effectively retrieved, the method delivering higher recall is considered better.
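A minimal sketch of the R@k metric just described, assuming a ranked list of retrieved image identifiers and the set of ground-truth related images:

```python
# R@k: fraction of the expected (ground-truth) images found among the top-k results.
def recall_at_k(ranked_ids, ground_truth_ids, k):
    relevant = set(ground_truth_ids)
    return len(set(ranked_ids[:k]) & relevant) / len(relevant)

# Example: 2 of the 4 expected images appear in the top 50, so R@50 = 0.5
print(recall_at_k(["a", "x", "b", "y"] + ["z"] * 46, {"a", "b", "c", "d"}, 50))
```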


In the following section, results in terms of recall are reported for the different solutions presented in Sect. 15.2.1, over the aforementioned datasets.

15.2.3 Results

Table 15.1 summarizes the results of provenance content retrieval reported by Moreira et al. (2018) over the NC17 dataset. It helps to put into perspective some of the techniques detailed in Sect. 15.2.1. As one might observe, by comparing rows 1 and 2 of Table 15.1, the simple modification of using more keypoints (from 2,000 to 5,000 features) to describe the images within the CBIR base module already provides a significant improvement on the system recall. Distributed Keypoints, in turn, improve the recall of larger ranks (for R@100 and R@200), while Iterative Filtering alone allows the system to reach an impressive recall of 90% of the expected images among the top 50 retrieved ones. At the time of their publication, Moreira et al. (2018) found that a combination of Distributed Keypoints and Iterative Filtering led to the best content retrieval solution, represented by the last row of Table 15.1. More recently, Brogan et al. (2021) performed new experiments on the MFC18 dataset, this time aiming at evaluating the performance of their proposed OS2OS approach. Table 15.2 compares the results of the best solution previously identified by Moreira et al. (2018), in row 1, with the addition of OS2OS, in row 2, and

Table 15.1 Results of provenance content retrieval over the NC17 dataset. Reported here are the average recall values of 65 queries at the top 50 (R@50), top 100 (R@100), and top 200 (R@200) retrieved images. Provenance add-ons on top of the CBIR base module were presented in Sect. 15.2.1

CBIR Base                  Provenance Add-ons                        R@50   R@100  R@200  Source
2,000 SURF features, OPQ   Context Incorporation                     71%    72%    74%    Daniel Moreira et al. (2018)
5,000 SURF features, OPQ   Context Incorporation                     88%    88%    88%    Daniel Moreira et al. (2018)
5,000 SURF features, OPQ   Distributed Keypoints                     88%    90%    90%    Daniel Moreira et al. (2018)
5,000 SURF features, OPQ   Iterative Filtering                       90%    90%    92%    Daniel Moreira et al. (2018)
5,000 SURF features, OPQ   Distrib. Keypoints, Iterative Filtering   91%    91%    92%    Daniel Moreira et al. (2018)


Table 15.2 Results of provenance content retrieval over the MFC18 dataset. Reported here are the average recall values of 3,300 queries at the top 50 (R@50), top 100 (R@100), and top 200 (R@200) retrieved images. Provenance add-ons on top of the CBIR base module were presented in Sect. 15.2.1. OS2OS stands for "objects in scene to objects in scene", previously presented as object-level retrieval

CBIR Base                  Provenance Add-ons                        R@50   R@100  R@200  Source
5,000 SURF features, OPQ   Distrib. Keypoints, Iterative Filtering   77%    81%    82%    Joel Brogan et al. (2021)
5,000 SURF features, OPQ   Distrib. Keypoints, OS2OS                 83%    83%    84%    Joel Brogan et al. (2021)
1,000 DELF features, OPQ   Iterative Filtering                       87%    90%    91%    Joel Brogan et al. (2021)
1,000 DELF features, OPQ   OS2OS                                     91%    93%    95%    Joel Brogan et al. (2021)

replacement of the SURF features with DELF (see Noh et al. 2017) features, in rows 3 and 4, over the MFC18 dataset. OS2OS alone (compare rows 1 and 2) improves the system recall for all three rank cut-off points. Also, the usage of only 1,000 DELF features (a data-driven image description approach that relies on learned attention models, in contrast to the 5,000 handcrafted SURF features) significantly improves the system recall (compare rows 1 and 3). In the end, the combination of a DELF-based CBIR system and OS2OS leads to the best content retrieval approach, whose recall values are shown in the last row of Table 15.2. Lastly, as explained before, the MFC18 dataset offers a unique opportunity to understand how the content retrieval solutions work for the recovery of dataset images that eventually donated small objects to the queries at hand since it contains specific annotations in this regard. Table 15.3 summarizes the results obtained by Brogan et al. (2021) when evaluating this aspect. As expected, the usage of OS2OS greatly improves the recall of small-content donors, as can be seen through rows 2 and 4 of Table 15.3. Overall, the best content retrieval approach (a combination of DELF features and OS2OS) is able to retrieve only 55% of the expected small-content donors among the top 200 retrieved assets (see the last row of Table 15.3). This result indicates that more work still needs to be done in this specific aspect: improving the provenance content retrieval of small-content donor images.


Table 15.3 Results of provenance content retrieval over the MFC18 dataset, with focus on the retrieval of images that donated small objects to the queries. Reported here are the average recall values of 3,300 queries at the top 50 (R@50), top 100 (R@100), and top 200 (R@200) retrieved images. Provenance add-ons on top of the CBIR base module were presented in Sect. 15.2.1. OS2OS stands for "objects in scene to objects in scene", previously presented as object-level retrieval

CBIR Base                  Provenance Add-ons                        R@50   R@100  R@200  Source
5,000 SURF features, OPQ   Distrib. Keypoints, Iterative Filtering   28%    34%    42%    Joel Brogan et al. (2021)
5,000 SURF features, OPQ   Distrib. Keypoints, OS2OS                 45%    48%    52%    Joel Brogan et al. (2021)
1,000 DELF features, OPQ   Iterative Filtering                       41%    45%    49%    Joel Brogan et al. (2021)
1,000 DELF features, OPQ   OS2OS                                     51%    54%    55%    Joel Brogan et al. (2021)

15.3 Graph Construction

A provenance graph depicts the story of edits and manipulations undergone by a media asset. This section focuses on the provenance graph of images, whose vertices individually represent the image variants and whose edges represent the direct pairwise image relationships. Depending on the transformations applied to one image to obtain another, the two connected images can share partial to full visual content. In the case of partial content sharing, the source images of the shared content are called the donor images (or simply donors), while the resultant manipulated image is called the composite image. In full-content sharing, we have near-duplicate variants when one image is created from another through a series of transformations such as cropping, blurring, and color changes. Once a set of related images is collected from the first stage of content retrieval (see Sect. 15.2), a fine-grained analysis of pairwise relationships is required to obtain the full provenance graph. This analysis involves two major steps, namely (1) image similarity computation and (2) graph building. Similarity computation involves understanding the degree of similarity between two images. It is a fundamental task for any visual recognition problem. Image matching methods are at the core of vision-based applications, ranging from handcrafted approaches to modern deep-learning-based solutions. A matching method produces a similarity (or dissimilarity) score that can be used for further decision-making and classification. For provenance analysis, computing pairwise image similarity helps distinguish between direct versus indirect relationships. A selection of a feasible set of pairwise relationships creates a provenance graph. To analyze the closest provenance match to an image in the provenance graph, pairwise matching is performed for all possible image pairs in the set of k retrieved images. The similarity scores are


then recorded in a matrix M of size k × k, where each cell M(i, j) represents the similarity between image I_i and image I_j. Graph building, in turn, comprises the task of constructing the provenance graph after similarity computation. The matrix containing the similarity scores for all pairs of images involved in the provenance analysis for each case can be interpreted as an adjacency matrix. This implies that each similarity score in this matrix is the weight of an edge in a complete graph of k vertices. Extracting a provenance graph requires selecting a minimal set of edges that span the entire set of relevant images or vertices (this can be different from k). If the similarity measure used for the previous stage is symmetric, the final graph will be undirected, whereas an asymmetric measure of the similarity will lead to a directed provenance graph. The provenance cases considered in the literature, so far, are spanning trees. This implies that there are no cycles within graphs, and there is at most one path to get from one vertex to another.
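As a small illustration of the graph building step in the symmetric case, the sketch below converts a k × k similarity matrix into an undirected spanning tree by negating the similarities and running SciPy's minimum spanning tree routine (equivalent to taking a maximum spanning tree over the similarities). The toy matrix is an assumption for demonstration only.

```python
# Undirected provenance tree from a symmetric similarity matrix (maximum spanning tree).
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def max_spanning_tree_edges(similarity):
    sim = np.array(similarity, dtype=float)
    np.fill_diagonal(sim, 0.0)                              # no self-loops
    mst = minimum_spanning_tree(csr_matrix(-sim))           # negate: keep the most similar pairs
    return [(int(i), int(j)) for i, j in zip(*mst.nonzero())]

M = np.array([[0.0, 0.9, 0.2],
              [0.9, 0.0, 0.7],
              [0.2, 0.7, 0.0]])
print(max_spanning_tree_edges(M))                           # [(0, 1), (1, 2)]
```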

15.3.1 Approaches

There are multiple aspects of a provenance graph. Vertices represent the different variants of an image or visual subject, pairwise relationships between images (i.e., undirected edges) represent atomic manipulations that led to the evolution of the manipulated image, and directions for these relationships provide more precise information about the change. Finally, the last level of detail comprises the specific operations performed on one image to create the other. The fundamental task for an image-based provenance analysis is, thus, performing image comparison. This stage requires describing an image using a global or a set of local descriptors. Depending on the methods used for image description and matching, the similarity computation stage can create different types of adjacency weight matrices. The edge selection algorithm then depends on the nature of the computed image similarity. In the rest of this section, we present a series of six graph construction techniques that have been proposed in the literature and represent the current state of the art in image provenance analysis. Undirected Graphs: A simple yet effective graph construction solution was proposed by Bharati et al. (2017). It takes the top k retrieved images for the given query and computes the similarity between the two elements of every image pair, including the query, through keypoint extraction, description, and matching. Keypoint extractors and descriptors, such as SIFT (see Lowe 2004) or SURF (see Bay et al. 2008), offer a way to highlight the important regions within the images (such as corners and edges), and to describe their content in a way that is robust to several of the transformations manipulated images might have been through (such as scaling, rotation, and blurring). The quantity of keypoint matches that are geometrically consistent with the others in the match set can act as an image similarity score for each image pair. As depicted in Fig. 15.15, two images that share visual content will present more keypoint matches (see Fig. 15.15a) than the ones that have nothing in common (see


Fig. 15.15 Examples of geometrically consistent keypoint matching. In a, the matching of keypoints over two images that share visual content. In b, the absence of matches between images that do not look alike. The number of matching keypoint pairs can be used to express the similarity between two images

Fig. 15.15b). Consequently, a symmetric pairwise image adjacency matrix can be built by simply using the number of keypoint matches between every image pair. Ultimately, a maximum spanning tree algorithm, such as Kruskal's (1956) or Prim's (1957), can be used to generate the final undirected image provenance graph. Directed Graphs: The previously described method has the limitation of generating only symmetric adjacency matrices, therefore, not providing enough information to compute the direction of the provenance graphs' edges. As explained in Sect. 15.1, within the problem of provenance analysis, the direction of an edge within a provenance graph expresses the important information of which asset gives rise to the other. Aiming to mitigate this limitation and inspired by the early work of Dias et al. (2012), Moreira et al. (2018) proposed an extension to the keypoint-based image similarity computation alternative. After finding the geometrically consistent keypoint matches for each pair of images (I_i, I_j), the obtained keypoints can be used for estimating the homography H_ij that guides the registration of image I_i onto image I_j, as well as the homography H_ji that analogously guides the registration of image I_j onto image I_i. In the particular case of H_ij, after obtaining the transformation T_j(I_i) of image I_i towards I_j, T_j(I_i) and I_j are properly registered, with T_j(I_i) presenting the same size as I_j and the matched keypoints lying in the same positions. One can, thus, compute the bounding boxes that enclose all the matched keypoints within each image, obtaining two correspondent patches: R_1, within T_j(I_i), and R_2, within I_j. With the two aligned patches at hand, the distribution of the pixel values of R_1 can be matched to the distribution of R_2, before calculating the similarity (or dissimilarity) between them. Considering that patches R_1 and R_2 have the same width W and height H after content registration, one possible method of patch dissimilarity computation is the pixel-wise mean squared error (MSE):

$$
MSE(R_1, R_2) = \frac{\sum_{w=1}^{W} \sum_{h=1}^{H} \left( R_1(w, h) - R_2(w, h) \right)^2}{H \times W}, \qquad (15.1)
$$


where R_1(w, h) ∈ [0, 255] and R_2(w, h) ∈ [0, 255] are the pixel values of R_1 and R_2 at position (w, h), respectively. Alternatively to MSE, one can express the similarity between R_1 and R_2 as the mutual information (MI) between them. From the perspective of information theory, MI is the amount of information that one random variable contains about another. From the point of view of probability theory, it measures the statistical dependence of two random variables. In practical terms, assuming each random variable as, respectively, the aligned and color-corrected patches R_1 and R_2, the value of MI can be given by the entropy of discrete random variables:

$$
MI(R_1, R_2) = \sum_{x \in R_1} \sum_{y \in R_2} p(x, y) \log \frac{p(x, y)}{\sum_{x} p(x, y) \sum_{y} p(x, y)}, \qquad (15.2)
$$

where x ∈ [0, 255] refers to the pixel values of R_1, and y ∈ [0, 255] refers to the pixel values of R_2. The p(x, y) value regards the joint probability distribution function of R_1 and R_2. As explained by Costa et al. (2017), it can be satisfactorily approximated by

$$
p(x, y) = \frac{h(x, y)}{\sum_{x, y} h(x, y)}, \qquad (15.3)
$$

where h(x, y) is the joint histogram that counts the number of occurrences for each possible value of the pair (x, y), evaluated on the corresponding pixels for both patches R_1 and R_2. As a consequence of their respective natures, while MSE is inversely proportional to the two patches' similarity, MI is directly proportional. Aware of this, one can either use (i) the inverse of the MSE scores or (ii) the MI scores directly as the similarity elements s_ij within the pairwise image adjacency matrix M, to represent the similarity between image I_j and the transformed version of image I_i towards I_j, namely T_j(I_i). The homography H_ji is calculated in an analogous way to H_ij, with the difference that T_i(I_j) is obtained by transforming I_j towards I_i. Due to this, the size of the registered images, the format of the matched patches, and the matched color distributions will be different, leading to unique MSE (or MI) values for setting s_ji. Since s_ij ≠ s_ji, the resulting similarity matrix M will be asymmetric. Figure 15.16 depicts this process. Upon computing the full matrix, the assumption introduced by Dias et al. (2012) is that, in the case of s_ij > s_ji, it would be easier to transform image I_i towards image I_j than the contrary (i.e., I_j towards I_i). Analogously, s_ij < s_ji would mean the opposite. This information can, thus, be used for edge selection. The oriented Kruskal solution (see Dias et al. 2012), with a preference for higher adjacency weights, would help construct the final provenance graph.
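The sketch below shows one possible implementation of the patch comparisons in Eqs. (15.1) to (15.3), assuming the two patches are already registered to the same size and color-matched. The histogram binning and the way the scores would populate the asymmetric matrix M are illustrative choices, not necessarily those of the cited works.

```python
# MSE and mutual information between two registered, color-matched 8-bit patches.
import numpy as np

def patch_mse(r1, r2):
    return np.mean((r1.astype(float) - r2.astype(float)) ** 2)      # Eq. (15.1)

def patch_mutual_information(r1, r2, bins=256):
    h, _, _ = np.histogram2d(r1.ravel(), r2.ravel(),
                             bins=bins, range=[[0, 256], [0, 256]])
    p_xy = h / h.sum()                                               # Eq. (15.3)
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz]))     # Eq. (15.2)

# One way to fill the asymmetric matrix: s_ij from the pair (T_j(I_i), I_j), for example
# s_ij = patch_mutual_information(R1, R2) or s_ij = 1.0 / (patch_mse(R1, R2) + 1e-8)
```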


Fig. 15.16 Generation of an asymmetric pairwise image adjacency matrix. Based on the comparison of two distinct content transformations for each image pair (I_i, I_j), namely I_i towards I_j and vice-versa, this method allows the generation of an asymmetric pairwise image similarity matrix, which is useful for computing provenance graphs with directed edges

Clustered Graph Construction: As an alternative to the oriented Kruskal, Moreira et al. (2018) introduced a method of directed provenance graph building, which leverages both symmetric keypoint-based and asymmetric mutual-information-based image similarity matrices. Inspired by Oikawa et al. (2015) and dubbed clustered graph construction, the idea behind such a solution is to group the available retrieved images in a way that only near-duplicates of a common image are added to the same cluster. Starting from the image query I_q as the initial expansion point, the remaining images are sorted according to the number of geometrically consistent matches shared with I_q, from the largest to the smallest. The solution then clusters probable near-duplicates around I_q, as long as they share enough content, which is decided based upon the number of keypoint matches (see Daniel Moreira et al. 2018). Once the query's cluster is finished (i.e., the remaining images do not share enough keypoint matches with the query), a new cluster is computed over the remaining unclustered images, taking another image of the query's cluster as the new expansion point. This process is repeated iteratively by trying different images as the expansion point until every image belongs to a near-duplicate cluster. Once all images are clustered, it is time to establish the graph edges. Images belonging to the same cluster are sequentially connected into a single path without branches. This makes sense in scenarios containing sequential image edits where one near-duplicate is obtained on top of the other. As a consequence of the iterative execution and selection of different images as expansion points, the successful ones (i.e., the images that were helpful in the generation of new image clusters) inevitably belong to more than one cluster, hence serving as graph bifurcation points. Orthogonal edges are established in such cases, allowing every near-duplicate image branch to be connected to the final provenance graph through an expansion point image as a


Fig. 15.17 Usage of metadata information to refine the direction of pairwise image provenance relationships. For the presented images, the executed manipulation operation could be either the splicing or the removal of the male lion. According to the image generation date metadata, the operation is revealed to be a splice since the image on the left is older. Adapted from Aparna Bharati et al. (2019)

joint. To determine the direction of every single edge, Moreira et al. (2018) suggested using the mutual information similarity asymmetry in the same way as depicted in Fig. 15.16. Leveraging Metadata: Image comparison techniques may be limited depending upon the transformations involved in any given image’s provenance analysis. In cases where the transformations are reversible or collapsible, the visual content analysis may not suffice for edge selection during graph building. Specifically, the homography estimation and color mapping steps involved in asymmetric matrix computation for edge direction inference could be noisy. To make this process more robust, it is pertinent to utilize other evidence sources to determine connections. As can be seen from the example in Fig. 15.17, it is difficult to point out the plausible direction of manipulation with visual correspondence, but auxiliary information related to the image, mostly accessible within the image files (a.k.a. image metadata), can increase confidence in predicting the directions. Image metadata, when available, can provide additional evidence for directed edge inference. Bharati et al. (2019) identify highly relevant tags for the task. Specific tags that provide the time of image acquisition and editing, location, editing operation, etc. can be used for metadata analysis that corroborates visual evidence for provenance analysis. An asymmetric heuristic-based metadata comparison parallel to a symmetric visual comparison is proposed. The metadata comparison similarity scores are higher for image pairs (ordered) with consistency from more sets of metadata tags. The resulting visual adjacency matrix is used for edge selection, while the metadata-based comparison scores are used for edge direction inference. As explained in clustered graph construction, there are three parts of the graph building method, namely node cluster expansion, edge selection, and assigning directions to edges. The metadata information can supplement the last two depending on the specific stage at which it is incorporated. As metadata tags can be volatile in the world of intelligent forgeries, a conservative approach is to use them to improve the


confidence of the provenance graph obtained through visual analysis. The proposed design enables the usage of metadata when available and consistent. Transformation-Aware Embeddings: While metadata analysis can improve the fidelity of edge directions in provenance graphs when available and not tampered with, local keypoint matching for visual correspondence faces challenges in image ordering for provenance analysis. Local matching is efficient and robust to finding shared regions between related images. This works well for connecting donors with composite images but can be insufficient in capturing subtle differences between near-duplicate images, which affect the ordering of long chains of operations. Establishing sequences of images that vary slightly based on the transformations requires differentiating between slightly modified versions of the same content. Towards improving the reconstructions of globally-related image chains in provenance graphs, Bharati et al. (2021) proposed encoding awareness of the transformation sequence in the image comparison stage. Specifically, the devised method learns transformation-aware embeddings to better order related images in an edit sequence or provenance chain. The framework uses a patch-based siamese structure trained with an Edit Sequence Loss (ESL) using sets of four image patches. Each set is expressed as quadruplets or edit sequences, namely (i) the anchor patch, which represents the original content, (ii) the positive patch, a near-duplicate of the anchor after M image processing transformations, (iii) the weak positive patch, the positive patch after N transformations, and (iv) the negative patch, a patch that is unrelated to the others. The quadruplets of patches are obtained for training using a specific set of image transformations that are of interest to image forensics, particularly image phylogeny and provenance analysis, as suggested in Dias et al. (2012). For each anchor patch, random unit transformations are sequentially applied, one on top of the other's result, allowing to generate positive and weak positive patches from the anchor, after M and M + N transformations, respectively. The framework aims at providing distance scores to pairs of patches, where the output score between the anchor and the positive patch is smaller than the one between the anchor and the weak positive, which, in turn, is smaller than the score between the anchor and the negative patch (as shown in Fig. 15.18). Given a feature vector for an anchor image patch a, two transformed derivatives of the anchor patch p (positive) and p′ (weak positive), where p = T_M(a) and p′ = T_N(T_M(a)), and an unrelated image patch from a different image n, ESL is a pairwise margin ranking loss computed as follows:

$$
\begin{aligned}
ESL(a, p, p', n) = {} & \max(0, -y \times (d(a, p') - d(a, n)) + \mu_1) \, + \\
& \max(0, -y \times (d(p, p') - d(p, n)) + \mu_2) \, + \\
& \max(0, -y \times (d(a, p) - d(a, p')) + \mu_3)
\end{aligned}
\qquad (15.4)
$$

Here, y is the truth function which determines the rank order (see Rudin and Schapire 2009) and µ1 , µ2 , and µ3 are margins corresponding to each pairwise distance term and are treated as hyperparameters. Both terms having the same sign


Fig. 15.18 Framework to learn transformation-aware embeddings in the context of ordering image sequences for provenance. Specifically, to satisfy an edit-based similarity precision constraint, i.e., d(a, p) < d(a, p  ) < d(a, n). Adapted from Aparna Bharati et al. (2021)

implies ordering is correct, and the loss is zero. A positive loss is accumulated when the ordering is wrong, and they are of opposite signs. The above loss is optimized, and the model with the best validation performance is used for feature extraction from patches of test images. Features learned with the proposed technique are used to provide pairwise image similarity scores. The value d_ij between images I_i and I_j is computed by matching the set of features (extracted from patches) from one image to the other using an iterative greedy brute-force matching strategy. At each iteration, the best match is selected as the pair of patches between image I_i and image I_j whose l2-distance is the smallest and whose patches did not participate in a match on previous iterations. This guarantees a deterministic behavior regardless of the order of the images, meaning that either comparing the patches of I_i against I_j or vice-versa will lead to the same consistent set of patch pairs. Once all patch pairs are selected, the average l2-distance is calculated and finally set as d_ij. The inverse of d_ij is then used to set both s_ij and s_ji within the pairwise image similarity matrix M, which in this case is a symmetric one. Upon computing all values within M for all possible image pairs, a greedy algorithm (such as Kruskal's 1956) is employed to order these pairwise values and create an optimally connected undirected graph of images. Leveraging Manipulation Detectors: A challenging aspect of image provenance analysis is establishing high-confidence direct relationships between images that share a small portion of content. Keypoint-based approaches may not suffice as there may not be enough keypoints in the shared regions, and global matching approaches may not appropriately capture the matching region's importance. To improve the analysis of composite images where source images have only contributed a small region, and to determine the source image among a group of image variants, Zhang et al. (2020) proposed to combine a pairwise ancestor-offspring classifier with manipulation detection


approaches. They build the graph by combining edges based on both local feature matching and pixel similarity. Their proposed algorithm attempts to balance global and local features and matching scores to boost performance. They start by using a weighted combination of the matched SIFT keypoints and the matched pixel values for image pairs that can be aligned, and null for the ones that cannot be aligned. A hierarchical clustering approach is used to group images coming from the same source together. For graph building within each determined cluster, the authors combine the likelihood of an image being manipulated, extracted from a holistic image manipulation detector (see Zhang et al. 2020), and the pairwise ancestor score extracted by an L2-Net (see Tian et al. 2017). The image manipulation detector uses a patch-based convolutional neural network (CNN) to predict manipulations from a median-filtered residual image. For ambiguous cases where the integrity score may not be assigned accurately, a lightweight CNN-based ancestor-offspring network takes patch pairs as input and predicts the score of one patch being derived from the other. The similarity scores used as edge weights are the average of the integrity and the ancestor scores from the two used networks. The image with the highest score among this smaller set of images is considered the source. All incoming links to this vertex are removed to reduce confusion in directions. This vertex is then treated as the root of the arborescence built by applying the Chu-Liu/Edmonds algorithm (see Chu 1965; Edmonds 1967) on pairwise image similarities. The different arborescences are connected by finding the best-matched image pair among the image clusters. If the matched keypoints are above a threshold, these images are connected, indicating a splicing or composition possibility. As reported in the following section, this method obtains state-of-the-art results on the NIST challenges (MFC18 2018 and MFC19 2019), and it significantly improves the computation of the edges of the provenance graphs over the Reddit Photoshop Battles dataset (see Brogan 2021).
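To illustrate the directed graph building just described, the sketch below derives a provenance tree from an asymmetric pairwise score matrix with NetworkX's Chu-Liu/Edmonds implementation, after dropping incoming edges to a presumed source vertex. The score matrix, root handling, and weighting are assumptions for demonstration and do not reproduce the exact recipe of Zhang et al. (2020).

```python
# Directed provenance tree (maximum spanning arborescence) from asymmetric scores.
import numpy as np
import networkx as nx

def directed_provenance_tree(scores, root):
    k = scores.shape[0]
    g = nx.DiGraph()
    for i in range(k):
        for j in range(k):
            if i != j and j != root:                      # remove incoming links to the source
                g.add_edge(i, j, weight=float(scores[i, j]))
    return nx.maximum_spanning_arborescence(g, attr="weight")

S = np.array([[0.0, 0.8, 0.6],
              [0.3, 0.0, 0.7],
              [0.2, 0.4, 0.0]])
tree = directed_provenance_tree(S, root=0)
print(sorted(tree.edges()))                               # [(0, 1), (1, 2)]
```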

15.3.2 Datasets and Evaluation

With respect to the step of provenance graph construction, four datasets stand out as publicly available and helpful benchmarks, namely NC17 (2017), MFC18 (2018) (both discussed in Sect. 15.2.2), MFC19 (2019), and the Reddit Photoshop Battles dataset (2021). NC17 (2017): As mentioned in Sect. 15.2.2, this dataset contains an interesting development partition (Dev1-Beta4), which presents 65 image queries, each one belonging to a particular manually curated provenance graph. As expected, these provenance graphs are provided within the partition as ground truth. The number of images per provenance graph ranges from two to 81, with the average graph order being equal to 13.6 images. MFC18 (2018): Besides providing images and ground truth for content retrieval (as explained in Sect. 15.2.2), the Eval-Ver1-Part1 partition of this dataset also pro-


Fig. 15.19 A representation of how provenance graphs were obtained from the users’ interactions within the Reddit r/photoshopbattles subreddit. In a, the parent-child posting structure of comments, which are used to generate the provenance graph depicted in b. Equivalent edges across the two images have the same color (either green, purple, blue, or red). Adapted from Daniel Moreira et al. (2018)

vides provenance graphs and 897 queries aiming at evaluating graph construction. For this dataset, the average graph order is 14.3 images, and the resolution of its images is larger, on average, when compared to NC17. Moreover, its provenance cases encompass a larger set of applied image manipulations. MFC19 (2019): A more recent edition of the NIST challenge released a larger set of provenance graphs. The Eval-Part1 partition within the MFC19 (2019) dataset has 1,027 image queries, and the average order of the provided ground-truth provenance graphs is equal to 12.7 image vertices. In this group, the number of types of image manipulations used to generate the edges of the graphs was almost twice that of MFC18. Reddit Photoshop Battles (2021): Aiming at testing the image provenance analysis solutions over more realistic scenarios, Moreira et al. (2018) introduced the Reddit Photoshop Battles dataset. This dataset was collected from images posted to the Reddit community known as r/photoshopbattles (2012), where professional and amateur image manipulators share doctored images. Each "battle" starts with a teaser image posted by a user. Subsequent users post modifications of either the teaser or previously submitted manipulations in comments to the related posts. By using the underlying tree comment structure, Moreira et al. (2018) were able to infer and collect 184 provenance graphs, which together contain 10,421 original and composite images. Figure 15.19 illustrates this provenance graph inference process.


To evaluate the available graph construction solutions, two configurations are proposed by the NIST challenge (2017). In the first one, named the "oracle" scenario, there is a strong focus on the graph construction task. It assumes that a flawless content retrieval solution is available, thus, starting from the ground-truth content retrieval image ranks to build the provenance graphs, with neither missing images nor distractors. In the second one, named the "end-to-end" scenario, content retrieval must be performed before graph construction, thus, delivering imperfect image ranks (with missing images or distractors) to the step of graph construction. We rely on both configurations and on the aforementioned datasets to report results of provenance graph construction, in the following section. Metrics: As suggested by NIST (2017), given a provenance graph G(V, E) generated by a solution whose performance we want to assess, we compute the F1-measure (i.e., the harmonic mean of precision and recall) of the (i) retrieved image vertices V and of the (ii) established edges E, when compared to the ground-truth graph G′(V′, E′), with its V′ and E′ homologous components. The first metric is named vertex overlap (VO) and the second one is named edge overlap (EO), respectively:

$$
VO(G', G) = 2 \times \frac{|V' \cap V|}{|V'| + |V|}, \qquad (15.5)
$$

$$
EO(G', G) = 2 \times \frac{|E' \cap E|}{|E'| + |E|}. \qquad (15.6)
$$

Moreover, we compute the vertex and edge overlap (VEO), which is the F1-measure of retrieving both vertices and edges simultaneously:

$$
VEO(G', G) = 2 \times \frac{|V' \cap V| + |E' \cap E|}{|V'| + |V| + |E'| + |E|}. \qquad (15.7)
$$

In a nutshell, these metrics aim at assessing the overlap between G and G′. The higher the values of VO, EO, and VEO, the better the performance of the solution. Finally, in the particular case of EO and VEO, when they are both assessed for an approach that does not generate directed graphs (such as Undirected Graphs and Transformation-Aware Embeddings, presented in Sect. 15.3.1), an edge within E is considered a hit (i.e., a correct edge) when there is a homologous edge within E′ that connects equivalent image vertices, regardless of the edges' directions.
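A minimal sketch of the overlap metrics in Eqs. (15.5) to (15.7), with graphs represented as sets of vertex identifiers and sets of edge pairs:

```python
# Vertex overlap, edge overlap, and vertex-and-edge overlap (F1-style measures).
def vo(v_gt, v):
    return 2 * len(v_gt & v) / (len(v_gt) + len(v))

def eo(e_gt, e):
    return 2 * len(e_gt & e) / (len(e_gt) + len(e))

def veo(v_gt, v, e_gt, e):
    return 2 * (len(v_gt & v) + len(e_gt & e)) / (len(v_gt) + len(v) + len(e_gt) + len(e))

# Example: all three vertices recovered, one of the two ground-truth edges recovered
print(vo({1, 2, 3}, {1, 2, 3}))            # 1.0
print(eo({(1, 2), (2, 3)}, {(1, 2)}))      # 0.666...
```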

15.3.3 Results

Table 15.4 puts in perspective the different provenance graph construction approaches explained in Sect. 15.3.1, when executed over the NC17 dataset. The provided results


were all collected in oracle mode, hence the high values of VO (above 0.9), since there are neither distractors nor missing images in the rank lists used to build the provenance graphs. A comparison between rows 1 and 2 within this table shows the efficacy of leveraging image metadata as additional information to compute the edges of the provenance graphs. The values of EO (and VEO, consequently) have a significant increase (from 0.12 to 0.45, and from 0.55 to 0.70, respectively), when metadata is available. In addition, by comparing rows 3 and 4, one can observe the contribution of the data-driven Transformation-Aware Embeddings approach, in the scenario where only undirected graphs are being generated. In both cases, the generated edges have no direction by design, making their edge overlap conditions easier to be achieved (since the order of the vertices within the edges becomes irrelevant for the computation of EO and VEO, justifying their higher values when compared to rows 1 and 2). Nevertheless, contrary to the first two approaches, these solutions are not able to define which image gives rise to the other within the established provenance edges. Table 15.5 compares the current state-of-the-art solution (Leveraging Manipulation Detectors by Xu Zhang et al. 2020) with the official NIST challenge participation results of the Purdue-Notre Dame team (2018), for both the MFC18 and MFC19 datasets. In both cases, the reported results refer to the more realistic end-to-end scenario, where performers must execute content retrieval prior to building the provenance graphs. As a consequence, the image ranks fed to the graph construction step are noisy, since they contain both missing images and distractors. For all the reported cases, the image ranks had 50 images and presented an average R@50 of around 90% (i.e., nearly 10% of the needed images are missing). Moreover, nearly 35% of the images within the 50 available ones in a rank are distractors, on average. The

Table 15.4 Results of provenance graph construction over the NC17 dataset. Reported here are the average vertex overlap (VO), edge overlap (EO), and vertex and edge overlap (VEO) values of 65 queries. These experiments were executed in the "oracle" scenario, where the image ranks fed to the graph construction step are perfect (i.e., with neither distractors nor missing images)

Graph construction approach                  VO     EO       VEO      Source
Clustered Graph Construction                 0.93   0.12     0.55     Daniel Moreira et al. (2018)
Clust. Graph Const., Leveraging Metadata     0.93   0.45     0.70     Aparna Bharati et al. (2019)
Undirected Graphs                            0.90   † 0.65   † 0.78   Aparna Bharati et al. (2021)
Transformation-Aware Embeddings              1.00   † 0.68   † 0.85   Aparna Bharati et al. (2021)

† Values collected over undirected edges


Table 15.5 Results of provenance graph construction over the MFC18 and MFC19 datasets. Reported here are the average vertex overlap (VO), edge overlap (EO), and vertex and edge overlap (VEO) values of 897 queries, in the case of MFC18, and of 1,027 queries, in the case of MFC19. These experiments were executed in the "end-to-end" scenario, thus, building graphs upon imperfect image ranks (i.e., with distractors or missing images)

Dataset   Graph construction approach          VO     EO     VEO    Source
MFC18     Clustered Graph Construction         0.80   0.27   0.54   MFC19 (2019)
MFC18     Leveraging Manipulation Detectors    0.82   0.40   0.61   Xu Zhang et al. (2020)
MFC19     Clustered Graph Construction         0.70   0.30   0.52   MFC19 (2019)
MFC19     Leveraging Manipulation Detectors    0.85   0.42   0.65   Xu Zhang et al. (2020)

Table 15.6 Results of provenance graph construction over the Reddit Photoshop Battles dataset. Reported here are the average vertex overlap (VO), edge overlap (EO), and vertex and edge overlap (VEO) values of 184 queries. These experiments were executed in the "oracle" scenario, where the image ranks fed to the graph construction step are perfect. "N.R." stands for not-reported values

Graph construction approach                  VO     EO     VEO    Source
Clustered Graph Construction                 0.76   0.04   0.40   Daniel Moreira et al. (2018)
Clust. Graph Const., Leveraging Metadata     0.76   0.09   0.42   Aparna Bharati et al. (2019)
Leveraging Manipulation Detectors            N.R.   0.21   N.R.   Xu Zhang et al. (2020)

best solution (contained in rows 2 and 4 within Table 15.5) still delivers low values of EO when compared to VO, revealing an important limitation of the available approaches. Table 15.6, in turn, reports results on the Reddit Photoshop Battles dataset. As one might observe, especially in terms of EO, this set is a more challenging one for the graph construction approaches, except for the state-of-the-art solution (Leveraging Manipulation Detectors by Xu Zhang et al. 2020). While methods that have worked fairly well on the NC17, MFC18, and MFC19 datasets drastically fail in the case of the Reddit dataset (see EO values below 0.10 in the case of rows 1 and 2), the state-of-the-art approach (in the last row of the table) more than doubles the results


of EO. Again, even with this improvement, increasing the values of EO within graph construction solutions is still an open problem that deserves attention from researchers.

15.4 Content Clustering

In the study of human communication on the Internet and the understanding of the provenance of trending assets, such as memes and other forms of viral content, the users' intent is mainly focused on the retrieval, selection, and organization of semantically similar objects, rather than the gathering of near-duplicate variants or compositions that are related to a query. Under this scenario, although the step of content retrieval may be useful, the step of graph construction loses its purpose, since the available content is preferably related through semantics (e.g., different people on diverse scenes doing the same action, such as the "dabbing" trend depicted in Fig. 15.8), greatly varying in appearance, making the techniques presented in Sect. 15.3 less suitable. Humans can only process so much data at once; if too much is present, they begin to be overwhelmed. In order to help facilitate the human processing and perception of the retrieved content, Theisen et al. (2020) proposed a new stage to the provenance pipeline focused on image clustering. Clustering the images based on shared elements helps triage the massive amounts of data that the pipeline has grown to accommodate. Nobody can reasonably review several million images to find emerging trends and similarities, especially without ordering in the image collection. However, when these images are grouped based on shared elements using the provenance pipeline, the number of items a reviewer would have to look at can decrease by several orders of magnitude.

15.4.1 Approach

Object-Level Content Indexing: From the content retrieval techniques discussed in Sect. 15.2, Theisen et al. (2020) recommended the use of OS2OS matching (see Brogan et al. 2021) to index the millions of images potentially available, due to two major reasons. Firstly, to obtain a fast and scalable content retrieval engine. Secondly, to benefit from the OS2OS capability of comparing images through either large and global content matching or through many small object-wise local matches. Affinity Matrix Creation: In the task of analyzing a large corpus of assets shared on the web to understand the provenance of a trend, the definition of a query (i.e., a questioned asset) is not always as straightforward as it is in the image provenance analysis case. Consider, for example, the memes shared during the 2019 Indonesian elections, discussed in William Theisen et al. (2020). In such cases, a natural question to ask


would be which memes in the dataset one should use as the queries, for performing the first step of content retrieval. Inspired by a simplification of iterative filtering (see Sect. 15.2.1), Theisen et al. (2020) identified the cheapest option as randomly sampling images from the dataset and iteratively using them as queries, executing content retrieval until all the dataset images (or a sufficient number of them) are "touched" (i.e., they are retrieved by the content retrieval engine). There are several advantages to this, other than being easy to implement. Random sampling means that end-users need no prior knowledge of the dataset and of the potential trends they are looking for. The clusters created at the end of the process would show "emergent" trends, which could even surprise the reviewer. On the other hand, from a more forensics-driven perspective, it is straightforward to imagine a system in which "informed queries" are used. If the users already suspect that several specific trends may exist in the data, cropped images of the objects pertaining to a trend may be used as queries, thus, prioritizing the content that the reviewers are looking for. This might be a demanding process because the user must already have some sort of idea of the landscape of the data and must produce the query images themselves. Following the solution proposed in William Theisen et al. (2020), prior to performing clustering, a particular type of pairwise image adjacency matrix (dubbed affinity matrix) must first be generated. By leveraging the provenance pipeline's pre-existing steps, this matrix can be constructed based on the retrieval of the many selected queries. To prepare for the clustering step, a number of queries need to be run through the content retrieval system, with the number of queries and their recall depending on what type of graph the user wants to model for the analyzed dataset. Using the retrieval step's output, a number of query-wise "hub" nodes are naturally generated, each of them having many connections. Consider, for example, that the user has decided to have a recall of the top 100 best matches for any selected query. This means that for every query submitted, there are 100 connections to other assets. These connections link the query image to each of the 100 matches, thus, imposing a "hubness" onto the query image. For the sake of illustration, Fig. 15.20 shows an example of this process, for the case of memes shared during the 2019 Indonesian elections. By varying the recall and number of queries, the affinity matrix can be generated, and an order can be imposed. Lowering the recall will result in fewer hub-like nodes in the structure but will require more queries to be run to connect all the available images. The converse is also true. Once enough queries have been run, the final step of clustering can be executed. Spectral Clustering: After the affinity matrix has been generated, Theisen et al. (2020) suggested the use of multiclass spectral clustering (see Stella and Shi 2003) to organize the available images. In the case of memes, this step has the effect of assigning each image to a hypothesized meme genre, similar to the definition presented by Shifman (2014). The most important and perhaps trickiest part is deciding on a number of clusters to create. While more clusters may allow for a more targeted


Fig. 15.20 A demonstration of the type of structure that may be inferred from the output of content retrieval, with many selected image queries. According to global and object-level local similarity, images with similar sub-components should be clustered together, presenting connections to their respective queries. Image adapted from William Theisen et al. (2020)

look into the dataset, it increases processing time for both the computer generating the clusters and the human reviewing them at the end. In their studies, Theisen et al. (2020) pointed out that 150 clusters appeared sufficient when the total number of available dataset images was on the order of millions of samples. This kept the number of images per cluster small enough for a human to review briefly, yet large enough for trends to remain noticeable.
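The wiring of these steps can be summarized with a short sketch. The code below is a minimal illustration, not the authors' implementation: it assumes a content-retrieval function retrieve(query_index, k) returning the indices of the top-k matches (a hypothetical placeholder for the provenance pipeline's retrieval engine), builds the query-centered affinity matrix, and feeds it to multiclass spectral clustering. A dense matrix is used only for clarity; a sparse structure would be needed at the scale of millions of images.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def build_affinity(retrieve, n_images, n_queries, recall, seed=0):
    """Build a symmetric image-adjacency (affinity) matrix by repeatedly
    querying the content-retrieval engine with randomly sampled images.
    `retrieve(q, k)` is assumed to return the indices of the top-k matches."""
    rng = np.random.default_rng(seed)
    affinity = np.zeros((n_images, n_images))   # dense only for clarity
    touched = np.zeros(n_images, dtype=bool)
    for _ in range(n_queries):
        q = int(rng.integers(n_images))                      # random, uninformed query
        matches = np.asarray(retrieve(q, k=recall), dtype=int)
        affinity[q, matches] = 1.0                           # the query becomes a "hub" node
        affinity[matches, q] = 1.0
        touched[q] = True
        touched[matches] = True
    return affinity, touched.mean()                          # fraction of the dataset "touched"

def cluster_images(affinity, n_clusters=150, seed=0):
    """Multiclass spectral clustering on the precomputed affinity matrix."""
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                               assign_labels="discretize", random_state=seed)
    return model.fit_predict(affinity)
```

Raising the recall parameter in this sketch produces fewer, heavier hubs, while lowering it requires more random queries before every image is touched, mirroring the trade-off discussed above.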

15.4.2 Datasets and Evaluation

Indonesian Elections Dataset: The alternative content clustering stage inside the framework of provenance analysis aims to unveil emerging trends in large image datasets collected around a topic or an event of interest. To evaluate the capability of the proposed solutions, Theisen et al. (2020) collected over two million images from social media, concentrated around the 2019 Indonesian presidential elections. Harvested from Twitter (2021) and Instagram (2021) over 13 months (from March 31, 2018, to April 30, 2019), this dataset spans an extensive range of emotion and



Fig. 15.21 Imposter-host test to measure the quality of the computed image clusters. Five images are presented at a time to a human subject, four of which belong to the same host cluster. The remaining one belongs to an unrelated cluster and is therefore named the imposter. The human subject is always asked to identify the imposter, which in this example is the last image. A high number of correct answers given by humans indicates that the clustering solution was successful. Image adapted from Theisen et al. (2020)

thought surrounding the elections. The images are publicly available at https://bit.ly/2Rj0odI.

Human Understanding Assessments: Keeping the objective of aiding the human understanding of large image datasets in mind, a metric is needed to measure how meaningful an image cluster is for a human. Inspired by the work of Weninger et al. (2012), Theisen et al. (2020) proposed an imposter-host test, which is performed by showing a person a collection of N images, all but one of which come from a common “host” cluster previously computed by the solution whose quality needs to be measured. The other item, the “imposter”, is randomly selected from one of the other established clusters. The idea is that the more related the images in a single cluster are, the easier it should be for a human to pick out the imposter image. Figure 15.21 depicts an example of this configuration. To test the results of their studies, Theisen et al. (2020) hired Amazon Mechanical Turk workers (2021) to perform 25 of these imposter-host tasks per subject, with N equal to 5 images, in order to measure the ease with which a human can pick out an imposter from the clusters that were generated.
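For illustration, a trial of this kind can be assembled with a few lines of code. The sketch below is a toy version under stated assumptions (cluster contents are represented as lists of image identifiers; function and variable names are hypothetical), not the protocol actually deployed on Mechanical Turk.

```python
import random

def make_trial(clusters, n_images=5, rng=None):
    """Sample one imposter-host trial: n_images - 1 images from a "host"
    cluster plus a single "imposter" drawn from a different cluster.
    `clusters` maps a cluster id to the list of image ids it contains."""
    rng = rng or random.Random(0)
    host, other = rng.sample(list(clusters), 2)
    trial = rng.sample(clusters[host], n_images - 1)
    imposter = rng.choice(clusters[other])
    trial.append(imposter)
    rng.shuffle(trial)
    return trial, imposter   # the subject must single out `imposter`

# Chance level for n_images = 5 is 1/5 = 20%, which is the baseline against
# which the reported average worker accuracy should be read.
```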



Fig. 15.22 Accuracy of the imposter-host tasks given to Amazon Mechanical Turk workers, relative to cluster size. The larger the point on the accuracy axis, the more images were in the cluster. The largest cluster is indicated as having 15.89% of all the dataset images. Image adapted from Theisen et al. (2020)

15.4.3 Results

The Amazon Mechanical Turk experiments reported by Theisen et al. (2020) demonstrated that the provenance pipeline’s clustering step can produce human-interpretable clusters while minimizing the average cluster size. The average accuracy for the imposter-host test was 62.42%. If the worker is shown five images, the chance of correctly guessing would be 1/5 (20%). Therefore, the reported average is far above the baseline, thus demonstrating the salience of the trends discovered in the clusters to human reviewers. The median cluster size was only 132 images. A spread of the cluster sizes as related to the worker accuracy for a cluster can be seen in Fig. 15.22. Surprisingly, even the largest cluster, containing 15.89% of all the images, has an accuracy still higher than random chance. Three examples of what an individual cluster could look like can be seen in Fig. 15.23.

15.5 Open Issues and Research Directions

State of the Art: Based on the results presented and discussed in the previous sections of this chapter, one can safely conclude that, in the current state of the art of provenance analysis, the available solutions have proven effective for at least two tasks, namely (1) authenticity verification of images and (2) the understanding of image-sharing trends online. As for the verification of the authenticity of a questioned image (i.e., the query), this is done through the principled inspection of a corpus of images potentially related to the query. In this case, whenever a non-empty set of related images is retrieved and a non-empty provenance graph (either directed or undirected) is generated and presented, one might infer the edit history of the query and take it as evidence of potential conflicts that might attest against its authenticity, such as inadvertent manipulations or source misattributions. Regarding the understanding of trending content online, this can be done during the unfolding of a target event, such as national elections or international sports competitions. In this case, the quick retrieval of related images and subsequent content-based clustering have the potential to unveil emergent trends and surface



Fig. 15.23 Three examples of what the obtained clusters may look like. In (a) and (b), it is very easy to see the similarity shared among all the images. In (c), it is not quite as clear, but if one were to look a little more closely, they might notice the stylized “01” logo in each of the images. This smaller, shared component is what makes object-level matching and local feature analysis a compelling tool. Image adapted from Theisen et al. (2020)



their provenance, allowing for a better comprehension of the event itself, as well as its relevance. A summary of what the public is thinking about an event, and of which actors are trying to steer public opinion, are only two of the possibilities that provenance content clustering enables. While results have been very promising, no solution is without flaws, and much work is left to be done. In particular, image provenance analysis still has many caveats, which are briefly described below. Similarly, multimodal provenance analysis is an unexplored field, which deserves a great deal of attention from researchers in the near future.

Image Provenance Analysis Caveats: There are open problems in image provenance analysis that still need attention from the scientific community. For instance, the solutions proposed to compute the directions of the edges of the provenance graphs still deliver values of edge overlap that are inferior to the results for vertex recall and vertex overlap. This aspect indicates that the proposed solutions are better at determining which images must be added to the provenance graphs as vertices, but there is still plenty of room to improve the computation of how these vertices must be connected. In this regard, novel techniques of learning which image might have been acquired or generated first and leveraging the output of single-image manipulation detectors (tackled throughout this book) are desired. Moreover, an unexplored aspect of image provenance analysis is understanding, representing, and detecting the space of image transformations used to generate one image from the other. By doing so, one will determine what transformations might have been performed during the establishment of an edge, a problem that currently still lacks solutions. Thankfully, through the recent joint efforts of DARPA and NIST within the MediFor program (2021), a viable regime has emerged for data generation and annotation that registers the precise image transformations applied from one asset to another. This regime’s outcome is available to the scientific community as a useful benchmark (see Guan et al. 2019) for further research.

Multimodal Provenance Analysis: As explained in Sect. 15.3, we have already witnessed that additional information such as image metadata helps to improve provenance analysis. Moving forward, another important research direction that requires progress is the identification and principled usage of information coming from other asset modalities (such as the text of image captions, in the case of questionable assets that are rich documents), as well as the development of provenance analysis for media types other than images (e.g., the provenance of text documents, videos, audio, etc.). When one considers the multimodal aspect, many questions appear and are open for research. One of them is how to analyze complex assets, such as the texts coming from different suspect documents (looking for cases of plagiarism and attribution to the original author), or the images extracted from scientific papers (which may be improperly reused to fabricate scientific findings, such as what was recently described in PubPeer Foundation (2020)). Another question is how to leverage images and captions in a document, or the video frames and their respective audio subtitles, within a movie.



Video provenance analysis, in particular, is a topic that deserves a great deal of attention. While one might assume image-based methods could be extended to video by being run over multiple frames, such a solution would fail to capture videos’ inherent temporal dimension. Sometimes videos shared on social media such as Instagram (2021) or TikTok (2021) are composites of one or more pieces of viral footage. Tracking the provenance of videos is as important as tracking the provenance of still images.

Document provenance analysis, in turn, has also gained attention lately due to the recent pandemic of bad science (see Scheirer 2020). The COVID-19 crisis has caused an explosion in scientific publications surrounding the pandemic, not all of which might be considered highly rigorous. Using a provenance pipeline to aid in document analysis could allow reviewers to find repeated figures across many publications. It would be the reviewers’ decision, though, to determine if the repetition is something as simple as citing the previous work or something more nefarious like trying to claim pre-existing work as one’s own.

Towards an End-user Provenance Analysis System: Finally, for the solutions discussed in this chapter to be truly useful, they require front-end development and back-end integration, which are currently missing. Such a system would allow users to perform tasks similar to reverse image search, with the explicit intent of finding the origins of all related content within the query asset in question. If a front-end requiring minimal skill to use could be designed, provenance analysis could be consolidated as a powerful tool for fighting fake news and misinformation.

Table 15.7 List of datasets useful for provenance analysis



15.6 Summary

The current state of the art of provenance analysis is mature enough to aid image authenticity verification by processing a corpus of images rather than a single, isolated asset. However, more work needs to be done to improve the quality of the generated provenance graphs concerning the direction and identification of the image transformations associated with the graphs’ edges. Besides, provenance analysis still needs to be extended to media types other than images, such as video, audio, and text. In this regard, both the development of algorithms and the collection of datasets are yet to be done, revealing a unique research opportunity.

Provenance Analysis Datasets: Table 15.7 lists the currently available and useful datasets for image provenance analysis.

Table 15.8 List of implementations of provenance analysis solutions



Provenance Analysis Source Code: Table 15.8 summarizes the currently available source code for image provenance analysis.

Acknowledgements Anderson Rocha thanks the São Paulo Research Foundation (FAPESP, Grant #2017/12646-3) for its financial support.

References Advance Publications, Inc (2012) Reddit PhotoshopBattles. https://bit.ly/3ty7pJr. Accessed on 19 Apr 2021 Amazon Mechanical Turk, Inc (2021) Amazon Mechanical Turk. https://www.mturk.com/. Accessed 11 Apr 2021 Baeza-Yates R, Ribeiro-Neto B (1999) Modern information retrieval, vol 463, Chap 8. ACM Press, New York, pp 191–227 Ballard D (1981) Generalizing the Hough transform to detect arbitrary shapes. Elsevier Pattern Recognit 13(2):111–122 Bay H, Ess A, Tuytelaars T, Van Gool L (2008) Speeded-up robust features (SURF). Elsevier Comput Vis Image Understand 110(3):346–359 Bestagini P, Tagliasacchi M, Tubaro S (2016) Image phylogeny tree reconstruction based on region selection. In: IEEE international conference on acoustics, speech and signal processing, pp 2059– 2063 Bharati A, Moreira D, Flynn P, Rocha A, Bowyer K, Scheirer W (2021) Transformation-aware embeddings for image provenance. IEEE Trans Inf Forensics Secur 16:2493–2507 Bharati A, Moreira D, Brogan J, Hale P, Bowyer K, Flynn P, Rocha A, Scheirer W (2019) Beyond pixels: Image provenance analysis leveraging metadata. In: IEEE winter conference on applications of computer vision, pp 1692–1702 Bharati A, Moreira D, Pinto A, Brogan J, Bowyer K, Flynn P, Scheirer W, Rocha A (2017) Uphylogeny: Undirected provenance graph construction in the wild. In: IEEE international conference on image processing, pp 1517–1521 Brogan J (2019) Reddit Photoshop Battles Image Provenance. https://bit.ly/3jfrLSJ. Accessed on 16 Apr 2021 Brogan J, Bestagini P, Bharati A, Pinto A, Moreira D, Bowyer K, Flynn P, Rocha A, Scheirer W (2017) Spotting the difference: Context retrieval and analysis for improved forgery detection and localization. In: IEEE international conference on image processing, pp 4078–4082 Brogan J, Bharati A, Moreira D, Rocha A, Bowyer K, Flynn P, Scheirer W (2021) Fast local spatial verification for feature-agnostic large-scale image retrieval. IEEE Trans Image Process 30:6892–6905 ByteDance Ltd (2021) About TikTok. https://www.tiktok.com/about?lang=en. Accessed 11 Apr 2021 Castelletto R, Milani S, Bestagini P (2020) Phylogenetic minimum spanning tree reconstruction using autoencoders. In: IEEE international conference on acoustics, speech and signal processing, pp 2817–2821 Chu Y-J (1965) On the shortest arborescence of a directed graph. Sci Sini 14:1396–1400 Costa F, Oikawa M, Dias Z, Goldenstein S, Rocha A (2014) Image phylogeny forests reconstruction. IEEE Trans Inf Forensics Secur 9(10):1533–1546 Costa F, De Oliveira A, Ferrara P, Dias Z, Goldenstein S, Rocha A (2017) New dissimilarity measures for image phylogeny reconstruction. Springer Pattern Anal Appl 20(4):1289–1305 Costa F, Lameri S, Bestagini P, Dias Z, Rocha A, Tagliasacchi M, Tubaro S (2015) Phylogeny reconstruction for misaligned and compressed video sequences. In: IEEE international conference on image processing, pp 301–305



Costa F, Lameri S, Bestagini P, Dias Z, Tubaro S, Rocha A (2016) Hash-based frame selection for video phylogeny. In: IEEE international workshop on information forensics and security, pp 1–6 De Oliveira A, Ferrara P, De Rosa A, Piva A, Barni M, Goldenstein S, Dias Z, Rocha A (2015) Multiple parenting phylogeny relationships in digital images. IEEE Trans Inf Forensics Secur 11(2):328–343 De Rosa A, Uccheddu F, Costanzo A, Piva A, Barni M (2010) Exploring image dependencies: a new challenge in image forensics. In: IS&T/SPIE electronic imaging, SPIE vol 7541, Media forensics and security II, pp 1–12 Dias Z, Rocha A, Goldenstein S (2012) Image phylogeny by minimal spanning trees. IEEE Trans Inf Forensics Secur 7(2):774–788 Dias Z, Goldenstein S, Rocha A (2013) Toward image phylogeny forests: automatically recovering semantically similar image relationships. Elsevier Forensic Sci Int 231(1–3):178–189 Dias Z, Goldenstein S, Rocha A (2013) Large-scale image phylogeny: tracing image ancestral relationships. IEEE Multimed 20(3):58–70 Dias Z, Goldenstein S, Rocha A (2013) Exploring heuristic and optimum branching algorithms for image phylogeny. Elsevier J Vis Commun Image Represent 24(7):1124–1134 Dias Z, Rocha A, Goldenstein S (2011) Video phylogeny: Recovering near-duplicate video relationships. In: IEEE international workshop on information forensics and security, pp 1–6 Edmonds J (1967) Optimum branchings. J Res Natl Bure Stand B 71(4):233–240 Facebook, Inc (2021) About Instagram. https://about.instagram.com/about-us. Accessed 11 Apr 2021 Ge T, He K, Ke Q, Sun J (2013) Optimized product quantization for approximate nearest neighbor search. In: IEEE conference on computer vision and pattern recognition, pp 2946–2953 Guan H, Kozak M, Robertson E, Lee Y, Yates AN, Delgado A, Zhou D, Kheyrkhah T, Smith J, Fiscus J (2019) Mfc datasets: Large-scale benchmark datasets for media forensic challenge evaluation. In: 2019 IEEE winter applications of computer vision workshops (WACVW). IEEE, pp 63–72 Idée, Inc. TinEye (2020) Reverse image search. https://tineye.com/. Accessed on 17 Jan 2021 Johnson J, Douze M, Jégou H (2019) Billion-scale similarity search with gpus. IEEE Trans Big Data 1–12 Kennedy L, Chang S-F (2008) Internet image archaeology: Automatically tracing the manipulation history of photographs on the web. In: ACM international conference on multimedia, pp 349–358 Kruskal J (1956) On the shortest spanning subtree of a graph and the traveling salesman problem. Proc Am Math Soc 7(1):48–50 Lameri S, Bestagini P, Melloni A, Milani S, Rocha A, Tagliasacchi M, Tubaro S (2014) Who is my parent? reconstructing video sequences from partially matching shots. In: IEEE international conference on image processing, pp 5342–5346 Liu Y, Zhang D, Guojun L, Ma W-Y (2007) A survey of content-based image retrieval with highlevel semantics. Elsevier Pattern Recognit 40(1):262–282 Lowe D (2004) Distinctive image features from scale-invariant keypoints. Springer Int J Comput Vis 60(2):91–110 Melloni A, Bestagini P, Milani S, Tagliasacchi M, Rocha A, Tubaro S (2014) Image phylogeny through dissimilarity metrics fusion. In: IEEE European workshop on visual information processing, pp 1–6 Milani S, Bestagini P, Tubaro S (2016) Phylogenetic analysis of near-duplicate and semanticallysimilar images using viewpoint localization. In IEEE international workshop on information forensics and security, pp 1–6 Milani S, Bestagini P, Tubaro S (2017) Video phylogeny tree reconstruction using aging measures. 
In: IEEE European signal processing conference, pp 2181–2185 Moreira D, Bharati A, Brogan J, Pinto A, Parowski M, Bowyer K, Flynn P, Rocha A, Scheirer W (2018) Image provenance analysis at scale. IEEE Trans Image Process 27(12):6109–6123 National Institute of Standards and Technology (2017) Nimble Challenge 2017 Evaluation. https:// bit.ly/3e8GeOP. Accessed on 16 Apr 2021



National Institute of Standards and Technology (2018) Nimble Challenge 2018 Evaluation. https:// bit.ly/3mY2sHA. Accessed on 16 Apr 2021 National Institute of Standards and Technology (2019) Media Forensics Challenge 2019. https:// bit.ly/3susnrw. Accessed on 19 Apr 2021 Noh H, Araujo A, Sim J, Weyand T, Han B (2017) Large-scale image retrieval with attentive deep local features. In: IEEE international conference on computer vision, pp 3456–3465 Oikawa M, Dias Z, Rocha A, Goldenstein S (2015) Manifold learning and spectral clustering for image phylogeny forests. IEEE Trans Inf Forensics Secur 11(1):5–18 Oikawa M, Dias Z, Rocha A, Goldenstein S (2016) Distances in multimedia phylogeny. Int Trans Oper Res 23(5):921–946 Pinto A, Moreira D, Bharati A, Brogan J, Bowyer K, Flynn P, Scheirer W, Rocha A (2017) Provenance filtering for multimedia phylogeny. In: IEEE international conference on image processing, pp 1502–1506 Prim R (1957) Shortest connection networks and some generalizations. Bell Syst Tech J 36(6):1389– 1401 PubPeer Foundation (2020) Traditional Chinese medicine for COVID-19 treatment. https://bit.ly/ 2Su3g8U. Accessed on 11 Apr 2021 Rudin C, Schapire R (2009) Margin-based ranking and an equivalence between adaboost and rankboost. J Mach Learn Res 10(10):2193–2232 Scheirer W (2020) A pandemic of bad science. Bull Atom Sci 76(4):175–184 Shifman L (2014) Memes in digital culture. MIT Press Stella Y, Shi J (2003) Multiclass spectral clustering. In: IEEE international conference on computer vision, pp 313–319 Theisen W, Brogan J, Thomas PB, Moreira D, Phoa P, Weninger T, Scheirer W (2021) Automatic discovery of political meme genres with diverse appearances. In: AAAI International Conference on Web and Social Media, pp 714–726 Tian Y, Fan B, Wu F (2017) L2-net: Deep learning of discriminative patch descriptor in Euclidean space. In: IEEE conference on computer vision and pattern recognition, pp 661–669 Turek M (2020) Media Forensics (MediFor). https://www.darpa.mil/program/media-forensics. Accessed on 24 Feb 2021 Twitter, Inc (2021) About Twitter. https://about.twitter.com/. Accessed 11 Apr 2021 University of Notre Dame, Computer Vision Research Laboratory (2018) MediFor (Media Forensics). https://bit.ly/3txaZU7. Accessed on 16 Apr 2021 Verde S, Milani S, Bestagini P, Tubaro S (2017) Audio phylogenetic analysis using geometric transforms. In: IEEE workshop on information forensics and security, pp 1–6 Weninger T, Bisk Y, Han J (2012) Document-topic hierarchies from document graphs. In: ACM international conference on information and knowledge management, pp 635–644 Yi KM, Trulls E, Lepetit V, Fua P (2016) LIFT: learned invariant feature transform. In: Springer European conference on computer vision, pp 467–483 Zhang X, Sun ZH, Karaman S, Chang S-F (2020) Discovering image manipulation history by pairwise relation and forensics tools. IEEE J Sel Top Signal Process 1012–1023 Zhu N, Shen J (2019) Image phylogeny tree construction based on local inheritance relationship correction. Springer Multimed Tools Appl 78(5):6119–6138



Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Part IV

Counter-Forensics

Chapter 16

Adversarial Examples in Image Forensics

Mauro Barni, Wenjie Li, Benedetta Tondi, and Bowen Zhang

Abstract We describe the threats posed by adversarial examples in an image forensic context, highlighting the differences and similarities with respect to other application domains. Particular attention is paid to study the transferability of adversarial examples from a source to a target network and to the creation of attacks suitable to be applied in the physical domain. We also describe some possible countermeasures against adversarial examples and discuss their effectiveness. All the concepts described in the chapter are exemplified with results obtained in some selected image forensics scenarios.

16.1 Introduction

Deep neural networks, most noticeably Convolutional Neural Networks (CNNs), are increasingly used in image forensic applications due to their superior accuracy in detecting a wide number of image manipulations, including double and multiple JPEG compression (Wang and Zhang 2016; Barni et al. 2017), median filtering (Chen et al. 2015), resizing (Bayar and Stamm 2018), contrast manipulation (Barni et al. 2018), and splicing (Ahmed and Gulliver 2020). Good performance of CNNs has also been reported for image source attribution, i.e., identifying the model of the camera which acquired a certain image (Bondi et al. 2017; Freire-Obregon et al. 2018; Bayar and Stamm 2018), and for deepfake detection (Rossler et al. 2019). Despite the good performance they achieve, the use of CNNs in security-oriented applications, like image forensics, is put at risk by the easiness with which adversarial examples can be built (Szegedy et al. 2014; Carlini and Wagner 2017; Papernot et al. 2016). As a




matter of fact, an attacker who has access to the internal details of the CNN used for a certain image recognition task can easily build an attacked image which is visually indistinguishable from the original one but is misclassified by the CNN. Such a problem is currently the subject of an intense research activity, yet no satisfactory solution has been found yet (see Akhtar and Ajmal (2018) for a survey on this topic). The problem is worsened by the observation that adversarial attacks are often transferable from the target network to other networks designed for the same task (Papernot et al. 2016). This means that even in a Limited Knowledge (LK) scenario, wherein the attacker has only partial information about the to-be-attacked network, he can attack a surrogate network mimicking the target one and the attack will be effective also on the target network with good probability. Such a property opens the way toward very powerful attacks that can be used in real applications wherein the attacker does not have full access to the attacked system (Papernot et al. 2016). While some recent works have shown that CNN-based image forensics tools are also susceptible to adversarial examples (Guera et al. 2017; Marra et al. 2018), some differences exist between the generation of adversarial examples targeting computer vision networks and networks trained to solve image forensic problems. For instance, in Barni et al. (2019), it is shown that adversarial examples aiming at deceiving image forensic networks are less transferable than those addressing computer vision tasks. A possible explanation for such a different behavior could be the different kinds of features image forensic networks rely on with respect to those used in computer vision applications, however a definitive answer to this question is not known yet. With the above ideas in mind, the goal of this chapter is to give a brief introduction to adversarial examples and illustrate their applicability to image forensics networks. Particular attention is given to the transferability of the examples, since it is mainly thanks to this property that adversarial examples can be exploited to develop practical attacks against image forensic detectors. We also describe some possible defenses against adversarial examples, even if, up to date, a definitive strategy to make the creation of adversarial examples impossible, or at least impractical, is not available.

16.2 Adversarial Examples in a Nutshell

In this section we briefly review the various approaches proposed so far to generate adversarial examples, without referring specifically to a multimedia forensics scenario. After a rigorous formalization of the problem, we review the main attack algorithms, then we consider the generation of adversarial examples in the physical domain. Finally, we discuss the problems associated with the generation of adversarial attacks in a black-box setting. Throughout the rest of this chapter we will always refer to the case of deep neural networks aiming at classifying an input image into a number of possible classes, since this is by far the most relevant setting in a multimedia forensic context.



16.2.1 Problem Definition and Review of the Most Popular Attacks

Let $x \in [0,1]^n$ denote a clean image¹ whose true label is $y$ and let $\phi$ be a CNN-based image classifier providing a correct classification of $x$, namely, $\phi(x) = y$. An adversarial example is a perturbed image $x' = x + \delta$ for which $\phi(x') \neq y$. More specifically, an untargeted attack aims at generating an adversarial example for which $\phi(x') = y'$ for any $y' \neq y$, while for a targeted attack the wrong label $y'$ must correspond to a specific target class $y_t$. In practice, we require that $\delta$ is a very small quantity, so that the perturbed image $x'$ is perceptually identical to $x$. Such a goal is usually achieved by constraining the $L_p$ norm of $\delta$ to be lower than a small positive quantity, that is $\|\delta\|_p \leq \varepsilon$, where $\varepsilon$ is a very small value. Starting from the seminal work by Szegedy, many approaches to generate the adversarial perturbation $\delta$ have been proposed, leading to a wide variety of different algorithms an attacker can rely on. The proposed approaches include gradient-based attacks (Szegedy et al. 2014; Goodfellow et al. 2015; Kurakin et al. 2017; Madry et al. 2018; Carlini and Wagner 2017), transfer-based attacks (Dong et al. 2018, 2019), decision-based attacks (Chen et al. 2017; Ilyas et al. 2018; Li et al. 2019), and so on. Most methods assume that the attacker has full knowledge of the to-be-attacked model $\phi$, a situation usually referred to as a white-box attack scenario. It goes without saying that this may not be the case in practical applications, for which more flexible black-box or gray-box attacks are needed. We will discuss the main difficulties associated with black-box attacks in Sect. 16.2.2. In the following we review some of the most popular gradient-based white-box attacks.

• L-BFGS Attack. In Szegedy et al. (2014), Szegedy et al. first proposed to generate adversarial examples aiming at fooling a CNN-based classifier and minimizing the perturbation of the image at the same time. The problem is formulated as

$$\min_{\delta} \; \|\delta\|_2^2 \quad \text{subject to} \quad \phi(x + \delta) = y_t, \;\; x + \delta \in [0,1]^n \qquad (16.1)$$

Here, the perturbation introduced by the attack is limited by using the squared $L_2$ norm. Solving the above minimization is not an easy task, so the adversarial examples are looked for by solving a more manageable problem:

$$\min_{\delta} \; \lambda \cdot \|\delta\|_2^2 + L(x + \delta, y_t) \quad \text{subject to} \quad x + \delta \in [0,1]^n \qquad (16.2)$$

¹ Hereafter, we assume that image pixels take values in $[0,1]$. For gray-level images $n$ is equal to the number of pixels, while for color images $n$ accounts for all the color channels.



where $L(x + \delta, y_t)$ is a loss term forcing the CNN to assign the target label $y_t$ to $x + \delta$ and $\lambda$ is a parameter balancing the importance of the two terms. In Szegedy et al. (2014), the usual cross-entropy loss is adopted as $L(x + \delta, y_t)$. The latter optimization problem is solved by the box-constrained L-BFGS method. Moreover, the optimal value of $\lambda$ is determined by conducting a binary search within a predefined range of values.

• Fast Gradient Sign Method (FGSM). A drawback of the L-BFGS method is its computational complexity. For this reason, Goodfellow et al. (2015) proposed a faster attack. The new method starts from the observation that, for small values of the perturbation, the output of the CNN can be computed by using a linear approximation, and that, due to the large dimensionality of $x$, a small perturbation of the input in the direction of the gradient can result in a large modification of the output. Accordingly, the adversarial examples are generated by adding a small perturbation to the clean image as follows (a minimal code sketch of FGSM and its iterative variant is given after this list):

$$x' = x + \varepsilon \cdot \mathrm{sign}(\nabla_x L(x, y)) \qquad (16.3)$$

where $\nabla_x L$ is the gradient of $L$ with respect to $x$, and the adversarial perturbation is constrained by $\|\delta\|_\infty \leq \varepsilon$. Using the sign of the gradient ensures that the perturbation is within the allowed limits. Since the adversarial examples are generated based on the gradient of the loss function with respect to the input image $x$, they can be efficiently calculated by using the back-propagation algorithm, thus resulting in a very efficient attack.

• Iterative FGSM (I-FGSM). The FGSM attack is a one-step attack whose strength can be modulated only by increasing the value of $\varepsilon$, with the risk that the linear approximation the method relies on is no longer valid. In Kurakin et al. (2017), the one-step FGSM is extended to an iterative, multi-step method by applying FGSM multiple times with a small step size, each time recomputing the gradient. The perturbed pixels are then clipped to an $\varepsilon$-neighborhood of the original input, to ensure that the distortion constraint is satisfied. The $N$-th step of the resulting method, referred to as I-FGSM (or sometimes BIM, Basic Iterative Method), is defined by

$$x_0 = x, \qquad x_N = x_{N-1} + \mathrm{clip}_\varepsilon\{\alpha \cdot \mathrm{sign}(\nabla_x L(x_{N-1}, y))\} \qquad (16.4)$$

where $\alpha$ is a small step size used to update the attack at each iteration. Similarly to I-FGSM, a universal first-order adversary, called projected gradient descent (PGD), has been proposed in Madry et al. (2018). Compared with I-FGSM, in which the starting point of the iterations is exactly the input image, the starting point of PGD is obtained by randomly projecting the input image within an allowed perturbation ball, which can be done by adding small random perturbations to the input image. Then, the following iterations are conducted in the same way as I-FGSM.

• Jacobian-based Saliency Map Attack (JSMA). According to the JSMA algorithm proposed by Papernot et al. (2016), the adversarial examples are generated on the basis of an adversarial saliency map, which indicates the contribution of the



image pixels to the classification result. In particular, the saliency map is computed based on the forward propagation of the pixel values to the output of the to-be-attacked network. According to the saliency map, at each iteration, a pair of pixels whose modification leads to a significant increase of the output values assigned to the target class and/or a decrease of the values assigned to the other classes is perturbed by an amount $\theta$. The iterations stop when the perturbed image is classified as belonging to the target class or a maximum distortion level is reached. To keep the perturbation imperceptible, each pixel can be modified up to a maximum of $T$ times.

• Carlini & Wagner Attack (C&W). In Carlini and Wagner (2017), Carlini and Wagner proposed to solve the optimization problem in (16.2) in a more efficient way. Specifically, two modifications of the basic optimization algorithm are proposed. On one hand, the loss term is replaced by a properly designed function directly related to the maximum difference between the logit of the target class and those of the other classes. On the other hand, the box constraint used to keep the perturbed pixels in a valid range is automatically satisfied by letting $\delta = \frac{1}{2}(\tanh(\omega) + 1) - x$, where $\omega$ is a new variable used for the transformation of $\delta$. In this way, the box-constrained optimization problem in (16.2) is transformed into the following unconstrained problem:

$$\min_{\omega} \; \left\| \tfrac{1}{2}(\tanh(\omega) + 1) - x \right\|_2^2 + \lambda \cdot g\!\left(\tfrac{1}{2}(\tanh(\omega) + 1)\right) \qquad (16.5)$$

where the function $g$ is defined as

$$g(x') = \max\!\left(\max_{i \neq t} z_i(x') - z_t(x'), \; -\kappa\right) \qquad (16.6)$$

and where $z_i$ is the logit corresponding to the $i$-th class, and the parameter $\kappa > 0$ is used to encourage the attack to generate adversarial examples classified with high confidence. Finally, the adversarial examples are generated by solving the modified optimization problem using the Adam optimizer.
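To make the mechanics of the gradient-based attacks above concrete, the following is a minimal PyTorch sketch of FGSM (Eq. 16.3) and I-FGSM (Eq. 16.4). The model and the cross-entropy loss are placeholders, the hyper-parameter names are illustrative, and the sketch is not a reference implementation of any of the cited attacks.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM (Eq. 16.3): move each pixel by eps in the direction of
    the sign of the loss gradient, then clip back to the valid [0, 1] range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def ifgsm(model, x, y, eps, alpha, steps=10):
    """Iterative FGSM / BIM (Eq. 16.4): repeat small FGSM steps of size alpha,
    clipping the total perturbation into an eps-ball around the clean image."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```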

16.2.2 Adversarial Examples in the Physical Domain

The algorithms described in the previous sections work in the digital domain, since they directly modify the digital images fed to the CNN. In many applications, however, it is required that the adversarial examples are generated in the physical domain, producing real-world objects that, when sensed by the sensors of the attacked system and fed to the CNN, cause a misclassification error. In this setting, and by focusing on the case of still images, which is the most relevant case for multimedia forensics applications, the images with the adversarial examples must first be printed or displayed on a screen and then shown to a camera, which will digitize them again before feeding them to the CNN classifier. This process, sometimes referred to as



Fig. 16.1 Physical domain and digital domain attacks

image rebroadcast, involves a digital-to-analog and an analog-to-digital conversion that may degrade the adversarial perturbation, thus making the attack ineffective. The rebroadcast process involved in a physical domain attack is exemplified in Fig. 16.1. As shown in the figure, during the rebroadcast process, the image is affected by several forms of degradation including noise addition, light changes, and geometric transformations. If no countermeasure is taken, it is very likely that the invisible perturbations introduced to generate the adversarial examples are damaged to the point of making the attack ineffective. Recognizing the above difficulties, several methods have been proposed to generate physical domain adversarial examples. Kurakin et al. first brought up this concept in Kurakin et al. (2017). In this paper, the new images taken after rebroadcasting are geometrically calibrated before being fed to the classifier. In this way, the adversarial perturbation is not affected by any geometric distortion, hence easing the attack. More realistic scenarios are considered in later works. In the seminal paper by Athalye et al. (2018) a method is proposed to generate physical adversarial examples that are robust to various distortions, including changes of the viewing angle and distance. Specifically, a procedure, named Expectation Over Transformation (EOT), is applied to generate the adversarial examples by ensuring that they are effective across a number of possible transformations. Let $x$ be the to-be-attacked image, with ground truth label $y$, and let $y_t$ be the target class of the attack. Without loss of generality, let the output of the classifier be



obtained by computing the probability $P(y_i|x)$ that the input image $x$ belongs to class $y_i$ and then choosing the class with the maximum probability,² namely,

$$\phi(x) = \arg\max_i P(y_i|x). \qquad (16.7)$$

The generation of adversarial examples is formalized as an optimization problem that maximizes the likelihood of the target class $y_t$ over an $\varepsilon$-radius ball around the original image $x$, i.e.,

$$\arg\max_{\delta} \; \log P(y_t|x + \delta) \qquad (16.8)$$
$$\text{subject to } \|\delta\|_p \leq \varepsilon. \qquad (16.9)$$
To ensure robustness of the attack, EOT considers a set T of transformations (usually consisting of geometric transformations) and a probability distribution PT over T , and maximizes the likelihood of the target class averaged over a transformed version of the input images, generated according to PT . A similar strategy is applied to the constraint, i.e., EOT constrains the expected distance between the transformed versions of the adversarial example and the original image. As a result, with EOT the optimization problem is reformulated as arg max E PT [log P(yt |t (x + δ))]

(16.10)

subject to E PT [||t (x + δ) − t (x)|| p ]

(16.11)

δ

where t is a transformation in T and E PT denotes expectation over T . The basic EOT approach has been extended in many ways, including a wider variety of transformations and by optimizing the perturbation over more than a single image. For instance, Sharif et al. (2016) have proposed a way to generate adversarial examples in the physical domain for face recognition systems. They restrict the perturbation within a small area around the eye-glass frame and increase the robustness of the perturbations by optimizing them over a set of properly chosen images. Eykholt et al. (2018) have proposed an attack capable of generating adversarial examples under even more realistic conditions. The attack targets a road sign classification system possibly deployed on board of autonomous vehicles. In addition, to sample the images from synthetic transformations as done by the basic EOT attack, they also generated experimental data containing actual physical condition variability. Their experiments show that the generated adversarial examples are so effective that can be printed on stickers and applied to road signs, and fool a vehicular traffic sign recognition system in driving-by tests. Zhang et al. (2020) further enlarged the set of transformations to implement a physical domain attack against a face authentication system equipped with a spoofing2

This is the common case of CNN classifiers adopting one-hot-encoding to represent the result of the classification.

442

M. Barni et al.

detection module, whose aim is to distinguish real faces and rebroadcast face photos. This situation presents an additional challenge, as explained in the following. To attack the face authentication system in the physical world, the attacker must present the perturbed images in front of the system by printing them or showing them on a screen. The rebroadcasting procedure, then, introduces new physical traces into the image fed to the classifier, which, having been trained to recognize rebroadcast traces, can recognize them and classify the images correctly despite the presence of the adversarial perturbation. To exit this apparent deadlock, the authors propose to utilize the EOT framework to design an attack that pre-emptively takes into account the new traces introduced by the rebroadcast procedure. The experiments shown in Zhang et al. (2020) demonstrate that EOT effectively allows to carry out an adversarial attack even in such difficult conditions.
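The EOT recipe of Eqs. (16.10)–(16.11) can be sketched as follows. The transformation sampler (a mild random rescaling plus additive noise), the simple per-pixel clipping used in place of the expected-distance constraint of Eq. (16.11), and all hyper-parameters are illustrative assumptions rather than the settings used in the works cited above.

```python
import random
import torch
import torch.nn.functional as F

def random_transform(x):
    """Sample t ~ P_T: here a mild random rescaling followed by small additive
    noise. x is an N x C x H x W batch of images in [0, 1]."""
    scale = random.uniform(0.9, 1.1)
    size = [int(s * scale) for s in x.shape[-2:]]
    t = F.interpolate(x, size=size, mode="bilinear", align_corners=False)
    t = F.interpolate(t, size=x.shape[-2:], mode="bilinear", align_corners=False)
    return (t + 0.02 * torch.randn_like(t)).clamp(0.0, 1.0)

def eot_attack(model, x, y_target, eps, alpha=1e-2, steps=100, samples=8):
    """Maximize the expected log-probability of the target class over P_T
    (Monte-Carlo estimate), keeping the perturbation in a simple eps-ball."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for _ in range(samples):                       # estimate E_{P_T}[.]
            logits = model(random_transform(x + delta))
            loss = loss + F.log_softmax(logits, dim=1)[:, y_target].mean()
        loss = loss / samples
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0.0, 1.0).detach()
```

Averaging the loss over freshly sampled transformations at every step is what gives the resulting perturbation its robustness to the rebroadcast distortions discussed above.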

16.2.3 White Versus Black-Box Attacks

An adversary may have different levels of knowledge about the algorithm used by the forensic analyst. From this point of view, adversarial examples in DL are commonly classified as white-box and black-box attacks (Yuan et al. 2019). In a white-box scenario, the attacker knows everything about the target forensic classifier, that is, he knows all the details of φ(·). In a black-box scenario, instead, the attacker knows nothing about it, or has only a very limited knowledge. In some works, following the taxonomy in Zheng and Hong (2018), the class of gray-box attacks is also considered, referring to a situation wherein the attacker has a partial knowledge of the classifier; for instance, he knows the architecture of the classifier, but has no knowledge of its internal parameters (e.g., he does not know some of the hyper-parameters or, more in general, he does not know the training data $D_{tr}$ used to train the classifier). In the white-box scenario, the attacker is also assumed to know the defence strategy adopted by the forensic analyst, hence he can take into account the presence of such a defence during the attack. Having full access to φ(·), the most common white-box attacks rely on some form of gradient descent (Goodfellow et al. 2015; Kurakin et al. 2017; Dong et al. 2018), or directly solve the optimization problem underlying the attack as in Carlini and Wagner (2017); Chen et al. (2018). When the attacker has a partial knowledge of the algorithm (gray-box setting), he can build his own version of the classifier φ′(·), usually referred to as the substitute or surrogate classifier, providing a hopefully good approximation of φ(·), and then use φ′(·) to carry out the attack. Hence, the attacker implements a white-box attack against φ′(·), hoping that the attack will also work against φ(·) (attack transferability, see Sect. 16.3.1). Mathematically speaking, the difference between white-box and black-box attacks is the loss function that the adversary seeks to maximize, namely L(φ(·), ·) in the white-box case and L(φ′(·), ·) in the black-box case. As long as attacks transfer from φ′(·) to φ(·), adversarial samples crafted for φ′(·) will also be misclassified by φ(·) (Papernot et al. 2016). The substitute



classifier φ′(·) is built by exploiting the available information and making an educated guess about the unknown parameters. Black-box attacks often assume that the target network can be queried as an oracle, for a limited number of times, to get useful information to build the substitute classifier (Liu et al. 2016). When the score values or the logit values $z_i$ can be obtained by querying the target model (score-based black-box attacks), the gradient can be estimated by drawing some random samples and acquiring the corresponding loss values (Ilyas et al. 2018); a minimal sketch of such an estimator is given below. A more challenging scenario is when only hard-label predictions, i.e., the predicted classes, are returned by the queried model (Brendel et al. 2017). In the DL literature, several approaches have been proposed to craft adversarial examples against substitute models in such a way as to maximize the probability that they can be transferred to the original model. One approach is the translation-invariant attack method (Dong et al. 2019), which, instead of optimizing the objective function directly on the perturbed image, optimizes it over a set of shifted versions of the image. Another closely related approach improves the transferability of adversarial examples by creating diverse input patterns (Xie et al. 2019). Instead of only using the original images to generate adversarial examples, the method applies some image transformations to the inputs with a given probability at each iteration of the gradient-based attack. The transformations include resizing and padding. A general and simple method to increase the transferability of adversarial examples is described in Sect. 16.3.2.
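As an illustration of score-based gradient estimation, the sketch below approximates the loss gradient with antithetic random sampling, in the spirit of Ilyas et al. (2018); the query budget, the smoothing parameter, and the function names are assumptions made for the example rather than the settings of any specific published attack.

```python
import numpy as np

def estimate_gradient(loss_fn, x, n_samples=50, sigma=1e-3, rng=None):
    """NES-style estimate of d loss / d x using only black-box loss queries.

    loss_fn: callable returning the scalar loss/score of the queried model
    x:       flattened image as a NumPy array
    """
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        grad += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) * u  # antithetic pair
    return grad / (2.0 * sigma * n_samples)

# The estimated gradient can then drive an I-FGSM-like update, trading the
# query budget (2 * n_samples per step) against estimation accuracy.
```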

16.3 Adversarial Examples in Multimedia Forensics

Given the recent trend toward the use of deep learning architectures in Multimedia Forensics, several Counter-Forensic (CF) attacks against deep learning models have been developed in recent years. In this case, one of the main advantages of CNN-based classifiers and detectors, namely, their ability to learn forensic features directly from the input data (be they images, videos, or audio tracks), turns into a weakness. An informed attacker can in fact generate his attacks directly in the sample domain without the need to map them from the feature domain to the sample domain, as happens with conventional methods based on hand-crafted features. The vulnerability of multimedia forensics tools based on deep learning has been addressed by many works, as summarized in Marra et al. (2018). The underlying motivations for the vulnerability of CNNs used in multimedia forensics are basically the same as those characterizing image recognition applications, with a prominent role played by the huge dimension of the space of possible inputs, which is substantially larger than the set of images used to train the model. As a result, in a white-box scenario, the attacker can easily create slightly modified images that fall into an ‘unseen’ region of the image space, thus forcing a misclassification error. Such a weakness of deep learning-based approaches represents a serious threat in multimedia forensics, where the tools are designed to work under intrinsically adversarial conditions.



Adversarial examples that have been developed against CNNs in multimedia forensics are reviewed in the following. The first targeted attack based on adversarial examples was proposed in Guera et al. (2017) to fool a CNN-based camera model identification system. By relying on adversarial examples, the authors propose a counter-forensic method for slightly altering an image in such a way as to change the estimated camera model. A state-of-the-art CNN-based camera model detector based on the DenseNet architecture (Huang et al. 2017) is considered in the analysis. The counter-forensic method uses both the FGSM attack and the JSMA attack to craft adversarial images, obtaining good performance in both cases. Adversarial attacks against CNN-based image forensics methods have been derived in Gragnaniello et al. (2018) for several common manipulation detection tasks. The following common processing operations have been considered: image blurring, applied with different variances of the Gaussian filter; image resizing, both downscaling and upscaling; JPEG compression, with different qualities; and median filtering, with window sizes 3 × 3 and 7 × 7. Several CNN models have been successfully attacked by means of adversarial perturbations obtained via the FGSM algorithm applied with different strengths $\varepsilon$. Adversarial perturbations have been shown to offer poor robustness to image processing operations, e.g., image compression (Marra et al. 2018). In particular, rounding to integers is sometimes sufficient to wash out the perturbation and make an adversarial example ineffective. A gradient-inspired pixel domain attack to generate adversarial examples against CNN-based forensic detectors in the integer domain has been proposed in Tondi (2018). In contrast to standard gradient-based attacks, which perturb the attacked image based on the gradient of the loss function w.r.t. the input, in the attack proposed in Tondi (2018) the gradient of the output score function is approximated with respect to the pixel values, incorporating the constraint that the input is an integer in the optimization. The performance of the integer-based attack has been assessed against three image manipulation detectors, for median filtering, resizing, and contrast adjustment. Moreover, while common gradient-based adversarial attacks, like FGSM, I-FGSM, and C&W, work in a white-box setting, the attack proposed in Tondi (2018) is a black-box one, since it does not need any knowledge of the internal parameters of the model, requiring only that the network can be queried as an oracle and the output observed. All the above attacks are targeted at a specific CNN classifier. Carrying out an effective attack in a completely black-box or no-knowledge scenario, that is, generating an attack that can be effectively transferred to the (unknown) model used by the analyst, turns out to be a difficult task, which is exacerbated by the complexity of the decision functions learnt by neural networks. For this reason, and given that white-box attacks can hardly be applied in practical applications, studying the transferability of adversarial examples plays a major role in assessing the security of DL-based multimedia forensics tools.



16.3.1 Transferability of Adversarial Examples in Multimedia Forensics

It has been demonstrated in several studies concerning computer vision applications (Tramèr et al. 2017; Papernot et al. 2016) that, in a gray-box or black-box scenario, adversarial examples crafted to attack a given CNN model are also effective against other models designed for the same task. In this section, we focus on an image forensics context and review the main results reported so far regarding the transferability of adversarial examples in this domain. Several works have investigated the transferability of adversarial examples in a network mismatch scenario (Marra et al. 2018; Gragnaniello et al. 2018). Focusing on camera model identification, Marra et al. (2018) analyzed the transferability of adversarial examples generated by means of FGSM among different networks. Four different network architectures for camera model identification trained on the VISION dataset (Shullani et al. 2017) are utilized for experiments, i.e., Shallow CNN (Bondi et al. 2017), DenseNet-40 (Huang et al. 2017), DenseNet-121 (Huang et al. 2017), and XceptionNet (Chollet 2017). The classification accuracy achieved by these models is 80.77% for Shallow CNN, 87.96% for DenseNet-40, 93.88% for DenseNet-121, and 95.15% for XceptionNet. The transferability performance for the case of cross-network mismatch is shown in Table 16.1. As can be seen, the classification accuracy on attacked images remains pretty high when the network used to build the adversarial examples is not the same targeted by the attack, thus revealing poor transferability of the attacks. A more comprehensive investigation of the transferability of adversarial examples in different mismatch scenarios has been carried out in Barni et al. (2019). As in Barni et al. (2019), in the following we report the transferability of adversarial examples in various settings, according to the kind of attack used to create the examples and the mismatch existing between the source and the target models.

Table 16.1 Classification accuracy (%) for cross-network mismatch in the context of camera model identification. The FGSM attack is used (rows: target network TN; columns: source network SN)

TN            | Shallow CNN | DenseNet-40 | DenseNet-121 | XceptionNet
Shallow CNN   |    2.19     |    40.87    |    42.13     |    42.51
DenseNet-40   |   37.23     |     3.88    |    37.13     |    53.42
DenseNet-121  |   62.81     |    60.60    |     4.67     |    44.35
XceptionNet   |   68.19     |    79.80    |    57.21     |     2.63



Type of attacks We consider the transferability of attacks generated according to two of the most popular general attacks proposed so far, namely I-FGSM and JSMA (see Sect. 16.2.1 for a detailed description of such attacks). The attacks were implemented by relying on the Foolbox package (Rauber et al. 2017).

Source/target model mismatch To carry out a comprehensive evaluation of the transferability of adversarial examples, three different mismatch situations between the network used to create the adversarial examples (hereafter referred to as source network—SN) and the network the adversarial examples should be transferred to (hereafter named target network—TN) are considered. Specifically, we consider the following situations: (i) cross-network mismatch, wherein different network architectures are trained on the same dataset; (ii) cross-training-set mismatch, according to which the same network architecture is trained on different datasets; and (iii) cross-network-and-training-set mismatch, wherein different architectures are trained on different datasets.

Network architectures For cross-network mismatch, we consider BSnet (Bayar and Stamm 2016), designed for the detection of a number of widely used image manipulations, and BC+net (Barni et al. 2018), proposed to detect generic contrast adjustment operations. In particular, BSnet consists of 3 convolutional layers, 3 max-pooling layers, and 3 fully-connected layers, and the filters used in the first convolutional layer are constrained to extract residual-based features from images. As for BC+net, it has 9 convolutional layers, and no constraint is applied to the filters used in the network. The results reported in the following refer to two common image manipulations, namely, median filtering (with window size 5 × 5) and image resizing (by a factor of 0.8).

Datasets For cross-training-set mismatch, we consider two datasets: the RAISE (R) dataset (Dang-Nguyen et al. 2021) and the VISION (V) dataset (Shullani et al. 2017). In more detail, RAISE consists of 8156 high-resolution uncompressed images with size 4288 × 2848, 2000 of which were used for the experiments. With regard to the VISION dataset, it contains 11,732 JPEG images, with 2000 images used for experiments, with a minimum resolution equal to 2336 × 4160 and a maximum one of 3480 × 4640. Results refer to JPEG images compressed with a quality factor larger than 97. The images were split into training, validation, and test sets without overlap. All color images were first converted to gray-scale.



Table 16.2 Detection accuracy (%) of the models in the absence of attacks Model Median filtering Image resizing V R R φR φ φ φ φV BSnet BSnet BC+net BSnet BSnet Accuracy

98.1

99.5

98.4

97.5

96.6

φR BC+net 98.5

Experimental methodology Before evaluating the transferability of adversarial examples, the detection models were first trained. The input patch size was set to 128 × 128. To train the BSnet models, 200,000 patches per class were considered while 10,000 patches were used for testing. As to BC+net models, 106 patches were used for training, 105 for validation, and 5 × 104 for testing. The Adam optimizer was used with a learning rate of 10−4 for both networks, and the training batch size was set to 32 patches. BSnet was trained for 30 epochs and BC+net for 3 epochs. The detection accuracies of the models are given in Table 16.2. For sake of clarity, the symbol φDB net is used to denote the trained models, where net ∈ {BSnet, BC+net} corresponds to the network architecture, and DB ∈ {R, V} is the dataset used for training. In counter-forensic applications, it is reasonable to assume that the attacker is interested only in passing off the manipulated images as original ones to hide the traces of manipulations. Therefore, for all the experiments, 500 manipulated images were attacked to generate the adversarial examples. For I-FGSM, the number of iterations was set to 10, and the optimal attack strength used in each iteration was determined by spanning the range [0 : εs : 0.1], where εs was set to 0.01 and 0.001 in different cases. As for JSMA, the perturbation strength θ was set to 0.01 and 0.1. For each attack, two different attack strengths were considered. The average PSNR of the adversarial examples is always larger than 40 dB. Transferability results For each setting, we show the average PSNR of the adversarial examples, and the attack success rate on the target network, ASRTN , which corresponds to the transferability degree (the attack success rate on the source network—ASRSN —is always close to 100%, hence it is not reported in the tables). Table 16.3 reports the results regarding cross-network transferability. All models were trained on the RAISE dataset. In most of the cases, the adversarial examples are not transferable, indicating a likely failure of the attacks in a black-box scenario. The only exception is median filtering when the attack is carried out by I-FGSM, for which we have ASRTN = 82.5% with an average PSNR of 40 dB. With regard to cross-training-set transferability, the BSnet network was trained on the RAISE and the VISION datasets, obtaining the results shown in Table 16.4. For the cases of median filtering, the values of ASRTN are relatively high for I-FGSM with εs = 0.01, while the transferability is poor with the other attacks. With regard


Table 16.3 Attack transferability for cross-network mismatch (SN = φ^R_BSnet, TN = φ^R_BC+net)

  Attack    Parameter       Median filtering            Image resizing
                            ASR_TN (%)   PSNR (dB)      ASR_TN (%)   PSNR (dB)
  I-FGSM    ε_s = 0.01      82.5         40.0           0.2          40.0
  I-FGSM    ε_s = 0.001     18.1         59.7           0.2          58.5
  JSMA      θ = 0.1         1.0          49.6           1.6          46.1
  JSMA      θ = 0.01        1.6          58.5           0.6          55.0

Table 16.4 Transferability of adversarial examples for cross-training-set mismatch

  SN = φ^R_BSnet, TN = φ^V_BSnet
  Attack    Parameter       Median filtering            Image resizing
                            ASR_TN (%)   PSNR (dB)      ASR_TN (%)   PSNR (dB)
  I-FGSM    ε_s = 0.01      84.5         40.0           69.2         40.0
  I-FGSM    ε_s = 0.001     4.5          59.7           4.9          58.5
  JSMA      θ = 0.1         1.2          49.6           78.2         46.0
  JSMA      θ = 0.01        0.2          58.5           11.5         55.0

  SN = φ^V_BSnet, TN = φ^R_BSnet
  Attack    Parameter       Median filtering            Image resizing
                            ASR_TN (%)   PSNR (dB)      ASR_TN (%)   PSNR (dB)
  I-FGSM    ε_s = 0.01      94.1         40.0           0.2          40.0
  I-FGSM    ε_s = 0.001     7.7          59.9           0.0          59.6
  JSMA      θ = 0.1         1.0          49.6           0.0          50.6
  JSMA      θ = 0.01        0.8          58.1           0.0          57.8

Transferability results For each setting, we show the average PSNR of the adversarial examples and the attack success rate on the target network, ASR_TN, which measures the degree of transferability (the attack success rate on the source network, ASR_SN, is always close to 100% and is therefore not reported in the tables). Table 16.3 reports the results regarding cross-network transferability. All models were trained on the RAISE dataset. In most of the cases, the adversarial examples are not transferable, indicating a likely failure of the attacks in a black-box scenario. The only exception is median filtering when the attack is carried out with I-FGSM, for which we have ASR_TN = 82.5% with an average PSNR of 40 dB.

With regard to cross-training-set transferability, the BSnet architecture was trained on the RAISE and the VISION datasets, obtaining the results shown in Table 16.4. For median filtering, the values of ASR_TN are relatively high for I-FGSM with ε_s = 0.01, while the transferability is poor with the other attacks. With regard to image resizing, when the source network is trained on RAISE, the ASR_TN reaches a high level when the stronger attacks are employed (I-FGSM with ε_s = 0.01, and JSMA with θ = 0.1). However, the adversarial examples are never transferable when the SN is trained on VISION, showing that the transferability between models trained on different datasets is not symmetric.

The results for the case of strongest mismatch (cross-network-and-training-set) are illustrated in Table 16.5. Only the case of median filtering attacked by I-FGSM with ε_s = 0.01 achieves a large transferability, while for all the other cases the transferability is nearly null. In summary, in contrast to computer vision applications, adversarial examples targeting image forensics networks are generally non-transferable. Accordingly, from the attacker's side, it is important to investigate whether attack transferability can be improved by increasing the attack strength, so as to generate adversarial examples lying deeper inside the target region of the attack.


Table 16.5 Attack transferability for cross-network-and-training-set mismatch (SN = φ^V_BSnet, TN = φ^R_BC+net)

  Attack    Parameter       Median filtering            Image resizing
                            ASR_TN (%)   PSNR (dB)      ASR_TN (%)   PSNR (dB)
  I-FGSM    ε_s = 0.01      79.6         40.0           0.4          40.0
  I-FGSM    ε_s = 0.001     0.8          59.9           0.2          59.6
  JSMA      θ = 0.1         0.8          49.6           0.0          50.2
  JSMA      θ = 0.01        1.2          58.1           0.0          57.4

16.3.2 Increased-Confidence Adversarial Examples with Improved Transferability

The lack of transferability of adversarial examples in image forensic applications is partly due to the fact that, in most implementations, the attacks aim at generating adversarial examples with minimum distortion. Consequently, the resulting adversarial examples are close to the decision boundary, and a small difference between the decision boundaries of the source and target networks can lead to the failure of the attack on the TN. To overcome this problem, the attacker may want to generate adversarial examples lying deeper inside the target region of the attack. However, given the complexity of the decision boundary learned by CNNs, it is hard to control the distance of the adversarial examples from the boundary. In addition, increasing the attack strength by simply continuing the attack iterations until a limit value of PSNR is reached may not be effective, since a lower PSNR does not necessarily result in a stronger attack with higher transferability (see Table 16.3).

In order to generate adversarial examples with higher transferability that can be used for counter-forensic applications, a general strategy consists in increasing the confidence of the misclassification, where the confidence is defined as the difference between the logit of the target class and the maximum among the logits of the other classes (Li et al. 2020). Specifically, focusing on the binary case, for a clean image x with label y = i (i = 0, 1), a perturbed image x′ is sought for which z_{1−i} − z_i > c, where z_j denotes the logit of class j and c > 0 is the desired confidence value. By implementing the above stop condition, the most popular attacks (such as I-FGSM and PGD) can be modified to generate adversarial examples with increased confidence. In the following, we evaluate the transferability of increased-confidence adversarial examples by adopting a methodology similar to that used in Sect. 16.3.1.

Attacks We report the results obtained by applying the increased-confidence strategy to four popular attacks, namely, I-FGSM, Momentum-based I-FGSM (MI-FGSM) (Dong et al. 2018), PGD, and the C&W attack. In particular, MI-FGSM was proposed explicitly to improve the transferability of the attacks by integrating a momentum term, governed by a decay factor μ, into each iteration of I-FGSM.
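A minimal sketch of how the confidence-based stop condition can be grafted onto an iterative attack is reported below. The code is illustrative only (the experiments described in the following rely on modified Foolbox implementations) and assumes a binary detector whose output layer provides the logits z_0 and z_1 for a single input patch normalized in [0, 1].

import torch
import torch.nn.functional as F

def increased_confidence_ifgsm(model, x, true_label, c, eps_step=0.001, max_iter=100):
    # Iterative FGSM variant implementing the stop condition z_{1-i} - z_i > c:
    # the attack keeps iterating until the logit of the (wrong) target class
    # exceeds the logit of the true class by at least the confidence c.
    target = 1 - true_label                       # binary case: target class index
    x_adv = x.clone().detach()
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv)                     # shape (1, 2), pre-softmax values
        if (logits[0, target] - logits[0, true_label]).item() > c:
            return x_adv.detach()                 # desired confidence reached
        loss = F.cross_entropy(logits, torch.tensor([true_label]))
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + eps_step * grad.sign()).clamp(0.0, 1.0).detach()
    return x_adv.detach()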


Networks Three network architectures are considered in the cross-network experiments: BSnet (Bayar and Stamm 2016), BC+net (Barni et al. 2018), and the VGG-16 network (VGGnet) commonly used in computer vision applications (VGGnet consists of 13 convolutional layers and 3 fully connected layers; more information can be found in Simonyan and Zisserman (2015)). With regard to the image manipulation tasks, we report results referring to median filtering, image resizing, and the addition of white Gaussian noise (AWGN) with zero mean and unit standard deviation.

Datasets The experiments on cross-training-set mismatch are based on the RAISE and VISION datasets.

Experimental setup BSnet and BC+net were trained by using the same settings described in Sect. 16.3.1. With regard to VGGnet, 10^5 patches were used for training and validation, and 10^4 patches were used for testing. The models were trained for 50 epochs by using the Adam optimizer with a learning rate of 10^-5. The detection accuracies in the absence of attacks range in [98.1, 99.5]% for median filtering detection, [96.6, 99.0]% for image resizing, and [98.3, 99.9]% for AWGN detection. The attacks were applied to 500 manipulated images for each task. The Foolbox toolbox (Rauber et al. 2017) was employed to generate increased-confidence adversarial examples with the new stop condition. Specifically, for the C&W attack, all the parameters were set to their default values. For PGD, a binary search was conducted with initial values ε = 0.3 and α = 0.005, for 100 iterations. For I-FGSM, we also applied 100 steps, with the optimal step size determined in the range [0.001 : 0.001 : 0.1]. Eventually, for MI-FGSM, all the parameters were set to the same values as for I-FGSM except the decay factor, for which we let μ = 0.2.

Transferability results In the following we show some results demonstrating the improved transferability of adversarial examples with increased confidence. For each attack, we report the ASR_TN and the average PSNR. The ASR_SN is always close to 100% and hence it is omitted from the tables. Since the confidence value c depends on the scale of the logits of the SN, different values were chosen for different networks.


Table 16.6 Attack transferability for cross-network mismatch (SN = φ^R_VGGnet, TN = φ^R_BSnet). The ASR_TN (%) and the average PSNR (dB) of the adversarial examples are reported

  Median filtering
  c       I-FGSM              MI-FGSM             PGD                 C&W
          ASR_TN   PSNR       ASR_TN   PSNR       ASR_TN   PSNR       ASR_TN   PSNR
  0       27.2     58.1       27.2     58.1       5.4      67.5       8.4      69.1
  12      71.0     47.9       71.6     47.8       55.0     52.0       50.6     54.6
  12.5    88.2     45.3       88.6     45.3       79.0     48.9       70.1     51.5
  13      96.4     42.5       96.6     42.7       94.6     45.4       91.2     48.1

  Image resizing
  c       I-FGSM              MI-FGSM             PGD                 C&W
          ASR_TN   PSNR       ASR_TN   PSNR       ASR_TN   PSNR       ASR_TN   PSNR
  0       0.4      59.3       0.4      59.2       1.8      75.4       1.2      71.5
  17      25.0     33.3       24.8     33.3       22.0     33.4       40.4     36.8
  18      39.6     30.9       37.4     30.9       39.8     30.9       53.6     34.3
  19      51.6     28.9       52.6     28.9       52.0     28.9       64.4     32.2

  AWGN
  c       I-FGSM              MI-FGSM             PGD                 C&W
          ASR_TN   PSNR       ASR_TN   PSNR       ASR_TN   PSNR       ASR_TN   PSNR
  0       10.6     56.2       10.8     56.0       5.8      60.6       1.2      64.2
  10      40.4     52.1       41.4     51.9       20.6     54.8       19.2     57.8
  15      79.4     49.4       77.8     49.2       50.8     51.4       52.0     53.8
  20      91.0     45.4       92.2     45.4       82.8     46.8       79.9     49.5

Table 16.6 reports the results for cross-network mismatch, showing the transferability when (SN, TN) = (φ^R_VGGnet, φ^R_BSnet) for three different image manipulations. As expected, by using an increased confidence, the transferability of the adversarial examples improves significantly in all cases. Specifically, the ASR_TN reaches a maximum of 96.6, 64.4, and 92.2% for median filtering, image resizing, and AWGN, respectively. Moreover, a larger confidence always results in adversarial examples with higher transferability and larger image distortion, and different degrees of transferability are obtained with different attacks. Among the four attacks, the C&W attack always achieves the highest PSNR for similar values of ASR_TN. A noteworthy observation regards the MI-FGSM attack. While Dong et al. (2018) report that the MI-FGSM attack improves the transferability of attacks against computer vision networks, a similar improvement of ASR_TN is not observed in Table 16.6 (with respect to I-FGSM). The reason could be that the gradients between subsequent iterations are highly correlated in image forensics models, so the gradient stabilization sought by MI-FGSM provides little additional benefit.

Considering cross-training-set transferability, the results for the cases of median filtering, image resizing, and AWGN addition are shown in Table 16.7, which reports the transferability of adversarial examples from BSnet trained on RAISE to the same architecture trained on VISION. According to the table, increasing the confidence always helps to improve the transferability of the adversarial examples.


Table 16.7 Attack transferability for cross-training-set mismatch (SN = φ^R_BSnet, TN = φ^V_BSnet). The ASR_TN (%) and the average PSNR (dB) of the adversarial examples are reported

  Median filtering
  c       I-FGSM              MI-FGSM             PGD                 C&W
          ASR_TN   PSNR       ASR_TN   PSNR       ASR_TN   PSNR       ASR_TN   PSNR
  0       4.8      59.7       4.8      59.7       0.2      74.5       0.2      72.0
  50      65.0     48.9       65.4     48.8       61.2     50.3       60.0     52.2
  80      88.0     45.1       88.0     44.8       84.4     45.8       82.0     47.8
  100     97.4     42.9       97.4     42.7       96.6     43.5       95.0     45.2

  Image resizing
  c       I-FGSM              MI-FGSM             PGD                 C&W
          ASR_TN   PSNR       ASR_TN   PSNR       ASR_TN   PSNR       ASR_TN   PSNR
  0       12.8     58.7       12.8     58.6       9.8      66.9       9.8      68.3
  30      53.0     48.0       48.6     47.8       38.6     49.6       23.8     52.5
  40      64.0     44.7       60.2     44.6       54.2     45.7       32.8     48.9
  50      67.2     41.7       64.2     41.7       59.8     42.3       39.2     45.9

  AWGN
  c       I-FGSM              MI-FGSM             PGD                 C&W
          ASR_TN   PSNR       ASR_TN   PSNR       ASR_TN   PSNR       ASR_TN   PSNR
  0       0.6      57.7       0.6      57.6       0.2      62.4       0.2      65.0
  20      13.8     49.7       15.0     49.6       11.0     52.2       12.2     54.4
  30      54.4     44.8       72.8     45.1       77.2     47.3       78.0     49.4
  40      88.2     41.0       93.0     41.3       94.0     43.0       95.4     45.5

Table 16.8 Attack transferability for cross-network-and-training-set mismatch (SN = φ^V_BSnet, TN = φ^R_BC+net). The ASR_TN (%) and the average PSNR (dB) of the adversarial examples are reported

  Median filtering
  c       I-FGSM              MI-FGSM             PGD                 C&W
          ASR_TN   PSNR       ASR_TN   PSNR       ASR_TN   PSNR       ASR_TN   PSNR
  0       2.6      60.0       2.6      60.0       0.0      71.9       0.0      70.5
  100     84.6     42.4       85.6     42.4       83.0     43.1       78.0     45.1

  Image resizing
  c       I-FGSM              MI-FGSM             PGD                 C&W
          ASR_TN   PSNR       ASR_TN   PSNR       ASR_TN   PSNR       ASR_TN   PSNR
  0       2.0      59.8       2.0      59.7       0.8      74.7       0.8      73.3
  400     37.4     31.2       33.4     31.1       40.0     31.2       17.0     33.6

Moreover, although the transferability is improved in all the cases, transferring adversarial examples for image resizing detection still tends to be more difficult, as a larger image distortion is needed to achieve a transferability comparable to that obtained for median filtering and AWGN addition.


Finally, we consider the strongest mismatch case, i.e., cross-network-and-training-set mismatch, for median filtering and image resizing detection. Only the attack transferability with c = 0 and that with the largest confidence are reported in Table 16.8. A high transferability is achieved (with a good image quality) for the case of median filtering, while adversarial examples targeting image resizing detection are less transferable. To summarize, in contrast to computer vision applications, using adversarial examples for counter-forensics in a black-box or gray-box scenario requires that proper measures be taken to ensure the transferability of the attacks (Li et al. 2020). As we have shown by referring to the method proposed in Li et al. (2020), however, attack transferability can be ensured in a wide variety of settings, the price to pay being a slight increase of the distortion introduced by the attack, which in turn calls for the development of suitable defences.

16.4 Defenses

As a response to the threats posed by the existence of adversarial examples and by the ease with which they can be crafted, many defence mechanisms have been proposed to develop CNN-based forensic methods that can work in adversarial conditions. Generally speaking, defences can work in a reactive or proactive manner. In the first case, the defence mechanisms are designed to work in conjunction with the original CNN algorithm φ, e.g., by revealing whether an adversarial example is being fed to the CNN by means of a dedicated detector, or by mitigating the (possibly present) adversarial perturbation by applying some input transformations before feeding the CNN (this latter approach has been tested in the general DL literature, e.g., Xu et al. (2017), but has not been adopted as a defence strategy in forensic applications). The second branch of defences aims at building more secure CNN models from scratch, or by properly modifying the original models. The large majority of the methods developed in the ML- and DL-based forensic literature belongs to this category. Among the most popular approaches, we mention adversarial training, multiple classification, and detector randomization. With regard to detector randomization, we refer to methods wherein the detector is built by including within it some random elements, thus qualifying this approach as a proactive one. This contrasts with randomization techniques commonly adopted in the DL literature, like, for instance, stochastic activation pruning (Dhillon et al. 2018), which randomly drops some neurons of the CNN during the forward pass. In the following, we review some of the methods developed in the forensic literature to defend against adversarial examples.


16.4.1 Detect Then Defend

The most common defence approach in early anti-counter-forensics works consisted in performing adversarial example detection to rule out adversarial images, followed by the application of standard (unaware) detectors for the analysis of the samples deemed to be benign. The analyst, aware of the counter-forensic method the system may be subject to, develops a new detector φ_A capable of exposing the attacked images by looking for the traces left by the counter-forensic algorithm. This goal is achieved by resorting to new, tailored features. The new algorithm φ_A is explicitly designed to reveal whether the image underwent a certain attack or not. If this is the case, the analyst may refuse the image or try to clean it to remove the effect of the attack. In this kind of approach, the algorithm φ_A is used in conjunction with the original, unaware, algorithm φ. Such a view is adopted in Valenzise et al. (2013) and Zeng et al. (2014) to address the adversarial detection of JPEG compression and median filtering, when the attacker tries to hinder the detection by removing the traces of JPEG compression and median filtering. Among the examples for the case of model-based analysis, we mention the algorithm in Costanzo et al. (2014), which counters keypoint removal and injection attacks against copy-move detectors.

The “detect then defend” approach has also been adopted more recently in the case of CNN-based forensic detectors, to defend against adversarial examples. For instance, Carrara et al. (2018) propose to tell apart adversarial examples from benign images by looking at their behavior in the feature space. The method focuses on the analysis of the trajectory of the internal representations of the network (i.e., the activations of the neurons of the hidden layers), arguing that, for adversarial inputs, the representations follow a different evolution with respect to genuine (non-adversarial) inputs. Detection is achieved by defining a feature distance that encodes these trajectories into a fixed-length sequence, used to train a Long Short-Term Memory (LSTM) neural network to discern adversarial inputs from genuine ones. Modeling the distribution of the activations of normal data has also been considered in the general DL literature to reveal the abnormal behavior of adversarial examples (Tao et al. 2018). It goes without saying that this kind of defence can be easily circumvented if the attacker has some information about the method used by the analyst to expose the attack, thus entering a never-ending loop where attacks and forensic algorithms are iteratively developed.
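The general idea of monitoring the evolution of the internal representations can be illustrated with the following sketch, which is a strongly simplified illustration rather than the actual pipeline of Carrara et al. (2018): forward hooks record the activations of a few chosen layers, and the per-layer distance from pre-computed class-average activations yields the fixed-length sequence on which a recurrent detector can be trained.

import torch
import torch.nn as nn

def activation_trajectory(model, layers, x, class_centroids):
    # Record the activations of the chosen layers for input x and return the
    # sequence of distances from per-layer class-centroid activations
    # (the centroids are assumed to have been estimated beforehand on genuine images).
    acts = []
    hooks = [l.register_forward_hook(lambda m, i, o: acts.append(o.detach()))
             for l in layers]
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return torch.stack([torch.norm(a.flatten() - c.flatten())
                        for a, c in zip(acts, class_centroids)])

# The resulting fixed-length sequences (one per image) can then be used to train
# a recurrent classifier, e.g. nn.LSTM(input_size=1, hidden_size=32, batch_first=True),
# that separates genuine inputs from adversarial ones.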

16.4.2 Adversarial Training

A simple, yet effective, approach to improve the robustness of machine learning classifiers against adversarial attacks is adversarial training. Adversarial training consists in augmenting the training dataset with examples of adversarial images.


Such an approach implicitly assumes that the attack algorithm is known to the analyst and that the attack is not carried out on the retrained version of the detector. This identifies adversarial training as a white-box defense carried out against a gray-box attack. Let D_A be the set of attacked images used for training. The adversarially trained classifier is φ(·, T; D_tr ∪ D_A), where T = {t_1, t_2, ...} denotes the set of all the hyperparameters (i.e., the non-trainable parameters) of the network, including, for instance, the type of algorithm, its structure, the loss function, the internal parameters, the training procedure, etc. Adversarial training has been widely adopted in the general DL literature to improve the robustness of DL classifiers against adversarial examples (Goodfellow et al. 2015; Madry et al. 2018). DL-based forensics is no exception, with several approaches resorting to adversarial training to defend against adversarial examples (see, for instance, Schöttle et al. (2018) and Zhao et al. (2018)).

In the machine-learning-based forensic literature, JPEG-aware training is often considered to achieve robustness against JPEG compression. The interest in JPEG compression is motivated by the fact that it is a common post-processing operation applied to images, either innocently (e.g., when uploading images to a social network) or maliciously (in some cases, in fact, JPEG compression can be considered as a simple and effective laundering attack (Barni et al. 2019)). The algorithms in Barni et al. (2017, 2016) for double JPEG detection, and in Boroumand and Fridrich (2017) for a variety of manipulation detection problems, based on Support Vector Machines (SVMs), are examples of this approach. Several examples have also been proposed more recently in CNN-based forensics, e.g., for camera model identification (Marra et al. 2018), contrast adjustment detection (Barni et al. 2018), and GAN detection (Barni et al. 2020; Nataraj et al. 2019).

Resizing is another common processing operator applied to images, either innocently or maliciously. In a splicing scenario, for instance, image resizing can be applied to delete the traces of the splicing operation. Accordingly, a resizing-aware detector can be trained to detect splicing in the presence of resizing. From a different perspective, resizing can also be used as a defence against adversarial perturbations. Like other geometric transformations, resizing can be applied to the input images before testing, so as to disable the effectiveness of the adversarial perturbations (He et al. 2017).
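In its simplest form, adversarial training can be sketched as follows (a generic illustration of the recipe, not a specific published implementation): at each training step, the current batch is augmented with adversarial examples generated on the fly against the current version of the model.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # One-step FGSM, used here only to craft training-time adversarial examples.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_epoch(model, loader, optimizer, eps=0.01):
    # Standard training loop in which every batch is augmented with its
    # adversarial counterpart (D_tr ∪ D_A in the notation used above).
    model.train()
    for x, y in loader:
        x_adv = fgsm(model, x, y, eps)
        x_aug = torch.cat([x, x_adv], dim=0)
        y_aug = torch.cat([y, y], dim=0)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_aug), y_aug)
        loss.backward()
        optimizer.step()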

16.4.3 Detector Randomization

Detector randomization is another defense strategy that has been proposed to make life difficult for the attacker. With regard to computer vision applications, many randomization-based approaches have been proposed to hinder the transferability of adversarial examples in gray-box or black-box scenarios, and hence improve the security of the applications (Xie et al. 2018; Dhillon et al. 2018; Taran et al. 2019; Liu et al. 2018).


A similar approach can also be employed for securing image forensics detectors. The method proposed in Chen et al. (2019) applies randomization in the feature space by randomly selecting a subset of the features to be used by the subsequent forensic detector. Randomization is based on a secret key, unknown to the attacker, who, then, cannot gain full knowledge of the to-be-attacked system (thus being forced to operate in a gray-box attack scenario). In Chen et al. (2019), the effectiveness of random feature selection (RFS) is proven theoretically, under some simplifying assumptions, and verified experimentally for two detectors based on support vector machines.

The approach proposed in Chen et al. (2019) has been extended to detectors based on deep learning by means of a technique called Random Deep Feature Selection (RDFS) (Barni et al. 2020). In detail, a CNN-based forensic detector is first divided into two parts: the convolutional layers, playing the role of feature extractor, and the fully connected layers, regarded as the final detector. Let N denote the number of features extracted by the convolutional layers. To train a secure detector, a subset of K features is randomly selected according to a secret key. The fully connected layers are then retrained based on the selected K features. To maintain a good classification accuracy, the same model with the same secret key is applied during the training and the testing phases. The convolutional layers are used only for feature extraction and are frozen in the retraining phase to keep the feature space unchanged. Assuming that the attacker has no knowledge of the RDFS strategy, he will target the original CNN-based model during his attack, and the effectiveness of the randomized network will ultimately depend on the transferability of the attack.

The effectiveness of RDFS has been evaluated in Barni et al. (2020). Specifically, the experiments were conducted on the RAISE dataset (Dang-Nguyen et al. 2021). Two different network architectures (BSnet (Bayar and Stamm 2016) and BC+net (Barni et al. 2018)) were utilized to detect three different image manipulations, namely, median filtering, image resizing, and adaptive histogram equalization (CLAHE). To build the original models based on BSnet, for each class, 100,000 patches were used for training, and 3000 and 10,000 patches were used for validation and testing, respectively. As to the models based on BC+net, 500,000 patches per class were used for training, 5000 for validation, and 10,000 for testing. For both networks, the input patch size was set to 64 × 64, and the batch size was set to 32. The Adam optimizer with a learning rate of 10^-4 was used for training. The BSnet models were trained for 40 epochs and the BC+net models for 4 epochs. For BSnet, the classification accuracies of the models are 98.83, 91.30, and 90.45% for median filtering, image resizing, and CLAHE, respectively. The corresponding accuracy values are 99.73, 95.05, and 98.30% for BC+net.

To build the RDFS-based detectors, first, a subset of the original training set with 20,000 patches (per class) was randomly selected. Then, for each model, the FC layers were retrained based on K features randomly selected from the full feature set. The Adam optimizer with a learning rate of 10^-5 was utilized. An early stopping condition was adopted, with a maximum of 50 epochs: training stops when the validation loss changes by less than 10^-3 within 5 epochs.
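The RDFS construction can be illustrated with the following sketch (a simplified re-implementation under stated assumptions, not the code used in Barni et al. (2020)): the convolutional part of the trained detector is frozen and used as a feature extractor, a secret key seeds the random choice of K out of the N features, and a small fully connected head is retrained on the selected features.

import torch
import torch.nn as nn

class RDFSDetector(nn.Module):
    # Random Deep Feature Selection head on top of a frozen feature extractor.
    # feature_extractor is assumed to map an image to an N-dimensional vector
    # (the flattened output of the convolutional part of the original CNN).
    def __init__(self, feature_extractor, n_features, k, secret_key, n_classes=2):
        super().__init__()
        self.features = feature_extractor
        for p in self.features.parameters():
            p.requires_grad = False               # the convolutional part is frozen
        g = torch.Generator().manual_seed(secret_key)
        idx = torch.randperm(n_features, generator=g)[:k]
        self.register_buffer("idx", idx)          # key-based selection of K features
        self.head = nn.Sequential(                # fully connected layers to be retrained
            nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x):
        with torch.no_grad():
            f = self.features(x).flatten(1)       # (batch, N) feature vector
        return self.head(f[:, self.idx])          # classify on the K selected features

# Only self.head is retrained (e.g. with Adam and learning rate 1e-5), so the
# feature subspace seen by the detector depends on the secret key and is
# unknown to the attacker.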


The values of K considered in the experiments are K ∈ {5, 10, 30, 50, 200, 400}, as well as the full feature set, i.e., K = N. For each K, 50 models trained on randomly selected sets of K features were utilized in the experiments. Three popular attacks were applied to generate the adversarial examples, namely, L-BFGS, I-FGSM, and PGD. All the attacks were conducted with Foolbox (Rauber et al. 2017). The detailed parameters of the attacks are given below. For L-BFGS, the parameters were set to their default values. I-FGSM was run with 10 iterations, and the best strength was found by spanning the range [0 : 0.001 : 0.1]. For PGD, a binary search was conducted with initial values ε = 0.3 and α = 0.05 to find the optimal parameters. An exception is that, for PGD, the parameters used for attacking the models for the detection of CLAHE were set to ε = 0.01 and α = 0.025 without using the binary search. In the absence of attacks, the classification performance of the models trained on K features was evaluated on 4000 patches per class, while the defense performance was evaluated on 500 manipulated images.

Tables 16.9 and 16.10 show the classification accuracy for BSnet and BC+net on the three manipulations. The RDFS strategy helps to hinder the transferability of adversarial examples on both networks, at the expense of a slight decrease of the classification accuracy on clean images (in the absence of attacks). Considering BSnet, as K decreases, the detection accuracy on adversarial examples increases. For instance, the accuracy gain reaches 20–30% for K = 30 and 30–50% for K = 10, while the classification accuracy on clean images decreases by only 2–4% in these two cases. A similar behavior can be observed for BC+net in Table 16.10. In some cases, the detection accuracy for K = N is already quite high, which means that the adversarial examples built on the original CNN cannot transfer to the new detector trained on the full feature set. This phenomenon confirms the conclusion that adversarial examples have poor transferability in the context of image forensics.

Further research has been carried out to evaluate the effectiveness of RDFS against increased-confidence adversarial examples (Li et al. 2020). Both the original CNNs and the RDFS detectors utilized in Barni et al. (2020) were tested. Two different attacks were considered, namely, the C&W attack and PGD, to generate adversarial examples with confidence value c. As in Li et al. (2020), different confidence values were chosen for different SNs. The detection accuracies of the RDFS detectors for the median filtering and image resizing tasks in the presence of increased-confidence adversarial examples are shown in Tables 16.11 and 16.12. It can be observed that decreasing K also helps improving the defense performance of the RDFS detectors against increased-confidence adversarial examples. For example, considering the case of BSnet for median filtering detection, for adversarial examples with c = 10 generated with the C&W attack, by decreasing K from N to 10 the detection accuracy is improved by 34.5%. However, as the confidence increases, the attack becomes stronger, and the detection accuracy tends to be very low, even for small values of K.

Table 16.9 Accuracy (%) of the RDFS detector based on BSnet. 'No' corresponds to the absence of attacks

         CLAHE                               Median filtering                    Image resizing
  K      No     L-BFGS  I-FGSM  PGD          No     L-BFGS  I-FGSM  PGD          No     L-BFGS  I-FGSM  PGD
  N      80.5   35.1    93.9    91.8         99.0   39.7    4.3     81.9         98.0   20.3    0.6     31.5
  400    81.3   41.0    94.5    91.8         98.8   42.6    7.5     76.6         97.7   9.1     7.2     20.7
  200    81.5   42.8    94.0    91.6         98.7   44.8    10.8    77.6         97.8   17.4    13.7    31.0
  50     80.7   56.3    91.3    90.2         97.7   53.5    24.6    80.0         97.4   40.1    35.9    52.0
  30     80.1   64.7    90.7    89.5         96.8   56.1    30.8    79.7         97.0   48.8    43.4    58.5
  10     78.0   78.6    89.1    88.0         93.2   67.1    44.5    80.6         95.0   62.0    55.7    68.0
  5      73.0   88.0    89.2    87.4         88.7   73.0    51.0    79.8         91.0   65.6    61.6    69.9


Table 16.10 Accuracy (%) of the RDFS detector based on BC+net. 'No' corresponds to the absence of attacks

         CLAHE                               Median filtering                    Image resizing
  K      No     L-BFGS  I-FGSM  PGD          No     L-BFGS  I-FGSM  PGD          No     L-BFGS  I-FGSM  PGD
  N      98.2   26.2    34.0    33.5         99.7   71.3    13.7    85.2         100    81.2    75.2    89.8
  400    97.1   21.0    83.6    30.1         99.6   75.6    15.6    88.1         99.8   80.0    71.8    89.3
  200    96.9   26.0    83.0    48.5         99.6   76.2    17.0    88.6         99.7   77.9    69.6    88.0
  50     95.1   35.3    80.0    50.6         99.6   76.6    21.9    87.4         96.8   73.0    66.8    85.2
  30     94.3   39.8    76.3    56.7         99.4   79.6    30.0    88.5         92.7   70.7    65.5    81.8
  10     91.1   48.3    68.8    55.6         98.8   79.2    44.3    86.1         78.6   63.0    59.9    71.9
  5      87.4   47.0    63.7    47.2         97.2   77.1    48.3    83.3         74.4   60.7    58.7    67.7



Table 16.11 Detection accuracy (%) of the RDFS detector (based on BSnet) on increased-confidence adversarial examples

            Median filtering                             Image resizing
            C&W                    PGD                   C&W                    PGD
  c         0      10     20       0      10     20      0      5      10       0      5      10
  K = N     62.9   0.2    0.2      84.7   0.2    0.2     32.1   1.0    0.0      30.1   0.7    0.0
  K = 400   53.6   0.2    0.2      75.2   0.2    0.2     18.4   0.7    0.0      20.5   0.7    0.0
  K = 200   56.7   0.4    0.2      74.7   0.4    0.2     26.7   3.9    0.1      29.0   4.0    0.1
  K = 50    62.1   6.7    0.5      80.1   9.2    0.5     48.0   17.5   2.9      52.0   20.2   3.4
  K = 30    64.5   14.9   2.2      79.1   19.4   2.7     57.5   29.9   9.1      59.8   30.6   9.5
  K = 10    72.0   34.7   15.5     81.1   38.2   16.4    66.9   47.1   29.2     69.0   48.8   30.0
  K = 5     75.5   46.2   29.2     80.0   53.7   33.0    64.0   51.6   39.9     65.1   52.7   40.6

Table 16.12 Detection accuracy (%) of the RDFS detector (based on BC+net) on increased-confidence adversarial examples

            Median filtering                             Image resizing
            C&W                    PGD                   C&W                    PGD
  c         0      30     60       0      30     60      0      10     20       0      10     20
  K = N     46.1   1.4    0.0      84.3   0.7    0.0     89.3   33.1   5.4      89.8   23.7   3.3
  K = 400   52.6   3.1    0.0      87.3   2.7    0.0     88.3   35.4   5.4      89.4   28.1   3.2
  K = 200   56.0   4.9    0.4      87.7   4.9    0.0     86.2   35.2   6.4      88.4   31.0   4.5
  K = 50    61.9   10.5   0.7      86.8   13.8   0.5     80.1   37.1   11.0     85.0   37.3   8.7
  K = 30    68.2   18.1   2.5      87.9   24.8   2.5     76.7   39.4   14.4     80.4   39.7   13.5
  K = 10    71.7   36.0   14.8     85.8   42.5   14.2    72.0   46.3   26.2     75.4   47.3   25.2
  K = 5     71.1   39.8   20.9     82.8   46.9   20.1    61.1   38.8   23.4     63.1   39.9   23.0

16.4.4 Multiple-Classifier Architectures

Another possibility to combat counter-forensic attacks is to resort to intrinsically more secure architectures. This approach has been widely considered in the general ML security literature and in ML-based forensics; however, only a few methods resort to such an approach in the case of CNNs.

Among the approaches developed for general ML applications, we mention Biggio et al. (2015), where a multiple-classifier architecture referred to as a one-and-a-half-class classifier is proposed. The one-and-a-half-class architecture consists of a two-class classifier and two one-class classifiers run in parallel, followed by a final one-class classifier. It has been shown that, when properly trained, this architecture can effectively limit the damage caused by an attacker with perfect knowledge. The one-and-a-half-class architecture has also been considered in forensic applications, to improve the security of SVM-based detectors (Barni et al. 2020).


In particular, considering several manipulation detection tasks, the authors of Barni et al. (2020) showed that the one-and-a-half-class architecture can outperform two-class architectures in terms of security against white-box attacks, for a fixed level of robustness. The effectiveness of such an approach for CNN-based classifiers has not been investigated yet.

Another simple possibility to design a more secure classifier consists in building an ensemble of individual algorithms by means of some convenient ensemble strategy, as proposed in Strauss et al. (2017) in the general DL literature. A similar approach has been considered in Fontani et al. (2014) for ML-based forensics, where the authors propose to improve the robustness of the decision by fusing the outputs of several forensic algorithms looking for different traces. The method has been assessed on the representative forensic task of splicing detection in JPEG images. Even in this case, the effectiveness of such an approach for CNN-based forensic applications has still to be investigated.

Other approaches that have been proposed in security-oriented applications for general ML-based classification consist in using multiple classifiers in conjunction with randomization. The randomness can pertain to the selection of the training samples of the individual classifiers, as in Breiman (1996), or it can be associated with the selection of the features used by the classifiers, as in Ho (1998). Another strategy resorting to multiple classification and randomization has been proposed in Biggio et al. (2008) for spam-filtering applications, where the source of randomness is the choice of the weights assigned to the filtering modules of the individual classifiers. Regarding the DL literature, an approach that goes in this direction is model switching (Wang et al. 2020), according to which one of a number of trained submodels is randomly selected at test time, as illustrated by the sketch below. All considered, combining multiple classification with randomization in an attempt to improve the security of DL-based forensic classifiers has not been studied much and is worthy of further investigation.
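As a simple illustration of the combination of multiple classifiers and randomization (a sketch of the general idea, not of a specific published system), one may keep a pool of independently trained detectors and let a key-seeded random draw decide which member answers each query:

import random
import torch.nn as nn

class SwitchingEnsemble(nn.Module):
    # Pool of independently trained detectors: at test time each query is
    # answered by a randomly chosen member, so an attacker optimizing against
    # any single model cannot know which one will actually be used.
    def __init__(self, models, secret_key):
        super().__init__()
        self.models = nn.ModuleList(models)
        self.rng = random.Random(secret_key)      # key-seeded source of randomness

    def forward(self, x):
        model = self.models[self.rng.randrange(len(self.models))]
        return model(x)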

16.5 Final Remarks

Deep learning architectures provide new powerful tools enriching the toolbox available to image forensics designers. At the same time, they introduce new vulnerabilities due to some inherent security weaknesses of DL techniques. In this chapter, we have reviewed the threats posed by adversarial examples and their possible use for counter-forensic purposes. Even if adversarial examples targeting image forensics networks tend to be less transferable than those created for computer vision applications, we have shown that, by properly increasing the strength of the attacks, a transferability level that is sufficient for practical applications can be reached. We have also presented some possible remedies against attacks based on adversarial examples, even if a definitive solution capable of preventing such attacks in the most challenging white-box scenario has not been found yet.


We are convinced that, together with the need to develop image forensics techniques that can be applied outside controlled laboratory settings, robustness against intentional attacks is one of the most pressing needs if we want image forensics to finally be used in real-life applications, contributing to restoring the credibility of digital media.

References Ahmed B, Gulliver TA (2020) Image splicing detection using mask-RCNN. Signal, Image Video Process Akhtar N, Ajmal M (2018) Threat of adversarial attacks on deep learning in computer vision: a survey. IEEE Access 2018(6):14410–14430 Athalye A, Engstrom L, Ilyas A, Kwok K (2018) Synthesizing robust adversarial examples. In: International conference on machine learning, pp 284–293 Barni M, Bondi L, Bonettini N, Bestagini P, Costanzo A, Maggini M, Tondi B, Tubaro S (2017) Aligned and non-aligned double JPEG detection using convolutional neural networks. J Vis Commun Image Represent 49:153–163 Barni M, Nowroozi E, Tondi B (2020) Improving the security of image manipulation detection through one-and-a-half-class multiple classification. Multimed Tools Appl 79(3):2383–2408 Barni M, Chen Z, Tondi B (2016) Adversary-aware, data-driven detection of double jpeg compression: how to make counter-forensics harder. In: 2016 IEEE international workshop on information forensics and security (WIFS). IEEE, pp 1–6 Barni M, Costanzo A, Nowroozi E, Tondi B (2018) CNN-based detection of generic contrast adjustment with jpeg post-processing. In: 2018 IEEE international conference on image processing, ICIP 2018, Athens, Greece, Oct 7–10, 2018. IEEE, pp 3803–3807 Barni M, Huang D, Li B, Tondi B (2019) Adversarial CNN training under jpeg laundering attacks: a game-theoretic approach. In: 2019 IEEE international workshop on information forensics and security (WIFS), pp 1–6 Barni M, Kallas K, Nowroozi E, Tondi B (2019) On the transferability of adversarial examples against CNN-based image forensics. In: IEEE international conference on acoustics, speech and signal processing, ICASSP 2019, Brighton, United Kingdom, 12–17 May 2019. IEEE, pp 8286– 8290 Barni M, Kallas K, Nowroozi E, Tondi B (2020) CNN detection of gan-generated face images based on cross-band co-occurrences analysis. In: IEEE international workshop on information forensics and security Barni M, Nowroozi E, Tondi B (2017) Higher-order, adversary-aware, double jpeg-detection via selected training on attacked samples. In: 2017 25th European signal processing conference (EUSIPCO), pp 281–285 Barni M, Nowroozi E, Tondi B, Zhang B (2020) Effectiveness of random deep feature selection for securing image manipulation detectors against adversarial examples. In: 2020 IEEE international conference on acoustics, speech and signal processing, ICASSP 2020, Barcelona, Spain, 4–8 May 2020. IEEE, pp 2977–2981 Bayar B, Stamm M (2018) Constrained convolutional neural networks: a new approach towards general purpose image manipulation detection. IEEE Trans Inf Forensics Secur 13(11):2691– 2706 Bayar B, Stamm MC (2016) A deep learning approach to universal image manipulation detection using a new convolutional layer. In: Pérez-González F, Bas P, Ignatenko T, Cayre F (eds) Proceedings of the 4th ACM workshop on information hiding and multimedia security, IH&MMSec 2016, Vigo, Galicia, Spain, 20–22 June 2016. ACM, pp 5–10 Bayar B, Stamm MC (2018) Towards open set camera model identification using a deep learning framework. In: 2018 IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 2007–2011


Biggio B, Corona I, He ZM, Chan PP, Giacinto G, Yeung DS, Roli F (2015) One-and-a-half-class multiple classifier systems for secure learning against evasion attacks at test time. In: International workshop on multiple classifier systems. Springer, pp 168–180 Biggio B, Fumera G, Roli F (2008) Adversarial pattern classification using multiple classifiers and randomisation. In: Joint IAPR international workshops on statistical techniques in pattern recognition (SPR) and structural and syntactic pattern recognition (SSPR). Springer, pp 500–509 Bondi L, Baroffio L, Guera D, Bestagini P, Delp EJ, Tubaro S (2017) First steps toward camera identification with convolutional neural networks. IEEE Signal Process Lett 24(3):259–263 Boroumand M, Fridrich J (2017) Scalable processing history detector for jpeg images. Electron Imaging 2017(7):128–137 Breiman L (1996) Bagging predictors. Mach Learn 24(2):123–140 Brendel W, Rauber J, Bethge M (2017) Decision-based adversarial attacks: reliable attacks against black-box machine learning models. arXiv:abs/1712.04248 Carlini N, Wagner DA (2017) Towards evaluating the robustness of neural networks. In: 2017 IEEE symposium on security and privacy, SP 2017, San Jose, CA, USA, 22–26 May 2017. IEEE, Computer Society, pp 39–57 Carrara F, Becarelli R, Caldelli R, Falchi F, Amato G (2018) Adversarial examples detection in features distance spaces. In: Proceedings of the European conference on computer vision (ECCV) workshops Chen PY, Sharma Y, Zhang H, Yi J, Hsieh CJ (2018) Ead: elastic-net attacks to deep neural networks via adversarial examples. In: Proceedings of the AAAI conference on artificial intelligence, vol 32 Chen PY, Zhang H, Sharma Y, Yi J, Hsieh CJ (2017) ZOO: zeroth order optimization based blackbox attacks to deep neural networks without training substitute models. In: Thuraisingham BM, Biggio B, Freeman DM, Miller B, Sinha A (eds) Proceedings of the 10th ACM workshop on artificial intelligence and security, AISec@CCS 2017, Dallas, TX, USA, 3 Nov 2017. ACM, pp 15–26 Chen J, Kang X, Liu Y, Wang ZJ (2015) Median filtering forensics based on convolutional neural networks. IEEE Signal Process Lett 22(11):1849–1853 Nov Chen Z, Tondi B, Li X, Ni R, Zhao Y, Barni M (2019) Secure detection of image manipulation by means of random feature selection. IEEE Trans Inf Forensics Secur 14(9):2454–2469 Chollet F (2017) Xception: deep learning with depthwise separable convolutions. In: 2017 IEEE conference on computer vision and pattern recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017. IEEE Computer Society, pp 1800–1807 Costanzo A, Amerini I, Caldelli R, Barni M (2014) Forensic analysis of sift keypoint removal and injection. IEEE Trans Inf Forensics Secur 9(9):1450–1464 Dang-Nguyen DT, Pasquini C, Conotter V, Boato G (2021) RAISE: a raw images dataset for digital image forensics. In: Proceedings of the 6th ACM multimedia systems conference, MMSys ’15, New York, NY, USA, 2015. ACM, pp 219–224 Dhillon GS, Azizzadenesheli K, Lipton ZC, Bernstein J, Kossaifi J, Khanna A, Anandkumar A (2018) Stochastic activation pruning for robust adversarial defense. In: Conference track proceedings 6th international conference on learning representations, ICLR 2018, Vancouver, BC, Canada, April 30–May 3, 2018. OpenReview.net Dong Y, Liao F, Pang T, Su H, Zhu J, Hu X, Li J (2018) Boosting adversarial attacks with momentum. In: 2018 IEEE conference on computer vision and pattern recognition, CVPR 2018, Salt Lake City, UT, USA, 18–22 June 2018. 
IEEE Computer Society, pp 9185–9193 Dong Y, Pang T, Su H, Zhu J (2019) Evading defenses to transferable adversarial examples by translation-invariant attacks. In: IEEE conference on computer vision and pattern recognition, CVPR 2019, Long Beach, CA, USA, 16–20 June 2019. Computer Vision Foundation/IEEE, pp 4312–4321 Fontani M, Bonchi A, Piva A, Barni M (2014) Countering anti-forensics by means of data fusion. In: Media watermarking, security, and forensics 2014, vol 9028. International Society for Optics and Photonics, p 90280Z


Freire-Obregon D, Narducci F, Barra S, Castrillon-Santana M (2018) Deep learning for source camera identification on mobile devices. Pattern Recognit Lett Goodfellow IJ, Shlens J, Szegedy C (2015) Explaining and harnessing adversarial examples. In: Bengio Y, LeCun Y (eds) Conference track proceedings 3rd international conference on learning representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015 Gragnaniello D, Marra F, Poggi G, Verdoliva L (2018) Analysis of adversarial attacks against CNNbased image forgery detectors. In: 26th European signal processing conference, EUSIPCO 2018, Roma, Italy, 3–7 Sept 2018. IEEE, pp 967–971 Guera D, Wang Y, Bondi L, Bestagini P, Tubaro S, Delp EJ (2017) A counter-forensic method for CNN-based camera model identification. In: IEEE computer vision and pattern recognition workshops, pp 1840–1847 He W, Wei J, Chen X, Carlini N, Song D (2017) Adversarial example defense: ensembles of weak defenses are not strong. In: 11th {USENIX} workshop on offensive technologies ({WOOT} 17) Ho TK (1998) The random subspace method for constructing decision forests. IEEE Trans Pattern Anal Mach Intell 20(8):832–844 Huang G, Liu Z, Van Der Maaten L, Weinberger KQ (2017) Densely connected convolutional networks. In 2017 IEEE conference on computer vision and pattern recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017. IEEE Computer Society, pp 2261–2269 Ilyas A, Engstrom L, Athalye A, Lin J (2018) Black-box adversarial attacks with limited queries and information. In: Dy JG, Krause A (eds) Proceedings of the 35th international conference on machine learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, 10–15 July 2018, vol 80 of Proceedings of machine learning research. PMLR, pp 2142–2151 Kevin Eykholt K, Evtimov I, Fernandes E, Li B, Rahmati A, Xiao C, Prakash A, Kohno T, Song D (2018) Robust physical-world attacks on deep learning visual classification. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1625–1634 Kurakin A, Goodfellow I, Bengio S (2017) Adversarial examples in the physical world. In: Workshop track proceedings 5th international conference on learning representations, ICLR 2017, Toulon, France, 24–26 April 2017. OpenReview.net Li Y, Li L, Wang L, Zhang T, Gong B (2019) NATTACK: learning the distributions of adversarial examples for an improved black-box attack on deep neural networks. In: Chaudhuri K, Salakhutdinov R (eds) Proceedings of the 36th international conference on machine learning, ICML 2019, 9–15 June 2019, Long beach, California, USA, vol 97 of Proceedings of machine learning research. PMLR, pp 3866–3876 Li W, Tondi B, Ni R, Barni M (2020) Increased-confidence adversarial examples for deep learning counter-forensics. In: Del Bimbo A, Cucchiara R, Sclaroff S, Farinella GM, Mei T, Bertini M, Escalante HJ, Vezzani R (eds) Pattern recognition. ICPR international workshops and challenges—virtual event, 10–15 Jan 2021, Proceedings, Part VI, vol 12666 of Lecture notes in computer science. Springer, pp 411–424 Liu X, Cheng M, Zhang H, Hsieh CJ (2018) Towards robust neural networks via random selfensemble. In: Ferrari V, Hebert M, Sminchisescu C, Weiss Y (eds) Computer vision—ECCV 2018—15th European conference, Munich, Germany, 8–14 Sept 2018, Proceedings, Part VII, vol 11211 of Lecture notes in computer science. Springer, pp 381–397 Liu Y, Chen X, Liu C, Song D (2016) Delving into transferable adversarial examples and black-box attacks. 
arXiv:1611.02770 Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A (2018) Towards deep learning models resistant to adversarial attacks. In: Conference track proceedings 6th international conference on learning representations, ICLR 2018, Vancouver, BC, Canada, April 30–May 3, 2018. OpenReview.net Marra F, Gragnaniello D, Verdoliva L (2018) On the vulnerability of deep learning to adversarial attacks for camera model identification. Signal Process Image Commun 65:240–248 Nataraj L, Mohammed TM, Manjunath BS, Chandrasekaran S, Flenner A, Bappy JH, RoyChowdhury AK (2019) Detecting gan generated fake images using co-occurrence matrices. Electron Imaging 2019(5):532–1


Papernot N, McDaniel P, Goodfellow IJ (2016) Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. CoRR. arXiv:abs/1605.07277 Papernot N, McDaniel P, Jha S, Fredrikson M, Celik ZB, Swami A (2016) The limitations of deep learning in adversarial settings. In: IEEE European symposium on security and privacy, EuroS&P 2016, Saarbrücken, Germany, 21–24 Mar 2016. IEEE, pp 372–387 Rauber J, Brendel W, Bethge M (2017) Foolbox v0. 8.0: a python toolbox to benchmark the robustness of machine learning models. arXiv:1707.04131 Rossler A, Cozzolino D, Verdoliva L, Riess C, Thies J, Niessner M (2019) Faceforensics++: learning to detect manipulated facial images. In: IEEE/CVF international conference on computer vision Schöttle P, Schlögl A, Pasquini C, Böhme R (2018) Detecting adversarial examples-a lesson from multimedia security. In: 2018 26th European signal processing conference (EUSIPCO). IEEE, pp 947–951 Sharif M, Bhagavatula S, Bauer L, Reiter MK (2016) Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition. In: ACM conference on computer and communications security, p 1528–1540 Shullani D, Fontani M, Iuliani M, Shaya OA, Piva A (2017) VISION: a video and image dataset for source identification. EURASIP J Inf Secur 1–16 Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. In: Bengio Y, LeCun Y (eds) Conference track proceedings 3rd international conference on learning representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015 Strauss T, Hanselmann M, Junginger A, Ulmer H (2017) Ensemble methods as a defense to adversarial perturbations against deep neural networks. arXiv:1709.03423 Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2014) Intriguing properties of neural networks. In: Bengio Y, LeCun Y (eds) Conference track proceedings 2nd international conference on learning representations, ICLR 2014, Banff, AB, Canada, 14–16 April 14–16 2014 Tao G, Ma S, Liu Y, Zhang X (2018) Attacks meet interpretability: attribute-steered detection of adversarial samples. arXiv:1810.11580 Taran O, Rezaeifar S, Holotyak T, Voloshynovskiy S (2019) Defending against adversarial attacks by randomized diversification. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 11226–11233 Tondi B (2018) Pixel-domain adversarial examples against CNN-based manipulation detectors. Electron Lett Tramèr F, Papernot N, Goodfellow I, Boneh D, McDaniel P (2017) The space of transferable adversarial examples. CoRR. arXiv:abs/1704.03453 Valenzise G, Tagliasacchi M, Tubaro S (2013) Revealing the traces of JPEG compression antiforensics. IEEE Trans Inf Forensics Secur 8(2):335–349 Wang Q, Zhang R (2016) Double JPEG compression forensics based on a convolutional neural network. EURASIP J Inf Secur 1:2016 Wang X, Wang S, Chen PY, Lin X, Chin P (2020) Block switching: a stochastic approach for deep learning security. arXiv:2002.07920 Xie C, Wang J, Zhang Z, Ren Z, Yuille A (2018) Mitigating adversarial effects through randomization. In: Conference track proceedings 6th international conference on learning representations, ICLR 2018, Vancouver, BC, Canada, April 30–May 3, 2018. OpenReview.net Xie C, Zhang Z, Zhou Y, Bai S, Wang J, Ren Z, Yuille AL (2019) Improving transferability of adversarial examples with input diversity. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2730–2739 Xu W, Evans D, Qi Y (2017) Feature squeezing: detecting adversarial examples in deep neural networks. arXiv:1704.01155 Yuan X, He P, Zhu Q, Li X (2019) Adversarial examples: attacks and defenses for deep learning. IEEE Trans Neural Netw Learn Syst 30(9):2805–2824 Zeng H, Qin T, Kang X, Liu L (2014) Countering anti-forensics of median filtering. In: IEEE international conference acoustics, speech and signal processing. IEEE, pp 2704–2708


Zhang B, Tondi B, Barni M (2020) Adversarial examples for replay attacks against CNN-based face recognition with anti-spoofing capability. Comput Vis Image Underst 102988:197–198 Zhao W, Yang P, Ni R, Zhao Y, Wu H (2018) Security consideration for deep learning-based image forensics. CoRR arXiv:abs/1803.11157 Zheng Z, Hong P (2018) Robust detection of adversarial attacks by modeling the intrinsic properties of deep neural networks. In: Proceedings of the 32nd international conference on neural information processing systems, NIPS’18, USA, 2018. Curran Associates Inc., pp 7924–7933

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Chapter 17

Anti-Forensic Attacks Using Generative Adversarial Networks

Matthew C. Stamm and Xinwei Zhao

17.1 Introduction

The rise of deep learning has led to rapid advances in multimedia forensics. Algorithms based on deep neural networks are able to automatically learn forensic traces, detect complex forgeries, and localize falsified content with increasingly greater accuracy. At the same time, deep learning has expanded the capabilities of anti-forensic attackers. New anti-forensic attacks have emerged, including those discussed in Chap. 14 based on adversarial examples, and those based on generative adversarial networks (GANs).

In this chapter, we discuss the emerging threat posed by GAN-based anti-forensic attacks. GANs are a powerful machine learning framework that can be used to create realistic, but completely synthetic, data. Researchers have recently shown that anti-forensic attacks can be built by using GANs to create synthetic forensic traces. While only a small number of GAN-based anti-forensic attacks currently exist, results show that these early attacks are both effective at fooling forensic algorithms and introduce very little distortion into the attacked images. Furthermore, by using the GAN framework to create new anti-forensic attacks, attackers can dramatically reduce the time required to create a new attack.

For simplicity, we will assume in this chapter that GAN-based anti-forensic attacks are launched against images. This also aligns with the current state of research, since all existing attacks are targeted at images. However, the ideas and methodologies presented in this chapter can be generalized to a variety of media modalities, including video and audio. We expect that new GAN-based anti-forensic attacks targeted at these modalities will likely arise in the near future.

This chapter begins with Sect. 17.2, which provides a brief background on GANs.


Section 17.3 provides background on anti-forensic attacks, including how they are traditionally designed, as well as their shortcomings. In Sect. 17.4, we discuss how GANs are used to create anti-forensic attacks. This section gives a high-level overview of the components of an anti-forensic GAN, the differences between attacks based on GANs and those based on adversarial examples, as well as an overview of existing GAN-based anti-forensic attacks. Section 17.5 provides more details about how these attacks are trained, including the different techniques that can be employed depending on the attacker's knowledge of and access to the forensic algorithm under attack. We discuss known shortcomings of anti-forensic GANs and future research directions in Sect. 17.6.

17.2 Background on GANs

Generative adversarial networks (GANs) are a machine learning framework that is used to train generative models (Goodfellow et al. 2014). In their standard form, GANs consist of two components: a generator G and a discriminator D. The generator and the discriminator can be viewed as competitors in a two-player game. The goal of the generator is to learn to produce deceptively realistic synthetic data that mimics the real data, while the discriminator aims to learn to distinguish the synthetic data from the real data. The two parties are trained alternately, and each party attempts to improve its performance to defeat the other until the synthetic data is indistinguishable from the real data.

The generator is a generative model that creates synthetic data x′ by mapping an input z drawn from a latent space to the space of the real data x. Ideally, the generator produces synthetic data x′ with the same distribution as the real data x, such that the two cannot be differentiated. The discriminator is a discriminative model that assigns a scalar to each input datum. The discriminator can output any scalar between 0 and 1, with 1 representing real data and 0 representing synthetic data. During the training phase, the discriminator aims to maximize the probability of assigning the correct label to both real data x and synthetic data x′, while the generator aims to reduce the difference between the discriminator's decision on the synthetic data and 1 (i.e., the value assigned to real data). Training a GAN is equivalent to finding the solution of the min-max problem

    min_G max_D  E_{x∼f_X(x)} [log D(x)] + E_{z∼f_Z(z)} [log(1 − D(G(z)))]          (17.1)

where f_X(x) represents the distribution of the real data, f_Z(z) represents the distribution of the input noise, and E denotes the expected value operator.
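The alternating optimization of Eq. (17.1) can be summarized by the following minimal training-step sketch (illustrative PyTorch code; the generator is assumed to take a latent vector of dimension latent_dim, and the discriminator is assumed to output a probability in (0, 1) for each input):

import torch
import torch.nn.functional as F

def gan_training_step(G, D, opt_G, opt_D, real, latent_dim=100):
    # One alternating update of the discriminator and of the generator.
    batch = real.size(0)
    z = torch.randn(batch, latent_dim)
    fake = G(z)

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0.
    opt_D.zero_grad()
    d_loss = (F.binary_cross_entropy(D(real), torch.ones(batch, 1)) +
              F.binary_cross_entropy(D(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_D.step()

    # Generator step: push D(G(z)) towards 1, i.e. make the synthetic data
    # indistinguishable from real data (non-saturating form of the loss).
    opt_G.zero_grad()
    g_loss = F.binary_cross_entropy(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()

Note that the generator update uses the common non-saturating form of the loss, which pushes D(G(z)) towards 1, consistently with the description given above.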


17.2.1 GANs for Image Synthesis

While GANs can be used to synthesize many types of data, one of their most common uses is to synthesize or modify images. The development of several specific GANs has provided important insights and guidelines for the design of GANs for images. For example, while the GAN framework does not specify the specific form of the generator or discriminator, the development of DCGAN showed that deep convolutional generators and discriminators yield substantial benefits when performing image synthesis (Radford et al. 2015). Additionally, this work suggested constraints on the architectures of the generator and discriminator that can result in stable training. The creation of Conditional GANs (CGANs) showed the benefit of using information such as labels as auxiliary inputs to both the generator and the discriminator (Mirza and Osindero 2014). This can help improve the visual quality and control the appearance of synthetic images generated by a GAN. InfoGAN showed that the latent space can be structured to both select and control the generation of images (Chen et al. 2016). Pix2Pix showed that GANs can learn to translate from one image space to another (Isola et al. 2018), while StackGAN demonstrated that text can be used to guide the generation of images via a series of CGANs (Zhang et al. 2017).

Since their first appearance in the literature, the development of GANs for image synthesis has proceeded at an increasingly rapid pace. Research has been performed to improve the architectures of the generator and discriminator (Zhang et al. 2019; Brock et al. 2018; Karras et al. 2017; Zhu et al. 2017; Karras et al. 2020; Choi et al. 2020), improve the training procedure (Arjovsky et al. 2017; Mao et al. 2017), and build publicly available datasets to aid the research community (Yu et al. 2016; Liu et al. 2015; Karras et al. 2019; Caesar et al. 2018). Because of these efforts, GANs have been used to achieve state-of-the-art performance on various computer vision tasks, such as super-resolution (Ledig et al. 2017), photo inpainting (Pathak et al. 2016), photo blending (Wu et al. 2019), image-to-image translation (Isola et al. 2018; Zhu et al. 2017; Jin et al. 2017), text-to-image translation (Zhang et al. 2017), semantic-image-to-photo translation (Park et al. 2019), face aging (Antipov et al. 2017), generating human faces (Karras et al. 2017; Choi et al. 2018, 2020; Karras et al. 2019, 2020), generating 2-D (Jin et al. 2017; Karras et al. 2017; Brock et al. 2018) and 3-D objects (Wu et al. 2016), video prediction (Vondrick et al. 2016), and many more applications.

17.3 Brief Overview of Relevant Anti-Forensic Attacks

In this section, we provide a brief discussion of what anti-forensic attacks are and how they are designed.


17.3.1 What Are Anti-Forensic Attacks

Anti-forensic attacks are countermeasures that a media forger can use to fool forensic algorithms. They operate by falsifying the traces that forensic algorithms rely upon to make classification or detection decisions. A forensic classifier that is targeted by an anti-forensic attack is referred to as a victim classifier. Just as there are many different forensic algorithms, there is similarly a wide variety of anti-forensic attacks (Stamm et al. 2013; Böhme and Kirchner 2013). Many of them, however, can be broadly grouped into two categories:

• Attacks designed to disguise evidence of manipulation and editing. Attacks have been developed to fool forensic algorithms designed to detect a variety of manipulations such as resampling (Kirchner and Bohme 2008; Gloe et al. 2007), JPEG compression (Stamm and Liu 2011; Stamm et al. 2010; Fan et al. 2013, 2014; Comesana-Alfaro and Pérez-González 2013; Pasquini and Boato 2013; Chu et al. 2015), median filtering (Fontani and Barni 2012; Wu et al. 2013; Dang-Nguyen et al. 2013; Fan et al. 2015), contrast enhancement (Cao et al. 2010), and unsharp masking (Laijie et al. 2013), as well as algorithms that detect forgeries by identifying inconsistencies in lateral chromatic aberration (Mayer and Stamm 2015), identify copy-move forgeries (Amerini et al. 2013; Costanzo et al. 2014), and detect video frame deletion (Stamm et al. 2012a, b; Kang et al. 2016).

• Attacks designed to falsify the source of a multimedia file. Attacks have been developed to falsify demosaicing traces associated with an image's source camera model (Kirchner and Böhme 2009; Chen et al. 2017), as well as PRNU traces that can be linked to a specific device (Lukas et al. 2005; Gloe et al. 2007; Goljan et al. 2010; Barni and Tondi 2013; Dirik et al. 2014; Karaküçük and Dirik 2015).

To understand how anti-forensic attacks operate, it is helpful to first consider a simple model of how forensic algorithms operate. Forensic algorithms extract forensic traces from an image, then associate those traces with a particular forensic class. That class can be associated with an editing operation or manipulation that the image has undergone, the image's source (i.e., its camera model, its source device, its distribution channel, etc.), or some other property that a forensic investigator wishes to ascertain. It is important to note that when performing manipulation detection, even unaltered images contain forensic traces. While they do not contain traces associated with editing, splicing, or falsification, they do contain traces associated with an image being generated by a digital camera and not undergoing subsequent processing. Anti-forensic attacks create synthetic forensic traces within an image that are designed to fool a forensic classifier. This holds true even for attacks designed to remove evidence of editing. These attacks synthesize traces in manipulated images that are associated with unaltered images. Most anti-forensic attacks are targeted, i.e., they are designed to trick the forensic classifier into associating an image with a particular target forensic class. For example, attacks against manipulation detectors typically specify the class of "unaltered" images as the target class. Attacks designed to fool source identification algorithms synthesize traces associated with a target source that the image did not truly originate from.


Additionally, a small number of anti-forensic attacks are untargeted. This means that the attack does not care which class the forensic algorithm associates the image with, so long as it is not the true class. Untargeted attacks are more commonly used to fool source identification algorithms, since an untargeted attack still tricks the forensic algorithm into believing that an image came from an untrue source. They are typically not used against manipulation detectors, since a forensic investigator will still decide that an image is inauthentic even if their algorithm mistakes manipulation A for manipulation B.

17.3.2 Anti-Forensic Attack Objectives and Requirements

When creating an anti-forensic attack, an attacker must consider several design objectives. For an attack to be successful, it must satisfy the following design requirements:

• Requirement 1: Fool the victim forensic algorithm. An anti-forensic attack should cause the victim algorithm to classify an image as belonging to the attacker's target class. If the attack is designed to disguise manipulation or content falsification, then the target class is the class of unaltered images. If the attack is designed to falsify an image's source, then the target class is the desired fake source chosen by the attacker, such as a particular source camera.

• Requirement 2: Introduce no visually perceptible artifacts. An attack should not introduce visually distinct telltale signs that a human can easily identify. If this occurs, a human will quickly dismiss an attacked image as fake even if it fools a forensic algorithm. This is particularly important if the image is to be used as part of a misinformation campaign, since images that are not plausibly real will be quickly flagged.

• Requirement 3: Maintain high visual quality of the attacked image. Similar to Requirement 2, if an attack fools a detector but significantly distorts an image, it is not useful to an information attacker. Some distortion is allowable, since ideally no one other than the attacker will be able to compare the attacked image to its unattacked counterpart. Additionally, this requirement is put in place to ensure that an attack does not undo desired manipulations that an attacker has previously performed on an image. For example, localized color corrections and contrast adjustments may be required to make a falsified image appear visually plausible. If an anti-forensic attack reverses these intentional manipulations, then the attack is no longer successful.

In addition to these requirements, there are several other highly desirable properties for an attack to possess. These include


• Desired Goal 1: Be rapidly deployable in practical scenarios. Ideally, an attack should be able to be launched quickly and efficiently, thus enabling rapid attacks or attacks at scale. It should also be flexible enough to attack images of arbitrary size, not only images of a fixed, predetermined size. Additionally, it should not require prior knowledge of which region of an image will be analyzed, such as a specific image block or block grid (it is typically unrealistic to assume that an investigator will only examine certain fixed image locations).

• Desired Goal 2: Achieve attack transferability. An attack achieves transferability if it is able to fool other victim classifiers that it has not been explicitly designed or trained to fool. This is important because an investigator can typically utilize multiple different forensic algorithms to perform a particular task. For example, several different algorithms exist for identifying an image's source camera model. An attack designed to fool camera model identification algorithm A is maximally effective if it also transfers to fool camera model identification algorithms B and C.

While it is not necessary that an attack satisfy these additional goals, attacks that do are significantly more effective in realistic scenarios.

17.3.3 Traditional Anti-Forensic Attack Design Procedure and Shortcomings

While anti-forensic attacks target different forensic algorithms, at a high level the majority of them operate in a similar manner. First, an attack builds or estimates a model of the forensic trace that it wishes to falsify. This could be associated with the class of unaltered images, in the case that the attack is designed to disguise evidence of manipulation or falsification, or it could be associated with a particular image source. Next, this model is used to guide a technique that synthesizes forensic traces associated with this target class. For example, anti-forensic attacks that falsify an image's source camera model synthesize demosaicing traces or other forensic traces associated with a target camera model. Alternatively, techniques designed to remove evidence of editing such as multiple JPEG compression or median filtering synthesize traces that manipulation detectors associate with unaltered images. Traditionally, creating a new anti-forensic attack has required a human expert to first design an explicit model of the target trace that the attack wishes to falsify. Next, the expert must create an algorithm capable of synthesizing this target trace such that it matches their model. While this approach has led to the creation of several successful anti-forensic attacks, it is likely not scalable enough to keep pace with the rapid development of new forensic algorithms. It is very difficult and time consuming for humans to construct explicit models of forensic traces that are accurate enough for an attack to be successful. Furthermore, the forensic community has widely adopted the use of machine learning and deep learning approaches to learn sophisticated implicit models directly from data. In this case, it may be intractable for human experts to explicitly model a target trace.


Even if a model can be successfully constructed, a new algorithm must be designed to synthesize each trace under attack. Again, this is a challenging and time-consuming process. In order to respond to the rapid advancement of forensic technologies, adversaries would like some automated means of creating new anti-forensic attacks. Tools from deep learning such as convolutional neural networks have allowed forensics researchers to successfully automate how forensic traces are learned. Similarly, intelligent adversaries have begun to look to deep learning for approaches to automate the creation of new attacks. As a result, new attacks based upon GANs have begun to emerge.

17.3.4 Anti-Forensic Attacks on Parametric Forensic Models

With advances in software and editing apps, it is very easy for people to manipulate images however they want. To make an image convincing to human eyes, post-processing operations such as resampling, blending, and denoising are often applied to the falsified images. In recent years, deep-learning-based algorithms have also been used to create novel content. In the hands of an attacker, these techniques can be used for malicious purposes. Previous research shows that manipulations and attacks leave traces that can be modeled or characterized by forensic algorithms. While these traces may be invisible to human eyes, forensic algorithms can use them to detect forged images or distinguish them from real images. Anti-forensic attacks are countermeasures that attempt to fool targeted forensic algorithms. Some anti-forensic attacks aim to remove the forensic traces left by manipulation operations that forensic algorithms analyze. Others synthesize fake traces in order to make a forensic algorithm reach a wrong decision. For example, an anti-forensic attack can remove traces left by resampling so that a forensic algorithm designed to characterize resampling traces classifies the attacked image as "unaltered." An anti-forensic attack can also synthesize fake forensic traces associated with a particular target camera model B into an image, making a camera model identification algorithm believe the image was captured by camera model B when it was actually captured by camera model A. Anti-forensic attacks can also hide forensic traces to obfuscate an image's true origin. Anti-forensic attacks are expected to introduce no visible distortion, since visible distortion often flags an image as "fake" to investigators. However, these attacks typically do not consider countermeasures against themselves, and focus only on fooling the targeted forensic algorithms. Additionally, anti-forensic attacks are often designed to falsify particular forensic traces. A common methodology is typically used to create anti-forensic attacks similar to those described in the previous section. First, a human expert designs a parametric model of the target forensic trace that they wish to falsify during the attack. Next, an algorithm is created to introduce a synthetic trace into the attacked image so that it matches this model. Often this is accomplished by creating a generative noise model that is used to introduce specially designed noise either directly into the image or into some transform of the image.


Many anti-forensic attacks have been developed following this methodology, such as attacks that remove traces left by median filtering, JPEG compression, or resampling, or that synthesize the demosaicing information of a target camera model. This approach to creating attacks has several important drawbacks. First, it is both challenging and time consuming for human experts to create accurate parametric models of forensic traces. In some cases it is extremely difficult or even infeasible to create models accurate enough to fool state-of-the-art detectors. Furthermore, a model of one forensic trace typically cannot be reused to attack a different forensic trace. This creates an important scalability challenge for the attacker.
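As a deliberately simplified illustration of this two-step parametric approach, the sketch below follows the spirit of dither-based JPEG anti-forensics: the target model is the smooth DCT coefficient distribution of an uncompressed image, and the synthesis step adds designed noise to quantized DCT coefficients so that their comb-shaped histogram is smoothed out. The uniform-dither model and the function interface are assumptions made for brevity; published attacks fit a parametric (e.g., Laplacian) coefficient model rather than using uniform noise.

```python
import numpy as np

def dither_quantized_coefficients(dct_coeffs, q_table):
    """Toy parametric anti-forensic step (illustrative only): spread quantized
    DCT coefficients back across their quantization bins so the histogram no
    longer shows the comb artifacts that JPEG-compression detectors look for.
    `dct_coeffs` is assumed to be an array of 8x8 blocks (e.g., shape
    (n_blocks_y, n_blocks_x, 8, 8)) and `q_table` an (8, 8) quantization table
    broadcastable against it."""
    dithered = dct_coeffs.astype(np.float64).copy()
    noise = np.random.uniform(-0.5, 0.5, size=dct_coeffs.shape) * q_table
    # Leave zero-valued coefficients untouched; a real attack would also
    # repopulate them according to the fitted coefficient model.
    nonzero = dct_coeffs != 0
    dithered[nonzero] += noise[nonzero]
    return dithered
```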

17.3.5 Anti-Forensic Attacks on Deep Neural Networks

An even greater challenge arises from the widespread adoption of deep-learning-based forensic algorithms such as convolutional neural networks (CNNs). The approach described above cannot be used to attack forensic CNNs because they do not utilize an explicit feature set or model of a forensic trace. Anti-forensic attacks targeting deep-learning-based forensic algorithms now draw an increasing amount of attention from both academia and industry. While deep learning has achieved state-of-the-art performance on many machine learning tasks, including computer vision and multimedia forensics, researchers have found that non-ideal properties of neural networks can be exploited to cause misclassification of an input image. Several explanations have been posited for the non-ideal properties of neural networks that adversarial examples exploit, such as imperfections caused by the locally linear nature of neural networks (Goodfellow et al. 2014) or a misalignment between the features used by humans and those learned by neural networks when performing the same classification task (Ilyas et al. 2019). These findings facilitated the development of adversarial example generation algorithms. Adversarial example generation algorithms operate by adding adversarial perturbations to an input image. The adversarial perturbations are noise patterns produced by optimizing a certain distance metric, usually an L_p norm (Goodfellow et al. 2014; Madry et al. 2017; Kurakin et al. 2016; Carlini and Wagner 2017). The adversarial perturbations aim to push the image's representation in the latent feature space just across the decision boundary of a desired class (i.e., a targeted attack) or to push the data away from its true class (i.e., an untargeted attack). Forensic researchers have found that adversarial example generation algorithms can be used as anti-forensic attacks on forensic algorithms. In 2017, Güera et al. showed that camera model identification CNNs (Güera et al. 2017) can be fooled by the Fast Gradient Sign Method (FGSM) (Goodfellow et al. 2014) and the Jacobian-based Saliency Map Attack (JSMA) (Papernot et al. 2016).
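For context, the following is a minimal sketch of a one-step targeted FGSM attack in PyTorch. The victim model, input preprocessing, and step size epsilon are placeholder assumptions; this is not the exact configuration used in the works cited above.

```python
import torch
import torch.nn.functional as F

def fgsm_targeted(victim_cnn, image, target_class, epsilon=0.01):
    """One-step targeted FGSM: nudge the image in the direction that
    decreases the loss of the attacker's chosen target class.
    `victim_cnn` is assumed to return class logits for a batch of images."""
    image = image.clone().detach().requires_grad_(True)
    logits = victim_cnn(image)
    targets = torch.full((image.size(0),), target_class, dtype=torch.long)
    loss = F.cross_entropy(logits, targets)
    loss.backward()
    # Step *against* the gradient of the target-class loss (targeted attack);
    # an untargeted attack would instead step along the true-class gradient.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```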


There are several drawbacks to using adversarial example generation algorithms as anti-forensic attacks. To ensure that the CNN is successfully fooled, the adversarial perturbations added to the image may need to be strong, resulting in fuzzy or distorted output images. While researchers can control the visual quality of anti-forensically attacked images to a certain degree using various tricks, maintaining high visual quality remains a challenge for further study. Another drawback is that the adversarial perturbations must be optimized for each particular input image. As a result, producing a large volume of anti-forensically attacked images can be time-consuming. Additionally, the adversarial perturbations are not invariant to detection block alignment.

17.4 Using GANs to Make Anti-Forensic Attacks

Recently, researchers have begun adapting the GAN framework to create a new class of anti-forensic attacks. Though research into these new GAN-based attacks is still in its early stages, recent results have shown that these attacks can overcome many of the problems associated with both traditional anti-forensic attacks and attacks based on adversarial examples. In this section, we will describe at a high level how GAN-based anti-forensic attacks are constructed. Additionally, we will discuss the advantages of utilizing these types of attacks.

17.4.1 How GANs Are Used to Construct Anti-Forensic Attacks

GAN-based anti-forensic attacks are designed to fool forensic algorithms built upon neural networks. At a high level, they operate by using an adversarially trained generator to synthesize a target set of forensic traces in an image. Training occurs first, before the attack is launched. Once the anti-forensic generator has been trained, an image can be attacked simply by passing it through the generator. The generator does not need to be re-trained for each image; however, it must be re-trained if a new target class is selected for the attack. Researchers have shown that GANs can be adapted to both automate and generalize the anti-forensic attack design methodology described earlier in this chapter. The first step in the traditional approach to creating an anti-forensic attack involves the creation of a model of the target forensic trace. While it is difficult or potentially impossible to build an explicit model of the traces learned by forensic neural networks, GANs can exploit the model that has already been learned by the forensic neural network. This is done by adversarially training the generator against a pre-trained version of the victim classifier C. The victim classifier can either take the place of the discriminator in the traditional GAN formulation, or it can be used in conjunction with a traditional discriminator.


Currently, there is not a clear consensus on whether a discriminator is necessary or beneficial when creating GAN-based anti-forensic attacks. Additionally, in some scenarios the attacker may not have direct access to the victim classifier. If this occurs, the victim classifier cannot be directly integrated into the adversarial training process. Instead, alternate means of training the generator must be used. These are discussed in Sect. 17.5. When the generator is adversarially trained against the victim classifier, it learns the model of the target forensic trace implicitly used by the victim classifier. Because of this, the attacker does not need to explicitly design a model of the target trace, as would typically be done in the traditional approach to creating an anti-forensic attack. This dramatically reduces the time required to construct an attack. Furthermore, it increases the accuracy of the trace model used by the attack, since it is directly matched to the victim classifier. Additionally, GAN-based attacks significantly reduce the effort required for the second step of the traditional approach to creating an anti-forensic attack, i.e., creating a technique to synthesize the target trace. This is because the generator automatically learns to synthesize the target trace through adversarial training. While existing GAN-based anti-forensic attacks utilize different generator architectures, the generator remains somewhat generic in that it is a deep convolutional neural network. CNNs can be quickly and easily implemented using deep learning frameworks such as TensorFlow or PyTorch, making it easy for an attacker to experimentally optimize the design of their generator. Currently, there are no well-established guidelines for designing anti-forensic generators. However, research by Zhang et al. has shown that when an upsampling component is utilized in a GAN's generator, it leaves behind "checkerboard" artifacts in the synthesized image (Zhang et al. 2019). As a result, it is likely important to avoid the use of upsampling in the generator so that visually or statistically detectable artifacts that may act as giveaways are not introduced into an attacked image. This is reinforced by the fact that many existing attacks utilize fully convolutional generators that do not include any pooling or upsampling layers. Convolutional generators also possess the useful property that once they are trained, they can be used to attack an image of any size. Fully convolutional CNNs are able to accept images of any size as input and produce output images of the same size. As a result, these generators can be deployed in realistic scenarios in which the size of the image under attack may not be known a priori, or may vary in the case of attacking multiple images. They also provide an advantage in that the synthesized forensic traces are distributed throughout the attacked image and do not depend on a blocking grid. As a result, the attacker does not require advance knowledge of which image region or block an investigator will analyze. This is a distinct advantage over anti-forensic attacks based upon adversarial examples. Research also suggests that a well-constructed generator can be re-trained to synthesize forensic traces different from the ones it was initially designed for. For example, the MISLGAN attack was initially designed to falsify an image's origin by synthesizing source camera model traces linked with a different camera model.


Recent work has also used this generator architecture to construct a GAN-based attack that removes evidence of multiple editing operations.
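To illustrate the fully convolutional design discussed above, the following is a minimal sketch of a residual-style anti-forensic generator with no pooling or upsampling layers, so it can process images of arbitrary size. The layer widths, depth, and residual structure are arbitrary illustrative choices, not the architecture of any specific published attack such as MISLGAN.

```python
import torch
import torch.nn as nn

class AntiForensicGenerator(nn.Module):
    """Toy fully convolutional generator: input and output have the same
    spatial size, so images of any resolution can be attacked once trained."""
    def __init__(self, channels=3, width=64, depth=4):
        super().__init__()
        layers = [nn.Conv2d(channels, width, kernel_size=3, padding=1), nn.ReLU()]
        for _ in range(depth):
            layers += [nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, channels, kernel_size=3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, image):
        # Predict a residual and add it back, so the network only has to
        # learn the (small) trace-altering perturbation.
        return image + self.body(image)

# Works on any input size once trained, e.g.:
# attacked = AntiForensicGenerator()(torch.rand(1, 3, 720, 1280))
```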

17.4.2 Overview of Existing GAN-Based Attacks

Recently, several GAN-based anti-forensic attacks have been developed to falsify information about an image's source and authenticity. Here we provide a brief overview of existing GAN-based attacks. In 2018, Kim et al. proposed a GAN-based attack to remove median filtering traces from an image (Kim et al. 2017). This GAN was built using a generator and a discriminator. The generator was trained to remove the median filtering traces, and the discriminator was trained to distinguish between the restored images and the unaltered images. The authors introduced a loss computed from a pre-trained VGG network to improve the visual quality of attacked images. This attack was able to remove median filtering traces from gray-scale images and fool many traditional forensic algorithms that use hand-designed features, as well as CNN-based detectors. Chen et al. proposed a GAN-based attack in 2018 named MISLGAN to falsify an image's source camera model (Chen et al. 2018). In this work, the authors assumed that the attacker has access to the forensic CNN that the investigator will use to identify an image's source camera model. Their attack attempts to modify an image's forensic traces so that they match those associated with a target camera model, such that the forensic CNN misclassifies the attacked image as having been captured by the target camera model. MISLGAN consists of three major components: a generator, a discriminator, and the pre-trained forensic camera model CNN. The generator was adversarially trained against both the discriminator and the camera model CNN for each target camera model. The trained generator was shown to be able to falsify the camera model of arbitrary color images with a high attack success rate (i.e., the percentage of attacked images classified as belonging to the target camera model). This attack was demonstrated to be effective even when the images under attack were captured by camera models not used during training. Additionally, the attacked images maintain high visual quality, even when compared with the original unaltered images side by side. In 2019, the authors of MISLGAN extended this work to create a black box attack against forensic source camera model identification CNNs (Chen et al. 2019). To accomplish this, the authors integrated a substitute network to approximate the camera model identification CNNs under attack. By training MISLGAN against a substitute network, this attack can successfully fool state-of-the-art camera model CNNs even in black-box scenarios, and it outperformed adversarial example generation algorithms such as FGSM (Goodfellow et al. 2014). This GAN-based attack has been adapted to remove traces left by editing operations and to create transferable anti-forensic attacks (Zhao et al. 2021). Additionally, by improving the generator's architecture, Zhao et al. were able to falsify traces left in synthetic images by the GAN generation process (Zhao and Stamm 2021).


This attack was shown to fool existing synthetic image detectors targeted at detecting both GAN-generated faces and objects. In 2021, Cozzolino et al. proposed a GAN-based attack, SpoC (Cozzolino et al. 2021), to falsify the camera model information of a GAN-synthesized image. SpoC consists of a generator, a discriminator, and a pre-trained embedder. The embedder is used to learn reference vectors, in the latent space, for real images or for images from the target camera models. This attack can successfully fool GAN-synthesized image detectors or camera model identification CNNs, and it outperformed adversarial example generation algorithms such as FGSM (Goodfellow et al. 2014) and PGD (Madry et al. 2017). The authors also showed that the attack is robust to JPEG compression up to a certain level. In 2021, Wu et al. proposed a GAN-based attack, JRA-GAN, to restore JPEG-compressed images (Wu et al. 2021). The authors used a GAN to restore the high-frequency information in the DCT domain and remove the blocking artifacts caused by JPEG compression in order to fool JPEG compression detection algorithms. The authors showed that the attack can successfully restore gray-scale JPEG-compressed images and fool traditional JPEG detection algorithms that use hand-designed features, as well as CNN-based detectors.

17.4.3 Differences Between GAN-Based Anti-Forensic Attacks and Adversarial Examples

While attacks built from GANs and from adversarial examples both use deep learning to create anti-forensic countermeasures, it is important to note that the GAN-based attacks described in this chapter are fundamentally different from adversarial examples. As discussed in Chap. 16, adversarial examples operate by exploiting non-ideal properties of a victim classifier. Several explanations have been considered by the research community as to why adversarial examples exist and are successful. These include "blind spots" caused by non-ideal training or overfitting, a mismatch between the features used by humans and those learned by neural networks when performing classification (Ilyas et al. 2019), and inadvertent effects caused by the locally linear nature of neural networks (Goodfellow et al. 2014). In all of these explanations, adversarial examples cause a classifier to misclassify due to unintended behavior. By contrast, GAN-based anti-forensic attacks do not try to trick the victim classifier into misclassifying an image by exploiting its non-ideal properties. Instead, they replace the forensic traces present in an image with a synthetic set of traces that accurately match the victim classifier's model of the target trace. In this sense, they attempt to synthesize traces that the victim classifier correctly uses to perform tasks such as manipulation detection and source identification. This is possible because forensic traces, such as those linked to editing operations or an image's source camera, are typically not visible to the human eye.


As a result, GANs are able to create new forensic traces in an image without altering its content or visual appearance. Additionally, adversarial example attacks are customized to each image under attack. Often, this is done through an iterative training or search process that must be repeated each time a new image is attacked. It is worth noting that GAN-based techniques for generating adversarial examples have recently been proposed in the computer vision and machine learning literature (Poursaeed et al. 2018; Xiao et al. 2018). These attacks are distinct from the GAN-based attacks discussed in this chapter, specifically because they generate adversarial examples and not synthetic versions of features that should accurately be associated with the target class. For these attacks, the GAN framework is instead adapted to search for an adversarial example that can be made from a particular image, in the same manner that iterative search algorithms are used.

17.4.4 Advantages of GAN-Based Anti-Forensic Attacks

There are several advantages to using GAN-based anti-forensic attacks over both traditional human-designed approaches and those based on adversarial examples. We briefly summarize these below. Open problems and weaknesses of GAN-based anti-forensic attacks are further discussed in Sect. 17.6.

• Rapid and automatic attack creation: As discussed earlier, GANs both automate and generalize the traditional approach to designing anti-forensic attacks. As a result, it is much easier and quicker to create new GAN-based attacks than through traditional human-designed approaches.

• Low visual distortion: In practice, existing GAN-based attacks tend to introduce little-to-no visually detectable distortion. While this condition is not guaranteed, visual distortion can be effectively controlled during training by carefully balancing the weight placed on the generator's loss term controlling visual quality. This is discussed in more detail in Sect. 17.5. By contrast, several adversarial example attacks can leave behind visually detectable noise-like artifacts (Güera et al. 2017).

• Attack only requires training once: After training, a GAN-based attack can be launched on any image as long as the target class remains constant. This differs from attacks based on adversarial examples, which need to create a custom set of modifications for each image, often involving an iterative algorithm. Additionally, no image-specific parameters need to be learned, unlike for several traditional anti-forensic attacks.

• Scalable deployment in realistic scenarios: The attack only requires passing the image through a pre-trained convolutional generator, so it launches very rapidly and can be deployed at scale.

• Ability to attack images of arbitrary size: Because the attack is launched via a convolutional generator, the attack can be applied to images of any size. By contrast, attacks based on adversarial examples only produce attacked images of the same size as the input to the victim CNN, which is typically a small patch and not a full-sized image.


• Does not require advance knowledge of the analysis region: One way to adapt adversarial example attacks to larger images is to break the image into blocks, then attack each block separately. This, however, requires advance knowledge of the particular blocking grid that will be used during forensic analysis. If the blocking grids used by the attacker and the detector do not align, or if the investigator analyzes an image multiple times with different blocking grids, adversarial example attacks are unlikely to be successful.

17.5 Training Anti-Forensic GANs

In this section, we give an overview of how to train an anti-forensic generator. We begin by discussing the different terms of the loss function used during training. We show how each term of the loss function is formulated in the perfect knowledge scenario (i.e., when a white box attack is launched). After this, we show how to modify the loss function to create black box attacks in limited knowledge scenarios, as well as transferable attacks in the zero knowledge scenario.

17.5.1 Overview of Adversarial Training

During adversarial training, the generator G used in a GAN-based attack should be incentivized to produce visually convincing anti-forensically attacked images that can fool the victim forensic classifier C, as well as a discriminator D if one is used. These goals are achieved by properly formulating the generator's loss function L_G. A typical loss function for adversarially training an anti-forensic generator consists of three major loss terms: the classification loss L_C, the adversarial loss L_A (if a discriminator is used), and the perceptual loss L_P,

$$ L_G = \alpha L_P + \beta L_C + \gamma L_A, \qquad (17.2) $$

where α, β, and γ are weights that balance the performance of the attack against the visual quality of the attacked images. Like all anti-forensic attacks, the generator's primary goal is to produce output images that fool a particular forensic algorithm. This is done by introducing a term into the generator's loss function that we describe as the forensic classification loss, or classification loss for short. The classification loss is the key element that guides the generator to learn the forensic traces of the target class. Depending on the attacker's level of access to and knowledge of the forensic classifier, particular strategies may need to be adopted during training; we will elaborate on this later in this section.


Typically, the classification loss is defined as 0 if the output of the generator is classified by the forensic classifier as belonging to the target class t, and 1 if the output is classified as any other class. If the generator is trained to fool a discriminator as well, then the adversarial loss provided by the feedback of the discriminator is incorporated into the total loss function. Typically, the adversarial loss is 0 if the generated anti-forensically attacked image fools the discriminator and 1 if it does not. At the same time, the anti-forensic generator should introduce minimal distortion into the attacked image. Some amount of distortion is acceptable, since in an anti-forensic setting the unattacked image will not be presented to the investigator. However, these distortions should not undo or significantly alter any modifications that were introduced by the attacker before the image is passed through the anti-forensic generator. To control the visual quality of anti-forensically attacked images, the perceptual loss measures the pixel-wise difference between the image under attack and the anti-forensically attacked image produced by the generator. Minimizing the perceptual loss during the adversarial training process helps to ensure that attacked images produced by the generator contain minimal, visually imperceptible distortions. When constructing an anti-forensic attack, it is typically advantageous to exploit as much knowledge of the algorithm under attack as possible. This holds especially true for GAN-based attacks, which directly integrate a forensic classifier into training the attack. Since gradient information from a victim forensic classifier is needed to compute the classification loss, different strategies must be adopted to train the anti-forensic generator depending on the attacker's knowledge about the forensic classifier under attack. As a result, it is necessary to provide more detail about the different knowledge levels that an attacker may have of the victim classifier before we can completely formulate the loss function. We note that the knowledge level typically only influences the classification loss. The perceptual loss and the discriminator loss remain unchanged for all knowledge levels.

17.5.2 Knowledge Levels of the Victim Classifier

As mentioned previously, we refer to the forensic classifier under attack as the victim classifier C. Depending on the attacker's knowledge level of the victim classifier, it is common to categorize the knowledge scenarios into the perfect knowledge (i.e., white box) scenario, the limited knowledge (i.e., black box) scenario, and the zero knowledge scenario. In the perfect knowledge scenario, the attacker has full access to the victim classifier or an exact copy of it. This is an ideal situation for the attacker. The attacker can launch a white box attack in which they train directly against the victim classifier that they wish to fool. In many other situations, however, the attacker has only partial knowledge or even zero knowledge of the victim classifier. Partial knowledge scenarios may include a variety of specific real-life settings. A common aspect is that the attacker does not have full access to or control over the victim classifier.


However, the attacker is allowed to probe the victim classifier as a black box. For example, they can query the victim classifier with an input and then observe the value of its output. Because of this, the partial knowledge scenario is also referred to as the black box scenario. This scenario is a more challenging, but potentially more realistic, situation for the attacker. For example, a piece of forensic software or an online service may be encrypted to reduce the risk of attack. As a result, its users would only be able to provide input images and observe the results that the system outputs. An even more challenging situation arises when the attacker has zero knowledge of the victim classifier. Specifically, the attacker not only has no access to the victim classifier, but is also not allowed to observe the victim classifier by any means. This is also a realistic situation for the attacker. For example, an investigator may develop private forensic algorithms and keep this information classified for security purposes. In this case, we assume that the attacker knows what the victim classifier's goal is (i.e., manipulation detection, source camera model identification, etc.). A broader concept of zero knowledge may include the attacker not knowing whether a victim classifier exists, or what the specific goals of the victim classifier are (i.e., the attacker knows nothing about how an image will be analyzed). Currently, however, this is beyond the scope of existing anti-forensics research. In both the black box and the zero knowledge scenarios, the attacker cannot use the victim classifier C to directly train the attack. The attacker needs to modify the classification loss in (17.2) such that a new classification loss is provided by other classifiers that can be accessed during attack training. We note that the perceptual loss and the adversarial loss usually remain the same for all knowledge scenarios, or may need only trivial adjustment in specific cases. The main change is the formulation of the classification loss.

17.5.3 White Box Attacks

We start by discussing the formulation of each loss term in the perfect knowledge scenario, i.e., when creating a white box attack. Each term in the generator's loss is defined below. The perceptual loss and the adversarial loss will remain the same for all subsequent knowledge scenarios.

• Perceptual Loss: The perceptual loss L_P can be formulated in many different ways. The mean squared error (L_2 loss) or mean absolute difference (L_1 loss) between an image before and after the attack are the most commonly used formulations, i.e.,

$$ L_P = \frac{1}{N} \| I - G(I) \|_p, \qquad (17.3) $$

where N is the number of pixels in the reference image I and the anti-forensically attacked image G(I), \|\cdot\|_p denotes the p-norm, and p equals 1 or 2. Empirically, the L_1 loss usually yields better visual quality for GAN-based anti-forensic attacks.


This is potentially because the L_1 loss penalizes small pixel differences comparatively more than the L_2 loss, which puts greater emphasis on penalizing large pixel differences.

• Adversarial Loss: The adversarial loss L_A is used to ensure that the anti-forensically attacked image can fool the discriminator D. Ideally, the discriminator's output for an anti-forensically attacked image is 1 when the attacked image fools the discriminator, and 0 otherwise. Therefore, the adversarial loss is typically formulated as the sigmoid cross-entropy between the discriminator's output for the generated anti-forensically attacked image and 1,

$$ L_A = \log(1 - D(G(I))). \qquad (17.4) $$

As mentioned previously, a separate discriminator is not always used when creating a GAN-based anti-forensic attack. If this is the case, then the adversarial loss term can be discarded.

• Classification Loss: In the perfect knowledge scenario, since the victim classifier C is fully accessible to the attacker, the victim classifier can be directly used to calculate the forensic classification loss L_C. Typically this is done by computing the softmax cross-entropy between the classifier's output and the target class t, i.e.,

$$ L_C = -\log\big(C(G(I))_t\big), \qquad (17.5) $$

where C(G(I))_t is the softmax output of the victim classifier at location t.
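Putting the three loss terms above together, the following sketch shows one white-box training step for the anti-forensic generator in PyTorch. The generator G, discriminator D, victim classifier C, optimizer, and weight values are assumed placeholders; in particular, C is assumed to return class logits and D a probability in [0, 1], and the specific weights are not a recommendation.

```python
import torch
import torch.nn.functional as F

def generator_train_step(G, D, C, opt_G, images, target_class,
                         alpha=1.0, beta=0.1, gamma=0.01):
    """One adversarial training step for a white-box GAN-based attack,
    following L_G = alpha*L_P + beta*L_C + gamma*L_A of Eq. (17.2)."""
    attacked = G(images)

    # Perceptual loss (Eq. 17.3): L1 difference between input and attacked image.
    l_p = F.l1_loss(attacked, images)

    # Classification loss (Eq. 17.5): cross-entropy toward the target class,
    # computed on the (white-box) victim classifier's logits.
    targets = torch.full((images.size(0),), target_class, dtype=torch.long)
    l_c = F.cross_entropy(C(attacked), targets)

    # Adversarial loss (Eq. 17.4): encourage the discriminator to label the
    # attacked images as "real" (label 1).
    l_a = F.binary_cross_entropy(D(attacked), torch.ones(images.size(0), 1))

    loss = alpha * l_p + beta * l_c + gamma * l_a
    opt_G.zero_grad(); loss.backward(); opt_G.step()
    return loss.item()
```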

17.5.4 Black Box Scenario

The perfect knowledge scenario is often used to evaluate the baseline performance of an anti-forensic attack. However, it is frequently unrealistic to assume that an attacker has full access to the victim classifier. More commonly, the victim classifier is presented to the attacker as a black box. Specifically, the attacker does not have full control over the victim classifier, nor do they have enough knowledge of its architecture to construct a perfect replica. As a result, the victim classifier cannot be used to produce the classification loss as formulated in the perfect knowledge scenario. The victim classifier can, however, be probed by the attacker as a black box, and its outputs can be observed. This is done by providing it input images, then recording how the victim classifier classifies those images. The attacker can modify the classification loss to exploit this information. One approach is to build a substitute network C_s to mimic the behavior of the victim classifier C. The substitute network is a forensic classifier built and fully controlled by the attacker. Ideally, if the substitute network is trained properly, it perfectly mimics the decisions made by the victim classifier.


The substitute network can then be used to adversarially train the generator in place of the victim classifier by reformulating the classification loss as

$$ L_C = -\log\big(C_s(G(I))_t\big), \qquad (17.6) $$

where C_s(G(I))_t is the softmax output of the substitute network C_s at location t. When training a substitute network, it is important that it is trained to reproduce the forensic decisions produced by the victim classifier even if those decisions are incorrect. By doing this, the substitute network can approximate the latent space in which the victim classifier encodes forensic traces. For an attack to be successful, it is important to match this space as accurately as possible. Additionally, there are no strict rules guiding the design of the substitute network's architecture. Research suggests that as long as the substitute network is deep and expressive enough to reproduce the victim classifier's output, a successful black box attack can be launched. For example, Chen et al. demonstrated that successful black box attacks against source camera model identification algorithms can be trained using different substitute networks built primarily from residual blocks and dense blocks (Chen et al. 2019).
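A minimal sketch of the substitute-network idea is given below. The probe set, training schedule, and the `query_victim` interface (modeled simply as a function that returns the black box's hard label decisions) are assumptions made for illustration, not the exact procedure of the works cited above.

```python
import torch
import torch.nn.functional as F

def train_substitute(substitute, optimizer, query_victim, probe_images, epochs=10):
    """Fit a substitute classifier to mimic a black-box victim's decisions.
    `query_victim` is the only access to the victim: it maps a batch of images
    to predicted class labels, which are copied even when they are wrong."""
    with torch.no_grad():
        pseudo_labels = query_victim(probe_images)   # labels observed from the black box
    for _ in range(epochs):
        logits = substitute(probe_images)
        loss = F.cross_entropy(logits, pseudo_labels)
        optimizer.zero_grad(); loss.backward(); optimizer.step()
    return substitute

# The trained substitute then replaces C in the classification loss (Eq. 17.6).
```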

17.5.5 Zero Knowledge

In the zero knowledge scenario, the attacker has no access to the victim classifier C. Furthermore, the attacker cannot observe or probe the victim classifier as in the black box scenario. As a result, the substitute network approach described for the black box scenario does not translate to the zero knowledge scenario. In this circumstance, the synthetic forensic traces created by the attack must be transferable enough to fool the victim classifier. Creating transferable attacks is challenging and remains an open research area. One recently proposed method of achieving attack transferability is to adversarially train the anti-forensic generator against an ensemble of forensic classifiers. Each forensic classifier C_m in the ensemble is built and pre-trained for a forensic classification task by the attacker and acts as a surrogate for the victim classifier. For example, if the attacker's goal is to fool a manipulation detection CNN, then each surrogate classifier in the ensemble should be trained to perform manipulation detection as well. These surrogate classifiers, however, should be as diverse as possible. Diversity can be achieved by varying the architecture of the surrogate classifiers, the classes that they distinguish between, or other factors. Together, these surrogate classifiers can be used to replace the victim classifier and compute a single classification loss in a similar fashion as in the white box scenario. The final classification loss is formulated as a weighted sum of the individual classification losses pertaining to each surrogate classifier in the ensemble, such that

$$ L_C = \sum_{m=1}^{M} \beta_m L_{C_m}, \qquad (17.7) $$

where M is the number of surrogate classifiers in the ensemble, L_{C_m} is the individual classification loss of the mth surrogate classifier, and β_m is the weight of the mth individual classification loss. While there is no strict mathematical justification for why this approach achieves transferability, the intuition is that each surrogate classifier in the ensemble learns to partition the forensic feature space into separate regions for the target and other classes. By defining the classification loss in this fashion, the generator is trained to synthesize forensic features that lie in the intersection of these regions. If a diverse set of classifiers is used to form the ensemble, this intersection will likely lie inside the decision region that other classifiers (such as the victim classifier) associate with the target class.
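The weighted ensemble loss of Eq. (17.7) can be computed as in the sketch below. The surrogate classifiers and weights are placeholders (each surrogate is assumed to return class logits), and using equal weights is an assumption rather than a recommendation.

```python
import torch
import torch.nn.functional as F

def ensemble_classification_loss(surrogates, weights, attacked_images, target_class):
    """Classification loss against an ensemble of surrogate classifiers (Eq. 17.7):
    a weighted sum of per-surrogate cross-entropy losses toward the target class."""
    targets = torch.full((attacked_images.size(0),), target_class, dtype=torch.long)
    loss = 0.0
    for surrogate, weight in zip(surrogates, weights):
        loss = loss + weight * F.cross_entropy(surrogate(attacked_images), targets)
    return loss
```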

17.6 Known Problems with GAN-Based Attacks & Future Directions

While GAN-based anti-forensic attacks pose a serious threat to forensic algorithms, this research is still in its very early stages. There are several known shortcomings of these new attacks, as well as open research questions. One important issue facing GAN-based anti-forensic attacks is transferability. While these attacks are strongest in the white box scenario where the attacker has full access to the victim classifier, this scenario is the least realistic. More likely, the attacker knows only the general goal of the victim classifier (i.e., manipulation detection, source identification, etc.) and possibly the set of classes this classifier is trained to differentiate between. This corresponds to the zero knowledge scenario described in Sect. 17.5.5. In this case, the attacker must rely entirely on attack transferability to launch a successful attack. Recent research has shown, however, that achieving attack transferability against forensic classifiers is a difficult task (Barni et al. 2019; Zhao and Stamm 2020). For example, small implementation differences between two forensic CNNs with the same architecture can prevent a GAN-based white box attack trained against one CNN from transferring to the other, nearly identical, CNN (Zhao and Stamm 2020). These differences can include the data used to train each CNN or how each CNN's classes are defined, i.e., distinguishing unmanipulated versus manipulated (binary) or unmanipulated versus each possible manipulation (multi-class). For GAN-based attacks to work in realistic scenarios, new techniques must be created for achieving transferable attacks. While the ensemble-based training strategy discussed in Sect. 17.5.5 is a new way to create transferable attacks, there is still much work to be done. Additionally, it is possible that there are theoretical limits on attack transferability.


As of yet, these limits are unknown, but future research on this topic may help reveal them. Another important topic that research has not yet addressed is the range of forensic algorithms that GAN-based attacks are able to attack. Currently, anti-forensic attacks based on both GANs and adversarial examples are targeted against forensic classifiers. However, many forensic problems are more sophisticated than simple classification. For example, state-of-the-art techniques to detect locally falsified or spliced content use sophisticated approaches built upon Siamese networks, not simple classifiers (Huh et al. 2018; Mayer and Stamm 2020). Siamese networks are able to compare forensic traces in two local regions of an image and produce a measure of how similar or different these traces are, rather than simply providing a class decision. Similarly, state-of-the-art algorithms built to localize fake or manipulated content also utilize Siamese networks during training or localization (Huh et al. 2018; Mayer and Stamm 2020; Cozzolino and Verdoliva 2018). Attacking these forensic algorithms is much more challenging than attacking comparatively simple CNN-based classifiers. It remains to be seen if and how GAN-based anti-forensic attacks can be constructed to fool these algorithms. At the other end of the spectrum, little-to-no work currently exists aimed specifically at detecting or defending against GAN-based anti-forensic attacks. Past research has shown that traditional anti-forensic attacks often leave behind their own forensically detectable traces (Stamm et al. 2013; Piva 2013; Milani et al. 2012; Barni et al. 2018). Currently, it is unknown whether these new GAN-based attacks leave behind their own traces, or what these traces may be. Research into adversarial robustness may potentially provide some protection against GAN-based anti-forensic attacks (Wang et al. 2020; Hinton et al. 2015). However, because GAN-based attacks operate differently than adversarial examples, it is unclear if these techniques will be successful. New techniques may need to be constructed to allow forensic algorithms to operate correctly in the presence of a GAN-based anti-forensic attack. Clearly, much more research is needed to provide protection against these emerging threats.

Acknowledgements Research was sponsored by the Army Research Office and was accomplished under Cooperative Agreement Number W911NF-20-2-0111, by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under agreement number HR001120C0126, and by the National Science Foundation under Grant No. 1553610. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA and AFRL, the Army Research Office, the National Science Foundation, or the U.S. Government.


References Amerini I, Barni M, Caldelli R, Costanzo A (2013) Counter-forensics of sift-based copy-move detection by means of keypoint classification. EURASIP J Image Video Process 1:1–17 Antipov G, Baccouche M, Dugelay JL (2017) Face aging with conditional generative adversarial networks. In: 2017 IEEE international conference on image processing (ICIP), pp 2089–2093 Arjovsky M, Chintala S, Bottou L (2017) Wasserstein GAN Barni M, Kallas K, Nowroozi E, Tondi B (2019) On the transferability of adversarial examples against CNN-based image forensics. In: ICASSP 2019-2019 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, pp 8286–8290 Barni M, Stamm MC, Tondi B (2018) Adversarial multimedia forensics: Overview and challenges ahead. In: 2018 26th European signal processing conference (EUSIPCO). IEEE, pp 962–966 Barni M, Tondi B (2013) The source identification game: an information-theoretic perspective. IEEE Trans Inf Forensics Secur 8(3):450–463 Böhme R, Kirchner M (2013) Counter-forensics: attacking image forensics. In: Sencar HT, Memon N (eds) Digital image forensics: there is more to a picture than meets the eye. Springer, pp 327–366 Brock A, Donahue J, Simonyan K (2018) Large scale GAN training for high fidelity natural image synthesis. arXiv:1809.11096 Caesar H, Uijlings J, Ferrari V (2018) Coco-stuff: thing and stuff classes in context. In: 2018 IEEE conference on computer vision and pattern recognition (CVPR). IEEE Cao G, Zhao Y, Ni R, Tian H (2010) Anti-forensics of contrast enhancement in digital images. In: Proceedings of the 12th ACM workshop on multimedia and security, pp 25–34 Carlini N, Wagner D (2017) Towards evaluating the robustness of neural networks. In: 2017 IEEE symposium on security and privacy, pp 39–57 Chen X, Duan Y, Houthooft R, Schulman J, Sutskever I, Abbeel P (2016) Interpretable representation learning by information maximizing generative adversarial nets, Infogan Chen C, Zhao X, Stamm MC (2017) Detecting anti-forensic attacks on demosaicing-based camera model identification. In: 2017 IEEE international conference on image processing (ICIP), pp 1512–1516 Chen C, Zhao X, Stamm MC (2018) Mislgan: an anti-forensic camera model falsification framework using a generative adversarial network. In: 2018 25th IEEE international conference on image processing (ICIP). IEEE, pp 535–539 Chen C, Zhao X, Stamm MC (2019) Generative adversarial attacks against deep-learning-based camera model identification. IEEE Trans Inf Forensics Secur 1 Chen C, Zhao X, Stamm MC (2019) Generative adversarial attacks against deep-learning-based camera model identification. IEEE Trans Inf Forensics Secur 1 Choi Y, Choi M, Kim M, Ha JW, Kim S, Choo J (2018) Stargan: unified generative adversarial networks for multi-domain image-to-image translation. In: Proceedings of the IEEE conference on computer vision and pattern recognition Choi Y, Uh Y, Yoo J, Ha JW (2020) Stargan v2: diverse image synthesis for multiple domains. In: Proceedings of the IEEE conference on computer vision and pattern recognition Chu X, Stamm MC, Chen Y, Liu KR (2015) On antiforensic concealability with rate-distortion tradeoff. IEEE Trans Image Process 24(3):1087–1100 Comesana-Alfaro P, Pérez-González F (2013) Optimal counterforensics for histogram-based forensics. In: 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, pp 3048–3052 Costanzo A, Amerini I, Caldelli R, Barni M (2014) Forensic analysis of sift keypoint removal and injection. 
IEEE Trans Inf Forensics Secur 9(9):1450–1464 Cozzolino D, Thies J, Rossler A, Nießner M, Verdoliva L (2021) Spoofing camera fingerprints, Spoc Cozzolino D, Verdoliva L (2018) Noiseprint: a CNN-based camera model fingerprint. arXiv:1808.08396



Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.