A Media Epigraphy of Video Compression
Reading Traces of Decay
Marek Jancovic
A Media Epigraphy of Video Compression

"This is the book I've been waiting to read! Jancovic's exciting method of media epigraphy draws, out of a still image, deep histories of mathematics, technologies, power, embodiment, and energy. In just one of his many stimulating discoveries, it is very moving to find that those annoying compression artifacts arise from a discrepancy between calculus' assumption of infinity (woven into the DCT or discrete cosine transform that undergirds almost every image) and the physical finitude of signals. An ontological struggle lodges in our squinting eyes."
—Dr. Laura U. Marks, Grant Strate University Professor, School for the Contemporary Arts at Simon Fraser University, Vancouver

"With A Media Epigraphy of Video Compression, Marek Jancovic combines key lessons offered by media archaeology, science/technology studies, and forensics and he pushes all three of these fields forward with a new approach he calls 'media epigraphy.' Jancovic defines this new approach as "the study of media inscriptions as traces." Through his deep-seeing analyses of media inscriptions such as compressions, format changes, and standards, he reveals not only how these inscriptions are deeply material but how they have deeply material effects on the physical world, from environments to human bodies. This is a must-read book for anyone looking for a model of how to successfully undertake a detailed, nuanced, and layered materialist study of even the most seemingly immaterial process."
—Dr. Lori Emerson, Associate Professor of English and Director of the Intermedia Arts, Writing, and Performance Program at University of Colorado at Boulder, and Founding Director of the Media Archaeology Lab

"Marek Jancovic's erudite tracing of that liminal threshold where visuality is just about to blur and to glitch is a magnificent take on the cultural politics of perception. As a media garbologist interested in waste and remains, Jancovic shows that compression is much more than making smaller. Cultural techniques of folding and trimming link media periods from paper and books to signals and electromagnetic waves."
—Dr. Jussi Parikka, Professor in Digital Aesthetics and Culture at Aarhus University, and Winchester School of Art (University of Southampton)
Marek Jancovic
A Media Epigraphy of Video Compression
Reading Traces of Decay
Marek Jancovic Faculty of Humanities Vrije Universiteit Amsterdam Amsterdam, The Netherlands
ISBN 978-3-031-33214-2 ISBN 978-3-031-33215-9 (eBook) https://doi.org/10.1007/978-3-031-33215-9 © The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Cover illustration: Quantization in Motion / Ted Davis This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Acknowledgements
Conversations with many friends and colleagues have folded, molded and formatted this book over a period of six years, give or take. I owe my deepest gratitude to Alexandra Schneider for her mentorship, guidance and encouragement throughout the entire process, her attentive and considerate feedback, her intellectual generosity and her phenomenal suggestions. I continue to be amazed by Alexandra's uncanny ability to have both an eagle eye for detail and to always see the bigger picture. I became a scholar because of her and thanks to her. Wanda Strauven's comments have also been instrumental in finalizing this book and helped me spell out many of the points I was struggling to articulate. I was blessed to have a reader as sharp and meticulous as her. Marc Siegel and Markus Stauff have provided invaluable input in the final stages. I am also immensely grateful to all of my wonderful colleagues at the Institute of Film, Theater, Media and Cultural Studies of the Johannes Gutenberg University of Mainz for the many engaging and thought-provoking conversations over the years. Imme Klages, Claudia Mehlinger, Carlo Thielmann, Nicole Braida, Kristina Köhler, Jakob Larisch, Michael Brodski—it was a privilege to work and teach with you, and it was also a lot of fun. A big thank you also to my students in Mainz, who have helped me explore and develop many of the early ideas for this book. Most of this project was conducted while I was an associate member of the Configurations of Film graduate research program in Frankfurt, Germany, then and now under Vinzenz Hediger's nonpareil direction. Geographical distance did not permit me to be a part of this marvelous collective as much as I would have liked to, but its members made me feel
welcome every single time regardless. I truly wish I could have experienced more of the collective's spirit, but I have benefited from its academic vibrancy to the fullest. I thank Eef Masson, Giovanna Fossati and the Moving Images: Preservation, Curation, Exhibition research group at the Amsterdam School for Cultural Analysis and EYE Film Institute Netherlands, as well as the ASCA family at large, for frequently providing me with exciting new impulses and motivation to continue my research. Many people have generously acted as intellectual sparring partners at numerous events where I tested out ideas and presented bits and pieces of this book. Among these, I would like to expressly thank fellow participants at the 2016 NECS Graduate Workshop in Potsdam, the conference Vom Medium zum Format at the University of Bochum, the Re/Dissolution workshop at the Academy of Fine Arts in Munich, the Aggressive Image conference at Yale University, the Dissecting Violence conference at Amsterdam School for Cultural Analysis, the 2019 MAGiS International Film Studies Spring School at the University of Udine, and the Technically Yours conference at National Taiwan University, and their respective organizers Sophia Satchell-Baeza, Marta Małgorzata Wąsik and Anna Luise Kiss; Oliver Fahle and Elisa Linseisen; Sebastian Althoff and—once more—Elisa Linseisen; Ido Lewit, Max Fulton and Jason Douglass; Peyman Amiri, Natasha Basu and Bernardo Caycedo; Diego Cavallotti, Simone Venturini and Andrea Mariani, and Chun-yen Chen and Hsien-hao Liao. Many of the conversations I had and the films I saw at these events ended up as argumentative threads and examples in this book. My thanks to Manolis Tsipos for bringing me into contact with the DVD of Olympia, that inscrutable thing that instigated most of the questions at the heart of this book; to Ted Davis, for generously allowing me to use an image from his work on the cover; to Todd Wiener from UCLA Film & Television archives and the employees of the library and archive of the German Technology Museum in Berlin for their support and assistance; to Ailton D'Silva for introducing me to the writings of Sara Ahmed; to Judith Keilbach for her advice, guidance and friendship; to my new colleagues Ginette Verstraete, Sebastian Scholz and Ivo Blom for making me feel at home at the Vrije Universiteit Amsterdam in the most impossible circumstances of a global pandemic. Conversations with Rosa Menkman, Maral Mohsenin, Hannah Bosma, Melle Kromhout and Axel Volmar have helped me think through several of my central points.
For their support throughout the long and equal parts grueling and gratifying process of writing, I thank my dear parents, Jarmila and Ján, and my family and friends. Without their help, I surely would have succumbed to the Kafkaesque bureaucracy that living and working in two different countries entailed. Marije Bartholomeus, Roy Marcks, Fadi Hindash, Serhat Özçelik and Esteban Ramírez Hincapié, thank you for keeping me on track when it mattered and for distracting me when it was necessary. Takuma Fujii, Puya Sabet, Xiaodan Sommer-Zhang, Yuuka Yamaoka and Adam Greguš, thank you for all those years—now decades—of friendship. Writing this would also have been impossible without the calming purrs of Miley, my soft, adorable companion. And lastly, thank you, Micha. For the music. This book was finalized with a finishing grant from the German Academic Exchange Service, the German Federal Foreign Office and the Stipendienstiftung Rheinland-Pfalz, made possible by the University of Mainz. Fragments of the text have previously appeared in print in different iterations. I have explored some of the ideas now contained in “Media Epigraphy” in “Fold, Format, Fault: On Reformatting and Loss,” in Format Matters: Standards, Practices, and Politics in Media Cultures, eds. Marek Jancovic, Axel Volmar, and Alexandra Schneider (Lüneburg: meson press, 2020). The themes of violence and technology developed in “Viewer Discretion is Advised” have appeared in a condensed version in “When a GIF Becomes a Weapon: The Latent Violence of Technological Standards and Media Infrastructure,” in The Palgrave Handbook of Violence in Film and Media, ed. Steve Choe (New York: Palgrave Macmillan, 2022). A tiny portion of “+Et cetera in infinitum” has been published in “Streaming Against the Environment: Digital Infrastructures, Video Compression, and the Environmental Footprint of Video Streaming,” co-authored with Judith Keilbach, in Situating Data: Inquiries in Algorithmic Culture, eds. Karin van Es and Nanna Verhoeff (Amsterdam: Amsterdam University Press, 2023).
Contents
1 Introduction: Looking at Olympia 1
2 Media Epigraphy: Format, Trace, and Failure 27
3 Interlacing: The First Video Compression Method 77
4 +Et cetera in Infinitum: Harmonic Analysis and Two Centuries of Video Compression 121
5 Viewer Discretion is Advised: Flicker in Media, Medicine and Art 169
6 Close Exposure: Of Seizures, Irritating Children, and Strange Visual Pleasures 217
7 Conclusion: Tracing Compression 247
Appendix: List of referenced audiovisual works 255
Index 257
About the author
Marek Jancovic is Assistant Professor of Media Studies at the Vrije Universiteit Amsterdam, Netherlands. His research is centered around the materialities of the moving image, film preservation practices, media and the environment, and format studies.
List of Figures
Fig. 1.1 Still from a DVD edition of Leni Riefenstahl's Olympia (1938), published by Hot Town Music-Paradiso 2
Fig. 1.2 Detail of various traces of compression in Olympia. Left: interlacing artifacts and ringing artifacts. Right: Widescreen Signaling code and blocking artifacts (visible as a very faint square mosaic structure across the image) 3
Fig. 1.3 Ghosting artifacts in Olympia: an intermediate video frame created by superimposing two adjacent film frames 5
Fig. 1.4 A glitch in the landscape: potash mining pools near Moab, Utah. (Photograph by and courtesy of Nelson Minar) 15
Fig. 2.1 Detail of chainlines, visible as a faint pattern of vertical stripes running along the sheet of paper. Minor edge damage is also visible. The image is of handwritten lecture notes from 1822 or 1823, from the Modern Manuscripts Collection of and digitized by the Library of the Vrije Universiteit Amsterdam, object ID 38558578 30
Fig. 2.2 A Hinman collator at the Folger Shakespeare library in Washington, DC. (Photograph by Julie Ainsworth. Image 48192, used by permission of the Folger Shakespeare Library) 38
Fig. 2.3 Hinman collator in use at Watson Library, 1959. (Image source: University of Kansas Libraries, Special Collections. Call Number: RG 32/37 1959) 39
Fig. 2.4 Three frames from the explosion scene at the end of Akira on 35 mm film, showing faint vertical scratch marks and dust throughout. (Film scan courtesy of UCLA Film & Television Archive) 41
Fig. 2.5 Still from the explosion scene in Akira digitized from a VHS. Uploaded to YouTube by user intel386DX on July 22, 2017. Provenance and digitization method unknown. Color shifts, slight shearing at the top and bottom of the frame, macroblocking artifacts, a black border from several reformattings and other subtle and less subtle image distortions are apparent 42
Fig. 2.6 Still from the same scene in the 2001 "special edition" DVD by Pioneer Entertainment. This copy was derived from an interpositive film print restored and scanned by Pioneer Entertainment, which removed dust, dirt and scratches. Square blocking artifacts ("pixilation") can be discerned faintly 43
Fig. 2.7 Still from the 2013 "25th anniversary" Blu-ray edition of Akira released by Funimation. This copy is based on a new scan of the same restored photochemical print as above. The image shows film grain with minor blocking artifacts resulting from its digitization. Of note is also the slightly different aspect ratio of each version 43
Fig. 2.8 Screenshot from a trailer for The Hitchhiker's Guide to the Galaxy, directed by Garth Jennings, circulated on YouTube. © Disney Enterprises 46
Fig. 2.9 The same frame from the 2007 Touchstone Home Entertainment Blu-ray release of the film. © Disney Enterprises 47
Fig. 2.10 Still from "Welcome to Heartbreak" music video (2009) by Kanye West feat. Kid Cudi, directed by Nabil Elderkin. © 2009 Roc-A-Fella Records 60
Fig. 2.11 Still frame from Light Is Calling (2004), a film by Bill Morrison. Courtesy of Hypnotic Pictures. Original photography from The Bells (1926), directed by James Young, shot by L. William O'Connell 62
Fig. 2.12 Stills from Machine to Machine (2013) by Philippe Rouy. (Images courtesy of the artist) 63
Fig. 3.1 Interlacing in Olympia. Interlacing creates an intermediate frame where a hard cut would have been in the original film. These intermediate frames are not very visible in motion, because they only appear for 1/50th of a second, but are noticeable to a trained eye 78
Fig. 3.2 Detail of interlacing or "combing" artifacts in Olympia as they might appear on a digital display 79
Fig. 3.3 Examples of Walter's image transmission grids. (Image source: Walter 1898, 4) 83
Fig. 3.4 An image received via the telediagraph in New York in 1899 (left, note the handwritten instruction) and the finished sketch by an artist as it appeared in the New York Herald (right). (Image source: Cook 1900, 347) 86
Fig. 3.5 Perpendicular scanning. (Image source: Schröter 1928, 456) 88
Fig. 3.6 Helical scanning: alternating even and odd scan lines. (Image source: Schröter 1928, 457) 89
Fig. 3.7 A format war unfolding in print: advertisements for scanning discs and cathode ray tubes side by side, as they appeared in the German TV and film journal Fernsehen und Tonfilm 2 (3), 1931 91
Fig. 3.8 Severe field dominance error. (Image courtesy of Esben Hardt) 94
Fig. 3.9 Samuel L. Hart's "apparatus for transmitting pictures of moving objects." (Image source: Hart 1915, 17) 98
Fig. 4.1 Blocking artifacts—a transient decoding error during playback of Olympia from DVD 125
Fig. 4.2 Still from Olympia DVD showing the Olympic fire with heavy MPEG blocking 137
Fig. 4.3 Still from a copy of Stan Brakhage's Mothlight uploaded to YouTube in 2012. Apart from the poor resolution and drastic blocking, interlacing artifacts can be seen as horizontal striations 138
Fig. 4.4 Detail of ringing artifacts in Olympia. A portion of the thin halo is indicated with white arrows 140
Fig. 4.5 The Gibbs phenomenon. Top: an ideal square wave with sharp edges. Middle: a graph of its Fourier series approximation with 5 coefficients. Bottom: 20 coefficients. Around each edge, the function "rings." By adding more coefficients, the Fourier series will more closely approximate the original function and the wiggles will get narrower, but they will not disappear completely. (Graphs: Author) 142
Fig. 4.6 Undated photograph of Michelson and Stratton's 80-coefficient harmonic analyzer, as demonstrated by scientist Harley E. Tillitt, probably sometime in the 1960s. (Image courtesy of the Special Collections & Archives Department, Nimitz Library, U.S. Naval Academy) 146
Fig. 4.7 William Thomson's 10-coefficient tide-predicting machine (1876). (Image courtesy of and © Science Museum Group) 147
Fig. 4.8 Graphs drawn by Michelson and Stratton's analyzer. As the number of coefficients increases, the curve begins to approximate the square wave, but the oscillations around the discontinuity (the small wiggles around the corners of the square graph) remain. (Image source: Michelson and Stratton 1898, 88) 148
Fig. 4.9 Still from Wings of Desire (1987) by Wim Wenders, from the Criterion Collection Blu-ray, compressed with Advanced Video Coding. (Image © 1987 Road Movies GmbH—Argos Films. Courtesy of Wim Wenders Stiftung—Argos Films) 150
Fig. 4.10 Still from Wings of Desire, compressed with high efficiency video coding. (Image © 1987 Road Movies GmbH—Argos Films) 151
Fig. 4.11 An analytical table on a transparency. (Image source: Zipperer 1922) 154
Fig. 4.12 Example of a DCT quantization table (left) and the zig-zag path that the run-length encoder follows 157
Fig. 5.1 One frame from the two-frame strobing animated GIF sent to Eichenwald. This is a meme that has been circulating online since at least 2004 170
Fig. 5.2 A detail of Marey's myograph as illustrated in Etienne-Jules Marey, La machine animale: Locomotion terrestre et aérienne (Paris: G. Baillie, 1873), 31 178
Fig. 5.3 Photograph of a television monitor showing a patient and their EEG in multiple brain regions at the onset of an absence seizure. (Image source: Bowden et al. 1975, 17) 184
Fig. 5.4 Seizure warning in Tony Conrad's The Flicker. (Image courtesy of the Tony Conrad Estate and Greene Naftali, New York) 201
Fig. 5.5 A portion of the 16 mm strip of Epileptic Seizure Comparison. (Image courtesy of the Estate of Paul Sharits and Greene Naftali, New York) 202
Fig. 6.1 Still from I'm Not the Girl who Misses Much (1986) by Pipilotti Rist. © Pipilotti Rist c/o Pictoright Amsterdam 2022 231
Fig. 6.2 Still from (Absolutions) Pipilotti's Failure (1988) by Pipilotti Rist. © Pipilotti Rist c/o Pictoright Amsterdam 2022 234
CHAPTER 1
Introduction: Looking at Olympia
It is wonderful how a handwriting which is illegible can be read, oh yes it can. Gertrude Stein, The Geographical History of America (1936)
Films can look quite alien when you look at them as surfaces inscribed with traces. What you see in Fig. 1.1 is a still from a DVD of Olympia, Leni Riefenstahl's controversial documentary of the 1936 Summer Olympics in Nazi Berlin. The DVD edition that I captured this still from was published in 2008, and its packaging advertised the contents with the prominent subtitle "Original German Version." In an immediate sense, what we are looking at here is the figure of a torch relay runner carrying the Olympic flame towards Berlin. But if you squint a little and explore the surface of the image with some care, you will find that this obvious content is overlaid with intriguing textures. Besides the occasionally visible scratches and organic matter remaining from some distant transfer from photochemical film, what might catch your attention is a strange, jagged pattern of alternating horizontal lines which seems to arrest the runner's right arm in two moments in time simultaneously (Fig. 1.2, left). These stripes are the result of a video compression method known as interlacing. Video compression is a set of mathematical and engineering techniques that make video signals "smaller" and thereby make them faster to transmit, easier to store and cheaper to circulate.
Fig. 1.1 Still from a DVD edition of Leni Riefenstahl’s Olympia (1938), published by Hot Town Music-Paradiso
There are many ways of achieving this, and interlacing is one of them. It works by “interlacing” two consecutive frames of video into a single image composed of hundreds of thin lines. By alternating the odd and even lines quickly, interlacing manages to remove half of the visual information in every frame and leaves our eyes none the wiser. A further attentive look at the paused image also reveals exaggerated contrast around shape boundaries. You will notice that those parts of the runner’s body that abut the light grey background, such as the left edge of his head and torso, are overcast with a shadow slightly darker than the rest of his body. There is a technical term for this phenomenon; it is known as a “ringing artifact.” It, too, can result from various forms of compression, but also from manipulations of electronic images such as artificial sharpening during post-production. The image also reveals a pixilation effect that resembles a very faint checkerboard pattern (Fig. 1.2, right). These are so-called blocking artifacts, yet another common side-effect of digital video compression methods, which often reduce image data by dividing images into square “blocks” of pixels.
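To make the mechanics of this a little more concrete, the following minimal Python sketch weaves two hypothetical frames into one interlaced frame; the pixel values are invented for illustration, and this is not the implementation of any particular broadcast standard:

```python
import numpy as np

# Two consecutive 8x8 grayscale "frames" (hypothetical values): one bright,
# one dark, so the weave is easy to see in the printed output.
frame_t0 = np.full((8, 8), 200, dtype=np.uint8)  # frame at time t
frame_t1 = np.full((8, 8), 50, dtype=np.uint8)   # frame at time t + 1

# Interlacing keeps only every other line of each frame and weaves the
# remainders together: even rows ("top field") from one moment in time,
# odd rows ("bottom field") from the next.
interlaced = np.empty_like(frame_t0)
interlaced[0::2, :] = frame_t0[0::2, :]
interlaced[1::2, :] = frame_t1[1::2, :]

print(interlaced[:4, 0])  # rows alternate: 200, 50, 200, 50, ...
# Half of each frame's rows are discarded, yet when the two fields are shown
# in rapid alternation the eye fuses them into seemingly full-resolution motion.
# Wherever the two moments differ (a moving arm, a hard cut), the weave shows
# up as the jagged "combing" pattern visible in the paused still.
```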
Fig. 1.2 Detail of various traces of compression in Olympia. Left: interlacing artifacts and ringing artifacts. Right: Widescreen Signaling code and blocking artifacts (visible as a very faint square mosaic structure across the image)
Altogether, this is a somewhat outlandish way of watching Olympia. It puts aside for a moment the meaning of the image—its function as a record of a major sporting event and a controversial aesthetic landmark in the history of cinema. Rather than attending to what is shown in its images, looking at Olympia this way brings forward the traces inadvertently inscribed in them. The traces hint at a historical narrative, but one told in a different style of narration than what we might normally picture under the term “film history.” Instead, these are traces of Olympia’s gradual decay, and they speak of a history of compressions, format changes and incompatible standards.
A New and Worse Original

Olympia's video signal is stored on the DVD in a digital format known as MPEG-2. MPEG stands for the Moving Picture Experts Group, a consortium that creates video compression standards. You probably have at least a few MPEG files stored on your computer and phone, although they are most likely compressed and formatted with newer, more efficient methods. Analyzing the MPEG-2 video track from the Olympia DVD yields some surprising information. Despite being published in Europe, the video on the disc has a vertical resolution of 480 pixels and a frame rate
of 29.97 frames per second (fps). This is unusual. It means that the video was formatted according to an American standard, based on old U.S.-American analog television norms. Normally, you would expect a DVD sold in Europe to have a higher resolution and a lower frame rate, namely 25 fps, matched to European TV norms and to the frequency of the European electrical grid. Olympia was originally projected at 24 fps for its first cinema audiences, a slightly lower speed. Somehow, it had found itself on a DVD published in Belgium with a North American television frame rate standard. The displacements across at least three different media formats and technological norms introduced visible disturbances into the image. The film scholar Vinzenz Hediger (2005) has argued that when talking about film history, "the original" of a film is best not thought of as an object, but rather a set of practices. If Hediger spoke of film reconstructions and historical editions on DVD as "new and improved originals," then a traditionally-minded DVD collector and cinephile might consider my Olympia a new and worse original. The phantom images haunting the surface could not have been present in the "original" film, since they were first elicited by the very processes of reformatting, standards conversion and compression. Despite the claim on the cover, the Olympia I saw was hardly the "Original German version" but rather a cover version orchestrated by machines, algorithms and engineers. Film historians sometimes complain about the deleterious effects that reformatting procedures and compression processes like interlacing can have on the aesthetics of film and on established methods of film analysis (Bordwell 2007). But I argue that under the right circumstances, these faults and failures can also inform our understanding of media and their histories in productive ways. Slowly advancing the Olympia MPEG file frame by frame reveals a syncopated cadence: out of every six frames, at least four are interlaced and at most two are not. This is the transatlantic jazz of incompatible frame rates. It is the result of a reformatting method known by the clunky technical term "2:2:3:2:3 pulldown," and characteristic of video footage that has been converted to the American norm of 29.97 frames per second from a source running at the European speed of 25 fps. A different counting method confirms this: out of every 30 frames on the DVD, 25 are unique frames that correspond to images present in Riefenstahl's 35 mm film, and 5 frames (20% of the entire film, after all) will show ghosting (Fig. 1.3). These additional "ghost" frames are composites. They were not present
Fig. 1.3 Ghosting artifacts in Olympia: an intermediate video frame created by superimposing two adjacent film frames
in the film strip Riefenstahl and her audiences would have witnessed, but rather synthesized electronically in order to fill in the difference between the two compression formats. There is some other fascinating information that software tools can reveal about my Belgian Olympia DVD. The video file claims to adhere to a standard known as BT.601, as most commercial DVDs do. This document standardized in 1982 how analog television signals in Europe are to be digitized, including how the color information should be encoded. BT.601 itself refers to the color space defined by the International Commission on Illumination (CIE) in 1931, when Riefenstahl was still pursuing a career as an actor rather than a director. The CIE’s color system specified the quantitative relationships between the spectral properties of visible light and how that light could be matched to what we humans call color. In the process, the Commission also established a so-called standard colorimetric observer, otherwise known as “a normal eye”—a mathematical function that represents average human color perception. This “normal eye” was derived from a group of 17 people observed by the scientists
John Guild and William David Wright in 1929.1 Finally, the CIE color space also defined "standard illuminants," a theoretical light source based on the spectrophotometric average of sunlight at noon on a clear day in Western Europe (Wright 1929; Guild 1932; Fairman et al. 1997). My close viewing of Olympia thus turned into a gateway to an almost spiritual experience of the sublime. Measurements of the operation of tiny photoreceptor cells in the fovea of a small group of people with "normal" eyes were entangled with geographical, climatic, even cosmic coordinates of the Earth and with the light that had reached it from the Sun almost a century ago. Inscribed in the metadata I had pulled out of the DVD was a long, branching chain of standards that refer to other standards, reaching all the way back to the 1920s and beyond. These standards are very technical documents. Their objective is to relate the functioning of the human eye with the properties of light to regulate how electronic moving images are displayed by video devices. But through the standards, the DVD format becomes ensnared not only in a history of media technology, but also in histories of science, perception and corporeality. The 17 study participants examined by Guild and Wright were all sighted, trichromatic and had no yellowing of the eye's lens due to old age. The standard thus also tacitly incorporates ideas about vision, sight ability and youth. It does not just describe how a "normal" human eye functions. It defines what a normal eye is. Shaped by these standards, Olympia—specifically, my DVD copy of Olympia—seemed to be simultaneously present in multiple historical times and originate from several regions of the world at once.

1 "A normal eye" was the title of a 1931 memorandum by John Guild, as quoted in Wright (2007)[1981].

Interpreting the Traces

Curious about the provenance of this hybrid, cosmopolitan and, in a sense, queer object, I called its Belgian publisher. Hot Town Music-Paradiso is actually a music label that also happens to publish a small but eclectic selection of DVDs, including various editions of Riefenstahl's propaganda films. The owner of the label disclosed to me that the video material had been delivered by a post-production facility in Flanders that went bankrupt in 2010, but was unable to provide any further information on the source of the digitized film. I can, however, say with certainty that the
material on the DVD did not directly originate from scanned film but from a secondary if not tertiary analog video source. The presence of a Widescreen Signaling code, a thin horizontal line of alternating black and white dashes at the topmost edge of the image (Fig. 1.2, right) and the fact that this line flashes five times per second clearly indicate that the source had been made in Europe before being converted and recompressed to the American DVD format.2 Widescreen Signaling was introduced in European analog television in 1995, so the source for the DVD was made no earlier than this, but probably at least several years later, and most likely no later than 2005 or so. It is not unreasonable to speculate that the video on the DVD may have originated from a VHS tape with much lower quality.

2 A WSS specification for NTSC exists, but an analog widescreen norm was never adopted.

But it is puzzling that a European video source would be converted to an American DVD format and then end up being published in Europe anyway. It means that Riefenstahl's film material had first been converted to a different frame rate, making the film run 4% faster and elevating the audio pitch by about a semitone. Cinema and television use different frame rate standards, so in order to make a theatrical film viewable on TV, it must first be "conformed." In Europe and most other regions of the world, simply playing the film faster is the accepted way of achieving this. Indeed, comparing my DVD with some of the copies circulated on streaming platforms online, the motion is evidently more dynamic and the music is "tuned" higher, changing the experience, tone and pacing in subtle but clearly perceptible ways. This reformatted material was then reformatted again to the American norm before being mastered to DVD, despite being targeted primarily at the Dutch-language market in Europe. All of this is counterintuitive and none of it makes much sense, at least not aesthetically. Not only does this process introduce one unnecessary format conversion (which lowers the resolution and overall image quality) but playing back an American-formatted disk on a DVD player and television that output a European frame rate could introduce further unpleasant artifacts, like visible judder in the movement. The Flemish post-production company could deliver a DVD master in an alien frame rate knowing that it would be viewable on all DVD players sold in Europe. The reverse is not necessarily true: many North American DVD players would not be able to convert the video signal to the frame rate expected by the television set. Multi-system TVs
able to accept different frame rates were a rarity in North America when the DVD was published. I surmise that Olympia was encoded in this unusual way because a copy in an American norm was simply already available at hand, procuring other material would not have been viable financially, and possibly to also avoid (the financial inconvenience of) further destructive reformatting of an already suboptimal video tape. The result of each of these reformattings and recompressions was still the film Olympia, but in the plural. It is ironic that a work by “the quintessential articulator of the Nazi film aesthetic” (Schulte-Sasse 1991, 125) so mired in fascist ideologies of purity and cleanliness would be marked with traces of all kinds of intermixing of different mediatic bloodlines. The Olympia DVD exhibits many impurities, losses and imperfections. But in this sense, it is a paradigmatic film. To witness a film in a “pure” state is such an extraordinarily rare and abstract circumstance that it is almost hard to envision. A small fraction of our audiovisual heritage exists in this form protected by our archives, but most moving images are seen under imperfect conditions: with traces of dozens of reformattings, small errors or major failures, with interruptions and buffering and compression artifacts irritating the image, with projector breakdowns and format incompatibilities, wrong aspect ratios, edits, scratches and signal dropouts, with contrast and brightness improperly set, screens too reflective or too dark, audio out of sync, impossible to hear over the noise of the street or too loud for comfort. These are the everyday realities of media spectatorship. They are also an everyday appurtenance of film history. Despite objections from a handful of purists, much of film history has been written after viewing films in this way, in such a state, from deteriorated VHS tapes and 16 mm copies of copies of copies. Certainly, most people watching Olympia from a DVD will have the ability to ignore what may seem like trifling details. As spectators, we are trained to turn a blind eye to small disturbances and learn not to notice them too much. They are considered extraneous to some imagined original that may have pre-existed them and they seem irrelevant to the “content.” But as the media theorist Adrian Mackenzie put it, “viewers may not be highly conscious of how brightness, chrominance and movement have been minutely altered by the [compression] codec. These differences can be easily cancelled out or remain almost imperceptible. This does not mean that they make no difference” (A. Mackenzie 2013, 153). These small failures of transparent mediation, even though they may be easy to miss, are an integral part of the experience of moving images.
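The frame-rate arithmetic behind these observations is simple enough to spell out. The following lines are only an illustrative sketch of the calculation, not a reconstruction of how the disc was actually mastered:

```python
import math

film_fps = 24          # cinema projection speed of Olympia
pal_fps = 25           # European television norm
ntsc_fps = 30000/1001  # the American norm, ~29.97 fps, used on this DVD

# European TV "conforms" a 24 fps film by simply running it at 25 fps.
speedup = pal_fps / film_fps - 1
pitch_shift = 12 * math.log2(pal_fps / film_fps)  # in semitones
print(f"{speedup:.1%} faster, pitch raised by {pitch_shift:.2f} semitones")
# -> 4.2% faster, pitch raised by 0.71 semitones

# Converting that 25 fps material to the ~29.97 fps norm then has to invent
# frames: every 25 source frames must fill 30 output slots, so 5 of every 30
# frames (one in six) are interlaced composites of two neighboring film frames.
print(30 - 25, "ghost frames per 30 output frames")
```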
The argument I will make in this book is that such traces of compression and decay are like historical inscriptions. With the right method, paying attention to them can lead to rich new histories of the moving image and facilitate an encounter with its countless neglected entanglements with the world: the ways in which moving images interact with our bodies, affect the environment, shape the formation of scientific knowledge and circulate around the globe.

A New Way of Looking at Moving Images

Not every film is necessarily like this, of course. But many are just like Olympia or worse. Lossy, reformatted and compressed haphazardly, many times, in ways that may appear crude or illogical. Every frame in Olympia is marked with traces of its migrations around the world. It has passed through multiple compression formats; formats that are the result of a century of technological standardization and in many cases tied to geopolitical and ideological conflicts (Boetcher and Matzel 2002; Fickers 2007; Angulo et al. 2011). And yet Olympia still fails to properly conform to any of them—it is inscribed by standards, but improperly; misinscribed. This makes this particular DVD into a uniquely suitable object with which to start thinking about how complex the history of moving images truly is. The various traces in the image—which we might call misinscriptions—speak against Olympia. The material reality of the film's circulation undermines the imaginaries of purity that are so central to the political aesthetics of fascist societies (Schulte-Sasse 1991). The markings are minor, but reaffirm Anna Tsing's observation about the world and our place in it: "Everyone carries a history of contamination; purity is not an option" (2015, 27). Loss and decay, mistakes, errors and interference, perhaps even a sort of intractable queer impulse to continuously transmute one's form and resist norming and standardization are always working at the core of media culture. Despite what people make of Olympia as a film, my DVD edition of it insists on the hybridity of media, of media histories and of media practices. These are the themes I will unfold in this book. Previous ways of analyzing Riefenstahl's film have opted to address its iconography, pointed to the historical exchanges between national-socialist ideology and cinema, scrutinized the invention of an ancient past in the service of both aesthetics and propaganda, or argued about the film's political meaning (Mandell 1971; Hart-Davis 1986; M. Mackenzie 2003). Film-historiographical approaches have also tried to assess Olympia's
trustworthiness as a historical document, as “the source of what everybody, except for those who were there, thought went on at the 1936 Games” (McFee and Tomlinson 1999, 89). I propose a different method of looking at moving images. Borrowing from the discipline of epigraphy, a field closely affiliated with archaeology that studies ancient stone engravings, I call it media epigraphy—the study of media inscriptions as traces. Media epigraphy asks somewhat different questions, not in order to replace critical semiotic, aesthetic and historiographical approaches to media, but to complement and complicate them. “The emulsion itself remembers the passage of time,” Laura U. Marks once wrote of analog film (2002, 96). If we take this statement not just as a poetic metaphor but at face value, then it has far-reaching historiographical implications. How would Olympia “remember” the blemishes and scars on its skin? What would its displacements from emulsion to magnetic tape to hard drive to polycarbonate disk say about its past? And can that “memory” be made accessible? The textures in Olympia, layered generation upon generation, do not necessarily reveal its past with certitude; they do not represent it. But they do extend its presence. The traces of compression suggest a somewhat more complex history of Olympia not simply as a film, but a film that has existed in many formats. The literary scholar Meredith McGill (2018) has argued that the notion of “format” is especially well-suited to begin addressing media circulation, that historically and theoretically neglected interval between media production and reception. Indeed, the traces in Olympia urge us to ask why it passed through the various formats it did, who these formats were seen by and how it moved across various regions and cultures of spectatorship. Personally, I encountered this object in the context of an art performance that followed the long pre-history of the neo-Nazi Golden Dawn party in Greece. Who else might have seen it? Could it be that it is formatted in a strange way because it was part of some bootlegging circles before finding its way to a legitimate DVD publisher? What type of circles could they have been? The traces in the film’s images do not answer such questions, but they do make them possible. They allow us to propose new speculative interconnections between moving image standards, infrastructures of circulation and viewing practices. I say speculative because there may ultimately be no conclusive way to tell why Olympia was released in Europe in a North American compression format. But the traces contained within allow some conjectures and interpretations.
In order to make these conjectures, media epigraphy needs to ask novel questions, examine novel objects of research, and take novel linkages into account. It may need to research how DVD player hardware works and what its functionality was in various regions of the world at specific moments in time. It might have to look at how the MPEG-2 compression standard is designed and implemented, how films are compressed and reformatted and how post-production companies and DVD publishers collaborate. To that end, the approach I am calling media epigraphy may need to be open-ended, informed by thorough understanding of material and technology, seek inspiration from many other disciplines, but also carefully reflect on perception and on what exactly one is seeing and why. As a method of studying moving images, media epigraphy may need to wonder how one’s bodily constitution and sensory habits influence the traces one notices. Practicing epigraphy means that one may need to zoom in, pause, rewind and squint at the image, or use software tools for assistance. And we may also have to carefully examine what the limits of such tools are. Even capturing suitable stills to demonstrate some of the visual phenomena I have described can be tricky. Many of them are difficult to notice and describe in a book when the image is not moving, scraping the epistemological limits of knowledge that can be communicated through language rather than the senses. Different playback software and hardware might create different perceptual effects and change the appearance of video recordings, stubbornly insisting on the concrete temporal, spatial and material circumstances of video as a performed process and enacted technique, rather than simply a technology. This means that the traces of compression I speak about are not simply already “in” the image, waiting to be “read,” nor are they produced solely by the machine showing it. Rather, they dynamically occur during its performance, and are distributed across devices, standards, perceiving bodies and infrastructural systems, and the frictions and incompatibilities among them. The early history of film recording, projection and television broadcast technology has been exhaustively researched on the level of objects, machines and their components (Turquety 2018). But there is practically no research on the influence of signal manipulation techniques like compression which, as the communications and sound scholars Jonathan Sterne and Tara Rodgers (2011) have argued, are directly involved in the “cultural politics of perception.” The field of media studies still has a poor grasp of the significance of compression algorithms, formats, codec
libraries, authoring and mastering software, or of video processing firms and chip manufacturers whose electronics reformat, filter, upscale, sharpen, deinterlace, deblock, convert and "rectify" a large portion of the moving images that encounter us. Olympia is a palimpsest of such processes, some of which I will address in this book. But this is, in fact, not a book about Olympia at all. Its DVD provoked many of the questions and I will keep returning to it often. But I am interested in something more fundamental than this single film. I intend to approach the question of what exactly the history of the moving image entails. Video compression, it turns out, is an unexpectedly rewarding entrance towards a tentative answer. What on its surface seems like a highly technical subject, of interest to hardly anyone but mathematicians and video engineers, can, in reality, reveal many of the principles by which our audiovisual culture operates. Compression is what allows images to circulate. Without it, there would be no television, no streaming platforms, no digital cinema. There would be no amateur video, no smartphone recordings. No YouTube, no Netflix, no TikTok. No VHS, no DVD, no Blu-ray. And compression is remarkably universal. Works of high filmic art tend to get compressed with the same techniques as a B-movie or a news broadcast or that funny video you saw online a few days ago. Compression is at work on a massive projection screen in the digital cinema, on my Olympia DVD, as well as in the animated memes we exchange on our phones—although it may not be the same degree of compression in all of these settings all the time. But compression also reaches beyond these familiar configurations of the moving image, causing headaches for neurologists who work with digital images, as well as for film archivists who try to preserve our audiovisual cultural heritage. And still: this is only a fraction of the effects that compression has on art, science, our physical environment and even on our bodies. The effects of compression, once you begin to notice them, manifest in such strange and unexpected situations as the wardrobe of newscasters, the frequency of epileptic seizures around the world or obscure nineteenth-century mathematical controversies. Compression is involved in so much more than just the task of making video files smaller and more mobile. It can tell us about how the electrical infrastructure of our world is built and how it affects our bodies when we move through it. It can reveal how machines and media both support and hinder the work of scientists and how these scientists grapple with phenomena they don't understand. Following compression into these
contexts is, like all historical research, somewhat akin to detective work. In order to fully appreciate the web of influences—the many "distant correspondences" as Michel Foucault (1982, 138) might have called them—we might need to adjust how we look at moving images. We might need to strain our view a little and stay alert to faint clues and small vestiges. The traces might lead us down a path where previous analytical categories and distinctions reach their limits and it becomes necessary to invent new ones. In fact, compression is so central to so many different processes in our culture that we might have to entirely revise how we think of the history of media, technology and infrastructure. Not only films are kept in the state of disarray and hybridity that Olympia stands in for. If the viewing conditions in cinemas or in the home are often suboptimal, this is even more true of the many other devices that produce light and moving images: lightbulbs flicker, information screens at train stations glitch, LED signs stop working, the touchscreens of ATM machines refuse to reciprocate our impatient, desiring touches. Olympia's non-conformance to standards speaks to the larger non-functioning of noisy infrastructure that supports our daily lives as viewers, consumers and citizens. This infrastructure also decays and often fails, usually not in disastrous ways, but to an extent irritating enough to interlope our actions and be noticed. This book is an exploratory epigraphy that follows compression transversally across such electrical and technical infrastructure, across epistemic systems of science and medicine, and across bodily techniques of spectatorship.

Compression as a Material Process

The film and media scholar Lisa Parks (2007) once proposed salvaging as a metaphor for doing television history. She suggested that as "new media" were becoming the more attractive field of research, television would slowly turn into a forgotten wasteland which might, nevertheless, still hold many valuable scraps left to discover. With the growing importance of ecological questions in humanities scholarship, Parks's presentiment now appears more compelling than ever. Waste, not only metaphorical, now has value. It is theorized, aestheticized and politicized (Schneider and Strauven 2013). Taking cue from Parks and responding to Jonathan Sterne's invitation (2012, 250) to treat compression as the basis for understanding the history of media and communication technology, in this
book, I will also seek out and salvage visual waste—traces of failures, errors and decay—to find clues leading to a better understanding of media’s past. Compression and reformatting, as we have seen in Olympia, are two of the primary ways in which such traces become inscribed in media objects. In the following chapters, compression will lead us into the “junkyard” of film, television and computing. The visual scraps will reveal how antiquated video compression methods irritate linear narratives of media history, and how short-lived and now obsolete machines have played crucial roles in the development of media technology, but also in scientific disciplines like bibliography, mathematics and neurology. First, though, let us take a moment to reflect on what compression, as a process integral to media culture, actually does. Compression of information can be “lossy” or “lossless.” Lossy compression can be understood simply as “the technique of making image files smaller by deleting some information,” as summarized by Lev Manovich in his classic The Language of New Media (2001, 54). Lossy techniques irreversibly alter signals or destroy information, which cannot be recovered after compression. Lossy techniques are often used to compress sound, images and video, because the human sensorium has a high tolerance (or low sensitivity, depending on your perspective) for such “losses.” We tend not to notice very fine differences in texture or color in images, especially if they are moving. Our ears are blissfully oblivious to musical tones of high frequencies, especially if they are overpowered by louder sounds. Lossy compression techniques exploit this partial numbness of our bodies to make media content travel faster. Some other methods are lossless: redundancies are shrunk during transport or storage, but can be fully recovered when decompressed. Fitting a letter in an envelope by folding it rather than cutting it to size is a good analogy for this. Lossless compression is commonly used for text, but increasingly also in digital audiovisual preservation. This book is about the former, lossy types of compression, because it is my aim to recast the “losses” it leaves behind as traces. Yet in light of current discussions, we could also think of lossy compression as a special case of waste management. As a thought experiment, let us briefly reconceptualize compression as smelting, as a high-temperature value extraction process. Compressing data means extracting a valuable part of a signal and discarding the rest as waste—a process in many ways similar to the separation of an ore into valuable commodity and worthless gangue.
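Before returning to the smelting analogy, a minimal sketch can make the lossless/lossy distinction above tangible; the toy "signal" and the quantization step are invented purely for illustration:

```python
import zlib
import numpy as np

# A toy signal standing in for one scan line of an image: a smooth 0-255 ramp.
signal = np.linspace(0, 255, 1024).astype(np.uint8)

# Lossless compression is the "folded letter": zlib shrinks the bytes, and
# decompression restores every single bit unchanged.
packed = zlib.compress(signal.tobytes())
restored = np.frombuffer(zlib.decompress(packed), dtype=np.uint8)
assert (restored == signal).all()  # nothing was lost

# Lossy compression discards information outright: quantizing to 16 grey levels
# (instead of 256) throws away gradations that can never be recovered ...
quantized = (signal // 16) * 16
# ... but the coarser signal is more regular and therefore packs into far fewer bytes.
print(len(signal.tobytes()), len(packed), len(zlib.compress(quantized.tobytes())))
```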
In the processing of metals, after the ore is smelted, the gangue is discarded but does not simply vanish. In many parts of the world, smelting waste becomes an orographic feature. Artificial mountains and toxic pools built from coal, copper, nickel and bauxite tailings dot global landscapes with unnaturally sharp contours and oversaturated colors, much like the visual artifacts in a corrupted MPEG video file (Fig. 1.4). Spoil tips and slag heaps are the geological glitches of the Anthropocene, inscribed into the medium of Earth. Like modern middens, these un/natural visual archives document a history of labor and of the human re-shaping of the environment. If we conceive of the history of the moving image in these metallurgic terms, it is of some interest to note that before William Dickson invented the 35 mm film format and the Kinetoscope, he had been researching for
Fig. 1.4 A glitch in the landscape: potash mining pools near Moab, Utah. (Photograph by and courtesy of Nelson Minar)
Thomas Edison a process to separate iron and gold from low-grade ore (Spehr 2000, 6). I am using this line of thought not only to recall the work of scholars like Jussi Parikka and Nicole Starosielski, who have discovered new ways of thinking through the intimate interconnections between media technologies, minerals and waste (Parikka 2015) and between visual cultures and temperature-modulating regimes of power (Starosielski 2021). The reason I am invoking metaphorical parallels between video compression and industrial processes of smelting is also to draw attention back to their materiality and physicality. There is a tendency in media theory to regard the compression of signals as a process of reduction and its results as reductive forms of some larger, originary, fuller and more complex metaphysical murmur (Galloway and LaRivière 2017, 129). There is a longer genealogy of similar media theories that rely on imagining some "uncompressed" plane of existence to which compression can serve as a diminished form.3 But the sound scholar Melle Kromhout counters that in the context of signal processing and communication, any contemporary notion of a pure, uncompressed signal, as a metaphysical or even theological entity separable from noise, is no more than a symbolic limit case fabricated by mathematical idealizations of wave phenomena in the eighteenth and nineteenth centuries (Kromhout 2017; also Siegert 2003). The idea of a perfect original signal is useful because it allows calculations that have practical applications in both digital media and mathematics. But this is purely a working model. On the material level, no signal is uncompressed, because compression is not an a posteriori effect. It is the condition of existence of signals and mediation. This will be my starting point in thinking about compression.

3 In Friedrich Kittler's media theory, loosely based on Lacan's psychoanalytic trifecta, this domain coincided with "the Real," which he associated with the technology of the phonograph (Kittler 1999). Wolfgang Ernst (2014) also adapts a similar model in some of his writings on sound media. There has been some criticism of this idea (cf. Hansen 2015) and there has been criticism of the criticism, too (cf. Kromhout 2017).

As a process, compression can be elusive. Its outcomes are certainly observable: most people will readily recognize pixelation and poor resolution in an image, and will hear that the person on the video call or on the other end of the phone line sounds tinnier than they do during a face-to-face conversation. But the actual mechanisms of compressing something tend to be invisible. They take place in electric cables, antennas, transistors or silicon chips, as algorithms that work on some data, or as
electromagnetic modulations of a radio signal traveling through the ether. Perhaps this is what often hampers our ability to envision compression as a material action. Nevertheless, rather than viewing every reduction of information as a compression of some pre-existing originary signal, I propose that compression is neither reductive nor destructive, but highly productive. The fundamental actions underlying all media-technological processes of compression are spatial and temporal gestures of folding, ordering and selection. These actions are generative. They create relationships and order that did not exist before, and whose traces often remain inscribed in the compressed objects. Some information may get lost, but something else is won. The compressed images in my Olympia DVD have lower resolution and quality than the film Riefenstahl herself would have seen in the cinema, but throughout the process of compression, they also preserve indices about the video signal’s past. All processes we call compression in the media-technological sense first require a sequence of decisions that formats and orders a signal in very specific, organized ways before separating the dispensable from the indispensable. This is, essentially, the same socioeconomic process by which some matter, metals and minerals become valued as desirable commodities and others are turned into waste. The distribution of value between them materializes in the industrial process of separation by smelting. As an example, the interlacing process, which produces the regular thin horizontal stripes seen in Olympia, cuts the bandwidth of a video signal in half. But it does not do this by crude bisection, like severing a body at the torso. Instead, it first orders the image into a series of regular horizontal lines and then meticulously discards every other row. In effect, we still lose half of the “body,” but the thin slices that remain will tell us more about the anatomy of the whole than just a bust would. Just how fundamental the ordering of data prior to its compression is, is demonstrated in Codec, a 2009 work by Paul B. Davis, a media artist who frequently works with obsolete computer technologies. Davis was one of the early glitch artists who used the manipulation and intentional misuse of compression codecs as a way of exploring new video aesthetics, critiquing popular culture and hacking technology. Codec consists of a proprietary compression algorithm (called PBD) and a 6.5-minute single-channel video that explains its use, in the form of a computer desktop tutorial with Davis’s voiceover. Codec stands out in the landscape of metacodecs, a term Ingrid Hoelzl and Remi Marie (2014)
introduce for contemporary works of art that are both “about” compression algorithms and go beyond them. Many metacodecs, including Davis’s earlier works, focus on glitches and thus on the visible or audible moments of compression failure. But Codec moves on from the aesthetics of glitch art and brings the ordering processes to the fore by completely redefining what a successful compression means. Davis’s unique compression algorithm reduces video file size by comparing image data to a predefined model (his earlier video work Video Compression Study #4 from 2007) and adjusting any deviation so as to match the model. This is a convoluted way of saying that any video “encoded” with the PBD algorithm is simply replaced by Davis’s earlier work. Davis demonstrates this by “compressing” a copy of Kanye West’s Welcome to Heartbreak music video into PBD format. The result is a file that looks and sounds exactly like Video Compression Study #4—indeed, it is exactly that, since its bitstream has become identical to it. Davis’s satirical codec pokes fun at the implicit contract we maintain with compression algorithms that makes us expect that their output should be in some sensorially specific ways similar to the input. Instead of approaching compression from the assumption that all humans will have statistically similar psychophysical responses to visual stimuli, Codec assumes that all video, “art” or “mainstream,” is intrinsically fungible. Entertainingly enough, it lets the referential and representational promise of compression fail totally while actually preserving most other useful properties of video encoding. For example, as with every other codec, some types of signals are better suited to be compressed with the PBD algorithm than others. The more the footage resembles the model, Davis explains in his deadpan voiceover, the better it will compress. Davis has only removed the sorting and ordering that allows various psychovisual and psychoacoustic models of human perception to be implemented in the content. But without that step, the PBD format is useful only for quite a small number of use cases, namely, to compress just one single video. Compression, to furnish Lev Manovich’s earlier definition with more precise contours, is therefore “making image files smaller by deleting some information,” but not just any information. Preceding the deletion, compression involves the careful manufacturing of spatial and temporal relationships and a discriminate formatting of signals into epistemic molds—tables, grids, matrices, trees—according to finely-tuned criteria that often reflect scientific understandings of humans’ sensory functions and dis/abilities. Only then can compression bring about the work it does.
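What Davis inverts becomes easier to see when his description is spelled out in code. The sketch below is not the PBD codec itself, to whose implementation I have no access, but a deliberately crude caricature of the logic narrated in the tutorial: whatever the input, the “encoder” returns the predefined model, and the only quality metric left standing is how much the input already resembled that model. The names and array shapes are invented for illustration.

```python
# A caricature of the logic Davis describes for his PBD codec, written for
# illustration; the names and shapes are invented, not taken from the work.
import numpy as np

# Stand-in for Video Compression Study #4. In the artwork the model is Davis's
# earlier video, not random noise; any array of matching shape will do here.
MODEL = np.random.default_rng(0).integers(0, 256, (240, 320), dtype=np.uint8)

def pbd_encode(frame: np.ndarray) -> np.ndarray:
    """'Compress' a frame by adjusting every deviation from the model --
    which is to say, by returning the model itself."""
    return MODEL.copy()

def resemblance(frame: np.ndarray) -> float:
    """The only statistic that still matters: how closely the input already
    resembled the model (0 = nothing in common, 1 = identical)."""
    return 1.0 - float(np.mean(np.abs(frame.astype(int) - MODEL.astype(int)))) / 255.0

# "Encoding" any other video -- Welcome to Heartbreak, a holiday video, anything --
# yields output whose bitstream is identical to the model.
other_frame = np.zeros((240, 320), dtype=np.uint8)
assert np.array_equal(pbd_encode(other_frame), MODEL)
print(f"how well this frame 'compresses': {resemblance(other_frame):.2f}")
```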
The processes of ordering and the resulting spatial data structures are mostly concealed and forgotten. We do not need to know that an analog television image that appears like a continuous surface is actually composed of two “fields” and 576 “lines,” or that every “frame” of an MPEG-2 video is a mosaic of “blocks” exactly 8 pixels tall and 8 pixels wide, grouped into “macroblocks” of four. We consider compression successful when it—and these underlying structures of information it creates—remain invisible to us. But such invisibility is rarely perfect. Traces of compression often become discernible in relief, especially as moving images move around and decay over time. The media artist and theorist Hito Steyerl uses the term poor images for all those highly compressed, badly resolved and repeatedly reformatted GIFs and viral videos that circulate the world over. What poor images lack in resolution, they make up for with speed and mobility (Steyerl 2009). The famous term is a compact shorthand suitable for addressing image circulation and visual culture in relation to capital. But it is useful to remember that even very poor images have some value. Indeed, poor images, too, are the result of repeated smelting, refining and value extraction. Even the tiny and artifact-ridden pictures Steyerl wrote about are the valuable part left over after some other signal detritus had been discarded. Somewhere, at some point in time, there had been surplus data that was deemed to be even poorer and was simply compressed away. Unlike real-world gangue, data discarded during lossy compression does eventually disappear into nothingness. But even data that is lost can leave behind traces. Lossy compression is therefore something of a misnomer. In fact, the lossier the compression, the more new features, which we sometimes reify with the term artifacts, an image tends to acquire. But how could loss turn into an artifact? This paradox is part of what makes artworks about compression so compelling: they make us aware of the befuddling fact that a removal causes signals to gain characteristics they did not possess before. As the following chapters will show, compression is therefore not analogous to abstraction, as some media philosophers have proposed (Galloway and LaRivière 2017), nor simply a “cultural practice” that operates on symbols. Compression is a material operation, performed on an industrial, global scale, central to the functioning of media economies. It operates as infrastructure. Accepting the premise that compression is a physical process and continuing with the smelting analogy leads to some absurd-sounding but provocative questions: Where does waste from compression
go? What does it leave behind? Does it accumulate in our environment? Is it toxic? Surprisingly, it can be. In the next chapters, I will elaborate on how digital compression algorithms translate into environmental effects and discuss images whose compression characteristics cause all sorts of tangible medical effects, from nausea to seizures.
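Before turning to the chapter overview, the hidden orderings invoked above can be made concrete in a few lines of code. The following sketch is my own illustration rather than an excerpt from any codec: it takes a frame stored as an array of luminance values and exposes two of the structures just mentioned, the separation of a frame into two interlaced fields of alternating lines and the tiling of the image into the 8 by 8 blocks on which MPEG-2-style coders operate.

```python
# Illustration only: making visible two of the orderings that compression
# imposes on a frame before anything is discarded. Not excerpted from a codec.
import numpy as np

# A toy luminance frame of 576 lines by 704 pixels (both divisible by 16).
frame = np.random.default_rng(1).integers(0, 256, (576, 704), dtype=np.uint8)

# Interlacing as ordering: not a crude bisection, but a separation of the
# frame into two fields made of alternating lines.
top_field = frame[0::2, :]      # lines 0, 2, 4, ... -> 288 lines
bottom_field = frame[1::2, :]   # lines 1, 3, 5, ... -> 288 lines

# Block ordering: the frame regarded as a mosaic of 8 x 8 blocks, the unit on
# which DCT-based coders such as MPEG-2 operate; four such luminance blocks
# (a 16 x 16 area) belong to one macroblock.
h, w = frame.shape
blocks = frame.reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)  # shape (72, 88, 8, 8)

print(top_field.shape, bottom_field.shape, blocks.shape)
```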
Chapter Overview Each chapter of this book takes up a visual trace of compression. I begin with a deliberately low-tech example—folds in paper—to show that compression has been a central technique of media culture for centuries. The subsequent chapters address a range of compression artifacts: interlacing artifacts in both analog and digital video, ringing and macroblocking artifacts in digital video, and flicker. All of these result from techniques of making things smaller, including non-things like electromagnetic signals. The next chapter, “Media Epigraphy,” delineates some key principles of the method I am introducing, anchored around three central terms: trace, failure, and format. Compression artifacts are often traces of minor failure. Image compression, after all, is considered successful if it remains imperceptible, so any sensuous trace might be seen as a small failure to achieve this goal. Both the concept of failure and the concept of trace have become increasingly important to our understanding of history, culture and society. Compression usefully ties these two terms together, and enables us to link the historiographical tradition known as microhistory to more recent developments in media forensics and book studies. I will show that film and media studies can benefit from engaging with these fields, not least because a look towards the materiality of paper can inform our understanding of abstract media notions like “format” and, indeed, “compression.” “Interlacing” traces the development of the oldest video compression method. Interlacing was developed in a complex, confusing and hybrid media environment, and this chapter centers around the contributions of the German physicist and engineer Fritz Schröter and his failed 1920s experiments with phototelegraphy. Interlacing is one of the essential compression methods in both analog and digital video and television. It has been “invented” multiple times in many different forms, in connection with numerous imaging procedures. I take its multifaceted history as an opportunity to think critically about the historiographical value of origin
stories at large, and propose that instead of objects and inventions, media history should take techniques and practices as its central focus. The chapter “+Et cetera in infinitum” explores the many linkages that connect digital video compression to a long history of mechanical computing machines, mathematical techniques, optical devices and scientific fields like calorimetry, the study of the propagation of heat. By examining the traces of phenomena like blocking and ringing, types of compression artifacts commonly seen in both digital and analog video, this chapter investigates how algorithms developed in mathematical physics at the beginning of the nineteenth century reverberate through present-day moving images, and argues that computing has been part of the history of moving images since long before contemporary digital media. An epigraphy of the material culture of mathematics demonstrates that compression, even in its contemporary sense relating to digital media, is always a physical operation that manipulates and deforms concrete things and has tangible effects on the world. “Viewer Discretion is Advised” examines how video compression can exert dangerous effects on human bodies. Following flicker, a visual disturbance that results from compressing moving images, I explore the strange entanglements of medicine, electrical infrastructure, visual media and avant-garde art. A media archaeology of neurology shows how various light-emitting and projecting devices have been central to the formation of this medical field. I then examine the influence that neurological research has had on the work of a number of experimental film and video artists, offering a critical analysis of several film works in which compressed and flickering images play a leading role. “Close Exposure” carries on with an analysis of how compression mediates between our bodies and the lived infrastructure of our world, but reverses the narrative. Instead of focusing on the harmful effects that compression standards can have, I show how they can also become a source of visual pleasure. The core historical case study consists of an investigation of some productive ways of looking at flickering, failing and dysfunctional images developed by people with a neurological condition known as photosensitive epilepsy. Lying hidden in neurological literature is a history of unusual ways of seeing and sensing the world. These creative forms of spectatorship queer medical and patriarchal hierarchies and reveal how the compression of moving images comes to covertly operate in such surprising contexts as gender, health and sexuality.
This book is rooted in the traditions of media archaeology and science and technology studies, but develops their methods further. It offers a new way of considering the history of media technology: a history of intersecting epistemic techniques, bodily sensations and electrical infrastructures. Engaging conceptually and methodologically with book studies and sound studies, disability studies and queer phenomenology, media epigraphy is a new way of looking at errors, failures and decay in images as traces of subdued histories. This is not an exhaustive history of video compression, partly because I do not believe that exhaustion is the goal of history-writing. Rather, it is one possible epigraphy among many others. The cases of compression that I have chosen are incomplete and selective in that there are many other methods of compressing video which I do not discuss. But my examples do bring forward many of the ways in which compression has historically shaped our relationship with moving images and with the world. This book shows the neglected role of science in the history of media, and the role of media in the history of science. It brings to light a wide range of marginal media practices and historical media devices that have rarely—in some cases never—been discussed by media scholars. The chapters are not connected by any particular chronological links, even though they might loosely suggest a historical progression from early compression techniques to later ones. However, the examples addressed here rather call into question the very idea of chronology as historical succession, since one of the motifs accompanying all chapters is the complex and multidimensional temporality of many compression methods, and the impossibility of locating their origins with certainty or pinning their effects down to a specific scale of inquiry or historical period. Moreover, the examples frequently blur the lines between analog and digital media, complicating essentialist ontologies. Instead of origins and historical certainties, what awaits us is a strange and circuitous journey. Small traces in the moving images we encounter will lead us to large changes in cultural, technological and epistemic systems.
References Angulo, Jorge, Joan Calzada, and Alejandro Estruch. 2011. Selection of Standards for Digital Television: The Battle for Latin America. Telecommunications Policy 35: 773–787. https://doi.org/10.1016/j.telpol.2011.07.007. Boetcher, Sven, and Eckhard Matzel. 2002. Entwicklung der Farbfernsehsysteme (PAL, SECAM, NTSC, PALplus). In Medienwissenschaft: ein Handbuch zur
Entwicklung der Medien und Kommunikationsformen, ed. Joachim-Felix Leonhard, Hans-Werner Ludwig, Dietrich Schwarze, and Erich Straßner, 2174–2187. Berlin: Walter de Gruyter. Bordwell, David. 2007. My Name is David and I’m a Frame-counter. Observations on Film Art. Ernst, Wolfgang. 2014. Between the Archive and the Anarchivable. Mnemoscape 1: 92–103. Fairman, Hugh S., Michael H. Brill, and Henry Hemmendinger. 1997. How the CIE 1931 Color-matching Functions were Derived from Wright-Guild data. Color Research & Application 22: 11–23. https://doi.org/10.1002/(SICI)1520-6378(199702)22:13.0.CO;2-7. Fickers, Andreas. 2007. “Politique de la grandeur” versus “Made in Germany”: Politische Kulturgeschichte der Technik am Beispiel der PAL-SECAM-Kontroverse. München: De Gruyter Oldenbourg. Foucault, Michel. 1982. The Archaeology of Knowledge. Translated by A.M. Sheridan Smith. New York, NY: Pantheon Books. Galloway, Alexander R., and Jason R. LaRivière. 2017. Compression in Philosophy. Boundary 2 (44): 125–147. https://doi.org/10.1215/01903659-3725905. Guild, J. 1932. The Colorimetric Properties of the Spectrum. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 230: 149–187. https://doi.org/10.1098/rsta.1932.0005. Hansen, Mark B.N. 2015. Symbolizing Time: Kittler and Twenty-First-Century Media. In Kittler Now: Current Perspectives in Kittler Studies, ed. Stephen Sale and Laura Salisbury, 210–238. Cambridge: Polity Press. Hart-Davis, Duff. 1986. Hitler’s Games: The 1936 Olympics. New York: Olympic Marketing Corp. Hediger, Vinzenz. 2005. The Original Is Always Lost: Film History, Copyright Industries and the Problem of Reconstruction. In Cinephilia: Movies, Love and Memory, ed. Marijke de Valck and Malte Hagener, 135–149. Amsterdam: Amsterdam University Press. Hoelzl, Ingrid, and Remi Marie. 2014. CODEC: On Thomas Ruff’s JPEGs. Digital Creativity 25: 79–96. https://doi.org/10.1080/14626268.2013.817434. Kittler, Friedrich A. 1999. Gramophone, Film, Typewriter. Translated by Geoffrey Winthrop-Young and Michael Wutz. Stanford, CA: Stanford University Press. Kromhout, Melle Jan. 2017. Noise Resonance: Technological Sound Reproduction and the Logic of Filtering. Doctoral dissertation, Amsterdam: University of Amsterdam. Mackenzie, Michael. 2003. From Athens to Berlin: The 1936 Olympics and Leni Riefenstahl’s Olympia. Critical Inquiry 29: 302–336. https://doi.org/10.1086/374029.
Mackenzie, Adrian. 2013. Every Thing Thinks: Sub-representative Differences in Digital Video Codecs. In Deleuzian Intersections: Science, Technology, Anthropology, ed. Casper Bruun Jensen and Kjetil Rodje, 139–154. New York: Berghahn Books. Mandell, Richard D. 1971. The Nazi Olympics. New York: Macmillan. Manovich, Lev. 2001. The Language of New Media. Cambridge, MA: The MIT Press. Marks, Laura U. 2002. Touch: Sensuous Theory and Multisensory Media. Minneapolis: University of Minnesota Press. McFee, Graham, and Alan Tomlinson. 1999. Riefenstahl’s Olympia: Ideology and Aesthetics in the Shaping of the Aryan Athletic Body. The International Journal of the History of Sport 16: 86–106. https://doi.org/10.1080/09523369908714072. McGill, Meredith L. 2018. Format. Early American Studies: An Interdisciplinary Journal 16: 671–677. https://doi.org/10.1353/eam.2018.0033. Parikka, Jussi. 2015. A Geology of Media. Minneapolis: University of Minnesota Press. Parks, Lisa. 2007. Falling Apart: Electronics Salvaging and the Global Media Economy. In Residual Media, ed. Charles R. Acland, 32–47. Minneapolis: University of Minnesota Press. Schneider, Alexandra, and Wanda Strauven. 2013. Waste: An Introduction. NECSUS. European Journal of Media Studies 2: 409–418. https://doi.org/10.5117/NECSUS2013.2.SCHN. Schulte-Sasse, Linda. 1991. Leni Riefenstahl’s Feature Films and the Question of a Fascist Aesthetic. Cultural Critique: 123–148. https://doi.org/10.2307/1354097. Siegert, Bernhard. 2003. Passage des Digitalen: Zeichenpraktiken der neuzeitlichen Wissenschaften, 1500–1900. Berlin: Brinkmann & Bose. Spehr, Paul C. 2000. Unaltered to Date: Developing 35mm Film. In Moving Images: From Edison to the Webcam, ed. John Fullerton and Astrid Söderbergh Widding, 3–28. London: John Libbey Publishing. https://doi.org/10.2307/j.ctt1bmzn7v. Starosielski, Nicole. 2021. Media Hot and Cold. Durham: Duke University Press. Sterne, Jonathan. 2012. MP3: The Meaning of a Format. Durham: Duke University Press. Sterne, Jonathan, and Tara Rodgers. 2011. The Poetics of Signal Processing. differences 22: 31–53. https://doi.org/10.1215/10407391-1428834. Steyerl, Hito. 2009. In Defense of the Poor Image. e-flux: 1–9. Tsing, Anna Lowenhaupt. 2015. The Mushroom at the End of the World—On the Possibility of Life in Capitalist Ruins. Princeton, NJ: Princeton University Press.
Turquety, Benoît. 2018. On Viewfinders, Video Assist Systems, and Tape Splicers: Questioning the History of Techniques and Technology in Cinema. In Technology and Film Scholarship. Experience, Study, Theory, ed. Santiago Hidalgo, 239–259. Amsterdam: Amsterdam University Press. Wright, William David. 1929. A Re-determination of the Trichromatic Coefficients of the Spectral Colours. Transactions of the Optical Society 30: 141–164. https://doi.org/10.1088/1475-4878/30/4/301. ———. 2007. Professor Wright’s Paper from the Golden Jubilee Book: The Historical and Experimental Background to the 1931 CIE System of Colorimetry. In Colorimetry: Understanding the CIE System, ed. János Schanda, 9–23. Hoboken, NJ: John Wiley & Sons. https://doi.org/10.1002/9780470175637.ch2.
CHAPTER 2
Media Epigraphy: Format, Trace, and Failure
Picture a large sheet of paper, about as large as you could reasonably hold in both hands without it buckling or tearing. Now imagine that you are a bookmaker who is about to transform this sheet into the leaves of a book. Once the type is printed, the sheet will be folded over itself multiple times, drastically reducing its size to a gathering of pages that is much easier to handle and carry. The paper will get smaller—it will undergo compression. This chapter traces a genealogy of compression as a media technique intertwined with bookmaking, focusing on three interrelated terms to lay a foundation for media epigraphy: format, trace, and failure. I will argue that the folding of paper is the proto-form of compression and that histories of the moving image (and of audiovisual media at large) have much to gain theoretically, conceptually and methodologically if they seriously engage with bookmaking and the study of books. Bibliographers have developed distinctive, visual ways of doing historical work and examining traces which can serve as a model for media epigraphy. The material history of printing and bookbinding can inform current interdisciplinary discussions regarding important media-technological notions that extend far beyond printed media.
Format The notion of format has been at the center of many recent debates in film and media studies, as well as in art history and literary studies. Compression formats like MP3 or JPEG are central to our experience of media and culture. “Formats” appear in all media industries, but the term has very divergent meanings depending on whether it refers to printed media, radio stations, television programming, film or electronic files. All of these different denotative functions point to a complex web of material practices, technological standards, cultural significances, social dynamics and sensory histories. The difficulty of defining what format exactly means has been noted repeatedly in recent research, but it also provides a fertile ground for novel approaches to media culture (Sterne 2012; Volmar et al. 2020). Because it is used in different contexts, it can illuminate historical correspondences between disparate technologies. In the interest of contributing to what Jonathan Sterne calls “format theory” and conceptually grounding it in the study of paper and bookmaking, I will begin with the oldest meaning of the word, which comes from the bookmaking trade. But why look to a medium like paper in a book on video compression? Film scholars and media archaeologists like Laura U. Marks and Wanda Strauven have shown that it will benefit our understanding of the history of film immensely if we adopt a capacious attitude towards its many materialities. Including in our purview materials like textile and techniques like weaving, we may discover historical precursors to such modern-seeming concepts as resolution or image matrix (Marks 2014; Strauven 2020). Following in these footsteps, I am asserting that some of the methods and concepts used in the study and making of books can be put to good use in the analysis of moving images, too. Especially the terms format and compression, whose meaning converges in the medium of paper, can enrich our thinking about media history across conventional categories and help us situate video in a much larger genealogy of compressive practices. Folding and Unfolding The book scholar Thomas Tanselle explains that bibliographers need a word to describe the relationship between the physical structure of a book and the routines of a print shop that have produced it (Tanselle 1971). Historically, the printing trade used “format” for this. Contrary to its
vernacular use, format refers not to size but to the folding of a book: a single folding of the paper sheet creates a folio format, a double a quarto—the ancestor of today’s A4—a triple an octavo, and so on. Writes Tanselle: “format is not one of the properties of paper but represents something done to the paper” (1971, 32). To format is to fold. This orthodox bibliographic definition is useful for the study of many media besides books because it directs our attention to formats as practices, as actions done. A narrow interpretation of format as folding may seem limiting, but it already contains all of its later permutations and sets the stage for the chthonic fibers that, as recognized by Jonathan Sterne (2012, 17), connect seemingly distinct media. Formatting has always been a compression in the contemporary sense. The folding of the paper sheet shrinks its dimensions and simplifies its transport and storage. But the act of folding will also rotate the pages and shuffle them around. To ensure that the text on each leaf will remain readable and the right way up, a printer has to anticipate the spatial shifts that will occur, and twist and arrange all individual pages on the forme1 prior to impressing them on the sheet. This process of spatial arrangement is called imposition, and it is fundamentally an encoding problem. It ensures that the compressed data is organized correctly after folding and decodable in the correct order during reading. While the technology is much different in a digital video file, as we will see in later chapters, digital compression also fundamentally relies on such spatial, geometric and rotational manipulations of signals. We may therefore think of the fold of a book as a trace of its compression—an early example of a compression artifact. The important insight to collect from Tanselle is that if format is not a description of what an object is but rather a trace of the material and haptic procedures that have called forth its outward form, then identifying formats is not a matter of the descriptive Galilean techniques of measurement and categorization. Rather, formats are established through analytical techniques—interpretation and inference. These are the domain of all the disciplines that, as the historian Carlo Ginzburg (1989) has argued at length, share a distant lineage in divination and the reading of venatic clues: history, archaeology, medicine, criminology. To bibliographers, the term format does not signify something that can be measured, because it refers to the folding of a sheet, and only indirectly to its size. A format is not simply “there” but has to be induced and teased
1 The forme is the assembled and secured layout of the pages before printing.
Fig. 2.1 Detail of chainlines, visible as a faint pattern of vertical stripes running along the sheet of paper. Minor edge damage is also visible. The image is of handwritten lecture notes from 1822 or 1823, from the Modern Manuscripts Collection of, and digitized by, the Library of the Vrije Universiteit Amsterdam, object ID 38558578
out from traces in the paper. To a reader, the direction of chainlines (an imprint of the wires that hold the paper pulp, Fig. 2.1) or the placement of watermarks left behind by the paper mold are fundamentally irrelevant to the philological “essence” of a book, the text. But bibliography reverses this semiotic hierarchy. Bibliographers treat this material residue as a meaningful trace of the papermaking process. Book scholars can reproduce a book’s past by studying the frayed edges of its pages, carefully observing the location of watermarks and inspecting the wave patterns in the paper to reconstruct how the sheet was folded. To attain a bibliographically useful description of the format of a book, the wave patterns left by the wires of the mold have to be unfolded and examined as inscriptions, and therefore as something already more than just a side-effect of a technological process. All of this has historiographical consequences. If we understand formats as practices rather than as measurable physical properties of objects, then studying the history of formats and other forms of compression means not practicing a history of technology, but a history of techniques (Febvre
1983[1935])—an analysis of “doings or happenings” rather than of objects (Bridgman 1954, 224). Technology includes “the machines, and their components,” Benoît Turquety recently summarized, “whereas technique describes what concerns gestures, practices, and the conscious choices implied on the operators’ side” (2018, 242, my emphasis). Turquety points out that in comparison with technological, economic and aesthetic histories of cinema and television, the history of techniques in moving image culture remains largely uncharted. Media epigraphy is attentive to this deficit. One of my central focal points in this book will be various techniques of folding, ordering, arranging, reformatting, projecting, as well as viewing moving images. Formats are often semantically related to the notion of a container. In this sense, format was being used in radio broadcasting as early as the 1930s. The cultural critic Gilbert Seldes wrote in 1950 that “[t]o make individual programs forgettable, yet hold the audience, means that the format must be the link between one program and another.” And further: “Drama and the big popular comedy programs are in the upper reaches of radio; lower down, format is purely a matter of packaging, wrapping other people’s goods in new paper” (Seldes 1950, 112). This understanding of format as a structural link, container or wrapper is also at play in television, where the word format denotes a dramaturgical armature: a central premise on top of which a number of screenplays or a series can be developed (Meadow 1970). The reality TV shows Big Brother or The Voice are formats: standardized templates for content originally developed for the local television market in the Netherlands that can be easily rewrapped in different national packaging and adapted in other countries to seem like local, domestic shows. The packaging metaphor is also sustained in institutional language. The Society of Motion Picture and Television Engineers (SMPTE) is a professional association that creates technological standards for the media and entertainment industry, and will have an important role to play in a later chapter. One of SMPTE’s committees responsible for format interoperability is called “Media Packaging and Interchange.” These conceptual links between formats and packaging harken back to the notion that formats are something external to the essence, a shell that contains but is not the thing proper, like a dispensable film can that houses an invaluable negative. This seeming peripherality is also carried over into computing, where formats manifest to users as the file extension, hidden by default like an insignificant appendage.
Overall, these examples register a certain cultural tendency to dismiss formats as secondary to cultural memory. After all, what matters to most users is the video of their holiday or graduation, not whether it had been stored in MP4 or AVI format. But, recalling much earlier work in science and technology studies and the history of technology, I will argue that the container may be just as important as the content. In the domain of television and video, the terminology surrounding formats is perhaps the most inconsistent. The sets of analog and digital standards that govern how television is transmitted around the world—NTSC, PAL, ATSC, DVB, and so on—are often called formats and occasionally protocols. They are at the same time techniques for compressing, encoding and transporting video, audio and data signals. Format also commonly refers to the aspect ratio or dimensions of an image. This is how European Union legislation uses the word, and German telecommunication law also contains a provision called the “format protection clause.” This law was introduced in 2007 to stimulate the dissemination of widescreen content and prevent operators of public broadcasting networks from tampering with signals in the 16:9 aspect ratio.2 This meaning has been common internationally since the early days of both cinema and television, following the word’s use in painting and photography. The aspect ratios of electronic images are also a crucial element in the history of compression. As we have learned from the history of bookmaking, there is an intrinsic connection between formats and compression. For books, at least, the format is the result of the latter. But in electronic images, too, compression often dictates the format. For example, the width and height of a JPEG file may seem arbitrary, but on a technological level, the image data is always padded out to multiples of 8, because the compression algorithm used in JPEG images operates on a grid of 8 by 8 pixels. And many of the engineering challenges of analog television broadcasting similarly revolved around squeezing a signal in a limited broadcasting spectrum into an image with a specific aspect ratio—such as in the widescreen analog television system PALplus. PALplus Compression and format remain closely interlinked also outside of the bookmaking trade. The transition to widescreen television broadcasting during the 1990s was not simply a problem of choosing the right aspect ratio, but
2 Telekommunikationsgesetz (German Telecommunications Act) § 49 (1)
also of folding a larger signal into an already overcrowded electromagnetic spectrum. In the United States, the Federal Communications Commission’s adamant unwillingness to increase channel width posed a major hurdle to the implementation of high definition television (Pool 1988; Katz 1989; Schubin 1996). The same regulatory obstacle had already cropped up in 1953 when the color system NTSC was introduced. The European PALplus format had to deal with a very similar engineering snag. PALplus was the backward-compatible widescreen television system used across Europe during the transitional period in the 1990s and 2000s, leading up to fully digital broadcasting. The challenge in PALplus was to devise a compression method that could show images both on older standard-definition television sets and on new widescreen receivers. The widescreen image thus had to be “squeezed” in such a way that it would appear correctly on two generations of televisions with two different aspect ratios. A trace of PALplus also appears on my DVD of Olympia. Watching the film progress, one might notice a line of alternating black and white dashes at the top edge of the image (Fig. 1.2, right). This is a digital code embedded in the analog video broadcast that can be decoded by compatible widescreen TV receivers and instructs them to adjust the aspect ratio of the image. It was developed in the early 1990s at the University of Dortmund and introduced into broadcasts in the second half of the decade. It is uncommon to see the widescreen signal on a DVD, since it is unnecessary in digital video. The fact that it is present in Olympia confirms that the footage had been sourced from an analog video format rather than film. On a digital display and on my particular DVD, the PALplus signal is rather jarring, because it flashes on and off five times per second throughout the entire film due to the conversion from the European to the American norm. PALplus solved the formatting problem by splitting the image signal into two separate bands. The portion of the image’s vertical frequencies that was not needed by non-widescreen TVs was transmitted in a different part of the signal, separately from the rest (Jack 2004). Using mathematical techniques, the intangible video signal was manipulated and reformatted in a very tangible sense: the widescreen picture was essentially letterboxed to fit a standard-definition screen, and the trimmed vertical detail was sent independently in a different region of the signal. Older receivers would ignore this part, but a newer television set would be able to “glue” them back together and reconstruct the entire wide image. To
reiterate my point, this is compression as a material process, even when it deals with diaphanous signals that seem to lack materiality. Like the folding, binding and trimming of paper, compression folds, divides, cuts and reorders electric signals and electromagnetic waves.3 As the television engineer and media historian Mark Schubin describes, the widescreen 16:9 format of PALplus and of current digital broadcasting was not only a mathematical compromise between the narrow television aspect ratio and the wide anamorphic CinemaScope film ratio. The 16:9 format appeared suitable and eventually emerged as a global standard for a whole host of reasons. It also provided twice the resolution of standard definition (which, by the loose recommendation of the International Telecommunication Union, was what “high definition” meant). The resolution of a 16:9 video signal could be accommodated by existing memory devices and it provided benefits for electronic circuit design. But it was also well-suited for cheaper photochemical filming, since one frame in the 16:9 ratio could fit into a height of three perforations on standard 35 mm film instead of four. So, filming in this format would save 25% of film stock (Schubin 1996). Within the film industry and in TV productions shot on film, three-perforation 35 mm film was, after all, the major competitor format of HD video. Both the 16:9 aspect ratio and the PALplus format, with its digital code in an analog signal, are reminders that “digital media” did not simply replace “analog media,” but that even now, a great many digital and analog formats coexist, compete and fold into hybrid forms. Lest we forget, a large portion of the world still relies on analog broadcasting. As the years-long delays, scandals and difficulties surrounding the launch of digital television in countries like South Africa demonstrate, analog television and analog forms of compression are not television history, but still part of a global media present. Unruly Formats The mechanization of the printing industry in the first quarter of the nineteenth century enabled the production of much larger paper than was previously possible. As Tanselle notes, besides the tenfold increase in 3 As legend has it, Kerns Powers, who proposed the now near-universal 16:9 screen aspect ratio in 1984, derived this format by drawing rectangles of all existing film aspect ratios on pieces of paper with a pencil, cutting them out and overlaying them on top of each other.
papermaking speed, this also led to a great multiplicity of book formats (Tanselle 1971). With the introduction of wove paper in the mid-eighteenth century and later automation of papermaking, some of the traces usually found in early books, such as chainlines, disappear or get added to paper artificially for ornamental purposes and cease to give an indication about the format of a book. In such cases, book scholars resort to analyses of faults, failures, losses and damage. The format of books from the nineteenth century onward can often only be revealed based on incomplete and damaged copies. For instance, the leading edge of a printing forme receives the most stress during printing, so by examining damage on the type, bibliographers can uncover clues about the imposition and therefore indirectly also about the format (Tanselle 2000). In newer books, the format and folding can at times be determined if an untrimmed or even unopened copy has been preserved. Such an object resembles a book but it fails to be one fully, since it cannot be opened and therefore also cannot be read. It is only from the moment the fold is irreversibly cut open, when an interface—Schnittstelle—is created, that the “bookness” of a gathering of folded papers is actualized. Bibliography thus teaches us that damage and faults in their many forms can be epistemically fertile. Indeed, in certain cases, they may be the only possible mode of addressing an object’s past. But even prior to the mechanical revolution in papermaking, bibliographical markings have always been only incomplete traces, a kind of circumstantial evidence that needs to be deciphered in order to be explained. Much like my Olympia DVD, with its inexplicable mixture of different norms, in the history of bookmaking, many ambiguous, odd and mixed formats exist for which provisional terms like “octavo-form sextodecimo” have to be improvised (Tanselle 1971, 2000). As a matter of fact, it is entirely possible to encounter books that do not have any identifiable format at all. If a book’s format cannot be established from traces in the paper, then for bibliographical purposes, it is a book without format.4 This is an important insight, because it departs from the usual conception of formats as naturally and evidently given in any media object. Instead, formats are constructed epistemically. They rely on the interpretive labor of recognition and sense perception.
4 For a number of interesting cases from Britain in the early nineteenth century, see McMullin (2003).
This bibliographical understanding of formats and the techniques book scholars use in identifying them are a helpful model because they treat the obvious with suspicion, just as the film scholar Haidee Wasson has encouraged in her invitation to shift the focus of film studies to the study of formats (Wasson 2015). How could we apply these bibliographical principles to moving images and their history? Instead of simply stopping at the observation that a film is “a 35 mm film” or that it was seen “from a DVD,” a bibliographical—or perhaps media epigraphical—way of looking at moving image formats would continue the investigation: what exactly is it that makes this film be categorized as a 35 mm film? What does it mean to say that a film is “in DVD format,” besides the obvious fact that it is stored on a DVD? What material traces do these formats leave apart from the measurable facts that they are 35 mm wide or 120 mm in diameter? How is the format inscribed in the content? Questions like these slightly readjust film studies’ typical focal distance. Asking them might potentially reveal that we are, in fact, surrounded by many interesting objects like Olympia, whose reformattings and recompressions hint at a complicated history. Wondering about the materiality of formats is nothing new. The eighteenth-century physicist and mathematician Georg Christoph Lichtenberg was the author of “About Book Formats,” an essay we could consider a key early text in format theory. Paper formats had occupied Lichtenberg for a long time. In a letter from October 1786, he reported that he had given one of his algebra students the task of finding a paper ratio that would look self-similar when folded in the middle, since such a ratio (1:√2) would surely have “pleasant and exquisite” qualities (Lichtenberg 1967, IV: Briefe:686, my translation). No sooner had Lichtenberg decided to manufacture such a sheet with scissors than he discovered with delight that all of his writing paper was already formatted this way. This discovery then led him to wonder about the material circumstances of producing and distributing paper, the geographic source of the molds used by German papermakers and the reasons for the emergence of the ratio: “Are there prescribed rules that mold-makers follow, or did this shape spread only by tradition?” (ibid., my translation).5 Lichtenberg’s questions remarkably resonate with the types of inquiry that many contemporary media archaeologists would be interested in.
5 For the answer to Lichtenberg’s question, see Kinross (2009).
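Lichtenberg’s observation can be restated in one line of algebra (my addition, not his): a sheet with longer side a and shorter side b keeps its proportions when folded across the longer side only if the half-sheet, with sides b and a/2, is similar to the whole, hence

\[
\frac{a}{b} = \frac{b}{a/2} \quad\Longrightarrow\quad a^{2} = 2b^{2} \quad\Longrightarrow\quad \frac{a}{b} = \sqrt{2},
\]

the proportion later codified in the German DIN 476 standard of 1922 and in today’s ISO 216 A-series, to which the A4 sheet mentioned above belongs.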
Indeed, the bibliographical methods of identifying book formats require a thorough diachronic understanding of the tools and techniques of papermaking, printing, binding and trimming. In order to make qualified statements about the history of books, bibliographers must investigate the sequence of imposition; the shape and weight and durability of the molds and deckles and wire facings; the pressure and weight applied to the type and forme and various other parts of the machines; and the specific, precise directions, rules and ways of grasping and handling them that papermakers, typesetters and binders traditionally used. It is this type of corporeal and sensory knowledge that media epigraphy might seek when researching other kinds of formats, too. Understanding compression as a material process thus draws attention to practices and conventions but also logistics and infrastructure—in other words, “the catacombs under the conceptual, practical, and institutional edifices of media,” as Sterne (2012, 16) called formats.
Trace Using examples from book studies, we have seen how textual media can be historically re-evaluated in non-textual and material terms. I have drawn on bibliography as a rewarding conceptual reservoir for the study of audiovisual media, formats and compression. Let us now turn our attention to the meaning of traces, and to the ways book scholars study them. One particular bibliographical practice is especially methodologically germane to media epigraphy: optical collation. The Optical Collator When books get reprinted, the publisher or typesetter occasionally makes changes to the type. The publisher may, for instance, have noticed a typographical error and fixed it in the second edition. To book and literary scholars, such changes are immensely valuable because they can help piece together a particular book’s history. Scholars frequently parse different editions of a book looking for deviations. But a swapped letter in a whole book is a needle in a haystack. Looking for one is tedious and straining to the eyes. To aid in this vital component of their work, book scholars have developed a unique scientific instrument, the optical collator. The collator is a machine that simplifies the visual identification of differences between two printed editions of texts by means of a flickering or
stereoscopic lens system. It is also an intriguing case study for scholars of the moving image. The Hinman collator, named after its inventor Charlton Hinman, is the first and most famous among them (Figs. 2.2 and 2.3). It was developed in the late 1940s and uses lights and shutters to present a very short “film” (consisting of only two images) to the eyes. Two editions of a book can be placed on each side of the collator and alternatingly
Fig. 2.2 A Hinman collator at the Folger Shakespeare Library in Washington, DC. (Photograph by Julie Ainsworth. Image 48192, used by permission of the Folger Shakespeare Library)
Fig. 2.3 Hinman collator in use at Watson Library, 1959. (Image source: University of Kansas Libraries, Special Collections. Call Number: RG 32/37 1959)
illuminated by a flickering light. While the books are being “animated” like this, a bibliographer looks into the viewfinder and any “glitches” on the page—differences between the editions such as a misprint or a variant—will stand out to the eyes like a shimmering or protruding spot. In this way, textual differences become addressable graphically. One of the Hinman’s successors, the Lindstrand comparator by Gordon Lindstrand from the 1970s, uses a system of lenses identical in principle to 3D cinema, showing a slightly different image to each eye by using mirrors, a prism and a binocular viewer. With this visual technique, the tiring labor of collating historical books and scouring them for minuscule discrepancies can be done much more efficiently. What fascinates me about the practice of optical collation is that it is historical work, but it is grounded not in the reading of sources but in inventive ways of observing them. Collation is perception as historical method, an embodied visual epistemology (Drucker 2010). It uses the sensory proclivities of the historian’s own body in conjunction with a
technical apparatus to make traces of historical relations between objects apparent and perceptible. The trace is not necessarily conclusive—a misprint might not readily expose which book edition is older. But it indicates a point of interest for further study. And collation does so by turning away from—we might, perhaps, even say queering—the textuality of the book and moving closer towards its graphical, visual, material, sensory and sensuous properties. Let us tentatively explore what the principles of bibliographic collation might look like when applied to moving images. Comparing Nothing With Nothing In February 2018, I had the fortune of seeing Katsuhiro Ōtomo’s legendary 1988 animation film Akira shown in 35 millimeter at an event in New Haven. According to the organizers of the screening, it was possibly the only extant 35 mm copy of the film in North America. Along with the delightfully jarring (and deliberately chosen) English dub, the film bore the marks of time. It was aging with dignity but rife with signs of wear; it was obvious that this print had already magnetized many an audience over the years.
Fig. 2.4 Three frames from the explosion scene at the end of Akira on 35 mm film, showing faint vertical scratch marks and dust throughout. (Film scan courtesy of UCLA Film & Television Archive)
cultures that had produced it, then Akira the 35 millimeter film I saw was a memento of the material practices that had both stained and sustained its existence as a physical object. What would we find if we took this sequence of frames containing nothing at all and compared it with different editions of the film, just like a bibliographer might collate two editions of a book? Let us contrast the 35 mm print with a digitized VHS copy (Fig. 2.5), a remastered DVD edition (Fig. 2.6) and a Blu-ray edition (Fig. 2.7). All of these formats ostensibly show an empty, completely white image. And yet they all look strikingly different. In order to perceive these images as empty, we have to perform an act of sensory abstraction (or self-deception, depending on how you look at it) and ignore how they actually appear to us. After all, what is visible in the reformatted, compressed copies of Akira are greenish, baby blue and slate grey fields with different forms of noise and brightness fluctuation. The seemingly flat white surface is morphologically variegated and shimmers with subtle textures. It has historical depth.
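A digital stand-in for the collator’s flicker can suggest what such a comparison involves. The snippet below does not describe any existing collation software; it is a minimal sketch with synthetic data, in which two “editions” of the same nominally white frame are subtracted so that whatever the eye would catch as a shimmering spot in the Hinman’s viewfinder surfaces as pixels above a threshold.

```python
# A crude digital analogue of optical collation, invented for illustration.
# Frame data is synthetic here; in practice one would load stills exported
# from, say, a DVD and a Blu-ray of the same film.
import numpy as np

rng = np.random.default_rng(7)

# "Edition A": a nominally empty white frame with faint grain.
edition_a = np.clip(240 + rng.normal(0, 3, (480, 640)), 0, 255)

# "Edition B": the same frame after another round of reformatting -- slightly
# darker overall, with one simulated 8 x 8 blocking artifact.
edition_b = edition_a - 4
edition_b[160:168, 320:328] -= 25

# The collation step: subtract and threshold. What the bibliographer's eye
# would catch as a shimmer in the viewfinder shows up as pixels above the
# threshold in the difference map.
difference = np.abs(edition_a - edition_b)
variants = np.argwhere(difference > 10)

print(f"{len(variants)} pixels differ beyond the threshold")
if len(variants):
    print("first divergent pixel at (row, column):", tuple(variants[0]))
```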
Fig. 2.5 Still from the explosion scene in Akira digitized from a VHS. Uploaded to YouTube by user intel386DX on July 22, 2017. Provenance and digitization method unknown. Color shifts, slight shearing at the top and bottom of the frame, macroblocking artifacts, a black border from several reformattings and other subtle and less subtle image distortions are apparent
Instead of pointing towards Akira as a coherent filmic text, these traces make us acknowledge its multiple objecthoods. They are inscriptions of the many compression methods and formats in which the film circulates around the world, and the many use contexts and viewing practices in which it has appeared to us and others over time. I will return to optical collation and the lessons it has to offer to media epigraphy later on. But first, a discussion of traces is in order. Traces are Not Set in Stone In 2008, the Anglicist and media scholar Matthew Kirschenbaum proposed that studying minute traces of stray data left on hard drives could serve as a radically new approach to understanding electronic literature
Fig. 2.6 Still from the same scene in the 2001 “special edition” DVD by Pioneer Entertainment. This copy was derived from an interpositive film print restored and scanned by Pioneer Entertainment, which removed dust, dirt and scratches. Square blocking artifacts (“pixelation”) can be discerned faintly
Fig. 2.7 Still from the 2013 “25th anniversary” Blu-ray edition of Akira released by Funimation. This copy is based on a new scan of the same restored photochemical print as above. The image shows film grain with minor blocking artifacts resulting from its digitization. Of note is also the slightly different aspect ratio of each version
and new media (Kirschenbaum 2012). Kirschenbaum’s widely received suggestion is best understood in the context of a larger academic revaluation of the notion of trace, also evident in such recent concepts and new fields of study as counter-forensics, critical forensics or forensic architecture. Charting a genealogy of these methods would inevitably lead us to Carlo Ginzburg. Already in 1979, the Italian historian insisted that forensic methods of studying traces—in his words, “the same conjectural paradigm employed to develop ever more subtle and capillary forms of control”—could be reclaimed in progressive ways by historians (Ginzburg 1989, 123). To some, the proximity to state-sanctioned ways of producing truths gives the term “forensics” a noxious ring, but, as Kirschenbaum himself has noted, many of the research techniques discussed in his book originate in bibliography rather than criminology (Kirschenbaum 2015). I propose that the basic principles of Kirschenbaum’s way of studying traces can be generalized beyond digital documents, and that the same techniques are highly useful also in the study of moving images and their histories. Departing from the term forensics and its uneasy association with state and judicial power, I propose to call this expanded method media epigraphy. As I have noted in the introduction, I am borrowing this term from historical epigraphy, a discipline that studies ancient inscriptions in materials like stone and wood. One of the key principles of epigraphy is that writing is encountered in many ways. Reading is only one of them. Much in the same way that bibliographers treat books as physical objects rather than simply text, recent approaches to epigraphy stress that ancient inscriptions are not and were not merely “writing.” For their meaning as artifacts also emanates from their visual, architectonic, monumental, tactile and oral properties. Inscriptions are material objects embedded in space; their audiences are not solely readers but spectators and even listeners (Hamburger 2011; Chaniotis 2012; Smoak and Mandell 2019). The epigraphers Alice Mandell and Jeremy D. Smoak maintain that [w]hen an inscription was smashed by a political foe its meaning did not disappear. Instead, its fragmented words came to embody new meanings that were coordinated by its relocation within the debris of a destroyed city or the structure of a new wall or road. As social and political processes transformed the space in and around the text, audiences brought with them new horizons of expectations […]. An emphasis upon display allows us to appreciate
the inherent flexibility of meaning as different audiences paraded through the spaces in which such inscriptions were set. (Mandell and Smoak 2018, 95)
Now, the subject of this book is contemporary images and not ancient inscriptions. But Mandell and Smoak’s short account touches upon several key notions that are useful to the study of both: relocation, infrastructure, space, social and political processes—all of these influence what an inscription signifies to a given audience in a given historical moment. What this would mean for media epigraphy is that the study of traces (for example, the many compression artifacts contained in Olympia or Akira) is not simply making a form of “writing” legible, but bringing inscriptions of decay into focus in all of their material, sensory, epistemic and cultural hybridity. The trace, the film theorist Mary Ann Doane writes, is “the witness of an anteriority” (2007, 136). But witnesses can be unreliable. Similarly, the philosopher Sibylle Krämer has observed that “[t]races show not the absent, but rather its absence” (Krämer 2007, 15, my translation). An absence is a space of multiple and never fully resolvable potentials. It can be filled with concurrent and possibly contradictory narratives. As we have seen with Olympia in the introduction, media epigraphy uses traces to generate new questions rather than answers, productively multiplying and complicating ways of telling media’s past.6 For Krämer, the trace coupled the post-Saussurean realm where everything is sign and representation to the corporeal world of things. Today, in the late stages of the material turn, such coupling devices—epistemological adapters that make older scientific paradigms compatible with newer ones—are no longer necessary. But the notion of trace as an imprint remains just as useful. The uncertainty and openness of an inscription’s meaning is not only the principal starting point in epigraphy proper, in palaeography, codicology and other manuscript studies. On this point, media archaeology and media philosophy also seem to agree with literary theory, cultural techniques research and digital humanities. Crucially, the same line of reasoning had previously been explored also in queer and performance studies, where ephemera served to pursue knowledge that “does not rest on epistemological foundations but is instead interested in following traces, glimmers, residues, and specks of 6 Expanding on work by Jean-François Blanchette, Johanna Drucker (2013) has adopted a similar view of traces in her reflection on Kirschenbaum’s approach.
things” (Muñoz 1996, 10). Hence: a trace’s evidentiary value is always at least a little suspect. For cultural theorist José Esteban Muñoz writing in 1996, this methodological enterprise was closely tied to the political effort to queer and undermine traditional forms of rigor and the often exclusionary societal structures in academia that uphold them. The relevance of this position will echo in later chapters, where I explore in more depth the interstices and resonances between media epigraphy, queer studies and disability studies. Media Epigraphy: Some Preliminary Principles So, what might it mean to look at films like a bibliographer? Let us look at another example. Figures 2.8 and 2.9 are two “variants” of the same image from the screen adaptation of The Hitchhiker’s Guide to the Galaxy (2005). Like a book scholar using an optical machine, these two images can be “collated” media-epigraphically. Figure 2.8 is a still from the film’s trailer uploaded to YouTube. It shows traces of multiple reformattings. The image originated with an anamorphic widescreen aspect ratio of 1:2.39, as seen in the Blu-ray release (Fig. 2.9). In the trailer, the film footage is surrounded by a dark area, which, however, is
Fig. 2.8 Screenshot from a trailer for The Hitchhiker’s Guide to the Galaxy, directed by Garth Jennings, circulated on YouTube. © Disney Enterprises
Fig. 2.9 The same frame from the 2007 Touchstone Home Entertainment Blu-ray release of the film. © Disney Enterprises
not uniformly black. Rectangles in three slightly different shades can be discerned, indicating that the trailer has been reformatted at least three times. An examination of their resolution and aspect ratios suggests that the file uploaded to YouTube may have initially come from DV NTSC, a tape-based video compression format, and was erroneously captured into a computer file without correcting for DV’s narrow pixel aspect ratio, thus stretching the image in width slightly. This video was padded with black horizontal bars on top and bottom to fill out a frame with an aspect ratio of 4:3, common in a previous generation of television screens and computer displays. This process is known as letterboxing, and is common when widescreen films need to fit a non-widescreen medium. Finally, the trailer was pillarboxed (adding black vertical bars) into a 16:9 frame, to make it suitable for the aspect ratios displayable on YouTube around the time of its publication, and overlaid with a “Movieclips.com” watermark before being published on 19 December 2012. (The simple arithmetic behind this chain of reformattings is sketched out in a short code example a few paragraphs below.) Trailers of The Hitchhiker’s Guide available on YouTube are of universally poor quality, and this is the form in which many people in the present will first encounter the film. By the standards of conventional film studies, this is an entirely inadequate presentation, with several generations of compression losses sedimented on top of each other. But I would argue that these losses are not without their own worth. In the palimpsestic black margins that surround the “content,” traces of this object’s past are preserved. And perhaps even more interesting than the specific history of this individual video are the hints that they divulge about larger moving image culture: the circulatory workflows that film trailers were subjected
to and the networks such paratextual objects traversed during the promotion and distribution of mainstream cinema in the early years of accessible online video—and, as the likely use of DV indicates, also in the final years of tape-based consumer media. As I and others have argued elsewhere, understanding the conditions under which films and videos are reformatted and recompressed may be key in grasping and fully appreciating how moving images circulate around the world, especially in contexts and spaces outside of the typical viewing situation in the cinema or the home (Fahle et al. 2020; Jancovic 2020). Compression and formatting usually involve both creative and destructive interventions into moving images, oftentimes performed by informal communities of practice like amateurs, fans, collectors or pirates (Hilderbrand 2009; Lobato 2012). Reformatting is a fundamental practice within visual culture writ large, accompanying not just contemporary videos. Already in the late nineteenth century, the art historian Jacob Burckhardt complained about the rampant reformatting of paintings, whose owners often had them ruthlessly slashed, trimmed and curtailed to adjust the dimensions in order to better fit the rooms in which they were exhibited (Müller 2014; Niehaus 2018). Burckhardt considered this a horrifying, barbaric practice, but the interesting lesson is that we can draw many parallels between compressions and reformattings taking place in contemporary digital audiovisual environments, and much older historical media practices and art-historical customs. Such forms of compression and reformatting have only received modest scholarly attention, but media epigraphical methods may better equip us to tell richer, fuller histories. Reformatted objects surround us everywhere. Most images, texts, and recorded sounds we encounter undergo dozens of format changes throughout their existence. Online videos are downscaled for mobile devices and upscaled for 8K television sets; and films metamorphose from raw video to intermediate editing formats to DCPs for the cinema or AV1 files for YouTube. Streaming platforms like Netflix reformat their catalog dozens of times for different devices and re-encode large portions of it whenever new compression technology is developed (Jancovic and Keilbach 2023). Our messaging apps automatically and covertly convert all the animated GIFs we send into MP4 videos to prevent us from inundating their servers with a compression format from 1987. Our handheld devices diligently monitor their own orientation in space and automatically reformat the screen from portrait to landscape. Television sets often refuse to play video files because of format incompatibilities. Such
irritations of modern life, in turn, sustain the online cottage industry of format converter software and services. Unruly formats, indeed. The innumerable media formats and the many competing compression algorithms cause technical issues in the broadcasting industry, incompatibilities for consumers in the home and technological nightmares for archivists working with audiovisual cultural heritage. As often as communities of people try to bring some order into the chaos, new format standards proliferate. History offers numerous examples: the medieval efforts to regulate paper sheet and mold sizes, or analogous attempts to standardize paper in Republican France around 1800, or the early twentieth century efforts to create “world formats” for all everyday objects (Krajewski 2006), or the thirty-odd years it took to somewhat standardize film camera and projector apertures. All of these format standardization initiatives ended with mixed results or, to put it more mildly, only achieved very gradual and approximate effects. At times, reformatting can even induce a complete effacement of media objects: There is an anecdote according to which a furious projectionist once hacked a print of Hans-Jürgen Syberberg’s opera film Parsifal (1982) to pieces because he was unable to correctly adjust its aspect ratio (Cherchi Usai 2000, 61). Formats are intimately tied to personal memory and affective attachment (Heller 2016). Fans get irate when a cherished TV show or favorite film is released on Blu-ray or published on a streaming service in the “wrong” format. But format changes are also generative; they frequently add something that did not exist before. The black slabs that frame the Hitchhiker's Guide trailer or the blurry aureoles that “correct” viral vertical videos to make them suitable for YouTube—these prostheses our culture grafts onto things in order to make them “have” a certain format are not simply empty or redundant spaces. They are evidence of procedural frictions across aesthetic, technological, and economic regimes. As media scholars have argued, every migration is a mutation (Chun 2008; Kirschenbaum 2012). A reformatting is therefore never just a repackaging into a new container, but always a refolding. Like a sheet of paper that remembers every fold, compression formats are not transparent vessels but impart the content they frame with faults and creases that are always more than just technical artifacts. Bringing traces of reformatting into the focal point of media history can support inquiries into the gestures and practices that make media culture possible.
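To make the arithmetic behind the trailer example reproducible, the nested black bars of Fig. 2.8 can be recomputed in a few lines of code. The following is a minimal sketch rather than a reconstruction of the actual YouTube file: the 720 × 480 frame size and the 10/11 pixel aspect ratio are the nominal parameters of DV NTSC, the target frame sizes are assumed for the sake of the example, and the second padding step ignores pixel aspect ratio for simplicity.

```python
from fractions import Fraction

# Nominal DV NTSC parameters (assumed here for illustration).
DV_WIDTH, DV_HEIGHT = 720, 480
DV_PAR = Fraction(10, 11)  # pixel aspect ratio of 4:3 DV NTSC

# Displayed with its proper pixel aspect ratio, the frame has this shape;
# captured with square pixels instead, it comes out slightly wider.
correct_dar = Fraction(DV_WIDTH, DV_HEIGHT) * DV_PAR   # 15/11, about 1.36
stretched_dar = Fraction(DV_WIDTH, DV_HEIGHT)          # 3/2, exactly 1.50
print(f"intended aspect ratio: {float(correct_dar):.2f}")
print(f"aspect ratio without correction: {float(stretched_dar):.2f}")

def pad_to_ratio(width, height, target_w, target_h):
    """Smallest canvas of the target aspect ratio that contains the frame,
    returned as (canvas_w, canvas_h, side_bar, top_bottom_bar) with the bar
    thickness given per side, rounded down."""
    target, current = Fraction(target_w, target_h), Fraction(width, height)
    if current > target:   # frame too wide: letterbox (bars top and bottom)
        canvas_w, canvas_h = width, round(width / target)
        return canvas_w, canvas_h, 0, (canvas_h - height) // 2
    else:                  # frame too narrow: pillarbox (bars left and right)
        canvas_w, canvas_h = round(height * target), height
        return canvas_w, canvas_h, (canvas_w - width) // 2, 0

# A 2.39:1 picture letterboxed into 4:3, then the 4:3 frame pillarboxed
# into 16:9 (the two generations of bars visible in Fig. 2.8).
wide_w, wide_h = 720, round(720 / 2.39)
print(pad_to_ratio(wide_w, wide_h, 4, 3))   # (720, 540, 0, 119)
print(pad_to_ratio(720, 540, 16, 9))        # (960, 540, 120, 0)
```

Nothing in this sketch is specific to the trailer; the point is only that the layered margins of Fig. 2.8 follow from simple, reconstructible arithmetic, which is precisely what makes them legible as traces.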
When something is formatted and reformatted, it yields to the politics of the format. Format changes can let a little bit of the filmic “off”—that Lacanian Real of film production, like an intruding microphone or an unintended exposure of nudity—transpire into the screen.7 Format changes can even become permanently inscribed into the hardware itself. The three-colored phosphors of a cathode ray tube (CRT) decay at different rates for each color. This undesirable quality of CRT screens has led to little pieces of software known as screensavers. As, again, Mark Schubin points out, if a CRT screen shows widescreen films padded with black bars on top and bottom (like in Fig. 2.8) for long periods of time, its blue phosphors will decay faster than the other colors. When the full screen is then filled with an image, a yellow stripe will overlay the widescreen area. A screen with such a permanent mark is, objectively speaking, damaged. But just like bibliographers can decipher the torn margins of a page, this damage is also a form of memory. It offers a glimpse into the types of spectatorship the screen made possible: the material of the glass tube “remembers” that it has been used for watching a whole lot of films rather than television programming, to the point that the widescreen formats of cinema were burned into the TV screen. Microhistories of Media Microhistory—a strand of historical inquiry that focuses on small units of research—in the Italian reincarnation that Carlo Ginzburg represents is a most useful approach to traces. Ginzburg highlighted not only the interweaving of micro and macro domains of history, but in his later and more autobiographical writings also stressed how accidental encounters with anomalous material have led him to richer understandings of the past (Ginzburg 1989, 2012). In her historical dissection of the year 1991, Alexandra Schneider has demonstrated just how productive similar approaches can be when applied to the domain of media and technology: strange coincidences across time and space are unearthed that suggest still unaddressed vacillations of technological history, seemingly insignificant events are unmasked as symptoms of major cultural ruptures, memory connects to history in heteroclite ways (Schneider 2014). Inspired by both Kirschenbaum’s forensics and Ginzburg’s microhistorical legacy,
7 Schubin (1996) gives numerous concrete examples.
throughout this book, I will flesh out media epigraphy as a method for approaching media’s past through small traces. Let me provide another example of what I would consider a media-epigraphical approach, this time from the online space rather than from history or media theory. In a YouTube video on his channel Technology Connections, Alec Watson discusses the Western Electric Model 500 telephone. Watson replays a brief scene from Terminator 2: Judgment Day (1991, directed by James Cameron), in which we see and hear a telephone ringing. The Model 500 was ubiquitous in US households throughout the twentieth century. Watson notices that in this particular scene, the telephone in the film makes the recognizable Model 500 sound, a major third produced by hitting two brass bells of slightly different sizes. The device actually depicted on screen, however, is a Trimline phone with a single bell which could not have produced such a sound. This indicates that the sound was introduced in post-production rather than recorded on set. Watson further points out a noticeable pitch fluctuation in the phone ring. Measuring its peak-to-peak time at roughly 1.85 seconds, he observes that it closely matches a 33⅓ rpm vinyl record, which spins at 1.8 seconds per revolution (60 ÷ 33⅓). If the spindle hole of a phonograph record is drilled off-center or the stamper is misaligned during pressing (which was not uncommon for older and cheaper records), the groove will wobble periodically during playback, producing pitch modulations exactly like the one present in the sound of the ringing telephone. This leads Watson to conclude that the sound effect used in Terminator 2, a film made well into the age of digital sound recording, originated from a vinyl disk, possibly archived from the sound effects “libraries” that film studios stored on LP records and often reused. Watson’s impressive deductions are worthy of a detective novel, but they are also reminiscent of the methods bibliographers use when they work on reconstructing book histories. Noticing a tonal fluctuation and an obscure engineering detail not only leads Watson to a convincing insight about the production history of a single film, but to a lush landscape of technologies and practices: a variety of professional recording formats and household telecommunication devices interact, technical standards clash with the material realities of LP manufacturing, analog recordings linger on in digital environments for cost savings and convenience, and cinema’s realist aspirations waver under business imperatives of production efficiency. Instead of a linear history and clear separation between analog and digital media, a small trace opens the path to a
topography extending in all directions, with interesting features at every level of detail. A media epigraphy is attentive to this type of fleeting detail beyond the representational content of the image. At its heart is an interest in the thingness of traces. And in its focus on materiality, media epigraphy resembles media archaeology. Wolfgang Ernst’s proposal for a materialist media archaeology of noise, for example, unfolds along somewhat similar lines. Ernst has previously argued against the digital restoration of audio material, endorsing instead the idea that scratches and distortions in recordings are part of what needs to be preserved (Ernst 2013, 60). It is this Proustian mémoire involontaire of the inscription apparatus, which, according to Ernst, is accessible to media-archaeological inquiry. But there are important differences between this approach and what I am calling media epigraphy. While I agree with many of Ernst’s premises, including that the surplus in machinic inscriptions can be made to bear upon the study of culture, I am less convinced about the degree to which technical media alone can make the past knowable. Approaches that give primacy to machines and their precision also tend to underestimate the perceptual work necessary to mobilize the inscriptions made by them. Technical inscriptions, after all, are always already predicated on various gestures of compression, folding, filtering and ordering as historical events in themselves (Kromhout 2017; Matos 2017), and therefore even unintentionally recorded noise depends on hierarchies of meaningfulness that preexist the operations of every technical medium. An engagement with traces is not simply an act of uncovering “frozen media knowledge embodied in engineering and waiting to be revealed by media-archaeological consciousness” (Ernst 2013, 182), a knowledge already available to hand and overlooked by other approaches. Instead, I believe that traces require a certain bodily attunement. They become knowledge only through bodily acts of sensing and reinterpretation, requiring embodied techniques of looking and listening. Even under the scrutinous media-archaeological eye or ear, traces are not an indexical path to be followed to some submerged but ultimately discoverable past. Traces are, to paraphrase Drucker (2013), performed rather than received. A queer performance, rather than “a straight look into the archive” (Ernst 2013, 60). We can say that traces neither pre-exist ontologically nor are they simply fabricated out of thin air, but occur in encounters with media. Encouraged by calls for innovative media archaeological approaches (Fickers and van den Oever 2013; Strauven 2019), media epigraphy thus
goes beyond certain media archaeological currents, refusing to discount phenomenality and bringing into view not only objects and discourses, but also the “verbs” of media theory (as Cornelia Vismann called them): procedures, practices, techniques; means of manipulating things, like compression or reformatting; and ways of sensing and seeing. Properly understanding such techniques may require the use of a number of tools. Software like MediaInfo, for example, can easily be used by media historians to gather clues about the provenance and circulation of objects like my DVD of Olympia. Signal processing applications like MATLAB can be very useful to the study of media in visualizing how techniques like the discrete cosine transform, a key compression algorithm of digital culture, manipulate image information and translate it into mathematical frequencies. Software libraries like FFmpeg can be utilized to tinker with compression formats. Such experiments with algorithms, filters, converters, playback methods and formats are useful initial steps in doing media epigraphy. (One such small experiment is sketched in the code example at the end of this section.) They can help illuminate the material and infrastructural processes that underlie moving image culture and bring forward points of interest for further study. Much like traditional film history and analysis, media epigraphy still “watches” films and compares versions and editions, and uses some of the same sources as classical media historiography, media archaeology, as well as science and technology studies. But with the help of various tools, it does so in search of different outcomes. It pays attention to those quirks of moving images—errors, faults, distortions—that many historical approaches have tended to either discount as incidental and therefore irrelevant glitches of playback and projection, or, contrarily, glorify into overbearing aesthetic theories of style and subversion. The challenge, as Peter Geimer has argued in relation to the interpretation of accidental photographic mistakes, is to understand images in both their iconic capacity as pictorial representations of the world, and in their evidentiary capacity as traces of an imaging process (Geimer 2007, 2010). By calling my approach “epigraphy,” I want to signal that it carries the heritage of media-archaeological thinking and remains committed to it, but also looks in the direction of other scholarly traditions, inspired in particular by queer studies’ invitation to “squint, to strain our vision and force it to see otherwise,” to quote Muñoz once more (2009, 22). Media epigraphy might thus be counted among those methods of research that Anna Tsing has called “arts of noticing” (Tsing 2015, 37). Importantly, in my usage of this term, traces can be ephemeral and may
not even exist outside of perception at all. Jarring phenomena like lag, asynchrony or shearing artifacts in video resulting from incompatible frame rates are all transient. They do not leave permanent traces. They do not reside in the images but are “inscribed” as a form of disturbance only when they are presented or perceived; and they might not be perceived by everyone at all times. For example, some people hardly notice image flicker, a side-effect of temporal compression, whereas others might quickly get dizzy or even get seizures from it. Some traces emerge from very particular configurations of recording, playback and reception hardware and thus direct our attention to questions of infrastructure, standardization and compatibility. Others require that our bodies be sensitive to specific sensory stimuli and thus alert us to questions of corporeality. An epigraphical approach to media therefore acknowledges that traces are not inherently obvious as such, but require interpretive manipulation and specific techniques of looking. What all of this means methodologically is paying close attention to modes of circulation and practices of spectatorship, including one’s own, and creating room for the ambiguities of perception and embodiment—and thus, for body politics. This is to say that I conceive of media epigraphy as a corrective to the political inertia that both media archaeology and science and technology studies have been rightfully criticized for (Bowker 2013; Mills and Sterne 2017; Skågeby and Rahm 2018; Törneman 2019).
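As an illustration of the kind of experiment described above, the following minimal sketch (written in Python with NumPy and SciPy as a freely available stand-in for the MATLAB workflows mentioned earlier) applies a discrete cosine transform to a single synthetic 8 × 8 block, discards most of its frequency coefficients, and measures what the reconstruction loses. The block and the crude quantization mask are invented for the sake of the example and are not taken from any of the files discussed in this book.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """Two-dimensional type-II DCT, the transform at the core of JPEG and MPEG intra coding."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    """Inverse two-dimensional DCT."""
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

# A synthetic 8x8 luminance block: a soft gradient with a hard vertical edge.
ramp = np.linspace(0.0, 255.0, 8)
block = np.add.outer(ramp, ramp) / 2
block[:, 4:] = 255.0 - block[:, 4:]

coeffs = dct2(block)

# Crude "quantization": keep only the lowest-frequency coefficients,
# roughly what aggressive compression does to every block of an image.
mask = np.zeros((8, 8))
mask[:3, :3] = 1.0
reconstructed = idct2(coeffs * mask)

# The residue is the trace that the compression leaves behind.
error = block - reconstructed
print(f"peak absolute error: {np.abs(error).max():.1f}")
print(f"mean absolute error: {np.abs(error).mean():.1f}")
```

Run over the blocks of an actual frame, the same procedure shows where blocking artifacts of the kind discussed in this book originate; the point of the sketch is simply that such traces can be produced, measured and compared with very modest tooling.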
Failure Traces left by compression processes can also be understood as minute forms of failure. After all, compression is considered successful when it remains invisible and inaudible, so every trace it leaves behind is like a failure to achieve this goal. Throughout the twentieth century, failures, errors and imperfections have become one of the leading preoccupations of Western thought. An undercurrent of reasoning on failure has existed at least since Aristotle. But during the twentieth century, failure and its close cognates—rupture, crisis, accident, parasite—were turned into a major focus of study, uniting fields as diverse as philosophy, psychoanalysis, media theory, sound studies, sociology and queer studies. Since the historian Howard Mumford Jones demanded in the late 1950s that the history of technology account for more than its success stories, failure has gradually gained tremendous academic momentum. Jussi Parikka (2013) observes that Gilles Deleuze,
Bruno Latour and Martin Heidegger, three philosophers who otherwise share very little with each other, find common ground in their interest in mechanical failure. The nineteenth and twentieth centuries offer myriad possible pathways for historicizing the fascination with failure. We might also turn to Emil Kraepelin or Sigmund Freud, and their pathologies of catachreses of speech, writing, memory or action; or to Michel Serres, an influential thinker for whom parasitical breakdowns intersected with the concept of noise. Noise, with its two permutations in sound studies and in Claude Shannon’s communication theory, orients us back towards media theory and computation: Shannon, along with Norbert Wiener, studied noise, and their work formed two central branches that fed much of Friedrich Kittler-influenced media theory. With Kittler we return to the conceptual underpinnings of media-archaeological scholarship—recall Erkki Huhtamo and Parikka’s (2011, 207) motion towards “rethinking media archaeology in terms of failure and noise at the core of new technologies,” or Wanda Strauven’s (2015) characterization of media archaeology as a “noisy” praxis. In music, literature and the pictorial and performing arts, the failure to communicate, to signify and to represent was raised not only to a dominant thematic concern, but also to a generative strategy in itself. Here, we could date the beginnings of this cultural embrace of imperfection and mistake-making to the 1920s, when Dadaism and the Futurists were coming of age. But its forerunners can be discerned already in art criticism of the late Romantic period. The art critic John Ruskin, for example, praised “irregularities and deficiencies” in art as “sources of beauty” (1890, II:172). We could append a few other nouns to the litany of failure: error, malfunction, accident, breakdown, disturbance, interference, glitch, noise, imperfection, disruption, defect, disorder, fault… Though all these words convey slightly different scales, textures and levels of detail, they all “nonetheless refer to a common epistemological model, expressed through various disciplines that are frequently linked by borrowed methods or key terms,” as Carlo Ginzburg (1989, 118) wrote in another context. These terms proliferate in the scientific vocabulary of late, indicating an unresolved scholarly desire to understand the affective magnitude we seem to feel when things break down and are pushed out of joint. Many of those who study failure seem to agree that it is a site of revelation; that when technology breaks, something previously hidden comes to light. But they do not always agree on what it may be. Film and media scholars and art historians have been enchanted by failures. There is a vast
and still thriving body of research on glitches, errors and faults in audiovisual media, and on obsolete, failing and decaying technology. But these failures have largely been understood in aesthetic terms only. That error, decay and failure can be both beautiful and aesthetically innovative is undeniable. Many artists have built their careers around this triad, and the cultural obsession with “precarious aesthetics” (Stenstrop 2013) and imperfection (Kelly et al. 2021)—seen in glitch filters, the continuing fashionableness of retro media or in the resurgence of analog formats like MC and VHS tapes—confirms their popular appeal. Compression processes, too, can sometimes create striking and otherworldly imagery. But aesthetic theories of imperfections and decay tend to exalt them into symbols of abjectness and overemphasize their transgressive, disruptive and subversive qualities. By doing so, they relegate noise and decay into a domain of alterity in which the value of failure stems from its oppositionality to success and perfection, not from its function as an integral and constitutive element of mediation itself (Kromhout 2017). The outlines of a more nuanced aesthetic engagement with traces in moving images are being shaped by some recent work that prioritizes embodiment. Drawing on the phenomenological tradition of Maurice Merleau-Ponty and continuing the pathways developed on those foundations by Vivian Sobchack, Laura U. Marks and others, Shane Denson (2020) has addressed the conditions of contemporary post-cinematic experience as they are shaped by digital infrastructures through computational processes including, inter alia, compression. In the final chapter of his book on cinematic movement, Jordan Schonig (2022) develops a similar phenomenological sensitivity towards a range of digital compression glitches. The story I tell here brushes against some of the same themes. Much like these authors, my starting point is a close conversation with the moving image, driven by an attraction to those of its features at the cusp of noticeability that hint towards its technological workings. My epigraphy of compression diverges, however, in its treatment of temporality and of the digital. As I have argued in relation to formats, when viewed as a set of techniques and practices, compression seems to relativize some of the ontological and phenomenological claims made regarding digital cinema. Compression techniques like interlacing are itinerant and often suggest hybridizations, parallels and transversalities between digital and analog media, rather than any strict divisions. Moreover, while much scholarly attention has been devoted to the (micro)temporality of digital images, compression, without reverting us to an exclusively space-centric
paradigm, allows us to understand how spatial and temporal manipulations are not only co-implicated in visual culture, but often mutually indistinguishable. But more on that in the next chapters. Yet compression can offer us more than an impulse to reflect on the aesthetics and phenomenology of failure and decay. It can serve as a prism through which to narrate new histories of media. In itself, this is, obviously, not a new idea at all. Many media theorists have recognized that noise, decay and various forms of encoding and formatting preserve clues to an object’s past, as well as to larger media practices (Noble 2013; Ernst 2014; Kane 2014; Schlesinger 2014; Marks 2017). But, bar some exceptions, this idea almost universally remains articulated only as a hypothetical possibility. Exactly what pasts and practices would unfold is often left unclear, and approaches that methodically pursue errors, failure and decay as history are uncommon. With the premise that they have epistemic value, the next chapters will mobilize such traces towards a historical narrative of compression techniques that crisscrosses scientific communities, disruptive phenomena at the periphery of image culture, and unusual media practices at the limits of vision. Technological Failures in Film Art A scratch, a speck of dust, some irregularity, a fleeting moment of pixilation, an inconspicuous vestige of film shrinkage, flicker. These familiar little aberrations are almost inevitable wherever film technology is at work. They can accumulate in an image as it becomes increasingly embedded in the world through time. Schonig (2022, 154–161) makes a case against approaching digital compression glitches through theoretical frameworks developed around technological failure, because they generally do not interrupt the viewing experience in the same way that buffering and other more disruptive breakdowns do. The compression glitch, argues Schonig, rather transitions viewers into a heightened perceptual mode of engagement with the moving image in motion. But it is precisely in this sense that such traces call attention to themselves: by both inciting and mocking the human fetish for continuity of surface and form and by failing the promise of transparency expected of any culturally respectable recording medium. Minutiae like these are easy to miss once one has learned to miss them. As a shorthand sign for the materiality of mediation, they lend themselves well to being aestheticized. In the hands of Andy Warhol or Danny
Williams, the strobe cut becomes a style.8
8 The strobe cut is a visible and audible flash produced as an artifact of in-camera editing. On strobe cuts and other audiovisual disturbances in Warhol’s and Williams’s work, see King (2014).
Such traces of the machine ground media objects in history (Baron and Goodwin 2015, 70). You don’t need to know anything about how the VHS format works to recognize the “VHS look” and feel nostalgic about it: the high contrast, low sharpness, overexposure-prone highlights, the colorful borders around objects, hue shifts to blue or red and sometimes green, the visible horizontal lines caused by interlacing, the tracking errors that show up as tiny white dots. Artists sometimes harvest such visual debris with the loving affection of a collector. Fluxus and related experimental art movements reveled in creative strategies that provoke and put on display what we might now call glitches. Nam June Paik’s Zen for Film (1965), a silent meditative piece for 16 millimeter, is a wonderful example. Zen for Film is a rare film work by Paik, who is better known for his video art. Its subject is a distinctly filmic form of memory: the film consists of only a blank strip of celluloid and, much like the explosion sequence in Akira, shows nothing but transparent leader with any damage or residue that might have accumulated on it with each progressive projection. This kind of iterative decay still remains a favored artistic method, even many generations of media formats and imaging technologies later. Ian Burn used a similar process in 1968 with Xerox Book, which I like to think of as a literary adaptation of Zen for Film. In it, Burn iteratively photocopied a blank sheet of paper one hundred times, preserving progressively more toner speckles and errant lines, and showing that a copy is never just a copy. Virgil Widrich, an Austrian film director, reanimated the same motif in his 2001 semi-animated short film Copy Shop, which was made by printing out every single frame of the film on paper with a printer, copying it, and scanning the roughly 18,000 frames back into a film. The protagonist, a copy shop worker, accidentally ends up creating real-life copies of himself, and as the relentless copying procedure spirals out of control, the material support of the film (which, in this case, is paper, rather than actual film) also becomes increasingly inscribed with errors, folds and tears. A similar procedure is at the center of digital works like Cory Arcangel’s self-explanatory Iron Maiden’s “The Number of the Beast” compressed over and over as an mp3 666 times (2004) and his Untitled (After Lucier) (2006),
or Julie Talen’s Sitting in a Room (2008). The latter two are both works of video paraphrasing Alvin Lucier’s 1969 sound work I Am Sitting in a Room. In both, digital or analog recordings are compressed or re-recorded over and over again in order to make visible the so-called generation loss that images and sounds “suffer” when they are repeatedly subjected to compression. At other times, machinic traces can acquire meaning even if they are incidental to the work. In the photochemical release of Derek Jarman’s last feature film Blue (1993), whose image shows only a single shot of blue color throughout the entire film, perturbations on the emulsion like dust and scratches function as the sole index of motion and time. Once noticed, such apparitions have to be either explained away or imbued with meaning. These “dramas of vision,” in Vivian Sobchack’s lovely turn of phrase, are not present in the later digitally-generated version of the film (M. Smith 2008, 121). Their absence is subtle, but marks the hybridity of Blue’s various formats and their technological heritage.9
9 On dust as a filmic signifier of temporality and on the tension between the analog and digitally-generated versions of Blue, see also Remes (2015).
But in the absence of artistic treatment, such sediments of time are often considered a loss. Traces of compression are viewed negatively as undesirable damage or deterioration. Film archivists try to prevent them, film restorers to revert them and many a film fan finds them ugly and distracting, preferring to view copies that are “pristine.” But, provided with the right ways of listening, such traces also speak of the provenance and historicity of things. Not Glitch Some might call traces like these glitches. There are two reasons why I will be avoiding this term. Firstly, “glitch” is too narrowly tied to the contemporary experience of computer media. The present-day situated knowledge about which things glitch and how they tend to do so has bearing upon our capacity to imagine what a glitch can be, constricting the full understanding of media-technological traces. Secondly, its association with the clichéd aesthetics of so-called “glitch art” recalls a limited set of formal cues that is too blunt to serve as a conceptual instrument with analytical and historical value. As many commentators—including glitch artists—have observed, past its zenith sometime around 2007, the Glitch Art
movement stopped creating art with glitches. Instead, it aestheticized errors in compression algorithms and corruptions in data, and conventionalized procedures to replicate them (Vanhanen 2003; Moradi et al. 2009; Menkman 2011a, b; Kane 2014). Negating the decidedly political and critical approaches of early glitch artists, the commodification of glitch aesthetics contained contingency and therefore severely curtailed its capacity to produce the genuinely new and unexpected. The release of Kanye West’s music video Welcome to Heartbreak in 2009 is considered a watershed moment for glitch, both as artistic movement and concept. The video famously utilized inter-frame datamoshing, a technique popular in glitch art circles, which intentionally introduces MPEG compression errors to elicit a very recognizable visual effect—a mesmerizing, viscous, oily, somewhat blocky and oversaturated transition between cuts and scene changes (Fig. 2.10).10
Fig. 2.10 Still from “Welcome to Heartbreak” music video (2009) by Kanye West feat. Kid Cudi, directed by Nabil Elderkin. © 2009 Roc-A-Fella Records
10 Schonig (2022) gives an in-depth analysis of datamoshing aesthetics.
Although several similar music videos had been made before and since, West’s celebrity status and the timing of the release meant that this particular pop-cultural object was construed as “mainstream media pilfering of independent artistic production” (Seventeen Gallery 2009, n.p.). The video appeared shortly before the second solo show of Paul B. Davis, one of the prolific early glitch video artists who had been using the same principle in his work. As the curatorial statement introducing the show at Seventeen Gallery in London relates in Davis’s blunt words: “It fucked my show up… the very language I was using to critique pop content from the outside was now itself a mainstream cultural reference” (Seventeen Gallery 2009, n.p.). It may seem naïve to believe that twenty-first century art could operate outside of and differently from popular culture, let alone that such an outside exists. But since artistic method and precedence were at stake in the debate, it is film-historically pertinent to point out Davis’s and his contemporaries’ own formal indebtedness, conscious or not, to analog and hybrid film works like Bill Morrison’s Light is Calling (2004, Fig. 2.11) or Peter Delpeut’s Lyrisch Nitraat (1991). Both of these films use dissolve transitions based on blending individual decaying frames, a technique visually uncannily similar to inter-frame datamoshing. In Davis’s case, however, the subsequent disenchantment ultimately resulted in a renewal of artistic practice, leading to an arguably more vibrant creative output and to innovative works like Codec, as discussed in the previous chapter. The problem with glitch, then, is that the once fashionable form of visual alterity that has accreted around this term is ultimately in conflict with true errors, mistakes and decay, which are ornery and tend to appear unprovoked, randomly and at inconvenient times. Rather than attempt to recuperate glitch as an instrument of thought, I believe “trace” remains the sharper conceptual tool. This is not least because in the case of many media, analog as well as digital, there exist traces of technological processes that are neither defects, errors, nor glitches. The deterioration of photochemical film or magnetic tape leaves unique textures and patterns on the material. Film and tape get moldy, dry out, hydrolyze. Traces of these processes can partially divulge how the material had been stored and handled, what it is made of and how it chemically responded to the environments it had been placed in. Nitrate film naturally shrinks over time and thus changes its format—it compresses and reformats itself, becoming incompatible with standard projection hardware. Traces of shrinkage can manifest as cracks and wrinkles in the
Fig. 2.11 Still frame from Light Is Calling (2004), a film by Bill Morrison. Courtesy of Hypnotic Pictures. Original photography from The Bells (1926), directed by James Young, shot by L. William O’Connell
emulsion or as visible indications of narrowed sprocket holes when the film is optically reprinted to another copy. Although such marks of decay can be thought of as failures of preservation, they are not errors. On the contrary. Decay is the most natural state of film, video and data. It is a reminder of the material, phenomenal and environmental conditions of their existence. In his short film Machine to Machine (2013), Philippe Rouy assembled footage filmed by drones and robots inside and around the highly radioactive carcass of Fukushima Daiichi Nuclear Power Plant after it had been damaged by the massive 2011 Tōhoku earthquake and tsunami. The images frequently stutter and decompose into incoherent blocks of data; oversaturated speckles of color sparkle across the surface (Fig. 2.12). These are the visible results of the lethally high radiation triggering the image sensors, damaging the storage media and tripping up the
Fig. 2.12 Stills from Machine to Machine (2013) by Philippe Rouy. (Images courtesy of the artist)
compression algorithms. Disastrous human environmental failures are thus inadvertently but inevitably inscribed as technical failures in the image. The colorful radioactive glitter, much like the vinegar smell of decaying film or the geometrically pleasing patterns on a broken LCD screen, is a trace of how the images were made and at once a trace of how
they are being unmade. They obliquely tell of their environments and of the actions involved in making and destroying them. The remnants of compression that I analyze in this book are not always such an objectively legible entity or phenomenon. Sometimes, their presence as traces needs to be teased out. Their discernibility may also fluctuate over time as visual culture drifts and both technology and taste change. Interlacing, an early video compression technique discussed in the next chapter, creates jagged artifacts like those seen in Olympia, which are widely considered unpleasant and objectionable on today’s LED displays. But the same method provided for decades entirely satisfactory results on cathode ray tube screens. For my purposes, the traces of compression are thus produced and preconditioned by the interpretive work necessary to observe them. Micrometer, Slide Rule, Computer Let us return to bibliography one last time. The history of books has taught us that compression is a physical and material process that long preceded digital and even analog electronic media. Bibliography has taught us that historical research can be practiced in ways that utilize and are attuned to bodily perception, and with the right tools and knowledge, damage to an object can be constructively interpreted as a trace of its past. But long before digitization and forensic tools became common, bibliography itself had also developed and relied on complex practices of reformatting. A closer look at optical collation and its larger media-technological milieu shows a captivating landscape in which bibliography and textual criticism, two historical disciplines, begin to look more like amateur engineering and inventorship. In the late 1960s, the book scholar Gerald A. Smith was reporting on his failed attempt to convince Xerox Corporation to modify the Xerox 914, the first commercially successful photocopier, to allow the copying of old books. Books resist reformatting through their very folds. When placed open on a scanner, the text near the fold, where the paper arches away from the glass, will appear curved. This unmistakable trace of reformatting is still painfully familiar to us in the age of scanned PDF files. Distortions like these make optical collation very difficult or impossible. 11
11 The most elaborate of today’s digital scanning and optical recognition methods solve the same issue with infrared cameras or lasers that detect the bend of the paper and correct it computationally.
In an attempt to standardize and simplify the copying protocol, Smith petitioned Xerox to extend the scanning surface of the copier all the way to the edge of the machine, so that books could be opened halfway and laid over the edge of the glass at a right angle, allowing each page to be scanned flat. “If only ten or fifteen modified 914’s were placed in the major libraries of the world, the entire business of reproduction of rare books would be improved and standardized. All copies of the same edition would be of the same size, and any poor man could easily buy and collate them […]” (G. A. Smith 1967, 111). Xerox refused on the grounds that the market of customers interested in such a copier was negligible in proportion to the costs of the modification, and that they did not want to facilitate copying books in general. In response, the Anglicist George R. Guffey subsequently proposed modifying the microfilming method instead. Guffey’s solution involved a series of reformattings: instead of photocopying a book directly, he suggested first microfilming it unfolded halfway, which could be achieved somewhat more easily than photocopying directly, and then photocopying the microfilms (Guffey 1968). A significant portion of bibliographic and textual criticism work in the twentieth century has been done from microfilm reproductions of books, which facilitated access and circulation much like digitizations do today. And as digitizations do today, the reformatting to microfilm resulted in format incompatibilities and irritations. With the advent of xerography, the chain reformatting from book to microfilm to xerocopy became a common and useful strategy for bibliographers to lay their hands on and collate books that were otherwise difficult to access. But it was not free from difficulty. I quote one example at length to demonstrate the frustrations.
When I began working on the text of the tenth volume of the California Edition of The Works of John Dryden, I was faced with some fifty microfilms of three of Dryden’s plays; these microfilms had been purchased over a period of fifteen years. The reduction ratios varied from 17× to 7×. Some of the microfilms contained one page of text per exposure, others two pages per exposure. Some of the plays had been filmed with the lines of text parallel to the length of the film, others with the lines perpendicular to the length, and a few had been filmed with the lines at an approximately forty-five degree angle. Since the UCLA Library had recently purchased a Hinman Collator […], I decided to have Xerox Copyflo prints made from these microfilms. I handed my mixture of films to the local Xerox Copyflo machine operator, asking him to supply me with prints of uniform size. His first
reaction was depressing, to say the least; he felt that the necessary compensatory photographic manipulations would be so expensive it would be better to refilm all the books before having Copyflo prints made. After a closer examination of the microfilms, he decided that one of the film analysts employed by his company might, for an additional fee, be able to bring some order out of that chaos. The analyst (armed with micrometer, slide rule, and computer) eventually worked out most of the problems involved. But the process was costly from the standpoint of time lost, as well as money spent. In spite of the analyst’s painstaking work, about 15 per cent of the finished Xerox Copyflo prints were not usable. (Guffey 1968, 237–238)
Sources from the heyday of optical collators thus reveal the extent to which bibliographical knowledge-making in the 1960s and 70s depended upon a network of optical devices, microfilmers and copying services, formatting, compression and image manipulation skills, and how this network grappled with the materiality of the objects operative within it. Using new optical machines to aid in book collation made errors historically productive, but also introduced new errors and failures, which then provoked new attempts at format standardization—performed with instruments including micrometers, slide rules and early computers. As we will see later on, this process, in which visual aberrations and disturbances give an impulse for technological innovations but also continuously transform and complicate scientific practices, repeatedly plays out in fields as diverse as mathematics, television engineering, and neurology. Bibliographers themselves not only supplied inventive ideas, but often reached toward the toolbox to build or improve custom collators with improvised materials like adhesive tape, dowel sticks, and a variety of projectors, lenses, hand cranks, shutters made out of cardboard, and other filmic elements (Dearing 1966; G. A. Smith 1967). I am struck by this methodological inventiveness and see it as a model for media epigraphy. Optical collation diversifies historical research into an activity that utilizes the bibliographer’s sensorium as well as her machine-building skills. These principles can be translated to film and media-historical research. Practicing media history in a similar way would then involve some novel competences: training the perceptual acuity necessary to notice and identify fine traces, counting pixels and resolutions, or tinkering with hardware and software to see how traces are left, but also experimenting with software tools like MediaInfo that allow new ways of “collating” formats. As Johanna Drucker has observed, “[t]he trained eye can read image
structure, but also, production history, by distinguishing different kinds of marks, lines, tonal values, color ranges, densities, textures, and patterns” (Drucker 2010, 13). What this would mean is that film and media history would have to inspect and contemplate more minutely the reformatting and standardization practices and tools used in the film, broadcast and post-production industries, but also pay increased attention to the infrastructure and peripheral technology that makes them possible, examining algorithms, cables, adapters, converters and other objects and techniques. All of these work on the margins and hidden in the folds of moving image culture but, like the fold of a book, are, really, the center that holds everything together.
The Folds of History Drawing on concrete examples from bibliography and audiovisual culture, in this chapter, I have outlined some principles of media epigraphy as a historical method for the study of media. These principles will guide some of the case studies that will follow. By conceptually tying notions like format and trace to papermaking and bookmaking practices and techniques of folding, I emphasize that compression is a material, physical process. I am convinced that the study of moving images can benefit immensely if it expands its view towards non-audiovisual media like paper. To state this in stronger terms, placing less emphasis on the term medium and more on the term format reveals new and unexplored historical relationships and linkages: the many forms of compression that contemporary moving images undergo can be thought of in parallel with the reformatting of paintings in the nineteenth century, and the methods book scholars use to study damage can be instructive for explorations of the history of video. Today’s baroque diversity of media formats and the associated questions of compression and reformatting are by no means a new set of problems, although each time they reappear in a different industrial, institutional, cultural, and technological climate. Format scholars have argued that formats function as crucial instruments of standardization and make possible collaboration across communities (Sterne 2012; Volmar 2017, 2020). This is undeniably true. But formats often also create the opposite agential force, a force that results in incompatibilities, conflicts, uncertainties, failures and losses. Formats and media are interlinked in non-linear ways that we do not yet fully understand. Placing formats at the center of analysis can guide our view toward specific properties and
limitations that recurrently appear and disappear in association with particular media, such as compressibility, rewritability, portability or instantaneity. As an example, endless looping is a trait that often accompanies “small” or highly compressed formats: from the analog tape-based Echo-matic cassette of the 1960s and the “Pocket Rocker” miniature audio cassettes of the 1980s, to the looping GIFs of the 1990s or the Vines of the 2010s. In this way, formats facilitate transversal thinking across conventional media categories like “print,” “audio” or “cinema.” Moreover, format, a term more fine-grained than medium, is useful because it offers much more nuanced access to a process that has been critical to film theory, history, and preservation in the last two decades: digitization. Digitization has often been framed as a historically singular rupture that begins in the closing years of the twentieth century and belongs to our contemporary time. To digitize an object seems like a singular, alchemic interposition into its existence, in the sense that once an analog object (or an entire industry) is digital it seems somehow permanently transformed. But if we think of this transformation as a reformatting instead, the ontological divide between analog and digital media will instantly appear much less dramatic and exceptional. It is not that digitization becomes unimportant, but rather that it can be reframed as only one instance in a long series of format changes that objects like films undergo throughout their existence. In its analog life, a film will have been subjected to many reformattings, and it will experience many more after digitization, too. Throughout its “archival life” (Fossati 2009), a photochemical film in the archive may be scanned and digitized, for example, into a sequence of DPX files, or JPEG2000 images, or encoded as an FFV1 video stream. All three of these digital formats are similar in some ways, but very different in others. Thus, I argue that shifting the focus from digitization towards reformatting can not only help us understand the finer mutations and metamorphoses that media objects undergo, beyond the simplistic and crude digital/analog divide that has for so long dominated media theory, but also take a broader look at the historical processes and dynamics that continuously require objects to change their format in this way. Looking at reformatting from this perspective makes it possible to media-historically situate the digitization of film in a larger cultural history of compressions and format migrations. Compression formats have far-reaching consequences for the sensory experience of media, the accessibility and reproducibility of old scientific data, and for private and collective memory. Formats can be placeholders
for class difference and social hierarchy: entire value systems and cultures of taste are encapsulated in the way one unfolds a “tabloid” differently from a “broadsheet.” The compression and reformatting of objects often tends to leave traces, minor forms of machinic failure that can be looked at—in the literal sense—to enable new ways of writing history. In the following chapters I will pursue traces of minor failures in compressed images to show how they become the kernel of major transformations in social, scientific and cultural practices. Some of the examples I will address include objects like flickering television sets, broken projectors or computers that do not behave as they are expected to. Such failing media objects can provoke new and inventive forms of spectatorship, instigate the discovery of scientific concepts or the formation of entirely new medical fields and artistic styles. As we have seen, folding is a paradigmatic example of a compression technique that lets us draw together the compression of moving images with the media-theoretical notion of “format” through their shared material history in paper. With this historiographical maneuver, the folding and formatting of paper can be viewed as one stage in a long history of techniques of compression that are so central to media culture, particularly to cultures of the moving image. Folding has hardly ever been considered with academic attention as a media technique, except for some highly specialized contexts in mathematics and art (Siegert 1993, 2010; Wiedemeyer 2014; Friedman 2018). But thinking of compression in these material terms as folding, as a technique of manipulating surfaces, ensures that we stay cognizant of compression’s physical, kinetic and tactile character. Even in their more abstract electronic sense, analog and digital video compression methods still rely on the spatial manipulation of concrete things.
References Baron, Rebecca, and Douglas Goodwin. 2015. The Rest Is Noise: On Lossless. In On Not Looking: The Paradox of Contemporary Visual Culture, ed. Frances Guerin, 63–76. New York: Routledge. Bowker, Geoffrey C. 2013. A Plea for Pleats. In Deleuzian Intersections: Science, Technology, Anthropology, ed. Casper Bruun Jensen and Kjetil Rodje, 123–138. New York: Berghahn Books. Bridgman, Percy W. 1954. Remarks on the Present State of Operationalism. The Scientific Monthly 79: 224–226.
Chaniotis, Angelos. 2012. Listening to Stones: Orality and Emotions in Ancient Inscriptions. In Epigraphy and the Historical Sciences, ed. John Davies and John Wilkes, 299–328. Oxford: Oxford University Press. Cherchi Usai, Paolo. 2000. The Ethics of Film Preservation. In Silent Cinema, An Introduction, 2nd ed., 44–71. London: British Film Institute. Chun, Wendy Hui Kyong. 2008. The Enduring Ephemeral, or the Future Is a Memory. Critical Inquiry 35: 148–171. Dearing, Vinton A. 1966. The Poor Man's Mark IV or Ersatz Hinman Collator. The Papers of the Bibliographical Society of America 60: 149–158. Denson, Shane. 2020. Discorrelated Images. Durham: Duke University Press. Doane, Mary Ann. 2007. The Indexical and the Concept of Medium Specificity. differences 18: 128–152. https://doi.org/10.1215/10407391-2006-025. Drucker, Johanna. 2010. Graphesis: Visual Knowledge Production and Representation. Poetess Archive Journal 2: 1–50. ———. 2013. Performative Materiality and Theoretical Approaches to Interface. DHQ: Digital Humanities Quarterly: 7. http://digitalhumanities.org/dhq/vol/7/1/000143/000143.html Ernst, Wolfgang. 2013. Digital Memory and the Archive. Edited by Jussi Parikka. Minneapolis, MN: University of Minnesota Press. ———. 2014. Between the Archive and the Anarchivable. Mnemoscape 1: 92–103. Fahle, Oliver, Marek Jancovic, Elisa Linseisen, and Alexandra Schneider. 2020. Medium | Format. Zeitschrift für Medienwissenschaft 12: 10–19. Febvre, Lucien. 1983. Reflections on the History of Technology. Translated by Martha Cummings. History and Technology 1: 13–18. https://doi.org/10.1080/07341518308581612. Fickers, Andreas, and Annie van den Oever. 2013. Experimental Media Archaeology: A Plea for New Directions. In Techné/Technology: Researching Cinema and Media Technologies, their Development, Use and Impact, ed. Annie van den Oever, 272–278. Amsterdam: Amsterdam University Press. Fossati, Giovanna. 2009. From Grain to Pixel: The Archival Life of Film in Transition. Amsterdam: Amsterdam University Press. Friedman, Michael. 2018. A History of Folding in Mathematics: Mathematizing the Margins. New York, NY: Birkhäuser. Geimer, Peter. 2007. Das Bild als Spur. Mutmaßung über ein untotes Paradigma. In Spur: Spurenlesen als Orientierungstechnik und Wissenskunst, ed. Sybille Krämer, Werner Kogge, and Gernot Grube, 95–120. Frankfurt am Main: Suhrkamp. ———. 2010. Bilder aus Versehen: eine Geschichte fotografischer Erscheinungen. Hamburg: Philo Fine Arts. Ginzburg, Carlo. 1989. Clues, Myths, and the Historical Method. Translated by John Tedeschi and Anne C. Tedeschi. Baltimore, Maryland: Johns Hopkins University Press.
———. 2012. Threads and Traces: True, False, Fictive. Translated by Anne C. Tedeschi and John Tedeschi. Berkeley: University of California Press. Guffey, George Robert. 1968. Standardization of Photographic Reproductions for Mechanical Collation. The Papers of the Bibliographical Society of America 62: 237–240. Hamburger, Jeffrey F. 2011. The iconicity of script. Word & Image 27: 249–261. https://doi.org/10.1080/02666286.2011.541118. Heller, Franziska. 2016. Warum Filmgeschichte? Wie die Digitalisierung unser Bild der Vergangenheit verändert. Cinema: 12–24. https://doi.org/10.5167/uzh-131627. Hilderbrand, Lucas. 2009. Inherent Vice: Bootleg Histories of Videotape and Copyright. Durham: Duke University Press Books. Huhtamo, Erkki, and Jussi Parikka, eds. 2011. Media Archæology: Approaches, Applications, and Implications. Berkeley: University of California Press. Jack, Keith. 2004. Video Demystified: A Handbook for the Digital Engineer. Burlington, Oxford: Elsevier. Jancovic, Marek. 2020. Fold, Format, Fault: On Reformatting and Loss. In Format Matters: Standards, Practices, and Politics in Media Cultures, ed. Marek Jancovic, Axel Volmar, and Alexandra Schneider, 192–218. Lüneburg: Meson Press. Jancovic, Marek, and Judith Keilbach. 2023. Streaming Against the Environment: Digital Infrastructures, Video Compression, and the Environmental Footprint of Video Streaming. In Situating Data: Inquiries in Algorithmic Culture, ed. Karin van Es and Nanna Verhoeff, 85–102. Amsterdam: Amsterdam University Press. Kane, Carolyn L. 2014. Compression Aesthetics: Glitch From the Avant-Garde to Kanye West. InVisible Culture: An Electronic Journal for Visual Culture. https://ivc.lib.rochester.edu/compression-aesthetics-glitch-from-the-avant-garde-to-kanye-west/ Katz, David. 1989. High Definition Television Technology and its Implications for Theatrical Motion Picture Production. Journal of Film and Video 41: 3–12. Kelly, Caleb, Jakko Kemper, and Ellen Rutten, eds. 2021. Imperfections: Studies in Mistakes, Flaws, and Failures. New York: Bloomsbury. King, Homay. 2014. Stroboscopic: Warhol and the Exploding Plastic Inevitable. Criticism 56: 457–480. https://doi.org/10.13110/criticism.56.3.0457. Kinross, Robin. 2009. A4 and before: towards a long history of paper sizes. KB Lecture 6. Wassenaar: NIAS. Kirschenbaum, Matthew. 2012. Mechanisms: New Media and the Forensic Imagination. Cambridge, MA: The MIT Press. ———. 2015. On Mechanisms' Materialism: A Reply to Ramón Reichert and Annika Richterich. Medium. Accessed 19 July 2018. https://medium.com/@
mkirschenbaum/on-mechanisms-materialism-a-reply-to-ram%C3%B3n-reichert-and-annika-richterich-8168cdf9922b. Krajewski, Markus. 2006. Restlosigkeit: Weltprojekte um 1900. Frankfurt am Main: Fischer Taschenbuch. Krämer, Sybille. 2007. Was also ist eine Spur? Und worin besteht ihre epistemologische Rolle? Eine Bestandsaufnahme. In Spur: Spurenlesen als Orientierungstechnik und Wissenskunst, ed. Sybille Krämer, Werner Kogge, and Gernot Grube. Frankfurt am Main: Suhrkamp Verlag. Kromhout, Melle Jan. 2017. Noise Resonance: Technological Sound Reproduction and the Logic of Filtering. Doctoral dissertation, Amsterdam: University of Amsterdam. Lichtenberg, Georg Christoph. 1967. Schriften und Briefe. Edited by Wolfgang Promies. Vol. IV: Briefe. Frankfurt am Main: Zweitausendeins. Lobato, Ramon. 2012. Shadow Economies of Cinema: Mapping Informal Film Distribution. London: British Film Institute. Mandell, Alice, and Jeremy D. Smoak. 2018. Reading Beyond Literacy, Writing Beyond Epigraphy: Monumentality and the Monumental Inscriptions at Ekron and Tel-Dan. Maarav 22: 79–112. Marks, Laura U. 2014. Arab Glitch. In Uncommon Grounds: New Media and Critical Practices in North Africa and the Middle East, ed. Anthony Downey, 257–272. London: Tauris. ———. 2017. Poor Images, Ad Hoc Archives, Artists' Rights: The Scrappy Beauties of Handmade Digital Culture. International Journal of Communication 11: 18. Matos, Sónia. 2017. Can Languages be Saved?: Linguistic Heritage and the Moving Archive. In Memory in Motion, ed. Ina Blom, Trond Lundemo, and Eivind Røssaak, 61–84. Amsterdam: Amsterdam University Press. https://doi.org/10.2307/j.ctt1jd94f0.6. McMullin, B.J. 2003. Watermarks and the Determination of Format in British Paper, 1794–"circa" 1830. Studies in Bibliography 56: 295–315. Meadow, Robin. 1970. Television Formats. The Search for Protection. California Law Review 58: 1169–1197. https://doi.org/10.2307/3479681. Menkman, Rosa. 2011a. Glitch Studies Manifesto. In Video Vortex reader II: Moving Images Beyond YouTube, ed. Geert Lovink and Rachel Somers Miles, 336–347. Amsterdam: Institute of Network Cultures. ———. 2011b. The Glitch moment(um). Amsterdam: Institute of Network Cultures. Mills, Mara, and Jonathan Sterne. 2017. Afterword II: Dismediation—Three Proposals, Six Tactics. In Disability Media Studies, ed. Elizabeth Ellcessor and Bill Kirkpatrick, 365–380. New York: NYU Press. Moradi, Iman, Ant Scott, Joe Gilmore, and Christopher Murphy. 2009. Glitch: Designing Imperfection. New York: Mark Batty.
Müller, Susanne. 2014. Formatieren. In Historisches Wörterbuch des Mediengebrauchs, ed. Heiko Christians, Matthias Bickenbach, and Nikolaus Wegmann, 253–267. Köln: Böhlau. Muñoz, José Esteban. 1996. Ephemera as Evidence: Introductory Notes to Queer Acts. Women & Performance: A Journal of Feminist Theory 8: 5–16. https://doi.org/10.1080/07407709608571228. ———. 2009. Cruising Utopia: The Then and There of Queer Futurity. New York: New York University Press. Niehaus, Michael. 2018. Was ist ein Format? Hannover: Wehrhahn Verlag. Noble, Jem. 2013. VHS: A Posthumanist Aesthetics of Recording and Distribution. In The Oxford Handbook of the Archaeology of the Contemporary World, ed. Paul Graves-Brown and Rodney Harrison, 728–742. Oxford: Oxford University Press. Parikka, Jussi. 2013. Dust and Exhaustion. The Labor of Media Materialism. CTheory.net, February 10. https://journals.uvic.ca/index.php/ctheory/article/view/14790/5665. Pool, Robert. 1988. Setting a New Standard. Science 242: 29–31. Remes, Justin. 2015. Motion(less) Pictures: The Cinema of Stasis. New York: Columbia University Press. Ruskin, John. 1890. The Stones of Venice: The Sea Stories. Vol. II. Boston: Aldine Book Publishing Co. Schlesinger, Martin. 2014. Go Play Outside! Game Glitches. In (Dis)Orienting Media and Narrative Mazes, ed. Julia Eckel, Bernd Leiendecker, Daniela Olek, and Christine Piepiorka. Bielefeld: transcript. Schneider, Alexandra. 2014. Ta-Ta Ta-Ra Ta-Ta Ra-Ra: 1991—Kompressionsformate und Memoryscapes. In Memoryscapes: Filmformen der Erinnerung, ed. Ute Holl and Matthias Wittmann. Zürich: Diaphanes. Schonig, Jordan. 2022. The Shape of Motion: Cinema and the Aesthetics of Movement. New York: Oxford University Press. Schubin, Mark. 1996. Searching for the Perfect Aspect Ratio. SMPTE Journal 105: 460–478. https://doi.org/10.5594/J09548. Seldes, Gilbert. 1950. Oracle: Radio. In The Great Audience, 105–216. New York: The Viking Press. Seventeen Gallery. 2009. Paul B. Davis, Define Your Terms (or Kanye West Fucked Up My Show). Seventeen. Accessed 13 August 2018. https://www.seventeengallery.com/exhibitions/paul-b-davis-define-your-terms-or-kanye-west-fucked-up-my-show/. Siegert, Bernhard. 1993. Relais: Geschicke der Literatur als Epoche der Post, 1751–1913. Berlin: Brinkmann & Bose. ———. 2010. Türen. Zur Materialität des Symbolischen. Edited by Lorenz Engell and Bernhard Siegert. Zeitschrift für Medien- und Kulturforschung 1: 151–170.
Skågeby, Jörgen, and Lina Rahm. 2018. What is Feminist Media Archaeology? Communication +1 7: 1–18. https://doi.org/10.7275/fthf-h650. Smith, Gerald A. 1967. Collating Machine, Poor Man's, Mark VII. The Papers of the Bibliographical Society of America 61: 110–113. Smith, Marquard. 2008. Phenomenology, Mass Media and Being-in-the-World. Interview with Vivian Sobchack. In Visual Culture Studies: Interviews with Key Thinkers, 115–130. London: SAGE. Smoak, Jeremy, and Alice Mandell. 2019. Texts in the City: Monumental Inscriptions in Jerusalem's Urban Landscape. In Size Matters: Understanding Monumentality Across Ancient Civilizations, ed. Federico Buccellati, Sebastian Hageneuer, Sylva van der Heyden, and Felix Levenson, 309–344. Bielefeld: Transcript. Stenstrop, Sofie Lykke. 2013. The Precarious Aesthetic across Domains. An Exploration of the Allure of Imperfection. Tidsskrift for Medier, Erkendelse og Formidling 1: 112–126. Sterne, Jonathan. 2012. MP3: The Meaning of a Format. Durham: Duke University Press. Strauven, Wanda. 2015. The (Noisy) Praxis of Media Archaeology. In At the Borders of (Film) History: Temporality, Archaeology, Theories, ed. Alberto Beltrame, Giuseppe Fidotta, and Andrea Mariani, 33–41. Udine: Forum. ———. 2019. Media Archaeology as Laboratory for History Writing and Theory Making. In New Media Archaeologies, ed. Ben Roberts and Mark Goodall, 23–43. Amsterdam: Amsterdam University Press. https://doi.org/10.2307/j.ctvcj303s. ———. 2020. Sewing machines and Weaving Looms: A Media Archaeological Encounter between Fashion and Film. Journal of Visual Culture 19: 362–377. https://doi.org/10.1177/1470412920964905. Tanselle, G. Thomas. 1971. The Bibliographical Description of Paper. Studies in Bibliography 24: 27–67. https://doi.org/10.2307/40371526. ———. 2000. The Concept of Format. Studies in Bibliography 53: 67–115. https://doi.org/10.2307/40372094. Törneman, Mira Stolpe. 2019. Queering Media Archaeology. Communication +1 7: 1–16. https://doi.org/10.7275/5xch-6125. Tsing, Anna Lowenhaupt. 2015. The Mushroom at the End of the World—On the Possibility of Life in Capitalist Ruins. Princeton, NJ: Princeton University Press. Turquety, Benoît. 2018. On Viewfinders, Video Assist Systems, and Tape Splicers: Questioning the History of Techniques and Technology in Cinema. In Technology and Film Scholarship. Experience, Study, Theory, ed. Santiago Hidalgo, 239–259. Amsterdam: Amsterdam University Press. Vanhanen, Janne. 2003. Virtual Sound: Examining Glitch and Production. Contemporary Music Review 22: 45–52. https://doi.org/10.1080/0749446032000156946.
Volmar, Axel. 2017. Formats as Media of Cooperation. Media in Action 1: 9–28. ———. 2020. Reformatting Media Studies: Toward a Theoretical Framework for Format Studies. In Format Matters: Standards, Practices, and Politics in Media Cultures, ed. Marek Jancovic, Axel Volmar, and Alexandra Schneider, 27–45. Lüneburg: Meson Press. Volmar, Axel, Marek Jancovic, and Alexandra Schneider. 2020. Format Matters: An Introduction to Format Studies. In Format Matters: Standards, Practices, and Politics in Media Cultures, ed. Marek Jancovic, Axel Volmar, and Alexandra Schneider, 7–26. Lüneburg: Meson Press. Wasson, Haidee. 2015. Formatting Film Studies. Film Studies 12: 57–61. https://doi.org/10.7227/FS.12.0007. Wiedemeyer, Nina. 2014. Buchfalten: Material Technik Gefüge der Künstlerbücher. Doctoral dissertation, Weimar: Bauhaus University Weimar.
CHAPTER 3
Interlacing: The First Video Compression Method
The rapid economic boom that followed the hyperinflation in the Weimar Republic of the early 1920s brought about a massive increase in international telegraph traffic. During this time, wirephoto technology, or what we would today call fax machines, reached a level of automation that allowed it to be operated commercially with minimal human intervention, and it began emerging as an economically viable and useful medium. Photographs of politicians, criminals and financial authorizations were crossing the Atlantic in ever-increasing volumes. The demand for this new medium was so high that the German radio and television manufacturing giant Telefunken felt compelled to redirect its existing experimental television technology into phototelegraphy—the remote transmission of still images (Ilberg 1933). One of Telefunken's engineers, Fritz Schröter, who was a physicist and television pioneer and would later ascend to the post of the company's director, approached phototelegraphy in an unusual way. Unlike emerging wirephoto methods, Schröter attempted to use the technology to transmit not images, but text. In 1928, he presented the prototype of a wireless phototelegraph that was meant to greatly improve transmission speed and resolve some unwelcome flaws that long-distance telegraphs operating on shortwave frequencies had been prone to at the time (ibid.). Schröter's research at his Telefunken laboratory was the beginning—one of many—of interlacing, the first video compression method. Interlacing is one of the fundamental ways of reducing video bandwidth.
It has been a core component of how TV signals are processed and transmitted since the beginning of public broadcasting. Today, it persists as a common approach to video compression that leaves behind a recognizable—if often undesirable—trace.
A Brilliant Invention or Nuisance?
The remains of interlacing can be encountered in most technologies grouped under the fuzzy term video, with its hundreds of formats like U-matic, VHS, Betamax, Video8, but also in the graphics generated by older video game consoles and electronics. The pattern of regular thin horizontal stripes is a tell-tale sign that interlacing is being used to reduce the bandwidth of a video signal (Figs. 3.1 and 3.2). On a cathode ray tube (CRT) television, these patterns are hardly visible, but they can become very conspicuous on LCD and current LED
Fig. 3.1 Interlacing in Olympia. Interlacing creates an intermediate frame where a hard cut would have been in the original film. These intermediate frames are not very visible in motion, because they only appear for 1/50th of a second, but are noticeable to a trained eye
Fig. 3.2 Detail of interlacing or “combing” artifacts in Olympia as they might appear on a digital display
displays. Digital broadcasting and more recent storage formats like DVD and, occasionally, Blu-ray discs sometimes feature interlaced content, despite being able to afford the increased bandwidth of so-called progressively scanned, non-interlaced images. Interlacing is a particularly intriguing example of how traces resulting from compression can inform novel approaches to media history. One of interlacing’s multiple origins is in phototelegraphy, where it was used as a method of reducing transmission errors. It has wandered from the transmission of still images to television, where it not only radically reduced signal bandwidth, but also solved the problem of flicker, another unpleasant visual effect (to which the last two chapters of this book are dedicated). But, as the television engineer and historian Mark Schubin has observed, “as lenses, cameras, and displays have improved and spatial resolutions have increased, the artifacts of interlace have become more noticeable” (Schubin 2016, 37). What started out as a method of preventing visual disturbances has turned into a visual disturbance itself. Filmmakers who submit their works to festivals are being warned to avoid interlacing: “Uploading interlaced footage will result in bad quality of your processed file” (Reelport 2015, n.p.). Instructions by the Short Film Corner of Cannes Film Festival stipulate: “Please also make sure you
upload a progressive (deinterlaced) video file, without any horizontal lines across the image—an interlacing artefact known as ‘combing’—particularly visible during sequences with rapid movement” (Festival de Cannes 2019, n.p.). As we see, the film festival realm has a clear antipathy to this form of compression and its aesthetic consequences, and the use of interlacing is explicitly discouraged in filmmaking practice. Due to the irksome traces it leaves behind and the post-production and preservation problems it causes, interlaced video is now despised by video production professionals and shunned by the film industry. It is considered a nuisance, an anachronistic remnant of outgrown standards that prioritize backwards compatibility over practicability and aesthetics. As such, interlacing demonstrates what the television historian Anne-Katrin Weber, following Graeme Gooday, refers to as “the social character of success” (2014, 8): the observation that technological successes and failures are not caused by hardware innovations, but rooted in the social relations of a technology’s use. Interlacing is particularly interesting because its transition from success to failure has a complicated historical dimension which, on top of that, depends on the perceptual effects the technology elicits, allowing us to invite questions of bodily sensation into a discussion of technological history. Interlacing was anchored in national and international television standards over 90 years ago and has outlived even the most venerable technological components of analog television. Its long and controversial existence opens up possibilities to better interrogate the complex space that links the contemporaneous to the old. The topology of this space and the exchanges that take place within it cannot be apprehended by unidirectional notions like “media convergence.” Interlacing’s significance as a video compression method before digital codecs and its ties to early (still and moving) image processing invite a more nuanced understanding of digitality, one in which “the digital and the analog are not episodes in a history of media, but, rather, technical media are an episode of the digital and of the analog” (Siegert 2003, 15, my translation). Interlacing’s remarkable longevity and the way in which it straddles boundaries between media formats with just as much ease as it lingers on in standards after almost a century of technological change can teach us to rethink the often unduly essentialized differences between analog and digital media, and historicize them with more rigor, as Lisa Gitelman has called for (Gitelman 2008, xi).
Interlacing relies on a deceptively simple spatial principle: it partitions the image into lines that are scanned out of order, non-sequentially. This method can be implemented across multiple media. Perhaps that is precisely why its history is so poorly researched. Unlike the technological inventions so prominent in television histories—Paul Nipkow's scanning disk or Vladimir Zworykin's Iconoscope, for instance—interlacing is a way of manipulating signals and thus not a technology but a technique. A first, brief historical overview of interlacing methods appeared as early as 1939 (Raeck 1939a, b, c). But excluding the two or three equally brief origin stories written in the twenty-first century (O'Neal 2008; Marshall 2018), interlacing usually receives only a fleeting footnote mention in technological histories of television. Its historical usefulness in solving several concurrent visual and electrotechnical problems of early television is well understood. But the variegated media techniques that it is woven into have not been addressed in the literature in due detail. Interlacing reappears multiple times in the history of the moving image. In each of its permutations, it embodies a different potential of television; it "inscribes" a different notion of use, applicability and spectatorship. Rather than having a singular identifiable origin, it opens up a fragmented archaeology of impractical, impossible and imaginary media or, as we could say with Weber, "a mobile cartography of links and affinities between multiple machines and their stories" (2014, 9) reaching as far back as the late nineteenth century. Among them, the contributions of Fritz Schröter and his work at Telefunken are little known in English-language television scholarship. My intention here, though, is not to usher another name into an already crowded pantheon of male television pioneers. The historiographical value of Schröter's work lies less in his stature as a savvy inventor. Rather, his experiments with interlacing techniques clear the view towards Schröter's tussles with the various materialities stymieing the smooth wireless transmission of images—including atmospheric, chemical, physical and ocular ones—and the traces those materialities leave behind. Fixating on the traces of interlacing and on the many perceptible visual errors and aberrations that surrounded its development will allow us to conceive of new models for understanding the history of compression, obsolescence and the role of residual and non-cinematic technology in the history of the moving image. Schröter's work on compression and his partly unsuccessful inventions like the phototelegraph present an entrance to a new view on the diversity and hybridity of media, the many lineages
of image-processing techniques in the first decades of the twentieth century, and the instability of the imaginary of moving image transmission. Expanding from around 1930 both forward and backward in time, a look at interlacing artifacts reveals how compression methods and techniques traverse many domains of technology and infrastructure, and how long-term global standards borrow knowledge from various short-lived technologies and formats. I will thus consider interlacing and its artifacts not only as the constituent elements of a technological history, but also as discursive surfaces along which the terms of human visual perception are negotiated, previously unaddressed imaginaries of media become traceable and future media practices are prefigured.
A Universe Made of Dots and Dashes
In the second half of the nineteenth century, a profound, meticulous segmentation of the visual world took place. The decomposition of continuous surfaces into discrete elements became the fundamental principle of an accelerating information exchange. Louis Braille, Alfred Vail, Émile Baudot, and the painter Samuel Morse all showed that human language itself could be translated into configurations of discrete lines or dots and transmitted free from its previous dependence on a textual or aural form. Ottmar Mergenthaler's Linotype machine massively sped up newspaper production by allowing typesetting to take place line by line, and halftone reprography made possible the reproduction of photographs in newsprint by dissolving them into thousands of tiny self-contained dots. The German media theorist Friedrich Kittler (2009) has attributed the idea of transmitting images by dissecting them into fine grids to the Scottish inventor Alexander Bain, who experimented with facsimile machines in the 1840s. After Bain, rasters, grids, tables and matrices of all sorts slowly came to dominate those visual media whose imperative was the speedy transmission of information at a distance. About a decade after Paul Nipkow's invention of the scanning disk, in 1897, a particularly curious multimedial method of transferring visual information was patented in Germany by a certain Johann Walter from Basel. Walter proposed that images could be sent to remote locations by overlaying them with a minuscule rectangular, triangular or rhomboid grid, for instance by projecting the image onto a wall through a transparent foil imprinted with a fine mesh. The brightness of the original image
in each cell of the grid was to be approximated—quantized, as we would now say—to a binary value: black or white. This would essentially rasterize the image into a digital bitmap (Fig. 3.3), which could then be transmitted cell by cell and line by line, through a telegraph or telephone.
Fig. 3.3 Examples of Walter's image transmission grids. (Image source: Walter 1898, 4)
A human operator would "scan" the image and digitize it into a Morse signal: dot
for white, dash for black, confirming Geoffrey Batchen's observation that "the logics of computing are already inscribed in the practice of phototelegraphy" (2006, 39). The same grid enlarged four times would be used on the receiving side and painted in by another operator receiving the Morse signal by ear or from a phonograph recording, basically painting by numbers. The hand-drawn or printed mosaic of black and white cells could then be touched up by hand and, finally, photographically reformatted to its original size. Colors, Walter claimed, could be added arbitrarily by splitting the image into color plates and separately transferring one layer for each desired color (Walter 1898). A year later, Walter evolved this procedure into what in today's vocabulary would be called an encoding format for transmitting compressed vector graphics over the Morse or Hughes telegraph (Walter 1899). I have not seen any evidence that Walter's patent was ever put into practice, which is not surprising given how immensely impractical and overcomplicated it was. But it is worth marveling at the ingenuity of the elaborate image transfer protocol he envisaged. It involved manual analog-digital-analog conversion, projectors, transparent foil, pencils, wireline or wireless telegraphy or telephony, phonographs, photographic cameras, and, most importantly, the human sensorium. Fundamental digital compression techniques like quantization, which is done today algorithmically in codecs like MPEG, were performed by seeing, listening and drawing human bodies. The sending and reception of visual data would have taken place via a seamless, hybrid cascade of assistive technologies, although it is not easy to say who exactly assisted whom: whether the various machines at work assisted the human operators, or the human operators assisted the machinic process. In Walter's elaborate media assemblage, the line between bodily techniques and technical bodies cannot be drawn with certainty. A similar process of sensory-technological coupling took place between the human operators of wireless telegraphs and their machines (Campbell 2006). But Walter's patents seem to testify that the new gestures, bodily actions and sensory skills emerging around this time were not necessarily tied to writing and the wireless telegraph only. Walter's image compression and transmission format neatly illustrates what was happening to inscription techniques in the closing years of the nineteenth century. The increasing prominence of grids, as material tools for formatting and organizing information, was a sign that the fundamental cultural techniques of reading and drawing were slowly being
supplemented with a new operation, scanning. Inscription was separating from signification, and signification from language. Methods similar to Walter's were not at all uncommon at the turn of the century. When the American Ernest A. Hummel unveiled his telediagraph, a telegraph-based fax machine, news of the device circulated widely across the English-language press between 1898 and 1900, headlined by The New York Herald, which installed and used one of the devices in its offices. Despite being nearly identical in construction to Bain's and Frederick Bakewell's fax machines introduced some 40 years earlier, the telediagraph was celebrated as the first practicable method of sending images remotely. In 1898, "the problem of sending pictures by wire, the same as messages, is at last successfully solved," newspapers concluded (Pictures Successfully Sent by Telegraph at Last 1899, 19). As for the motivation behind his device, Hummel is quoted saying:
I saw an article published in a newspaper. There was a picture of a man, and it was drawn over lines that were drawn across and up and down that picture, forming squares. I could see how confusing it would be to an operator, so I worked on the theory of doing away with the crossing lines. (The Herald's Test of New Method of Transmitting new Pictures by Wire 1898, 1)
The very same fin-de-siècle visuality of grids, tables and rasters that made the remote transmission of images possible apparently also became a source of confusion, an undesirable artifact of transmission. Hummel’s telediagraph still needed approximately half an hour for a small line-art portrait, but automatized the image-processing and transmission. Unlike Johann Walter’s method, there was no need for an operator to manually scan the grid. An artist was still needed on both ends, because the images first had to be drawn directly on special coated plates and then redrawn and touched up after reception, sometimes according to handwritten directions (Fig. 3.4). The success of the transmission thus still depended on the collaboration between the automatic operations of the machine and the drawing and interpretive skills of the people preparing the sketches. Yet in comparison with Walter’s procedure devised just one year earlier, the degree of manual processing was much lower. Devices like these were the technological precursors of Fritz Schröter’s phototelegraph, although Schröter introduced another interesting twist into the relationship between text and image.
Fig. 3.4 An image received via the telediagraph in New York in 1899 (left, note the handwritten instruction) and the finished sketch by an artist as it appeared in the New York Herald (right). (Image source: Cook 1900, 347)
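Restated in contemporary terms, the procedure Walter patented amounts to binary quantization followed by line-by-line serialization. The following sketch is only an illustration of that principle under assumptions of my own: the brightness threshold, the sample grid values and the dot/dash notation are stand-ins for what, in 1898, would have been an operator's eye, voice and hand, not details taken from the patent.

```python
# Minimal sketch of Walter's grid method read as binary quantization plus
# line-by-line serialization. Threshold, sample values and symbols are
# illustrative assumptions, not details from the 1898 patent.

def quantize_to_bitmap(cells, threshold=0.5):
    """Reduce each grid cell's brightness (0.0 = black, 1.0 = white) to 1 (black) or 0 (white)."""
    return [[1 if brightness < threshold else 0 for brightness in row]
            for row in cells]

def serialize(bitmap):
    """Transmit the bitmap cell by cell, line by line: dot for white, dash for black."""
    return "\n".join("".join("-" if cell else "." for cell in row) for row in bitmap)

# A 4 x 4 patch of brightness values standing in for the projected image.
patch = [
    [0.9, 0.8, 0.2, 0.1],
    [0.7, 0.3, 0.2, 0.6],
    [0.4, 0.1, 0.9, 0.8],
    [0.2, 0.6, 0.7, 0.9],
]

print(serialize(quantize_to_bitmap(patch)))
# ..--
# .--.
# --..
# -...
```

On the receiving end, the inverse mapping, painting a black cell for every dash in an enlarged copy of the same grid, would reconstruct the bitmap; in Walter's scheme both directions were performed by human bodies rather than by functions.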
Fritz Schröter’s Phototelegraph Fritz Schröter joined Telefunken—then called Gesellschaft für drahtlose Telegraphie, or Wireless Telegraphy Corporation—in 1921 and became head of the technical department two years later. His research was concentrated around television and shortwave and ultra-high frequency technology. The shortwave radio spectrum, roughly designating the high frequency band from 3 to 30 MHz, was precious real estate in the 1920s. Shortwave radio signals are reflected by the Earth’s ionosphere, which allows them to travel great distances across the globe. This made shortwave a very attractive alternative to costly transatlantic cables. Cables around this time were considered unsuitable for high-speed phototelegraphy because of their low bandwidth. Not only was high-frequency wireless cheaper, it also promised high transmission speeds and low latency. But the spectral characteristics of shortwave also posed some difficult engineering challenges. Instead of transmitting photographs between press agencies or police stations, Schröter was developing a phototelegraph that would wirelessly
send short text messages printed on thin, continuous strips of paper. The message to be transferred was first typed on a strip of plain paper, acquired line-by-line by a light-sensitive device, transmitted wirelessly, and then exposed again on photosensitive paper on the receiving end. The equipment used in this experimental setup was based on recent breakthroughs in television research at Telefunken, such as a ring-shaped photoelectric cell designed by Schröter and an improved low-latency cell that converts electrical signals into changes in light intensity (Schröter 1926, 1928; Burns 1998, 199). However, shortwave phototelegraphy suffered from quality loss over long distances due to atmospheric disturbances and electromagnetic fading. An atmospheric event could cause the signal to weaken or fail. Because of the narrow width and small size of the typed letters, a fading affecting only a few lines could cause parts of the message to go missing entirely (Schröter 1928, 1932a). One way of dealing with this would have been to simply increase signal amplitude, essentially pumping more power into the signal to boost it. But the high-frequency generator tubes at this time were enormous, water-cooled machines whose manufacturing, maintenance and operation costs as well as electricity consumption rapidly increased with output power (Graf von Arco 1929?). Schröter dealt with the pesky materialities of shortwave communication by devising a remarkably simple fix, a method he had patented in 1927 (Schröter 1930). Instead of transmitting every scan line in succession, he redesigned the phototelegraph so that it would skip every second or every third line and then return to the blanks after a certain interval. His initial idea was to wind up the message strip thrice and scan it at a right angle (Fig. 3.5). In each loop, one third of the information would be added. But this trajectory was too short—a longer disturbance could still compromise a considerable part of the text. Rather than scanning across, a new model was built that rolled the paper strip along helically at a sharp angle (Fig. 3.6), skipping every other line and then adding them during a later round of scanning. This way, in the event of a fading, the noise would be distributed over a larger area and over multiple letters, maintaining good readability even in unfavorable atmospheric conditions. According to Schröter's claims, the system was fast, operating with a phenomenal theoretical maximum speed of 500 words per minute—incomparably faster than human-operated telegraphs and teletypewriters
Fig. 3.5 Perpendicular scanning. (Image source: Schröter 1928, 456)
and slightly outperforming even most automatic telegraphs at the time.1 Schröter's early "SMS" exchange also had the added benefit of what in today's parlance would be called encryption: "absolutely guaranteed secrecy, because the interval of the line change could be agreed upon [between sender and receiver] and changed freely" (Schröter 1928, 456, my translation). Without knowledge of line synchronization timings, a receiver would produce only garbled dashes. In fact, after the experiences of World War I, cryptographic capabilities were a major selling point of communication and compression technology, and many later compression techniques and algorithms were developed in or adjacent to military contexts.
1 As compared to telegraph speeds given in Huurdeman (2003, 303–307).
Fig. 3.6 Helical scanning: alternating even and odd scan lines. (Image source: Schröter 1928, 457)
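The practical point of the line-skipping, namely that a burst of fading is spread across non-adjacent lines of the received print, can be made concrete with a small and purely schematic sketch. It is not a reconstruction of Schröter's apparatus; it simply compares sequential transmission with an alternating odd/even transmission order when a contiguous run of transmitted lines is lost.

```python
# Schematic comparison: a burst of fading hits three consecutive transmission
# slots. With sequential scanning the damage forms a solid block of lines;
# with alternating odd/even scanning it is scattered across the print.
# The burst position is an arbitrary example.

def transmission_order(n_lines, interleaved):
    if not interleaved:
        return list(range(n_lines))                                   # 0, 1, 2, 3, ...
    return list(range(0, n_lines, 2)) + list(range(1, n_lines, 2))    # first pass, then the skipped lines

def received(n_lines, lost_slots, interleaved):
    order = transmission_order(n_lines, interleaved)
    lost = {order[slot] for slot in lost_slots}     # which lines of the print are hit
    return ["(lost)" if line in lost else "ok" for line in range(n_lines)]

burst = [3, 4, 5]   # three consecutive transmission slots wiped out by fading

print(received(10, burst, interleaved=False))
# ['ok', 'ok', 'ok', '(lost)', '(lost)', '(lost)', 'ok', 'ok', 'ok', 'ok']
print(received(10, burst, interleaved=True))
# ['ok', '(lost)', 'ok', 'ok', 'ok', 'ok', '(lost)', 'ok', '(lost)', 'ok']
```

Each damaged line in the interleaved case is surrounded by intact neighbours, which is what kept Schröter's typed letters legible through a fading.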
Despite its reportedly flawless functioning in the lab, Schröter's single-line fax machine never witnessed a commercial life. The technology consumed ten times more bandwidth than the standard Morse alphabet telegraph, and the paper strips running through the printer had to be exposed in a darkroom and developed after reception like regular photographs, which was cumbersome. Further development was abandoned after experimental transcontinental operation between Nauen in Germany and Buenos Aires in Argentina showed additional irresolvable visual distortions caused by the repeated, delayed arrival of the same electromagnetic signal as the wave bounced along its path around the globe (Schröter 1932b, 445). Problems with fading and atmospheric interference had also been solved in other, more practical ways (Ilberg 1933; Schröter 1937). One of them was Rudolf Hell's Hellschreiber, developed in the early 1930s, which transmitted text by breaking each character into a graphical grid of 7 by 7 dots. Like the phototelegraph, the device scanned text line by line, but instead of skipping lines, Hell chose to print every line twice, creating redundancy both as a measure against interference and to adjust synchronism (Hell 1940; Liebich 1990). The system was highly resistant to noise and fading, and its output was printed electrochemically and was immediately readable without the need for photographic processing. Both Schröter's phototelegraph and Hell's Schreiber add a layer of complexity to the history of technological communication after the turn of the century. The role of telegraphy in disturbing the relationship between inscription, language and signification at the end of the nineteenth
century has been copiously explored from various perspectives (Carey 1989; Kittler 1990, 1996; Strauven 2008; Maddalena and Packer 2014). Timothy Campbell (2006) has argued that historical discussions of radio have overshadowed other, non-oral uses of wireless technology. But Campbell's investigation itself remains limited to a view of wireless communication as primarily a form of writing. The wireless telegraph still relied on the graphotactic properties of written languages (such as the frequency of certain letters) for entropy encoding and compression. Schröter's phototelegraph and the Hellschreiber, on the other hand, did not transmit a digitized code like Morse or Baudot. Instead, they approached text and language from the ground up as a purely graphical problem. They conceived of writing as a form of drawing, a process that unfolds on a plane surface extended in space, not necessarily coherently in any particular direction, independently of any notation system, and which allows for the possibility of being reformatted and recomposed spatially in various ways. Instead of asking how to transmit a character of writing—a textual unit—Schröter and Hell asked how to transmit a picture element. They replaced signification with image processing. Schröter's and Hell's media techniques of compression and redundancy are paradigmatic examples of what the media philosopher Sybille Krämer has called "notational iconicity." As Krämer (2012) argues in opposition to received theories of writing and their "dogma of linearity," inscriptions can never be reduced to a form of language only, but always also operate iconically. Schröter's non-linear scanning method shows that writing can become uncorrelated from speech, understood as the linear, successive production of signs. This detachment made writing manipulable spatially and ensured that even noisy transmissions would only diminish the quality of the print, possibly adorning it with traces of the electromagnetic inflections of the atmosphere that Schröter considered "parasitical markings" (Schröter 1932b, 439). But, unlike in telegraphy, such disturbances could never inject a wrong character into the message. By treating writing as a species of image, the phototelegraph also uncoupled signal transmission from the constraints of any particular writing system: the problem of encoding and compressing languages with many graphemes disappears when texts are treated as images. Schröter's phototelegraphy of text was an unsuccessful, impractical and short-lived experiment lodged in-between newer machines like the Hellschreiber and the slower but much more practicable teleprinters and
teletypewriters. But his image-processing method would find a new and remarkably successful lease on life in another medium.
Compressing Early Electronic Television Signals
In 1930, Schröter registered another patent in Germany. In it he proposed a method of scanning and displaying television images that combined the afterglow properties of fluorescent screens with what he called the "line-skip scanning method" (Schröter 1933). This was the same principle he had used in his experiments with the phototelegraph, and would later be known as interlacing. This period was roughly the peak of the format war between mechanical and electronic television (Fig. 3.7). Mechanical systems based on spinning disks were reaching their limits and would soon lose out to tube-based electronic technology. Schröter was a strong proponent of the cathode ray tube and faced substantial internal adversity at Telefunken for advocating for electronic television, because the company traditionally prioritized precision mechanics over electronics.2 Schröter found an external ally in fellow physicist and inventor, the young Manfred von Ardenne. The former Telefunken engineer and television historian Gerhart Goebel has noted that because von Ardenne was not affiliated with Telefunken, any failures
Fig. 3.7 A format war unfolding in print: advertisements for scanning discs and cathode ray tubes side by side, as they appeared in the German TV and film journal Fernsehen und Tonfilm 2 (3), 1931
2 German Museum of Technology, Gerhard Goebel collection, I.4.048 NL Goebel-078, p. 30. Also Schröter (1953, 8).
of the electronic system could be blamed on him personally, whereas his successes could be immediately claimed and absorbed into Telefunken research. Around 1930, the commonplace way of displaying images on fluorescent cathode ray tubes (CRTs) was to scan a discrete image, wait for it to fade, then scan the next one. Exactly as his phototelegraph operated, Schröter instead proposed that the electron beam that produces the picture on the screen be diverted so as to perform two separate scans at an acute angle for each image, first scanning only odd lines from top to bottom, then returning to the top of the screen and adding the remaining even lines. A single image frame was thus dissected into two separate “fields,” an odd and an even one. Due to the image retention and slow decay of CRT phosphors, the brightness of one field would persist on the screen as the other field was being drawn, creating the impression of a continuous image. “The image virtually remains constantly in the field of vision, so that a gap between the changing images is barely present. In this way, the flicker resulting from the image sequence each second is eliminated to a high degree” (Schröter 1933, 2, my translation). Interlacing thus cleverly exploited both the materiality of the CRT screen as well as the “deficiencies” of the human visual system. It provided, as Schröter recognized, an excellent way to either effectively double the frame rate or compress the signal bandwidth by 50% while mostly preserving perceived resolution. And it simultaneously also circumvented one of the biggest obstacles hindering the practical deployment of television: the unpleasant visual phenomenon of flicker. Cinema had solved the problem of flicker mechanically in the early years of the 1900s with the use of double- and triple-bladed shutters that increase the rate of flicker beyond perceptible levels. Detractors, however, were quick to point out that interlacing introduced its own visual flaws, which were especially prominent on the small screens in use at this time. With the limited resolution of early television, interlacing caused a jittering effect as the electron beam switched from one field to another, making the image tremble slightly. The current technical term for this is interline twitter, and it would disappear if the viewer increased her distance from the screen. But with increased distance, precious perceived sharpness in the small image would be lost again. Interlacing thus traded decreased flicker for decreased resolution. These new distortions were recognized from the beginning (Urtel 1936). Schröter himself acknowledged the striped pattern resulting from
interlacing as a disturbance and, in fact, despite the claims in his patent, considered the procedure unsuitable for the transmission of moving images (Schröter 1932b, 111). He would continue trying to improve it for the rest of his career. In any case, much later internal histories of Telefunken remember the first public presentation at the German Radio and Television Exhibition in 1934 as a success (Roeßler 1951). The new German 180-line television standard introduced in 1935 demonstrated the usefulness of compressing images with interlacing. The system had a much higher operating brightness, which also exacerbated flicker (Goebel 1953) and thus necessitated strategies to mitigate it. But up until the 1940s, with each new national norm, the debate on whether to use interlaced or progressive (non-interlaced) scanning was rekindled anew (Weiss 1937).
Traces Remain
The side-effects of interlacing continue to exert an influence on television content even today. High-frequency patterns—for example, striped clothing or fine text—can interfere with the field raster and result in jittering. These effects place some practical limits on what types of images can "safely" appear on screen. They also introduce new complexities into television and video workflows: they necessitate the use of low-pass filters, anti-aliasing and other interventions that manipulate and control color and brightness frequencies in an image. While it is now common to record and distribute non-interlaced video, and deinterlacing filters that try to minimize or remove its signature traces have been a staple of television and media player hardware for many years, interlacing remains a tricky engineering challenge. It is still used by many producers and television stations around the world who rely on older equipment or limited bandwidth, and it can present enormous problems in the preservation of moving images, for example when video and audiovisual archives are digitizing or reformatting historical material. Individual video formats implement interlacing in divergent ways. Without very detailed knowledge of format peculiarities, this can be confusing to archivists and cause errors in digitization and preservation workflows. For example, by convention, the D-1 videotape format starts with the odd field first in PAL mode but with the even field first in NTSC mode. Accidentally digitizing such a tape with the wrong initial field would ruin the entire file.
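How consequential the order of two otherwise identical fields can be is easy to demonstrate. The sketch below is only illustrative (toy arrays standing in for fields captured a fraction of a second apart, not a real D-1 transfer), but it shows the basic weave operation and why swapping the fields reverses the motion encoded between them.

```python
import numpy as np

# Illustrative sketch: splitting frames into fields and weaving fields back
# into a frame. The toy "footage" is a bright vertical edge that moves one
# pixel to the right between the two field times; real formats are more involved.

def split_fields(frame):
    """Return the two line sets of a frame (every other line, offset by one)."""
    return frame[0::2], frame[1::2]

def weave(first_field, second_field):
    """Reassemble a full frame by interleaving the two fields line by line."""
    height = first_field.shape[0] + second_field.shape[0]
    frame = np.zeros((height, first_field.shape[1]), dtype=first_field.dtype)
    frame[0::2] = first_field
    frame[1::2] = second_field
    return frame

t0 = np.zeros((6, 8), dtype=np.uint8); t0[:, 2] = 255   # object at column 2
t1 = np.zeros((6, 8), dtype=np.uint8); t1[:, 3] = 255   # a field period later, at column 3

field_a, _ = split_fields(t0)   # lines sampled at the earlier moment
_, field_b = split_fields(t1)   # lines sampled at the later moment

correct = weave(field_a, field_b)   # edge alternates by one pixel: ordinary combing on motion
swapped = weave(field_b, field_a)   # wrong field order: the motion inside the frame runs backwards

print(np.array_equal(correct, swapped))   # False: the two reconstructions differ
```

In a single still frame the difference looks minor; in motion, a swapped field order makes movement judder back and forth, one of the routes by which field dominance errors such as the one in Fig. 3.8 arise.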
In digital systems, even the interpretation of the field order can be unclear: does the decoder or editing software start counting fields at 0, an even number, or at 1, an odd number? Reformatting and editing interlaced video, both analog and digital, without considering these subtleties can compound interlacing artifacts even more when the order of the fields is not monitored carefully. Figure 3.8 shows a concrete example: a severe odd/even field dominance error resulting from resizing and mixing interlaced video footage with different field orders, visible as a prominent horizontal zig-zag disturbance. These differences are crucial for analog videotape editing, because due to the phase alternation in the PAL system, the color signal is synchronous only once every eight fields. Because color information is often compressed more heavily than brightness information in nearly all common video delivery formats, editing across fields and disregarding the color sequence can introduce a jump in color subcarrier phase. To a viewer, this would be visible as streaks of color from previous frames seeping into the image. Similar color compression schemes are used in numerous digital formats as well, where the same problems can arise.3 All of this can result
Fig. 3.8 Severe field dominance error. (Image courtesy of Esben Hardt)
3 See also Gfeller et al. (2013, 64).
in subtle or more overt traces, which frequently dot old footage posted on online video repositories like Archive.org or YouTube. The compression therefore also restricts how moving images can be edited or how special effects can be applied. Because it is fundamental to the production, circulation, viewing and preservation of the moving image and because it operates in the background, generally hidden from most users, we could understand compression as a set of infrastructural techniques of the moving image. Although compression methods lack the tangible physical form we commonly associate with the word infrastructure, we could nonetheless describe their effects as "a complex relation of complicity and independence from the intentions that go into [media's] construction," as the anthropologist and media scholar Brian Larkin once described media infrastructure. Larkin continues: "These intentions do not easily go away, and long after haunt technologies by shaping the conceptual horizons of what people expect them to be" (2008, 47). Because of how moving images tend to circulate, compression also haunts them even long after the initial need to implement it has disappeared. Distributors sometimes republish old TV shows carelessly to turn a buck, without paying attention to minor incompatibilities between formats or the marks that repeated forms of compression could leave on the material. Fans digitize their favorite programs and upload television moments found on tapes discovered in the attic or accidentally encountered on some home-burned CD or forgotten hard drive, without the technical means or expertise to eliminate traces of compression. And in this way, long after interlaced video formats have become obsolete, the traces of interlacing persist in the visual space.
infrastructural effects of video compression reach even further. Compression unfolds certain effects not just in the moment it encounters an image to compress, but already because it is anticipated as a future possibility. To give just one banal example, television presenters know they should not wear clothing with fine horizontal patterns because it interferes with interlacing and creates distracting Moiré patterns on screen. A culture of video compression thus already operates prior to its algorithmic and electrotechnical effects, and is inscribed in mundane work habits that do not immediately seem associated with it, like a news anchor’s daily choice of wardrobe. Granted, such effects of compression are slight. But we lack a media-theoretical understanding of just how far practices of compression really reach outside of the more overt domains of technology and aesthetics. It is thus also possible that we simply do not yet realize the full extent to which compression permeates our daily life and affects our media habits.
A Child with Too Many Parents
In his meticulously detailed book Television: An International History of the Formative Years, Russell Burns includes a brief paragraph on interlacing and assigns the chief contribution for its development to Randall Ballard, with only a marginal acknowledgment that there had been numerous earlier descriptions of similar processes (Burns 1998, 427–428). Ballard is often called the inventor of interlacing (e.g. Magoun 2007). Other television engineers and historians have attributed its invention to the American experimenter Ulises Armand Sanabria (Udelson 1989; O'Neal 2008), who claims to have been the first to produce interlaced images on a mechanical television system in 1926. Yet others give greatest credit for interlacing to the prolific Russian-American TV pioneer Vladimir Zworykin (Winston 1998). Schröter and other experimenters working in continental Europe are largely absent from English-speaking histories of television, and Schröter himself attributes the principle of interlacing to the Scottish inventor John Logie Baird (Schröter 1953). What gives? As with so much of television history, interlacing is a child with numerous purported parents. Many inventors claim credit for it, and almost as many have gotten it. Ultimately, however, replacing one patrilineal origin story with another is a less rewarding exercise than examining how each historical instance of interlacing's "invention" seems to encapsulate an altogether different idea of what television should be.
Early Interlacing Patents
Between 1914 and the 1950s, nearly a dozen interlacing patents were proposed with various modifications. Burns (1998, 209–211) briefly enumerates some of the names associated with the general notion of non-sequentially combining individual image lines into complete pictures: as early as 1914 by Samuel Lavington Hart, by William Stephenson and George Walton in 1923, by Marius Latour, by John Logie Baird and by Ernst F. W. Alexanderson simultaneously in 1926, and by Manfred von Ardenne shortly after Schröter in 1930. Nearly all of them are either very vague, overly complicated or difficult if not impossible to engineer (Marshall 2011). Some of the inventions wildly overestimate their capabilities and, as was common until the 1930s, severely underestimate the bandwidth necessary to carry a reasonably viewable moving image. What even constituted a "reasonably viewable" moving image was hardly universally agreed upon. The Hungarian television pioneer Dénes von Mihály, for example, seems to have had a lot of confidence in the elasticity of human perception when he presented his own television system in May 1928 in Berlin. Two years prior, the Geneva Frequency Plan had standardized spectrum channel widths across Europe. In order to squeeze a television signal into the narrow 5 kHz modulation bandwidth allowed by the Plan, von Mihály's picture had a resolution of just 30 lines and a frame rate of only 10 frames per second. By today's sensory habits, this would just barely qualify as an "image" at all, much less as a moving one, and would likely feel unpleasant to watch for most people. And yet, von Mihály remained assured: "As soon as our eye is acquainted with these pictures, we will not see the flickering anymore and fill in any resulting gaps on our own" (quoted from Goebel 1953, 301, my translation). Von Mihály was certainly not the only one with this forward-looking attitude. With typical 1930s technological optimism, Schröter's colleague at Telefunken, Erich Kinne, reports as late as 1931 on televising film recordings:
As experiments showed, the abovementioned brightness retention of 1/7 sec. can be accomplished well. Accordingly, only 7 pictures need to be sent every sec., which is a substantial saving. We were able to reproduce the motions of the transmitted persons very well, too. The initial concern that movements might look blocky if such a low image frequency were used
proved to be incorrect […]. Both trials were entirely satisfactory. (Kinne 1931, 37, my translation)
Traces of compression like flicker or choppy movement thus very precisely demonstrate the contradictory, subjective and historically contingent nature of human sensing. Even within the small community of early television engineers, flickering images are either not noticed at all, or they are noticed but treated as a minor quirk to get used to, or as a critical problem in need of a solution. Although impracticable, there is some value in revisiting and appreciating some of the strange early interlacing contraptions like the one patented in 1914 by Samuel Lavington Hart, a British educational missionary to China. His complex 20-page claim comprises three different image- transmitting devices. One of them consists of a rotating spherical bank with 12 lenses with a photosensitive cell on the inside (Fig. 3.9). The bank was to rotate 15 times per second and simultaneously slightly rock forward and backward in three positions 5 times per second, producing a
Fig. 3.9 Samuel L. Hart’s “apparatus for transmitting pictures of moving objects.” (Image source: Hart 1915, 17)
triple-interlaced image with a total of 36 lines at a rate of 15 frames per second. Hart conceived of his apparatus primarily as a fac simile machine which could either copy photographs or send pictures of physical objects and expose them onto photographic paper on the receiving side. However, he believed the device could also be used to record either digital or analog video: This may be accomplished by using the varying currents created by the changing of the light thrown on to the sensitive cell, to produce a record. This record might be produced on a rapidly moving tape by means of impressions made as in ordinary telegraphic machinery, these impressions consisting for example practically of dots and dashes, or […] on a revolving plate or disc on which the spiral grooves are cut in a manner similar to a phonographic record. (Hart 1915, 8)
Hart’s suggestion is mechanically impossible because video requires very high signal frequencies that are difficult to produce and record mechanically. To read a Morse code from telegraph tape with sufficient speed would require an enormous rotational force. It would take another 13 years for John Logie Baird to produce a video signal with frequencies so low that they were audible and could be recorded onto a wax disc, with a frame rate of just 12.5 frames per second (McLean 2000). But the actual storage of video on tape with reasonable quality was only solved magnetically much later, after the development of quad tapes and helical scanning in the 1950s—a method Schröter had described as early as 1932 (Schröter 1932a). Yet although impossible to realize in practice, Hart in principle described the future DigiBeta and Laserdic formats—methods for storing interlaced digital or analog video signals on a tape or disc. There is something remarkable about the matter-of-factness of Hart’s patent. Unlike the grandiose claims that television pioneers were prone to make, Hart raises the possibility of going from the transmission of still images to moving images almost as an incidental side note, an interesting—but ultimately secondary—functionality of his fax machine. What this shows us is that there existed a small but undeniable countercurrent of technological thought that did not see moving images as the ultimate next step of still image transmission but simply as a nice “feature.” Most later mechanical interlacing patents, which proposed various combinations and modifications of multiple Nipkow disks, were also never
reduced to practice. Those that did make it to an experimental stage often gave poor results because of displeasing optical artifacts. But it is interesting to observe the efforts to find use for interlacing techniques at the conceptual intersection between television and cinema. For example, Ernst F. W. Alexanderson tested a large-format projection mechanism for General Electric around 1926, which was meant to project seven independent light beams onto a plain screen, arranged so as to produce a complete picture. A simplified version of this multibeam projector was unsatisfactory because of light waves passing over the image, and the scheme was abandoned before planned improvements using interlacing could be implemented (Burns 1998, 211–213). Nevertheless, the idea of using interlacing to achieve screen sizes or image resolutions that would be acceptable even in the motion picture theater did not die out entirely. A similar large-screen television system that used film stock as an intermediate was exhibited by Telefunken's August Karolus at the 1933 Berlin Radio Exhibition and intended by Joseph Goebbels as an instrument of national-socialist propaganda. It featured four interlaced channels, each with a 24-line signal to produce an image of considerable quality, one square meter in size (Burns 1998, 234–236), or about the size of a modern 60-inch TV. Interlacing thus functioned as a common denominator of various overlapping image generation methods which, in hindsight, appear to us as distinct media: fac simile still image transmission, text transmission, photography, television and cinema. Randall Ballard's use of interlacing is an especially apt counterpoint to Schröter's.

Ballard's System

Two years after Schröter, the Radio Corporation of America (RCA) engineer Randall Ballard patented an interlacing method in the USA that was identical in principle to Schröter's, and at the same time driven by an entirely different impetus. Ballard's intention was to find a way to exploit the properties of the fluorescent screen so that it could be used to show sound films.

[I]t has been determined that, for the purpose of making use of this fluorescence to the best advantage, the received picture frequency should not be materially greater than sixteen pictures per second. However, when using standard sound-film, it is necessary, at the transmitter, to run the film at the
normal rate of 24 frames a second to reproduce the sound faithfully. (Ballard 1939, 1)
Finding suitable methods to reformat films to television was crucial to the success of the medium, since especially smaller local stations lacked content to fill their broadcasting schedule. Ballard's invention was predominantly meant as a means of achieving this. Ballard's interlacing patent presents an alternative to other destructive reformatting procedures. Because the latency of the phosphors Ballard had been working with limited the frame rate to 16 frames per second, television technology could not show a sequence of images at the same speed as cinema. This creates a problem that has three possible solutions. One could simply slow the film down, but that would distort the soundtrack and make voices sound disconcertingly deep. One could skip some of the frames, but that would distort the motion. Or one could produce a new print running at a slower speed, but that is expensive. The interlacing of odd and even lines was an ingenious workaround, a fourth, new solution. Interlacing allowed Ballard to rotate the scanning disc (which transforms images on the film into an electric signal) at a speed of only 12 revolutions per second, displaying half of the available 81 lines for each frame of film. As a result, the televised transmission would interlace 12 odd and 12 even fields every second and add up to the same frame rate as sound film. The reduction of both aural and visual disturbances was a central goal of Ballard's patent. Ballard's method was first put into practice when RCA presented a modified version of its television system in a very successful field trial to members of the British Television Committee in October 1934. The technology was based on Elmer Engstrom's studies on television image characteristics and RCA's trump card: Vladimir Zworykin's Iconoscope, the revolutionary high-sensitivity electronic camera tube (Burns 1998, 420–425). Ballard's method stuck, but the importance of reformatting film dwindled. For reasons we will get into in a moment, the frequency of the two interlaced fields was raised from 24 to 60 Hz in order to couple the scanning speed to the frequency of the alternating current in the electrical grid. The British delegates reported excellent results and applauded the lack of flicker compared to similar devices in the United Kingdom (ibid.). RCA continued to use this setup from then on, as did its affiliate EMI in Britain, with an adjustment to 50 Hz to match the mains frequency in Europe. Recognizing interlacing's triple usefulness in compressing bandwidth,
reducing flicker, and making effective use of phosphors, TEKADE and Loewe in Germany, alongside Telefunken, also began using interlacing in their electronic transmitters and receivers the following year (Goebel 1953). Experimentation with marginal adjustments like scanning vertically instead of horizontally continued, but roughly from this point onward, interlacing would remain in use by every TV manufacturer and station on the planet more or less unchanged for all of its analog existence and much of its digital present. Composite forms that combine elements of photochemical film and scanned electronic transmission seem like media-historical "hybrids" of cinema and television. But the notion of hybridity implies the intermixing of previously pure forms. Yet, as Anne-Katrin Weber has also argued, this is, in itself, a historical fallacy. There never was a purity or stability to begin with. As Ballard's patent also evidences, the borrowing of various technologies and methods and the circulation of content across a variety of formats was essential to early television. Moreover, the issue of frame rates, which causes incompatibilities between cinema and television and which Ballard initially intended to work out with interlacing, also hints at a broader dependence of television technology on electrical infrastructure.
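A minimal sketch in modern terms may help make the principle concrete. It is not Ballard's or Schröter's implementation (both worked with scanning discs and tubes, not arrays), and the frame, field and function names below are purely illustrative assumptions: the point is only that the odd-numbered and even-numbered lines of a picture can be sent as two separate half-resolution fields and woven back together at the receiver.

```python
import numpy as np

def split_into_fields(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split a frame (rows of luminance samples) into two interlaced fields.

    Rows 0, 2, 4, ... form the top field, rows 1, 3, 5, ... the bottom field.
    Each field carries only half the lines of the complete picture.
    """
    return frame[0::2], frame[1::2]

def weave(top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
    """Re-interleave the two fields into a full frame, as an interlaced display does."""
    frame = np.empty((top.shape[0] + bottom.shape[0], top.shape[1]), dtype=top.dtype)
    frame[0::2] = top
    frame[1::2] = bottom
    return frame

# A toy six-line "picture": every line is filled with its own line number.
picture = np.repeat(np.arange(6), 8).reshape(6, 8)
top, bottom = split_into_fields(picture)
assert np.array_equal(weave(top, bottom), picture)
# Each field occupies half the bandwidth of a full frame per transmission,
# which is the economy interlacing trades against interline artifacts.
```

In Ballard's film-scanning arrangement the two fields are read from the same film frame, so nothing moves between them; in live broadcasting they are captured a fraction of a second apart, which is one source of the interline artifacts this chapter keeps returning to.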
Double Standards: Synchronizing Television and the Electrical Grid

The ghosting artifacts visible in my DVD edition of Olympia are a direct result of the friction between television frame rate standards in Europe and North America. The standards are incompatible with each other, and both adhere to the local mains frequency. But why did television need to be synchronized with electrical infrastructure? Early television relied on the electrical grid to generate regular scanning pulse frequencies. It was around the same time in the 1920s and 30s that, after several decades, the multitude of electrical frequencies in use around the globe was approaching something resembling an industry-accepted consensus: 60 Hz became the prevalent norm across the Americas, and 50 Hz in the rest of the world, roughly speaking (Lamme 1918; Owen 1997; Mixon 1999). Early television image acquisition was done with flying spot scanners in completely dark studios. With the Iconoscope's dramatically improved light sensitivity, however, filming under normal light
conditions became possible in the mid-1930s. At the same time, the luminance of CRT screens increased, which made flicker at low frame rates much more pronounced, increasing the pressure towards higher rates. As early successful television broadcasts of the 1930s began growing in scale, both more professionalized studio settings and outside locations became more common. Simultaneously, interferences to both cameras and receivers grew more pronounced: car engines, railroads and alternators on the streets all generated stray magnetic fields that disturbed the transmitted images as noise or as various modulations in brightness (Kirke 1939). Since studio lights also flickered at the mains frequencies, television standards had to be adjusted to the same rate, because a mismatch between camera, electricity and light would produce brightness fluctuations in the form of bright stripes rolling across the image. Television standards were thus built on top of pre-existing local electricity norms, corresponding to the global 50/60 Hz frequency divide. There are electrotechnically expedient reasons for distributing current at these frequencies, but the two numbers are also to some degree arbitrary, and, fascinatingly enough, they have to do not only with technology but also with perception. These two dominant global standards seem to have been partially based on a desire to make flicker invisible to “correctly” functioning eyes. The history of the 50/60 Hz utility frequency norms can be traced back to the year 1891. The electrical systems engineer and historian Edward L. Owen argues that these two values were chosen primarily because they are just above the threshold beyond which the intermittent flicker of lamps becomes imperceptible to humans and fuses into the illusion of continuous illumination. AEG, the major German producer of electrical equipment in Berlin, found that at 50 oscillations per second, lightbulb flicker would not be visible to most people. Simultaneously in Pittsburgh, the Westinghouse company settled for the slightly higher 60 Hz (Owen 1997; Mixon 1999). It is thus not only the visual media of cinema and television that were built around averting flicker. The very “filmic” impulse to make interrupted light appear as continuous movement was already present in practices of artificial illumination. All street lights and indoor lightbulbs connected to the electrical power grid refer back to this aversion to flicker inscribed in the standard. Much of the electrical infrastructure that surrounds us is imprinted by humans’ incapacity to see intermittent light at a
given frequency, even if the exact number is clearly not an absolute value but a matter of taste, opinion and subjective perception. The frequency of the electrical grid, however, is not perfectly constant but fluctuates by tiny amounts. These fluctuations posed a significant synchronization problem for early television. If the television receiver was powered with electricity from a generator that had not been synchronized with the transmitter, the television image would trail and roam horizontally (Schubert 1931). Media scholars have argued that electrical glitches manifesting in audiovisual media can be read as marks of the presence of the state—its incompetencies and corruptions included—in the household (Larkin 2008; Marks 2014). In much the same way, the temporal cycles of social life and the periodicities of economic activity were reflected in the wobble of the early television image. At times when people went to or returned from work, the frequency of the electrical grid would fluctuate, and so would the images made with it. In order for television to provide stable, visually acceptable images, it was therefore not enough to synchronize its frame rates to the local or regional mains frequency. Rather, early television depended on the synchronicity of entire topographical networks of electricity supply. Improvements in oscillator and synchronization technology eventually removed the need to link television frame rates to the mains frequency. But by that time, the two frame rates had been formally standardized and adopted globally, eventually resulting in the long-lasting legacies of the NTSC and PAL/SECAM norms. Even the protracted, ongoing global switchover to digital broadcasting retained the original values, and interlaced video at 50 or 60 Hz remains the norm.

Interlacing in Cinema?

Contemporary approaches to television's history have stressed the need to think of it as a hybrid, turbid and "impure" medium in continuous transformation and permanent interdependence with other media (e.g. Stauff 2004; Keilbach and Stauff 2013; Weber 2014). One particular use of image "compression" similar to interlacing greatly underscores this. It predates even the earliest methods mentioned in any source so far by two decades: Max and Emil Skladanowsky's Bioscop, the apparatus used during what is considered by some to be the first film exhibition for a paying audience.
During their show in the Berlin Wintergarten in November 1895, the brothers presented film recordings of acrobats, athletes and dancers from a special double projector. The film strips, recorded at eight pictures per second according to the Skladanowskys’ own statement, were cut frame by frame and glued together in such a way that the even and odd frames each form an endless loop. Both loops are projected alternatingly with the Bioscop. Through a semicircle-shaped disc with a jagged edge rotating in front of both lenses, the transition from one frame to the other was dissolved softly. The elaborate process served to eliminate the annoying flicker that was bemoaned at film exhibitions until the first years of the twentieth century (Loiperdinger 2001). The Skladanowsky brothers removed flicker by physically cutting up the film and softly blending one image into another. The Bioscop could be said to be more “televisual” than “filmic,” in that instead of projecting discrete frames through a solid shutter, it produced soft transitions closer to those of a magic lantern or the later television. On the level of technique, the muted foreshadowings of video compression can thus already be discerned in some of the moving image formats now regarded as the threshold between the magic lantern and cinema “proper.” All these multiple competing “origins” of interlacing prevent us from thinking of television’s history in a single direction progressing from photography to film and towards computers. Instead of emerging in a sequence, it is more useful to think of the history of moving images in terms of superpositions and interferences of various methods and formats. Schröter’s approach reveals the solid and enduring ties that television has to photography, the telegraph, and the fax machine—technologies that all predate (early) cinema.4 On the other hand, Ballard’s method—and it is, to be sure, the same method; the only difference lies in what it aimed to achieve—was devised to remotely transmit film, with the explicit purpose of preventing aural distortions of sound films. It was certainly television, but it was at the same time a wirelessly streamed cinema, a cinema displaced. Ballard’s attempts to transmit film also show that one of the core techniques of the television apparatus did not have the broadcasting of live events as its most logical and immediate purpose. Encoded in each specific form of interlacing is thus a different vision of televised moving images.
4 Uricchio (2008) has made a similar point.
A media epigraphy that follows traces of interlacing and the ways in which it both solves some problems of inscription (atmospheric fading, flicker) and becomes a problem of inscription (interline jitter and such) makes it possible to uncover these historical networks of knowledge and practices that echo in the images we see on our screens. These networks reach far back into the nineteenth century and continue to orient our experience of technology in the present; the ideas circulated in 1930s compression research have prefigured many modern digital codecs.

Competing Visions of Television

The search for "firsts" and the knowledge of when and where exactly interlacing originated have some chronological value, but ultimately, they are less significant than the realization that interlacing emerged in many places, at multiple points in time, in different ways. Schröter's antidote to visual errors, developed for his impractical phototelegraph, found use as a highly practical compression technique in television. Ballard also applied interlacing as a remedy for distortions, but from a much different starting point. Interlacing is a comparatively simple technique, but its history is complex precisely because it emerged from conflicting, multidirectional desires responding to contradictory visions of what wireless moving images could and should be. Media epigraphy, I argue, can articulate these visions with some precision by clearing the view towards the concrete relationship between small traces and previously neglected environmental, infrastructural and sensorial factors. Schröter, Ballard, Sanabria, Hart and others are some of the names we could enlist in recounting interlacing's branching archaeology, along with fax machines, image dissection grids, competing electronic and mechanical television systems and formats, and photography and cinema. But equally important are the shortwave spectrum, atmospheric turbulences, electrical infrastructure and transregional electrical standards, and the human affinity for consistent brightness and dislike for flicker. Interlacing can productively inform media history precisely because it can function as such a versatile method in so many formats and media. It is a general, highly adaptable technique of manipulating images that can be implemented in a number of ways to achieve a number of goals. As a "transversal media practice" (Gansing 2013), interlacing and its traces distend the limits of television history and reaffirm what William Uricchio (2008) has called the interpretive flexibility of the medium. The relatively
simple idea that one can dissect an image into odd and even lines and transmit them out of order is central to the phenomenology of television. Interlacing artifacts were for many decades a staple component of the sensory definition of what watching television "felt" like, even if they may be difficult to notice and consciously observe on older television screens. Interlacing wove together scientific figurations of the human body's dis/abilities with the material properties of fluorescent screens and the infrastructural challenges of electrification. Just like the introduction of color TV capitalized on humans' low visual acuity for blue light, interlacing "worked because it incorporated the supposed limitations of its viewers into its technical standards and infrastructure" (Sterne and Mulvin 2014, 112) and utilized to its advantage precisely those retentive properties of fluorescent screens that are now considered undesirable. But interlacing has appeared and reappeared in many other specific and marginal historical and technological contexts, reflecting divergent understandings of television. Was television phototelegraphy but with movement? Was it cinema? Was it radio? Was it telephony enhanced with moving pictures, as was a major research focus around this time? Was it meant for entertainment, propaganda, news or private communication? Was it meant for live broadcasting, as Schröter had intended, or for the transmission of pre-recorded film, as Ballard intended, or for storage, as Hart envisioned? Was it a scientific instrument to study the physics of shortwave radiation, as some Telefunken scientists were deploying it? Was it meant to be brought into the private home or shown on large screens to big audiences in public? Was it a military technology to send sketches from land to airplanes, a precursor of unmanned military drones, as both Schröter and Sanabria described in some of their hypothetical applications? Television around 1930 was all of these things, and interlacing has appeared as part of most of them, achieving quite different objectives. Wanda Strauven (2007) has shown that in the first decades of the 1900s, a range of different modes of distribution had been envisioned for moving images. By 1930, this imaginary potential was still far from exhausted. By fixating not on interlacing's origin story but rather on the many specific formats it found utility in, we can do justice to the heterogeneity of television practices, and better understand how all of them were present as concurrent potentials, as promises in the idea of early television.
An Interlaced History

Fritz Schröter briefly became a media theorist himself when he reflected on technological history in a commemorative article celebrating the 50th anniversary of Telefunken's founding. In the span of three pages, he presents two contradictory models of media history. "The historical development of television is comparable to a stairway whose steps are characterized by always increasingly better technical, economical and civilizing results" (Schröter 1953, 4, my translation). Yet comparing the Nipkow disk with Adriano de Paiva's 1878 synchronous cell raster, instead of a staircase ascending to ever greater civilizatory heights, he suddenly changes his tune to a spiral that makes techniques submerge and reappear: "And so it is that techniques spiral up. In the same azimuth at every turn of the spiral, a single same principle returns again at a higher level of development" (Schröter 1953, 7, my translation). A Whiggish teleology that progresses in increasingly improving steps is replaced with a spiral model of history somewhat reminiscent of Erkki Huhtamo's media archaeology centered around cyclical recurrences (Huhtamo and Parikka 2011). If anything, interlacing does not fit either of Schröter's historiographical templates very well. The history of interlacing seems less like a spiral or staircase, and more like some form we do not have a name for yet, but which is impossible to fully visualize as a simple two-dimensional shape with a clear direction. A complex spectrum, perhaps. While I have placed an emphasis on Schröter and his invention in this chapter, the network circumscribed by interlacing extends much farther and crosses many other media. In fact, applying a wider lens to the history of compression would reveal that a great number of digital compression methods that would only come to fruition much later had already been envisioned in experiments with early analog television. In August 1936, eight years after the phototelegraph experiments and just a few days after the Berlin Olympics had ended in Nazi Germany, Schröter applied for another patent for an image compression method. Its American version reads: "[A]ccording to the present invention the brightness or shading or density values of the picture elements are transmitted only to the extent where, contrasted with the corresponding and respective picture elements of the preceding frame, they have experienced an actual change" (Schröter 1940, 1). Schröter thus made one of the first proposals for difference encoding, the basis of every major digital consumer video format. He suggested compressing the signal and increasing
spectrum efficiency by transmitting not complete brightness information for every frame, but only its difference from the previous frame. The originary medium for Schröter’s new video compression was, again, a film strip in which two frames were scanned simultaneously and their luminance difference calculated electrically. In his keen analysis of compression codecs, the media scholar Adrian Mackenzie has suggested that digital video compression challenges cinematic and televisual perception, on the basis that it completely reorganizes the coherent frame of the image into an interconnected database of spatially and temporally discorrelated brightness and motion vectors (Mackenzie 2008, 2013). Schröter’s patent provides a subtle but significant corrective under which the differences between these audiovisual media appear much less rigid and clear-cut than Mackenzie’s conclusion would seem to admit. Undoubtedly, there are good reasons to accept the premise that cinema, television and digital video, as very large theoretical and historical categories of thought, each have their dominant mode of visuality and some consistent commitment as to what constitutes “an image,” enforced through technological standards and aesthetic conventions. But at their margins, once one looks at the multiple formats and practices that go by the names cinema and television, things become blurry. Motion compression does not start with the software patent lineage posited by Mackenzie, nor is it such a radical break with the microeconomies of the analog television signal. Decades prior to digital video, inter-frame compression techniques had already been described, anticipated and recognized by Schröter and others as a suitable solution to the problem of economical image transmission. It is true that the analog implementations posed significant engineering obstacles that would only really be worked out computationally decades later. But the technique—the notion of saving bandwidth by encoding only the difference of two values instead of each individual value—was already simmering at the core of early analog signal processing. This parallels many other more or less successful methods of compressing images that have been proposed throughout video’s history. In the 1970s, neuroscience had discovered two independent channels of human vision, borne by two types of ganglion cells in the brain that each have different sensitivities to motion. In response, some high-definition television systems proposed during the 1980s were developed on the basis of dual resolution, where motion content was transmitted separately from and with lower resolution than detailed patterns (Katz 1989). The insight
that still and moving images had entirely different psychovisual requirements had also been around since the early days, and was, in fact, very much won thanks to the porousness between phototelegraphy and television research (e.g. Urtel 1936; Roeßler 1951).5 Many other theoretical suggestions for compression methods—such as transmitting only the image portions in the center of the visual field where human eyes are sensitive to high resolution—were circulated in early television research around 1930. PALplus, as discussed in the previous chapter, managed to fit a wide-format image into a previously standard-definition signal in 1995 and remain backward-compatible. Similar methods using quadruple interlacing were already being tested in the 1930s: designs existed that would allow the same broadcast to be shown on both common home receivers and large high-resolution screens by "hiding" the higher resolution in two additional interlaced fields (Reichel 1939). This is nothing short of astonishing. It means that in the late 1930s, television engineers were already anticipating a future in which moving images would be watched on screens of different sizes, and began developing compression methods to accommodate multiple image resolutions. Television would not be facing such a challenge again until 60 years later, when widescreen TV began competing with the conventional format, necessitating the invention of dual systems like PALplus. Yet efficiently compressing video so that it can scale to different formats even now remains one of the most critical technical difficulties that distribution platforms like YouTube and Netflix face.

5 One such system, which compressed stillness and motion separately, was in practical use until 2009 in the Japanese high-definition satellite broadcasting format MUSE. This pioneering analog-digital hybrid used a four-field dot-interlacing pattern with motion compensation and other complex techniques to massively compress moving portions of the image. Stationary images were transmitted with full resolution. This type of interlacing divides the image into four instead of two fields, and additionally divides individual lines into a grid of dots that are also skipped in a specific order. This produces an idiosyncratic and recognizable five-eyed dice pattern in the image. Unhappy with the disturbing visual artifacts of interlacing, Schröter had patented a similar early dot-interlace method as early as 1946. Similar approaches were tested some years later during experiments with color television in the United States.

All of these early approaches to image transmission underscore that compression codecs and compression logics at large have a past that long precedes digital computational media. If codecs and formats are viewed not as technologies but as techniques—as methods and practices, as ways
of doing things that are frequently borrowed and repurposed—then, depending on how closely we look, they seem to emerge well before digital video with analog television, and before it with phototelegraphy, and perhaps even other media before them. By situating many principles of digital video compression within 1950s analog color television technology and even earlier telephony research at Bell Labs, Jonathan Sterne and Dylan Mulvin have productively upset the distinctions between conventionally separate domains like audio and video, analog and digital (Sterne and Mulvin 2014). We could stretch the historical elasticity of these categories even further. Fritz Schröter's and his contemporaries' ideas demonstrate that it is possible to push the emergence of now common digital video compression methods (such as motion compensation) back into the ebullient period of interwar television research of 1926–1936. I mention this not in order to cast Schröter as a prescient genius whose role in video compression has unjustly been ignored by television historians, although there is certainly more to be said about his contributions to the field. Schröter himself acknowledges Baird's precedence with interlacing, and his patent claim, after all, was not the interlacing method per se, but its application to a cathode ray tube receiver. Nor is my goal to claim that all technologies have already been prefigured, invented or imagined before—for example, that the Skladanowsky brothers "invented" interlacing because they had used a somewhat analogous technique in their film projection. Such a move would only replace a teleological history of compression with a teleological history of the invention of compression, to paraphrase both Michel Frizot and Peter Geimer. Rather, I see Schröter's pivot from phototelegraphy into television as one event in a series of permutations that highlights the necessity to, as Sterne and Mulvin put it, "think transversally across histories of technologies, culture and sensation" (2014, 134). The historiographical value lies in recognizing interlacing as a complex superposition of techniques and formats that can be addressed in many interesting and as yet unseen ways. But besides questions of historiography, interlacing's repeated appearances across many surprising contexts and its lingering presence in digital video also raise some questions about the "post-cinematic." Many of the purported identifying characteristics of contemporary digital visual culture actually predate digital visual media as such. The fragmentation of the frame, its processual ontology as a performed set of microtemporal signal instructions rather than a coherent image, its recombinability and
reformattability, its lapse from the phenomenological horizon of subjective human experience into the subperceptual or the fully asensory, its non-linear storage order reconstituted in sequence only during projection—all of these are recognizable in nuce in interlacing, in analog-digital hybrid formats like PALplus, in the various experimental analog signal compression techniques, and, most strikingly, even in some of the earliest instances of photochemical film projection. Shane Denson (2020) has argued that computational imaging and related processes have extensively transformative, discorrelative effects on perception and subjectivity in the digital era. The methods I use can neither confirm nor refute these larger claims and it would be a mistake to object to them on the basis of the single example of interlacing only. But my spotlight on practices and techniques in this chapter—together with some of the compression methods we shall encounter later—seems to demand a reassessment of the dependencies between perception, visuality and technological infrastructure, particularly in regards to the degree to which we can locate the source of these transformations in digital computational media. At stake is not whether we can effectively diagnose the existence of a “truly posthuman, post-perceptual media regime” (Denson 2020, 2). It is whether we can accurately pinpoint its true historical conditions of emergence, and the extent to which they can be attributed to digital infrastructure alone.
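Before moving on, Schröter's 1936 difference-encoding proposal discussed above can be restated as a deliberately naive sketch. This is a modern illustration under stated assumptions, not Schröter's electrical film-scanning implementation and not a real codec, which would add motion compensation, quantization and entropy coding on top of the bare principle of sending only what has changed:

```python
import numpy as np

def encode_differences(frames: list[np.ndarray]) -> list[np.ndarray]:
    """Send the first frame whole; every later frame only as its change
    relative to the preceding one (inter-frame difference encoding)."""
    encoded = [frames[0].astype(np.int16)]
    for prev, curr in zip(frames, frames[1:]):
        encoded.append(curr.astype(np.int16) - prev.astype(np.int16))
    return encoded

def decode_differences(encoded: list[np.ndarray]) -> list[np.ndarray]:
    """Rebuild the sequence by accumulating the differences onto the first frame."""
    frames = [encoded[0]]
    for diff in encoded[1:]:
        frames.append(frames[-1] + diff)
    return frames

# Two nearly identical frames: a single picture element changes between them.
frame_a = np.zeros((4, 4), dtype=np.uint8)
frame_b = frame_a.copy()
frame_b[2, 2] = 200

encoded = encode_differences([frame_a, frame_b])
assert np.count_nonzero(encoded[1]) == 1   # the "update" is almost entirely empty
decoded = decode_differences(encoded)
assert np.array_equal(decoded[1], frame_b.astype(np.int16))
```

The saving comes from the second transmission being almost entirely zeros wherever the scene holds still, which is the same economy Schröter's patent describes for the analog signal.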
Spectral Politics

Compression methods appear to operate on their own temporal plane. They are closely intertwined with media, but also separate from them, persisting throughout long stretches of history and surviving even serious media-historical ruptures like the digital switchover. Yet they also oftentimes become dormant or "asymptomatic," only to atavistically reappear with surprising ease and adaptability in new contexts even after their original media-technological habitat had become obsolete. In this sense, compression techniques like interlacing not only introduce parasitical traces that remain visible over generations. They can themselves be considered as having a viral nature relative to media: they move laterally rather than simply forward in time, jumping from host to host; they mutate, adapt and are adapted along the way. Audiovisual engineers thought for a long time that interlacing would eventually disappear as the broadcasting and video industries
gradually digitized. Neither the MPEG-1 standard, nor the JPEG2000 compression scheme, nor MXF, the gold standard data format of the video production industry, included support for interlacing, because interlacing was not seen as a viable or sensible future modality of digital video. But ignoring video's past was a costly and cumbersome oversight for its future. It meant that JPEG2000/MXF, a major video preservation format, had no specifications for interlaced video and therefore precluded a colossal portion of moving images from being safely archived in this format according to international guidelines (Jones 2019). Even in those cultures and communities that tend to reformat and migrate their audiovisual media often, obsolete formats and legacy techniques can and will continue re-emerging, sometimes in very inconvenient ways. In hindsight, we could call interlacing television's number one compression method. It demonstrates the richness of approaches to video compression before "compression" even became a coherent field of research. Schröter's experiments with shortwave also urge us to reconsider the political economy of the frequency spectrum. The importance of shortwave radio as a global medium has faded considerably in the last two decades as it has gradually been replaced by the Internet. But very recently, shortwave has been "rediscovered" for high-frequency financial trading. With the rise of algorithmic trading, even the fastest cable technology is perceived as having too much latency. On account of its global reach at the speed of light, traders are increasingly experimenting with shortwave. And they face much the same obstacles Fritz Schröter did 90 years ago: unreliable ionosphere conditions, disturbances due to sunspots, echoing, and so on. One might say with a wink that even the cosmos itself resists speculative finance capitalism. As this once major public broadcasting technology is gradually being zombified in privatized applications by speculative trading firms, artists like Amanda Dawn Christie are making important media-archaeological interventions that help preserve the memory of shortwave's communicative functions. Christie's performances, installations, sculptures like The Marshland Radio Plumbing Project, and films like Spectres of Shortwave (2016) document both the disappearance of shortwave radio's visible infrastructure and its lingering ethereal and spectral presence in the household and environment. The "spectral turn" (del Pilar Blanco and Peeren 2013) in the humanities has contributed many exciting investigations into the paranormal economy that haunts media. Interlacing is one such specter. It has
travelled on to non-interlaced environments like video streaming services, audiovisual archives and online repositories, where improperly de-interlaced files reformatted from old VHS tapes and DVDs circulate in the millions, and where its artifacts become even more pronounced as a result of the optical properties of LCD and LED screens. Despite—or perhaps due to—its degeneration into an unwanted visual disturbance, interlacing lingers on in our visual space. It inscribes itself in the form of traces of a long afterlife of an analog past, quite alive and well in digital environments. In this way, an epigraphy of interlacing can make a contribution to the task of historicizing the function of shortwave telegraphy in the development of television. At Telefunken, much of the knowledge gained with experiments in phototelegraphy—including the realization that the transmission of moving images, contrary to what had previously been believed, was not possible over shortwave—was immediately rerouted and applied to television research. The many potentials of television around 1930 were all corralled by the materiality of the Hertzian spectrum. The propagation characteristics of radio waves place a limit on what television can be: a sharp and clear picture with high resolution requires high frequencies, yet those high frequencies also constrict its broadcastability to a region stretching no farther than a few large cities. The way around these fundamental limits of physics, in the 1930s as today, is to cheat perception with compression. "Cinema and current television technology live by defrauding the eye" (Reissaus 1931, 189, my translation).
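The combing left behind by improperly de-interlaced files, mentioned above, can likewise be illustrated schematically. The sketch below is an assumed toy model rather than a description of any particular player, archive workflow or deinterlacing filter: two fields captured at different moments are naively woven into one progressive frame, and wherever the scene moved between the two field times, adjacent lines disagree, producing the serrated edges that progressive LCD and LED panels render so visibly.

```python
import numpy as np

def weave_fields(top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
    """Naive "weave" deinterlacing: interleave two fields into one progressive frame."""
    frame = np.empty((top.shape[0] * 2, top.shape[1]), dtype=top.dtype)
    frame[0::2] = top
    frame[1::2] = bottom
    return frame

# A bright vertical bar on a dark background, captured at two field times.
# Between the two captures the bar has moved two pixels to the right.
moment_1 = np.zeros((8, 12), dtype=np.uint8)
moment_1[:, 3:5] = 255
moment_2 = np.zeros((8, 12), dtype=np.uint8)
moment_2[:, 5:7] = 255

# Interlaced capture: even lines come from moment 1, odd lines from moment 2.
woven = weave_fields(moment_1[0::2], moment_2[1::2])

# Adjacent lines now disagree wherever the bar moved: the combing artifact.
combing = np.abs(woven[0::2].astype(int) - woven[1::2].astype(int))
print(combing.max())  # 255: the two field times disagree about where the bar is
```

A "bob" deinterlacer avoids the combing by showing each field on its own, doubled vertically, at the cost of vertical resolution; either way, the artifact is a direct trace of the two fields' different capture times.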
References

Ballard, Randall C. 1939. Television System. US patent no. 2152234.
Batchen, Geoffrey. 2006. Electricity Made Visible. In New Media, Old Media: A History and Theory Reader, ed. Wendy Hui Kyong Chun and Thomas Keenan, 27–44. Psychology Press.
Burns, Russell W. 1998. Television: An International History of the Formative Years. London: The Institution of Engineering and Technology.
Campbell, Timothy. 2006. Wireless Writing in the Age of Marconi. Minneapolis: University of Minnesota Press.
Carey, James W. 1989. Technology and Ideology: The Case of the Telegraph. In Communication as Culture: Essays on Media and Society, 155–177. Boston: Unwin Hyman.
Cook, Charles Emerson. 1900. Pictures by Telegraph. Pearson's Magazine, April.
Denson, Shane. 2020. Discorrelated Images. Durham: Duke University Press.
Festival de Cannes. 2019. How to Upload Your Film? Cannes Court Métrage. Accessed August 24. https://www.cannescourtmetrage.com/en/participer/detail-des-caracteristiques-techniques/shorts-in-competition.
Gansing, Kristoffer. 2013. Transversal Media Practices: Media Archaeology, Art and Technological Development. Malmö University, Faculty of Culture and Society.
Gfeller, Johannes, Agathe Jarczyk, and Joanna Phillips. 2013. Kompendium der Bildstörungen beim analogen Video. Zurich: Scheidegger & Spiess.
Gitelman, Lisa. 2008. Always Already New: Media, History, and the Data of Culture. Cambridge, MA: The MIT Press.
Goebel, Gerhart. 1953. Das Fernsehen in Deutschland bis zum Jahre 1945. Archiv für das Post- und Fernmeldewesen 5: 259–393.
Graf von Arco, Georg. 1929. Bildtelegraphie und Fernsehen. I.4.095 NL Federmann—170. Historical Archive of the German Technology Museum Berlin.
Hart, Samuel Lavington. 1915. Improvements in Apparatus for Transmitting Pictures of Moving Objects and the like to a distance Electrically. UK patent no. 15,270.
Hell, Rudolf. 1940. Die Entwicklung des Hell-Schreibers. Hell Technische Mitteilungen: Gerätentwicklungen aus den Jahren 1929–1939: 2–11.
Huhtamo, Erkki, and Jussi Parikka, eds. 2011. Media Archaeology: Approaches, Applications, and Implications. Berkeley: University of California Press.
Huurdeman, Anton A. 2003. The Worldwide History of Telecommunications. Hoboken: John Wiley & Sons.
Ilberg, Waldemar. 1933. Ein Jahrzehnt Bildtelegraphie und Fernsehen. Telefunken-Zeitung.
Jones, Jimi. 2019. So Many Standards, So Little Time: A History and Analysis of Four Digital Video Standards. Doctoral dissertation, Urbana-Champaign: University of Illinois.
Katz, David. 1989. High Definition Television Technology and its Implications for Theatrical Motion Picture Production. Journal of Film and Video 41: 3–12.
Keilbach, Judith, and Markus Stauff. 2013. After the Break: Television Theory Today. In When Old Media Never Stopped Being New. Television's History as an Ongoing Experiment, ed. Marijke de Valck and Jan Teurlings, 79–98. Amsterdam: Amsterdam University Press.
Kinne, Erich. 1931. Zur Erzielung grösserer Bildpunktzahlen beim Fernsehen. Fernsehen und Tonfilm 2: 36–38.
Kirke, H.L. 1939. Recent Progress in Television. Journal of the Royal Society of Arts 87: 302–327.
Kittler, Friedrich A. 1990. Discourse Networks 1800/1900. Translated by Michael Metteer. Stanford, CA: Stanford University Press.
———. 1996. The History of Communication Media. ctheory.net.
———. 2009. Towards an Ontology of Media. Theory, Culture & Society 26: 23–31. https://doi.org/10.1177/0263276409103106.
Krämer, Sybille. 2012. Punkt, Strich, Fläche: Von der Schriftbildlichkeit zur Diagrammatik. In Schriftbildlichkeit. Wahrnehmbarkeit, Materialität und Operativität von Notationen, ed. Sybille Krämer, Eva Cancik-Kirschbaum, and Rainer Totzke, 79–100. Berlin/Boston: De Gruyter. https://doi.org/10.1524/9783050057811.79.
Lamme, B.G. 1918. The Technical Story of the Frequencies. Transactions of the American Institute of Electrical Engineers 37: 65–89. https://doi.org/10.1109/T-AIEE.1918.4765522.
Larkin, Brian. 2008. Signal and Noise: Media, Infrastructure, and Urban Culture in Nigeria. Durham: Duke University Press Books.
Liebich, Helmut. 1990. Hellschreiben: Nostalgie oder Realität. Funkamateur, November.
Loiperdinger, Martin. 2001. Die Anfänge des Films. In Medienwissenschaft: Ein Handbuch Zur Entwicklung Der Medien Und Kommunikationsformen. 3. Teilband, ed. Joachim-Felix Leonhard, Hans-Werner Ludwig, Dietrich Schwarze, and Erich Straßner, 2:1161–1167. Handbücher zur Sprach- und Kommunikationswissenschaft [HSK] 15. Berlin: Walter de Gruyter.
Mackenzie, Adrian. 2008. Codecs. In Software Studies: A Lexicon, ed. Matthew Fuller, 48–55. Cambridge, MA: The MIT Press.
———. 2013. Every Thing Thinks: Sub-representative Differences in Digital Video Codecs. In Deleuzian Intersections: Science, Technology, Anthropology, ed. Casper Bruun Jensen and Kjetil Rodje, 139–154. New York/Oxford: Berghahn Books.
Maddalena, Kate, and Jeremy Packer. 2014. The Digital Body: Telegraphy as Discourse Network. Theory, Culture & Society 32: 93–117. https://doi.org/10.1177/0263276413520620.
Magoun, Alexander B. 2007. Television: The Life Story of a Technology. Westport/London: Greenwood Publishing Group.
Marks, Laura U. 2014. Arab Glitch. In Uncommon Grounds: New Media and Critical Practices in North Africa and the Middle East, ed. Anthony Downey, 257–272. London: Tauris.
Marshall, Paul. 2011. Inventing Television: Transnational Networks of Co-operation and Rivalry, 1870–1936. University of Manchester.
———. 2018. Interlacing—The Hidden Story of 1920s Video Compression Technology. The Broadcast Engineering Conservation Group. https://becg.org.uk/2018/12/16/interlacing-the-hidden-story-of-1920s-video-compression-technology/.
McLean, Donald F. 2000. Restoring Baird's Image. London: The Institution of Engineering and Technology.
Mixon, P. 1999. Technical Origins of 60 Hz as the Standard AC Frequency in North America. IEEE Power Engineering Review 19: 35–37. https://doi.org/10.1109/MPER.1999.1036103.
O'Neal, James E. 2008. Ulises A. Sanabria and the Origins of Interlaced Television Images. IEEE Broadcast Technology Society Newsletter 16: 15–18.
Owen, E. L. 1997. The Origins of 60-Hz as a Power Frequency. IEEE Industry Applications Magazine 3: 8, 10, 12–14. https://doi.org/10.1109/2943.628099.
Pictures Successfully Sent by Telegraph at Last. 1899. The San Francisco Call, May 7.
del Pilar Blanco, María, and Esther Peeren. 2013. Introduction: Conceptualizing Spectralities. In The Spectralities Reader: Ghosts and Haunting in Contemporary Cultural Theory, ed. María del Pilar Blanco and Esther Peeren, 1–36. London: Bloomsbury Academic.
Raeck, F. 1939a. Die geschichtliche Entwicklung des Zeilensprungverfahrens I: Zeilenverschiebung, Zeilensprung und andere Veränderungen der Zeilenlage. Mechanische Anordnungen zur Durchführung entsprechender Abtastbewegungen. Fernsehen und Tonfilm, April.
———. 1939b. Die geschichtliche Entwicklung des Zeilensprungverfahrens II: Die ersten Anwendungen der Braunschen Röhre beim Zeilensprungverfahren. Fernsehen und Tonfilm 10: 25–30.
———. 1939c. Die geschichtliche Entwicklung des Zeilensprungverfahrens III: Das Zeilensprungverfahren bei der Abtastung von Tonfilmen. Fernsehen und Tonfilm 10: 53–56.
Reelport. 2015. How to Prepare a Film File for the Upload. Accessed September 14. http://www.festival-cannes.fr/assets/File/WEB%202015/PDF/How_to_prepare_a_film_file_for_the_upload.pdf.
Reichel, Wilhelm. 1939. Der Mehrfachzeilensprung. Hausmitteilungen der Fernseh-AG 1: 171–179.
Reissaus, Georg Günther. 1931. Bildpunktzahl und Bildpunktfrequenz. Fernsehen und Tonfilm 2: 187–189.
Roeßler, Erwin. 1951. Telefunken und die Entwicklung des Fernsehens ab 1928. Telefunken-Zeitung, March.
Schröter, Fritz. 1926. Drahtlose Bildtelegraphie. Elektrotechnische Zeitschrift: 719–721.
———. 1928. Fortschritte in der Bildtelegraphie. Elektrische Nachrichten Technik 5: 449–458.
———. 1930. Verfahren zur Bildzerlegung bzw. Zusammensetzung. German patent no. 484765.
———. 1932a. Die Zerlegungsmethoden der Fernbildschrift. In Handbuch der Bildtelegraphie und des Fernsehens: Grundlagen, Entwicklungsziele und Grenzen der elektrischen Bildfernübertragung, ed. Fritz Schröter, 1–25. Berlin: Springer.
———, ed. 1932b. Handbuch der Bildtelegraphie und des Fernsehens: Grundlagen, Entwicklungsziele und Grenzen der elektrischen Bildfernübertragung. Berlin: Springer.
———. 1933. Verfahren zur Abtastung von Fernsehbildern. German patent no. 574085A.
———. 1937. Bildtelegraphie und Fernsehen. 2. Die Physik 5: 1–20.
———. 1940. Television System. US patent no. 2,202,605.
———. 1953. Aus der Fernseh-Entwicklung bei Telefunken. Rückblick und Ausblick. Telefunken-Zeitung—50 Jahre Telefunken. Festschrift zum 50jährigen Jubiläum der Telefunken Gesellschaft für drahtlose Telegraphie m.b.H. 26: 191–196.
Schubert, G. 1931. Zur Netzsynchronisierung von Fernseh-Empfängern. Fernsehen und Tonfilm 2: 105–120.
Schubin, Mark. 2016. More, Faster, Higher, Wider: A Brief History of Increases in Perceptible Characteristics of Motion Imaging. SMPTE Motion Imaging Journal 125: 32–40. https://doi.org/10.5594/JMI.2016.2579138.
Siegert, Bernhard. 2003. Passage des Digitalen: Zeichenpraktiken der neuzeitlichen Wissenschaften, 1500–1900. Berlin: Brinkmann & Bose.
Stauff, Markus. 2004. Das neue Fernsehen. Machteffekte einer heterogenen Kulturtechnologie. Dissertation, Bochum: Ruhr-Universität.
Sterne, Jonathan, and Dylan Mulvin. 2014. The Low Acuity for Blue: Perceptual Technics and American Color Television. Journal of Visual Culture 13: 118–138. https://doi.org/10.1177/1470412914529110.
Strauven, Wanda. 2007. The Imagination of Wireless Distribution. In Networks of Entertainment: Early Film Distribution 1895–1915, ed. Frank Kessler and Nanna Verhoeff, 295–303. London: John Libbey Publishing.
———. 2008. S/M. In Mind the Screen: Media Concepts According to Thomas Elsaesser, ed. Jaap Kooijman, Patricia Pisters, and Wanda Strauven, 276–287. Amsterdam: Amsterdam University Press.
The Herald's Test of New Method of Transmitting new Pictures by Wire. 1898. New York Herald, January 7.
Udelson, Joseph H. 1989. The Great Television Race: A History of the American Television Industry, 1925–1941. Tuscaloosa: University of Alabama Press.
Uricchio, William. 2008. Television's First Seventy-Five Years: The Interpretive Flexibility of a Medium in Transition. In The Oxford Handbook of Film and Media Studies, ed. Robert Phillip Kolker, 286–305. New York: Oxford University Press.
Urtel, R. 1936. Das Zeilensprungverfahren im Fernsehen. Telefunken-Zeitung: 36–42. I.4.095 NL Federmann—292. Historical Archive of the German Technology Museum Berlin.
Walter, Johann. 1898. Verfahren zur Uebertragung von Zeichnungen, Handschriften u. dgl. in die Ferne. German patent no. 98627.
———. 1899. Verfahren zur telegraphischen Uebertragung von Zeichnungen. Elektrotechnische Zeitung 20: 59–61.
Weber, Anne-Katrin. 2014. Recording on Film, Transmitting by Signals: The Intermediate Film System and Television's Hybridity in the Interwar Period. Grey Room 56: 6–33. https://doi.org/10.1162/GREY_a_00148.
Weiss, Georg. 1937. Zur Frage der deutschen Fernseh-Rundfunknormung. Fernsehen und Tonfilm 8: 45–47.
Winston, Brian. 1998. Media Technology and Society: A History From the Telegraph to the Internet. London; New York: Routledge.
CHAPTER 4
+Et cetera in Infinitum: Harmonic Analysis and Two Centuries of Video Compression
Et ignem regunt numeri. Joseph Fourier, Théorie Analytique de la Chaleur (1822)
One of the most outrageous and revolutionary books ever written in mathematics started out as a repeated failure. On December 21, 1807, the august institution that would later become the French Academy of Sciences read a study presenting a mathematical model of the propagation of heat in solid bodies. It was sent to the Académie by Joseph Fourier, then Prefect of Isère, a department in the southeast of France. Fourier was approaching 40 and had had an eventful life by this time: a teacher and politician, he had been arrested, briefly imprisoned, escaped the guillotine and was released in 1794—not coincidentally the year in which, according to media theorist Friedrich Kittler, Napoleon initiated modernity (Kittler in Fuller 2008). Fourier then became an Egyptologist and historian, and an amateur mathematician and physicist. Napoleon appointed him secretary of the Institut d'Égypte in Cairo in 1798 before dispatching him back to Paris in 1801 and then to Grenoble in 1802, where Fourier then oversaw infrastructural development in the département (Grattan-Guinness 1972, 2005; Herivel 1975). Some of Fourier's biographers believe they recognize a causal link between his sojourn in Egypt and his research on heat: Fourier not only studied the physical
propagation of heat but is considered by many to be the discoverer of the greenhouse effect (Prestini 2004).1 Fourier’s treatise was initially rejected for publication by the committee. But the Académie encouraged him to conduct further research by designating the same topic, heat propagation, as the subject of a scientific competition in 1811. With only a single paper competing against him, Fourier won with an extended version under the title The Theory of Heat. The committee, however, consisting of some of the greatest mathematical minds of France, criticized its lack of generality and rigor. It would take 15 years after the original submission, once he himself had become chairman of the Académie, until Fourier’s disquisition properly appeared in print as The Analytical Theory of Heat.
1 Fleming (2005) corrects some of the historical misattributions and inaccuracies in the reception of Fourier's work on climate.

Joseph Fourier and the Theory of Heat

The reason it took one and a half decades and two failures before the book was finally published lay in some of its extravagant and eyebrow-raising mathematical tricks and claims. Fourier was trying to find a way to mathematically model how heat diffuses in solid bodies over time. The flow of heat in a material can be described by a partial differential equation, now known simply as the heat equation. Fourier solved the heat equation with an infinite sum of sines and cosines—now known as a Fourier series. Unless one is mathematically inclined, it may be difficult to appreciate the gravity of this solution, but the shock and rupture it caused in the fabric of mathematical science was such that it led to the birth of an entirely new field of mathematics. Trigonometric expansions—solutions to mathematical problems using only functions like sine and cosine—were known to be applicable to simpler, ordinary differential equations. Mathematicians prior to Fourier had known for a few decades that some complex functions could be represented as sums of simpler sinusoids. Painting is a good analogy for this. Children learn early on that colors can be added to create new colors. Mixing yellow and blue can be a substitute when you want to paint grass but you've run out of green. In the same way, sine and cosine functions combined in the right proportions can solve a range of mathematical problems. But the utility of sine and cosine sums was mostly theoretical
and limited because it was believed that they lacked generality and could only represent smooth, continuous curves (Grattan-Guinness 1990). Fourier provided a practical way of decomposing complicated functions into sine and cosine coefficients. One of the central controversies was his claim that any arbitrary function could be represented in this way (Fourier 1822, 233–234). The reason why this is extraordinary is that it implies that even a function that is discontinuous or has sharp jumps can be expressed as a sum of continuous and periodic functions. This would mean that it is possible to create, for example, a square wave—a discontinuous, discrete digital pulse—made entirely out of smooth, continuously undulating curves. In the early nineteenth century, this was the equivalent of claiming that yellow could be produced from the right combination of black and blue. It would go against everything we think we know about color. And for this reason, Fourier's solutions not only seemed extremely counterintuitive to Joseph-Louis Lagrange, the doyen of the committee, whose influence at the Académie prevented the study from being published; they also caused a crisis in the mathematical understanding of functions and continuity. The problem was that Fourier's solution produced verifiable results. "Fourier was modeling actual physical phenomena. His solution could not be rejected without forcing the question of why it seemed to work," as the mathematician David Bressoud (2007, 4) explains. In a manner of speaking, Fourier forced mathematics to rethink not only the basics of color mixing, but the definition of color itself. The impact of Fourier's method was so profound that it transformed calculus into a new branch of mathematics: analysis. Like geometry or algebra, analysis is a specialized area of mathematics that, just like the noun "analysis" in the general sense, often involves the decomposition of complex mathematical objects into simpler components. "Fourier analysis" and its more general form "harmonic analysis" are now the names of both a mathematical technique and the mathematical discipline that studies the representation of arbitrary functions in terms of simpler trigonometric functions. These techniques form the bedrock for much of digital video compression. As Friedrich Kittler (2003) has observed, the methods of Fourier analysis have allowed the exploration of what seemed like an entirely new realm of reality, the frequency domain. Analytical techniques can be imagined not simply as mathematical procedures that represent functions in terms of other functions, but as epistemic machines that can reformat a vast aggregate of measurable physical phenomena—sound, heat, light, vibration,
electricity, planetary orbits, the tides—into a unified vocabulary of computable equations and storable data. Analysis, this new mathematical medium, made it possible to use the past measurements of intangible phenomena to calculate and project the future. It became a "universal technique of description" (Siegert 2003, 247, my translation). Analysis was the culmination of the arts of decomposing the eternal, an impulse within mathematics initiated by the likes of Leibniz and Newton in the late seventeenth century. Its uncanny ability to refract the world into its components and extract rhythm and periodicity from chaos had serious cosmological consequences. Infinity, after all, used to be the domain of God. Analytical computing techniques and their "reckless disregard for the dangers of the infinite" (Bressoud 2007, 22) allowed rational operations with infinity, something previously unthinkable. Fourier himself was quite aware of the great universalizing force of analysis and its power to draw together seemingly unrelated contingencies in the material world: [M]athematical analysis is as extensive as nature itself; […] It brings together phenomena the most diverse, and discovers the hidden analogies which unite them. If matter escapes us, as that of air and light, by its extreme tenuity, if bodies are placed far from us in the immensity of space, if man wishes to know the aspect of the heavens at successive epochs separated by a great number of centuries, if the actions of gravity and of heat are exerted in the interior of the earth at depths which will be always inaccessible, mathematical analysis can yet lay hold of the laws of these phenomena. It makes them present and measurable, and seems to be a faculty of the human mind destined to supplement the shortness of life and the imperfection of the senses […]. (Fourier 1878, 7–8)
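A concrete instance of the claim at stake, given here in the form of the standard textbook series rather than anything taken from Fourier's own treatise: an ideal square wave that jumps between −1 and +1 can be written as

f(x) = (4/π) (sin x + (1/3) sin 3x + (1/5) sin 5x + (1/7) sin 7x + …),

a sum in which every individual term is a perfectly smooth, endlessly undulating sine curve. Each added term sharpens the corners of the wave a little further; only the completed, infinite sum is genuinely discontinuous.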
Sciences and industries could learn to use analysis to model and manipulate an astonishing array of physical events. After William Thomson (Lord Kelvin) applied Fourier analysis to the calculation of tides in the 1860s, within just a few years, the newfound ability to model and predict the actions of the oceans profoundly changed the nature of seafaring and the productivity of fishing. Today, analysis is used on subatomic scale in crystallography, on human scale in neuroimaging and electrocardiography, on planetary scale in seismology, and on cosmic scale in astronomical spectroscopy. Wherever something oscillates or machines are used to recognize patterns, analysis is at work. Fourier’s initial ideas planted the seed for a remarkable number of mathematical concepts (Donoho et al. 1998).
The media theorist Bernhard Siegert (2003) goes as far as arguing that the development of analysis, beginning with Leonhard Euler in the eighteenth century and culminating with Fourier, represented the epochal epistemic rupture that allowed modern digital media to emerge. In point of fact, video compression is one of the major applications of harmonic analysis. And traces of the tension between the discontinuities of discrete signals and the infinite periodicity of sinusoids—the crux of Fourier’s and Lagrange’s disagreement—are inscribed as compression artifacts in almost all moving images we encounter in daily life.
Blocking Figure 4.1 is a peculiar still from Olympia, played from DVD on a computer with very poor hardware. The shot is from the choreographed sequence in the film’s prologue that directly precedes the torch relay.
Fig. 4.1 Blocking artifacts—a transient decoding error during playback of Olympia from DVD
Clearly not part of the original visual structure are the conspicuous pixelated disturbances visible in the center. In this case, they are not an error in the MPEG video signal itself but a transient apparition resulting from insufficient reading or decoding speed—a trace of the fact that the hardware simply could not keep up with the stream of data read from the DVD. But similar blocking artifacts (as they are known) will be familiar to most people, especially those who watch digital television in areas with poor reception or view high-resolution video on aging hardware. This mosaic tells us something about how digital images are organized spatially and temporally (Schonig 2022). But the presence of blocking also invites us to consider a more expansive—and surprisingly long—algorithmic legacy of video: a story of the moving image in which mathematicians, physicists, computer scientists and electrical engineers have to be considered with the same urgency as photographic pioneers, inventors and entertainers; in which calculating devices are just as crucial as cameras and projectors; and in which mathematical practices of seeing, sensing and measurement are of the same import as the more conventional spectatorial habits that usually dominate histories of cinema and television. Matrices of Transformation So, where does this block pattern come from? The size of each tile is a straightforward hint: simply measuring the dimensions of each grid element shows that they are 16 pixels wide and tall, each with faint lines subdividing them into four smaller blocks of 8 × 8 pixels. As decreed by the DVD video standard, these images have been processed with the MPEG-2 compression codec and with the help of the discrete cosine transform (DCT). In mathematics, transforms are a class of functions used to simplify complicated problems. A transform maps an equation from its original domain into another domain, where it can be solved more easily. The discrete cosine transform is a concrete algorithmic example of the methods of Fourier analysis: it is one method of transforming images into the frequency domain, where they can be manipulated as waves and then transformed back to be displayed as images again. DCT algorithms take an input signal and return the “recipe” needed to recreate it from cosine waves. By way of analogy, the DCT is like a machine that can look at a particular shade of green and tell you the precise proportion of blue and yellow pigment you would need to mimic it exactly. When
given a signal with discrete values—for example, digitally sampled brightness variations in an image—the DCT returns a finite set of cosine waves with different frequencies and amplitudes. The amplitudes are called coefficients. By adding these weighted waves back together, the original signal can be reconstructed. In other words, the DCT can manipulate certain types of data and represent them as a spectrum of wave frequencies. This is extraordinarily useful to lossy compression, because human senses are only responsive to stimuli at certain frequencies. Human hearing and vision are relatively insensitive to very low or very high frequencies (such as sounds at the limits of hearing or very fine textures and variations in images, particularly in color perception). Removing wave coefficients that contribute little to the overall "shape" of the signal or to which the human sensory system responds poorly can compress a signal's bandwidth dramatically while only affecting its perceptual characteristics a little. If you know that a particular shade of green is composed of mostly blue and yellow and the tiniest trace amount of red, you may want to skip red altogether and save yourself the trouble of admixing a pigment that might be expensive but won't have much of an effect anyway. DCT-based compression makes this economy of information possible by creating knowledge about the proportions of particular frequencies in a signal and cleverly exploiting the sensory tendencies and limitations of our bodies. Michel Foucault once used an uncharacteristically mathematical expression when he called relations of power-knowledge "matrices of transformation" (1978, 99). The philosopher Foucault most certainly did not have digital compression algorithms in mind when he wrote this, and yet the DCT is an exceptionally representative—if literal—example of a matrix that encodes knowledge as power. DCT algorithms create an ordered table out of disordered and disorderly signals. As a technique of organizing and producing knowledge on a massive scale, the transform therefore has a tacit but powerful political dimension. Techniques of ordering signals by frequencies are the prerequisite for hierarchies of meaning and underlie many of the graphical, military and judicial operations of governing spaces through format standards that Bernard Geoghegan (2021) simply calls "rendering." By decomposing complex signals, transforms like the DCT make it possible to render what is "important" from what is not. In doing so, transforms extract value from signals: epistemic value in the sciences, diagnostic value in medicine, surveilling value in law enforcement, predictive value in economics, and logistical and compressive value
in media culture. Fourier transforms are that smelting process that separates ore from gangue. The DCT does not perform any compression on its own but is its condition of possibility. DCT algorithms—of which there are multiple—separate and order the individual frequency components within a signal, and thus allow their selective reduction or removal. In this way, DCT enables various psychovisual and psychoacoustic models of human seeing and hearing to be applied to data compression (Sterne 2012). The DCT does not operate on the whole image at once but partitions it into a grid of rectangular blocks of usually 8 × 8 data points, grouped into larger macroblocks. The size of the blocks and macroblocks sometimes varies slightly across video formats, but the general principle remains the same. Every such block in an image is transformed individually and produces 64 coefficients, corresponding to combinations of eight horizontal and eight vertical cosine frequencies. Overlaying these waves on top of each other in the correct way produces the shapes and patterns that make up digital images. In most signals, almost all of the informational value is concentrated in just a few coefficients and the remaining frequencies are negligible. Therefore, generally, even a small number of cosine waves is enough to give a very good approximation of the original signal. From a human observer's point of view, a block reconstructed from 15 or 20 coefficients will, in many cases, look nearly indistinguishable from one that is composed of the full set of 64, and the rest can thus be considered waste and eliminated by lossy compression. But a failure anywhere in the system—for example, a disturbance in the television signal, corrupted data or insufficient decoding speed—can manifest as blocking artifacts that reveal this grid-like logic of the DCT. Interferences with the block structure are perhaps one of the most commonly encountered and recognized traces of DCT-based audiovisual formats. The regular square partitioning of the image is characteristic of JPEG images and most video compression codecs in the MPEG family modeled after them. But it is not inevitable. Coding schemes that transition between blocks less "brutally" existed prior to JPEG, and are utilized in some later image and video compression codecs such as MPEG-4 AVC, as well as audio formats like MiniDisc and MP3. But the JPEG committee's commitment to creating a universal image format meant that it had to be computationally simple enough to be usable in many applications, including those with very modest hardware.
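To make this block logic a little more tangible, here is a minimal sketch in Python. It uses SciPy's generic type-II DCT routines rather than the specific integer arithmetic of any MPEG or JPEG encoder, and the block of brightness values it operates on is invented for the purpose of illustration:

import numpy as np
from scipy.fft import dctn, idctn

# An invented 8 x 8 block of brightness values (0-255): a smooth gradient with a little noise,
# standing in for one block of a digitized photographic image.
rng = np.random.default_rng(0)
block = np.clip(np.add.outer(np.arange(8), np.arange(8)) * 12.0 + rng.normal(0, 4, (8, 8)), 0, 255)

# Forward transform: 64 coefficients, one for each combination of horizontal and vertical frequency.
coeffs = dctn(block, norm="ortho")

# Keep only the k largest coefficients by magnitude and discard the rest, as a lossy encoder would.
k = 10
threshold = np.sort(np.abs(coeffs).ravel())[-k]
truncated = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

# Inverse transform: rebuild the block from the reduced "recipe" of cosine waves.
approx = idctn(truncated, norm="ortho")
print(f"kept {k} of 64 coefficients; mean brightness error: {np.abs(block - approx).mean():.2f}")

On a smooth block like this one, a dozen coefficients already reproduce the original almost exactly. It is when the discarded frequencies were not negligible after all, or when the stream carrying the surviving coefficients is disturbed, that the 8 × 8 grid reasserts itself as visible blocking.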
These traces of compression decay are often viewed as ill-favored in contemporary visual culture, as evidenced by graphics software such as Adobe Photoshop, whose recent filtering capabilities include "JPEG Artifacts Removal." But with the selection and standardization of this particular form of the DCT, together with its use in digital broadcasting standards and in popular consumer formats like DVD, blocking artifacts would become a relentless specter in the audiovisual fabric of the twenty-first century. And even though the use of new and more efficient compression codecs is increasing on the web, JPEG images maintain their reign as one of the defining formats of the Internet despite being more than 30 years old. Many other transform classes apart from the DCT exist. The Fast Fourier Transform (FFT), on which the DCT is based and to which this chapter will return later on, has been called "the most important numerical algorithm of our lifetime" (Strang 1994, 253). Other transforms play vital roles in image and data processing and serve as important instruments in many scientific disciplines, from tomography to the compression of early interplanetary probe photographs. If you have seen a film in the cinema recently, the images on the screen were compressed with the JPEG2000 codec, which makes use of a successor of the DCT. The reason you are able to access high-resolution videos instantly with nearly no delay on streaming platforms like YouTube or Netflix is that they use high-efficiency compression codecs like VP9 or AV1, yet again offspring of the DCT that combine sine and cosine transforms. The methods of harmonic analysis are thus crucial in mathematics, mathematical physics, signal processing and engineering, medicine, acoustics and optics, but their fundamental role in audiovisual culture also makes them important objects of media theory and history. Harmonic analysis is critical not just in the context of digital image compression but also, for example, in analog television broadcasting and fields like music recognition and facial recognition. Despite seeming like a highly specialized, technical field of research, there is hardly any area of contemporary audiovisual culture, including scientific processes of machine vision and political mechanisms of surveillance, that has not been touched by the mathematical techniques of analysis. Media-theoretical work on harmonic analysis is sparse, but scholars have established its formative role in shaping the epistemic conditions in which digital media and notions like signal and noise could emerge (Siegert 2003; Kromhout 2017). Much of the existing research is anchored by references to the prominent German media theorist Friedrich Kittler,
who sustained an interest in Fourier analysis over decades, but whose remarks are somewhat desultory. German-language writings in this tradition gravitate toward Wolfgang Ernst’s brand of media archaeology and its signature preoccupation with microtemporality and acoustic phenomena (Ernst 2013; Maibaum 2016). Although the value of sound studies’ pioneering contributions to media theories of analysis is inestimable, in many of them, discourses of temporality tend to overshadow issues of space which, as I argue, play a no less important role. Authors with affinity for cultural studies tend to take a more speculative approach to Fourier’s media legacy, but are also less willing to discount questions of power, politics and corporeality, maintaining an important historical awareness of the impact codecs and algorithms have on the way life is lived on the human scale (Cubitt 2011; Caplan 2012; Hansen 2015). Provocatively, Adrian Mackenzie has linked transform-based compression to both violence and network infrastructures when he considered the online circulation of gruesome videos by terrorist organizations and asked: “Would a beheading […] occur without codecs and networked media?” (Mackenzie 2013, 142). In other words, had we not had effective, omnipresent methods to compress video that allow it to circulate worldwide, would the brutal visual spectacles of global violence and war, along with all other forms of contemporary affect mediated through images, be thinkable at all? New media and video artists have also recognized the importance of harmonic analysis in recent years, addressing the visual realm at times with greater precision and incisiveness than media theorists. Media artists Ted Davis, Rosa Menkman, Cory Arcangel and others have written on the functional and aesthetic logic of DCT-based image compression and how it redistributes sensations. All of them have also made audiovisual works in which the mechanisms of compression are exposed and the blocky artifacts and checkerboard patterns typical of the cosine transform are put under scrutiny. My goal in this chapter will be to position harmonic analysis firmly within the genealogy of the moving image. Commencing from an epigraphy of ringing and blocking artifacts, two visual errors ubiquitous in digital image culture that result from compression, I will explore the epistemic function of such traces and mechanical failures in sciences like mathematics and neurology. That, in its turn, will allow us to consider the rich material culture of mathematics, with its plethora of calculating machines and epistemic techniques, operating in an ambiguous conceptual space
between the symbolic and the material, between abstract calculation and the physical environment, and between infinity and finitude. But first, a short history of the DCT. DCT: A Compressed Overview Much like Fourier's study of heat, work on the DCT first began with a failure. Nasir Ahmed led its development in 1974 at Kansas State University, together with his PhD student T. Natarajan and his colleague Ram Mohan Rao. Ahmed once recalled that the project failed to secure funding from the U.S. National Science Foundation on the grounds that it seemed "too simple" (Ahmed 1991, 4). Transforms were a booming area of mathematical research in the 1960s and 1970s, and their growing number prompted the signal processing community to develop standardized testing procedures. Rather than comparing transform outputs by simple visual inspection, researchers eventually rationalized the comparison numerically. Benchmarks and systems of reference were developed that could compare different algorithms. This can be seen as a late stage of a reflexive phase in the history of computational techniques (very much prompted by the rise of the Fast Fourier Transform a decade earlier) in which the time and efficiency of computation becomes itself the object of computation. Ahmed and his colleagues had found that in comparison to other transforms, the DCT's performance and compressive ability was closest to a theoretically ideal method. Their method was highly computationally efficient: it could produce results at high speed, at the cost of only minor errors compared to slower processes (Ahmed et al. 1974, 90). The so-called computational efficiency of calculating a transform like the DCT means, in its most basic sense, the absolute number of additions and multiplications that a computer (or person) needs to perform before achieving a result. On an algorithmic level, any given compression method might be implemented and realized in more or less efficient ways. An algorithm that uses fewer operations will be faster and thus more "efficient." Efficient fast algorithms implement various tricks: they can exploit some mathematical properties (such as the symmetries of sinusoidal functions), take shortcuts by assuming that the input data has a specific shape or structure (like the standardized size of blocks in the case of JPEG images) or capitalize on the unique features of the processing hardware (i.e. how much longer a particular electronic circuit needs to multiply two numbers as opposed to adding them). Different types of hardware will
handle the same algorithm very differently (Wallace 1992). For example, the general-purpose processor in your smartphone does not compute in the same ways as an application-specific integrated circuit in an aerospace sensor, because they have been designed with different goals in mind. These differences can be put to use mathematically. Finally, a crucial set of techniques for accelerating computation involves the spatial manipulation of data through rotation, mirroring and interlacing, but more on those later. Thanks to these computational shortcuts, multiple algorithms for computing the DCT have been discovered over the decades. The computational savings among them can be small in a relative sense. Feig's fast two-dimensional DCT from 1990 uses 462 additions, versus the 464 required in a two-dimensional Arai-Agui-Nakajima DCT (Kuhr 2001). But with trillions of calculations performed daily on the scale of visual culture—not just in compressing and decoding images but also, for example, in reformatting them to different screen sizes—a difference of 2 additions per block of data translates into enormous savings in time, electricity and capital. Feeling the Heat: Compression and the Environment Despite what may seem like a highly abstract numerical process, compression, as I have now argued repeatedly, is a material and industrial procedure that shapes, molds and folds concrete things. Understanding the computational efficiency of an image compression algorithm is important media-theoretically because every addition, subtraction, multiplication and division uses electricity and generates heat. Minuscule differences in computational implementation can play an outsized role in the material economy of signals and translate into tangible infrastructural and environmental risks. Intuitively, one may be tempted to assume that as Internet access speeds continue rising around the world, the need to compress video files would gradually abate. In actuality, it is the opposite. Video files that circulate today tend to be much more heavily compressed than those of fifteen or twenty years ago. The growing capacity of storage media is outpaced by the much more rapid, exponential growth in screen resolutions and increases in bit depths and color spaces. Those weigh heavily on the file sizes of contemporary videos, films and television programs. The number of mobile devices that can show video and that connect to networks has exploded,
negating increased network bandwidths and exacerbating the need to compress. Taking advantage of the growing processing power of consumer electronics, the media industry has an incentive to strive for the highest possible rates of compression and computational efficiency. Streaming platforms squeeze the files as drastically as the decoding devices can handle, because smaller files mean that videos get delivered to end devices faster and with lower bandwidth use—and thus without the "digital dams" (Alexander 2017) of buffering and latency that bedevil capitalist notions of user-friendliness and immediacy. Services like Netflix and YouTube have a major stake in developing and standardizing new compression codecs, which, much like media formats, compete for market dominance in what we could call "the compression wars." Video platforms are continuously optimizing compression efficiency, re-encoding their catalogs as more efficient codecs are developed to decrease file sizes and increase perceived image quality. However, the crucial insight here is that each new generation of compression standards is more computationally complex, and thus consumes more energy (e.g. Lin et al. 2010; Monteiro et al. 2015; Mercat et al. 2017). The decision of a large content provider like Netflix to migrate from one compression codec to another—or even just to perform a tiny adjustment to the compression parameters like increasing the number of intraframes in a video stream—has direct environmental ramifications. The encoders need more electricity in order to perform the calculations needed to compress signals more heavily. More importantly, the decoding also requires more energy from the billions of devices at the end of the delivery chain. In countries like the Netherlands, end-user devices account for nearly half of the total energy use within the entire digital economy—more than the electricity needed to power data centers and network infrastructure (Dutch Data Center Association 2020). Video de/compression may not be as energy-intensive as some other computational practices like high-performance gaming, but it does have tangible effects. With every new wave of compression standards, the hunger for energy grows, like a tug of war counteracting the efficiency gains of LED and LCD technology. Television sets draw more power and the batteries of mobile devices drain faster. These batteries then need to be recharged more often and their capacity diminishes faster as well, accelerating their eventual obsolescence and decay. New compression algorithms also frequently require
specialized chips to decode, necessitating new hardware and thereby perpetuating a vicious cycle of materials extraction, polluting manufacturing processes, environmentally irresponsible logistics, and electronic waste generation (Jancovic and Keilbach 2023). The example of the Netherlands remains instructive: The electrical grid in this small country is chronically bursting at the seams, so much so that there is no capacity for new electrical connections for commercial users in some provinces, even as demand for electricity is expected to skyrocket as a result of decarbonization initiatives (Kleinnijenhuis and van Hest 2022). Under these unpropitious circumstances and against a backdrop of planetary environmental crises interlocking with acute energy crises, even the modest impacts of compression enacted on the television screens, smartphones and laptops in our homes, when stacked up and because they are increasing, feed into matters of national and global energy security. Compression algorithms thus point our attention to, on the one hand, the profound enmeshment between media culture and electrical infrastructure. On the other hand, pinpointing how and where exactly compression takes place—what kinds of calculations it requires, which electronic components execute it, and so on—allows us to render visible some areas of the media economy that have hitherto largely evaded media-theoretical scrutiny. This includes, among others, the relationships between streaming platforms, browser developers, data centers, telecom companies and network infrastructure operators, energy suppliers and utility companies, chip developers, chip foundries and other hardware manufacturers, as well as more obscure players in this complex topography, such as firmware integrators. All of these actors maintain convoluted—sometimes symbiotic, sometimes antagonistic—bonds with each other. It is a strange but fitting coincidence of history that the namesake of Fourier analysis, on which much of the contemporary media industry is built, was also the discoverer of the greenhouse effect. The seemingly abstract, microscale and microtemporal symbolical calculations performed inside a graphics processing chip lead not only to fluctuations in electricity use on a meso scale, but also to perturbations in the macroscale relationships of power that permeate global supply chain capitalism. Because they depend on electricity and storage media, compression algorithms have a direct bearing on life and the distribution of heat on a warming planet. This is what Nicole Starosielski (2014) has called "the materiality of media heat" at work—the real and concrete effect of video compression on the
physical world. The waste of compression accumulates in our environment as heat.
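What these infrastructural stakes rest on, at the very bottom, is arithmetic that can be counted. The following back-of-the-envelope sketch in Python uses deliberately naive textbook operation counts, not the tallies of any published fast DCT such as Feig's or Arai, Agui and Nakajima's, but it shows how a small per-block difference scales up across a single second of high-definition video:

N = 8

# Direct 2-D DCT: each of the N*N output coefficients is a weighted sum over all N*N input samples.
direct = (N * N) * (N * N)        # 4096 multiplications per block

# Row-column method: a 1-D DCT on every row, then on every column, exploiting the transform's
# separability; a naive 1-D DCT of length N costs N*N multiplications.
separable = 2 * N * (N * N)       # 1024 multiplications per block

blocks_per_frame = (1920 // 8) * (1080 // 8)
frames_per_second = 25
saved_per_second = (direct - separable) * blocks_per_frame * frames_per_second

print("direct 2-D DCT:    ", direct, "multiplications per block")
print("row-column method: ", separable, "multiplications per block")
print("saved per second of Full HD video:", saved_per_second, "multiplications")

Fast algorithms of the kind mentioned above push these numbers down much further still; the point of the sketch is only that arithmetic saved per block, multiplied across every block of every frame of every stream, is the margin over which the "compression wars" are fought.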
Motion Compensation The paradox of contemporary video circulation is that in order to stay mobile, moving images assume that nothing in them will move very much. This is the basis of motion compensation, another common compression method frequently used in broadcast and online video that can also reveal the block structure of digital images. The conventions of mainstream cinematography are built around controlling movement. Since analog times, handbooks for cinematographers have been prescribing maximum panning speeds that limit camera motion to ensure no visual disruptions like flicker and judder appear on screen. Similar principles apply to digital video, and compression mechanisms exploit these visual habits. In simple terms, when the movement in a video recording is consistent and predictable, rather than individually storing the content of every frame, it is possible to reduce file sizes tremendously by only encoding information about how parts of the image are moving—basically, encoding changes between frames, rather than whole frames. For example, imagine a static camera shot that shows a person walking from the left side of the frame to the right. You can save a lot of data if you only encode the static background once for the entire shot, and compress only the portions of the image where movement occurs. This is the algorithmic practice known as difference encoding, in the context of digital video compression also known as motion compensation. Playing with Fire Adrian Mackenzie has unraveled techniques of motion compensation into an economic and logistical theory of visual culture. Mackenzie recognizes that block-based compression treats the image very differently from the singular, self-contained and coherent frame of a film strip, “reordering relations between frames rather than just keeping a series of frames in order” (2008, 52). In motion-compensating codecs, a frame can be constituted by its mathematical relationship with any number of preceding or subsequent frames. Several DCT blocks are stitched across frames to form
macroblocks whose movement is calculated independently of the rest of the image. The historical tendency of codecs in the MPEG family has been to standardize ever more complex mathematical models of motion and increasingly finer and more nested macroblocks, fragmenting the video frame into a patchwork of temporally interlinked, progressively granular image fragments, all moving independently in time and space. Recent video compression methods incorporate an extensive portion of the gestural vocabulary of cinematography, being optimized to recognize scene cuts and compress panning, tracking, tilting and zooming motions. A motion-compensating codec essentially performs a bibliographical comparison: it “looks” at a series of images and “collates” them, analyzes the motion vectors in past frames and predicts their movement into the future. Anticipating that successive frames tend to be similar to each other, instead of encoding each frame as a whole, the codec only encodes the anticipated movements of the macroblocks within the frame. If the prediction is correct, it can save bandwidth. Even if it is incorrect, it encodes only the difference between prediction and actual movement rather than the entire picture, which also saves some bandwidth. But this economy of stasis is a gamble. As Jordan Schonig (2022) has also shown cogently, it falls apart when compression encounters movement that refuses to stand still and be calculated. There is an interesting congeries of filmic objects that otherwise have little in common besides an aesthetics of incompressibility: stochastic, noisy and high-entropy processes like falling snow, confetti, television static, film grain and fire (Fig. 4.2). Such high-frequency, high-amplitude motifs can inundate both transform-based and motion compensation-based compression methods. Thus, the epigraph that Fourier attributes to Plato and uses to open his book on heat is also valid in reverse: et numeros regit ignis—not only fire is reigned by numbers, but numbers by fire. Motion compensation is ineffective when applied to a film such as, for example, Stan Brakhage’s Mothlight (1963). Brakhage made Mothlight not by pointing a camera at a subject and letting it record, but by physically pasting organic material like insect wings onto the film strip. Each frame in Mothlight is radically different from the previous. This means that when compressed with common motion-compensating codecs, for any given bitrate such images will encode with “worse” visual quality than a film in which motion is predictable. This can be readily seen in Fig. 4.3, a spectacularly decayed, crudely compressed digital copy of Mothlight that bears
Fig. 4.2 Still from Olympia DVD showing the Olympic fire with heavy MPEG blocking
nigh on no resemblance to the original film, although it carries many traces of its own reformattings and circulations in online spaces. Some phenomena in the physical world are simply not suited to be compressed this way. The JPEG and MPEG codecs were, after all, designed for and tested on continuous-tone signals and live action imagery. They are less useful for representing many forms of graphics or text, but also noisy media like photochemical film. That is not to say that such images cannot be compressed at all. But they make compression more expensive. There is a price to pay for images that statistically do not fit the norm—moving images made with unconventional techniques like some forms of animation; or experimental film and video in which there are many discontinuities, in which neighboring pixel values are decorrelated in time and space and in which surfaces and movement are not smooth. Such images resist compression: they take an infinitesimally longer amount of time to encode, or, because of their larger sizes, take infinitesimally longer to be
Fig. 4.3 Still from a copy of Stan Brakhage’s Mothlight uploaded to YouTube in 2012. Apart from the poor resolution and drastic blocking, interlacing artifacts can be seen as horizontal striations
stored or moved from one storage medium to another. They cost more time, electricity and heat. The difference may be tiny, but it is not negligible. In recognition of the limitations of motion-compensating and DCT-based compression, newer codecs like H.264, H.265 and later ones have begun implementing filters and presets that take into account the needs of different source material. Common encoders now feature deblocking filters to reduce blocking artifacts or provide presets for film grain and animation that bias the encoder toward higher frequencies—and thus, again, increase file sizes and ever so slightly limit a video file's ability to circulate. It is not too difficult to envision situations in which the cost of encoding certain types of images has had pragmatic consequences. When the BBC stopped accepting dramas produced on 16 mm film in 2006, the
reason was video compression. The BBC’s Principal Technologist Andy Quested stated: The problem lies with the MPEG 4 compressors the BBC uses to squeeze HD into a limited broadcast spectrum. These compressors have difficulty handling the random grain pattern of film, particularly on high speed, pushed and/or under exposed material. This results in blocky artefacts and a general softening of the image that the BBC ‘white coats’ think the audience at home will find unacceptable. (Cited from Crofts 2008, 9).
This is an extraordinarily illuminating statement, certainly from the standpoint of media epigraphy and format studies. The difficulty of compressing photochemical grain, as well as the difficulty audiences were alleged to have with its digital traces like blocking and blur, caused the phasing out of a film format at one of the world’s largest broadcasting corporations. Not only does this show the effects that small visual failures have on the functioning of reformatting procedures across analog and digital media, it also provides some insight into the assumptions that media executives make about their audiences and about acceptable forms of visuality at large. Lossy compression, in effect, brings technological conditions into being in which certain aesthetic forms are treated preferentially: moving images in which movement is somewhat predictable, coherent and not too fast, in which successive frames are mostly similar to previous ones, and in which change is not too abrupt. More simply put, many common forms of compression sustain a visual culture grounded above all not in change and movement, but in stasis and predictability. Those moving images that step out of the bounds of these conventions are penalized either with decreased compression efficiency, or increased errors and aberrations.
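The economy of stasis described in this section can also be sketched numerically. The following Python toy example uses plain frame differencing rather than the full block-based motion estimation of an MPEG codec, and its two "scenes" are invented; but it shows why a static shot leaves an encoder almost nothing to transmit, while grain-like noise leaves it almost everything:

import numpy as np

rng = np.random.default_rng(1)
h, w = 64, 64

def changed_share(frame_1, frame_2, tol=2):
    # Share of pixels whose brightness changes by more than a small tolerance between two frames.
    return np.mean(np.abs(frame_2 - frame_1) > tol)

# Scene 1: a static background with one small bright "object" that shifts by a single pixel.
background = np.full((h, w), 120.0)
frame_a = background.copy(); frame_a[20:28, 20:28] = 255
frame_b = background.copy(); frame_b[20:28, 21:29] = 255

# Scene 2: something like film grain or television static, where every pixel changes at once.
noise_a = rng.integers(0, 256, (h, w)).astype(float)
noise_b = rng.integers(0, 256, (h, w)).astype(float)

print(f"static scene: {changed_share(frame_a, frame_b):.1%} of pixels changed between frames")
print(f"noisy scene:  {changed_share(noise_a, noise_b):.1%} of pixels changed between frames")

At a fixed bitrate, the first scene can be predicted almost perfectly from its predecessor and its residue compressed to nearly nothing; the second offers no such redundancy, which is why images like Mothlight or the Olympic fire come out blockier, larger, or both.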
Traces of Infinity Let us return to Olympia once more. Figure 4.4 is another still from the prologue of the film, showing a partially covered, illuminated Medici Venus. Formally, Riefenstahl's opening piece draws on a dramatic play of chiaroscuro, emphasizing contact between light and shadow. It is here that the trouble begins. If you focus your attention on the pitch-black statue in the foreground, you will notice that the contour of its right arm is limned by a thin halo where it comes into contact with the lighter areas
Fig. 4.4 Detail of ringing artifacts in Olympia. A portion of the thin halo is indicated with white arrows
of the mise-en-scène. The faint glow exaggerates the contrast between bright and dark and lends the image an unmistakable air of cheap video—a perplexing mismatch with the timeless aura that Willy Zielke's cinematography is trying to conjure up. The phenomenon we are seeing here is called ringing. It is a trace of the collision between two incompatible regimes of temporality: a collision between symbolical mathematical calculations with infinity, and the material finitude of media. Ringing When the methods of Fourier analysis encounter discontinuity, a phenomenon known as ringing tends to manifest in signals. In mathematics, the term discontinuity refers to an instantaneous change or "jump"—for instance, a sudden change in brightness in an image, such as at the edge of a sharp bright object on a dark background. The "interface" will "ring" when it is compressed, producing a faint halo or pattern of ripples. These spectral phantoms are also known as the Gibbs phenomenon. They appear as a consequence of representing a discontinuity with a finite number of coefficients. It is possible to represent discontinuities in a signal with continuous functions, but it requires an infinite number of coefficients. Yet because
our signal-processing machines have to deliver results in the finite time that we have at our disposal, we are forced to truncate—that is, interrupt—the infinite series at some point, and therefore lose definition. Sharp contours become blurry and ripples appear. This is infinity’s fundamental intransigence to being represented in finite terms. What appears important to me is that the process of compression thus not only removes information, but produces something that was not present in the image before. Ringing artifacts are quite common in both digital and analog compressed images, and they serve as a reminder that the disagreement between Lagrange and Fourier never found a resolution. “Lagrange was correct in his assertion that a summation of sinusoids cannot form a signal with a corner. However, you can get very close. So close that the difference between the two has zero energy. In this sense, Fourier was right […]” (Smith 2013, 142). Ringing artifacts in images look the way they do because of a curious anomalous behavior of the Fourier series. When a Fourier series (a sum of sine and cosine curves) is used to represent a function with a discontinuity, it will wiggle: it will first overshoot its value by about 9% as it approaches the point of discontinuity, then quickly undershoot it and continue oscillating up and down around the correct value until it settles (Fig. 4.5). In images, this appears as a ripple in brightness or color around sharp edges. Scientist Josiah Willard Gibbs, after whom this phenomenon is named, explained it in 1899. Gibbs clarified that by including more and more coefficients—that is, making the signal less compressed—the width of the ringing will tend toward zero and the graph of the series will increasingly resemble the shape of the original function, but the ringing itself will never disappear completely. This means that abrupt changes in signals are inherently difficult to represent with the methods of Fourier analysis when the frequencies of the system are limited. This phenomenon is the reason why image and video formats based on the DCT tend to introduce rather heavy compression artifacts around any areas in an image with sharp contours. Because of this, JPEG compression and, by extension, its moving image counterpart MPEG, tend not to be very good at compressing well-defined shapes like text or animated drawings. (This was a key motivation for the development of wavelet transforms in the successor format JPEG2000, which is used in digital cinema and which can represent abrupt changes more efficiently.)
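Readers who want to watch the overshoot refuse to disappear can do so with a few lines of Python, assuming NumPy is available; the series being summed is the textbook Fourier series of a square wave, and the roughly nine percent figure is the classical Gibbs constant rather than a property of any particular codec:

import numpy as np

# Sample densely just to the right of the discontinuity at x = 0, where the square wave
# jumps from -1 to +1 (a jump of height 2) and its true value on this interval is 1.
x = np.linspace(1e-4, np.pi / 2, 60000)

def partial_sum(n_terms):
    # Sum of the first n_terms odd harmonics: (4/pi) * [sin x + (1/3) sin 3x + (1/5) sin 5x + ...]
    s = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):
        s += np.sin(k * x) / k
    return (4 / np.pi) * s

for n in (5, 20, 100, 500):
    overshoot = partial_sum(n).max() - 1.0
    print(f"{n:4d} terms: overshoot of about {overshoot / 2:.1%} of the jump height")

However many terms are added, the printed overshoot settles near nine percent of the jump and stays there; the ripple grows narrower, exactly as in Fig. 4.5, but never flatter.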
Fig. 4.5 The Gibbs phenomenon. Top: an ideal square wave with sharp edges. Middle: a graph of its Fourier series approximation with 5 coefficients. Bottom: 20 coefficients. Around each edge, the function “rings.” By adding more coefficients, the Fourier series will more closely approximate the original function and the wiggles will get narrower, but they will not disappear completely. (Graphs: Author)
Ringing is also a common side-effect of the artificial sharpening of images, which, for instance, many television sets perform automatically. In analog television, ringing frequently affects the color components of the signal, since they are more heavily compressed than luminance components and therefore carry mostly low-frequency information. This form of compression—allocating less bandwidth to color and more to brightness—is known as “chroma subsampling.” It is one of the most widespread forms of lossy compression in electronic imaging meant for human
eyes. It exploits the knowledge that human vision is much less sensitive to fine changes in color than in brightness, and that high-frequency color information therefore does not improve the subjective impression of sharpness in images. Most lossy image formats take advantage of this convenient “deficiency” of human vision, including DCT-based JPEG and MPEG, but also analog NTSC and PAL broadcasting (Sterne and Mulvin 2014). Ringing thus occurs in many contexts where the frequency spectrum of signals is plied, folded and manipulated. But in scientific and medical dispositifs of audiovisual media, it can pose a real danger. Ringing presents a particularly tricky issue in medical fields that rely on media-technological practices of visible-making—for example, in magnetic resonance imaging (MRI). MRI procedures generate information about objects in the frequency domain. Near sharp edges, such as where two different types of tissue touch, ringing is ubiquitous and poses a significant diagnostic challenge. Ringing artifacts are not, strictly speaking, an error, since they arise from the known physical behavior of waves and of the imaging procedure itself. In some cases, they can even be exploited as an advantage to draw attention to tissue defects (Jerri 2013). But in many medical imaging contexts, they are considered a troublesome flaw. Because of its nonlinear characteristics in some neuroimaging procedures like diffusion MRI, ringing effects are compounded and become “more dangerous, as while it is not easily spotted by the naked eye, it biases practically all diffusion metrics,” (Veraart et al. 2016, 301) complicating the diagnosis and interpretation of imaging data. Ringing can mimic the appearance of medical conditions, and thus creates an entirely new region of diagnostic doubt, sowing uncertainty and requiring permanent attention to and reflection on the instruments in use. In some neuroimaging modalities, the accepted way of handling ringing is to filter out high frequencies. In practical terms, this often simply means blurring the image—in a sense, trading one visual fault for another and decreasing resolution. Ringing is thus a minor but omnipresent failure of compression. But we could also understand it in media-epigraphical terms as a trace, signaling towards an old impasse within mathematics. In fact, not just one impasse, but a series of epistemic negotiations repeatedly playing out over the span of a century, vacillating between symbolic calculation and machinic inscription. Tracing the history of the Gibbs phenomenon opens a fascinating
insight into the material culture of mathematics and, especially, the epistemic influence of compression on scientific practice. Many textbooks on Fourier analysis touch upon the history of the Gibbs phenomenon. Mathematicians have observed that it displays in parvo a number of central features of the development of mathematics. We find forgotten pioneers. We encounter shocking disputes over priority. We study brilliant achievements, some never properly appreciated. We discover a remarkable succession of blunders, which could hardly have arisen save through copying from predecessors without checking. In short, Gibbs’s phenomenon and its history offer ample evidence that mathematics, for all of its majesty and austere exactitude, is carried on by humans. (Hewitt and Hewitt 1979, 158)
Peculiar in their absence from this exciting human drama of mathematics, however, are media and techniques of inscription, notation and data storage. Paying attention to them can reveal how mathematical knowledge is formed, and how abstract and symbolic calculations frequently depend on physical processes. At the end of 1898, a heated discussion took place in the letters to the editor column of Nature. The controversy was ignited by a letter discussing the anomalous behavior of Fourier series sent by Albert Michelson, most famous for measuring the speed of light. To the physicist Michelson, some of the mathematical intricacies of the phenomenon seemed puzzling. In his letter, he wonders about the very same issue that had stood between Lagrange and Fourier nine decades earlier. The idea that a real discontinuity can replace a sum of continuous curves is so utterly at variance with the physicists’ notions of quantity, that it seems to me to be worth while giving a very elementary statement of the problem in such simple form that the mathematicians can at once point to the inconsistence if any there be. (Michelson 1898, 544)
Using the journal as a platform, a number of eminent mathematicians got involved in the debate, with Gibbs ultimately showing that the Fourier series does converge to the discontinuous function it represents, yet the ringing oscillations do not decay. This is not entirely easy to grasp intuitively, but the behavior can be easily observed in the real world in many devices that record waves, for instance in oscilloscope displays. Efforts to
develop a filter against ringing appear as early as 1900 (Gottlieb and Shu 1997). In 1917, Horatio Scott Carslaw, another mathematician studying the conduction of heat in solids, expressed his incredulity at ringing having remained undiscovered for so long (Carslaw 1917). But as historians of mathematics have since shown, ringing had, in fact, first been described by the mathematician Henry Wilbraham as early as 1848. In his article, Wilbraham provided slightly clumsy but not inaccurate hand-drawn graphs that clearly show the bizarre snaking curves near the discontinuity (Wilbraham 1848). Why ringing had remained unnoticed for half a century is a puzzling conundrum in the history of mathematics. One possible media-historical conjecture that I offer is that ringing lacked a technical imaging dispositif to ground it. To understand why ringing had been forgotten and later rediscovered, we have to look at the larger media-technological context and the history of analog computing. The phenomenon reemerged at the end of the nineteenth century out of the need for practicable computers. Those computers were not merely calculating machines, but inscription devices. The Harmonic Analyzer A few months before Albert Michelson sent his letter to Nature, he had penned a journal article together with Samuel Stratton in which the two report on the development of their 80-wheel harmonic analyzer (Fig. 4.6). These devices were some of the earliest computers and could perform Fourier analysis and synthesis of complex waves. Michelson and Stratton's undertaking was motivated by a desire to compute more coefficients of Fourier series faster and with fewer addition errors. The susceptibility to error was directly tied to the material constitution of previous machines: "the stretch of the cord and its imperfect flexibility" (Michelson and Stratton 1898, 87). Computing accuracy was, in this case, not a matter of correct mathematical operation but a purely physical notion and a question of replacing cords with "fluid pressures, elastic and other forces, and electric currents" (ibid.). Of these, the two scientists settled on spiral springs as the storage "format," storing coefficient data as mechanical energy. With its capacity to compute 80 coefficients, Michelson and Stratton's contraption was an impressive, mightily improved version of previous mechanical analyzers. Lord Kelvin's earlier tide-predicting machine
Fig. 4.6 Undated photograph of Michelson and Stratton’s 80-coefficient harmonic analyzer, as demonstrated by scientist Harley E. Tillitt, probably sometime in the 1960s. (Image courtesy of the Special Collections & Archives Department, Nimitz Library, U.S. Naval Academy)
(Fig. 4.7), built in 1872, reduced the time needed to compute the yearly tides for a single harbor from several months to four hours, even though it was limited to the synthesis of no more than ten harmonic oscillations (Ulmann 2013, 18). Kelvin later expanded his work on the tidal predictor by envisioning a machine with a self-correcting feedback loop. If two tide analyzers could be connected mechanically such that results from the first would be fed to
Fig. 4.7 William Thomson’s 10-coefficient tide-predicting machine (1876). (Image courtesy of and © Science Museum Group)
the second, and the output of the second back into the first, such a device could solve second-order linear differential equations in one operation, with each iteration giving a more accurate result. The only drawback to Kelvin’s computer was insufficient torque: there was no engine powerful enough to rotate the shafts of two analyzers simultaneously. It was only after mechanical torque amplifiers had been developed (originally for use in heavy machinery and artillery) that Kelvin’s mechanism could be built by Vannevar Bush in 1930 (Goldstine 1993)—coincidentally, in the context of studying problems of electrical grid reliability. I mention this to point out once more that computation is a matter not of symbols, but of physical force; of the strength needed to set a machine into motion.
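What a machine like Kelvin's tide predictor computed can be written out in a few lines of Python; the amplitudes and phases below are invented stand-ins rather than the measured constants of any real harbor, and a working predictor would use around ten constituents rather than three:

import numpy as np

hours = np.arange(0, 24 * 7, 1.0)    # one week of hourly predictions

# Each constituent: (amplitude in meters, speed in degrees per hour, phase in degrees).
constituents = [
    (1.20, 28.98, 40.0),   # stand-in for a principal lunar semidiurnal term
    (0.55, 30.00, 10.0),   # stand-in for a principal solar semidiurnal term
    (0.20, 15.04, 75.0),   # stand-in for a diurnal term
]

tide = np.zeros_like(hours)
for amplitude, speed, phase in constituents:
    tide += amplitude * np.cos(np.radians(speed * hours - phase))

print(f"predicted tidal range over the week: {tide.min():+.2f} m to {tide.max():+.2f} m")

Each cosine term in the loop corresponds to one geared pulley of the mechanical analyzer: turning the crank advanced the time variable, while a cord running over the pulleys performed the summation and traced the resulting curve onto paper.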
(Figure 4.8 panels, from top to bottom: one term, three terms, five terms, seven terms, twenty-one terms, seventy-nine terms; horizontal axis from 0 to 4π.)
Fig. 4.8 Graphs drawn by Michelson and Stratton’s analyzer. As the number of coefficients increases, the curve begins to approximate the square wave, but the oscillations around the discontinuity (the small wiggles around the corners of the square graph) remain. (Image source: Michelson and Stratton 1898, 88)
It is in this milieu of mechanical computation that the ringing phenomenon is rediscovered in 1898, but this time, it becomes epistemically productive. In the square and sawtooth curve graphs reproduced by Michelson and Stratton (Fig. 4.8), ringing is also visible, just like in Wilbraham’s original hand-drawn graph. But in contrast to the illustration drawn 50 years earlier, the mechanically produced curves express a different orientation towards the symbolical operations of mathematics. Because it is not a human hand doing the inscribing, the curved lines seem to eliminate the uncertainty inherent in human inscription. The protrusions and aberrant wiggles in the curves are “corroborated” through the referentiality of a machine to its own mechanical operation. Michelson and Stratton do not acknowledge the ringing artifacts in their paper, and just a few months later, in his letter to Nature, Michelson treats ringing as a numerical problem. We could view this as science’s capacity to “continually reconcile or absorb anomalous material into its basic tenet,” as Susan Leigh Star (1985, 391) put it. Importantly, though, “science” does not do this on its own, but often with the use of media of inscription, which carry out a crucial dual operation: they both produce anomalous material, and are instrumental to absorbing it as well.
It is in the repetition of gestures and the labor of calculating (both symbolically and mechanically by turning the analyzer’s crank) that the physical behavior of waves around discontinuities reappears as an irresolvable failure of an apparatus—with its limited number of gears and finite time of operation—to represent infinity. What then ensues is the procedure that Nicolas Rasmussen prosaically called “trying to decide on the facts” (1993, 227). With Gibbs’s subsequent mathematical explanation and calculation of the overshoot, and Maxime Bôcher’s later rigorous proof, ringing finally becomes explicable with the symbolical instruments of mathematics, which are calibrated to handle infinity. The anomaly is absorbed into the system as a “phenomenon,” a special mathematical event that happens under certain particular circumstances. The traces of the mechanical analyzer, as a medium, mediate between the domains of machines and of symbols. They tie the physical behavior of waves to the symbolical and graphical system of idealized curves and make one explicable in terms of the other. This vital operativity of technical instruments and graphical inscriptions in producing mathematical entities is only rarely heeded by historians of mathematics. Fourier himself was also privy to the immense disruptive power that graphical inscriptions could have. Fourier biographer Ivor Grattan-Guinness (1990, 2:599) observes that in his book, Fourier omitted every single diagram showing graphs with discontinuities, like the sawtooth wave that had been present in his original 1807 paper. The disconnected graphs might have simply been too strange for the mathematicians of his time. What this seems to indicate is that Fourier’s failure to convince the Académie with his research was therefore not only due to the difference between his and Lagrange’s confidence in the generality and representative power of trigonometric functions. It was also a difference between two paradigms: Fourier’s geometric against Lagrange’s algebraic approach to mathematics, a tension between graphical and symbolical techniques of manufacturing mathematical objects. Wings of Desire: On Mortality and Finitude “The Angel’s Nightmare” is what Wim Wenders calls a sequence that takes place about 70 minutes into Wings of Desire (1987), Wenders’s romantic filmic poem about Berlin and two angels who watch over its inhabitants. Unlike the rest of the film, The Angel’s Nightmare is shot with a shaky, chaotic camera to relay the anguish that the angel Cassiel is
experiencing after his failed effort to prevent a man’s suicide. In an audio commentary recorded in 1996, Wenders reminisces: We cut it very fast and we shot a lot of the stuff in… with an Arriflex, an old Arri that we could step-frame and just shoot frame-by-frame. And… as [the angels] go back and forth in time, as time is not… it’s not even an issue for them, we thought we could mix all these images together and have his nightmare consist of contemporary images, just as well as images from the past. And… Because, of course, they do see unpleasant things and they… they see violence and they have memories of the war, which is the darkest hour of Berlin.
A still from The Angel's Nightmare is presented in Figs. 4.9 and 4.10. The first is from the restored high-definition Blu-ray edition released by Criterion Collection in 2009. The second is from an illicit copy of unknown provenance circulated online. The video file on the Blu-ray has been compressed with the Advanced Video Coding compression codec (also known as H.264 or MPEG-4 Part 10 AVC) with a video bitrate of 23.98 Mb/s
Fig. 4.9 Still from Wings of Desire (1987) by Wim Wenders, from the Criterion Collection Blu-ray, compressed with Advanced Video Coding. (Image © 1987 Road Movies GmbH—Argos Films. Courtesy of Wim Wenders Stiftung—Argos Films)
Fig. 4.10 Still from Wings of Desire, compressed with High Efficiency Video Coding. (Image © 1987 Road Movies GmbH—Argos Films)
and a resolution of 1800 × 1080 pixels. The average per-pixel bitrate is about 0.514 bits per pixel. The illicit copy was compressed with High Efficiency Video Coding (HEVC, also known as H.265), a successor of the previous method. At a resolution of 1200 × 700 pixels and a bitrate of 1.56 Mb/s, the image dimensions are somewhat smaller than those of the Blu-ray edition, but the average per-pixel bitrate is approximately 0.075 bits per pixel, compressed roughly seven times more heavily in terms of information quantity per pixel. As with the scratched print of Akira and the artifact-ridden DVD of Olympia in the previous chapters, the value of these two copies of Wings of Desire depends on how one chooses to look at them. From a conventional spectatorial vantage point, within a customary tradition of looking at cinema and a particular culture of preserving, restoring, showing (and selling) it, Figure 4.10 is not just "poor" in Hito Steyerl's sense; it is almost worthless. The image is so despoiled of information that the mosaic of DCT blocks contains barely anything but each block's single lowest frequency. And yet, I would like to recall Steyerl's observation that the richness of a well-resolved, high-definition image—like the restored Blu-ray edition
here—is "anchored in systems of national culture, capitalist studio production, the cult of mostly male genius, and the original version, and thus [is] often conservative in [its] very structure" (Steyerl 2009, 3). To Steyerl's argument, I would add something that seems crucial to me: such rich images are also an implied film historiography, a certain habit of remembering the past and imagining the future of the moving image. It is a historiography that tends toward notions of continuity, infinity, sharpness and losslessness; a vision of the past in which the past can be perpetually "restored" by hiding damage and scratches. For its 30th anniversary screening at the Berlin Film Festival and subsequent theatrical re-releases in 2018, the film was restored again. Wim Wenders was quoted saying "we needed to go deeper, because we needed something that was going to survive forever" (Wenders in Horn 2018, n.p., my emphasis). Such a symbolical desire for infinity and immortality often underlies archival and restoration work. But it tends to break down when images encounter the materiality of compression. It is poignant that the compression in the illicit copy of Wings of Desire fails so spectacularly precisely at this point in the film, at one of its steepest dramaturgical peaks where so much is at stake aesthetically and emotionally. The rest of the video file is actually of decent quality. But in The Angel's Nightmare, the algorithm simply cannot keep up with the step-framing and rapid motion and montage. The affective force of the image eclipses the codec's ability to contain it. This is also the moment in the narrative in which the infinite and unrepresentable domain of angels radically clashes with the finite, mortal and physical world of humans. It is no coincidence, of course, that the expressive cinematography overwhelms algorithms that are primed for minimal camera movement. But there is something poetic about seeing compression fail exactly as the film attempts to represent death, in the moment when it suggests that it is viable to remember the past in its entirety without loss, as the angels do. Harmonic analysis makes itself known in this specific moment by haunting the signals with inscriptions of finitude. The ripples of the Gibbs phenomenon and the blockiness of motion compensation are like an echo of mortality: a reminder that all signals must decay, that our terms are numbered, that movement can only happen when time is finite. The restored Blu-ray of Wings of Desire is a crystallization of a great deal of labor and technical expertise. It is easier to watch inasmuch as the frame contains some sense of contour and direction, and forms retain a bare minimum of definition against their background. The images have
more resolution, and thus require less resolving effort on our part. But in my view, the blurry mess of the illicit copy, too, is valuable in its own right because it points outward, beyond the film. The less well-resolved the images are, the less they become identifiable as belonging to Wings of Desire in particular, and the more they begin to resemble every other poorly resolved film. The blocky, barely legible surface stands in for all those films that, unlike Wings of Desire, have neither the systematic care nor the market value to be preserved, restored, reformatted and re-restored forever, and will therefore be lost to history. The poor image reminds us that a lossless spectrum in which nothing escapes is only thinkable symbolically. This is not to say the symbolic domain is not useful. On the contrary, as sound scholars have argued, the Fourier domain gave birth to the idealized notions of "signal" and "noise" as distinct phenomena, which then led to further developments in theoretical physics, communication engineering and to practical technical applications like noise filters and technical media at large (Kromhout 2017). Reiterating that infinity is a symbolic construct does not mean that it should be abandoned; it just means that in the physical domain, discontinuities, breaks and losses must inevitably occur. We can move away from infinity, as the angel Damiel does in the film, but we can only do so once. Friedrich Kittler has speculated repeatedly about the theological associations of Fourier analysis with the divine, but also about the possibilities it offered to think of an alternative form of historiography not as a practice of recording events in time, but as an engagement with the past in terms of rotations, phase shifts and frequency manipulations (Kittler 1993). "In the Fourier domain, we are immortal," Kittler concluded in a lecture shortly before his death (2012, 48, my translation). But only there.
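Returning briefly to the two copies compared above: the per-pixel figures can be checked with a few lines of arithmetic. This is a minimal sketch under the assumption of a frame rate of roughly 24 frames per second (the frame rate is not stated above, so the exact decimals depend on it).

```python
def bits_per_pixel(bitrate_bps, width, height, fps=23.976):
    # Average number of coded bits spent on each pixel of each frame.
    return bitrate_bps / (width * height * fps)

# Criterion Blu-ray (AVC): 23.98 Mb/s at 1800 x 1080 pixels.
print(round(bits_per_pixel(23.98e6, 1800, 1080), 3))   # ~0.514
# Illicit copy (HEVC): 1.56 Mb/s at 1200 x 700 pixels.
print(round(bits_per_pixel(1.56e6, 1200, 700), 3))     # ~0.077 at 24 fps; ~0.074 at 25 fps
```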
Space

Before digital computers became widely available, doing analysis meant carrying out operations within complex intermedial assemblages involving a variety of bodily gestures. For example, a common way of performing machine-assisted analysis in the early days of oscillography was to photograph the output of an analog oscilloscope and then manually trace the photographs into a mechanical Fourier analyzer (Matos 2017). In the primeval days of brain research in the 1930s and prior to the development of the Walter electroencephalograph frequency analyzer in 1946, the
physicist Günther Dietsch used an epidiascope (an opaque projector) to enlarge ink-printed oscillograms of brain waves onto large sheets of paper and traced the projected curves by hand with a pencil (Dietsch 1932). The mathematical precision was thus a function of Dietsch’s own manual dexterity. Dietsch then calculated the brain waves’ spectral components with the help of Ludwig Zipperer’s Tables for Harmonic Analysis. The Tables were a calculation aid, a computer in the form of a book. Its pages were made of transparent film (Fig. 4.11), and one could analyze harmonic coefficients arithmetically and spatially by filling numbers into pre-printed blank tables, cutting them out and overlaying them onto other tables. Mathematical analysis in Dietsch’s case thus involved symbolic operations with numbers, but also an interesting grouping of other skills, tools, techniques and media: paper and projectors, pencils and rulers, drawing, cutting and moving things around in space. Much like bibliographers who use various optical machines and techniques of looking to generate
Fig. 4.11 An analytical table on a transparency. (Image source: Zipperer 1922)
historical relationships between books without necessarily reading them, mathematics can create abstract and symbolical relationships out of bodily gestures and techniques in physical encounters with media. After all, with a lot of patience, time and resolve, it is possible to calculate the cosine coefficients of an arbitrary wave by hand. In fact, this is how harmonic analysis had been practiced for the greater part of its history: tediously. Computation is always a material and sensory process. The analytical techniques of modelling physical events could be considered a special form of reformatting. Analysis transfers the "content" of media like air and the electromagnetic spectrum (heat, sound, light) and reformats it into sums and series of oscillating waves. In practice, techniques like the DCT often achieve this reformatting by performing spatial manipulations on signals.

Creating Order

The pursuit of computational efficiency and speed has been a major incentive in the development of computing machines and algorithms. This has held true since long before modern digital computing. The desire to accelerate computation instigated Carl Friedrich Gauss's efforts in the early 1800s to calculate a fast Fourier transform, which he did by standardizing his calculations and removing superfluous symbols as much as possible (Bullynck 2009). It equally motivated James W. Cooley and John Tukey's development of the Fast Fourier Transform in the mid-twentieth century. It was the reason Lord Kelvin and Michelson and Stratton kept building increasingly sophisticated harmonic analyzers and synthesizers. This telos of compressing computing time is well-noted in media theory, but scholars tend to overlook that acceleration is often achieved by spatial means: by manipulating, formatting and reordering data in space. Zipperer's tables are a tangible example, but spatial media also permeate analysis everywhere else conceptually and practically. Spatial metaphors abound in the "graphical imaginary" (Drucker 2010) of signal processing. When mathematicians speak of transforms and harmonic analysis, they tend to imagine mathematical manipulations in distinctly spatial, topological and geometric terms. The operations of the DCT and of many other algorithms depend on gestures of rearranging, tabulating, moving, ordering, rotating, repositioning, folding, dividing and recombining. It is, in the first instance, these dimensional reformattings that make the
microtemporality of Fourier transform algorithms possible. A splendid example of the importance of spatiality is this passage from a textbook on digital image processing: Fourier theory assumes that the array of pixels supplied as input to the DFT [discrete Fourier transform] is merely one period of an image that repeats itself infinitely in the x and y directions […]. An equivalent way of thinking about this is to regard the image as being wrapped around on itself, such that the left side touches the right and the top touches the bottom. (Efford 2000, 200)
In order to be processed, the image must thus first be imagined as wrapped around a torus: an image in the Fourier domain has no "format" since it is not bounded in space but modeled as if it extended and repeated into infinity. In fact, the reason why, compared to other transforms, the DCT is so efficient at compressing energy into only a few coefficients is that its first step is to extend the signal with a mirrored image of itself. Once the data are mirrored, the signal becomes symmetrical and the high frequencies that would normally result from the discontinuities at its edges can be reduced (Watkinson 2001). Furthermore, the matrix of coefficients returned by the algorithm is also ordered in a spatially meaningful table. The most significant coefficient, the average of the entire block, comes first, in the top left. The further away a coefficient is from this corner of the table, the less likely it is to be contributing significant information and the less sensitive a human eye would be to the fine detail it represents. Depending on the chosen image quality, only the first few of the 64 coefficients in a block will carry any information at all; most will be compressed to zero. The encoder traverses the table in a precise way, starting from the most meaningful field and moving diagonally in a zig-zag pattern (Fig. 4.12) towards the least significant one. This way, important low-frequency information is registered first, and the long series of zeroes can be shrunk efficiently with other compression methods like run-length encoding and Huffman coding. The DCT itself does not compress information, but formats it into a spatial structure that makes compression possible in the first place. The spatial form is thus indispensable to the bandwidth-reducing and time-saving properties of compression. This point is even more forcefully demonstrated by the Fast Fourier Transform, the precursor of the DCT.
Fig. 4.12 Example of a DCT quantization table (left) and the zig-zag path that the run-length encoder follows
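To make the traversal shown in Fig. 4.12 concrete, here is a minimal sketch in Python of a zig-zag scan of an 8 × 8 block of quantized coefficients followed by a toy run-length pass. It is an illustration only, with invented coefficient values; real encoders treat the DC coefficient separately and pair the run lengths with Huffman or arithmetic coding.

```python
def zigzag_order(n=8):
    # (row, col) index pairs in the diagonal zig-zag order used by JPEG-style coders.
    order = []
    for d in range(2 * n - 1):
        lo, hi = max(0, d - n + 1), min(d, n - 1)
        rows = range(lo, hi + 1) if d % 2 else range(hi, lo - 1, -1)
        order.extend((i, d - i) for i in rows)
    return order

def run_length(coefficients):
    # Record each nonzero value together with the count of zeroes that preceded it,
    # then close the block once only zeroes remain ("EOB", end of block).
    pairs, zero_run = [], 0
    for value in coefficients:
        if value == 0:
            zero_run += 1
        else:
            pairs.append((zero_run, value))
            zero_run = 0
    pairs.append("EOB")
    return pairs

# A heavily quantized block: only a handful of low-frequency coefficients survive.
block = [[0] * 8 for _ in range(8)]
block[0][0], block[0][1], block[1][0], block[2][0] = -26, -3, 1, -4

scanned = [block[i][j] for i, j in zigzag_order()]
print(run_length(scanned))   # [(0, -26), (0, -3), (0, 1), (0, -4), 'EOB']
```

The 60 trailing zeroes of the block collapse into a single end-of-block marker, which is the sense in which the zig-zag ordering prepares the data for compression rather than compressing it itself.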
The Fast Fourier Transforms

The Fast Fourier Transform (FFT) is an algorithm for calculating the frequency coefficients of a signal, published in 1965 by James Cooley (then at IBM) and John Tukey (then at Princeton University) independently of its earlier discoverers. Whereas the DCT transforms a signal into frequency components consisting only of cosine waves, the Fourier transform decomposes signals into sines as well as cosines. Cooley and Tukey's specific algorithm does this in a fast—very fast—way. Its speed represented a historic leap in signal processing. Even the transatlantic telegraph pales in comparison: depending on the size of the data matrix, some calculations that would have taken decades to compute with the conventional, plain discrete Fourier transform were accelerated to hours by the fast algorithm. The FFT achieves this spectacular speed by repeatedly arranging, separating and rearranging the signal spatially. It exploits the knowledge that under some specific mathematical conditions, one can avoid a large number of matrix additions and multiplications simply by reformatting the data into a rectangular table and performing separate two-dimensional transforms with appropriate phase shifts (Cooley et al. 1967). Because of how complex numbers behave, it is faster to separate odd and even data points, transform them separately and then recombine the two partial results, than it is to calculate all complex multiplications one by one. The key is that this can be performed recursively, repeatedly splitting the data table
and isolating even and odd rows in a "butterfly" shape (sometimes also called "divide and conquer") until the transforms are only a single point long. The effect of this computational technique is immense. And the more data there is to process, the more dramatic the difference between the regular and fast methods becomes. For a digital image with modest dimensions of 500 pixels square, the FFT is roughly 42,000 times faster than a regular discrete Fourier transform. Like Zipperer's calculation aids and Gauss's tables and inscription optimization techniques, the FFT thus uses tabulation as a spatial strategy to dramatically improve computing speed.

Historically, the FFT, much like the Gibbs phenomenon, is also a good reminder of how forgetful the discipline of mathematics can be and how frequently it forgets and rediscovers techniques and epistemic instruments. In 1984, historians of mathematics realized that Gauss had described an efficient algorithm for evaluating the sine and cosine coefficients of a Fourier series in an unpublished and undated manuscript believed to be from 1805. Gauss had thus found a fast Fourier transform algorithm 160 years before it would get reinvented by James W. Cooley and John Tukey as the now ubiquitous FFT. But the algorithm had already surfaced in the 1940s (Danielson and Lanczos 1942; Cooley et al. 1967; Cooley 1987, 43) and the knowledge about Gauss had itself already been discovered and forgotten twice, in 1904 and again in 1977 (Heideman et al. 1984). The FFT teaches us a basic media-archaeological lesson about media techniques' resistance to neat timelines. Although the chronology of the authorship of theorems, axioms, algorithms, proofs and their formal relations is traditionally very important to the historiography of mathematics, algorithms have complicated non-linear pasts punctuated by repeated disremembering and do not lend themselves well to conventional historiographical notions like authorship. This pertains to harmonic analysis at large, too. As soon as we try to bring its origin—and the origins of compression more generally—into focus, it slips away, appearing as a sprawling superposition of multiple events and a disordered tangle of scientific disciplines and interrelated mathematical or physical problems. Instead of searching for origins, we can benefit from pausing on the moments of reappearance that algorithms and other computational techniques undergo to ask ourselves what technological and epistemic conditions have led to their being forgotten, and what necessitated their being remembered again. By doing so, we may come to the realization that the history of video compression and visual media is longer than one might expect.
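The even and odd splitting described above can also be written out directly. The following is a minimal recursive sketch of the radix-2 "divide and conquer" idea, not the optimized in-place routines used in production code, and it assumes the input length is a power of two.

```python
import cmath
import math

def fft(x):
    # Recursive radix-2 Cooley-Tukey transform; len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # transform of the even-indexed samples
    odd = fft(x[1::2])    # transform of the odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # The "butterfly": one complex multiplication recombines both halves.
        t = cmath.exp(-2j * math.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# Eight samples of a single cosine: nearly all energy lands in two coefficients.
samples = [math.cos(2 * math.pi * i / 8) for i in range(8)]
print([round(abs(c), 3) for c in fft(samples)])   # [0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.0]
```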
Cooley and Tukey's version of the fast algorithm, developed in a milieu of Cold War nuclear military paranoia (Cooley 1987; Rockmore 2000), massively accelerated not only transforms into the frequency domain but also many other mathematical techniques that can be performed in it, such as sorting and filtering algorithms. It has been noted that the reason why the FFT had not been rediscovered until the 1960s could have been the amount of processed data: the computational savings of a fast algorithm (or an analytical table, or other expediting tools) are negligible when one is only dealing with 12 or 36 data points, as Gauss had been (Maibaum 2016, 84). Due to the imprecisions of nineteenth-century measuring instruments, calculating more than a handful of coefficients for any data set would have made little sense anyway (Heideman et al. 1984). But this very quickly changed in the twentieth century. As measurement data of all sorts were being churned out in the mid-twentieth century and after analog-to-digital converters had been developed in the mid-1960s (Cooley 1987), the volume of data began ballooning and computation economy became an increasingly tight bottleneck. The more data there are to process, the larger the difference between the standard and fast algorithm becomes. Yet perhaps most importantly, after the algorithm had been discovered and forgotten so many times, its success this time around was partly also a function of the media work IBM had done. IBM publicized the technique in full-page advertisements and, crucially, released it into the public domain because its patentability seemed dubious and because "the thought was that the money was to be made in hardware, not software" (Rockmore 2000, 6). Besides superior computational performance, over one and a half centuries after Gauss, not only had the lack of a fast algorithm become economically intolerable, but there was now a media-technological dispositif to distribute and promote it.

Computing with Light

Nineteenth-century mechanical analyzers and Zipperer's analytical tables testify that analytical work can be performed with a wide range of mediatic techniques and instruments. In the context of harmonic analysis, lenses are one of the more peculiar instruments that can calculate. Placing a lens between a screen and an object photographed on transparent film creates an optical system. When the screen is in the focal plane of the lens and the system is illuminated by collimated light, it will project the object's
Fraunhofer diffraction pattern—a Fourier transform—onto the screen. The diffraction angle corresponds to frequency. Optical projection systems can thus be used to compute two-dimensional Fourier transforms at the speed of light. In this way, it is possible to show the spatial frequency spectrum of an object, manipulate it and perform matrix multiplications by placing physical filters into the system (Huang et al. 1971; Tyson 2014). Visual interference patterns make it possible to compare images without computers and with only photographic means. Lenses and screens, two principal components of visual media, thus not only record and project images, but can also compute. Indeed, there was a time when optical Fourier transform systems were faster than digital computers for some types of applications. A 1971 paper writes:

A giant coherent optical system at the Institute of Science and Technology, University of Michigan, is capable of processing 70-mm films with a resolution of 100 cycles/mm. It can therefore do a Fourier transformation on approximately 2 × 10⁸ data points essentially instantaneously. Now suppose we do the same thing on a digital computer. It would take more than an hour just to read in the data points, if the computer film reader reads at 30 μs/point. Assume that the Cooley and Tukey [FFT] algorithm is used, and assume that the computer had a core memory of more than 4 × 10⁸ words, it still would take about 100 h to perform the Fourier transform. (Huang et al. 1971, 1604)
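The scale of that comparison can be checked with a couple of lines of arithmetic (a back-of-the-envelope sketch of the figures quoted by Huang et al., not a claim about the Michigan system itself):

```python
data_points = 2e8                      # roughly 2 x 10^8 resolvable points on the film
read_in_seconds = data_points * 30e-6  # film reader digitizing at 30 microseconds per point
print(read_in_seconds / 3600)          # about 1.7 hours just to read in the input,
                                       # before a single digital transform has been computed
```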
The media-historical implications of such optical computing systems are considerable. As historians of analog computing have taught us, tide-prediction machines and mechanical analyzers clearly affirm the important status computers have had prior to what are conventionally considered the early years of cinema in the 1890s (Care 2012; Ulmann 2013). Yet the fact that optical tools like lenses can also perform complex analytical operations means that we may have to rethink some of the received, all too readily made pronouncements regarding the relationship between cinema and computational media. Conventionally, in much of film theory, digital cinema has been construed as ontologically distinct from analog cinema due to its basis in computation (Manovich 2001; Doane 2007). The example of Fourier optics demonstrates that the key photographic components of analog cinema—light passing through a semitransparent film and a lens and hitting a screen—can fairly easily also be repurposed into computing machines. This unsettles
the historical frame of reference upon which many of the arguments about the nature of digital and analog cinema had been built, since cinema and computation have historically and materially been much more proximate than received wisdom would have us believe. Computation and signal processing, although not independent of hardware, can be performed with hardware of many kinds: through the manipulation of symbols on paper or the moving of electrical charges in semiconductors, but also mechanically, electrooptically with cathode ray tubes, photo-electromechanically with various resonant systems, symbolically with numbers, geometrically with measurements, graphically with tables and transparencies, as well as optically with light and lenses.
Compression as Moving Image Infrastructure

It is not an exaggeration to say that much of present-day popular audiovisual culture is indebted to a single mathematical technique, the discrete cosine transform (DCT). Every major lossy compression format uses an algorithmic implementation of the DCT or one of its successors. The DCT underlies the epoch-defining triumvirate of the post-television age: JPEG images, MPEG videos and MP3 audio all use it to compress data. Various forms of the DCT are also used in other formats like Dolby AC-3 (used for sound in high-definition digital television), in the DV format, and in all video codecs of the MPEG family, which covers the DVD and Blu-ray formats as well as digital broadcasting standards and most common web video codecs. The discrete cosine transform is what links computational processes of compression to the dis/abilities of the human sensory system. To the extent that they make the logistical and technological practices of producing, distributing, broadcasting, exhibiting, and preserving moving images possible, compression algorithms must be viewed as part of their algorithmic infrastructure. The discrete cosine transform, in particular, inextricably weaves the history of audiovisual media into that of harmonic analysis. It can tell us much about the role of mathematics in visual media culture, as well as the role of media in the culture of mathematics. And indeed, the material culture of mathematics is a splendid repository for media history. The many calculating devices and computing techniques accompany the history of the moving image like a faint undercurrent, in various forms. The spatial techniques of computation that underlie modern video compression also somewhat relativize the insistence, voiced often in some traditions of media archaeology, on examining the Eigenzeit
of media—the temporal logic proper to technology. In Wolfgang Ernst’s approach to media archaeology, for example, as digital theorist Geoff Cox has summarized, “the human sensory apparatus is considered inadequate for the recording of cultural memory” (Cox 2015, 159; cf. Ernst 2012). The goal of this form of archaeology then becomes to excavate and understand the autonomous microtemporalities of media. But, as we have seen, on the level of practice and technique, those microtemporalities of computation often emerge out of observable physical and deeply sensory gestures of drawing, measuring or turning cranks. My point is not that previous media theories are wrong when they emphasize the temporal dimension of compression or of computation at large. Finding efficient algorithms and accelerating the speed of calculation is a powerful telos within mathematics that has been present for millennia (Siegert 2003). But acceleration is, in many cases, achieved spatially. Temporality is often a product of creating proximities; of manipulating, formatting and moving elements in space. Sequestering temporality from spatiality or giving one conceptual preference over the other is methodologically less effective than a consideration of how temporalities on any scale are enacted in space, and how each is constituted in and by the other. This is not a radical suggestion, and it has been explored previously (Mackenzie 2001; Hansen 2015). Cox argues that “[c]oncentrating efforts on understanding temporality at both micro and macro levels begins to unfold more complex and layered problems of different kinds of time existing simultaneously across different geopolitical contexts” (2015, 161). Considering critically the implications of Wolfgang Ernst’s emphasis on microtemporality, Cox continues: “Rather than run the risk of overlooking the potential of macrotemporality of history in favour of microtemporality, why not deepen the contradictions between them?” (Cox 2015, 159). Cox’s is a valuable suggestion, but perhaps it could be useful to take an even further step back and reconsider whether the way we sort temporal scales into micro, meso and macro levels is in itself useful. It would appear intuitively understandable what we mean when we talk of micro and macro scales, since their measure is some unarticulated but shared human sense of time. But is it really helpful to approach the world with this a priori scaling already in place? When faced with “this political problem of temporality” (Cox 2015, 152) of an object like compression algorithms, whose logic resonates on a
broad spectrum of frequencies and timescales simultaneously, preemptively excluding some of them means putting aside factors that might develop effects across other scales. The microtemporal level of algorithms is inseparable from every other temporal scale as well as from human history. The difference of a single addition in an implementation of a compression codec has effects on the climate, and is thus factually operating both at a microtemporal scale and in planetary time. What lessons can we draw from a media epigraphy of ringing and blocking? By closely following traces of compression inscribed in moving images, an alternative archaeology of the moving image begins to appear. Most saliently, these compression traces are subtle reminders that the history of mathematics and the history of visual media can not only be thought in parallel, but that they have been integral to each other long before the advent of digital media and binary computing. The symbolical notions of infinity that inform mathematical methods are at odds with the finitude and materiality of signals, and this conflict plays out daily in audiovisual culture in the form of compression artifacts. To properly understand the history of the moving image, one must also study the mathematical techniques that make it possible. Analysis, in particular, is of central interest to theories of media, compression and audiovisual culture. Conversely, to properly understand the history of mathematics, one must also study the media that make mathematical inquiry possible. The history of analytical thought can be juxtaposed with some of its material media. In fact, the abstract and symbolic operations that compression algorithms perform are inseparable from a long history of physical, spatial and material modes of calculation. In suggesting that algorithms be considered part of the infrastructure of cinema, I have situated moving images within an archaeology of mechanical, graphical, optical, and symbolical computing machines and techniques. At the same time, I have also argued that, far beyond only its microtemporal effects preserved as traces, this algorithmic infrastructure ramifies through our lived environment and time as heat. The spectral manipulation of signals, for which harmonic analysis provides the mathematical a priori, is deeply structured by symbolical notions of infinity, but at odds with the intractable material finitude of machines.
References Ahmed, Nasir. 1991. How I Came Up with the Discrete Cosine Transform. Digital Signal Processing 1: 4–5. https://doi.org/10.1016/ 1051-2004(91)90086-Z. Ahmed, Nasir, T. Natarajan, and K.R. Rao. 1974. Discrete Cosine Transform. IEEE Transactions on Computers C–23: 90–93. https://doi. org/10.1109/T-C.1974.223784. Alexander, Neta. 2017. Rage against the Machine: Buffering, Noise, and Perpetual Anxiety in the Age of Connected Viewing. Cinema Journal 56: 1–24. https:// doi.org/10.1353/cj.2017.0000. Bressoud, David M. 2007. A Radical Approach to Real Analysis. 2nd ed. Washington, D.C: The Mathematical Association of America. Bullynck, Maarten. 2009. Rechnen mit der Zeit, Rechnen gegen die Zeit. F. X. von Zachs ‚Archiv der Beobachtungen‘ und C. F. Gauß’ Rechnung per rückkehrender Post (1800–1802). In Zeitkritische Medien, ed. Axel Volmar, 177–193. Berlin: Kulturverlag Kadmos. Caplan, Paul Lomax. 2012. JPEG: The Quadruple Object. Doctoral Dissertation, London: Birkbeck, University of London. Care, Charles. 2012. Early Computational Modelling: Physical Models, Electrical Analogies and Analogue Computers. In Ways of Thinking, Ways of Seeing. Mathematical and Other Modelling in Engineering and Technology, ed. Chris Bissell and Chris Dillon, 95–119. Berlin/Heidelberg: Springer. https://doi. org/10.1007/978-3-642-25209-9. Carslaw, H.S. 1917. A Trigonometrical Sum and the Gibbs’ Phenomenon in Fourier’s Series. American Journal of Mathematics 39: 185–198. https://doi. org/10.2307/2370535. Cooley, James W. 1987. The Re-discovery of the Fast Fourier Transform Algorithm. Mikrochimica Acta 93: 33–45. https://doi.org/10.1007/ BF01201681. Cooley, James W., P.A.W. Lewis, and P.D. Welch. 1967. Historical Notes on the Fast Fourier Transform. Proceedings of the IEEE 55: 1675–1677. https://doi. org/10.1109/PROC.1967.5959. Cox, Geoff. 2015. Postscript on the Post-digital and the Problem of Temporality. In Postdigital Aesthetics: Art, Computation And Design, ed. David M. Berry and Michael Dieter, 151–162. Houndmills, Basingstoke, Hampshire; New York: Palgrave Macmillan. Crofts, Charlotte. 2008. Digital Decay. The Moving Image: The Journal of the Association of Moving Image Archivists 8: 1–35. Cubitt, Sean. 2011. Vector Politics and the Aesthetics of Disappearance. In Virilio Now: Current Perspectives in Virilio Studies, ed. John Armitage, 68–91. Cambridge: Polity Press.
Danielson, G.C., and C. Lanczos. 1942. Some Improvements in Practical Fourier Analysis and Their Application to x-ray Scattering from Liquids. Journal of the Franklin Institute 233: 365–380. https://doi.org/10.1016/ S0016-0032(42)90767-1. Dietsch, G. 1932. Fourier-Analyse von Elektrencephalogrammen des Menschen. Pflüger’s Archiv für die gesamte Physiologie des Menschen und der Tiere 230: 106–112. https://doi.org/10.1007/BF01751972. Doane, Mary Ann. 2007. The Indexical and the Concept of Medium Specificity. differences 18: 128–152. https://doi.org/10.1215/10407391-2006-025. Donoho, D.L., M. Vetterli, R.A. DeVore, and I. Daubechies. 1998. Data Compression and Harmonic Analysis. IEEE Transactions on Information Theory 44: 2435–2476. https://doi.org/10.1109/18.720544. Drucker, Johanna. 2010. Graphesis: Visual Knowledge Production and Representation. Poetess Archive Journal 2: 1–50. Dutch Data Center Association. 2020. State of the Dutch Data Centers 2020. Dutch Data Center Association. Efford, Nick. 2000. Digital Image Processing: A Practical Introduction Using Java. Harrow, England; New York: Addison-Wesley. Ernst, Wolfgang. 2012. Chronopoetik: Zeitweisen und Zeitgaben technischer Medien. Berlin: Kulturverlag Kadmos. ———. 2013. Media Archaeography: Method and Machine versus the History and Narrative of Media. In Digital Memory and the Archive, ed. Jussi Parikka, 55–73. Minneapolis, MN: University of Minnesota Press. Fleming, James Rodger. 2005. Joseph Fourier’s Theory of Terrestrial Temperatures. In Historical Perspectives on Climate Change, 55–64. Oxford/New York: Oxford University Press. Foucault, Michel. 1978. The History of Sexuality. Translated by Robert Hurley. 1st American ed. New York: Pantheon Books. Fourier, Joseph. 1822. Théorie analytique de la chaleur. Paris: Firmin Didot. ———. 1878. The Analytical Theory of Heat. Translated by Alexander Freeman. Cambridge: Cambridge University Press. Fuller, Matthew, ed. 2008. Software Studies: A Lexicon. Cambridge, MA: The MIT Press. Geoghegan, Bernard Dionysius. 2021. The Bitmap is the Territory: How Digital Formats Render Global Positions. MLN 136. Johns Hopkins University Press: 1093–1113. https://doi.org/10.1353/mln.2021.0081. Goldstine, Herman Heine. 1993. The Computer from Pascal to Von Neumann. Princeton, N.J: Princeton University Press. Gottlieb, David, and Chi-Wang Shu. 1997. On the Gibbs Phenomenon and Its Resolution. SIAM Review 39: 644–668. https://doi.org/10.1137/ S0036144596301390.
Grattan-Guinness, Ivor. 1972. Joseph Fourier, 1768–1830: A Survey of His Life and Work, Based on a Critical Edition of his Monograph on the Propagation of Heat, Presented to the Institut de France in 1807. Cambridge: The MIT Press. ———. 1990. Convolutions in French Mathematics, 1800–1840: From the Calculus and Mechanics to Mathematical Analysis and Mathematical Physics. Vol. 2. Basel: Birkhäuser. ———. 2005. Joseph Fourier, Théorie Analytique de la Chaleur (1822). In Landmark Writings in Western Mathematics 1640–1940, ed. Ivor Grattan- Guinness, 354–365. Chichester; New York: Elsevier Science. Hansen, Mark B.N. 2015. Symbolizing Time: Kittler and Twenty-First-Century Media. In Kittler now: current perspectives in Kittler studies, Theory Now, ed. Stephen Sale and Laura Salisbury, 210–238. Cambridge: Polity Press. Heideman, M., D. Johnson, and C. Burrus. 1984. Gauss and the History of the Fast Fourier Transform. IEEE ASSP Magazine 1: 14–21. https://doi. org/10.1109/MASSP.1984.1162257. Herivel, John. 1975. Joseph Fourier: The Man and the Physicist. Oxford: Oxford University Press. Hewitt, Edwin, and Robert E. Hewitt. 1979. The Gibbs-Wilbraham Phenomenon: An Episode in Fourier Analysis. Archive for History of Exact Sciences 21: 129–160. Horn, Andrew. 2018. Wim Wenders’ ‘Wings of Desire’ Soars to Screens After Restoration. Variety. Huang, T.S., W.F. Schreiber, and O.J. Tretiak. 1971. Image Processing. Proceedings of the IEEE 59: 1586–1609. https://doi.org/10.1109/PROC.1971.8491. Jancovic, Marek, and Judith Keilbach. 2023. Streaming Against the Environment: Digital Infrastructures, Video Compression, and the Environmental Footprint of Video Streaming. In Situating Data: Inquiries in Algorithmic Culture, ed. Karin van Es and Nanna Verhoeff, 85–102. Amsterdam: Amsterdam University Press. Jerri, A.J. 2013. The Gibbs Phenomenon in Fourier Analysis, Splines and Wavelet Approximations. Dordrecht: Springer Science & Business Media. Kittler, Friedrich A. 1993. Draculas Vermächtnis. Technische Schriften. Leipzig: Reclam. ———. 2003. Blitz und Serie—Ereignis und Donner. In Ereignis: eine fundamentale Kategorie der Zeiterfahrung: Anspruch und Aporien, ed. Nikolaus Müller- Schöll and Philipp Schink, 145–158. Bielefeld: Transcript. ———. 2012. Und der Sinus wird weiterschwingen: über Musik und Mathematik [im Rahmen der Ringvorlesung “Die Künste und die Wissenschaften” Kunsthochschule für Medien Köln, 3. Februar 2011). Köln: Verlag der Kunsthochschule für Medien. Kleinnijenhuis, Jan, and Renee van Hest. 2022. Netbeheerders slaan alarm: vraag naar stroom explodeert. NOS. October 4.
Kromhout, Melle Jan. 2017. Noise Resonance: Technological Sound Reproduction and the Logic of Filtering. Doctoral dissertation, Amsterdam: University of Amsterdam. Kuhr, Stefan. 2001. Implementation of a JPEG Decoder for a 16-bit Microcontroller. Master’s thesis, Stuttgart: University of Applied Sciences. Lin, Chu-Hsing, Jung-Chun Liu, and Chun-Wei Liao. 2010. Energy Analysis of Multimedia Video Decoding on Mobile Handheld Devices. Computer Standards & Interfaces 32: 10–17. https://doi.org/10.1016/j. csi.2009.04.003. Mackenzie, Adrian. 2001. The Technicity of Time. Time & Society 10: 235–257. https://doi.org/10.1177/0961463X01010002005. ———. 2008. Codecs. In Software Studies: A Lexicon, ed. Matthew Fuller, 48–55. Cambridge, MA: The MIT Press. ———. 2013. Every Thing Thinks: Sub-representative Differences in Digital Video Codecs. In Deleuzian Intersections: Science, Technology, Anthropology, ed. Casper Bruun Jensen and Kjetil Rodje, 139–154. New York/Oxford: Berghahn Books. Maibaum, Johannes. 2016. Schnelle Transformationen. Eine medienarchäologische und objektorientierte Untersuchung von Fourier- Transformationsalgorithmen. MA thesis, Berlin: Humboldt University. Manovich, Lev. 2001. What is Digital Cinema? In The Digital Dialectic, ed. Peter Lunenfeld, 172–198. Cambridge, MA: The MIT Press. Matos, Sónia. 2017. Can Languages be Saved?: Linguistic Heritage and the Moving Archive. In Memory in Motion, Archives, Technology and the Social, ed. Ina Blom, Trond Lundemo, and Eivind Røssaak, 61–84. Amsterdam: Amsterdam University Press. https://doi.org/10.2307/j.ctt1jd94f0.6. Mercat, Alexandre, Florian Arrestier, Wassim Hamidouche, Maxime Pelcat, and Daniel Menard. 2017. Energy Reduction Opportunities in an HEVC Real- time Encoder. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1158–1162. https://doi.org/10.1109/ ICASSP.2017.7952338. Michelson, Albert A. 1898. Fourier’s Series. Nature 58: 544–545. https://doi. org/10.1038/058544b0. Michelson, Albert A., and S.W. Stratton. 1898. A New Harmonic Analyser. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 45: 85–91. https://doi.org/10.1080/14786449808621106. Monteiro, Eduarda, Mateus Grellert, Sergio Bampi, and Bruno Zatt. 2015. Rate- distortion and Energy Performance of HEVC and H.264/AVC Encoders: A Comparative Analysis. In 2015 IEEE International Symposium on Circuits and Systems (ISCAS), 1278–1281. https://doi.org/10.1109/ISCAS. 2015.7168874.
Prestini, Elena. 2004. The Evolution of Applied Harmonic Analysis. Boston: Birkhauser. Rasmussen, Nicolas. 1993. Facts, Artifacts, and Mesosomes: Practicing Epistemology with the Electron Microscope. Studies in History and Philosophy of Science Part A 24: 227–265. https://doi.org/10.1016/0039-3681(93)90047-N. Rockmore, D.N. 2000. The FFT: An Algorithm the Whole Family Can Use. Computing in Science Engineering 2: 60–64. https://doi. org/10.1109/5992.814659. Schonig, Jordan. 2022. The Shape of Motion: Cinema and the Aesthetics of Movement. New York: Oxford University Press. Siegert, Bernhard. 2003. Passage des Digitalen: Zeichenpraktiken der neuzeitlichen Wissenschaften, 1500–1900. Berlin: Brinkmann & Bose. Smith, Steven. 2013. Digital Signal Processing: A Practical Guide for Engineers and Scientists. Burlington, MA: Elsevier. Star, Susan Leigh. 1985. Scientific Work and Uncertainty. Social Studies of Science 15: 391–427. https://doi.org/10.1177/030631285015003001. Starosielski, Nicole. 2014. The Materiality of Media Heat. International Journal of Communication 8: 2504–2508. Sterne, Jonathan. 2012. MP3: The Meaning of a Format. Durham: Duke University Press. Sterne, Jonathan, and Dylan Mulvin. 2014. The Low Acuity for Blue: Perceptual Technics and American Color Television. Journal of Visual Culture 13: 118–138. https://doi.org/10.1177/1470412914529110. Steyerl, Hito. 2009. In Defense of the Poor Image. e-flux: 1–9. Strang, Gilbert. 1994. Wavelets. American Scientist 82: 250–255. Tyson, Robert K. 2014. Fourier Transforms and Optics. In Principles and Applications of Fourier Optics, 4-1-4–9. IOP Expanding Physics. Bristol: IOP Publishing. Ulmann, Bernd. 2013. Analog Computing. Berlin: Walter de Gruyter. Veraart, Jelle, Els Fieremans, Ileana O. Jelescu, Florian Knoll, and Dmitry S. Novikov. 2016. Gibbs Ringing in Diffusion MRI: Gibbs Ringing in Diffusion MRI. Magnetic Resonance in Medicine 76: 301–314. https://doi. org/10.1002/mrm.25866. Wallace, G.K. 1992. The JPEG Still Picture Compression Standard. IEEE Transactions on Consumer Electronics 38: xviii–xxxiv. https://doi. org/10.1109/30.125072. Watkinson, John. 2001. An Introduction to Digital Video. 2nd ed. Jordan Hill: Focal Press. Wilbraham, Henry. 1848. On a Certain Periodic Function. The Cambridge and Dublin Mathematical Journal 3: 198–201. Zipperer, Ludwig. 1922. Tafeln zur Harmonischen Analyse Periodischer Kurven. Berlin/Heidelberg: Springer.
CHAPTER 5
Viewer Discretion is Advised: Flicker in Media, Medicine and Art
In March 2017, a peculiar news item reverberated across North American and European news channels. A Dallas County grand jury on Tuesday indicted John Rayne Rivello of Salisbury, Maryland, on an aggravated assault charge enhanced as a hate crime […]. Rivello is accused of sending a strobing image to reporter Kurt Eichenwald’s Twitter account with the intention of causing a seizure. Eichenwald, who lives in the Dallas area, has epilepsy. Included with the image was the message: “You deserve a seizure for your posts.” The image was apparently sent in response to Eichenwald’s outspoken criticism of then-President-elect Donald Trump. In a statement Tuesday, Rivello’s attorneys said he is a military veteran with post-traumatic stress who apologized to Eichenwald and is seeking counseling. (Associated Press 2017)
A criminal complaint filed by the FBI details the online manhunt that led to the man's arrest.1 Rivello had used a prepaid burner phone to conceal his identity, but the Dallas Police Department and the FBI were able to track him down after issuing search warrants for Twitter's IP address and phone number logs. The mobile provider's metadata records turned up an iPhone associated with the number, which matched a corresponding Apple iCloud account. On this account, Rivello conveniently stored a

1 United States of America v. John Rayne Rivello 2017, Document 1. The case was dropped by federal prosecutors in November 2017.
Fig. 5.1 One frame from the two-frame strobing animated GIF sent to Eichenwald. This is a meme that has been circulating online since at least 2004
photo of himself holding up his driver’s license along with screenshots of the tweet and the Wikipedia page on Eichenwald he had vandalized, as well as the fateful flickering GIF (Fig. 5.1). Rivello’s mundane but nonetheless labyrinthine movements traverse a variety of formats, platforms and practices. His actions expose the mutual transitivity between online and offline forms of abuse and violence, along with his naïve but certainly considered attempt at online anonymity. Eichenwald’s case is an intricate quagmire of many chronic maladies plaguing our society and networked media: trolling and hate speech, the devolution of political discourse into tribalist babble, the resurgence of fascist intimidation tactics (Eichenwald has Jewish heritage and the case stirred many Nazi sympathizers to rally to Rivello’s defense), data privacy and the convoluted vectors of surveillance, the relationship between the United States’ military conquests and mental disorders, or the increasingly life-threatening conditions of global journalistic labor. But there is one more facet that makes this case stand out. In a move that greatly intensifies
the paradoxical relationship that animation entertains with death (Cholodenko 1991; Sobchack 2009), the jury’s indictment classified the tweet and animated GIF as “a deadly weapon” (The State of Texas v. John Rayne Rivello 2017). GIF, the compressed moving image format that usually litters the web with weird memes, can seemingly also turn into a deadly instrument of harm.
A Different Kind of Violence

If we provisionally accept Michel Foucault's supposition that the body is "an inscribed surface of events" (1984, 83), then the body itself becomes addressable by media epigraphy. A seizure like Eichenwald's, then, can be viewed as a particularly pernicious event inscribed on the body by compressed, flickering moving images. Flicker is a trace of the most elementary form of moving image compression: the fewer images are shown per second, the less information there is to transmit. But this reduction is traded in for increasing strobing and stuttering of the image. We could say that flicker is the waste left over from compression—and this waste can be literally harmful. But if compressed images can cause harm, then the history of video compression is, in part, also a history of violence. One of the main points of contention in Eichenwald's civil case against Rivello was profoundly film-theoretical. The strobing GIF case forced the American legal system to grapple with the problem of whether a moving image can "touch" a person—and thereby legally constitute battery under Texas law. U.S. District Judge James Bredar only confirmed what Laura U. Marks had already argued years ago: of course moving images can touch our bodies. But this legal decision is nonetheless striking, not least because it upends the commonplace understanding of the relationship between violence and media. When we speak about violence and media, what we often really mean is a much narrower alliance than the expansive meaning of both those words might suggest. What people tend to refer to is violence in media, or even more precisely, images of violence in "the media." When they recognize imagery as a threat, theories of the image almost invariably speak of a mimetic violence of the second order. When, for instance, Susan Sontag wrote of the power of images to "assault" us, the nature of this power was assumed to lie in what the images depict (Sontag 2004). Many of those who share an interest in problems of the pictorial recognize that all
representations of violence—and in some cases representation in and of itself—are also re-inscriptions of violence (Metz 1985; Bronfen 1990; Nancy 2005). Yet this leaves us with a much smaller but importunate class of images that can assault even if they do not represent anything at all. It seems to me that the weaponization of a GIF in the Eichenwald incident signals a need to recalibrate our understanding of the term "media violence" and of the uneven distribution of those whom this violence tends to affect. This was not the first time that strobing animations were sent through computer networks to remotely, purposely induce epileptic seizures and cause bodily harm in susceptible people.2 But Eichenwald's case is nonetheless remarkable because its entry into the public record formally inaugurates a long but poorly understood regime of visual aggressions: moving images that are aggressive not in what they show, but because of what they do. With the GIF incident as my point of departure, in this and the following chapter, I will consider flicker as a compression artifact and a source of violence and pleasure. Tracing an epigraphy of flicker allows us to address some strange exchanges between unruly bodies and disorderly images that take place at the margins of media and medicine. Epilepsy provides us with a particularly compelling impulse, because it urges us to seek out a richer historical account of video compression that includes its corporeality: the harmful and gratifying bodily sensations that compressed moving images can cause, the queerness of dysfunctional media and the sensory politics of standards and infrastructure. Compression methods are a type of standardized technological procedure. These standards are inculcated in our sensory experience of the world, governing how a great portion of our daily visual and auditory events take shape. "Study an information system and neglect its standards, wires, and settings, and you miss equally essential aspects of aesthetics, justice, and change," Susan Leigh Star (1999, 379) once warned. Like other technological standards, compression methods are saturated with expectations regarding how our bodies should typically behave. On the one hand, media-technological standards presuppose users who are both functioning, and functioning in particular ways (Sterne 2006; Sterne and Mulvin 2014). On the other hand, standards also write these forms of function into being. Standards are one of those things that, to borrow

2 In March 2008, a message board operated by the Epilepsy Foundation of America was targeted by attackers who posted hundreds of messages with flashing attachments.
from the queer and feminist scholar Sara Ahmed, “are relegated to the background in order to sustain a certain direction” (Ahmed 2006, 31). What happens when this direction is abandoned, when users stop functioning, when standardized ways of seeing are performed “incorrectly”? What happens when bodies and images misbehave, fail or refuse to align with the norming capacity of the standard? When the media users implied and anticipated by standards do not orient themselves as expected, images may become violent. They may provoke nausea, seizures and other potentially harmful bodily responses in those who view them. We can read such events as traces of the very visceral relationships our bodies develop with media. Seizures like the one experienced by Kurt Eichenwald are, in a manner of speaking, a failure of the body to engage with media standards in the expected way. Strikingly, such failures tend to appear in close proximity to flicker, itself a failure of moving images to compress seamlessly and transparently. Drawing on disability studies, queer studies and science and technology studies and using photosensitive seizures as a case study, in this chapter, I will show how compression standards, medicine and visual culture produce, enable and enact certain forms of disability and violence, and how, in turn, disabilities inform, orient and inscribe themselves into standards, medical practices and visual culture at large. One of the unexpected directions that my research on video compression led to has been neurological literature. Visual media play a central epistemic role in medical practice, and brain researchers are particularly dependent on and cognizant of imaging technologies. In fact, one could reconstruct a general history of moving image culture simply by peeping into medical journals. Neurologists acutely follow and discuss developments in media tech. The introduction of remote controls, of color television, of reel-to-reel video recorders and VHS tapes, the advent of home computing, the use of CRT, plasma, LCD and LED displays, the steadily growing popularity of gaming, 3D cinema and 3D television—all of these topics find a mention in the margins of brain research of the past decades. This chapter will chart some of the many ways in which neurology has come to rely on and shape itself through visual media. But after having reviewed a range of neurological and psychiatric literature on photosensitive epilepsy covering the last 70 years, I found that neurological discourses also tell us a great deal about societal anxieties surrounding gender, sexuality and technology on the one hand, and, on the other hand, about unique, marginal media practices that rely on compressed images. Some of
these practices not only subvert medical and patriarchal forms of power, but also seem to overturn our accepted understanding of what it means to be a spectator of moving images. What awaits us in this and the next chapter is a branching, intricate story of unintended side-effects, dangerous works of art, unruly and mischievous children and some highly unusual ways of watching television. But first, a brief history of photosensitive epilepsy to lay some groundwork.
Photosensitive Epilepsy

Epilepsy is a set of neurological conditions that affects roughly 50 million people worldwide. It is estimated that out of these, around 3–5% are photosensitive.3 Photosensitive people experience epileptic seizures or show neurologically abnormal responses to strong or flickering light. Estimates suggest that photosensitive epilepsy (PSE) affects somewhere between one in 4000 and one in 6000 people, but the susceptibility to flashing light is five times higher in children and adolescents aged 7–20, among whom girls are twice as likely to be affected as boys. Once a seizure is experienced, two thirds to three quarters of patients remain sensitive to light for life, although seizures can remit with age, especially after early adulthood. A seizure is a specific pattern of electrical activity in the brain marked by the excessive synchronized firing of a group of neurons. It can manifest in various forms: from a brief momentary loss of consciousness (called an absence) followed by mood changes, to severe uncontrolled convulsions or muscular spasms accompanied by confusion, fatigue, incontinence or vomiting. The parameters that influence seizures differ by the specific epilepsy syndrome, of which there are many. They are diverse and span a wide range of motor and sensory modalities. Epileptic seizures can be provoked by intrinsic bodily factors (as diverse as stress, mental calculation, emotional excitement or menstruation) and extrinsic precipitants (including alcohol consumption, bathing in hot water or even exotic triggers like listening to a particular melody, playing board games, toothbrushing or ironing striped clothes). But for people with photosensitive epilepsy, visual stimuli are the most frequent seizure trigger, and among these, television is by far the most common (Panayiotopoulos 2017). In Europe, close to

3 The International League against Epilepsy suggests the term "visual-sensitive." Some sources, e.g. Saleem et al. (1994), estimate photosensitivity affects as many as 5–10% of epilepsy patients.
two thirds of patients experience their first seizure while watching television (Harding and Takahashi 2004; Radhakrishnan et al. 2005; Harding and Harding 2010). Even though the specific mechanisms by which they trigger seizures are not clearly understood, images and visual scenes with regular, high-contrast patterns are usually epileptogenic. The likelihood of a seizure varies with the properties of the offending image. Contrast, the shape and orientation of patterns, flicker frequency, interlacing, the distance from the screen, the color of the light, whether one or both eyes are affected and a whole host of other variables affect how provocative an image is. Epileptic seizures generally occur suddenly. This can make them terrifying and distressing for the person affected, and even brief seizures carry situational risks of injury or death, for instance when driving or swimming. The top answer to the question “What does it feel like to have a seizure?” on an online forum gives an impression of how a severe seizure might be experienced:

I remember seeing an acquaintance have a seizure and needing to be reassured that when I did it, it wasn’t that frightening, that I didn’t look like that, that I wasn’t so vacant or dead… but the reality is—I’m not there when I’m having a seizure. My body is in control of me. Everything that makes me, me… it’s not there. (Carletti 2014)
The experience of a seizure can thus be accompanied by a profound sense of disorientation and a temporary loss of one’s own embodied personhood. As such, epilepsy can significantly impair one’s social space of action and, depending on severity, is recognized as a disability.

The “Origin” of Photosensitivity

Neurophysiological research on photosensitive epilepsy has a cherished origin myth: the Apologia by philosopher Apuleius. Apuleius presented this famous self-defense before the Roman proconsul Claudius Maximus at his trial sometime around 159 CE to fend off charges of magica maleficia. Apuleius had been accused of using sorcery to lure an older woman of standing to marry him, and of enchanting with magical incantations a slave boy and a free woman and thereby causing them to collapse. In one passage of the Apologia, epilepsy serves Apuleius as a tool for sequestering
false allegation from truth, and the domains of magic and divination from medicine. He places “the divine disease,” as epilepsy has been called for millennia, firmly under the purview of healers and philosophers rather than magicians. Crucially, this section of Apuleius’s speech mentions a technical device as a possible stimulus for convulsions: a spinning potter’s wheel whose flickering rotations could supposedly instigate a fit.4 This anecdote is widely thought to be the first historical mention of PSE. The spinning wheel has been mythologized and passed on by generations of epileptologists.5 Clinical literature on photosensitivity in the era of modern epileptology begins with two cases of seizures documented by the British neurologist William Gowers in 1881. Gowers reported seizures provoked by suddenly stepping into bright sunshine and by looking at fire. Photosensitivity in humans is thus rediscovered by modern medicine at the same time as photosensitivity in matter: less than a decade prior, Willoughby Smith and Joseph May had accidentally stumbled upon the photoconductive properties of selenium during their efforts to develop a way to test undersea telegraph cables for electrical faults (Smith 1873). The International Exhibition of Electricity and the first International Electrical Congress were underway in Paris when Gowers’s large monograph on epilepsy appeared in print. Three years later, Paul Gottlieb Nipkow would use selenium cells to patent another spinning wheel mythologized, in its turn, by historians of television—the scanning “electric telescope.” The electrification of the world thus formed a backdrop to photosensitive epilepsy’s nascent transformation into an object of neurological study. Moreover, modern neurology itself took shape in tandem with the brain’s emerging epistemic status as a voltaic organ amenable to manipulation by electricity. And all of these developments were flanked by the emerging research on television and cinema.

4 Apologia 45.5.
5 Panayiotopoulos (2017) has recently objected, however, that the potter’s wheel in Apuleian and Galenic times was solid rather than spoked, and would thus not have produced the intermittent flickering light normally associated with seizures.

The Electroencephalograph

Much like Apuleius’s attempt to separate magic from medicine, late nineteenth and early twentieth century neurology and psychiatry were faced
with the challenge of resolving epistemic and nosological uncertainties surrounding a condition known as hysteroepilepsy. The nineteenth-century French neurologist Jean-Martin Charcot classified hysteroepilepsy as a psychosomatic disorder with the outward symptoms of “true” epilepsy which, however, did not originate from lesions in the brain but from the imagination (Gotman 2012, 162). The historical intertwining of epilepsy with hysteria is long, complex and variegated. After the “rediscovery” of Hippocratic writings in Europe in the fourteenth century, a range of neurological and psychiatric symptoms were imputed to the uterus and, by association, the female reproductive system and sexuality. Noticing the correlation between seizure frequency and the menstrual cycle, gynecologists in the mid-nineteenth century began removing ovaries as a popular treatment for epilepsy, hysteria and other neuropsychiatric disorders in women.6 Despite high mortality rates, oophorectomies and clitoridectomies were practiced well into the early twentieth century (Betts et al. 2001). In the belief that a surfeit of sexuality and masturbation caused seizures, potassium bromide, which induces temporary sexual dysfunction, was used as the first anticonvulsant in both men and women (Stirling 2010). Media-historically, it is intriguing to realize that potassium bromide was also one of the first developing agents used in photography and later also one of the components in the emulsion applied to motion picture film, and thus in a sense supplanted the biological reproduction of life with the chemical reproduction of images. In the second half of the nineteenth century, epilepsy was gradually recognized as a cortical disorder with its seat in the brain rather than the mind or uterus, and slowly discursively untangled from hysteria, madness and other neuropathologies like psychoses and movement disorders (Chaudhary et al. 2011; Magiorkinis et al. 2014; Schäfer 2015). This diagnostic separation, however, by no means followed a linear path. Rejected ideas surrounding the boundary between hysteria and epilepsy—particularly the now recognized influence of affect and emotional state on seizures—would often get resuscitated after decades. Some diagnostic grey areas persisted until as recently as the 1990s (Fenton 1986; Diedrich 2015), and epilepsy remains an evolving, fuzzily defined and somewhat intractable object of medical research.

6 Also note lunaticus, “lunatic” as another term for epileptic that links it to the lunar cycle (Chaudhary et al. 2011).
The slow work of defining and understanding epilepsy and distinguishing it from similar conditions coincided with the formation of what we now recognize as modern neurology. This process is historically inseparable from an ensemble of media-technological devices, media techniques of medical inquiry, and practices of recording, visualizing, collecting and storing data. In the last quarter of the 1800s, the so-called graphic method became intensely popular in physiology and other fields of medicine, and served as a hallmark of scientificity (Borell 1986). Apparatuses like Étienne-Jules Marey’s myographs (Fig. 5.2) extracted information from the body and primed visual culture for an aesthetics of jaggedy and jittery graphs (Cartwright 1995; Iversen 2012; Lomas 2012).

Fig. 5.2 A detail of Marey’s myograph as illustrated in Etienne-Jules Marey, La machine animale: Locomotion terrestre et aérienne (Paris: G. Baillie, 1873), 31

These were the
beginnings of the diagrammatic language in which the later electroencephalograph would speak. Technical media and electrical measurement devices were essential to crystallizing what epilepsy “is,” and remain so until today. The invention and refinement of the electroencephalograph in the 1920s and 1930s caused the most dramatic historical rupture in neurological research. The EEG ultimately made it possible to differentiate epilepsy from other disorders on the basis of seemingly objective electrical data. Superseding the unreliable nineteenth-century diagnostic techniques based on direct observation, the EEG produced inscriptions that were grounded in the brain itself and made it possible to graphically distinguish bodily events whose outward appearance was very similar, such as hysteroid psychogenic fits, “true” epileptic seizures, non-epileptic types of convulsions, syncopes, migraines and others (Cartwright 1995; Diedrich 2015). But the introduction of the EEG did not automatically result in transparent recordings of some unbiased electrobiological reality. Rather, it prompted a vast reconfiguration in how medical and scientific work was performed, and it also brought along a host of new difficulties. The brainwave tracings that an EEG produces required new ways of looking and new techniques of making sense; new media techniques of deciphering, calculating, measuring, filtering, decoding, adjusting. Once the EEG is accepted as a method and medium of medical inquiry, it creates a cycle of technological and interpretive problems (Borck 2005). The issue is that our bodies send mixed signals. The technological difficulty of measuring and inscribing the brain’s activity brings about new hermeneutic problems of understanding what the inscriptions mean. These hermeneutic problems then give rise to yet again new technological concerns regarding amplification, compression, resolution, filtering, data management, storage, and the operation and maintenance of increasingly complex electrotechnical equipment. The data this equipment generates then again require the tricky interpretive work of distinguishing between experimental error, machine noise and actual cerebral activity. The EEG can show “abnormalities,” but it does not disclose which abnormalities in the graph are medically relevant, which might be caused by bad contact of the electrode with the skin, or by unrelated bodily movements like swallowing. Such interpretive difficulties of scientific images have been discussed by media and science and technology scholars extensively (Rasmussen 1993; Rheinberger 2009; Geimer 2010). But in the case of the EEG tracing, the work of understanding goes one step further
than simply needing to distinguish “signal” from “noise.” A medical practitioner attempting to decipher a graph first needs to distinguish meaningful noise from nonsensical noise. The EEG apparatus clearly produces traces of something, but the terms under which the produced “thing” acquires significance are subject to permanent renegotiation. Fourier analysis, the mathematical field that studies the decomposition of complex waves into simpler sinusoids, became neurology’s primary instrument for solving the challenges posed by the EEG. Just as images can be understood as combinations of superimposed waves, so can the complex electric impulses produced by the brain. The methods of Fourier analysis can reveal the repeating frequencies hidden within the disorderly scribbles of an EEG. As we have seen in Chap. 4, extracting the rhythms—the brainwaves’ harmonic content—initially required excruciating manual calculation and relied on a range of physical procedures and material cultural techniques like drawing, tracing and photographic projection. The introduction of automated analyzers in the 1940s represented a revolutionary breakthrough for electroencephalography. In his popular-scientific book The Living Brain, to which I will return at greater length later, the famous neurophysiologist and artificial intelligence pioneer William Grey Walter documented how automated Fourier transforms not only increased the speed of processing EEGs by orders of magnitude—from weeks to seconds—but also led to the discovery of new harmonic brainwave components like the theta rhythm (Walter 1963). Even more curiously, Walter attempted to use analytical techniques to measure the brain’s “versatility.” By using windowing and averaging techniques similar to those now underlying the MP3 format to mask high-frequency compression artifacts, he believed he had discovered a method of separating genius original thinkers from “dull people” based on the harmonic content of their brainwaves. Neurologists using early EEG machines were quite aware of the limitations of their techniques: the narrow frequency responses, the disturbances introduced because of compression, the unintentional attenuation and filtering, phase synchronization difficulties, problems of sampling and resolution, or the inaccuracies introduced by tracing lines by hand with a pencil—a crude but inevitable step of early EEG recording (e.g. Dietsch 1932; Dawson and Walter 1944). Even present-day digital electroencephalography still bears the marks of the protocols developed in response to these earlier material constraints. For instance, the division of the brain wave frequency spectrum into bands (alpha, beta waves and
so on) not only aligns with the limited writing speed of early oscillographs but also reflects the mechanical design of analyzers which had to resonate at pre-selected frequencies to deliver practicable results. EEG software often measures amplitude and temporal resolution per millimeter, a unit traditionally useful for inscriptions made on paper, not on digital screens. While the transition to digital EEG in the 1990s and the emergence of computational neuroscience solved some of the issues of analog machines, it also introduced the same new troubles that digitization tends to cause in most contexts. Neurology was forced to face novel technical terms that had suddenly become crucial to how it creates knowledge: it had to address adequate sampling rates, compression and aliasing, artifacts and interference, bit depths, dynamic ranges, standard and non-standard data formats, compatibility and comparability, and data security and encryption. Like many other disciplines of science and medicine, from the moment epileptology adopted technical media as tools, it trapped itself in a cat-and-mouse game with itself. It began generating increasingly large amounts of data, and simultaneously grappling with the uncertainties of their interpretation.

Audiovisual Media in the Neurological Dispositif

Besides the EEG and mathematical techniques like harmonic analysis, audiovisual media were and remain indispensable to the diagnosis, monitoring and treatment of photosensitive epilepsy. Photography, film and video have played a constitutive role in the visual and material culture of neurology. The motion studies made at the Salpêtrière asylum under Jean-Martin Charcot and elsewhere have been studied at great length, as have the somatic and mental pain that their recording often caused to patients (Cartwright 1995). But while much research has been done on hysteria and epilepsy in general, the specific case of photosensitive epilepsy has been nearly entirely omitted from media scholarship. PSE deserves special historical attention because of its unique relationship with audiovisual media. Cinema is mentioned in association with PSE already in some of the earliest twentieth-century medical sources in 1927 (Robertson 1954). Film was used to record patients exhibiting experimentally provoked light-induced seizures no later than 1932, although the use of film for documenting other seizures is much older, beginning as early as 1905 with Walter Chase’s medical recordings
of epileptics—films later re-used by both Hollis Frampton and Stan Brakhage, two influential American avant-garde filmmakers (Cartwright 1995; Sitney 2008, 338). In 1934, a procedure known as “intermittent photic stimulation” was introduced. This is a method for testing for anomalies in brain activity with the use of flickering light of varying frequencies. Intermittent photic stimulation became widely deployed during the 1940s and 50s in combination with the EEG to diagnose photosensitive epilepsy. It was initially performed with an apparatus contrived out of parts of various media (an automobile headlight and a gramophone player) that somewhat resembled a film projector with a multi-bladed shutter: “an opal glass bowl that was illuminated from behind by a lamp, in front of which a disc with cut-out sectors was rotated” (Panayiotopoulos 2017, n.p.). In 1954, Henri Gastaut, whose laboratory in Marseille was among the primary hubs for the study of PSE, expressed his bewilderment over the lack of epileptological research on cinematographic projection as well as the film projector’s unrealized potential as a method of visual stimulation (Gastaut and Bert 1954).7 Despite Gastaut’s grievances, neurology did, in fact, sustain a tiny cottage industry of peripheral cinematographic and televisual technology. Improving on the glass bowl, William Grey Walter, for example, introduced electronic stroboscopes with variable flicker rates into the experimental setup in 1946. They were made by television manufacturer Scophony. Using this improved device, Walter showed that photoparoxysmal responses (abnormal brain activity in response to light) could be evoked by flicker even in non-epileptics. As the science historian Jimena Canales summarizes: “While previously epilepsy had been understood as a condition affecting only a few individuals who carried the disease, Walter’s research showed how it instead could be ‘latent’ in all individuals to differing degrees” (2011, 237). One of the key realizations emerging out of this research in 1948 was that the brain could be “driven” by light (Hutchison et al. 1958). This means that brain waves can be made to synchronize with the frequencies of flickering light like an optoelectrical circuit.

7 Gastaut’s research is fascinating not least because it presents early neurological theories of film, distinguishing “emotionally neutral” films from those with a capacity to elicit psychic reactions and proposing, for example, measurable brainwave indicators for the notion of “identification” with characters on screen.

This insight, as I will discuss in a moment, would later have a dramatic influence on the
history of experimental film, as many artists would go on to embrace it as a key principle of their work. Moving image technology not only provided the technological means of scientific inquiry, but also vital models to conceptualize the brain with. Walter’s book The Living Brain abounds in media metaphors. He alternately describes the brain and its functions in terms of film production, television recording, cryptanalysis; thought and intelligence are understood as problems of image storage and processing; nerves are likened to leaky undersea telegraph lines; neurodisorders and illnesses are described as “mechanical slips and errors” (Walter 1963, 38). Both Walter and his fellow flicker researcher John Raymond Smythies speculatively pictured the brain’s alpha rhythm in terms of a scanning mechanism as used in television cameras, and hypothesized that flicker-induced hallucinations result from, essentially, incompatibilities in the “frame rates” of the flickering light and the brain. The language of media vitally assisted neurologists’ thinking. In an effort to quantify and make medically legible the elusive specter of epilepsy, neurologists continued constructing ever more esoteric media devices and absorbing various optical machines into their experimental protocols. At Walter’s institute in Bristol, Harold Shipton invented the toposcope. Prefiguring by several decades the video sculptures for which artists like Nam June Paik would later become famous, the toposcope was a box of 22 stacked cathode ray tube screens equipped with a camera attachment that could record all of them simultaneously. Combining closed-circuit television and photography or film, it was used for multichannel EEG monitoring, each tube showing a signal from a different region of the brain. As magnetic tape recording became available during the 1950s, some North American neurologists used tape recorders to store EEG data as sound, “hacking” the devices to record the low frequencies required for EEG. The tapes would then be sent to the Massachusetts Institute of Technology where they could be analyzed with the first digital correlators (Barlow 1997). Many years before computer networks, this allowed recording, analysis and diagnosis to be performed in separate locations. In the late 1960s, it became possible to apply digital Fourier analysis to EEG data to extract spectral content, setting the foundation for today’s quantitative EEG methods. Throughout the 1960s and 70s, filmic techniques like the use of slow-motion film and videographic methods like long-term telemetry became
useful in the clinical separation of hysterical and epileptic patients (Ames 1971; Glaser 1978; Noe and Drazkowski 2009; Reif et al. 2016). The wide availability of video later enabled the formation of so-called epilepsy monitoring units, which one could describe as a sort of medical precursor of Big Brother-style reality TV. These medi(c)al spaces resemble small TV studios in which patients can be continuously surveilled and recorded. Lisa Cartwright (1995) names the monitoring unit as the paradigmatic example of what she calls the “neurological gaze.” One particularly notable media technique of medical observation is the split-screen EEG (Fig. 5.3). Using closed-circuit television, this diagnostic method shows a live video feed of a patient on one half of the television screen, while the other side displays a simultaneous tracing of their brainwaves.

Fig. 5.3 Photograph of a television monitor showing a patient and their EEG in multiple brain regions at the onset of an absence seizure. (Image source: Bowden et al. 1975, 17)

The split-screen unfolds the interiority of the seizing brain vis-à-vis
its bodily exteriority. It allows the neurological gaze to link, in real time, the outward tremors of the body to its inward electrical traces. As this and similar media practices gradually became standardized, they developed profound epistemic effects. The repeated, standardized measurements and analyses made it possible to begin distinguishing between normal and “aberrant” neurological responses and between different forms of epilepsies. Together with the electroencephalograph, media of observation, storage and comparison allowed epileptology to speak to itself, encouraging the discipline to develop a common vocabulary and form institutions like the Terminology Committee of the International Federation for EEG and Clinical Neurophysiology. More fundamentally, it is only through the use of the many electrical, optical and screen-based devices that the brain itself can be imaged and imagined as an unruly, malfunctioning electrical machine whose secret inner workings can be compelled into the open, diagrammatically. But the complex media assemblages that sprang up around epilepsy research throughout the twentieth century do not simply record but significantly—sometimes harmfully and even fatally—influence the observed patients. The epilepsy monitoring unit, for example, is useful only when it actually records seizures. The surveilling regime therefore encourages their provocation. As an article studying these units recommends: “To increase the likelihood of capturing events in a timely fashion, it is standard practice to use activating procedures such as reduction of antiepileptic drugs, sleep deprivation, hyperventilation, and photic stimulation” (Noe and Drazkowski 2009, 495). Long-term epilepsy monitoring can lead to better treatment (Lagerlund et al. 1996) but at the same time, intentionally increasing the frequency of seizures can also cause adverse effects like psychosis. In what was only one publicized case among many (because they generally remain undisclosed to the public due to settlements), a patient in Colorado died from suffocation in 2008 after experiencing a seizure in such a unit (Noe and Drazkowski 2009). As this condensed archaeology of neurological media and knowledge- formation processes shows, epilepsy, as an object of study intimately tied to technological forms of inscription, makes medicine operate like a media archive. Over time, the increasing complexity of diagnostic methods generates increasingly large amounts of electrophysiological data in a variety of formats. These data require suitable storage, management and metadata, and formats which in turn require standardization in order to be processed, analyzed, exchanged and understood. To extract meaningful
statistical knowledge out of the individual tracings stored in clinical and experimental repositories, routines of addressing and tabulation have to be established which also require increased clerical and administrative work. Walter himself perhaps described it best: “the laboratory’s functions of memory and association begin to approximate crudely to those of the brain itself” (1963, 89). The epileptological archives, clinics and laboratories teem with specialized devices, yet they also contain more banal objects that did not originate in the medical dispositif. One of them is the 100 Hz analog television receiver, an inconspicuous but media-historically intriguing relic. In the mid-1990s, cathode ray tube televisions were getting larger, brighter and, because they were increasingly used as computer and gaming displays, closer. In other words, the television screen began to stand out more, literally inhabiting more territory not only in the rooms it shared with us, but, as the area it occupied on the retina grew larger, also in the space it arrogated inside of our bodies. The intrusion of brighter and closer images into the eye pushed existing television standards to their limits. As screens grew, failures of the image resulting from compression, such as flicker and interlacing jitter, became more apparent to the general population. To counteract such disturbances, television manufacturers began releasing analog television receivers with higher frame rates onto the market in the mid-1990s, with reduced image flicker as a primary selling point. The high frame rate analog TV is somewhat of a fringe object, a last vestige of household applications for cathode ray tubes that appeared just before plasma and LCD televisions began overtaking them. This now obsolete piece of technology (which still finds some enthusiasts in the retro videogaming community) was targeted toward consumers, but it also entered neurological research as both object and instrument of study. As an affordable and commonplace means of generating moving images with higher frame rates, these television sets served a crucial role in clarifying some of the pathophysiological mechanisms of PSE. The high frame rate consumer TV confirmed, for example, that one half to two thirds of patients are significantly less susceptible to flashing at 100 Hz compared to 50 Hz (Fylan and Harding 1997; Kasteleijn-Nolst Trenité et al. 2002). By helping to affix numerical values to the relationship between flicker and brain response, the television set, along with all the other light- emitting, flickering and measuring devices, made photosensitive epilepsy
more coherent as an epistemic object of medical science. The importance of television in epileptology, however, goes far beyond this.
The Televisual Condition

Isolated reports of seizures caused by strong light appeared already in the late nineteenth century, but the particularly forceful effect of flickering light was a later discovery. Photosensitive epilepsy really begins to take shape as an “epistemic thing” in neurology (to borrow a term from Hans-Jörg Rheinberger 2009) after the 1950s with the rise of television. As TV grew into its post-war popularity, both the electroclinical characteristics of seizures and their most common technological cause suddenly gained visibility. A few cases of fits provoked by a television screen were documented throughout the 1950s, still considered highly unusual. Yet with the start of the following decade, epileptological journals experienced a veritable explosion of literature on the subject. “Television epilepsy” appeared as the principal form of PSE. Indeed, the literature evidences just how firmly embedded television had become in culture and daily family life already at this point. Thus wrote one J.N. Garvie from an English hospital in 1961: “I suggested to the parents that as television seemed to precipitate seizures it would be better for their children not to watch it. This, of course, provoked expressions of incredulous horror, and perhaps in these days the suggestion is both impracticable and absurd” (Fischer-Williams et al. 1961, 395). Almost universally, the reported cause of seizures in this early period was that the affected person stepped close to the screen in order to adjust an image that had started flickering due to loss of signal synchrony. An example:

In January, 1960, the family acquired a new large-screen television set, which failed to function properly. The screen was intensely illuminated, and ‘thick black bars passed from top to bottom.’ The patient knelt beside the set to manipulate the controls, her face within six inches of the screen. She remembers manipulating the switches and feeling intensely distressed, and then she abruptly lost consciousness. Her husband witnessed a major epileptic seizure. (Pallis and Louis 1961, 188)8

8 For some similar cases, see, among numerous others, Klapetek 1959; Fischer-Williams et al. 1961; Mawdsley 1961; Pantelakis et al. 1962; Charlton and Hoefer 1964; Jeavons and Harding 1970; Andermann 1971.
It is in moments like this—those plentiful and unexceptional moments of failure when media devices suddenly act up, stop or misbehave—that technology and infrastructure articulate themselves as queer, as the media scholar Robert Payne recently argued. This capriciousness of media has been called queer materiality (Suárez 2014; Payne 2018). What strikes me about this case report is how the queer materiality of one person’s encounter with a television screen reveals much larger, complexly interlocked assemblages of media spectatorship. The machine malfunctions and the disturbance in its image causes an intense, unanticipated bodily disturbance in the form of a seizure. All of this is contingent upon a historically specific viewing practice and a particular moment in the timeline of television technology: because the notion of remote control is not widespread yet and the image needs to be synced manually, the viewer is coaxed into action, forced to modify her proxemic relationship to the TV and manipulate the image directly with the dial. Compression lies at the very heart of all this because it is the reason why TV images flicker in the first place. Fairly early on, neurologists realized that the incidence of sensitivity to flicker seemed much higher in certain regions of the world than in others. Astoundingly, the distribution follows television standards. In countries that used or still use PAL or SECAM, two of the three major analog video encoding standards, seizures were three times more common than in regions that used the NTSC standard, such as all of North America (Bower 1963; Charlton and Hoefer 1964). The reason is that these standards compress images differently. PAL television systems commonly show 50 interlaced images per second, which means they are slightly more temporally compressed and therefore flicker just a little more intensely than the 60 flashes per second used in NTSC. About half of people with PSE are sensitive to 50 flashes per second. Above that value, responsiveness quickly tapers off with increasing flicker frequency (Wilkins et al. 1979; da Silva and Leal 2017). But at close distance to the screen, the striped pattern produced by interlacing becomes apparent. The interlaced lines alternate at half the speed, and the low frequency in PAL countries—25 fps—lies in the most epileptogenic range, activating neurological responses in 75% of all photosensitive patients (Wilkins et al. 1979; Mayr et al. 1987; Ricci
et al. 1998).9 The combination of two video compression techniques, frame rates and interlacing, compounds the harmful effects. This rather startling realization bears repeating. Video compression standards, with their underlying dependence on electrical infrastructure, are inculpated in the incidence of epileptic seizures around the world. This is a remarkable instance of how media—and technological standards more broadly—possess an uncanny ability to “produce” disability. There have always been photosensitive people, of course, but photosensitivity did not have much of a coherent presence in neurology because it lacked a media environment in which it could systematically occur and thus be studied. Its contours as a sensible medical concept were delineated with the introduction of technological stimuli like flickering televisions into the household and into the vicinity of the perceiving body. Photosensitivity became predominantly known because of television; its position in modern medicine was partially carved out because of a certain notion of bodily ability inadvertently inscribed in video compression standards. PSE is thus somewhat unique as a very tangible example of a process that Mara Mills and Jonathan Sterne (2017) have called dismediation: disability constituting media, and media constituting disability. We could think of photosensitive epilepsy as a disorder of both the body and of technology that emerges primarily as an effect of our media environment and infrastructure. A seizure provoked by moving images can therefore be read not just as an instance of queer materiality, but as an embodied inscription of history. It is how the history of electrification and technological standards manifests in the photosensitive body. By directing our attention to the tangled ways in which the bodily and the technological co-constitute one another, photosensitive epilepsy can help us think through their relationship with more precision. Studying photosensitivity, specifically, opens up a series of new questions about the technicity of the body and the mediality of its senses. A seizure caused by television may seem like an anomalous limit case of perception, a severe bodily event that is paradoxically both highly mediated yet dangerously immediate. But perhaps such events are not exceptional but, on the contrary, at the very center of what constitutes perception. They remind us that the fundamental operations at play in any encounter between human bodies and media are both radically technical and radically somatic.

9 In 1964, 55 cases were reported in Europe versus only 3 in the United States (Charlton and Hoefer 1964).

There
are no “natural” human bodies. We all are shaped by standards. A seizure in susceptible people is at the end of a long path of dizziness, vertigo, nausea, headaches and other corporeal disruptions and forms of intentional and unintentional violence that moving images can evoke in our bodies.

Flicker as a Technological Standard

At the 1916 incorporation meeting of the Society of Motion Picture Engineers (SMPE), today one of the pre-eminent organizations for cinema and video standards, the Secretary of the US National Bureau of Standards, Henry Hubbard, gave an address.

Motion picture engineering presents a splendid field for standardization. The need is obvious, for your machines and films travel to all parts of the world, and the demands of human safety, human vision, and comfort are common to all men [sic] in all lands. An ideal picture presentation for one is an ideal for others, since human nature is much the same the world over, and since mother nature standardized the human eye ages ago. (SMPE 1916, 18)
How, then, is it that a flickering image—a signal formatted into homogeneous shape by standardization bodies like the SMPE—can extend to some of us in ways that are comfortable and enjoyable, and assault others in ways that are dangerous, even life-threatening? Adrian Mackenzie offers an explanation: it is because “[e]yes and ears do not have universal, timeless physiological properties. They have media-historical habits” (2013, 145). Counter to Hubbard’s notion of universal and natural standards, where can we situate the very “non-standard” experience of a seizure caused by moving images? The history of standards relating to frame rates and flicker strikes one as having a drawn-out gestation period lasting roughly a century, followed by a rapid consolidation within less than a decade around 1930. The nineteenth century was witness to an extensive medicalization of vision. This process manifested, on the one hand, in the invention of myriads of pre-cinematic optical toys and, on the other, in scientific experimentation that often involved bodily harm both self-inflicted and inflicted on others (Crary 1992; Geiger 2003; Canales 2011; Ladewig 2011; Strauven 2011). From the second quarter of the nineteenth century onward, determining the speed at which a sequence of still images begins to look like movement became a central conundrum for those who studied human vision.
Scientists, inventors and performer-entertainers grappled with the impossibility of coherently and rigorously quantifying the human visual system’s perceptive abilities (Nichols and Lederman 1980; Gunning 2011). Research into afterimages, the persistence of vision or the perception of movement proliferated, but continuously yielded contradictory results. The gamut of image speeds considered necessary to provide the illusion of smooth movement spanned from as low as eight per second on the optimistic side to as high as 50 on the ambitious side. Nobody could agree how much temporal compression was enough, or how much was too much. I would argue, however, that these uncertainties and contradictions were not a hindrance but a driving force. They opened up a field of co-presence for multiple competing projection technologies, display devices, compression methods and viewing practices. In the long early period of moving image media, difference was the norm, and instead of standards there was an abundance of formats. Because of this multitude of approaches to moving images, the bodily experience of flicker slowly became a global, commonplace fact of life throughout the twentieth century. The impulse to unify frame rates intensified only throughout the 1920s (Jancovic 2016). Then, suddenly and at roughly the same time, frame rates in both cinema and television were standardized. In 1926, the now somewhat glorified speed of 24 frames per second was chosen for the Movietone sound-on-film format. It spread to other projection formats mainly for pragmatic reasons like patent agreements, rather than any concrete sensory benefits (Eyman 1997). Ratified by the SMPE in 1931, the standard quickly propagated globally and continues to dominate cinema productions to this day. In television, the image frequency was aligned with the two common utility frequencies, 50 and 60 Hz—the same two values along which the incidence of photosensitive seizures diverges globally. As noted in Chap. 3, the choice for these two numbers was partly based on electrotechnical convenience, partly on aesthetic judgment, and partly arbitrary. The arbitrariness of the utility frequencies becomes apparent when we leave the media environments we are accustomed to and face other standards. To many people who grew up watching television with the slightly higher rate in NTSC regions like North America, analog TV in PAL countries seems to flicker noticeably. Douglas Trumbull, a film engineer, inventor and the special effects supervisor of Stanley Kubrick’s 2001: A Space Odyssey, once described the flicker of PAL television as “physically unbearable” (Great Frame Rate Debate—Part 2—Schubin, Trumbull 2012).
Trumbull’s sentiment may sound just a little dramatic, but it indirectly hints at a thought that Sara Ahmed has elaborated upon at length. “[S]pace itself is sensational: it is a matter of how things make their impression as being here or there, on this side or that side of a dividing line […]” (Ahmed 2006, 14). Something as insipid and innocuous as a technical norm for encoding and compressing television signals can create spaces that touch us affectively and physically. The utility frequency standards effectively create a line that apportions the world into enormous areas with diverging historical directions. The direction is, to some degree, random. But it is also political and embodied. It is political because it, obviously, has to do with all the ordinary geopolitics of technological standards: with spheres of influence and modes of alignment, with domestic and foreign policy and international trade alliances and rivalries. Yet it is political also on a sensory level, because the standards produce different visual and aural sensations and orient us towards experiencing some of them as “normal” and others as not so. The standards make lightbulbs and televisions flicker differently in different places around the world. They institutionalize technological differences as different patterns of perceptual habit. By historical happenstance, these differences and lines of division even acquire medical significance that manifests as more frequent seizures. Entering a space oriented along different historical lines means that we might experience that space as being askew or aslant, as vibrating with a queer materiality. In such moments, the technological orientation of spaces, made tangible through electrical infrastructure flickering on television receivers, can feel queerly at odds with our bodies to the point of being “physically unbearable.” Even after nearly two centuries, the debate on how many “fps” are necessary or desirable remains inconclusive. Recent experiments with higher frame rate cinema and television have produced contentious results. Peter Jackson’s film adaptations of The Hobbit (2012–2014), released as a trilogy of “high frame rate” 3D films, made many people nauseous and sparked oddly heated discussions about the nature of cinema and vision. The higher filming and projection rate meant that the film was less temporally compressed. In theory, the technology was supposed to magnify a sense of realism and immersion. Instead, critics and audiences struggled to come to terms with its perceptual effects. Online commentators did not mince their words when they derided it as “a horrible way to shoot and exhibit a film” (Horton 2012), “visually repugnant” (Laforet 2012), “kitsch and alienating” (Macnab 2012). Iterations of The Hobbit were said
to resemble “meth-head hallucinations” (Rocchi 2012) and appear to be veiled in a “sickly sheen of fakeness” (Collin 2012). These sensory controversies—in Ahmed’s words, “the disorientation of encountering the world differently” (2006, 20)—hinged on the sole factor of the frame rate, and they show how ingrained the resistance against perceptual transgressions is in mainstream culture. Beyond their technical dimension as a compression standard and their phenomenological dimension as a felt and perceived quality of the moving image, frame rates and flicker thus also function as discursive surfaces along which our culture negotiates abstract aesthetic qualities like “realism” and attempts to rationalize them into numerical values. Conversely, this technical property of visual media also becomes a model for understanding vision, perception and reality itself. Bodily operations like seeing, thinking, dreaming and even life as such become thinkable in terms of frames per second. The Internet is brimming with esoteric discussions on “how many frames per second does real life have?” or “if reality were frame-rate based, how could we detect it?” Just like the media metaphors used by William Grey Walter to explain the functioning of the brain, compression technology provides us with a language to make sense of our lived realities.

Pokémon Panic and the First Flicker Norms in Broadcasting

Compression standards not only produce cases of photosensitive seizures, but are also produced by them. Dismediation is not a one-way street; the body reciprocates and also acts as an agent of media history. Highly mediatized cases of television seizures have led to the development of various broadcasting guidelines for flashing images that permanently affect televisual aesthetics. In 1993, a TV commercial caused three documented epileptic seizures in the United Kingdom. In response, the country introduced the very first Guidelines for Flashing Images and Regular Patterns. With some exceptions for live news coverage, these became part of the licensing conditions for broadcasters in the UK. The largest and most notorious incident involving PSE occurred in Japan. On December 16, 1997, a four-second sequence of flashing red and blue lights in the 38th episode of the animated series Pokémon caused 685 hospital admissions, out of which 560 were proven to be seizures. Of those patients, mostly children, 76% had had no previous seizure history (Harding and Takahashi 2004). Japanese news stations reported on the
incident in the evening, replaying the offending image sequence and thereby inciting a second wave of complications. Fueled by dramatic reports in print media the following morning and by the resulting sweeping panic and outrage, the incident reached epidemic proportions, with more than 12,000 children reporting symptoms the next day (Morishita 2007). Weekly magazines churned out opinion pieces and interviews, sounding alarm at the suspected dangers of brainwashing and the like. Public discourse, at that time in Japan already blisteringly critical of the Pokémon franchise, turned not only against the series but against animation at large, paying far less attention to the extraordinary technological circumstances of the incident. The epileptogenic potential of animated cartoons had been pointed out already in 1963 (Bower 1963), so it is almost surprising that an incident of this scale had not occurred earlier. In any case, the “Pokémon panic” provoked a colossal institutional response from an array of Japanese ministries, broadcasters, industry associations and the production company and game developer Nintendo. The resulting initiatives led to further clinical research, and eventually also to the adoption of rules for flashing images and patterns. These new norms have had extensive effects on the production and aesthetics of Japanese animated programs and have also led to a revision of the British guidelines (T. Takahashi et al. 1999a; Koide 2013, 69). Moreover, the Pokémon incident also had a massive effect on epileptology itself: video recordings of the episode in some cases became instrumental in neurological research and led to a rethinking of the testing protocol for photic stimulation (Niijima et al. 1998). Some studies have even found evidence that because of the clearer picture they deliver, the use of community antennas in rural areas of Japan may have led to more cases than individual television antennas in cities (Takahashi Y. et al. 1999b). Accounts of the influence of image quality and resolution on seizures are somewhat contradictory. Earlier research on television-induced seizures in the 1970s suggested, contrarily, that poor signal reception increases the likelihood of seizures:

Runwell Hospital lies in a valley where signal strength is low. Aberrations such as line jitter are accepted as normal by domestic viewers but are not seen on studio-quality equipment and may account for the discrepancy between the high incidence of TV sensitivity reported by us and by others, and the failure of Gastaut et al. to show television sensitivity in the studios of the French national broadcasting organization. (Stefánsson et al. 1977)
These conflicting reports could possibly be explained by improvements in cathode ray tube and deflection oscillator technology, the transition to the ultra-high frequency spectrum and its increased resolution, or other image improvements in the intervening two decades. But the etiological ambiguities notwithstanding, this neurological research has immense media-historical value. In bringing forward the entanglements between bodies, media infrastructure and the environment, the debates neurologists were having about photosensitive epilepsy foreshadowed many of the issues that media theory would later pick up on. The medical reports dealing with the Pokémon panic and other cases show us the very specific ways in which the topography and climate of the Earth, the infrastructure built across it and the signals passing through it all interact with—and are, in a sense, inscribed upon—the bodily experience of mundane media practices like watching television. The micro of perception and the macro of infrastructure inflect one another. Each depends on where we are, which spaces we inhabit and what directions we take. Together, the Japanese and British guidelines became the basis for the International Telecommunication Union’s 2005 recommendation BT.1702. Only a few countries have implemented it in their national legislation, but it is followed voluntarily by some major television networks around the world. By 2004, software-based automatic flash pattern analyzers had been developed and are now common in quality control workflows in the broadcasting and post-production industries (a simplified sketch of what such an automated check involves appears at the end of this section). Despite these precautions, flickering images regularly escape testing. This happened somewhat infamously in 2007, when a promotional video for the 2012 London Olympics, which had not been checked for compliance, caused 30 reported seizures in the UK. A more recent case occurred in 2015 on an episode of the talent show The Voice UK, in which studio lighting had been found to flash unacceptably during rehearsal but was left uncorrected, and later led to a complaint. The first documented photosensitivity case involving videogames—“space-invader epilepsy”—occurred in 1981 (Shoja et al. 2007), and after a series of lawsuits in the early 1990s, computer games began carrying
warning labels on the packaging and at the beginning of the game. But content itself is rarely screened or adjusted.10 Television and cinema flicker standards largely solidified in the 1930s, but the exploration of the relationship between temporal compression and the human visual system never ceased. Douglas Trumbull, the film engineer mentioned earlier, offers us another good example. Trumbull was also the inventor of Showscan, an unsuccessful high frame rate photochemical film format. Convinced that affective experience could be directly manipulated by projection speed, Trumbull was experimenting with high projection rates in the 1970s in order to elicit the strongest possible somatic response from cinema viewers.

We had done laboratory tests to see the impact of high-frame-rate images on viewers. Viewers were shown identical films shot and projected at 24, 36, 48, 60, 66 and 72 fps, and all of them were monitored with electromyogram, electroencephalogram, galvanic skin response and electrocardiogram. The results were conclusive that the 60 fps profoundly increased the viewers’ visual stimulation. (Trumbull cited in Kaufman 2011, n.p.)
In this experimental cine-medical assemblage, the line between producing knowledge, increasing viewing pleasure and inflicting discomfort is hazy: Trumbull claims that frame rates peaking at 60 frames per second provoked such intense reactions that some viewers became sick and had to leave the laboratory. Sara Ahmed suggests that it is in such moments of nausea, discomfort or pain that we become aware of objects as having a specific form (2006, 48). In these queer moments of failure, the universality of standards begins to crack, and the subtle forms of violence contained within peek out.
10 A notable exception is video game publisher Ubisoft, which made a public commitment to prevent material likely to cause harm after a single Nintendo DS-related incident in 2007. The release of the game WipEout, distributed by Sony, was delayed in 2008 following a failed photosensitivity test, suggesting that voluntary screenings are perhaps not entirely unheard of in the videogame industry.
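What such an automated flash check involves can be sketched in a few lines of code. The following is a deliberately simplified illustration, not the procedure actually used by industry testers or defined in ITU-R BT.1702: the real guidelines specify flashes in terms of luminance, opposing transitions and the share of the screen area affected, whereas this sketch only counts large frame-to-frame changes in mean luminance and flags more than three flashes in any one-second window, the most widely cited limit from the broadcast guidance. The threshold value and the function name are mine, not drawn from any standard.

```python
def count_flashes(luminance, fps, delta=0.1):
    """Return the worst flash count found in any one-second window.

    `luminance` holds one mean luminance value per frame (relative
    luminance, 0.0 to 1.0); `delta` is an illustrative threshold above
    which a frame-to-frame change counts as a transition.
    """
    # A frame-to-frame change larger than the threshold is a transition;
    # a flash is a pair of opposing transitions, so two transitions
    # correspond roughly to one flash.
    transitions = [
        1 if abs(b - a) >= delta else 0
        for a, b in zip(luminance, luminance[1:])
    ]
    window = int(fps)  # number of frame-to-frame steps in about one second
    worst = 0
    for start in range(max(1, len(transitions) - window + 1)):
        flashes = sum(transitions[start:start + window]) // 2
        worst = max(worst, flashes)
    return worst


# Example: 25 fps material alternating between black and white every frame
# (a 12.5 Hz flicker) vastly exceeds a three-flashes-per-second limit.
clip = [0.0, 1.0] * 50              # 100 frames of full-contrast flicker
print(count_flashes(clip, fps=25))  # prints 12, so the clip would be flagged
```

A production tool would, among other things, also measure how much of the screen area is flashing and whether the material contains provocative regular patterns; the point of the sketch is only that the guideline reduces a bodily hazard to a small piece of arithmetic performed on every frame.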
Neurology and the Filmic Avant-garde

Photosensitive epilepsy has left a lasting mark on the history of cinema and audiovisual art. Douglas Trumbull’s experiments are only one of many instances of film technology and technique drawing on medical protocols and imitating the neurological gaze. Let us briefly return to the publication of William Grey Walter’s The Living Brain in 1953. The Living Brain is a collection of essays about the evolution of the brain, the development of electroencephalography, and Walter’s own career with robotics, artificial intelligence and epilepsy research. It is hard to overstate the influence this book has had on the arts, literature, film and music scene in 1960s Europe and North America (Geiger 2003). Walter was already a prominent figure in epilepsy research, but his popular-scientific account introduced photosensitive epilepsy and flicker science to a wide audience. The Living Brain supplied a burgeoning American and European avant-garde with a potent whiff of scientificity, which artists readily transformed into an entire aesthetic program. It was thanks to Walter that PSE became a central point of reference in experimental filmmaking. Multiple artists, such as Paul Sharits—famous for his flicker films and installation works—directly quote and reference Walter throughout their writings. Walter’s book inspired the artist Brion Gysin and programmer Ian Sommerville to construct the “Dreamachine,” a stroboscopic device that became popular with the Beat Generation as a drug-free method of indulging in hallucinations. Although forged decades ago, the bond between sciences of the brain and film art still lingers on in audiovisual culture in the form of para-cinematic objects like seizure warnings. Compression has, again, been a key element in this relationship. Walter’s neurological language influenced how artists like Tony Conrad conceptualized new forms of compressing and condensing film, and the experiments described in The Living Brain provoked the same artists to deploy flicker, a compression artifact, as a style of filmmaking. One of the most famous showpieces of such avant-garde investigations into the materiality of cinema, Tony Conrad’s The Flicker, merits a moment of attention.

A Queer History of Flicker

Consisting of only black and white frames, Tony Conrad’s 30-minute stroboscopic 16 mm film The Flicker (1966) is one of the famous
representatives of what are called structural films, flicker films or discontinuous films, a category of experimental film and video works that conceive of aesthetic experience as a neurophysiological or even mathematical process (Sitney 2002). Attesting both to its impact at the time of release and its enduring relevance, The Flicker continues to be analyzed and (somewhat less often) screened. Just like it, many of the experimental films made after the 1950s find immediate ancestors in neurological diagnostic techniques like intermittent photic stimulation. That an interest in neuropsychological phenomena informed Conrad’s work at this early stage in his career in the mid-1960s is well-known. The Living Brain, after being reprinted in 1963, was likely part of Conrad’s course readings while he was studying mathematics at Harvard (Geiger 2002). Through the book, Conrad probably became aware of Walter’s pioneering use of stroboscopes and his finding that flicker could provoke EEG abnormalities in people both with and without epilepsy. Although it consists only of rapidly alternating solid black and white frames, The Flicker gives its viewers plenty to see, because the film tends to provoke hallucinations. The hallucinatory images viewers often see are not visibly located anywhere in the frame itself, but are rather “compressed” within the flicker frequency. One way of understanding Conrad’s film is in terms of the mathematical principles of harmonic analysis, which at the time of its making were being applied to early digital image compression. Conrad, who, after all, was a mathematician and programmer, followed principles of Fourier analysis in making The Flicker. As he mentioned in a 1966 interview with Jonas Mekas, “I was working within a form of light that is broken down not into areas or into colors but into frequencies” (cited in Mekas 2016, 242). Much like transform-based compression codecs, The Flicker thus avails itself of the knowledge that much of human perception can be expressed mathematically as an oscillation. It exploits expectations about the psychophysical functioning of the human visual system to format “content” into a binary sequence of alternating monochromatic frames. The mathematical and musical provenance of The Flicker, like that of works by Conrad’s contemporaries Sharits, Nam June Paik and others, is well-documented (e.g. Windhausen 2008; Blom 2016). But despite all of the different takes on the film, it has seen astonishingly little consideration from the vantage point of queer studies. This is surprising, because The Flicker is, in many senses of the word, a very queer film.
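Before turning to that history, Conrad’s remark about working with frequencies rather than areas or colors can be made slightly more concrete with a textbook identity from Fourier analysis, offered here purely as an illustration and not as notation Conrad himself used. A full-contrast flicker that alternates between black (0) and white (1) at rate f is, considered as a function of time, a square wave, and its Fourier series is

\[
s(t) \;=\; \frac{1}{2} \;+\; \frac{2}{\pi}\sum_{k=0}^{\infty}\frac{\sin\big(2\pi(2k+1)f\,t\big)}{2k+1},
\]

so even the simplest alternation of frames carries not only the fundamental frequency f but an entire series of odd harmonics. Grouping frames into longer runs of black or white lowers the fundamental and redistributes energy among these components, which is one way of reading Conrad’s claim that he was composing with light broken down into frequencies.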
When John Geiger asked Conrad to recall his inspiration for the work, he recounted a series of disparate episodes. After having first mentioned his neurology classes at Harvard, Conrad proceeded with the recollection of an event that took place several years later, on March 5, 1963, in his New York apartment. The episode is worth quoting at some length here.

Mario [Montez] came over one evening and was in drag and Jack [Smith] became overstimulated and dimmed the lights and put on some funky, or should I say moldy, records, and in a frantic and desperate effort to put Mario on the silver screen, grabbed an antiquated projector, a 16 mm projector which he had found somewhere and pointed the pathetic instrument at Mario, to glimmer in the lamp of the machine. At this point I have to add that the projector I'm talking about was non-functional. It was a silent film projector, it had no lens, and about all it did have was variable speed. And it turned out it flickered and flashed. I thought to myself, I wonder if this flickering light could be anything like I studied in school, and turned the control for the frame rate down to as low as it would go, and to everyone's astonishment, Mario's sequins began to glimmer and shake in the shimmering light, and every highlight of the lipstick and makeup began to become unreal, and everything glowed with an unearthly ambiance. (Cited in Geiger 2002, n.p.)
A course in neurophysiology, a drag performance, and a broken variable-speed projector. This is a highly idiosyncratic aggregate of memories. Despite Conrad's sustained insistence on bringing them up in interviews and articles, the drag performance and broken projector are prone to being forgotten and excised from analyses, reviews, and theories of The Flicker.11 But they are anything but peripheral to its history. It is the profound immersion in a queer space and an unsuspected encounter with queer materiality that, on the spur of the moment, gave birth to The Flicker. The film scholar Marc Siegel (2017) has proposed that queerness in audiovisual culture resides not simply in forms of representation and desire, but in inventive aesthetic practices. The Flicker is not commonly classified as "queer cinema," but perhaps it should be. It was intensely part of a network of queer subversion, transgression, gender non-conformity and other techniques of manipulating consciousness and perception.

11 Bridget Crone (2017) is among the few who fully acknowledge their importance in the film's making.
Its creation was accompanied by queer people and dysfunctional objects, and by the many people with epilepsy who had taken part in the experiments that inspired it. Any account of The Flicker's history—and any history of structural cinema that includes it—is incomplete if it does not address the "abnormal" experience of a seizure, the queerness of Mario Montez's sequin dress and the queer materiality of the malfunctioning film device Jack Smith used to illuminate Montez's performance.

Lisa Cartwright has hypothesized that there exists a particular regime of perception distributed across "an unlikely range and mix of institutions and practices, including the hospital, the popular cinema film, the scientific experiment, and the modernist artwork" (Cartwright 1995, xiii). Hardly anything could be more representative of this nexus of relationships than the epilepsy warning. The epilepsy warning is a common but unassuming object at the margin of visual culture that indirectly stands in for the many forms of violence that moving images can be complicit with. The Flicker is possibly the first film to feature an epilepsy warning (Fig. 5.4). As is corroborated by interviews, Conrad was aware of the mechanisms of light-induced seizures; in fact, he envisioned the film not only as a visual experiment but also as a diagnostic tool. The Flicker therefore offers a suitable starting point for a closer investigation of the role of the epilepsy warning in the history of optical aggressions.

Conrad's disclaimer warns viewers of the hazards of watching the film and waives responsibility for any injuries. It persists on the screen for nearly three minutes, rhythmically constituting an integral part of the film and serving as a translation of Conrad's musical investigations of temporality into visual art (MacDonald 2006; Joseph 2008). Its duration gives the audience, at least those viewers who are already aware of their condition, enough time to leave the screening. Seizure disclaimers and liability waivers have since become common in visual culture. We encounter them in video games, television programs and music videos. But most contemporary warnings are decidedly paratextual and appear very briefly. Two well-known examples are the music videos for Nine Inch Nails' "Came Back Haunted" (2013) and Jay-Z and Kanye West's "Ni**as in Paris" (2011), both of which open with epilepsy warnings that ask viewers to exercise "discretion." Even though such warnings are less common in films, cinema, especially avant-garde and art house cinema, remains a close point of reference for other media: the former music video was directed by David Lynch, and the latter inspired by the opening sequence of Enter the Void, directed by Gaspar Noé
(2009)—both directors whose work frequently features violence, especially against women. Besides their apotropaic role in warding off lawsuits, the function of epilepsy warnings is ambiguous depending on whom we take the "viewer" to be. Given that the vast majority of first seizures happen to people unaware of their photosensitivity while they are watching television or playing video games, these disclaimers appear more performative and perfunctory than protective. They do not primarily serve those disabled viewers they purport to address, but rather ostentatiously stage the images they precede as visually transgressive—and therefore also seductive. As an instruction on what kinds of interactions with the images should occur, the epilepsy warning redirects the responsibility for preventive action towards the photosensitive viewer herself. The "discretion" that susceptible people are being asked to exercise is, in fact, an absurd and impossible kind of labor, as it demands the one thing an image can't ask us to do: to not look at it.

Fig. 5.4 Seizure warning in Tony Conrad's The Flicker. (Image courtesy of the Tony Conrad Estate and Greene Naftali, New York)
Bodies on Display

It is useful to consider The Flicker alongside another famous representative of structural cinema. Conrad's flickering film composition technique and Douglas Trumbull's medical assemblage converge in Epileptic Seizure Comparison (1976), a two-screen installation and arguably one of Paul Sharits's most aggressive films (Fig. 5.5).

Fig. 5.5 A portion of the 16 mm strip of Epileptic Seizure Comparison. (Image courtesy of the Estate of Paul Sharits and Greene Naftali, New York)

Executed as a 30-minute loop of two 16 mm projections placed atop one another and surrounded by metallic walls, the piece shows slow-motion extracts from medical recordings of epilepsy patients during electrically and photically induced seizures, interspersed with flickering frames of solid color. The four-channel soundtrack
consists of cut up and stammered patients’ grunts and a noisy synthesizer simulation of brain wave patterns. According to Sharits’s artist statements, the film wants to “allow the viewer to move beyond mere voyeurism and actually enter into the convulsive state, to allow a deeper empathy for the condition and to also, hopefully, experience the ecstatic aspect of such paroxysm” (Sharits 1989, 436). When he speaks of “the majestic potentials of convulsive seizure,” Sharits (1978) is directly quoting Walter. The sincerity of Sharits’s engagement with epilepsy is not in question; photosensitive people can indeed at times experience seizures with positive and pleasurable effects, as we will soon see in the next chapter. However, Epileptic Seizure Comparison abstracts the experience of a seizure away from the concrete social and technological system of disabilities it is entangled with and reduces it to a set of assaultive visual modes of address. In fact, the film follows a long tradition of putting convulsing bodies on display to amuse or frighten; a tradition that, as mentioned, has accompanied the history of the moving image at least since the first recordings of epileptics made in 1905. But by 1905, audiences in many regions of the world were already viscerally familiar with seeing glitches of the body—tics, jerks, twitches, contractions, grimaces and seizures—on screen and on stage. As Rae Beth Gordon has demonstrated astutely, the performance styles of French cabaret and early film comedy called upon epilepsy and hysteria (Gordon 2001). The involuntary movements of patients and inmates, especially those of women, at the hospitals and asylums of Salpêtrière and Bicêtre permeated far beyond their walls and served as inspiration for the frenetic movements often encountered in early cinema. Gordon confirms Cartwright’s observation that there had been a high transitivity between nineteenth-century visual medical techniques and popular visual spectacle. As early as the 1870s, Gordon argues, a hysteroepileptic idiom can be seen developing within the gestural lexicon of French popular performing arts. By the turn of the century, nervous pathologies had been thoroughly incorporated into the choreographic language of audiovisual and performance entertainment and the pictoriality of fin-de-siècle painting and literature. Eventually, Gordon asserts, the palpitations and convulsions stylistically pervaded the visual world as the jerky, abrupt and agitated forms characteristic of many twentieth century European art movements like expressionism and Dada. The theatrical and mediatic nature of Jean-Martin Charcot’s medical practice at Salpêtrière has been observed in detail (Cartwright 1995; Faber
1997; Gordon 2001; Didi-Huberman 2003; Gotman 2012; Marshall 2016). Sharits's Epileptic Seizure Comparison distils some of these dramatic conventions of medical performance established by Charcot. Chronologically, the installation falls into a phase of Sharits's career that has been classified as informed by increased concerns for the audience, inclusiveness and diversity (Windhausen 2008). But despite the didactic overtones of Sharits's artist statements and despite taking photosensitive epilepsy as its central subject, the installation is eerily devoid of people with epilepsy. What we see on-screen are anonymous bodies, or rather, pictorial motifs performing a medical spectacle, trimmed down to only the most lurid symptoms of an epileptic fit, while off-screen, the flickering installation space is hostile towards actual photosensitive spectators.

Because it invokes this long history of frequently violent medical control, subjugation, confinement, observation, recording and spectacle, flicker in 1960s and 70s Western visual culture, not only in the structural film movement but also beyond it, became a kind of aesthetic proxy for perceptual assault. On the one hand, photosensitive epilepsy is exploited as a rebellious statement of sorts: creating and screening something that might give someone else a seizure suggests just the kind of edginess and transgression that avant-garde filmmakers (but also Twitter trolls sending strobing memes) are looking for. On the other hand, the very same films remain largely unwatchable to those people whose impairment endows flicker with cultural and historical meaning in the first place. The impact of structural films or the flashing seizure meme sent to Kurt Eichenwald derives from their audience's familiarity with flicker as shorthand for visual aggression. They "draw upon the metaphorical capital of the epileptic or convulsing body," as medical humanities scholar Jeannette Stirling put it (2010, 178). Sharits exploits this iconography of and association with epilepsy to heighten his film's tense and almost bellicose visual and aural language. And Conrad's preamble to The Flicker explicitly frames it as potentially physically and mentally injurious.

Films that incite such intense bodily events through their technology or construction rather than their content are not common in mainstream film culture, although, as mentioned, a handful of examples unintentionally emerged out of the recent trend towards high frame rate cinema. But such films do proliferate in the experimental tradition, where disturbing the equilibrium—emotionally, aesthetically or politically—is a programmatic goal in itself. However, it is important to bear in mind that a large segment of those films and works of video art may be inaccessible to some people
whose bodily orientations preclude them from safely seeing flickering images. The experimental films of Peter Kubelka, Tony and Beverly Conrad, Sharits, Victor Grauer, Aldo Tambellini, Andy Warhol's Exploding Plastic Inevitable events and the films that document them, but also works by Toshio Matsumoto, Peter Tscherkassky or their stylistic predecessors Man Ray and Len Lye—these are but some of the works that frequently utilize a range of flicker-based techniques of disorientation, and together they form the canon of avant-garde and underground cinema. But they perhaps unwittingly also stand as a milepost in a history of optical violence that leads to strobing GIFs becoming deadly weapons. The forms of aggression inscribed in these films often remain unperceived, since, as Susan Leigh Star has argued, "[t]he torture elicited by technology, especially, because it is distributed over time and space, because it is often very small in scope […], or because it is out of sight, is difficult to see as world making" (Star 1990, 48).

That frame rate and compression standards can also enable forms of violence is easily forgotten. After all, our bodies are given to taking the shape of these standards. We learn to bear the "physically unbearable" flicker of European television and become inured to the judder of North American broadcasts. The repetition of movements and gestures, the sitting down in front of the screen and viewing it habitually, fools our bodies into feeling as though the standard itself were "intrinsic to being in the world" (Ahmed 2006, 80). The bodily and the infrastructural eventually form a flimsy circuit. But familiarity does not make the violent potential of moving images disappear; it just pushes it out of sight, ready to be malignantly reactivated in the service of intimidation and abuse in such banal media practices as sending a tweet.

Most incidents involving PSE have so far been read as accidents. But, as the strobing GIF case has shown forcefully, moving images can also be instrumentalized as a medium of bodily harm. When discussions of violence in audiovisual media place too much emphasis on its representational nature, they run the risk of sidestepping, as Judith Butler recently phrased it, "those kinds of violence that […] do not take the literal form of a blow" (2020, 137). The optical and ocular aggressions that Conrad, Sharits, Trumbull and others experimented with in the 1960s and 1970s exist alongside the more obvious spectacularization and mainstreaming of violence that took place in film and television around the same period. But flicker and frame-rate-based "assaults" operate on a different level than the slasher or giallo genres and their many siblings. Flicker is a kind of violence
whose efficacy does not depend on depictions, simulations and insinuations of transgression, but rather on the interplay between the bodily orientations of its spectators and technological infrastructure.
Queer Objects

Photosensitivity is a queer object of study in that it touches a number of disciplines, belonging fully to none of them but entering into liaisons with many. Since antiquity, its historical place has been in medicine. But it also has an abode in disability studies, and its deep entanglement with screen-based and light-based media equally places it in the ambit of media history and theory. This presents us with an opportunity to bring media-archaeological and media-epigraphical methods into conversation with disability studies and queer phenomenology. The borders between these fields are porous. All three are drawn together by their predilection towards failure—be it failures of technological devices, of normative bodily ability, or of normative desire. Phenomenology, as Ahmed explains, "helps us to explore how bodies are shaped by histories, which they perform in their comportment, their posture, and their gestures. Both Husserl and Merleau-Ponty, after all, describe bodily horizons as 'sedimented histories'" (Ahmed 2006, 56). The interest in excavating sedimented histories makes phenomenology resonate with media archaeology on more than just metaphorical levels. But as noted by Judith Butler, the same concern also makes phenomenology useful to gender and queer studies (Butler 1988).

By following neurological and artistic practices revolving around photosensitive epilepsy in parallel with technological standardization and its failures, we can rethink both disability and media history in queer ways. A conversation between queer studies, disability studies and media epigraphy can reveal the meandering mechanisms of violence that sometimes surface when bodily abilities collide with technological standards and the queer materialities of media. It reveals histories of inadvertent bodily effects, of orientations and disorientations, of scientific experimentation and aesthetic ambition, and of subtle and not-so-subtle forms of assault and abuse. It has long been recognized that queer studies and disability studies may, in fact, be investigating the very same social and political structure of unbelonging, one whose epistemic underpinnings emerged with transformations in medicine, statistics and science in the eighteenth and nineteenth centuries (Davis 2002, 2013). Media-epigraphical approaches that
focus on the traces and material conditions of mediated experience can contribute the insight that this process also involved domains that might seem to be rather more remote, such as the laying of electrical grids, the standardization of power line frequencies or the homogenization of film projection frame rates. The recent calls for a “disability media studies” have urged disability scholars to move beyond questions of representation in the media, and encouraged media scholars to acknowledge disability as central to the study of mediation (Ellcessor et al. 2017, 4). Particularly instructive for media epigraphy, disability media studies seeks to find “an epistemology that trusts lived and physical experience as a basis for critique and analysis” and to recognize media practices beyond “normative forms of spectatorship or sensory engagement” (Ellcessor et al. 2017, 18). The example of photosensitive epilepsy shows how this approach can be used to open up new historical narratives of media, even rethink the entire scope of media history. A flickering GIF meme shared on Twitter opens up a passage towards neurological history, revealing how technical media played the central role in bringing forward photosensitivity as an object of study, as well as neurology’s later indelible influence on the history of experimental cinema. An epigraphy of flicker reveals how large-scale infrastructure and old technological standards resonate in our bodies and produce sensory intensities. Yet, however intense the ocular aggressions caused by compressed images may be, my account has up until now framed people with epilepsy as passively embedded in their media environments, continuously parrying the perils of urban modernity. In the next chapter, I will upend my own narrative and approach photosensitivity and compressed television images as a source of pleasure. From that angle, seizures will begin to appear not only as an affliction, but as an active and empowering media technique.
References

Ahmed, Sara. 2006. Queer Phenomenology: Orientations, Objects, Others. Durham: Duke University Press. Ames, Frances R. 1971. "Self-induction" in Photosensitive Epilepsy. Brain 94: 781–798. https://doi.org/10.1093/brain/94.4.781. Andermann, F. 1971. Self-Induced Television Epilepsy. Epilepsia 12: 269–275. https://doi.org/10.1111/j.1528-1157.1971.tb04934.x.
Associated Press. 2017. Man Charged with Hate Crime for Seizure-inducing Tweet. AP News, March 22, Online edition. Barlow, J.S. 1997. The Early History of EEG Data-processing at the Massachusetts Institute of Technology and the Massachusetts General Hospital. International Journal of Psychophysiology 26: 443–454. https://doi.org/10.1016/ S0167-8760(97)00781-2. Betts, Tim, Nicola Dutton, and Helen Yarrow. 2001. Epilepsy and the Ovary (Cutting Out the Hysteria). Seizure 10: 220–228. https://doi.org/10.1053/ seiz.2001.0561. Blom, Ina. 2016. The Autobiography of Video: The Life and Times of a Memory Technology. Berlin: Sternberg Press. Borck, Cornelius. 2005. Hirnströme: eine Kulturgeschichte der Elektroenzephalographie. Göttingen: Wallstein. Borell, Merriley. 1986. Extending the Senses: The Graphic Method. Medical Heritage 2: 114–121. Bowden, A.N., P. Fitch, R.W. Gilliatt, and R.G. Willison. 1975. The Place of EEG Telemetry and Closed-circuit Television in the Diagnosis and Management of Epileptic Patients. Proceedings of the Royal Society of Medicine 68: 246–248. https://doi.org/10.1177/003591577506800426. Bower, Brian D. 1963. Television Flicker and Fits. Clinical Pediatrics 2: 134–138. https://doi.org/10.1177/000992286300200308. Bronfen, Elisabeth. 1990. Violence of Representation—Representation of Violence. Lit: Literature Interpretation Theory 1: 303–321. https://doi. org/10.1080/10436929008580039. Butler, Judith. 1988. Performative Acts and Gender Constitution: An Essay in Phenomenology and Feminist Theory. Theatre Journal 40: 519–531. https:// doi.org/10.2307/3207893. ———. 2020. The Force of Nonviolence: An Ethico-Political Bind. London: Verso. Canales, Jimena. 2011. “A Number of Scenes in a Badly Cut Film”: Observation in the Age of Strobe. In Histories of Scientific Observation, ed. Lorraine Daston and Elizabeth Lunbeck, 230–254. Chicago: The University of Chicago Press. Carletti, Joy. 2014. What Does It Feel Like to Have a Seizure? Quora.com, March 16. Accessed 15 December 2017. https://www.quora.com/ What-does-it-feel-like-to-have-a-seizure. Cartwright, Lisa. 1995. Screening the Body: Tracing Medicine’s Visual Culture. Minneapolis: University of Minnesota Press. Charlton, M.H., and Paul F.A. Hoefer. 1964. Television and Epilepsy. Archives of Neurology 11: 239–247. https://doi.org/10.1001/archneur.1964. 00460210017002. Chaudhary, Umair J., John S. Duncan, and Louis Lemieux. 2011. A Dialogue with Historical Concepts of Epilepsy from the Babylonians to Hughlings
Jackson: Persistent Beliefs. Epilepsy & Behavior 21: 109–114. https://doi. org/10.1016/j.yebeh.2011.03.029. Cholodenko, Alan. 1991. Introduction. In The Illusion of Life: Essays on Animation, ed. Alan Cholodenko, 9–36. Sydney: Power Publications/Australian Film Commission. Collin, Robbie. 2012. The Hobbit—An Unexpected Journey, Film Review. The Telegraph, December 9, sec. Culture. Crary, Jonathan. 1992. Techniques of the Observer: On Vision and Modernity in the 19th Century. Cambridge, MA: The MIT Press. Crone, Bridget. 2017. Flicker Time and Fabulation: From Flickering Images to Crazy Wipes. In Futures and Fictions, ed. Henriette Gunkel, Ayesha Hameed, and Simon O’Sullivan, 268–294. London: Repeater Books. Davis, Lennard J. 2002. Bodies of Difference: Politics, Disability, and Representation. In Disability Studies: Enabling the Humanities, ed. Sharon L. Snyder, Brenda Jo Brueggemann, and Rosemarie Garland-Thomson, 100–106. New York: Modern Language Association of America. ———. 2013. Introduction: Disability, Normality, and Power. In The Disability Studies Reader, ed. Lennard J. Davis, 4th ed., 1–16. New York, NY: Routledge. Dawson, G.D., and W. Grey Walter. 1944. The Scope and Limitations of Visual and Automatic Analysis of the Electroencephalogram. Journal of Neurology, Neurosurgery & Psychiatry 7: 119–133. https://doi.org/10.1136/ jnnp.7.3-4.119. Didi-Huberman, Georges. 2003. Invention of Hysteria: Charcot and the Photographic Iconography of the Salpêtrière. Translated by Alisa Hartz. Cambridge, MA: The MIT Press. Diedrich, Lisa. 2015. Illness as Assemblage: The Case of Hystero-epilepsy. Body & Society 21: 66–90. https://doi.org/10.1177/1357034X15586239. Dietsch, G. 1932. Fourier-Analyse von Elektrencephalogrammen des Menschen. Pflüger’s Archiv für die gesamte Physiologie des Menschen und der Tiere 230: 106–112. https://doi.org/10.1007/BF01751972. Ellcessor, Elizabeth, Bill Kirkpatrick, and Mack Hagood. 2017. Introduction: Toward a Disability Media Studies. In Toward a Disability Media Studies, ed. Elizabeth Ellcessor and Bill Kirkpatrick, 1–30. New York, NY: NYU Press. Eyman, Scott. 1997. The Speed of Sound: Hollywood and the Talkie Revolution 1926–1930. New York, NY: Simon & Schuster. Faber, Diana P. 1997. Jean-Martin Charcot and the Epilepsy/Hysteria Relationship. Journal of the History of the Neurosciences 6: 275–290. https://doi. org/10.1080/09647049709525714. Fenton, G.W. 1986. Epilepsy and Hysteria. The British Journal of Psychiatry 149: 28–37. https://doi.org/10.1192/bjp.149.1.28.
Fischer-Williams, M., T.B. Madden, and J.M. Garvie. 1961. Letters to the Editor: Epilepsy and Television. The Lancet 277: 394–395. https://doi.org/10.1016/ S0140-6736(61)91560-4. Foucault, Michel. 1984. Nietzsche, Genealogy, History. In The Foucault reader, ed. Paul Rabinow, 76–100. New York: Pantheon Books. Fylan, F., and G.F.A. Harding. 1997. The Effect of Television Frame Rate on EEG Abnormalities in Photosensitive and Pattern-sensitive Epilepsy. Epilepsia 38: 1124–1131. Gastaut, H.J., and J. Bert. 1954. EEG changes During Cinematographic Presentation; Moving Picture Activation of the EEG. Electroencephalography and Clinical Neurophysiology 6: 433–444. Geiger, John. 2002. Interview Conducted 28 February, 2002 with Tony Conrad, by Telephone from New York State University at Buffalo. Archive.org. ———. 2003. Chapel of Extreme Experience: A Short History of Stroboscopic Light and the Dream Machine. New York: Soft Skull Press. Geimer, Peter. 2010. Bilder aus Versehen: eine Geschichte fotografischer Erscheinungen. Hamburg: Philo Fine Arts. Glaser, Gilbert H. 1978. Epilepsy, Hysteria, and “Possession”. A Historical Essay. The Journal of Nervous and Mental Disease 166: 268. Gordon, Rae Beth. 2001. From Charcot to Charlot: Unconscious Imitation and Spectatorship in French Cabaret and Early Cinema. Critical Inquiry 27: 515–549. Gotman, Kélina. 2012. Epilepsy, Chorea, and Involuntary Movements Onstage: The Politics and Aesthetics of Alterkinetic Dance. About Performance: 159–183. Gowers, W.R. 1881. Epilepsy and Other Chronic Convulsive Diseases: Their Causes, Symptoms and Treatment. London: J & A Churchill. Great Frame Rate Debate—Part 2—Schubin, Trumbull. 2012. YouTube video. Gunning, Tom. 2011. The Play between Still and Moving Images: Nineteenth- Century “Philosophical Toys” and Their Discourse. In Between Stillness and Motion: Film, Photography, Algorithms, ed. Eivind Røssaak. Amsterdam: Amsterdam University Press. Harding, G.F.A., and P.F. Harding. 2010. Photosensitive Epilepsy and Image Safety. Applied Ergonomics 41: 504–508. https://doi.org/10.1016/j. apergo.2008.08.005. Harding, G.F.A., and Takeo Takahashi. 2004. Regulations: What Next? Epilepsia 45 (Suppl 1): 46–47. Horton, Nick. 2012. The Hobbit: An Unexpected Journey Review. Den of Geek, December 10. Accessed 18 September 2019. https://www.denofgeek.com/ movies/the-hobbit/23771/the-hobbit-an-unexpected-journey-review. Hutchison, J.H., F.H. Stone, and J.R. Davidson. 1958. Photogenic Epilepsy Induced by the Patient. The Lancet 1: 243–245.
Iversen, Margaret. 2012. Index, Diagram, Graphic Trace: Involuntary Drawing. Tate Papers. Accessed 7 February 2019. https://www.tate.org.uk/research/ publications/tate-papers/18/index-diagram-graphic-trace. Jancovic, Marek. 2016. Ghosts of the Past: Frame Rates, Cranking and Access to Early Cinema. In Exposing the Film Apparatus: The Film Archive as a Research Laboratory, ed. Giovanna Fossati and Annie van den Oever, 75–82. Amsterdam: Amsterdam University Press. Jeavons, P.M., and G.F.A. Harding. 1970. Television Epilepsy. Lancet (London, England) 2: 926. Joseph, Branden W. 2008. Beyond the Dream Syndicate: Tony Conrad and the Arts After Cage. New York: Zone Books. Kaufman, Debra. 2011. Douglas Trumbull Sees a Better Filmgoing Future. CreativeCOW.net, September. Accessed 29 January 2018. http://library.creativecow.net/kaufman_debra/Douglas-Trumbull_Filmgoing-Future/1. Klapetek, J. 1959. Photogenic Epileptic Seizures Provoked by Television. Electroencephalography and Clinical Neurophysiology 11: 809. https://doi. org/10.1016/0013-4694(59)90125-7. Koide, Masashi. 2013. On the Establishment and the History of the Japan Society for Animation Studies. In Japanese Animation: East Asian Perspectives, ed. Masao Yokota and Tze-yue G. Hu, 49–72. Jackson, Mississippi: University Press of Mississippi. Ladewig, Rebekka. 2011. Augenschwindel. Nachbilder und die Experimentalisierung des Schwindels um 1800. In Nachbilder: Das Gedächtnis des Auges in der Kunst: Das Gedächtnis des Auges in Kunst und Wissenschaft, ed. Werner Busch and Carolin Meister, 109–128. Zürich: Diaphanes. Laforet, Vincent. 2012. The Hobbit: An Unexpected Masterclass in Why 48 FPS Fails. Gizmodo. December 19. Accessed 29 January 2018. https://gizmodo. com/5969817/the-hobbit-an-unexpected-masterclass-in-why-48-fps-fails. Lagerlund, T.D., G.D. Cascino, K.M. Cicora, and F.W. Sharbrough. 1996. Long- term Electroencephalographic Monitoring for Diagnosis and Management of Seizures. Mayo Clinic Proceedings 71: 1000–1006. https://doi.org/10.1016/ S0025-6196(11)63776-2. Lomas, David. 2012. Becoming Machine: Surrealist Automatism and Some Contemporary Instances: Involuntary Drawing. Tate Papers. MacDonald, Scott. 2006. Tony Conrad. On the Sixties. In A Critical Cinema 5: Interviews with Independent Filmmakers, 55–76. Berkeley: University of California Press. Mackenzie, Adrian. 2013. Every Thing Thinks: Sub-representative Differences in Digital Video Codecs. In Deleuzian Intersections: Science, Technology, Anthropology, ed. Casper Bruun Jensen and Kjetil Rodje, 139–154. New York: Berghahn Books.
Macnab, Geoffrey. 2012. First Night: The Hobbit: An Unexpected Journey, Peter Jackson. The Independent, December 10. Accessed 18 September 2019. https://www.independent.co.uk/arts-entertainment/films/reviews/first- night-the-hobbit-an-unexpected-journey-peter-jackson-8397515.html. Magiorkinis, Emmanouil, Aristidis Diamantis, Kalliopi Sidiropoulou, and Christos Panteliadis. 2014. Highlights in the History of Epilepsy: The Last 200 Years. Epilepsy Research and Treatment 2014. https://doi.org/10.1155/ 2014/582039. Marshall, Jonathan W. 2016. Performing Neurology: The Dramaturgy of Dr Jean- Martin Charcot. Basingstoke: Palgrave Macmillan. Mawdsley, C. 1961. Epilepsy and Television. The Lancet 277: 190–191. https:// doi.org/10.1016/S0140-6736(61)91366-6. Mayr, N., D. Wimberger, H. Pichler, B. Mamoli, J. Zeitlhofer, and G. Spiel. 1987. Influence of Television on Photosensitive Epileptics. European Neurology 27: 201–208. https://doi.org/10.1159/000116157. Mekas, Jonas. 2016. Movie Journal: The Rise of New American Cinema, 1959–1971. New York: Columbia University Press. Metz, Christian. 1985. Photography and Fetish. October 34: 81–90. https://doi. org/10.2307/778490. Mills, Mara, and Jonathan Sterne. 2017. Afterword II: Dismediation—Three Proposals, Six Tactics. In Introduction: Toward a Disability Media Studies, ed. Elizabeth Ellcessor and Bill Kirkpatrick, 365–380. New York: NYU Press. Morishita, Misako. 2007. Receptivity to Television Characters by Children and Adults: A Study of the Difference of Opinion Between the Two Over the Pokemon Panic. Journal of Seigakuin University 20: 17–32. Nancy, Jean-Luc. 2005. The Ground of the Image. Translated by Jeff Fort. New York: Fordham University. https://doi.org/10.2307/j.ctt13x06f6. Nichols, Bill, and Susan J. Lederman. 1980. Flicker and Motion in Film. In The Cinematic Apparatus, ed. Teresa de Lauretis and Stephen Heath, 96–105. London: Palgrave Macmillan. https://doi.org/10.1007/9781-349-16401-1_8. Niijima, S., K. Takahashi, M. Onishi, N. Arii, M. Saito, K. Kuremoto, and Y. Yamashiro. 1998. Clinical Electroencephalographic Study of Nine Pediatric Patients with Convulsion Induced by the TV Animation, Pocket Monster. Acta Paediatrica Japonica: Overseas Edition 40: 544–549. Noe, Katherine H., and Joseph F. Drazkowski. 2009. Safety of Long-term Video- Electroencephalographic Monitoring for Evaluation of Epilepsy. Mayo Clinic Proceedings 84: 495–500. Pallis, C., and S. Louis. 1961. Television-induced Seizures. The Lancet 277. Originally Published as Volume 1, Issue 7170: 188–190. https://doi. org/10.1016/S0140-6736(61)91365-4.
Panayiotopoulos, C. P. 2017. Visual-Sensitive Epilepsies. MedLink Neurology, February 6. Accessed 17 December 2017. http://www.medlink.com/article/ visual-sensitive_epilepsies. Pantelakis, S.N., B.D. Bower, and H. Douglas Jones. 1962. Convulsions and Television Viewing. British Medical Journal 2: 633–638. Payne, Robert. 2018. Lossy Media: Queer Encounters with Infrastructure. Open Cultural Studies 2: 528–539. https://doi.org/10.1515/culture-2018-0048. Radhakrishnan, Kurupath, Erik K. St, Judith A. Louis, Robyn L. McClelland Johnson, Barbara F. Westmoreland, and Donald W. Klass. 2005. Pattern- sensitive Epilepsy: Electroclinical Characteristics, Natural History, and Delineation of the Epileptic Syndrome. Epilepsia 46: 48–58. https://doi. org/10.1111/j.0013-9580.2005.26604.x. Rasmussen, Nicolas. 1993. Facts, Artifacts, and Mesosomes: Practicing Epistemology with the Electron Microscope. Studies in History and Philosophy of Science Part A 24: 227–265. https://doi.org/10.1016/0039-3681( 93)90047-N. Reif, Philipp S., Adam Strzelczyk, and Felix Rosenow. 2016. The History of Invasive EEG Evaluation in Epilepsy Patients. Seizure 41: 191–195. https:// doi.org/10.1016/j.seizure.2016.04.006. Rheinberger, Hans-Jörg. 2009. Epistemic Objects/Technical Objects. In Epistemic Objects, ed. Uljana Feest, Hans-Jörg Rheinberger, and Günter Abel, 93–98. Berlin: Max Planck Institute for the History of Science. Ricci, Stefano, Federico Vigevano, Mario Manfredi, and Dorothée G.A. Kasteleijn-Nolst Trenité. 1998. Epilepsy Provoked by Television and Video Games, Safety of 100-Hz Screens. Neurology 50: 790–793. https://doi. org/10.1212/WNL.50.3.790. Robertson, E. Graeme. 1954. Photogenic Epilepsy: Self-precipitated Attacks. Brain 77: 232–251. https://doi.org/10.1093/brain/77.2.232. Rocchi, James. 2012. The Hobbit: An Unexpected Journey. Lord of the Rings Follow-up is a Flawed Kids Flick. Boxoffice.com. December 3. Accessed 10 December 2012. http://pro.boxoffice.com/reviews/2012-12-the-hobbit-anunexpected-journey. Saleem, S.M., M. Thomas, S. Jain, and M.C. Maheshwari. 1994. Incidence of Photosensitive Epilepsy in Unselected Indian Epileptic Population. Acta Neurologica Scandinavica 89: 5–8. https://doi.org/10.1111/j.16000404.1994.tb01623.x. Schäfer, Armin. 2015. Literatur im Aufschreibesystem von 1800 ist ein Simulakrum von Wahnsinn’. Anmerkungen zu einer These von Friedrich Kittler. Metaphora. Journal for Literary Theory and Media 1: III-1–III-16. Sharits, Paul. 1978. Filmography. Film Culture: 123–124. ———. 1989. Epileptic Seizure Comparison. In Film-Makers’ Cooperative Catalogue No. 7, 436. New York: Film-makers’ Cooperative.
Shoja, Mohammadali M., R. Shane Tubbs, Armin Malekian, Amir H. Jafari, Mohammad Barzgar Rouhi, and W. Jerry Oakes. 2007. Video Game Epilepsy in the Twentieth Century: A Review. Child’s Nervous System 23: 265–267. https://doi.org/10.1007/s00381-006-0285-2. Siegel, Marc. 2017. Queer Cinema Travels. Habilitation thesis, Berlin: Freie Universität Berlin. da Silva, A. Martins, and Bárbara Leal. 2017. Photosensitivity and Epilepsy: Current Concepts and Perspectives—A Narrative Review. Seizure—European Journal of Epilepsy 50: 209–218. https://doi.org/10.1016/j.seizure.2017. 04.001. Sitney, P. Adams. 2002. Visionary Film: The American Avant-Garde, 1943–2000. 3rd ed. Oxford: Oxford University Press. ———. 2008. Eyes Upside Down: Visionary Filmmakers and the Heritage of Emerson. Oxford: Oxford University Press. Smith, Willoughby. 1873. Effect of Light on Selenium During the Passage of An Electric Current*. Nature 7: 303. SMPE. 1916. Standardization: Address by Henry D. Hubbard, Secretary, U. S. National Bureau Oe [sic] Standards, Before the Society of Motion Picture Engineers at its Washington Meeting. Monday, July 24, 1916. Transactions of the Society of Motion Picture Engineers 1: 16–20. https://doi. org/10.5594/J18049XY. Sobchack, Vivian. 2009. Animation and Automation, or, the Incredible Effortfulness of Being. Screen 50: 375–391. https://doi.org/10.1093/ screen/hjp032. Sontag, Susan. 2004. Regarding the Torture of Others. The New York Times, May 23, Late edition, sec. Section 6, Column 1. Star, Susan Leigh. 1990. Power, Technology and the Phenomenology of Conventions: On Being Allergic to Onions. The Sociological Review 38: 26–56. https://doi.org/10.1111/j.1467-954X.1990.tb03347.x. ———. 1999. The Ethnography of Infrastructure. American Behavioral Scientist 43: 377–391. https://doi.org/10.1177/00027649921955326. Stefánsson, S.B., C.E. Darby, A.J. Wilkins, C.D. Binnie, A.P. Marlton, A.T. Smith, and A.V. Stockley. 1977. Television Epilepsy and Pattern Sensitivity. British Medical Journal 2: 88–90. Sterne, Jonathan. 2006. The mp3 as Cultural Artifact. New Media & Society 8: 825–842. https://doi.org/10.1177/1461444806067737. Sterne, Jonathan, and Dylan Mulvin. 2014. The Low Acuity for Blue: Perceptual Technics and American Color Television. Journal of Visual Culture 13: 118–138. https://doi.org/10.1177/1470412914529110. Stirling, Jeannette. 2010. Representing Epilepsy: Myth and Matter. Cambridge: Liverpool University Press.
Strauven, Wanda. 2011. The Observer’s Dilemma: To Touch or Not to Touch. In Media Archaeology: Approaches, Applications, and Implications, ed. Erkki Huhtamo and Jussi Parikka, 148–163. Berkeley: University of California Press. Suárez, Juan A. 2014. Warhol’s 1960s’ Films, Amphetamine, and Queer Materiality. Criticism 56: 623–652. https://doi.org/10.13110/ criticism.56.3.0623. Takahashi, Takeo, Yasuo Tsukahara, Masahide Nomura, and Hiroo Matsuoka. 1999a. Pokemon Seizures. Nerological Journal of Southeast Asia 4: 1–11. Takahashi, Yukitoshi, Watanabe Mizuho, Ozawa Takeshi, Terazawa Sousuke, Motoyoshi Fumiaki, Nakamura Hitoshi, Okamoto Hiroyuki, et al. 1999b. Viewing Condition of Animated TV Program Called “PocketMonsters” and Induction of Photosensitive Seizures. Epilepsy Research 17: 20–26. The State of Texas v. John Rayne Rivello. 2017. Grand Jury of Dallas County, Texas. Docket no. F1700215, March 20. Trenité, Kasteleijn-Nolst, G.A. Dorothée, A. Martins da Silva, S. Ricci, G. Rubboli, C.A. Tassinari, J. Lopes, M. Bettencourt, J. Oosting, and J.P. Segers. 2002. Video Games are Exciting: A European Study of Video game-Induced Seizures and Epilepsy. Epileptic Disorders: International Epilepsy Journal with Videotape 4: 121–128. United States of America v. John Rayne Rivello. 2017 1–13. U.S. District Court, Northern District of Texas. Docket no. 3-17-MJ-192-BK, 10 March, 2017. Walter, W. Grey. 1963. The Living Brain. 2nd ed. New York: W. W. Norton & Company. Wilkins, A.J., C.E. Darby, C.D. Binnie, S.B. Stefansson, P.M. Jeavons, and G.F.A. Harding. 1979. Television Epilepsy—The Role of Pattern. Electroencephalography and Clinical Neurophysiology 47: 163–171. https:// doi.org/10.1016/0013-4694(79)90218-9. Windhausen, Federico. 2008. Paul Sharits and the Active Spectator. In Art and the Moving Image: A Critical Reader, ed. Tanya Leighton, 122–139. London, New York: Tate Publishing.
CHAPTER 6
Close Exposure: Of Seizures, Irritating Children, and Strange Visual Pleasures
In the early twentieth century, the discursive rules that governed the production of documents in psychiatry began to change. In place of the emphasis on nosology, the classification of medical cases under particular disorders, psychiatry's epistemic priority shifted towards the recording of cases and their comparison with each other (Schäfer 2015). Corresponding to this process was an important change in the case file, the disciplining technology (Foucault 1995) and genre of medical writing that transforms patients into cases. What used to be a short and terse description gradually grew in length and detail. In this new textual format, patients with photosensitive epilepsy were narrativized like characters in the strange Bildungsroman of psychiatry and neurology, their deeply personal family histories often preserved alongside medical findings and electroencephalograms.

Neurological case descriptions are an unconventional primary source for doing media history. But reading between the lines, they allow us to glean surprising insights into peculiar media practices on the periphery of audiovisual culture. In this chapter, I will take a look at several decades' worth of medical reports about photosensitive epilepsy and, to borrow an expression from Dana Seitler, read them sideways. On its surface, medical literature about epilepsy already offers us "a subjugated history of those individuals and communities who were the objects of the surveillant and analytical medical gaze" (Cartwright 1995, xv). But in a more oblique way, the cases of photosensitive people also tell of societal anxieties
surrounding gender, sexuality and technology that reverberate through health research. These anxieties become especially visible in the context of a practice known as self-induction—a very unusual way of interacting with compressed images for visual pleasure.
Queer Feelings

Epileptological literature of the previous century struggled to make sense of a particular pattern of behavior documented in some patients. It had been observed since at least 1932 that some photosensitive people, especially children, appear to deliberately induce seizures in their own bodies. Among the common methods to achieve this were rapid blinking, overbreathing, or facing the sun and rapidly waving one's hands in front of the eyes with fingers outspread to deliberately create flicker—like an impromptu manual film projector.1 The intentional provocation of seizures was initially considered unusual and rare but, much like the frequency of photosensitive seizures itself, became increasingly noticeable during the 1960s. With the arrival of television, physicians observed two more techniques that became a much-discussed topic among psychologists, psychiatrists and neurologists: intently staring at a television screen or quickly switching channels.

Not only were the motivations for self-induction not fully understood; in many cases it was initially not even clear whether the gestures and movements were the cause of a seizure or its symptom (Ames 1971; Ames and Saffer 1983). This behavior seemed to occur exclusively in people with photosensitivity, and not in other forms of epilepsy.2 It is estimated that no fewer than a quarter of photosensitive people self-induce either overt seizures or "queer feelings," as a neurologist put it in 1963 (Bower 1963, 135), referring to sub-clinical epileptiform discharges—unusual electrical activities in the brain that do not result in symptoms.
1 The gesture of hand-waving before one's eyes in front of a strong light source is apocryphally linked to Nostradamus's divinations as early as the sixteenth century, and verifiably to Jan Evangelista Purkyně's optical experiments in the nineteenth (ter Meulen et al. 2009). Purkyně is an especially apposite reference in this context due to his willingness to go to remarkable bodily and sensory extremes in pursuit of experimental medical data.
2 Only recently have intentional seizures been documented in other forms of epilepsy, but they remain extremely rare, or at least rarely observed.
The neurophysiologist Colin Binnie, one of the pre-eminent epilepsy experts, called self-induction "the ultimate non-compliance" (Binnie 1988) because of its notorious resistance to antiepileptic treatment.

In the peculiar literary genre of epileptological case reports, the self-inducing patient was a gendered stock character. A 1960s literature review on self-induced seizures characterized the average "case" as a young girl with "neurological or behavioral trouble" (K. Andermann et al. 1962, 63). Until at least the 1970s, the research distinguished between two groups of patients. On one end stood the "compulsive flickerers" who induced absences willfully and were assumed to be "emotionally unstable" (Bower 1963, 135). In these children, the self-inflicted seizures were framed as the pathological result of "subnormal intelligence," psychiatric problems or psychosocial difficulties (Binnie et al. 1980). Under the neurological gaze, self-induction thus provided a circular diagnosis: the habit both identified and confirmed the self-inducer as "impaired." On the opposite end was another group, the innocent children who unintentionally fell victim to the perils of television screens. Neurology and psychiatry thus discursively placed the self-inducing child between two seductive and corruptive forces: visual media and pathological deviance. By excluding from this binary the possibility of desire and agency, neurology and psychiatry discursively contained sexuality and pleasure. Only in the twenty-first century has this view come into question, as it appears that photosensitive children without cognitive impairments also frequently induce their own seizures, but they are simply better at hiding the behavior and more easily escape the surveillance that chronically institutionalized patients tend to be under.3 The psychiatrist Beng-Yeong Ng comments that "[m]any of these children do not have many possibilities for self-satisfaction and have limited ability to play with their body" (Ng 2002, 236). The recognition that many children simply enjoy seizures is a development of the last two decades.

3 This insight finds an antecedent in the disaggregation of homosexuality from mental illness (and disability) in psychiatry in the 1970s and 1980s, as observed by Kunzel (2017).

There is no single reason why people might induce seizures on purpose. Medical literature of the 1960s reveals how vexing this fact was to a discipline that is, as Lisa Cartwright has also commented, usually in search of unambiguous etiological narratives. Pleasure, in particular, seemed to be one factor that neurology and psychiatry struggled to come to terms with. A certain proportion of self-inducing people denies finding seizures pleasurable, explaining their behavior either in vague or evasive terms, or as a
compulsion they cannot help. Some psychiatrists likened the habit to the displacement behaviors of animals (K. Andermann et al. 1962; Kammerer 1963). But many children, contrarily, insist that seizures are simply fun. In 1962, pediatric neurologist Dora Chao reported on a girl with a history of seizures whose "father was a seaman and was away much of the time. The mother did not care for the child and sent her to live with a paternal aunt" (Chao 1962, 734). The girl, whose family history is inconspicuously recorded together with the details of her EEG, visited the hospital in 1958 at the age of nine after having been treated some years prior. Chao's case report notes: "Apparently in the interim of 4 years, the petit mal seizures [absence seizures with brief loss of consciousness] occurred daily, as many as 20 to 30 a day, always precipitated by gazing at bright light, which she apparently enjoyed, inasmuch as she admitted 'it feels good'" (1962, 734). A different case, of a 7-year-old girl, discloses: "Despite heated reprimands and threats of punishment the patient would deliberately evade supervision and when she thought herself unobserved induce an attack. She gained no apparent advantage from this behaviour; but the outcome appears to have been pleasurable" (Whitty 1960, 1208).

Against my reading of the violent potentials of visual media in the preceding chapter, the practice of self-induction paints photosensitive epilepsy in a strikingly different light. And television plays a special role under these changed conditions: "A 9-year-old girl with generalized tonic-clonic seizures and atypical absences had a recurrent habit of being drawn irresistibly toward the television screen. She would bring her face close to the screen, staying there with a glazed expression, and be difficult to distract" (Sharma and Cameron 2007, 2003). And elsewhere:

Some patients with television epilepsy are compulsively attracted to the screen […]. Some of these patients admit to using television as a method of self-induction. Flickering of the television set has its greatest effect when the patient is close to the screen and the image is out of focus. Patients may cause the television picture to go unstable and then stare at the flickering light that it produces. Others insist that the compulsion is distressing to them but irresistible […]. (Ng 2002, 236)
While, as we have seen in the previous chapter, the violent bodily resonances caused by the queer materialities of television were often seen as accidental and unintentional, some such encounters with media can also
be deliberate. The flickering, compressed image induces an attraction both irresistible and distressing, a paradoxical and dangerous affective drive—something we might describe, in other words, as a queer desire. One final example involving video games: "Questioning showed that he [a 14-year-old boy] enjoyed the blank screen and interference patterns when inserting a new game, and that he approached the screen to induce an attack" (Harding et al. 1994, 1211). The malfunctioning screen and the "malfunctioning" seizing body, which I had previously portrayed as incongruous and at odds with each other, have found a new orientation towards one another, a new "intercorporeal relationship" (Marks 2002). The matter at hand is thus not simply that technological standards in television are unsuitable for photosensitive people, or that such people "fail" to see in the way laid down by the standard. Rather, a seizure can also represent an active refusal to occupy the task of seeing in the standard manner.

Strange Attractors

In many cases, self-induction is a willful, pleasurable and indulgent behavior that children often hide from their families and physicians. In the metaphorical machine of neurology, it is a kind of glitch that queers both the medical algorithms of treatment and what Lee Edelman has called "the disciplinary image of the 'innocent' Child" (Edelman 2004, 19). The case reports of these indocile patients document how treatment with anticonvulsants, in some cases with toxic amounts, was rarely effective, instead often causing harmful side effects like depression and, ironically, hypersensitivity to light (Robertson 1954; Green 1966). The archives of neurology record both the patients' continued self-inducing in spite of punishments and their refusal to come out of the closet as "blinkers," "self-inducers" or "flickerers." These are records in the—as Michel Foucault called them—ignoble archives of the medical sciences that have worked to discipline sexuality since the close of the eighteenth century. But they are also archives of resistance. The girl who refuses to comply with her physician and ignores her parents' interdictions breaks the proxemic protocols that govern socially acceptable media use. She moves far beyond the "proper" viewing distance at which a TV image is legible and she willfully mistunes signal into noise, consciously provoking something that, as she is told, could harm her. Disability scholar Anna Mollow emphasizes that "'sex' and 'disability' often serve as different
signifiers for the same self-disintegrating force" (McRuer and Mollow 2012, 305). This realization finds a strange climax in epilepsy patients whose reports document how they masturbate while looking into bright light (e.g. Kammerer 1963; Harley et al. 1967). Given the long-standing historical linkages that tie epilepsy to sexuality, sexual paraphilias indeed served as the central frame of reference for conceptualizing practices of self-induction. The habit was and is often evaluated by physicians, parents and patients as somehow improper or shameful. In the past, it was frequently compared to masturbation and in some cases even deemed "an extremely perverted autoerotic pursuit" (Kammerer 1963, 326, my translation). Sexuality, however, quickly proved to be a structure too rigid and inadequate to contain self-induction, which seemed to resist traditional notions of pleasure and required an expansive redefinition. In 1958, yet another case concluded: "That this kind of auto-stimulation might be analogous to masturbation had occurred to both sets of parents, but there is no convincing evidence for or against this analogy" (Hutchison et al. 1958, 245). And elsewhere: "If there is pleasure in seizures, the seizures may well have something of an evoked memory in them […] or else an altogether new form of pleasure, as in dreams or after eating mescaline" (K. Andermann et al. 1962, 63). The pleasure of a seizure thus appears like an eroticism diagonally related to sex, but also distinct from it; enigmatic, inexplicable and scintillatingly scary.

These are but two paradigmatic examples among many of physicians and patients struggling to find a language to describe the strange desire for flickering light:

She ['an intelligent girl, 12 years of age'] can stop her habit when her father tells her to, but when she is alone she does not want to stop it. She thinks it might give her a bit of satisfaction—a feeling, that is difficult to describe—a feeling between satisfaction and not wanting to do it, something like satisfaction but something else which she cannot explain. She likes the sun so much. Sometimes she would go into a little room and do it hard a number of times, and then would leave the room and very firmly shut the door. She thinks that if she were cross and grumpy her habit might give her some relief. She did it once with a baseball glove but it didn't feel very nice and she stopped it. (Robertson 1954, 237)
And another involving a 17-year-old girl who self-induces with television:

The patient was reluctant to discuss her experiences, but spoke of a 'magnetic feeling' as if being 'pulled' close up to the TV set when she was watching it from across the room. She described it as a 'trance-like' or 'hypnotic' feeling, when everything became blurred or unclear and she was aware only of the TV picture, which changed in tempo as if the action were in slow motion. […] She denied turning the set on and making the picture roll to precipitate the feeling, which she described as 'vaguely scary', but admitted that it was not there when the set was not on. (F. Andermann 1971, 271)
It is more than mere coincidence that queer theorist José Esteban Muñoz chose a distinctly ocular metaphor when he described what queer utopia might look like. “Indeed to access queer visuality we may need to squint, to strain our vision and force it to see otherwise, beyond the limited vista of the here and now” (Muñoz 2009, 22). Squinting is also a tiny form of labor; it requires flexing and straining our muscles to refocus our view and look at things in unusual and “unnatural” ways. This queer visuality, which later scholars have linked to crip modes of desire (McRuer 2017), reverberates through noisy, flickering and decaying images. In her analysis of Tony Conrad’s The Flicker, Bridget Crone arrives at a similar conclusion from a very different starting point when she links the flicker of cinema to practices of speculation and fabulation, to imagining the not-yet-existing (Crone 2017). I argue that what photosensitive children do when they mess with the settings of a TV receiver to disturb the image is a queer media practice. Self-induction finds pleasure and meaning in noise. It defies a cultural system of visual values which insists that a crisp picture and error-free data are the universally desirable norm. These children are media archaeologists avant la lettre: as Wanda Strauven has argued, media archaeology is precisely this type of “noisy” praxis, an exploration of the potentialities of media through their “improper” use (Strauven 2015). To remind ourselves of what Hito Steyerl has emphasized repeatedly, distinguishing between signal and noise is, always but especially now, a political act. For Steyerl, the politics of noise rests in its power to disturb representation—pictorial representation by imaging media, but also political representation. But noisy politics can be thought even further, on the level of infrastructure. Brian Larkin has shown that seemingly simple and
straightforward media practices like watching TV are not detachable from vast infrastructural webs encompassing urban and rural architecture, religion and moral values, transport and logistics, and spatial and temporal modes of rule (Larkin 2008, 219). These permeate, orient and transform the lived affective experience of media, constituting not simply “viewers,” but political subjects. For a media epigraphy of video compression, the practice of self-induction analogously demonstrates how a close look at disturbances in compressed images makes possible new ways of narrating and understanding media history. It makes possible new technological histories, showing, for example, the many insufficiently understood applications of television and video in scientific and medical contexts, and the contributions of people with neurological disabilities and medical professionals to the invention of new media devices, techniques and even experimental filmmaking styles. But the epigraphical approach also unveils that video compression, beyond its technical and scientific dimension, has global sociocultural consequences. Compression intersects with the history of electrification and electrical standards. It can influence how often symptoms of disabilities appear around the world. But it also permeates life on its small, local and domestic scale, modulating the affective dynamics of patient-physician and parent-child relationships, with their internal disciplinary and patriarchal legacies. And, on an even smaller scale, compression relays the long history of infrastructure and standardization into the sensory constitutions of the body. Refracted in compression artifacts like flicker is not just the materiality of media, but also how we use and misuse it: how we enter into relations of proximity with our devices, how we distribute them in our homes and cities, how we traverse the environments they share with us, how we cause violence and pleasure with them. Media epigraphy is the attempt to follow all of these pathways, starting from the small traces they fold into.

Gender, Technology, Family

Self-induction is an obscure, little-known media practice that can teach us new ways of thinking about media and their historical configurations, but also helps us re-evaluate the construction of gender through technology and notions like spectatorship. Pleasure, compulsion and erotic gratification are by far not the only reasons some people with epilepsy induce seizures. Purposefully altering neuronal activities in the brain can provide
relief from tension, stress or anxiety, help escape boredom or unpleasant situations, or create social advantages like skipping school. Self-inducing a seizure in such situations can be empowering. Some people induce seizures in a safe environment at their own convenience, after which they can rely on a longer, seizure-free refractory period (Binnie 1988; Ng 2002). One notable medical report describes a woman who, after several years of unsuccessful treatment for severe chronic depression, was able to resolve it by regularly secretly stopping her anticonvulsant medication and inducing “therapeutic seizures.” Self-induction, in such cases, becomes a technique of healing. It transforms moving images’ potential for violence into agency and serves as a small counterbalance to the power asymmetries inherent in the relationship between bodily impairment and the social structures of disability. But medical literature on photosensitivity also sustains many tensions surrounding the technologies of gender and the gendering of technology—specifically, patriarchal familial arrangements and the role that video plays in their performance. A letter to the editors of Epilepsia, the chief journal of the International League Against Epilepsy, describes the family history of one of the young girls I mentioned above: The parents separated, and the child would visit her father at weekends. She continued to manifest compulsive attraction to the conventional television in her mother’s house, but her father bought himself a plasma screen television, and noted immediately that she did not have any problems at all when watching this. (Sharma and Cameron 2007, 2003)
It is striking to read in a journal of neurology, of all places, about a television set operating as the linchpin at the center of a disintegrating nuclear family. In 2007, the year in which this report appeared, the production of LCD screens surpassed that of cathode ray tubes for the first time. The plasma TV mentioned here was also a historical milestone: it marks the slow demise of analog broadcasting and the disappearance of cathode ray tubes from the household and, consequently, also a reduction in flickering epileptogenic stimuli from domestic spaces. But in this case, the TV set also plays a strange part in a libidinal economy. The newly single father, freed of the expectations of monogamous matrimony and of the need to justify purchases of expensive gadgets to his partner, moves on from a relationship he had started in an analog time.
The short report goes on to observe that the decreased flicker of LCD and plasma screens might explain the girl’s equally decreased attraction to the device. The authors finally conclude that “[p]hotosensitive epilepsy could then be added to the list of reasons why a father may need to purchase a new plasma screen television” (Sharma and Cameron 2007, 2003). The joking tone notwithstanding, this conclusion confesses the implicit patriarchal order of technological things in the family and picks up the post-war discursive association of media-technological advancements with masculinity (Keightley 1996, 2003). It is fathers who procure and oversee technology in the household, and children with disabilities can be conveniently utilized to convince a—so the insinuation goes—uninformed and antagonistic wife. Here, different screen technologies with different flickering behaviors seem to modulate the maintenance of the family, and thus of a reproductive future. It is not at all unusual to encounter evidence like this of the dysfunctional realities of familial life reproduced in and through dysfunctional devices and images. Epileptological literature abounds with distant fathers and anxious, guilt-ridden and controlling mothers, and with children and adolescents who learn to take advantage of their own seizures and the queer materiality of media to manipulate or punish their parents. [‘D.A.’] was seen again at the age of 17. Her parents had been separated for some years, her father had remarried and her mother was married by common law. The patient was much disturbed by the situation. When particularly upset or depressed she would turn on the TV set, but not when she was occupied. Sometimes she shifted channels quickly, but more often she came within a few inches of the screen, moved her right hand slowly to the focusing dial and turned it so the image blurred and then ‘rolled like crazy’. She then had an absence attack with unconsciousness and with or without a few myoclonic jerks; 3 times within 4 months such attacks progressed to a generalized tonic-clonic seizure […]. She had been traumatized by her father’s separation and second marriage, which she interpreted as rejection. She tried to compete with her stepmother and lost, so she played the sick role to make her father feel guilty; in her mind this was a means of achieving her wish to return to him. (F. Andermann 1971, 270–273)
Cases like these are plentiful. Over the previous year she [an 11-year-old girl] had had typical absences and myoclonic jerks that she found pleasurable and induced by rapidly
changing television channels, particularly when emotionally upset. When this behaviour was frustrated by a new television set she induced absences, jerks, and occasional GTCSs [generalized tonic-clonic seizures] by rapidly switching between games on a home video game. (Ferrie et al. 1994, 926)
Struggling to find their position inside of familial spaces that are failing them affectively, these unruly children—girls, mostly—turn to the TV or video game for comfort. But they practice a very different form of looking than the one we would normally associate with these media. The flickering screen provides a minor relief, a momentary respite from unbearable situations. Often, these girls are patronizingly described by their physicians in terms like “spoiled, ill-mannered, and irritating” (Hutchison et al. 1958, 244). What their non-compliance really irritates, however, are the norms of spectatorship—the normative orders of media use that regulate how television-viewing should be practiced, how we should behave around screens. It seems that these norms of media conduct are inseparable from the modes of power that uphold the authority of the father and the physician and their ability to monitor, constrain and control female pleasure. With their habit of intentionally stimulating queer feelings with flickering light, the spoiled, ill-mannered and irritating girls defy the social order of the patriarchal family.
Vital Failure: Self-induction as Visual Pleasure

Photosensitive children and adults who induce seizures for self-satisfaction invented, in essence, a new form of visual pleasure. Discussions of pleasure have a long history in film theory, but this particular manifestation does not fit neatly into any received model of spectatorship. Self-inducers are quite unlike any viewing subject previously known to film theory. They do not behave like the distracted film audiences described by Kracauer and Benjamin, nor the voyeurs of Lacanian feminist cinema theory, nor Baudry’s cinemagoer oblivious to the ideological workings of film. Neither are they comparable with the classical and, as Lynne Joyrich pointed out, distinctly gendered viewers conjectured by early television scholarship (Joyrich 1996). Viewers imagined by the “glance theories” (as John Caldwell called them) of John Ellis, Raymond Williams and others are inattentive; their glances towards the screen are given in passing (Caldwell 1995). In contrast, self-induction does require attention to the screen, but it does not have to be sustained for long—a long-held
precondition of many theories of media attentivity. And although it is queer indeed, self-induction also differs from the oblique form of vision described and practiced by Vivian Sobchack when watching horror films, and characterized more by looking away than by looking towards (Smith 2008, 119). Inspired by Sobchack, Laura U. Marks perhaps comes closest to developing a model of vision that could also account for the viewing practices of photosensitive people. Her theory of embodied spectatorship and haptic visuality encompasses the sensuousness of such uses of media (Marks 2002). But even then, the eroticism of the encounter does not fully encapsulate its simultaneously violent and transgressive potential. There are some similarities between self-inducers’ mode of engagement with optical media and alternative approaches to film viewing developed in North American art circles in the 1960s. Stroboscopic devices like the Dreamachine, the flickering device closely affiliated with the structural film movement and the Beat generation, were overtly influenced by neurological research on PSE not only in their construction, but also in how they were viewed.4
4 Ter Meulen et al. (2009) speculate that the Dreamachine’s failure to be purchased by Philips Corporation for mass production can be attributed to fear of photosensitive epilepsy.
The Dreamachine’s flickering light projections were meant to be “watched” with the eyes closed, just like light stimulation is traditionally performed in the clinic according to the testing protocol for photosensitivity. This is because eyelid closure is known to increase the susceptibility to seizures, a fact also popularized by William Grey Walter’s book. In the flicker films of the 1960s, just like when a child tunes the TV to atmospheric noise, the picture is unproductive: it refers to nothing but its own physics. It is a non-image in that there are no representations of the world, no bodies to gaze at, no pictorial structure, no camera whose look to identify with, no commodities to advertise. Actively looking at images like these is a type of visual practice predicated not on interpreting forms, following a narrative or distilling meaning, but on a perception that operates on the level of phenomenal intensities, spatial and temporal frequencies, brightness patterns and fluctuations, and chemical and electrical signals in the brain. The perceptual responses to both avant-garde flicker films and TV static are multisensory and unpredictable. Their pleasurable or adverse effects are highly subjective and contingent upon a multitude of biochemical and psychological parameters. In this sense, the self-inducing child
and the viewer anticipated by avant-garde cinema thus share a number of commonalities. But in other ways, they are dissimilar. Structural films lay out carefully calculated aesthetic irritations and let them take place in collective projection settings. In films like The Flicker or Epileptic Seizure Comparison, the flicker frequency is predetermined by their directors’ meticulous formal construction and the conventional hierarchy between creative author and passive viewer remains intact. These films and the flicker in them are made strategically to assault and disturb—with good intentions, to be fair, but nonetheless embedded in the same pattern of harmful experimentation, disciplining and non-consensual exhibition that neurology and psychiatry have historically subjected disabled people to. In contrast, self-induction is an active form of spectatorship that brings about disturbances in the image on its own terms. It is interactive and improvised inasmuch as it invites and even requires tinkering with the hardware of one’s own senses, adjusting the television as well as one’s position in space and orientation towards it. When children use television screens to give themselves seizures, the self-induction queers a domestic object against its intended use—it decontextualizes television, we might say, stripping its images of all content down to only the material electrical signal. Questions of authorship and style are irrelevant to the types of images that photosensitive children seek out to provoke seizures. The act of self-induction embraces the potential for violence latently present in every flickering moving image but transforms it into pleasure, agency and relief. Unlike screenings of The Flicker and installations of Epileptic Seizure Comparison, self-induction is also deeply private. It is stigmatized, shamed, punished and done in secrecy, like a great many queer practices are. From a normative point of view, self-induction appears like a dangerous, irrational and irresponsible compulsion or sexual paraphilia. It goes against health, where notions of “health,” as Regina Kunzel and others argue, are often nothing other than a set of unjust social norms (Kunzel 2017; Metzl and Kirkland 2010; Roberts 2010). But such privatizing affective states, as Judith Butler has argued in a different context, are not necessarily depoliticizing (Butler 2004). Self-induction is full of disruptive and subversive potential and often consciously employed as such. It enters medical discourse first as an unsystematic collection of anomalous individual reports, cases and observations. With the incursion of television sets into the domestic space, this “collective, ephemeral archive of sensuous experience that imaginatively
connects users across time and space” (Payne 2018, 532) gradually breaches psychiatry’s normative presumptions about disability, sexuality and pleasure. By sheer repetition, the misbehaving children confounded physicians and psychiatrists and forced them to acknowledge the pleasurable and therapeutic uses of seizures by people of different social classes, ages and cognitive dispositions. Self-induction as bodily and mediatic technique made pleasure—female pleasure, especially—speakable in relationship to disability. The relation between the photosensitive child and the television they are attuned to is thus best understood as a vital failure: a failure of the image, a failure of treatment, a failure to contain pleasure—all of them vital and vitalizing.

Positive Exorcisms

The Swiss video artist Pipilotti Rist once said in an interview that irresponsible children are the quintessential critical viewers of visual media (Ross 2000). This is a delightful film-theoretical proposition, aptly coming from an artist whose work is also often irreverent, deeply personal and “irresponsible.” Many of Rist’s videos are replete with visual disturbances, compression artifacts and other technical failures, which, much like some photosensitive children, she recuperates as positive, emancipatory aesthetic interventions and reclaims into moments of reassurance and resistance. The playful exploration of one’s own body is another common theme in Rist’s œuvre, and in the context of this chapter, we could directly contrast it with psychiatric endeavors to contain (women’s) bodily and neurological pleasure. In Rist’s own words, the “parallels between psychosomatic disturbances, character deficiencies and technical faults in machines” (quoted in Julin and Praun 2007, 110) recurrently figure in her early work, offering a productive ground for media-epigraphical interpretations. Many of Rist’s works, but especially the early I’m Not The Girl Who Misses Much (1986) and (Absolutions) Pipilotti’s Mistakes (1988), have been analyzed through the prism of hysteria (e.g. Ross 2000; Phelan et al. 2001; Ross 2001; Julin and Praun 2007), with various authors’ interpretations converging on psychoanalytical readings. But when placed against the backdrop of self-induced seizures, I think we can also recast Rist’s early video work in a way that teases out new meanings, doing justice to the historical separation of hysteria from epilepsy.
One could, for example, note that the unusual visual effect she uses in her influential video tape I’m Not the Girl who Misses Much bears an undeniable pictorial similarity to the tracings of an electroencephalograph’s needle (Fig. 6.1). In this five-minute-long tape, Rist performs a one-woman cover version of The Beatles’ song “Happiness is a Warm Gun” for the camera and accompanies it with a spasmodic performance in the tradition of what Kélina Gotman (2012) has termed “alterkinetic dances.” The audio track is severely modulated and the image is equally heavily distorted, as if the internal mental stimulus of Rist’s frenetic performance had been short-circuited with the electric emissions of the video machine. The frame appears to show multiple internal and external processes simultaneously. The strange compressions and rarefactions of the image and the diagonally progressing partial freeze-frame that slowly skims across the frame from left to right in the middle portion of her piece dissect and briefly suspend Rist’s jumping (and, with emphasis, gendered) body,
Fig. 6.1 Still from I’m Not the Girl who Misses Much (1986) by Pipilotti Rist. © Pipilotti Rist c/o Pictoright Amsterdam 2022
immobilizing its distorted contours on screen. The resulting zig-zag pattern of vertical smudges is like an indication that we are not only watching a recording of Rist’s outward frenzied, contorted movements, but at the same time also measuring some unknown internal neurophysiological variable. Rist’s subsequent work (Absolutions) Pipilotti’s Mistakes takes up the same theme and expands on it artistically and technically. The dynamic 1988 single-channel tape, though under 12 minutes in length, addresses a dense range of thematic concerns. Its title already calls for special attention. In German, the original title Entlastungen can mean as much as relief, exoneration, or, most appositely, electrical discharge. Yet the video’s somewhat peculiar English rendering as Absolutions alludes to an ecclesiastical subtext. The mistakes of the mind and the mistakes of the machine also seem to become synonymous with sin. Visually, the short video presents a number of recurring motifs. Purely graphical elements like colored squares alternate with footage of a woman, played by Rist herself, in various spatial settings and situations, often falling down, appearing to drown or to be pushed under water in an involuntary ablution. These images are overlaid with prolific, chaotic analog signal disturbances, compression artifacts, coarse textures and striations of saturated green, orange, blue, fuchsia and pink color. A certain queer impulse is overtly present in Absolutions. At one point, the voice-over asks, in German, “Am I a man or a woman?” In the interpretive framework put forward by Rist, everything subordinate and faulty seems placed to the fore. In the late 1980s, analog video, along with television, was frequently theorized as feminine on account of its qualitative deficiency against film (Joyrich 1996). But Rist cherishes video’s faults and compression traces as the main object of aesthetic interest; not a flaw in the surface of an underlying image, but a surface to be observed and enjoyed in itself. She affirms deficiencies and mistakes in all of their forms. Her voice-over exclaims: “I hate all those ideas about the ideal… which doesn’t exist!” (my translation). What exactly Pipilotti’s mistake is remains unclear, but it seems to me that the title can be interpreted as a self-reflexive statement on Rist’s own use and abuse of video. Instead of delivering us an easily legible image, the screen flickers, flutters, frizzles; the pictures skip and act as if of their own accord. The art historian Rebecca Lane makes the compelling observation that we, as viewers, connive in Pipilotti’s mistakes and also “occupy a position of failure: a failure to experience visually [their] resolution” (Lane
2003, 24). This is, of course, in spite of Rist’s technically masterful manipulation of the medium. As she has made clear in her interviews, Rist explicitly views the technical aspects of her art as an issue of gender and strongly emphasizes her resolve to retain technical control over how her work is installed and presented (Harris 2000). The proximity of Rist’s work to music video has been noted repeatedly (Lane 2003; Julin and Praun 2007; Slyce 2009), yet she more often than not also paraphrases and exaggerates the production methods and aesthetic language of home movies. It is precisely through the vocabulary of home movie formats that the image disturbances and signal failures in Absolutions (and other early works in which Rist performs herself) can be understood. As Alexandra Schneider (2003) points out, the “mistakes” of home movies—the technical deficiencies, narrative incoherencies and other commonplace features of amateur filmmaking—do not diminish the pleasure of viewing them but, quite on the contrary, augment it. Rist clearly revels in this damaged and imperfect mode of representation and maximizes the pleasure of failures and errors in the image. Just like the self-inducing young girls in front of the tube, Pipilotti literally gets too close to the screen, her face and eyes shown in extreme close-up over and over again (Fig. 6.2). The consequence of getting too close to the technology, her penance for the “sin” of video, is a seizure. We see Rist unable to assume a stable position in space and struck to the ground over and over again, as if possessed by the falling disease, as epilepsy was called in Babylonian times. Robbed of repose, she falls down and is violently baptized in a pool of water. Falling down is her absolution: a discharge, a relief, an exoneration, an exorcism, a climax, Entlastung. Rist frames herself in the frame; she enters the space of video in order to bend, distort and queer its already asymmetrically gendered geometry from within. As a political and aesthetic tactic, she does not simply abnegate video’s failures, mistakes, poor resolution and heavy compression, since this would also mean disarming the actual potential of video art to upset, disorient, irritate, provoke, excite and amuse. Instead, Rist embraces negativity—“The bad exists, yes!”—and by repeating and replaying Pipilotti’s mistakes and failures, affirms failure and claims its space as her own. “Failure,” Judith Halberstam argues, “allows us to escape the punishing norms that discipline behavior and manage human development with the goal of delivering us from unruly childhoods to orderly and predictable adulthoods” (Halberstam 2011, 3). Sara Ahmed has similarly insisted
that failure can represent “the hope for new impressions, for new lines to emerge, new objects, or even new bodies” (Ahmed 2006, 62). Rist’s failure to resolve an image is, in oblique ways, comparable to the failure of photosensitive children to abide by medical protocols. They both produce new modes of corporeality, new forms of spectatorship. New possibilities for pleasure and new sensory intensities.

Fig. 6.2 Still from (Absolutions) Pipilotti’s Mistakes (1988) by Pipilotti Rist. © Pipilotti Rist c/o Pictoright Amsterdam 2022

100.000.000 Years of Video History

Sara Ahmed’s notion of queer phenomenology teaches us to pay attention to matters of orientation, space and proximity. This methodological stance is very valuable when thinking about media and disability because it helps us to recognize how some media environments extend certain bodies more than others, and how technologically constituted spaces orient and navigate us towards certain objects and experiences. In one of the entries on her game review blog “Indie Gamer Chick,” Catherine Vice details some of her experiences of gaming with photosensitive epilepsy.
In order to play upcoming Xbox Live Arcade title Charlie Murder, I had to ditch my beautiful Sony 3D LCD television and instead slum it on an old projection TV with a fading image. In addition to that, I had to bring extra lighting into my office, and wear sunglasses. This was in addition to my normal precautions, which include a proper distance from the screen and my medications. (Indie Gamer Chick 2013, n.p.)
This account paints a startling picture of the mundane arrangements and supplemental labor that people with photosensitive epilepsy need to undertake in order to inhabit certain media spaces, including the domestic space of one’s own. I am reminded of when Susan Leigh Star spoke about the “critical differences between those for whom networks are stable and those for whom they are not, where those are putatively the ‘same’ network.” Star continues: “part of the public stability of a standardized network often involves the private suffering of those who are not standard” (Star 1990, 42). For someone susceptible to photosensitivity, many objects and environments besides television screens can turn into epileptogenic stimuli. Among them, machines, media and modes of transport that paradigmatically mark technological modernity are especially numerous: “computers, videogames (VGs), discothèque lights, venetian blinds, striped walls, rolling stairs (escalators), striped clothing, and sunlight […] interrupted by trees during a ride in a car or train […], rotating helicopter blades, disfunctioning fluorescent lighting, welding lights, etc” (Kasteleijn-Nolst Trenité et al. 2004, 2). This list appears in a paper written by neurologists, but we could also view this enumeration of jarringly unrelated things as a new archaeology of the moving image. It anticipates by two years Thomas Elsaesser’s call for a “film history of image-interference” (Elsaesser 2006). What would the history of moving images look like if it were recounted from the vantage point of neurological conditions like photosensitive epilepsy, as a haphazard succession of disorienting spaces, flickering lights and stroboscopic sparks? If we conceive of the archaeology of cinema as a story of optical and bodily failures, of discomfort and nausea, then we would have to extend its scope and start including non-cinematic objects like the spinning potter’s wheel or fluorescent lamps, recent yet already almost forgotten devices like the 100 Hz analog television, semi-cinematic experiences like a ride in a moving vehicle along a row of trees, and medical devices like
flash generators, photostimulators, toposcopes, anti-epileptic optical aids like tinted glasses or special effects amplifiers diverted from the broadcasting industry into neurological research. All of these are machines and devices that produce or transform, in one way or another, light as moving images, and therefore must be considered part of what Elsaesser called the “different S/M registers of the cinematic apparatus”: the sensory and motory, the scientific and medical, and the sadistic and masochistic (violent and pleasurable) effects of the moving image (2006, 21). Disturbances and traces of compression like flicker insistently remind us that it is too soon to abandon the philosophical and historical problem of the body and its “facticity”—the body as it experiences, is constituted by and constitutes technology (Young 2005, 16). Both photosensitive and nonphotosensitive people tend to see hallucinations in reaction to strobing light. Remarkably, those hallucinations are often described as cinema. Allen Ginsberg called his Dreamachine-induced visions “homemade optic movies” (quoted in Geiger 2003, 58) and one of John R. Smythies’s epilepsy patients called them “scenes in a badly cut film” (quoted in Canales 2011, 231). Similar hallucinations are reported by most viewers of The Flicker, a queer film that thus generates other films with each viewing. Nineteenth-century clinical literature had already described epilepsy-related hallucinations in terms that are oddly prescient of abstract animated films. Visions often take the form of contaminations, distortions and disturbances in the visual field: “increase or diminution in the size of objects, […] sparks, a ball of light, a flash, or a glare” (Gowers 1881, 63). Early medical literature on epilepsy documents moving and wiggling blobs and lines, geometric patterns and colors, whirling spots, circles, stars and deformations of perception like blurs. Some of the patients in medical sources describe seeing even more intricate mise-en-scènes during seizures: old women in brown dresses, ugly and resentful faces or large rooms (Gowers 1881, 66; Cohen et al. 1999). How can we account for this hallucinatory cinema in media-historical terms? These apparitions are certainly “movies” inasmuch as they appear as animated pictures. Optical hallucinations certainly have aesthetic qualities, and the more elaborate ones also have narrative dimensions, as evidenced by epilepsy patients who evaluate their montage as good or bad. Flicker-induced and seizure-induced visions differ from conventional cinema in that they are obstinately unrecordable and thus unarchivable. That, however, does not preclude them from having a history. On the contrary. To echo an argument that Peter Geimer has put forward regarding the
history of photography, these images spontanés mean that humans have been seeing electrically produced moving images—in other words, video—for as long as epilepsy has been around (Geimer 2010, 22–27). Consequently, this must mean that the “history” of video has neither origin nor inventor because it reaches at least as far back as the mammalian brain. Thinking media through disorderly compressed images and disorderly bodies thus goes beyond media archaeology. It goes beyond the media-archaeological project of expanding previous histories of technology and probing them for occlusions and omissions. A media epigraphy of visual disturbances bursts into a “natural history” of video spanning, potentially, hundreds of millions of years.
Queer Audiovisual Practices

Bodies have tendencies, Sara Ahmed reminds us. When looking at the ways we encounter media and at the actions they allow us to perform, screen-based flickering technology seems, at first, at odds with the tendencies of bodies that are sensitive to light. But whether performed for sexual, social or therapeutic reasons, self-induction is a true “positive exorcism” in Pipilotti Rist’s sense. When self-inducing a discharge, the photosensitive body reorients itself, takes a new direction; it apprehends a media device against its obvious affordances. Technology and its standards in such cases become more than just a constraint on the impaired body’s possibilities to act in accordance with some norm. They turn into a space in which bodies can extend their unique capacities for action. In their encounters with the queer materiality of media, new modes of existing in the sensory world can emerge, modes that are also rife with frictions and failures (Stiegler 2001; Mills 2011b; Harrasser 2013; Birnstiel 2016; Mills and Sterne 2017). Self-induced television seizures turn our attention away from the properties of television as a media object and towards television as a set of spectatorial practices materializing out of electrical infrastructure. Seizures bring to light the complicated media politics of childhood, gender and sexuality. And they reveal the productive ways in which people with PSE sense their sense of sight (to paraphrase Daniel Sack) and find value, meaning and desire in unstable, faulty and generally undesirable images. This value appears in moments of failure that are ephemeral, during random signal dropouts, sudden ruptures in the televisual flow, gaps in the channel spectrum or brief flashes during zapping from one program to another.
Self-inducing children turned this noise between signals into a signal. They are at once media recipients and creators: when one strategy to produce the images that bring them pleasure fails—for example, when the parents get a new TV that does not flicker as much—they invent new strategies. Perhaps we can learn from these children how to look at media in new ways. Pursuing the traces of queer materiality, as exemplified by malfunctioning televisions and projectors or compressed flickering pictures, leads us to a rich history of such dismediating practices. Although often occurring in hidden, private or fringe media spaces, they challenge some of the core issues of media theory and compel us to consider the long, latent history of sensory politics tucked away in the electric signals that make up the images we encounter in our lives. In this media epigraphy of flicker, I sketched the aporetic nature of compression as both a source of violence and of pleasure and healing. Guided by queer phenomenology, compression lets us rethink the relationships between technological standards and disability in queer ways and remains useful in bringing forward “that which is permanently escaping, subverting, but nevertheless in relationship with the standardized” (Star 1990, 39). Heteronormativity, able-bodiedness and video standards work in somewhat similar ways inasmuch as all of them draw upon and produce similar regimes of medical knowledge. They all draw lines of alignment and orientation: they provide us with instructions for distinguishing images that are normal or abnormal, viewing conditions that are normal or abnormal, desires that are normal or abnormal, and bodies that are normal or abnormal. But with their playful, dangerous, compulsive, irresponsible, erotic and therapeutic media practices, self-inducing people queer the standards of our visual culture from the margins. Although they offer ample room for many other readings, the disturbances in Pipilotti Rist’s early video works, acknowledging, as they do, the malleability and plasticity of human perception, serve as a vibrant surface along which these relationships can be articulated. Rist lingers on moments of failure because it is precisely when images begin breaking down that the transformative forces of moving images become apparent. By bringing forward the strangeness and labor involved in looking at media, my intention here is not simply to critique the hegemony of normalcy or to campaign that technological standards must be inclusive of the infinite sensory permutations of the human body. Yes, pragmatically
speaking and from a medical standpoint, ideally, standards should be inclusive, since that would avoid unnecessary therapies and decrease the overall risk in the population in cases like photosensitive epilepsy. But such a demand would miss the mark because, as we have seen, some corporeal responses to media do not precede standards, but only first take shape as their effect. Instead, it seems more constructive to critically examine the historical processes, technological practices and media environments that together act to permit various forms of violence, pleasure and desire to exist. Seen through the lens of disability media studies, the history of television looks ambivalent. Television and video (and cinema, to a lesser degree) have played three distinct roles in the medical field of epileptology: as a cause of seizures, as object of study and as instrument of research. Television’s mass adoption has made PSE acutely visible to neurology and stimulated viewing practices that led to its more nuanced understanding. It facilitated the eventual acknowledgment that the self-induction of seizures was not a pathology, but could also be therapeutic and pleasurable. The 1960s spike in the visibility of PSE and its romanticization in North American counterculture has left a lasting legacy on the history of film and video art and their experiments with the limits of the experienceable. Another sharp increase in attention to PSE, this time manifesting as a social panic, occurred following the proliferation of home video gaming, particularly the Christmas sales of Super Mario World in 1992 and the incidents in the United Kingdom and Japan in 1993 and 1997. These engendered well-intentioned national and in some limited cases international broadcast guidelines and have led to the adoption of techniques like automated flash analysis. On the other hand, they have also reinforced the reductive association of flicker with an aesthetics of transgression. Media-assisted medical methods like video monitoring have led to improvements in therapy but have also created new health risks. A more recent development has been the removal of auto-playing GIFs on online platforms like Twitter and Facebook, which was likely influenced by accessibility guidelines and possibly by incidents like the one involving Kurt Eichenwald. But the implementation of these measures is uneven and their efficacy debatable. Some of their byproducts like epilepsy warnings are easily co-opted into uses that trivialize or sensationalize photosensitivity.
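The automated flash analysis mentioned above is, at its core, a counting procedure: reduce every frame to a luminance value, register large opposing luminance changes, and flag any one-second stretch that contains too many of them. The fragment below is only a simplified sketch of that idea, not the Harding test or any certified broadcast tool; the 10% swing and the "no more than three flashes per second" limit are assumed here because they are the commonly cited rule-of-thumb thresholds in accessibility and broadcast guidance.

```python
# Simplified sketch of automated flash analysis. Real analyzers also weigh
# screen area, red saturation and viewing conditions; this version only
# counts large luminance reversals within each sliding one-second window.

def flash_warnings(luma, fps, swing=0.1, max_flashes_per_second=3):
    """luma: mean relative luminance (0.0-1.0) of each frame, in order."""
    transitions = []                  # frame indices of opposing luminance changes
    ref, direction = luma[0], 0
    for i in range(1, len(luma)):
        delta = luma[i] - ref
        if abs(delta) >= swing:       # change of at least 10% of full scale
            d = 1 if delta > 0 else -1
            if d != direction:        # only opposing changes count
                transitions.append(i)
                direction = d
            ref = luma[i]
    window = int(round(fps))          # frames in a one-second window
    warnings = []
    for start in range(max(1, len(luma) - window + 1)):
        n = sum(1 for t in transitions if start <= t < start + window)
        if n // 2 > max_flashes_per_second:   # a flash = two opposing transitions
            warnings.append(start)
    return warnings

# A 25 fps clip alternating between black and white every frame trips the check:
print(flash_warnings([0.0, 1.0] * 25, fps=25))
```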
Recent approaches to television have begun revisiting some of the obsolete assumptions of a now canonized generation of texts in television theory. In her book on televisual pleasure and spectacle, Helen Wheatley has shown the limitations of media historiographies interested, first and foremost, in establishing essential differences between TV and cinema (Wheatley 2016). A close study of photosensitive epilepsy and self-induction affirms the basis of Wheatley’s argument that pleasure and spectacle are central to the history of television. Time and again, the historical and material evidence demonstrates that media incessantly cross-pollinate across many dispositifs, both traditional ones such as the cinema or the domestic space of the living room, and peripheral ones like the neurological laboratory. By following traces of compression in moving images, some of these pathways between multiple configurations of media—these thousands of undisclosed possibilities of telling the past—can be addressed. But reaching beyond Wheatley’s study, it seems that there are also forms of televisual pleasure that are completely untethered from TV programming or content, as well as from technological developments like color television or increased picture resolution and quality. In fact, the history of self-induction testifies that such technological “improvements” can at times hinder specific spectatorial practices. On the material level, the increasing black-boxing of television sets (the removal of manual horizontal and vertical hold dials in the 1970s, automatic channel scanning and later the transition to digital broadcasting) seems like a story of progress only if we choose a vague notion of “user-friendliness” as its only measure. As cathode ray tube televisions retreat into museums and collectors’ homes, the “hands-on media practices” (Strauven 2015) of photosensitive people, such as the capability to mistune a TV and see analog static, become an increasingly rare experience. For people who “occasionally deliberately twiddled the TV dial to give [themselves] a minor seizure” (F. Andermann 1971, 269–270), the same process of black-boxing meant a reduction of potentially dangerous environmental stimuli, but at the same time also decreased possibilities to tinker, play and experiment with screens, flicker and compressed images. How might we conceive of queer audiovisual practices in light of this conflicted history? My suggestion is that we need to expand what queerness means in relation to media. A queer audiovisual practice—be it creating, watching, playing with or otherwise interacting with media—must be anti-normative not only as far as narrative, characters or form are concerned. Queer audiovisual practices also need to challenge compression methods, formats and frame rates, and engage with and resist those material, technical norms that “embody a range of cultural and economic values, some of which are deliberately ‘scripted’ into design, others of which accrete
inadvertently” (Mills 2011a, 323). A queer media practice is thus one that recognizes that universal standards are anything but. A queer media practice is one that accounts for the dismediations of media: for both the capacities and incapacities of our bodies, for both the violence as well as the pleasure that media allow us to enact. This is always impractical, often irrational and sometimes impossible. We align with the paths that are put in front of us and have been there before us. We are, for instance, bound to the frame rates that are already there. Short of taking a monitor, television display or graphics card apart and manipulating the signal generators, circuit-bending or hacking into the firmware, there is little possibility to alter the forms of compression they have been built for. Tony Conrad was faced with this impossibility when he unsuccessfully tried to reformat and compress The Flicker from film to digital video (MacDonald 2006, 72). Yet it is precisely the futility and predisposition for failure which are, according to queer scholars Jack Halberstam, Sara Ahmed and others, the conditions under which new modes of visual culture and pleasure can thrive. The queer media practices of children with photosensitive epilepsy are a good reminder that our bodies are technological. Bodies simultaneously rely on and resist standardization; they are living electrical instruments that can create and project movies to themselves and react to their environment in unruly, frustrating, dangerous, enjoyable and wondrous ways. Relatively few video formats allow for fluidity in temporal compression or the possibility to adjust frame rates. Animated GIFs are a rare example, as the GIF format allows frame duration to be adjusted for each frame individually. Non-professional consumer video cameras (like those embedded in older smartphones and cheaper photographic cameras) also often have unstable frame rates that fluctuate depending on brightness, battery level or writing speed. Such video material is ill-suited for standardized media workflows. Professional video editing software will often reject formats with variable frame rates entirely, or require “conversions” and “rectifications” into “proper” fixed-frame rate formats before they can be shown in institutional settings. What is at work in such norming mechanisms in technical standards is also an almost medical concept of normalcy against pathology, correct performance against perversion. In the political economy of images, formats with fluid frame rates like GIF can therefore become deadly weapons, but they can also become a media-technological form of queering.
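To make the format-level point concrete: the GIF container stores a display delay alongside every frame, so a file need not have a single frame rate at all. The sketch below is only an illustration and assumes the Python imaging library Pillow rather than any tool discussed in this chapter; the file name, frames and delay values are arbitrary placeholders.

```python
# Illustrative sketch (assuming the Pillow library): an animated GIF whose
# frames are held on screen for different lengths of time. The GIF89a
# Graphic Control Extension records a delay for every single frame, so the
# resulting file has no fixed frame rate.
from PIL import Image

# Three solid-color frames stand in for arbitrary video frames.
frames = [Image.new("RGB", (64, 64), c) for c in ("black", "white", "black")]

frames[0].save(
    "variable_timing.gif",
    save_all=True,               # write an animation, not a single image
    append_images=frames[1:],    # the remaining frames
    duration=[40, 500, 40],      # per-frame delay in milliseconds
    loop=0,                      # loop indefinitely
)
```

Most fixed-frame-rate containers have no equivalent of that per-frame duration list, which is one way of stating, in code, why standardized editing workflows treat such files as material in need of “rectification.”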
Photosensitive children and adults demonstrate not only one possible method for looking at faulty images productively, but also for pleasurably inhabiting unfamiliar, nauseating and even dangerous spaces by reaching toward things that are already in reach.
References

Ahmed, Sara. 2006. Queer Phenomenology: Orientations, Objects, Others. Durham: Duke University Press.
Ames, Frances R. 1971. “Self-induction” in Photosensitive Epilepsy. Brain 94: 781–798. https://doi.org/10.1093/brain/94.4.781.
Ames, Frances R., and David Saffer. 1983. The Sunflower Syndrome: A New Look at “self-induced” Photosensitive Epilepsy. Journal of the Neurological Sciences 59: 1–11. https://doi.org/10.1016/0022-510X(83)90076-X.
Andermann, F. 1971. Self-Induced Television Epilepsy. Epilepsia 12: 269–275. https://doi.org/10.1111/j.1528-1157.1971.tb04934.x.
Andermann, K., S. Berman, P.M. Cooke, J. Dickson, H. Gastaut, A. Kennedy, et al. 1962. Self-Induced Epilepsy: A Collection of Self-Induced Epilepsy Cases Compared with Some Other Photoconvulsive Cases. Archives of Neurology 6: 49–65. https://doi.org/10.1001/archneur.1962.00450190051007.
Binnie, C.D. 1988. Self-induction of Seizures: The Ultimate Non-compliance. Epilepsy Research. Supplement 1: 153–158.
Binnie, C.D., C.E. Darby, R.A. De Korte, and A.J. Wilkins. 1980. Self-induction of Epileptic Seizures by Eyeclosure: Incidence and Recognition. Journal of Neurology, Neurosurgery, and Psychiatry 43: 386–389.
Birnstiel, Klaus. 2016. Unvermögen, Technik, Körper, Behinderung. Eine unsystematische Reflexion. In Parahuman. Neue Perspektiven auf das Leben mit Technik, ed. Karin Harrasser and Susanne Roessiger, 21–38. Köln: Böhlau.
Bower, Brian D. 1963. Television Flicker and Fits. Clinical Pediatrics 2: 134–138. https://doi.org/10.1177/000992286300200308.
Butler, Judith. 2004. Violence, Mourning, Politics. In Precarious Life: The Power of Mourning and Violence, 19–49. London: Verso.
Caldwell, John Thornton. 1995. Televisuality: Style, Crisis, and Authority in American Television. New Brunswick, NJ: Rutgers University Press.
Canales, Jimena. 2011. “A Number of Scenes in a Badly Cut Film”: Observation in the Age of Strobe. In Histories of Scientific Observation, ed. Lorraine Daston and Elizabeth Lunbeck, 230–254. Chicago: The University of Chicago Press.
Cartwright, Lisa. 1995. Screening the Body: Tracing Medicine’s Visual Culture. Minneapolis: University of Minnesota Press.
Chao, Dora. 1962. Photogenic and Self-induced Epilepsy. The Journal of Pediatrics 61: 733–738. https://doi.org/10.1016/S0022-3476(62)80346-1.
Cohen, Oren, Yaron River, and Oded Abramsky. 1999. Seizures Induced by Frustration and Despair Due to Unresolved Moral and Political Issues: A Rare Case of Reflex Epilepsy. Journal of the Neurological Sciences 162: 94–96. https://doi.org/10.1016/S0022-510X(98)00293-7.
Crone, Bridget. 2017. Flicker Time and Fabulation: From Flickering Images to Crazy Wipes. In Futures and Fictions, ed. Henriette Gunkel, Ayesha Hameed, and Simon O’Sullivan, 268–294. London: Repeater Books.
Edelman, Lee. 2004. No Future: Queer Theory and the Death Drive. Durham, NC: Duke University Press.
Elsaesser, Thomas. 2006. Early Film History and Multi-Media: An Archaeology of Possible Futures? In New Media, Old Media: A History and Theory Reader, ed. Wendy Hui Kyong Chun and Thomas Keenan, 13–25. Abingdon/New York: Routledge.
Ferrie, C.D., P. De Marco, R.A. Grünewald, S. Giannakodimos, and C.P. Panayiotopoulos. 1994. Video Game Induced Seizures. Journal of Neurology, Neurosurgery, and Psychiatry 57: 925–931.
Foucault, Michel. 1995. Discipline & Punish: The Birth of the Prison. Translated by Alan Sheridan. New York: Vintage Books.
Geiger, John. 2003. Chapel of Extreme Experience: A Short History of Stroboscopic Light and the Dream Machine. New York: Soft Skull Press.
Geimer, Peter. 2010. Bilder aus Versehen: eine Geschichte fotografischer Erscheinungen. Hamburg: Philo Fine Arts.
Gotman, Kélina. 2012. Epilepsy, Chorea, and Involuntary Movements Onstage: The Politics and Aesthetics of Alterkinetic Dance. About Performance 11: 159–183.
Gowers, W.R. 1881. Epilepsy and Other Chronic Convulsive Diseases: Their Causes, Symptoms and Treatment. London: J & A Churchill.
Green, Joseph B. 1966. Self-Induced Seizures: Clinical and Electroencephalographic Studies. Archives of Neurology 15: 579–586. https://doi.org/10.1001/archneur.1966.00470180019002.
Halberstam, Judith. 2011. The Queer Art of Failure. Durham: Duke University Press Books.
Harding, G.F.A., P.M. Jeavons, and A.S. Edson. 1994. Video Material and Epilepsy. Epilepsia 35: 1208–1216. https://doi.org/10.1111/j.1528-1157.1994.tb01791.x.
Harley, R.D., H.W. Baird, and R.D. Freeman. 1967. Self-induced Photogenic Epilepsy. Report of Four Cases. Archives of Ophthalmology (Chicago, Ill.: 1960) 78: 730–737.
Harrasser, Karin. 2013. Körper 2.0: über die technische Erweiterbarkeit des Menschen. Bielefeld: Transcript.
Harris, Jane. 2000. Psychedelic, Baby: An Interview with Pipilotti Rist. Art Journal 59: 69–79. https://doi.org/10.2307/778122.
Hutchison, J.H., F.H. Stone, and J.R. Davidson. 1958. Photogenic Epilepsy Induced by the Patient. The Lancet 1: 243–245.
Indie Gamer Chick. 2013. The Epilepsy Thing. Indie Gamer Chick. Accessed 28 November 2017. https://indiegamerchick.com/2013/08/06/the-epilepsything/.
Joyrich, Lynne. 1996. Re-viewing Reception: Television, Gender, and Postmodern Culture. Bloomington: Indiana University Press.
Julin, Richard, and Tessa Praun, ed. 2007. “People who feel that talking about art ruins it should stop reading now”—A Conversation between Pipilotti Rist and Richard Julin, Chief Curator Magasin 3 Stockholm Konsthall. In Pipilotti Rist—Congratulations!, 11–145. Baden: Lars Müller Publishers.
Kammerer, Th. 1963. Süchtiges Verhalten bei Epilepsie: Photogene Epilepsie mit selbstinduzierten Anfällen. Deutsche Zeitschrift für Nervenheilkunde 185: 319–330. https://doi.org/10.1007/BF00243681.
Keightley, Keir. 1996. “Turn It down!” She Shrieked: Gender, Domestic Space, and High Fidelity, 1948–59. In Popular Music, vol. 15, 149–177. Cambridge: Cambridge University Press.
———. 2003. Low Television, High Fidelity: Taste and the Gendering of Home Entertainment Technologies. Journal of Broadcasting & Electronic Media 47: 236–259.
Kunzel, Regina. 2017. Queer History, Mad History, and the Politics of Health. American Quarterly 69: 315–319. https://doi.org/10.1353/aq.2017.0026.
Lane, Rebecca. 2003. Guilty Pleasures: Pipilotti Rist and the Psycho/Social Tropes of Video. Art Criticism 18: 22–35.
Larkin, Brian. 2008. Signal and Noise: Media, Infrastructure, and Urban Culture in Nigeria. Durham: Duke University Press Books.
MacDonald, Scott. 2006. Tony Conrad. On the Sixties. In A Critical Cinema 5: Interviews with Independent Filmmakers, 55–76. Berkeley: University of California Press.
Marks, Laura U. 2002. Touch: Sensuous Theory and Multisensory Media. Minneapolis: University of Minnesota Press.
McRuer, Robert. 2017. Any Day Now: Queerness, Disability, and the Trouble with Homonormativity. In Disability Media Studies, ed. Elizabeth Ellcessor and Bill Kirkpatrick, 272–292. New York: NYU Press.
McRuer, Robert, and Anna Mollow, eds. 2012. Sex and Disability. Durham: Duke University Press Books.
Metzl, Jonathan, and Anna Kirkland. 2010. What Is Health and How Do You Get It? In Against Health: How Health Became the New Morality, ed. Jonathan Metzl and Anna Kirkland, 15–25. New York: NYU Press.
ter Meulen, B.C., D. Tavy, and B.C. Jacobs. 2009. From Stroboscope to Dream Machine: A History of Flicker-Induced Hallucinations. European Neurology 62: 316–320. https://doi.org/10.1159/000235945.
Mills, Mara. 2011a. Do Signals Have Politics? Inscribing Abilities in Cochlear Implants. In The Oxford Handbook of Sound Studies, ed. Trevor Pinch and Karin Bijsterveld, 320–346. Oxford: Oxford University Press.
———. 2011b. On Disability and Cybernetics: Helen Keller, Norbert Wiener, and the Hearing Glove. differences 22: 74–111. https://doi.org/10.1215/10407391-1428852.
Mills, Mara, and Jonathan Sterne. 2017. Afterword II: Dismediation—Three Proposals, Six Tactics. In Disability Media Studies, ed. Elizabeth Ellcessor and Bill Kirkpatrick, 365–380. New York: NYU Press.
Muñoz, José Esteban. 2009. Queerness as Horizon: Utopian Hermeneutics in the Face of Gay Pragmatism. In Cruising Utopia: The Then and There of Queer Futurity, 19–32. New York: NYU Press.
Ng, Beng-Yeong. 2002. Psychiatric Aspects of Self-induced Epileptic Seizures. Australian and New Zealand Journal of Psychiatry 36: 534–543. https://doi.org/10.1046/j.1440-1614.2002.01050.x.
Payne, Robert. 2018. Lossy Media: Queer Encounters with Infrastructure. Open Cultural Studies 2: 528–539. https://doi.org/10.1515/culture-2018-0048.
Phelan, Peggy, Elisabeth Bronfen, and Hans Ulrich Obrist. 2001. Pipilotti Rist. London: Phaidon Press.
Roberts, Dorothy. 2010. The Social Immorality of Health in the Gene Age: Race, Disability, and Inequality. In Against Health: How Health Became the New Morality, ed. Jonathan Metzl and Anna Kirkland, 61–71. New York: NYU Press.
Robertson, E. Graeme. 1954. Photogenic Epilepsy: Self-precipitated Attacks. Brain 77: 232–251. https://doi.org/10.1093/brain/77.2.232.
Ross, Christine. 2000. Fantasy and Distraction: An Interview with Pipilotti Rist. Afterimage 28: 7–9.
———. 2001. Pipilotti Rist: Images as Quasi-objects. n.paradoxa 7: 18–25.
Schäfer, Armin. 2015. ‘Literatur im Aufschreibesystem von 1800 ist ein Simulakrum von Wahnsinn’. Anmerkungen zu einer These von Friedrich Kittler. Metaphora. Journal for Literary Theory and Media 1: III-1–III-16.
Schneider, Alexandra. 2003. Die Ankunft von Tante Erica. Wie Familienfilme aus den dreißiger Jahren anfangen. montage AV 12: 119–129.
Sharma, Alok, and Duncan Cameron. 2007. Reasons to Consider a Plasma Screen Television—Photosensitive Epilepsy. Epilepsia 48: 2003. https://doi.org/10.1111/j.1528-1167.2007.01165_5.x.
Slyce, John. 2009. Adventures Close to Home. In Elixir: The Video Organism of Pipilotti Rist, 47–60. Rotterdam: Museum Boijmans van Beuningen.
Smith, Marquard. 2008. Phenomenology, Mass Media and Being-in-the-World. Interview with Vivian Sobchack. In Visual Culture Studies: Interviews with Key Thinkers, 115–130. London: SAGE.
Star, Susan Leigh. 1990. Power, Technology and the Phenomenology of Conventions: On Being Allergic to Onions. The Sociological Review 38: 26–56. https://doi.org/10.1111/j.1467-954X.1990.tb03347.x.
Stiegler, Bernd. 2001. Philologie des Auges: Die photographische Entdeckung der Welt im 19. Jahrhundert. München: Fink.
Strauven, Wanda. 2015. The (Noisy) Praxis of Media Archaeology. In At the Borders of (Film) History: Temporality, Archaeology, Theories, ed. Alberto Beltrame, Giuseppe Fidotta, and Andrea Mariani, 33–41. Udine: Forum.
Kasteleijn-Nolst Trenité, Dorothée G.A., Gerrit van der Beld, Ingrid Heynderickx, and Paul Groen. 2004. Visual Stimuli in Daily Life. Epilepsia 45 (Suppl 1): 2–6.
Wheatley, Helen. 2016. Spectacular Television: Exploring Televisual Pleasure. Reprint edition. London: I.B. Tauris.
Whitty, C.W.M. 1960. Photic and Self-induced Epilepsy. The Lancet 275: 1207–1208. https://doi.org/10.1016/S0140-6736(60)91095-3.
Young, Iris Marion. 2005. On Female Body Experience: Throwing Like a Girl and Other Essays. Oxford: Oxford University Press.
CHAPTER 7
Conclusion: Tracing Compression
Our visual world is replete with errors and losses. What I have proposed in this book is that we can change how we think of such visual failures. Instead of regarding them as an irritating inconvenience or an inevitable symptom of decay, we can value them as companions. Oftentimes, these traces can teach us about the material practices and techniques that let images travel around the world.

When I set out to write this book, it was a project about small visual traces of decay. I had no more than an incipient hunch that such traces could be spun into new narratives of media history. It was not a book about compression, and certainly not about bibliography, mathematics, signal processing or neurology. But the traces I was encountering seemed to be repeatedly hinting at compression as one of the most fundamental mechanisms of moving image culture, one that shapes nearly every piece of video ever made, leading me down bewildering rabbit holes I had never anticipated. I ended up learning and writing about calculus and harmonic analysis, about psychiatry and epileptic disorders, and about the history of bookmaking. It seems to me that in order to begin grasping the richness and complexity of the culture and technology of the moving image, we may need to expand our vision of its history to include these fields and practices. Such an expansion was not entirely planned, but it was the narrative that seemed to materialize from all the marks of interlacing, blocking, flicker and whatnot. I call the process of telling this story media epigraphy.
The concept of media epigraphy is the central proposition that I make in this book. Media epigraphy is a method, a way of sensing, a particular way of orienting oneself towards media's past, a way of looking at—and looking for—traces in moving images. It is the attempt to ply traces so as to formulate new questions about media culture and discover novel methods of inquiry: think transversally across different scales of analysis, bring together disparate disciplinary regions and ask how they can inform new understandings of media.

Compression is an omnipresent media technique that tends to leave prolific markings. Video compression is a very technical subject and can easily become opaque and inaccessible to anyone not versed in mathematics, electrical engineering and signal processing. But there is a different way to approach it, to look at it through the lens of culture and mediation. I see compression as an industrial process, operating like a powerful engine at the interstice of science, the body, spectatorship, aesthetics and technology. Techniques of compression may seem abstract, but in every context, they should be considered on thoroughly material terms. They are enacted through physical procedures, bodily gestures, and concrete sensuous actions that inscribe themselves in numerous aspects of life, culture and society. Compression and its traces are imbricated in the emergence of scientific concepts; they irritate and structure processes of knowledge formation and technological standardization; they provoke new regimes of looking, propagate through the physical environment and affect our bodies.

Much has recently been made of the realization that visual—or rather, computational—culture has been slipping beyond human perception. As practices of image-making, along with much of political governance, recede behind a thickening smog of codecs and algorithms, it is tempting to relent to an ill-boding anticipation of human insignificance in the face of microtemporal, planetary and ubiquitous media processes. But there is an alternative: to insist on the weight and presence of all that is sensuous; to relish it, even. Two key threads seem to emerge when we heed the materiality and corporeality of compression. One is its generative potential. The other is the rich sensory and epistemic possibilities that are born out of encounters with compressive media, and the many intriguing ways in which it is possible to learn to experience moving images. Instead of emphasizing the representational or "textual" information carried by images, these corporeal encounters foreground their graphical, sensory or mathematical qualities.
Bibliographers in the 1970s developed non-textual ways of looking at historical documents by collating misprints—but the mediatized collating methods introduced new difficulties of reformatting, scaling and registration. The harmonic analyzer gave rise to ringing, a bizarre visual phenomenon which was then incorporated into mathematics and continues to trouble science and visual culture. In magnetic resonance imaging and electroencephalography, media techniques of visualization and observation made possible new medical understandings of the human body, but at the same time produced new interpretive dilemmas. Interlacing solved television flicker, only to introduce its own visual disturbances. And flicker itself—a disturbance resulting from the sampling and compression of moving images—gave rise to entire visual languages, artistic movements, medical fields, legal norms and media practices of violence and pleasure. Compression introduces losses, errors and failures, but those losses also remain generative of new media practices.

Inscriptions of compression extend in many directions. They cross mainstream visual culture and popular entertainment, as well as canonical works of film history and avant-garde video art. To some extent, these traditional boxes in which we categorize moving images remain important. But in order to more fully appreciate what moving images do with us and what we do with them, it seems we also need to look for traces elsewhere and dismantle and think across all sorts of categories of thought: cross boundaries between science and non-science, between popular culture and experimental art, between disability and "normalcy," between macrohistories of infrastructure and standardization and the microtemporal operations of machines. This process may necessitate that we adopt an expansive and generous definition of "moving images." Flicker, ringing, ghosting, combing, blocking—these markings also affect those moving images that are not a fully formed cinema. Images in which representation fails, poor images that are sometimes tiring to look at. Brief moments of flicker and ephemeral hallucinations, short and flashing repetitive animations, encounters with queer materiality like the atmospheric noise on a broken TV, minuscule specks that flash for a brief second before disappearing. Carefully treating such phenomena as the imprints of material media practices and asking where they come from reveals the formation of techniques and routines among such diverse communities of practice as mathematicians, book scholars or neurologists.
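A brief mathematical aside, offered purely as an illustration rather than as part of the book's own derivations, may make the ringing mentioned above more tangible. Take the textbook case of a square wave of height 1 rebuilt from its first N harmonics (an assumption of this sketch, not a signal drawn from the book's case studies): the partial sum overshoots the discontinuity by roughly nine per cent of the jump, and the overshoot refuses to shrink however many harmonics are added.

\[
S_N(x) = \frac{4}{\pi} \sum_{k=1}^{N} \frac{\sin\bigl((2k-1)x\bigr)}{2k-1},
\qquad
\lim_{N \to \infty} \max_{x} S_N(x) = \frac{2}{\pi} \int_{0}^{\pi} \frac{\sin t}{t}\, dt \approx 1.179,
\]

even though the square wave being approximated never rises above 1. This stubborn overshoot, first noticed in the nineteenth century and later named the Gibbs phenomenon, is what resurfaces as the halo of ringing around sharp edges in compressed images.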
In order for media epigraphy to work as I am proposing, the notion of a moving image thus has to accommodate animations like a strobing GIF consisting of only two frames, the "flicker" of two book pages when they are being optically collated by bibliographers, or the hallucinations that precede an epileptic seizure. Some of these embodied ways of looking can serve us as good conceptual methods for approaching the analysis of moving images. The optical collation performed by bibliographers is a method of historical research that takes the body's perceiving capacities seriously and draws on them to create knowledge. The self-induction of seizures by children, while not methodical in the same sense, is also a provocative example of the multiplicity of ways in which one can productively approach moving images, even those that seem damaged, mutilated or defective.

The moving image shaped by compression seems to historically emerge out of infrastructural networks of electrification, but also epistemic and material networks of the formal sciences. The physical infrastructure of media, like shortwave radio towers and the electrical grid, but also more symbolic infrastructure like algorithms, all have important roles to play. Transient and barely perceptible phenomena interlink with short-lived technical devices and experimental and improvisatory bodily events. The large infrastructure touches our bodies on a small scale and affects our senses in myriad ways, not all of which have been fully documented, examined and understood by media theory yet. The methodological challenge is how to bring these disparate domains into view. A certain methodological open-mindedness is necessary in pursuing the objective set by Susan Leigh Star in her appeal that "[w]e need to be able to theorize across the continuum of information infrastructures, from the old, historical, global to the everyday, simple and quintessentially invisible stuff of ordinariness" (Star 1999, 120).

One of my primary goals in writing this book was to reframe discussions about media in terms of formats and techniques. These, I argue, can allow the field to think together epistemic and cultural regions between which there are distant but vital mutual effects still waiting to be discovered. Some of these include scientific fields like calorimetry, industries like software engineering or trades like papermaking. All of them, I believe, are intricately connected to how moving images circulate around the world. Formats and video compression techniques influence the functioning of large configurations of media like the web-based film festival industry or the daily operations of film archives. They also exert effects on the physical world we inhabit, permeating the environment of our planet as heat and our living rooms as flicker. They oscillate in the complex politics of the electromagnetic spectrum as well as in the equally complex politics of the family and the domestic space.
They are present in the unexpected pleasures some people discover in ordinary technical objects like the television screen, but also in malicious and violent attempts to cause harm through media. Human perception is inseparable from technology, infrastructure and standards. The same standards can serve some people perfectly well yet be harmful to others. Formats can shift how we ask questions about media. They can direct our view towards some of the weak spots of media theory. For example, formats can help us grasp the operations of media, culture and science on new terms, unencumbered by pre-formed categories and rigid but often imprecise distinctions like "micro" and "macro," "cinema" and "television," "analog" and "digital." This line of inquiry has brought out some unusual modes of embodiment and forms of engagement with moving images, exemplified by the therapeutic self-induction of epileptic seizures, a practice that finds value and meaning in lossy and faulty images.

Within the cartography of compression, one encounters a multitude of gestures that move and fold, touch and order, measure and calculate. Surface and space play a critical role here. In all cases, compression seems to achieve an effect through physical and spatial transformations—from the folding of books to the grid and block logics of the discrete cosine transform. Critically examining compression techniques, algorithms and formats might thus be as essential to our understanding of moving image culture as looking at the images themselves. Techniques like interlacing and the discrete cosine transform are vital not just to the audiovisual culture of our time. They have persisted across long stretches of media history, much longer than one might assume. They migrate chaotically across many media, enmeshing digital video in a web of mathematical controversies, forgotten algorithms, old computational methods and analog devices like the phototelegraph. Altogether, these impulses seem to suggest that we may need to continue expanding the scope of media-historical inquiry both in time and space.
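The grid and block logic of the discrete cosine transform can be made concrete in a handful of lines of code. The sketch below is purely illustrative and rests on its own assumptions (an 8-by-8 block of grey values containing a single bright edge, the orthonormal DCT-II); it is not taken from any specific codec, but it performs the elementary spatial gesture on which JPEG and, in variant integer forms, most video codecs rely: a small square of pixels is rewritten as a grid of frequency coefficients, most of whose energy collects in a few low-frequency positions.

import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis: each row is a cosine pattern of increasing frequency.
    k = np.arange(n).reshape(-1, 1)   # frequency index
    i = np.arange(n).reshape(1, -1)   # pixel index
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] /= np.sqrt(2.0)           # the constant (DC) row is scaled down
    return m

def block_dct(block):
    # Two-dimensional DCT of one block: transform the rows, then the columns.
    m = dct_matrix(block.shape[0])
    return m @ block @ m.T

# A flat grey block with one bright vertical edge, the kind of detail around
# which blocking and ringing later become visible once small coefficients
# are quantized away.
block = np.full((8, 8), 16.0)
block[:, 4:] = 240.0
print(np.round(block_dct(block), 1))

Discarding or coarsely quantizing the smaller of these coefficients is what makes compression so efficient, and it is also what leaves behind the block boundaries and edge haloes that this book has been reading as traces.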
If the algorithms so crucial to today's streaming platforms have been around since the early nineteenth century, what does that mean for our conception of the history of (digital) moving image media? Given their perduring influence on audiovisual culture, should mathematical fields like harmonic analysis not be brought under the purview of media studies? What critical redefinitions and conceptual calibrations would have to take place in order to make that possible? How would we need to shift the methods and vocabularies we use in investigating media history?

Compression also forcefully foregrounds the spectrum as a relevant domain of attention for anyone interested in moving images. So much of video and digital cinema has to do with spectrality. Images depend both on the manipulation of the frequency spectrum within them and on their movement through the electromagnetic spectrum. Spectral questions are crucial to the aesthetics, production, circulation and preservation of film and television, yet the spectral turn of recent years has not yet resulted in a critical consideration of these matters. Existing media theories of codecs have tended to focus on their aesthetic characteristics, primarily in relation to digital video and its glitches. Perhaps we can rearticulate codecs with a view toward finer gradations: we can recognize a culture and logic of compression not just in digital imaging, but distributed across a wide field of activities that includes such diverse practices as analog television broadcasting, cinematographic techniques of framing, various bodily computing techniques including measuring and drawing, and others. Historicizing those media-technological effects of digitality that extend beyond the limited period of computational media of the last few decades may also facilitate a rethinking of "the digital" at large.

The history of compression and formats is a bricolage, a tangled but exciting warren of practices and technologies. The corner-scanning copying machine envisioned by bibliographers that Xerox never considered making. The mechanical harmonic analyzers and book-based computers used to calculate coefficients of periodic functions, all of which had decisive parts to play in the history of non-electronic computing. Johann Walter's image transfer protocol or Fritz Schröter's phototelegraph, whose cumbersomeness far outweighed their benefits but which still representatively stand in for large conceptual shifts in twentieth-century communication. The myriad of optical, light-emitting, light-projecting and light-transforming devices used in epileptological research, including the 100 Hz cathode ray tube TV—the very last member in the family of mainstream cathode ray tube applications. Some of these devices and technologies are obsolete, others never really existed, yet others still quietly live on at the periphery of science, medicine or commerce. Many of them produce images that move, and thus also have a humble place in the history of the moving image.
Looking at recent technological developments, it would be easy to assume that we are approaching a future in which compression is no longer necessary. After all, Internet connection bandwidths continue to rise and data storage is cheap. And yet, our media are more heavily compressed than they have ever been. "There will be no post-compression age," as Jonathan Sterne has concluded (2012, 231). The exploding number of devices jostling for a slice of the overcrowded spectrum; the exponentially swelling image resolutions and bit depths of our screens; impatient consumers who expect instant access to high-definition content everywhere at all times—all of this has driven the need for compression to unprecedented levels.

Competition around compression algorithms is fierce. Traditional gatekeepers like the MPEG consortium are duking it out with industry upstarts, while giant tech corporations like Netflix, Amazon and Google are banding together to develop new compression standards, in spite of their rivalries in the streaming realm. Betting on the wrong compression format or not procuring the right hardware on time can bring phone manufacturers to their knees. And this game of chess is playing out across multiple geographical and sociopolitical arenas: examples like the Chinese "national" AVS video format or Google's influence on the specifications of the open-source MKV format spring to mind. The geopolitical value of compression remains an open research field waiting to be explored. Many more epigraphies are waiting to be written.
References

Star, Susan Leigh. 1999. The Ethnography of Infrastructure. American Behavioral Scientist 43: 377–391. https://doi.org/10.1177/00027649921955326.
Sterne, Jonathan. 2012. MP3: The Meaning of a Format. Durham: Duke University Press.
Appendix: List of referenced audiovisual works
Arcangel, Cory. Untitled (After Lucier). 2006. Video installation. Digital.
Brakhage, Stan. Mothlight. 1963. 16mm.
Christie, Amanda Dawn. Spectres of Shortwave. 2016. Digital.
Conrad, Tony. The Flicker. 1966. 16mm.
Davis, Paul B. Codec. 2009. Digital. https://vimeo.com/17709560.
Davis, Ted. Quantization in Motion. 2015. Digital. https://vimeo.com/96820483.
Delpeut, Peter. Lyrisch Nitraat. 1991. 35mm.
Elderkin, Nabil. Welcome to Heartbreak. 2009. Music video. Digital. https://youtu.be/wMH0e8kIZtE.
Jackson, Peter. The Hobbit: An Unexpected Journey. 2012. Digital.
Jarman, Derek. Blue. 1993. 35mm/digital.
Jennings, Garth. The Hitchhiker's Guide to the Galaxy. 2005. 35mm.
Lynch, David. Came Back Haunted. 2013. Music video. Digital. https://youtu.be/1RN6pT3zL44.
Morrison, Bill. Light is Calling. 2004. 35mm/digital. https://www.youtube.com/watch?v=yx0HzBiaVn4.
Ōtomo, Katsuhiro. Akira. 1988. 35mm.
Paik, Nam June. Zen for Film. 1965. 16mm.
Riefenstahl, Leni. Olympia. 2008 [1938]. DVD [35mm].
Rist, Pipilotti. (Absolutions) Pipilotti's Mistakes. 1988. VHS.
———. I'm Not the Girl who Misses Much. 1987. VHS.
Rouy, Philippe. Machine to Machine. 2013. Digital.
Sharits, Paul. Epileptic Seizure Comparison. 1976. 16mm.
Talen, Julie. Sitting in a Room. 2008. Digital. https://vimeo.com/2485897.
Watson, Alex. Faking It: The Obviously Dubbed Telephone Ring. 2018. Digital. https://www.youtube.com/watch?v=AxXsIQDafog.
Wenders, Wim. Wings of Desire. 2009 [1987]. Blu-ray [35mm].
West, Kanye. Ni**as in Paris. 2011. Music video. Digital. https://www.youtube.com/watch?v=gG_dA32oH44.
Index
A Absolutions (Pipilotti’s Mistakes), 232–233 Acceleration, of computation time, 132, 155, 157, 162 Advanced Video Coding (AVC) codec, 128 AEG, 103 Aesthetics, 4, 9, 18, 57, 96, 103, 136, 172, 178, 194, 239 clichéd, 59 of incompressibility, 136 precarious, 56 Afterglow, fluorescent screen, see CRT phosphors Ahmed, Nasir, 131 Akira, 40–43, 58, 151 Alexanderson, Ernst, 100 Algorithm marketing, 159 Alterkinetic dances, 231 Amateurs, 48, 64 Analog video, 21, 99, 232 Analytical Theory of Heat, The, 122 Analyzers automated, 180
electroencephalograph frequency, 153 flash pattern, 195 mechanical, 146, 149, 153 tide, 145–146 Animation, 171, 194, 236 Anthropocene, 15 Anticonvulsant medication, 177, 221, 225 Apologia, 175 Apuleius, 175–176 Arcangel, Cory, 58, 130 Archiving, audiovisual, 68, 113 Ardenne, Manfred von, 91, 97 Arriflex camera, 150 Art criticism, late Romantic, 55 Artifacts blocking, 3, 43, 62, 125–126, 128–130, 138–139 combing, 79 ghosting, 5, 102 interlacing, 3, 20, 82, 107, 138 jitter, 92–93, 106, 186, 194 macroblocking, 20, 42, 60 ringing, 3, 130, 140–141, 143, 148 shearing, 54
Aspect ratio, 8, 32–34, 36, 43, 47, 49 of paper, 36 Avant-garde, the, 197, 205, 228 AVI container format, 32 AV1 compression codec, 129 B Bain, Alexander, 82, 85 Baird, John Logie, 96–97, 99 Bakewell, Frederick, 85 Ballard, Randall, 96, 100–102, 105–107 Ballard’s interlacing system, 101–102, 105 Baudot, Émile, 82, 90 BBC, 138–139 Beat Generation, The, 197, 228 Bell Labs, 111 Berlin Radio Exhibition, 100 Bibliography (book studies), 14, 30, 35, 37, 44, 64, 67 Bicêtre Hospital, 203 Bioscop, 104–105 Bitmap, digital, 83 Black-boxing, 240 Black vertical bars, 42, 47, 49–50, 187 Blue (Derek Jarman film), 59 Bôcher, Maxime, 149 Brakhage, Stan, 136, 182 Bredar, James, 171 Brightness increases, screen, 103 British Television Committee, 101 Broadcasters, 49, 193–194, 236 Broadcasting guidelines for flashing images, 193–195 BT.1702, ITU recommendation, 195 BT.601 norm, 5 Burckhardt, Jacob, 48 Burn, Ian, 58 Bush, Vannevar, 147
C Cabaret, 203 Cameron, James, 51 Cannes Film Festival, 79 Carslaw, Horatio Scott, 145 Case file, medical, 217 Cathode ray tube (CRT), 50, 64, 78, 91–92, 103, 111, 183, 186, 195, 225 disappearance of the, 225, 240 Chainlines, 30, 35 Charcot, Jean-Martin, 177, 181, 203–204 Chase, Walter, 181 Christie, Amanda Dawn, 113 Chroma subsampling, 94, 142 CIE (International Commission on Illumination) color space, 5 Cinema experimental, 197, 207, 229 high frame rate (see Frame rates) CinemaScope, 34 Cinematography, conventions of, 135–136 Cine-medical assemblage, 196 Circulation, of moving images, 19, 130, 135 Codec (Paul B. Davis work), 17 Cold War, 159 Collation, optical, 37–42, 64–66, 136 Color perception, 5, 127 Communities, informal, 48 Compression the elusiveness of, 16 lossy, 14, 19, 127, 139, 142 temporal, 54, 191, 196, 241 Compression algorithms, 18, 32, 60, 63, 134–135, 161–163 Compression and reformatting, 14, 48, 67, 69 Compression codecs history and theory of, 109, 110 satirical, 18
Computational efficiency, 131–132, 155 Computational savings, 132, 159 Computation, mechanical, 148 Computation, optical, 159 Computing machines, symbolical, 163 Conrad, Tony, 197–200, 205, 223, 241 Container, 31 Copy Shop (Virgil Widrich film), 58 Corporeality, 6, 54, 130, 172, 234 Counter-forensics, 44 CRT phosphors, 50, 91–92, 102 CRT, see Cathode ray tube D Dadaism, 55 Damage, 35, 50, 58–59, 64 as trace, 35 Datamoshing, 60 Davis, Paul B., 17–18, 61 Davis, Ted, 130 Decay, 9, 59, 61–62 iterative, 58 Deconsexualization, 229 Delpeut, Peter, 61 Dénes, Mihály von, 97 Desire, queer, 221, 222 Deterioration, see Decay Dickson, William, 15 Dietsch, Günther, 153–154 Digitality, 80 Digital switchover, 112 Digitization, 43, 64–65, 68, 93, 181 of written language, 82 Dis/ability, 18, 173, 189, 207, 221, 225, 230 instrumentalization of, 203, 226 in relationship to media technology, 107, 234, 238 Disability media studies, 207, 239
Disability studies, 46, 173, 206 Disc, wax, 99 Discontinuity, 140 Discrete cosine transform (DCT), 36, 53, 126–129, 131–132, 141, 155–157, 161–163 Dismediation, 189, 193, 238, 241 Dispositifs, medical, 143, 186 Distance from the television screen, 92, 188, 221, 235 Disturbances atmospheric, 87 psychosomatic, 230 unanticipated bodily, 188 Divide and conquer (computational technique), 158 Dolby AC-3 sound format, 161 D-1 tape format, 93 Dreamachine, 197, 228, 236 DVD format, 4, 7–8, 33, 79, 129 DV format, 47–48, 161 E Early film comedy, 203 Early television image acquisition, 102 Echo-matic cassette format, 68 Ecocriticism, 13 Economy of stasis, 136 Edison, Thomas, 16 EEG, see Electroencephalograph Eichenwald, Kurt, 169–171, 173, 204 Elderkin, Nabil, 60 Electricity, 87, 103, 104, 138, 176 Electrification, 107, 176, 189, 224 Electroencephalograph (EEG), 176–185, 220 digital, 181 early machines, 180 interpretive difficulties, 179 limitations, 180 split-screen, 184
EMI, 101 Encoding, 29, 133, 135–136, 138, 156 difference, 108 entropy, 90 run-length, 156–157 Encryption, 88, 181 Energy, 133 Engstrom, Elmer, 101 Epidiascope, 154 Epilepsy monitoring units, 184–185 19th century treatments of, 177 relationship with hysteria, 177 warnings, 196, 197, 201, 239 Epileptic Seizure Comparison, 202, 204, 229 Eroticism, 222, 228 Errors addition, 145 aestheticized, 60 field dominance, 94 tracking, 58 transient decoding, 125 typographical, 37 Euler, Leonhard, 125 Events atmospheric, 87 auditory, 172 bodily, 179, 189, 204 special mathematical, 149 F Facticity, of the body, 236 Fading, electromagnetic, 87 Failure aesthetic theories of, 54–57 of the body, 173, 235 as a concept, 62, 206, 233, 241 environmental, 63 in film art, 57, 230
of the image, 186, 232, 238 as an index of time, 59 the pleasure of, 233 in science, 122, 131 the social character of, 80 vital, 230 Familial life, dysfunctional, 226 Fascism, 8, 170 Fast algorithms, 131, 157, 159 Fast Fourier Transform (FFT), 129, 131, 155 Fax machines, 99, 105, 106 single-line, 89 telegraph-based, 85 Federal Communications Commission (FCC), 33 FFmpeg, 53 FFT, see Fast Fourier Transform Film experimental, 21, 137, 183, 198, 205 grain, 136–139 photochemical, 1, 61, 68, 102, 137 structural, 198, 202, 204, 229 widescreen, 47, 50 Filmmaking, experimental, 197, 207, 229 Filters, 143, 145 deinterlacing, 93 low-pass, 93 noise, 153 Financial trading, high-frequency, 113 Finitude, 152 Flicker, 39, 103, 182, 187, 191, 193, 199, 220, 235–236, 238, 239 aversion to, 103, 106 as compression artifact, 171, 173, 188, 224 as form of speculation, 223 intentional use of/as stylistic device, 185, 197–198, 204, 205, 218, 229
of lamps, 103 in neurology/as seizure precipitant, 182, 186–188, 220 reduction of, 92, 101–103, 105, 135 Flicker, The (Tony Conrad film), 197–200, 204, 223, 228, 229, 236, 241 Fluctuations, electrical grid, 104 Fluorescent lighting, 235 Fluxus, 58 Folding, techniques of, 27–29, 33–35, 52, 67, 69, 155 Forensic architecture, 44 Forensics, 44 Format changes, see Reformatting Format protection clause (German law), 32 Formats the concept of, 10, 28, 29, 36 digital, 3, 68, 94 as instruments of cooperation and conflict, 67 mixed, 35 multiplicity of, 35 as practices, 30 television, 31 unidentifiable, 35 universal, 49, 128 Format standardization, 49, 66 Format studies, 139 Format theory, 28, 36 Format war, mechanical and electronic television, 91 Foucault, Michel, 13, 127, 171, 221 Fourier, Joseph, 121, 141, 144, 149 Fourier analysis, 123, 198 digital, 183 Fourier domain, see Frequency domain Fourier series, 141–145, 158 Fourier transform, 128, 157 Frame rates, 97, 99, 101, 183
high, 186, 192, 196, 204 low, 97, 103 in public discourse, 193 standardization of, 191 variable, 241 Frampton, Hollis, 182 Fraunhofer diffraction, 160 French Academy of Sciences, 121 Frequencies, utility, 101–104, 106, 191–192, 224 Frequency domain, 53, 123, 126, 143, 153, 156, 159 Frequency, flicker, 175, 188, 198, 229 Futurism, 55 G Ganglion cells, 109 Gastaut, Henri, 182, 195 Gauss, Carl Friedrich, 155, 158–159 Gaze, neurological, 184–185, 197, 217, 219 Gender, 219, 224–227, 233, 237 General Electric, 100 Generation loss, 59 Geneva Frequency Plan, 97 Geopolitics, of standards, 192 German Radio and Television Exhibition, 93 Gestures, spatial and physical, 17, 31, 52, 84, 132, 149, 153–155, 162, 205–206, 218 Gibbs, Josiah Willard, 141 Gibbs phenomenon, 140, 142–144, 152 GIF format, 170–172, 241 GIFs, auto-playing, 239 Ginsberg, Allen, 236 Ginzburg, Carlo, 29, 44, 50, 55 Glance theories, 227 Glitch art, 17–18, 59–60 commodification of, 60
Glitches, 15, 18, 56, 59–61 Goebel, Gerhart, 91 Gowers, William, 176 Graphic method, the, 178 Grauer, Victor, 205 Greenhouse effect, 122 Grids, 18, 32, 82–85, 128 Guffey, George R., 65 Guild, John, 6 Gysin, Brion, 197 H Halftone reprography, 82 Hallucinations, 197–198, 236 Harmonic analysis, 180 Harmonic analyzers, 145, 155 Hart, Samuel L., 97–98 Hate crime, 169 Health, as a concept, 229 Heat, 121, 122, 132–135, 163 Heat equation, 122 Heat, propagation of, 21, 121–122 Hell, Rudolf, 89–90 Hellschreiber, 89–90 High-definition television, 109 High-frequency visual patterns, 93, 136 Hinman, Charlton, 38 Hinman Collator, 38–39, 65 Historiography, 152, 153, 158 Hitchhiker’s Guide to the Galaxy, The, 46–47 Hobbit, The, 192 Home movies, aesthetic language of, 233 Hot Town Music-Paradiso, 6 Hubbard, Henry, 190 Huffman coding, 156 Human senses, plasticity, 98 Human vision, sensitivity of, 109–110, 127, 142–143, 156
Hummel, Ernest A., 85 Hybridity of media, 9, 13, 34, 59, 68, 81, 102, 240 of television, 104 Hysteria, 177, 181, 203, 230 Hysteroepilepsy, 177 I Iconoscope, 81, 101–102 Image quality, influence on epileptic seizures, 194 Image retention, 92, 97, 107 Images non-representational, 157, 159, 172 strobing, 169 widescreen, 33, 110 Image transmission, early digital, 82 Immortality, mythologies of, 152 Imperfections, 8, 54–56, 124 Incompatibilities, 136 format, 8, 48–49, 61, 65, 67, 95, 241 frame rate, 11, 49, 67, 102, 183 of temporal regimes, 140 Infinity, notions of, 124, 152, 153 Infrastructure, 19, 54, 67, 163, 172, 189, 223 algorithmic, 161–163 electrical, 12, 21–22, 102–103, 106, 189, 237 Inscription, relationship to signification, 85, 89 Inscriptions, ancient, 44–45 Institut d’Égypte, 121 Instruments, technical in science, 149 Interference patterns, 160 Interlacing aversion to, 80 early patents, 97–100 effects on contemporary video, 93
Interlacing patents, various, 97, 99 Interline twitter, see Artifacts, jitter Intermittent photic stimulation, 182, 185, 198 International Commission on Illumination (CIE), 5 International Electrical Congress, 176 International Exhibition of Electricity, 176 International Federation for EEG and Clinical Neurophysiology, 185 International League Against Epilepsy, 225 International Telecommunication Union, 34, 195 Ionosphere, 86 J Jackson, Peter, 192 Jarman, Derek, 59 Jay-Z, 200 JPEG image format, 32, 128, 141 JPEG2000 image format, 113, 129, 141 Judder, 7, 135, 205 Judgment Day, 51 K Karolus, August, 100 Kelvin, Lord, 124, 145–146, 155 Kinetoscope, 15 Kinne, Erich, 97 Kittler, Friedrich, 82, 121, 123, 129, 153 Knowledge, sensory, 37 Kubelka, Peter, 205 Kubrick, Stanley, 191
L Lagrange, Louis, 123, 141, 144, 149 Lamps, fluorescent, 235 Latency, 86, 101, 113 LCD screens, 78, 114, 173, 186, 225–226, 235 Letterboxing, 47 Lichtenberg, Georg Christoph, 36 Light is Calling, 61–62 Lindstrand, Gordon, 39 Linotype, 82 Living Brain, The (William Grey Walter book), 180, 183, 197–198, 228 Loewe, 102 Lucier, Alvin, 59 Lye, Len, 205 Lynch, David, 200 Lyrisch Nitraat, 61 M Machine to Machine, 62–63 Macroblocks, 19, 128, 136 Magic lantern, 105 Magic, separation from medicine, 176 Magnetic resonance imaging (MRI), 143 Magnetic tape, 183 Mains frequencies, see Utility frequencies Marey, Étienne-Jules, 178 Massachusetts Institute of Technology, 183 Materiality, 52, 57, 66, 92, 114, 134, 140, 145
Matsumoto, Toshio, 205 May, Joseph, 176 Measurement, 6, 29, 126, 181 Media archaeology, 21–22, 45, 52–54, 130, 162, 206, 223, 235, 237 materialist, 52, 162 Media convergence, 80 Media environments, 189, 191, 195, 207, 234, 239 Media epigraphy difficulties, 11 principles of, 11, 37, 42, 45–46, 52, 54, 66 Media infrastructure, 195 Media practices, 173, 217 Media, spatial, 155 Media technique, of medical surveillance, 184 Media techniques, in early neurological research, 178–179 Media violence, 172 Medicalization of vision, 190 Mekas, Jonas, 198 Meme, flickering GIF, 204, 207 Menkman, Rosa, 130 Mergenthaler, Ottmar, 82 Michelson, Albert, 144–145 Microfilm, 65–66 Microhistory, 20, 50 Microtemporality, 130, 156, 162 Mohan Rao, Ram, 131 Moiré patterns, 96 Montez, Mario, 199–200 Morrison, Bill, 62 Morse, Samuel, 82 Morse code, 83–84, 99 Morse telegraph, 84, 90 Mothlight, 136–138 Motion compensation, 111, 135–136, 152 Movietone sound-on-film format, 191 Moving images
electrically produced, 237 history of, 21, 105, 235 MPEG, 139 MPEG-4 AVC, 128 MPEG-1 format standard, 113 MPEG standards family, 128, 136, 161 MXF format, 113 Myograph, 178 N National Television System Committee (NTSC) television standard, 33, 93, 104, 143, 188, 191 Neuroimaging, 124, 143 Neurology, 176, 180–181, 217, 219, 221, 225, 229, 239 influence on film history, 197–207 media practices of, 182 Neuroscience, 109 computational, 181 Nipkow, Paul Gottlieb, 176 Nipkow disk, 81, 108 Noé, Gaspar, 200 Noise, 16, 153, 180, 221, 223 politics of, 223 visual, 136, 228 Non-compliance, with norms or standards, 13, 219, 227 Non-image, 228 Normalcy, 5–6, 238, 241 Norms electrical, 102 flicker, 193–196 social, 229, 233 technical in television, 93 Notational iconicity, 90 O Obsolescence, 81, 186 Olympia (Leni Riefenstahl film), 151
Optical collator, see Collation, optical Optical machines, 46, 66, 154, 183 Optical toys, pre-cinematic, 190 Ordering processes, 17–19, 31, 52, 127, 155 Ōtomo, Katsuhiro, 40 P Paik, Nam June, 58, 183, 198 Painting, fin-de-siècle, 203 Paiva, Adriano de, 108 PALplus, 32–34, 110 Paper as a medium, 27–30, 34–36, 49, 58, 67, 69 wove, 35 Papermaking, 30, 35, 37, 67 Paranoia, nuclear military, 159 Parsifal, 49 PBD file format, 18 Perception, as historical method, 39 Perception of movement, early research on, 191 Performance drag, 199 medical, dramatic conventions of, 204 Persistence of vision, 191 Phase Alternating Line (PAL), 94, 188, 191 Phonograph records, 99 Photosensitive epilepsy general characteristics of, 174 history of, 175, 187 relationship with audiovisual media, 181 Photosensitive seizures, incidence, 188 Phototelegraph, Schröter's, 81, 86–92, 106 Phototelegraphy, 77 Pillarboxing, 47
Pleasure, 172, 196, 219, 222–224, 227, 229–230, 233–234, 240, 241 Pocket Rocker audio cassette format, 68 Pokémon panic, 193–195 Politics sensory, 172, 238 spectral, 112–114 Poor images, 19 Potassium bromide, 177 Power, patriarchal, 174, 224–226 Predictability, 139 Prediction, motion, 136 Printing industry, mechanization of, 34 Projection, large-format, 100 Propaganda, national-socialist, 100 Properties, retentive, 92, 97, 107 Psychiatry, 176, 217, 219, 229–230 Pulldown, 2:2:3:2:3, 4 Purity, 8, 9, 40, 102 Q Quad tapes, 99 Queer audiovisual culture, 199 Queer audiovisual practices, 237–242 Queer cinema, 198–199 Queering, 40, 229 Queer materiality, 188–189, 192, 199–200, 206, 220, 226, 237–238 Queer media practices, 223, 229, 241 Queer phenomenology, 206, 234, 238 Queer studies, 206 Queer subversion, 199 Queer utopia, 223 Quested, Andy, 139
R Radio broadcasting, 31 Radio Corporation of America (RCA), 100, 101 Ray, Man, 205 Reformatting, 4, 8, 36, 48–50, 65, 67–68, 139, 155, 241 destructive, 8, 101 resistance to, 64 traces of, 9, 46, 49, 64 Relationships between bodies and technology, 189, 221 proxemic, 187–188, 221 Representation pictorial, 53, 223, 228 political, 223 Resolution, 151 dual, 109 of early television, 92 Ringing phenomenon, 20–21, 140–145, 148–149, 163 Rist, Pipilotti, 230–233, 237, 238 Rouy, Philippe, 62–63 Ruskin, John, 55 S Salpêtrière asylum, 181, 203 Sanabria, Ulises Armand, 96, 106–107 Scanning, 58, 85, 87, 91–93, 102, 176 automatic channel, 240 helical, 99 Schröter, Fritz, 77, 81, 85–92, 96–97, 100, 106–109, 111 Screen sizes, 100, 132 SECAM, 188 Seizures absence, 184, 219, 220, 226 causes of, 174, 187, 239 generalized tonic-clonic, 220, 226–227
incidents, 193–195, 205, 239 light-induced, 181, 202 self-induction, 218–219, 221–225, 227–230, 237, 240 television-induced, 193–194, 220 therapeutic, 225 warnings, 196, 197, 201, 239 Seldes, Gilbert, 31 Selenium, 176 Sensing, bodily ways of, 14, 66, 84 Sensory-technological coupling, 84 Sexuality, 177, 219, 222, 230 Sharits, Paul, 197, 202 Sharpness, 143, 152 low, 58 perceived, 92 Shipton, Harold, 183 Shortwave radio, 86, 113–114 Shortwave telegraphy, 87, 106, 114 Showscan film format, 196 Shutter solid, 105 triple-bladed, 92 Signal processing, 129, 155, 157 Sitting in a Room (Julie Talen work), 59, 256 16mm film, 138 Skladanowsky brothers, 105, 111 Slow-motion, 183 Smelting, 14 Smith, Gerald A., 64 Smith, Jack, 199 Smith, Willoughby, 176 SMPTE, see Society of Motion Picture and Television Engineers Smythies, John Raymond, 183, 236 Society of Motion Picture and Television Engineers (SMPTE), 31, 190–191 Sommerville, Ian, 197 Sound films, 100–101, 105 Space, politics of, 192, 234 Spatiality, 155–156
Spectatorship, 188 various theories of, 227 Spectral turn, 113 Spectres of Shortwave, 113 Spectrum electromagnetic, 86, 114, 155 frequency, 113, 127 limited broadcast, 32, 139 the politics of the, 113, 241 shortwave radio, 77, 86 Standard-definition television, 33, 110 Standardization of bibliographical methods, 65 in cinema and television, 190 resistance to, 9, 224, 240 spectrum channel width, 97 tendencies in MPEG encoder, 136 Standards, 6, 9, 80, 102, 172–173, 238 broadcasting, 32, 102–103, 186, 188 colorimetric observer, 5 compression codec, 133 electrical (see Frequencies, utility) frame rate, 190 German 180-line television, 93 as potential, 237 in relationship to the body, 192, 205, 239 television, 32, 102–103, 186, 188, 191 universal, 196, 241 Stasis, 139 Stephenson, William, 97 Steyerl, Hito, 19, 151, 223 Stratton, Samuel, 145 Structural film, 204, 228 Syberberg, Hans-Jürgen, 49 Synchronization, 104, 182 System, human sensory, 92, 127, 191, 196, 198
T Tables for Harmonic Analysis, 155, 158 Talen, Julie, 59 Tambellini, Aldo, 205 Technicity, 189 Techniques, 106 analytical, 29, 123, 180 cultural, 84, 180 descriptive Galilean, 29 diagnostic, 179, 184–185, 198 disorientation, 205 embodied of sensing, 52, 84 filmic, 183, 197 history of, 30–31 image-processing, 82 infrastructural, 95 inscription, 84, 144 inscription optimization, 158 mathematical, 21, 33, 123, 159, 163, 181 mediatic, 230 spatial, 158 symbolical, 149 visual medical, 203 Technological standards, in television, 221 TEKADE, 102 Telediagraph, 85 Telefunken, 77, 81, 86–87, 91, 93, 102, 114 Telegraphy, 89 Television analog, 32, 34, 80, 111, 129, 142, 162 closed-circuit, 183–184 early, 81, 92, 102, 104, 107 epilepsy, 187, 220 history of, 239 mechanical, 91 pioneers, 77, 99 widescreen, 32–34, 110
Television research, 87, 110, 114 interwar, 111 Temporality, 112, 130, 140, 162, 200 Thomson, William, 124, 145–146, 155 Toposcope, 183 Torque amplifiers, mechanical, 147 Touch, 171, 192 Traces, 44, 53 the phenomenality of, 64 theories of, 45, 52 Transforms, mathematical, 126, 129 Treatment, antiepileptic, 219 Trigonometric functions, 122, 149 Trumbull, Douglas, 191, 196–197, 202, 205 Tscherkassky, Peter, 205 Tukey, John, 155 Twitter, 207, 239 Typesetting, 82 U Undersea telegraph cables, 176 University of Dortmund, 33 Untitled (After Lucier) (Cory Arcangel work), 58 US National Bureau of Standards, 190 Utility frequencies, 101–104, 191 V Vail, Alfred, 82 Vertical videos, 49 VHS format, 42, 58 VHS look, 58 Video Compression Study #4 (Paul B. Davis work), 18 Vine video format, 68 Violence, 130, 170–173, 196, 200, 205–206, 225, 229, 239, 241 media and, 171
representations of, 172, 205 Visual epistemology, 39 Visuality, 48, 132, 139, 200 fin-de-siècle, 85, 203 haptic, 228 1960s and 1970s Western, 204 queer, 223 VP9 compression codec, 129 W Walter, Johann, 82, 85 Walter, William Grey, 180, 182–183, 186, 193, 197, 203 Wardrobe, newscasters, 96 Warhol, Andy, 57, 205 Waste, 13, 15 Weimar Republic, 77 Welcome to Heartbreak (music video), 18, 60 Wenders, Wim, 149–152 West, Kanye, 18, 60, 200 Westinghouse, 103 Widescreen Signaling (WSS), 3, 7, 33 Widrich, Virgil, 58 Wilbraham, Henry, 145, 148 Williams, Danny, 57 Wings of Desire, 149–153 Wirephoto machines, 77 Wright, William David, 6 Writing, the graphicality of, 44, 90 X Xerox Book, 58 Xerox Corporation, 64–65 Z Zen for Film, 58 Zipperer, Ludwig, 154 Zworykin, Vladimir, 96, 101