Visual Ecology
Visual Ecology
Thomas W. Cronin, Sönke Johnsen, N. Justin Marshall, and Eric J. Warrant
Princeton University Press · Princeton and Oxford
Copyright © 2014 by Princeton University Press

Published by Princeton University Press, William Street, Princeton, New Jersey
In the United Kingdom: Princeton University Press, Oxford Street, Woodstock, Oxfordshire OX TW
press.princeton.edu

Photograph of robber fly (Holcocephala sp.) © Thomas Shahan

All Rights Reserved

Library of Congress Cataloging-in-Publication Data
Cronin, Thomas W., –
Visual ecology / Thomas W. Cronin, Sönke Johnsen, N. Justin Marshall, and Eric J. Warrant.
pages cm
Includes bibliographical references and index.
ISBN ---- (hardcover)
1. Vision. 2. Animal ecology. 3. Physiology, Comparative. 4. Eye—Evolution. I. Title.
British Library Cataloging-in-Publication Data is available

This book has been composed in Minion Pro
Printed on acid-free paper. ∞
Printed in the United States of America
To John Lythgoe and William “Mac” McFarland, two greatly missed pioneers in visual ecology
Contents

List of Illustrations ix
Preface xix
1 Introduction 1
2 Light and the Optical Environment 10
3 Visual Pigments and Photoreceptors 37
4 The Optical Building Blocks of Eyes 66
5 The Eye Designs of the Animal Kingdom 91
6 Spatial Vision 116
7 Color Vision 146
8 Polarization Vision 178
9 Vision in Attenuating Media 206
10 Motion Vision and Eye Movements 232
11 Vision in Dim Light 262
12 Visual Orientation and Navigation 289
13 Signals and Camouflage 313
Glossary
References
General Index
Index of Names
Illustrations

Figure 1.1 Cloisonné vase with sunfish, made using gold wire and enamels. 2
Figure 1.2 Female cardinal evaluating a male in sunlight against natural foliage. 4
Figure 1.3 Fossilized compound eyes of two trilobite species. 5
Figure 1.4 A view of visual evolution as a punctuated process, advancing through a series of steps that represent significant leaps in visual capabilities. 6
Figure 1.5 Evolution of modern opsins. 7
Figure 2.1 The elephant hawkmoth (Deilephila elpenor) as viewed during late twilight and on a moonless night, showing the extreme chromatic changes that can occur within 30 minutes. 11
Figure 2.2 The electromagnetic spectrum. 12
Figure 2.3 Daylight spectrum binned by equal wavelength intervals and by equal frequency intervals. 14
Figure 2.4 The three most common light measurements: vector irradiance, radiance, and scalar irradiance. 15
Figure 2.5 Solar irradiance outside the atmosphere and at the Earth’s surface. 16
Figure 2.6 Fraction of irradiance at Earth’s surface near noon due to skylight alone. 17
Figure 2.7 Downwelling vector irradiance and scalar irradiance (at 560 nm) as a function of solar elevation. 18
Figure 2.8 Illuminance due to moon and sun at various times of day. 19
Figure 2.9 Irradiance on a North Carolina beach when sun is 10.6° below the horizon (late nautical twilight). 19
Figure 2.10 Landscape photographed under full moonlight and spectral reflectance of the moon. 20
Figure 2.11 Lunar irradiance (normalized to 1 for a full moon) and fraction of moon illuminated as a function of degrees from full moon. 21
Figure 2.12 Landscape under starlight and reconstructed downwelling irradiance spectrum of moonless night sky. 22
Figure 2.13 A green aurora combined with an unusually intense red aurora. 23
Figure 2.14 Light pollution in New York City and downwelling irradiance spectrum dominated by light pollution on a cloudy night at Jamaica Pond in Boston, MA. 24
Figure 2.15 Human-based chromaticities (i.e., perceived color) of daylight, sunset, twilight, and nocturnal irradiances. 25
Figure 2.16 Spectra of downwelling irradiance in a forest near Baltimore, MD. 26
Figure 2.17 The first two principal components (statistical measures of major sources of variation) of the spectra shown in figure 2.16. 26
Figure 2.18 The absorption and scattering coefficients of the clearest natural waters, along with their sum. 27
Figure 2.19 Downwelling irradiance in the equatorial Pacific modeled from vertical profiles of absorption, scattering, and chlorophyll concentration. 28
Figure 2.20 Radiance as a function of viewing angle and depth in equatorial Pacific waters with the sun 45° above the horizon. 30
Figure 2.21 Peak wavelength versus emission width for the light emissions from deep-sea benthic and mesopelagic species. 33
Figure 2.22 Examples of the various uses of bioluminescence. 34
Figure 2.23 Ambient light images of a deep-sea volcanic vent at nine visible and near-infrared wavelengths imaged by the ALISS camera system. Spectrum of emitted light at four locations within vent compared to the spectrum of a blackbody radiator at the vent temperature. 35
Figure 3.1 Structures of vertebrate and invertebrate visual pigments viewed from the plane of the plasma membranes of photoreceptor cells. 38
Figure 3.2 Structure of retinal, the most common chromophore found in visual pigments. 39
Figure 3.3 Typical absorption spectra of visual pigment molecules normalized to the same peak value. 40
Figure 3.4 The four types of chromophores known to be used in visual pigments. 41
Figure 3.5 Spectral effects on a visual pigment of changing from a retinal chromophore to a 3-dehydroretinal chromophore and how the wavelength of maximum absorption changes when the same opsin is bound to an A1 or an A2 chromophore. 42
Figure 3.6 Schematic illustration of dichroism in visual pigments. 43
Figure 3.7 Seemingly unimportant changes in absorbance can produce major effects on how a photoreceptor cell actually absorbs light. 45
Figure 3.8 Electron micrographs to show the arrangements of photoreceptor membranes in a vertebrate rod cell and an invertebrate rhabdom. 47
Figure 3.9 The structure of typical vertebrate photoreceptors illustrated diagrammatically, showing the inner and outer segments connected by a ciliary neck. 49
Figure 3.10 Structure of typical microvillar, or rhabdomeric, photoreceptors found in many invertebrates. 50
Figure 3.11 Optical modifications of vertebrate photoreceptors for specialized tasks. 51
Figure 3.12 Filters in vertebrate cones and the spectra of light they transmit. 52
Figure 3.13 Filters in invertebrate photoreceptors. 53
Figure 3.14 Photographs of invertebrate photoreceptor filter pigments and characteristic absorption spectra of the pigments. 54
Figure 3.15 Spectral sensitivity augmentation by sensitizing pigments. 55
Figure 3.16 Histograms to show values of λmax of visual pigments in photoreceptors of deep-sea animals. 57
Figure 3.17 Visual pigments in rod photoreceptors of marine mammals that reach different maximum depths when foraging, compared with a terrestrial mammal (the cow). 58
Figure 3.18 Irradiance spectra in typical habitats occupied by animals and an analysis of how photon capture in each irradiance condition changes with visual pigment λmax. 59
Figure 3.19 Color images of natural scenes in a temperate forest and a tropical coral reef along with dichromatic images of the same scenes using various wavelength pairs. 62
Figure 3.20 The cone array in the retina of the chinchilla (Chinchilla lanigera). 63
Figure 3.21 Developmental changes in visual pigments that occur during development of the hydrothermal vent crab Bythograea thermydron. 65
Figure 4.1 The pigment-pit eye of the arc clam Anadara notabilis. 67
Figure 4.2 A schematic camera eye viewing a visual scene (the trunk and bark of a eucalyptus tree). 69
Figure 4.3 Spatial resolution and light level. 70
Figure 4.4 Quantum events in photoreceptors. 72
Figure 4.5 Diffraction. 79
Figure 4.6 Optical aberrations. 81
Figure 4.7 Underfocused eyes. 83
Figure 4.8 Solutions for optical cross talk in the retina. 85
Figure 4.9 The modulation transfer function. 87
Figure 4.10 The modulation transfer function of an eye. 89
Figure 5.1 The pinhole eye of Nautilus. 93
Figure 5.2 The concave mirror eye of the scallop Pecten. 95
Figure 5.3 Camera eyes. 98
Figure 5.4 Amphibious vision in camera eyes. 101
Figure 5.5 The dark-adapted and light-adapted pupil of the nocturnal helmet gecko, Tarentola chazaliae. 103
Figure 5.6 The camera eyes of the octopus and the cod (Gadus morhua). 104
Figure 5.7 Ommatidia. 107
Figure 5.8 The two broad subtypes of compound eyes. 108
Figure 5.9 Focal and afocal optics in apposition compound eyes. 109
Figure 5.10 The three different types of superposition optics found in the Arthropoda, showing the paths of incident light rays focused by the cornea into the crystalline cones. 111
Figure 5.11 The refracting superposition eyes of the nocturnal dung beetle Onitis aygulus, which consist of a smaller dorsal eye and a larger ventral eye that are separated by a chitinous canthus on each side of the head. 112
Figure 5.12 Graded refractive index optics in the refracting superposition eyes of the nocturnal dung beetle Onitis aygulus. 113
Figure 5.13 Theoretical retinal image quality in the refracting superposition eyes of three species of dung beetles from the genus Onitis—the diurnal O. westermanni, the crepuscular O. alexis, and the nocturnal O. aygulus. The rhabdoms of dung beetles are flower shaped in cross section. 115
Figure 6.1 The definition of the interreceptor angle Δφ in camera eyes and the interommatidial angle Δφ in compound eyes. 118
Figure 6.2 The densities of rods and cones in the human retina as a function of distance (eccentricity) from the fovea. 120
Figure 6.3 The retinal ganglion cells, the sampling stations of the vertebrate retina. 122
Figure 6.4 The eyes of jumping spiders. 124
Figure 6.5 Negative lenses in camera eyes. 126
Figure 6.6 Acute zones and bright zones in apposition compound eyes. 128
Figure 6.7 An acute zone in the retina of a superposition compound eye. 130
Figure 6.8 Double eyes and sexual dimorphism in insects. 133
Figure 6.9 Optical sexual dimorphism in the apposition eyes of flies. 134
Figure 6.10 Neural sexual dimorphism in the apposition eyes of flies. 136
Figure 6.11 The dorsal acute zone of dragonflies. 138
Figure 6.12 The visual fields of predators and prey. 139
Figure 6.13 Horizontal acute zones in compound eyes. 140
Figure 6.14 Habitat-related ganglion cell topographies in terrestrial mammals. 141
Figure 6.15 Vision through Snel’s window, where the 180° view of the world above the water surface is compressed due to refraction into a 97.6°-wide cone below the water surface. 143
Figure 6.16 Deep-sea eye structure and the changing nature of visual scenes with depth. 144
Figure 7.1 An apple tree image optimized for human color vision using our three-color (RGB) printing process and with the color stripped from the image. 147
Figure 7.2 The varied spectral sensitivities of vertebrates and invertebrates. 148
Figure 7.3 The colors of natural objects and their perception by the color-vision systems of animals. 150
Figure 7.4 Histogram of the distribution of known spectral sensitivity peaks in 40 hymenopteran species. 153
Figure 7.5 The spectral sensitivities of tetrachromatic birds and fish. 154
Figure 7.6 The spectral sensitivities and resulting trichromatic color space of adult lungfish modeled without and with oil droplets. 155
Figure 7.7 Bee color vision and flower colors, reef fish color vision and fish colors. 158
Figure 7.8 UV patterns and contrast made visible to the human visual system by photography through UV filters. 160
Figure 7.9 λmax positions of cones and rods in marine fish arranged by habitat. 161
Figure 7.10 The spectral sensitivities of four species of reef fish from the same photic microhabitat. 162
Figure 7.11 Color vision in cardinal fish (Family Apogonidae). 163
Figure 7.12 African cichlid eyes collectively contain all seven cone opsins known in fish. 165
Figure 7.13 The offset hypothesis. 166
Figure 7.14 The barracuda Sphyraena helleri is associated with reefs and lives in a shallow photic habitat that could accommodate more than its known two spectral sensitivities. 167
Figure 7.15 Crustacean spectral sensitivities. 169
Figure 7.16 Spectral tuning with depth both between and within stomatopod species. 171
Figure 7.17 Behaviorally determined wavelength discrimination functions (Δλ) in stomatopods (Haptosquilla trispinosa) compared to various animals: human, goldfish, butterfly, honeybee. 172
Figure 7.18 Regionalization of spectral sensitivities in the eyes of vertebrates and invertebrates. 174
Figure 7.19 Vision in the archerfish (Toxotes chatareus), a fish that spits at prey located above the water surface. 175
Figure 7.20 The hypothetical origin of color vision as a way to remove caustic flicker produced by ripples in water in shallow Cambrian aquatic habitats. 176
Figure 8.1 Two representations of various states of polarized light starting as a nonpolarized light beam and its polarization through a linear polarizing filter and then conversion to circular polarization through a quarter-wave retarder. 179
Figure 8.2 Sources of linearly polarized light in nature. 181
Figure 8.3 Polarized light in air and water. 182
Figure 8.4 Sources of elliptically polarized light in nature. 184
Figure 8.5 The orientation of visual pigment chromophores and resulting dichroism within photoreceptor membrane. 185
Figure 8.6 Terrestrial and aquatic scenes showing differences in polarization information. 187
Figure 8.7 Drassodes cupreus and its polarization-sensitive posteromedial eyes. 188
Figure 8.8 Ommatidia in the dorsal rim area and main eye of various insects. 189
Figure 8.9 Rhabdom construction in crustaceans and cephalopods. 190
Figure 8.10 Circular polarization detection in stomatopods. 192
Figure 8.11 Responses of polarization receptors. 194
Figure 8.12 Polarization processing. 195
Figure 8.13 Polarization vision in the backswimmer Notonecta glauca. 196
Figure 8.14 Organization and function of dorsal rim area ommatidia. 197
Figure 8.15 Organization of polarization information coming from the compound eyes of the locust, Schistocerca gregaria. 198
Figure 8.16 Behavioral orientation and navigation under celestial e-vector patterns or artificial polarizers. 199
Figure 8.17 Responses of animals with polarization vision to virtual looming stimuli. 202
Figure 8.18 Choices of female Australian orchard swallowtail butterflies, Papilio aegeus, viewing color/polarization targets. 203
Figure 8.19 Linear polarization patterns likely to be involved in signaling. 204
Figure 9.1 Underwater photograph taken at the Great Barrier Reef off the coast of Australia and fog on a pond in Durham, NC. 207
Figure 9.2 A beam of light is both absorbed and scattered as it passes through an attenuating medium such as water or fog. 208
Figure 9.3 The scattering and beam attenuation coefficients at the surface and at 100 m depth in clear oceanic water. 210
Figure 9.4 A mackerel (Grammatorcynus bicarinatus) in water. The object, path, and total radiance for an object that is twice as bright as the background viewed horizontally in oceanic water at 480 nm. 212
Figure 9.5 The GretagMacbeth ColorChecker modeled to show how it would appear if viewed horizontally in clear oceanic water at a depth of 5 m and at various viewing distances. 216
Figure 9.6 The minimum detectable radiance difference as a function of the adapting illumination in humans. 217
Figure 9.7 The contrast threshold as a function of the adapting illumination in humans. 218
Figure 9.8 The contrast threshold as a function of spatial frequency for a number of vertebrate species. 219
Figure 9.9 Historical room in Bäckaskog castle in Kristianstad, Sweden. 219
Figure 9.10 Sighting distances depend on the product of two factors. 221
Figure 9.11 The second factor affecting sighting distance (1/[c − KL cos θ]) is given in meters as a function of wavelength and viewing angle (θ) for oceanic water at 100-m depth. 222
Figure 9.12 The inherent contrast of an underwater object is greatly affected by wavelength. 223
Figure 9.13 Sighting distances for a large object that diffusely reflects 50% of the light that strikes it as a function of depth, wavelength, and viewing angle. 224
Figure 9.14 Underwater scene photographed through vertically oriented and horizontally oriented polarizers. Image also shown after polarization-based dehazing. 226
Figure 9.15 The scattering coefficients in a standard atmosphere as a function of altitude. 228
Figure 9.16 A case in which the only light that reaches the viewer’s eye is light that has never been scattered. Another case in which some of the light that reaches the viewer’s eye has been scattered more than once. An underwater scene where multiple scattering is just beginning to be significant. 230
Figure 10.1 Representations of flow fields projected onto a retina (these vectors also could represent the motions of objects in visual space). 234
Figure 10.2 A diagrammatic view of a Reichardt detector sensitive to motion in the direction of the large open arrow. An apparatus often used to study motion perception in flying insects. 236
Figure 10.3 Wide-field cells in the lobula plate of Drosophila. 238
Figure 10.4 Modeling wide-field sensing in flies. 238
Figure 10.5 Schematic showing typical responses of a motion-sensitive ganglion cell. 239
Figure 10.6 Motion sensing in vertebrate retinas. 239
Figure 10.7 Eye movements in mantis shrimp (photographs). 241
Figure 10.8 Eye movements in mantis shrimp (graphs). 242
Figure 10.9 Examples of optokinesis in aquatic animals. 245
Figure 10.10 Ocular fixation movements in action. 246
Figure 10.11 Tracking and chasing. 249
Figure 10.12 Tracking in praying mantis. 250
Figure 10.13 Flight room at the Janelia Farm research center in Virginia for study of the neurobiology underlying motion sensing and flight control in dragonflies. 252
Figure 10.14 Aerial tracking by a peregrine falcon at a cliff site in Colorado, showing spiral flight paths made to intercept prey over a lake. 253
Figure 10.15 Scanning behavior in larvae of the water beetle Thermonectus marmoratus. 254
Figure 10.16 A flight tunnel used in experiments on motion vision of flying honeybees. 257
Figure 10.17 Schematic drawing of the apparatus used to test the abilities of honeybees to judge relative heights of “flowers” using motion vision. 258
Figure 10.18 Head movements of a foraging whooping crane searching for food on the ground. 259
Figure 10.19 Responses of a dragonfly interneuron that responds to looming stimuli, signaling estimated time to contact. 260
Figure 11.1 Nocturnal color vision in insects. 265
Figure 11.2 Nocturnal navigation using celestial cues. 267
Figure 11.3 Nocturnal homing in arthropods. 268
Figure 11.4 Visual acuity in nocturnal birds and mammals measured behaviorally as a function of luminance. 272
Figure 11.5 Optical adaptations for nocturnal vision in camera eyes. 274
Figure 11.6 Fresh head of a giant squid (likely from the genus Architeuthis). 275
Figure 11.7 Tubular eyes in deep-sea fish. 276
Figure 11.8 Rostral aphakic gaps in the eyes of deep-sea fish. 278
Figure 11.9 The tapetum lucidum and the eye glow of animal eyes. 279
Figure 11.10 Retinal specializations for dim-light vision in vertebrates. 280
Figure 11.11 The banked retina of the nocturnal oilbird and deep-sea fish. 281
Figure 11.12 Specializations for dim-light vision in compound eyes. 284
Figure 11.13 Spatial summation in nocturnal bees. 287
Figure 12.1 A sandhopper, Talitrus saltator, on a sandy beach in Italy. 290
Figure 12.2 Two photographs of deep-sea fish with bioluminescent lures and flashlight fish with lids open and closed. 292
Figure 12.3 The third zoeal stage of the larva of the crab Rhithropanopeus harrisi and graphs showing phototaxis to various wavelengths of light by early-stage zoea larvae and descent by these larvae on a sudden decrease in light intensity in clean water and in water that had contained ctenophores, a predator on larvae. 293
Figure 12.4 Genetic inheritance of orientation in different populations of Talitrus saltator on the west coast of Italy. 294
Figure 12.5 Orientations of ball-rolling dung beetles, Scarabaeus nigroaeneus, in the field, South Africa. 295
Figure 12.6 The orientations during flight outdoors near noon of intact and antennaless monarch butterflies, Danaus plexippus, before and after being subjected to a 6-hour phase delay in the light:dark cycle. 297
Figure 12.7 “Turn-back-and-look behavior” in foraging honeybees, Apis mellifera. 299
Figure 12.8 Nest orientation in the solitary wasp Cerceris rybyensis on leaving its nest hole. 300
Figure 12.9 Landmark learning in honeybees. 300
Figure 12.10 The “snapshot model” of landmark orientation in honeybees. 301
Figure 12.11 Nocturnal learning and recognition of landmarks by the Panamanian bee, Megalopta genalis. 302
Figure 12.12 Landmark orientation in the desert ant Cataglyphis bicolor, an individual of which is shown at the top left. 303
Figure 12.13 Place cells in the rat, Rattus norvegicus. 304
Figure 12.14 Landmark learning and use in the ant, Formica rufa. 306
Figure 12.15 Landmark use in the Australian desert ant, Melophorus bagoti. 307
Figure 12.16 Images of the Panamanian rainforest canopy taken at intervals along a trail through the forest. 308
Figure 12.17 Seaward orientation by hatchling loggerhead turtles, Caretta caretta. 309
Figure 12.18 Visual odometry in honeybees. 311
Figure 13.1 Silhouette of the cookie-cutter shark (Isistius brasiliensis) as viewed from below without counterillumination and the silhouette of the same animal with its many small ventral photophores turned on. 314
Figure 13.2 Animals in featureless environments, such as the snub-nosed dart (Trachinotus blochii), tend to employ strategies that reduce contrast and thus detectability. In contrast, animals in complex environments, such as the giant Australian cuttlefish (Sepia apama), tend to employ strategies that reduce recognition, usually via color patterns and in some cases texture. 315
Figure 13.3 The four primary mechanisms of camouflage and signaling are based on the four ways in which light can interact with matter. 316
Figure 13.4 The barreleye spookfish (Opisthoproctus soleatus), one of many deep-sea fish with eyes that look directly upward, presumably to catch the most light and thus improve vision. 317
Figure 13.5 The euphausiid shrimp, Meganyctiphanes norvegica, and the deep-sea hatchetfish, Argyropelecus aculeatus. 318
Figure 13.6 Various established and hypothesized functions of bioluminescence in marine species. 320
Figure 13.7 Fireflies at night. Note that the dark and simple background makes the signal easy to discern. 321
Figure 13.8 The bioluminescent courtship displays of various species of ostracod. 322
Figure 13.9 The coronate medusa Atolla wyvillei and one moment in the animal’s bioluminescent “pinwheel” display. 322
Figure 13.10 Examples of transparent animals. 324
Figure 13.11 Transparency and pelagic existence mapped onto a phylogeny of the major phyla in the Animalia. 325
Figure 13.12 The bolitaenid octopus, Japetella heathi, in its transparent and pigmented forms. 327
Figure 13.13 The pelagic tunicate, Salpa cylindrica, and its transmittance of visible and ultraviolet radiation. 328
Figure 13.14 Selected images of transparent plankton viewed between parallel polarizing filters and crossed polarizers. 329
Figure 13.15 The anemone cleaner shrimp, Periclimenes holthuisi. 330
Figure 13.16 Strongly colored lures on the tips of the feeding tentacles of the siphonophore Resomia ornicephala, an example of aggressive mimicry. 330
Figure 13.17 Reflection from vertical mirrors underwater can look just like the view behind the animal because the light field is symmetric around the vertical axis. A vertical mirror photographed at 10-m depth shows that mirroring also works as crypsis in shallow waters if the sun is high in the sky. 331
Figure 13.18 The snub-nosed dart (Trachinotus blochii) well camouflaged by reflection from its silvery sides and a moment later, after it had tilted slightly, reflecting downwelling light to the viewer and becoming more visible. 332
Figure 13.19 The common bleak fish, Alburnus alburnus, and a cross section of the fish showing the vertical orientation of most of the reflecting structures in the scales. 332
Figure 13.20 Intensity image of the bluefin trevally (Caranx melampygus), showing that it matches the background light well. Another image of the same fish shows the degree of polarization of the fish and the background. 334
Figure 13.21 Structural colors in the plumage of the male of the Indian peafowl (Pavo cristatus). 334
Figure 13.22 The coloration of oceanic species as a function of depth. 336
Figure 13.23 The red-eye gaper (Chaunax stigmaeus) photographed at depth under broad-spectrum lighting. 337
Figure 13.24 Mexican blind cavefish Astyanax mexicanus, one from a surface-dwelling population and one from a cave-dwelling population. 338
Figure 13.25 Twelve color morphs of the strawberry poison dart frog (Dendrobates pumilio), all from the Bocas del Toro region of Panama. 339
Figure 13.26 Maxwell triangle for a honeybee (Apis mellifera) viewing a violet flower and a blue flower through fog. 341
Figure 13.27 The dottyback (Pseudochromis paccagnellae) photographed at depth on a coral reef and a head of a male three-spined stickleback (Gasterosteus aculeatus) photographed in the greenish water that it usually inhabits. 341
Figure 13.28 The GretagMacbeth ColorChecker™ as it appears in daylight and as it would appear if it were photographed at 40-m depth in clear ocean water or if it were photographed at 20-m depth in green coastal water. 342
Figure 13.29 Full-color image of two surf parrotfish (Scarus rivulatus), a lined butterflyfish (Chaetodon lineolatus), a minifin parrotfish (Scarus altipinnis), and the head of a hussar (Lutjanus adetii). The same image as viewed by an animal with only a medium-wavelength visual pigment (the patterns on the three parrotfish are no longer visible) and as viewed by an animal with only a long-wavelength visual pigment, which increases the achromatic contrast of the details on the parrotfish. A dichromatic viewer can distinguish more than a monochromat. 343
Preface

We humans are visual creatures. We are also introspective and curious, a combination that makes us all by nature amateur visual ecologists (even if we don’t know it). Because our world is dominated by visual sensations, we naturally wonder how other animals see their particular worlds. When a cat is entranced by images of fish on a television screen, does it see their colors? Does it think they are real fish? What is it experiencing? When a wasp flies up and stares us in the face, just what is it seeing? These questions have probably been asked for as long as our species has been around. The third-century Roman philosopher Plotinus stated the fundamental principle of visual ecology nearly two millennia ago: “To any vision must be brought an eye adapted to what is to be seen, and bearing some likeness to it.”

Still, the study of visual ecology didn’t come into its own until near the end of the twentieth century. Researchers such as John Lythgoe in England and Bill “Mac” McFarland in the United States, to whom we have dedicated this book, first used the term “visual ecology” to describe their research on animal visual systems in nature. In 1979, Lythgoe published a smallish book titled The Ecology of Vision. The book introduced a new way of thinking about animal vision, relating the huge evolutionary diversity of visual systems to the environments inhabited by particular animals. Lythgoe presented unifying principles that involved several fields—visual optics, environmental radiometry, retinal physiology, visual aspects of behavior, and numerous others—that had historically been considered as separate. Lythgoe, McFarland, and a few others like them made up the scientific generation who seeded the field, and the authors of this book were inspired as graduate students and young scientists by their elegant work and their personal charm.

The four of us typify the field. We’re international.
Our home laboratories are in the United States, Sweden, and Australia, but we all consider the planet our field site and the animal kingdom our model organism. We spend time on ships, in woods and rainforests, in deserts, and in marshes, muddy waters, and coral reefs; we work in daylight, moonlight, starlight, and in the black depths of the world’s oceans; our field sites cover six of the seven continents (and we envy the lucky few visual ecologists who have made it to Antarctica!). Perhaps most importantly for the writing of this book, we are friends. We have known each other for decades, visited each other’s home laboratories and actual homes, worked on the same teams in the field, published together, attended international meetings together, and shared long evenings in bars. Again characterizing the field as a whole, we really enjoy each other’s company, and despite our natural competitiveness as scientists, we readily share our ideas, data, and students. We like each other, and that has gotten us through the difficulties that necessarily arise in a multiauthor project such as writing a book like this. We got a lot of pleasure out of just sitting back in an armchair to read each other’s contributions—and equally much pleasure from constructively criticizing them! Naturally, an effort like this involves many, many more souls than just the four of us. Laura Bagge, Jamie Baldwin, Nicholas Brandley, Mike Bok, Karen Carleton,
Jon Cohen, Kate Feller, Yakir Gagnon, Alex Gunderson, Elizabeth Knowlton, Dan Speiser, Cynthia Tedore, Kate Thomas, and Jochen Zeil read our chapter drafts. Tammy Frank, Ron Douglas, and an anonymous reviewer read and critiqued the entire manuscript. Judy Rubin contributed to the artwork, and we benefited from the graphic and photographic contributions of dozens of colleagues and associates. The team at Princeton University Press, especially our senior editor Alison Kalett, our editorial associate Quinn Fusting, and our production editor Karen Carter helped keep us on track and calmly sorted our differences of opinion when they inevitably arose. The entire manuscript was expertly and thoroughly copyedited by Elissa Schiff. Our graduate students and the other members of our laboratories kept us motivated with their own curiosity, their drive to carry the field forward, and their interest in seeing the final product of our labors. We are particularly indebted to Marie Dacke, Almut Kelber, Mike Land, Ellis Loew, Dan-Eric Nilsson, David O’Carroll, Jack Pettigrew, David Vaney, Rüdiger Wehner, and Jochen Zeil for their inspiration and encouragement. Most of all, we thank our wives, Roselind, Lynn, Sue, and Sara, for support throughout the project. None of them card-carrying visual ecologists, they nevertheless stuck with us as we wrote this book and gracefully put up with our global wanderings and consuming passion for science.
Visual Ecology
1 Introduction
A tiny fruit fly, hardly bigger than a speck of dust, drifts along the edge of a pond in the morning sunlight, keeping a constant course and height through the unpredictable buffeting of the morning breeze. Despite its minuscule size, its silhouette against the empty sky has betrayed the fly’s passage to an alert predator. A dragonfly perched on the pond side rises in a buzz of wings, aims to intersect the fruit fly’s path, and plucks it from the air in a graceful upward swoop. Returning to its perch to enjoy its catch, the dragonfly is spotted by a hovering kestrel far above; with a blur of motion, the kestrel snatches up the dragonfly and carries it to her home not far away. As the mother kestrel settles onto the edge of her nest, two tiny babies greet her arrival by stretching their necks upward. Tempted by her nestlings’ colorful, bobbing mouths, she tears up their breakfast and shoves a piece into each one. Each of these creatures was guided by its eyes as it carried out its accustomed behaviors—the fruit fly heading steadily toward its chosen destination; the dragonfly as it sighted the fly, computed its course, and snatched it from the sky; the kestrel as it picked out the dragonfly against the tangle of pondside vegetation and unerringly plucked it from its perch; the nestlings as they spotted their mother’s familiar shape descending from the sky; and the kestrel again as it responded to their irresistible opened mouths. Each animal’s eyes allowed it to execute the behavior necessary for its survival (although, admittedly, not all survived this quiet morning). How do these various visual systems function? How is it that the fruit fly was so beautifully adapted for visual control of flight in air that buffeted its minuscule body? What allowed the dragonfly to see such a tiny spot in the sky, to recognize it as nearby prey and not a bird passing far above, and to derive the geometry required to intercept it?
How could the kestrel see a blue, toothpick-sized form against masses of green vegetation from so far above? Why were its babies’ mouths so attractive? For that matter, how was it that the fruit fly never spotted the approaching dragonfly, and what allowed the dragonfly to see the speck of a passing fly while missing the predatory stoop of the kestrel? These are the kinds of questions that are addressed in the field of visual ecology.
Visual Ecology Defined

Visual ecology can be broadly defined as the study of how visual systems function to meet the ecological needs of animals, how they have evolved for proper function, and how they are specialized for and involved in particular visual tasks. Researchers who work at various levels of inquiry, from genes to behavior, call themselves visual ecologists, but all are primarily concerned with how animals use vision for natural tasks and behaviors. Although the outcomes of visual ecological research may well have implications for health or may be applicable for use in engineering or technology, the research itself centers on the animal of interest and on how it employs its visual system to meet its own ecological needs. A researcher who studies the retinas of squirrel monkeys to learn more about human visual physiology is not a visual ecologist; one who does nearly identical research to learn how the monkeys discriminate ripe from unripe fruit is. Thus, the kinds of questions that spark visual ecological research are some of the oldest in biology, some predating even the earliest science: How can predators sight prey at times when humans are essentially blind? Does my dog (or horse, or cat) see color? Do seals see equally well both underwater and on the shore? And many others that readers have probably asked themselves about vision in other animals. A particularly beautiful (literally so) illustration of how an artist's sensitive eye registers principles of visual ecology is seen in the Japanese cloisonné vase in figure 1.1.
Figure 1.1 Cloisonné vase with sunfish, made using gold wire and enamels. The artist is Yoshitaro Hayakawa, Ando Cloisonne Company, Japan, 1910. (Stephen W. Fisher collection. Photographed by J. Dean, johndeanphoto.com, used with permission)
The artist, Yoshitaro Hayakawa, has created a convincing underwater scene, with the viewer just a short distance from the sunfish that seems to float only millimeters beneath the vase's surface. The fish's colors, patterned no doubt for communication with its conspecifics, are brilliant; they stand out in contrast to the water itself. A second fish, only slightly further away, has lost some of its brilliancy and clarity but still is well defined and colorful. The third fish (at the right margin of the vase) is almost lost in the underwater haze, fading into the background. Only the red on the posterior edge of its dorsal fin remains to signal what kind of fish it might be. Here we see clearly how water absorbs the colors of objects viewed through it, simultaneously hiding them in a veil of light scattered into the viewing path. This is not all that has captured the artist's attention. The water near the surface is brightly lit with white light, but as the depth increases, the water's color changes continuously to a rich blue-green. The plants and rocks on the bottom are bluish or a muddy green, as the extremes of the spectrum have been filtered away by the colored waters inhabited by the fish. As with the distant fish, absorption and multiple-path scattering have obscured the colors and details of the background objects. Hayakawa was clearly a keen observer of nature; one suspects that he would have made an excellent visual ecologist. Many of the principles we will be visiting in this book are seen in this one lovely object: properties of light in natural environments, color vision in animals and its relationship to signaling, and the ability both to signal and to hide using the same color palette. To illustrate visual ecological thinking in a slightly more scientific framework, consider a biological question that concerns vision (see the illustration in figure 1.2).
All songbirds have excellent color vision—far better than our color sense—based on a complicated set of photoreceptors coupled to oil-droplet filters. Surprisingly, despite their radically different appearances and plumage, all these birds have nearly identical color vision. A visual physiologist would be interested in the receptor properties, the wiring of the birds’ retinas, how the color information is processed, and other functional issues. At another extreme, a behavioral biologist might study what aspects of the color patterns or sexual displays of males make them desirable to females. Visual ecologists would certainly be interested in knowing these things as well, but their focus would be on how the visual systems of songbirds fit generally into the birds’ needs. Are photoreceptor sets, similar across species, adaptive for high-quality color vision in the habitats occupied by songbirds, or are they in some way limited by the plumage colors that songbirds have? In other words, is the color vision of songbirds an evolutionary product of their light environments, or has it evolved for the requirements of signaling? Do birds choose to display their plumage at particular times of day, or in particular locations, where lighting makes their color patterns most visible? Do male birds arrange their displays to be seen from a particular vantage point? Do they choose backgrounds that enhance these colors? To answer questions such as these, measurements are needed of the environmental light at times of interest or biological significance, the properties of the color receptors themselves, the ways in which the plumage, background, and possibly even entire environmental scenes reflect light. Depending on the scope of the research, the visual ecologist might also consider the optics of the birds’ eyes and their specializations to function at particular light levels (in the case of songbirds, perhaps in the open vs. 
under forest canopy), and possibly even to the optics of feather reflectivity and color production. Any single visual ecological study might not include all these questions, but these (and many others with regard to songbird vision) all lie within the purview of the field. The unifying principle is that all these
Figure 1.2 Female cardinal evaluating a male in sunlight, against natural foliage. White dashed lines indicate the natural illumination, the red ones the light reflected from the cardinal, and the green ones the light from the foliage. The female must discriminate the male from the background and evaluate the color and quality of his plumage using the color-vision system present in her eye. (Illustration by E. Cronin)
are components of the natural visual tasks in which songbirds are engaged and that the research focus is on what the animals actually require from, and can achieve with, their visual systems.
Eyes and Their Evolution

The ability to sense the presence of light dates back to some of the earliest forms of life on Earth, but it took most of the history of life thereafter to evolve truly functional visual systems capable of imaging scenes and objects and of providing timely information about changes in the visual surround. The first recognizable eyes appear in the fossil record in the Cambrian, some half-billion years ago; incredibly, these are already well-developed compound eyes, not terribly different from eyes functional in some animals today (figure 1.3). Equally impressive is the discovery of fossils showing that highly complex eyes in an amazing diversity of animals appeared within a few millions or tens of millions of years of evolution. In fact, it has been argued that the appearance of the first truly competent eyes acted as a catalyst for rapid animal evolution and diversification (see Parker, ; Land and Nilsson, ). The rise of high-quality vision enabled animals to remain well oriented during travel at high speed and opened up a new world of long-distance animal interactions
Figure 1.3 Fossilized compound eyes of two trilobite species, Bojoscutellum edwardsii (Barrande) (A,C) and Eophacops trapeziceps (Barrande) (B,D), showing dorsal (A,B) and lateral (C,D) views. These eyes were fossilized approximately 450–500 million years ago but show many features of modern compound eyes. (Photographs by E. Clarkson)
via predation, predator evasion, and communication. Unfortunately, until time machines become available to researchers, hypotheses about the visual ecology of these animals must remain untested, although comparing them with modern species having similar body plans and inhabiting analogous environments is fruitful for speculation. In recent years, due to the availability of ever more efficient molecular approaches and tools for examining animal phylogenies, there has been an explosion of interest in the evolution of eyes and photoreceptors. This book is not about visual evolution, but visual ecologists are intensely interested in how eyes became mated to their ecological and behavioral functions, and simultaneously, how they are limited by the constraints of their ancestry. But here in the Introduction, it is worth taking a look at how the functions and adaptations discussed in this book might have come into being. Nilsson (2009) has convincingly argued that eyes evolved through a series of four discrete stages, each characterized by a particular task related to photoreception, ultimately leading to high-quality vision. Advances in the underlying biological technology enabled each leap to the next, more competent stage (figure 1.4). According to Nilsson, once a reasonable way to detect light appeared, living organisms immediately gained a way to measure time of day, day length, depth, the passage of a shadow, and other highly adaptive abilities. Adding directionality and then low-resolution vision and ultimately high-resolution vision placed ever-higher demands on the light-sensing machinery, including improved ways to get enough light into the photosensing cells to provide a useful signal. Nilsson's (2009) view of vision advancing through a series of steps, each permitting a major advance in sensory capability, has much to offer. It has the additional utility of providing a view of visual evolution through adaptation for specific ecological functions.
[Figure: task complexity rising over evolutionary time through four stages: non-directional photoreception, directional (scanning) photoreception, low-resolution spatial vision, and high-resolution spatial vision. Innovations such as membrane stacking, screening pigment, focusing optics, and efficient photopigment regeneration enable each step.]
Figure 1.4 A view of visual evolution as a punctuated process, advancing through a series of steps that represent significant leaps in visual capabilities. Visual ecology has considered organisms at each of the four steps, but by far the greatest interest is in those that have reached the two final steps, where spatial vision comes into play. (After Nilsson, 2009)
Themes in Visual Ecology

In reading this book you will come to recognize recurring themes that appear in chapter after chapter. One has already been introduced—the significance of evolution, and evolutionary history, in shaping the ways in which animals use, and are even able to use, their visual systems. All of vision is united by reliance on a single class of protein molecules (discussed in detail in chapter 3). These are the opsins, clearly special because no visual system that has advanced beyond the very first of Nilsson's stages uses any other molecular design for light reception. All opsins—in every seeing creature, from jellyfish to your favorite advanced animal—have descended from a common ancestral molecule, possibly some sort of melatonin receptor (figure 1.5). Today, they fall into specific opsin families, either three or four, depending on who is making the argument. Most animals have many opsins available in their genomes, generally of more than one family, and these are the film in the camera of vision. No visual ecologist ignores their properties! A second theme that recurs in many chapters is that of "matched filters," an idea introduced by Rüdiger Wehner (1987) in a much-quoted essay in honor of the vision scientist Hansjochem Autrum's 80th birthday. A matched filter is a sensory construct that serves as a reduced model of expected events or qualities in the outside world. Wehner's original essay gave examples from several sensory systems, with vision getting the lion's share. You will meet many examples of matched filters in eyes of animals
Figure 1.5 Evolution of modern opsins. From an ancestor in the center of the circle, the opsins known today split into four families in this reconstructed phylogeny: C-opsins (used in vertebrate photoreceptors), Cnidops (in jellyfish), R-opsins (in major invertebrate groups such as arthropods and molluscs), and a hodgepodge group named “group 4 opsins,” which have many functions, most (but not all) not used in visual photoreception. OUTGRPS stands for the outgroups used to construct the phylogeny. (Figure prepared by M. Porter)
in this book. Filters work by excluding information, and in doing so they simplify the analysis of what is left. Filters also let things through, so a more positive view is that they admit just the things an animal needs to know. Rather than run through a series of examples now, we just encourage you to make your way into the book!
An Overview of the Book

The chapters in this book explore at increasing levels of inquiry the topics that are of interest in visual ecology today. We begin with the properties of light and the photic environment: the basic physics of light, including intensity, spectrum, and polarization; natural sources, distributions, and temporal features of light; how light is
changed as it is absorbed and scattered by air or natural waters; the effects of natural structures and surfaces—including the surfaces of living things—on light’s properties; and ultimately the statistical features of the natural scenes that animals view. Light measurements are important to nearly every visual ecological topic because the photic features in the environments inhabited by a given animal species, and within which it behaves, define the limits of its sensory abilities and shape the possibilities for particular visual tasks. Light’s fundamental properties limit the performance of even perfect photochemical and optical systems, but its variation among habitats permits species to specialize and thereby outperform their competitors, predators, or prey. We proceed to consider these photochemical and optical systems in the next three chapters, discussing how photoreceptors intercept light and convert it to a usable biological signal, how the pigments and cells of vision vary among animals, and how the properties of these components affect a given receptor’s sensitivity to light’s intensity, spectrum, and polarization. Eyes consist of photoreceptor arrays that capture and transduce light, but the performance of these receptor surfaces is ultimately determined by the optics of the eyes within which they are found. These chapters provide the foundation for the rest of the book, which examines how eyes and photoreceptors become specialized for an enormous diversity of visual tasks and how various species excel in some ecologically appropriate subsets of all these possibilities. These specializations and tasks are considered at mounting levels of integration throughout the rest of the book, moving in the next three chapters through design-based processes such as spatial, color, and polarization vision. 
The final five chapters of the book tackle higher-level aspects of visual ecological concerns: motion vision, seeing in attenuating media, dim-light vision, visual orientation, and ultimately communicating with other animals. As the book proceeds, we revisit the animals introduced in the first paragraph of this chapter (and many other fascinating creatures) to examine how their vision enables—and limits—their behavior. Because humans are such visual animals, it is very difficult to escape thinking of animal vision in human terms. The reader must resist this temptation, as it is nearly always thoroughly misleading. Humans evolved from primate ancestors with a very specialized set of visual needs, and like all primates humans devote a disproportionate fraction of their central nervous processing capacity to visual analysis. Human vision met the needs of our ancestors, but it is only one of a literal infinity of possible solutions of how an animal might see; in fact today, in the days of virtual reality, three-dimensional cinema, and heads-up displays in our machines, it is easy to forget that what we give ourselves to see is specifically designed (by us!) to be processed by our visual systems. We see increasingly artificial scenery. The visual worlds of animals are fundamentally different. More important, they are interpreted by alien modes of processing that are barely analogous to our intuitive concepts of vision. As the reader progresses through this book or just samples its contents, he or she should always bear in mind that the sensory world of each species is unique and that it is risky, and almost always misleading, to make analogies to human visual experience. Given all that has just been said, perhaps it is just as well that we have not yet even attempted a definition of vision! 
There are many kinds of photoreceptors, and many organs that are clearly eyes of some sort, that sense light levels and their changes but provide at best a very rudimentary sense of space or visual “scenes.” For the purposes of this book, a visual sense must provide both spatial and temporal information at rates fast enough to enable complex behavior beyond such simple activities as
photoperiodism or circadian timing. There is no clear line at which complex behavior begins, of course, but we are here mostly concerned with behaviors controlled by input to two (occasionally one) usually anterior visual organs connected to a discrete neural center in what is clearly a brain. With this as a tentative definition of vision, we now proceed to consider the field of visual ecology.
2 Light and the Optical Environment
You are about to take your dive in a submersible. Yesterday, the ship left port in a brackish lagoon on the Florida coast. There, the water was first brown and opaque, like chocolate milk, then green and clear like an aquarium that had not been cleaned in months. The weather was beautiful with a sun too bright to look at and a sky that was deep blue overhead and a paler blue near the horizon. Today, the water is an astonishing and pure shade of blue, with glints of white sunlight at the edges of the waves. The submersible dips below the surface, and the light becomes far more even. In fact, this is the most featureless environment that you have ever encountered. Ahead, the water is bright and blue. Overhead, the light is white, with some portions of the underside of the sea surface appearing mirrored. Underneath the submersible, the light is violet-blue, with shifting beams of brighter light that seem to radiate from a point directly below. As the submersible descends, the water slowly becomes dimmer and bluer, and the objects inside the passenger sphere slowly lose their colors. The first color to go is the red shirt of the pilot, which now looks black. Then the orange seat cushions go brown, and a yellow camera strap turns gray. Finally, the green lettuce on the sandwich you brought with you also turns grayish blue, and the only remaining nonblue objects are the red indicator LEDs inside the submersible. Some minutes later, you notice that the submersible is descending into blackness, and the only light, which is now dim and gray, is directly overhead. Soon this vanishes, and you are surrounded by flashes of blue light, which appear to be made by objects impacting the sphere as it descends. Some flashes are small and brief, but others are large and frankly messy, much like a glowing pudding being smeared against the window. The submersible stops descending and all the flashing lights stop.
The pilot turns off the indicator lights, leaving you in profound darkness. After many hours of work at depth (with floodlights), the submersible resurfaces right before sunset. The sun at the horizon is a deep orange-red, and the sky is a riot of colors, ranging from blue to magenta to what can only be called a golden pink. As twilight deepens, the only color remaining in the sky is an extremely pure blue, bordering on purple. Twenty minutes after that, the sky is dark gray and filled with stars and the Milky Way. You cannot discern it but are told by a
Figure 2.1 The elephant hawkmoth (Deilephila elpenor) as viewed during late twilight (A) and on a moonless night (B), showing the extreme chromatic changes that can occur within 30 minutes (images are set to have roughly the same average brightness). Note that the illumination on a moonless night is far warmer than that during late twilight due to the dominance of reddish stars rather than blue sky (see Johnsen et al., 2006, for details).
fellow crew member that the dim gray light between the stars is actually greenish, and the Milky Way is full of uncounted numbers of red dwarf stars. Later, the moon rises and everything takes on a much brighter silvery sheen. The moon is close to full, and there is enough light to do just about anything except read. The same know-it-all crew member mentions that the sky is now actually just as blue as the daytime sky, though a million times dimmer. The day ends with the arrival of a thunderstorm, which sends down bolt after bolt of purplish-white lightning into the dark sky. Humans, and nearly all animals on Earth, witness astonishing variation in their optical environment. Brightness changes by many orders of magnitude each day, and colors also shift dramatically (figure 2.1). Those animals that enter forests and especially the water experience even larger changes. Given this, it is surprising that nearly all the natural light on Earth ultimately comes from two sources, the sun and bioluminescence. This chapter describes how these two sources (and a few minor players) light our world.
Light and Its Measurement

Before we can discuss the optical environment we first need to answer some questions and define some terms. The first obvious question is: What is light? To that question, the only honest answer is that we have no idea. No one does. Unfortunately, there is no intuitive reality to light as there is, for example, to a ham sandwich. As you are probably aware, light has been described as both a microscopically small packet of energy known as a photon and as a diaphanous electromagnetic wave that extends throughout space (figure 2.2). Although some physicists take the stand that light truly is one or the other (see Kidd et al., ), we suggest that it is best to take a practical approach and use whichever metaphor is most appropriate for the situation. For example, many people find it easier to think of light emission and absorption in photonic terms and polarization in wave terms. The metaphor you choose does not affect how you do your calculations and measurements. So, as long as you do those correctly, you can imagine light's true incarnate nature to be anything you
Figure 2.2 The electromagnetic spectrum. [The figure arrays the spectrum by wavelength and frequency, from gamma-rays and X-rays through the ultraviolet, the visible band (roughly 400–700 nm), the infrared, and on to microwaves and radio waves.]
like. We move freely between these two constructs throughout the book, starting with the next paragraph. Regardless of how you think about it, a beam of monochromatic light has only three properties: (1) intensity, (2) wavelength/frequency, and (3) polarization. Intensity can be measured in two ways, in energy terms or in quantal terms. In other words,
one can either measure the amount of energy the beam imparts to a surface (usually in watts) or the number of photons that intersect that surface over a given period of time. In visual ecology it is generally better to measure light in photons because that is how photoreceptors work: they count photons rather than measure energy (see chapter 3). Unfortunately, most light detectors (which are built by and for physicists) measure light in energy units. However, it is simple to convert from one to the other if you know the wavelength of the light. Then the intensity of the light in photons per second per area is simply the intensity in watts per area multiplied by about 5.03 × 10¹⁵ and by the wavelength (in nanometers) of the light in air. You must use the wavelength it would have in air (technically in a vacuum) because the wavelength of light changes in different media. For example, blue-green light transmits best in the ocean and has a wavelength of about 480 nm in air. Its wavelength in water, however, is about 361 nm (480 divided by 1.33, the refractive index of water). Actually, the frequency of the light is more fundamental than the wavelength because it does not depend on the medium the light is going through. However, for historical reasons—and maybe because people like lengths more than frequencies—wavelength is far more popular with visual ecologists, and we use it throughout this book. Therefore, when we mention the wavelength of light, you should assume that it has been measured in air. The final property of light—polarization—is discussed in detail later (see chapter 8). The intensity, wavelength, and polarization completely describe monochromatic light, which of course does not exist in nature. Instead, natural light is a collection of beams of different wavelengths, intensities, and polarizations. If we ignore polarization for now, the best way to describe natural light is via a spectrum that gives the intensity as a function of wavelength.
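The conversions just discussed are each a line of arithmetic: a photon of in-air wavelength λ carries hc/λ joules; wavelength shrinks by the refractive index in a denser medium; and a spectral density rebins from per-nm to per-Hz via the Jacobian λ²/c (which is why a spectrum's shape depends on its binning). A minimal sketch (the function names are ours, with 480-nm blue-green light as an illustrative value):

```python
# Three handy radiometric conversions, using standard physical constants.
H = 6.626e-34  # Planck's constant (J s)
C = 2.998e8    # speed of light in vacuum (m/s)

def watts_to_photons_per_s(power_w, wavelength_nm):
    """Photons per second in a beam of the given power; wavelength in air."""
    return power_w * (wavelength_nm * 1e-9) / (H * C)

def wavelength_in_medium(wavelength_air_nm, refractive_index):
    """Wavelength shrinks in a denser medium (n = 1.33 for water)."""
    return wavelength_air_nm / refractive_index

def per_nm_to_per_hz(density_per_nm, wavelength_nm):
    """Rebin a spectral density from per-nm to per-Hz (Jacobian lambda^2/c)."""
    wavelength_m = wavelength_nm * 1e-9
    return density_per_nm * 1e9 * wavelength_m ** 2 / C

print(watts_to_photons_per_s(1.0, 500))  # ~2.5e18 photons/s per watt at 500 nm
print(wavelength_in_medium(480, 1.33))   # blue-green light underwater: ~361 nm
# A spectrum flat in photons per nm is far from flat per Hz:
print(per_nm_to_per_hz(1.0, 700) / per_nm_to_per_hz(1.0, 400))  # ~3.06
```

Note the last line: the identical "flat" photon spectrum, rebinned by frequency, is about three times denser at the red end than at the blue end.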
In quantal terms, the units of each point in a spectrum are photons/cm²/s/nm. These units measure how many photons within a wavelength bin 1 nm wide impact a square-centimeter surface in a second. Note that the term "bin" in the previous sentence implies that intensity spectra are histograms. As a result their shape depends on the nature of the bins. For example, it is just as correct to put a solar spectrum in equal-frequency bins as it is to put it in equal-wavelength bins. Unfortunately, equally sized wavelength bins will not have equal widths when moved into frequency space (see Johnsen, ). This matters because people often compare peaks of visual sensitivity curves to natural spectra to make statements about adaptation. For example, it has been said that the human photopic sensitivity curve is matched to daylight. Indeed, if one looks at a graph of this curve against a solar spectrum binned by wavelength, the fit seems good. However, if one bins by frequency units, the fit is terrible (figure 2.3). This is all described in detail elsewhere (see Soffer and Lynch, ; Govardovskii et al., ; Johnsen, ), but there are two take-home messages. First, one cannot compare the peaks of visual sensitivity curves with environmental spectra because the shape of the spectrum depends on how it is binned. Second, there is no such thing as spectrally flat, "white" light despite many researchers' desires to use it as a control in experiments. A flat photon spectrum binned by wavelength will not look flat when binned by frequency, and neither the wavelength nor the frequency spectrum will look flat if energy units are used. These caveats aside, we now come to what is actually measured. "Intensity" has an intuitive meaning but is not a useful term for visual ecology. Instead, we primarily deal with two properties, irradiance and radiance (figure 2.4). The first describes how much light is reaching a certain point from all directions, and the second describes
Figure 2.3 Daylight spectrum binned by equal wavelength intervals (A) and by equal frequency intervals (B). Human photopic (cone-based) sensitivity curve (in response per photon) is arbitrarily scaled and plotted as the dotted line.
how much light is coming from a certain direction. Taking irradiance first, most visual ecologists will measure one of two versions of it: vector or scalar. A vector irradiance sensor measures all the light that hits one side of a surface, weighted by the cosine of the angle of the light relative to the normal to the surface. In other words, light that strikes the surface perpendicularly is weighted the most, and light that strikes a glancing blow is weighted the least. Although this seems unnecessarily
Figure 2.4 The three most common light measurements: vector irradiance, radiance, and scalar irradiance.
complex, many natural objects behave in this way. For example, the amount of rain that falls into a hole depends on the cosine of the angle of the rain relative to the hole, and how much the summer sun warms your forehead depends on the cosine of the angle between your forehead and the sun. Thus, this cosine-weighting function is the natural way to describe how much light strikes a flat surface. Vector irradiance detectors are fitted with devices called cosine correctors that are usually flat disks of white plastic that scramble the incoming light direction, making the detector behave like the hole described above. These are compact, rugged, and relatively cheap, so the great majority of irradiance measurements made by visual ecologists are of the vector variety. A potential disadvantage of these devices is that they are orientation dependent, so using them to measure environmental light levels can be ambiguous at best. For example, it is common but incorrect to use measurements of downwelling irradiance to determine the adapting illumination for an animal that is looking horizontally. Instead, one must measure horizontal irradiance (i.e., point the cosine corrector horizontally instead of up). Scalar irradiance sensors (which look like ping-pong balls on sticks) measure the light that comes from all directions, with all directions weighted equally. Thus, they give a single spectrum for an optical environment, which is potentially more valid and useful. Because vector irradiance is still the dominant form measured by biologists, we use it often throughout this book (particularly downwelling vector irradiance), but scalar irradiance measurements have many advantages and should be considered. The other primary measurement of optical environments is radiance, which describes how much light comes from a certain direction. Thus, the measured light is divided by the solid angle (in steradians) of the field of view of the detector, giving units of photons/cm²/s/nm/sr.
Radiance sensors typically have a narrow and circular field of view on the order of a degree or two across. Eyes are essentially radiance detectors because they provide information on how much light is coming from each small location in space. Many detectors, including bare fiber-optic cables, are not radiance sensors because their field of view is usually too broad. Before we move on to the actual optical environment, we give a brief word of warning. The field of light measurement (known as radiometry) is subtle, with many pitfalls. Although any light detector or spectrometer will happily provide you with a number, it is easy to measure the wrong thing or the right thing in the wrong way. Intuition is also a poor guide because of both the logarithmic nature of visual adaptation and color constancy. Therefore, one can have an exceedingly wrong measurement and not know it. This brief introductory section is only meant to provide enough information to help the reader understand the optical measurements in the rest of this book. For a more complete understanding of light and how to measure it, see Johnsen (), Bohren (, ), or Bohren and Clothiaux ().
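These measurement geometries can be made concrete with a little arithmetic. For a uniform radiance L filling the upper hemisphere, cosine weighting yields a vector irradiance of πL, while equal weighting yields a scalar irradiance of 2πL; and converting a radiance detector's counts to per-steradian units requires the solid angle of its circular field of view, Ω = 2π(1 − cos θ) for half-angle θ. A toy numerical sketch (our own illustration, not a standard routine; the function names are ours):

```python
# Toy radiometry: integrate a uniform radiance over the upper hemisphere
# with and without cosine weighting, and compute the solid angle of a
# narrow circular field of view like a radiance sensor's.
import math

def hemisphere_irradiances(radiance, n=2000):
    """Vector and scalar irradiance for uniform radiance over a hemisphere."""
    vector = scalar = 0.0
    dtheta = (math.pi / 2) / n
    for i in range(n):
        theta = (i + 0.5) * dtheta                       # angle from zenith
        domega = 2 * math.pi * math.sin(theta) * dtheta  # solid angle of ring
        scalar += radiance * domega                      # equal weighting
        vector += radiance * math.cos(theta) * domega    # cosine weighting
    return vector, scalar

def fov_solid_angle(full_angle_deg):
    """Solid angle (steradians) of a circular field of the given full angle."""
    half = math.radians(full_angle_deg / 2)
    return 2 * math.pi * (1 - math.cos(half))

v, s = hemisphere_irradiances(1.0)
print(v, s)                  # approaches pi (~3.14) and 2*pi (~6.28)
print(fov_solid_angle(2.0))  # ~9.6e-4 sr for a 2-degree radiance sensor
```

The factor-of-two gap between the vector and scalar results for the same light field is exactly why the two sensor types can disagree so strongly in the field.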
Chapter 2
Daylight
Even in the densest urban center, outdoor light from shortly before dawn to shortly after sunset is completely dominated by direct and indirect sunlight. The sun, a fascinating celestial object, can for visual ecologists be summed up simply: it is an approximate blackbody radiator with a temperature of ~5,800 K and an angular diameter of half a degree. A blackbody radiator is simply an object whose light emission spectrum can be predicted from its temperature and is maximal for that temperature (Planck, ). Most organisms are also blackbody radiators, but because they are at a much lower temperature, they radiate far less and mostly at nonvisible wavelengths. However, even at mammalian body temperature, the amount of emitted energy is considerable, on the order of 1,000 watts for an adult male human (roughly ten times our basal metabolic rate and, fortunately, counterbalanced by electromagnetic energy we absorb from the environment). At the sun's temperature of ~5,800 K, the amount of radiated energy is impressive, even at visible wavelengths, as evidenced by the fact that even an object the size of your thumbnail at that temperature would keep you as warm as the sun when held at arm's length (and thus subtending an angle of half a degree). It would also slowly kill you because there would be no ozone layer between you and it to absorb the far-ultraviolet radiation it would be emitting. The ozone layer and other constituents of the atmosphere remove most of the ultraviolet radiation and large swaths of the infrared spectrum. This absorption means that direct sunlight at the Earth's surface is not exactly blackbody radiation (figure 2.5), but it allows us to live on land without severe radiation damage.
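The blackbody approximation is easy to check numerically. Here is a minimal sketch (ours, not from the book; the function name and round-number constants are our own), applying Planck's law in photon units to a ~5,800 K sun whose disk subtends half a degree:

```python
import math

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def solar_photon_irradiance(wavelength_nm, T=5800.0, sun_radius_deg=0.266):
    """Spectral photon irradiance (photons/cm^2/s/nm) just outside the
    atmosphere for a blackbody sun of temperature T whose disk subtends
    about half a degree."""
    lam = wavelength_nm * 1e-9
    # Planck's law in photon units: spectral photon radiance (photons/s/m^2/sr/m)
    radiance = (2.0 * C / lam**4) / math.expm1(H * C / (lam * KB * T))
    omega = math.pi * math.radians(sun_radius_deg) ** 2  # solid angle of the solar disk (sr)
    return radiance * omega * 1e-4 * 1e-9  # per m^2 -> per cm^2, per m -> per nm

print(f"{solar_photon_irradiance(500):.1e}")  # ~5e14, the scale of figure 2.5
```

That a three-line blackbody model lands on the measured top-of-atmosphere curve is exactly why the "5,800 K blackbody, half a degree across" summary suffices for most visual ecology.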
Figure 2.5 Solar irradiance outside the atmosphere (light yellow) and at the Earth's surface (yellow). The portion due to skylight is in blue. (Based on the standard reference spectra produced by the American Society for Testing and Materials)
Figure 2.6 Fraction of irradiance (at Earth's surface near noon) due to skylight alone. (Based on the standard reference spectra produced by the American Society for Testing and Materials)
The direct light of the sun accounts for nearly all the infrared light that reaches the Earth and most of the longer visible wavelengths. However, as one moves into the shorter visible wavelengths and especially into the ultraviolet, the contribution of skylight (which is simply scattered sunlight) becomes more important (figure 2.6). Although the radiance of the sky is far less than that of the sun, the former is also far larger than the latter (by a factor of roughly 100,000 in solid angle). Skylight is also richer in shorter wavelengths as a result of Rayleigh scattering. Thus, in the blue region of the spectrum, the sky contributes a substantial fraction of the total downwelling irradiance; in the UVA (315–400 nm) the fraction is larger still, and in the UVB (280–315 nm) skylight can account for nearly all of it. Thus, a person stepping under a small umbrella on the beach will immediately feel cooler (due to less infrared radiation) but may still get a sunburn (due to only moderately reduced UVB). Skylight is also special because parts of it can be highly polarized, which we discuss in chapter 8. The Earth's rotation changes the apparent position of the sun in a manner that predictably depends on geographic location, season, and time of day (see appendix in Johnsen, , for the equation). Surprisingly, however, for most of the day the elevation of the sun has only a minor effect on downwelling irradiance and even less on scalar irradiance. Once the sun is well above the horizon, downwelling irradiance varies only modestly, and scalar irradiance varies by only about threefold (figure 2.7). This is roughly the same effect as moderate cloudiness and is not visually significant given the highly nonlinear nature of animal visual systems. At higher latitudes the peak solar elevation is lower, and the total variation in irradiance is even less. Thus, the position of the sun during most of the day is unlikely to have a significant effect on the light available for vision.
Figure 2.7 Downwelling vector irradiance and scalar irradiance (at 560 nm) as a function of solar elevation (0° equals sunset).
Twilight
The situation changes dramatically, however, once the sun approaches and crosses the horizon. Figure 2.8 shows this in detail. It uses illuminance (irradiance weighted by the photopic spectral sensitivity of humans) instead of irradiance, but it is such a useful graph that it is worth showing. The illuminance decreases sharply as the sun drops its final few degrees to the horizon, the brief period when people typically watch the sunset. In the following hour, as the sun continues to sink below the horizon, the illuminance decreases approximately a millionfold, a far greater change than even the largest range of daytime illuminances. Our eyes adapt to darkness at about the rate that the light levels drop (probably not coincidentally), so we do not fully appreciate the magnitude of this drop until the sun is well below the horizon. However, the drop is substantial and has many implications for vision, so twilight has been divided into three stages: civil, nautical, and astronomical. During civil twilight, when the sun is less than 6° below the horizon, the sky is still bright enough for normal daytime activities (Leibowitz and Owens, ), and only the brightest planets and stars are visible. During nautical twilight the sun is between 6° and 12° below the horizon, the light levels drop extremely rapidly, and the sky goes from blue to black. The last stage of twilight is called astronomical and looks like night to most humans. Most relatively bright stars are visible at its beginning (when the sun is 12° below the horizon), and on a moonless and light-pollution-free night, galaxies and nebulae will be visible by its end, when the sun is 18° below the horizon. In addition to the enormous drop in light levels, twilight is also distinguished by large changes in the color of the illumination.
At sunset, the reddish sun is balanced by the blue sky and the various colors of the clouds (which are reflecting the reddish sunlight) to create an average illumination spectrum that can be highly variable but is generally
Figure 2.8 Illuminance due to moon and sun at various times of day. (From Bond and Henderson, 1963)

Figure 2.9 Irradiance on a North Carolina beach when the sun was 10.6° below the horizon (late nautical twilight). (From Johnsen et al., 2006)
shifted toward longer wavelengths. However, once the sun is more than a few degrees below the horizon, the illumination becomes an intense blue. This may not seem unusual because the daytime sky is blue, but a spectrum of this twilight sky shows that it is different (figure 2.9). Instead of the blue being due to the scattering of sunlight (as it is during the day), it is due to sunlight that has traveled an exceptionally long distance
through the atmosphere (due to the low position of the sun) and had a portion of its visible spectrum filtered out by ozone. This absorption of visible light by ozone, known as the Chappuis band and centered at about 600 nm, is what makes the late twilight sky blue. Without ozone, it would actually be a pale yellow (Hulbert, ; Hoeppe, ). However, by astronomical twilight this effect fades, and the illumination is dominated by either moonlight or the various features of the moonless sky that we now examine.
Moonlight
Moonlight is reflected sunlight, and, for animals that can see under it (see Kelber et al., ), the colors of a moonlit sky and landscape are close to those under sunlight, although they are slightly shifted to longer wavelengths because the moon reflects twice as much long-wavelength light as short-wavelength light (figure 2.10). The moonlit sky is even polarized for the same reasons that the sunlit sky is (see chapter 8). On average, the moon is above the horizon for half of each night, but the amount of sunlight that it reflects to the Earth and how high it gets in the sky depend strongly on season and lunar phase. When the moon is full, it provides a downwelling irradiance several hundred thousand times weaker than that of the sun at the same elevation. Unexpectedly, the irradiance due to a half moon is not half that of a full moon but closer to one-tenth (figure 2.11). This means that a half moon is not only smaller than a full moon but also has a lower radiance, by about fivefold. This is primarily due to the longer shadows that lunar craters and mountains cast when the sun hits them at a lower angle. The irradiance from a crescent moon is only a few percent of that of a full moon and generally irrelevant because a crescent moon is always so close to the sun that the illumination is dominated by the latter.
Figure 2.10 (A) Landscape photographed under full moonlight. Note that only the presence of stars distinguishes it from a daytime sky. (Courtesy of Joseph A. Shaw) (B) Spectral reflectance of the moon. (From Lawrence et al., 2003)
Figure 2.11 Lunar irradiance (normalized to 1 for a full moon) and fraction of the moon illuminated as a function of degrees from full moon (0° is full moon; 180° is new moon). (Data taken from Lane and Irvine, 1973)
Season and latitude affect the height of the moon just as they affect the height of the sun, but in the opposite direction. During the summer solstice in the northern hemisphere, the sun has its highest maximum elevation, but the moon has its lowest. The situation is reversed for the winter solstice. Because irradiance is higher when the illuminating object is higher in the sky, the brightest northern hemisphere night sky occurs on the full moon that is closest to the winter solstice. At this time, the total illumination is roughly equivalent to early nautical twilight. Another important aspect of lunar phase is that it affects how long the moon is above the horizon at night. A crescent moon is always close to the sun and thus above the horizon for only short periods after sunset or before dawn. It is, of course, also above the horizon for much of the day but not visible due to its proximity to the sun. A half moon is roughly at its zenith during dusk or dawn (depending on whether it is waxing or waning) and above the horizon for half the night. A full moon rises at sunset and is above the horizon all night. Thus, because a full moon is so much brighter and is high in the sky for much of the night, the total amount of illumination it provides (irradiance × time) is orders of magnitude greater than that of a crescent moon. Knowledge of the phase of the moon is therefore critical for biologists studying nocturnal activities and rhythms (e.g., Korringa, ; Lohmann and Willows, ).
Moonless Night Skies
Although it may appear that the illumination from moonless night skies is primarily due to the stars that we can see, these actually play only a minor role. The major components are actually: (1) light from stars too dim to see, (2) zodiacal light, and
Figure 2.12 (A) Landscape under starlight (Zabriskie Point in Death Valley, CA; color temperature preset to 5500K). (B) Reconstructed downwelling irradiance spectrum of a moonless night sky. (From Johnsen et al., 2006)
(3) airglow (figure 2.12). The first component seems counterintuitive, but a careful tabulation of all the stars (Matilla, ) shows that those that are too dim for us to see (mostly red dwarfs with stellar magnitudes well beyond the human visible threshold of about 6) have a total illumination far exceeding that of the roughly eight first-magnitude (i.e., the brightest) stars generally visible in the sky at any one time. Because the dim stars are red dwarfs, they contribute a relatively large amount of long-wavelength light to the total irradiance. The second component, zodiacal light, is sunlight reflected from dust in the solar system and thus has a spectrum similar to that of the sun, which again is slightly long-shifted. Zodiacal light is most prominent after sunset and before dawn and is brightest directly above the location of the sun. The final major component, which is responsible for the narrow peaks in the spectrum, is airglow, which is light emitted when atmospheric oxygen, nitrogen, and water molecules that were split by sunlight during the day recombine at night (they also recombine during the day, but this cannot be seen). This light is what makes the spaces between visible stars at a rural site appear a dark gray rather than true black (although visual adaptation may also play a role). Airglow is dim but found everywhere in the sky, so it contributes a substantial fraction of the irradiance in moonless night skies. Although there are other minor contributors to the irradiance of the night sky (e.g., the Gegenschein and galactic light; see Lynch and Livingston, ), the only other major contributor (albeit limited to certain locations and seasons) is the aurora (figure 2.13).
Like airglow, auroral spectra are due to light emission by excited atmospheric molecules, but in this case the excitation is due to high-speed charged particles from the sun (known as the solar wind) that manage to penetrate the Earth's magnetosphere near the poles. The total irradiance can approach that of full moonlight, particularly during solar sunspot maxima, and the red aurora is even polarized. However, aurorae are relatively ephemeral phenomena, and it is not known whether they have any visual significance to organisms at high latitudes. Because they form a ring around the magnetic poles rather than the geographic poles, and because the northern magnetic pole is in northern Canada, the aurora in the northern hemisphere is
Figure 2.13 A green aurora combined with an unusually intense red aurora. Image taken at Hakoya island, just outside Tromsø, Norway, October 25, 2011. (Frank Olsen, Wikimedia Commons)
more commonly observed in North America than in Europe and Asia. The southern aurora is less commonly seen due to the relatively smaller land mass at circumpolar latitudes below the equator. Whether the aurora affects animal behavior (for example, the daily vertical migrations of many oceanic animals) is unknown but would make an interesting study.
Light Pollution
Unfortunately, artificial lighting has a significant impact on the night sky in many locations (Garstang, ). Although the fraction of the Earth's surface affected by light pollution is still fairly low, this is the fraction where visual ecologists (and many of the animals they study) live (figure 2.14). Because most universities tend to be in or near major urban centers, few visual ecologists see a true night sky on a regular basis, and their research often focuses on organisms that have been significantly affected by artificial nocturnal illumination. In addition to dramatically increasing the nocturnal irradiance, artificial lighting also significantly shifts its spectrum. By far the major portion of artificial illumination is due not to the incandescent and fluorescent lights found in homes but to commercial lighting, which primarily consists of mercury bulbs and high- and low-pressure sodium lamps. This results in a relatively monochromatic and substantially long-shifted spectrum (figure 2.14) whose effects on nocturnal vision and behavior are only just beginning to be explored (e.g., Moore et al., ; Longcore and Rich, ).
Figure 2.14 (A) Light pollution in New York City. (Courtesy of Wikipedia) (B) Downwelling irradiance spectrum dominated by light pollution on a cloudy night at Jamaica Pond in Boston, MA. (From Johnsen et al., 2006)
The Color Cycle of Terrestrial Irradiance
The various ways in which sunlight and starlight are manifested result in impressive color shifts whenever the sun or the moon rises or sets (Rozenberg, ; Meinel and Meinel, ; figure 2.15). As mentioned above, the irradiance goes from relatively spectrally neutral during the day, to long-shifted during sunset, followed quickly by a substantial shift to shorter wavelengths during twilight. This is then followed by a return to the somewhat long-wavelength-shifted spectra of either moonlight or a combined spectrum of starlight, airglow, and zodiacal light. The same process reverses itself during dawn. Thus, the periods surrounding the rising and setting of our two primary celestial bodies are times of significant change in the spectral composition of light, in addition to enormous change in brightness. Much has been written about light adaptation, but the effects of these spectral shifts on vision and signal detection are poorly understood at best. Given the importance of crepuscular periods in the foraging and predatory behavior of many species of animals (e.g., Yahel et al., ; Rickel and Genin, ), this may be a fruitful avenue for future research.
Figure 2.15 Human-based chromaticities (i.e., perceived color) of daylight, sunset, twilight, and nocturnal irradiances. The Planckian locus shows the chromaticities of blackbody radiators as a function of temperature. Data points for this locus are every 500 K up to 5,000 K, and every 1,000 K up to 10,000 K, after which each point is labeled. The color chart shows the sRGB appearance of each temperature. Note that twilight, starlight, and especially light pollution are far outside the range of daytime illumination, even when forest environments are also considered. (Modified from Johnsen et al., 2006) The CIE 1976 u′v′ coordinates are used instead of the more usual CIE 1931 xy chromaticity coordinates because the former are more perceptually uniform: distances on the graph correspond to perceptual differences.
Light in Special Habitats: Forest
Terrestrial illumination during most of the daylight hours is roughly constant in spectral irradiance, but the situation is more complex within forests. The canopy can significantly reduce the intensity, particularly in dense tropical forests, and the wavelength-dependent absorption of light by chlorophyll and other pigments within leaves can affect the spectrum of the light that reaches the forest floor. Because leaves are numerous and small, and because they can be moved by the wind, these effects can vary over small spatial scales and short time periods, leading to diverse spectra (figure 2.16). Although it can be difficult to categorize forest illumination, it is fundamentally influenced by two factors. First, the spectrum is affected by whether the light that passes through gaps in the leaves is primarily due to the sky or the sun. Because the radiance of the sun is so much greater than that of the sky at all wavelengths, the penetration of direct sunlight through a gap in the canopy dominates the spectral irradiance at the forest floor. However, if direct sunlight is blocked by a large number
Figure 2.16 Superposition of 238 spectra of downwelling irradiance in a forest near Baltimore, MD. (From Chiao et al., 2000) The spectra are all normalized to have the same integrated irradiance (from 350 to 700 nm). Note that there are three peaks, representing (in order of increasing wavelength) the peak irradiance of a blue sky, the peak transmission wavelength of leaves, and the peak irradiance of the sun at visible wavelengths.

Figure 2.17 The first two principal components (statistical measures of major sources of variation) of the spectra shown in figure 2.16. The first component roughly represents whether the downwelling irradiance is dominated by sunlight or skylight, and the second component roughly represents how closed the canopy is. Note that although the data roughly fall into classes, there is strong variation, making the idea of a set of typical forest illuminants problematic.
of leaves or, more critically, a tree trunk or branch, then any gaps are dominated by skylight, which is significantly shifted to shorter wavelengths. The second factor is how closed the canopy is. A completely closed canopy creates a downwelling irradiance that is heavily influenced by the absorption spectrum of leaves. This leads to the often-observed greenish light under dense canopies, particularly on sunny days. Because the closure of the canopy and the degree to which direct sunlight reaches the forest floor are variable (in addition to the presence and degree of cloud cover), the spectra within forests form a continuum rather than falling into discrete groupings (figure 2.17). Thus, although it would certainly be convenient to use a "standard" spectrum for a forest, by far the better solution is to use a large number of measured spectra from the specific habitat in question to get a true sense of the local variation. Recent developments in low-cost and portable spectrometers have made this simpler than it was even a few years ago.
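The principal-components summary of figure 2.17 is straightforward to reproduce on one's own measurements. Below is a minimal sketch (ours, not the original analysis of Chiao et al.; the function name, array shapes, and toy data are illustrative assumptions):

```python
import numpy as np

def spectral_pca(spectra, n_components=2):
    """First principal components of a stack of irradiance spectra
    (rows = measurements, columns = wavelength bands). For forest light
    fields, the first two components roughly separate sun- vs.
    sky-dominated spectra and open vs. closed canopy."""
    X = spectra - spectra.mean(axis=0)      # center each wavelength band
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    components = vt[:n_components]          # component spectra
    scores = X @ components.T               # each measurement's coordinates
    return components, scores

# Toy data: 100 "spectra" of 50 bands varying along one hidden axis
rng = np.random.default_rng(1)
toy = 5.0 + np.outer(rng.normal(size=100), np.linspace(1.0, 2.0, 50))
comps, scores = spectral_pca(toy)
print(comps.shape, scores.shape)  # (2, 50) (100, 2)
```

Plotting the two score columns against each other yields a scatterplot directly analogous to figure 2.17, and it makes clear whether one's own forest spectra form clusters or a continuum.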
Light in Special Habitats: Water
The effect of leaves on illumination pales in comparison to the effect of water. Water covers approximately three-fourths of the Earth's surface, and the oceans comprise over 99% of the Earth's livable space, so understanding light in this environment is critically important to much of visual ecology. In oceanic habitats the primary physical process that affects downwelling irradiance is the wavelength-dependent absorption of light by water (figure 2.18). The scattering of light by water is also wavelength-dependent, but its magnitude is far less than that of absorption, and so it plays a lesser role in attenuating light. In clear oceanic waters the absorption is least for blue-green light near 480 nm, so light of this wavelength penetrates farthest. However, it is a common misconception that light of all other wavelengths attenuates far more quickly. In fact,
Figure 2.18 The absorption (solid line) and scattering (dashed line) coefficients of the clearest natural waters, along with their sum (heavy line). (From Smith and Baker, 1981) Note that light attenuation is dominated by absorption.
Figure 2.19 Downwelling irradiance in the equatorial Pacific, modeled from vertical profiles of absorption, scattering, and chlorophyll concentration; curves are labeled by depth in meters. The solid lines show the irradiance when Raman scattering and chlorophyll fluorescence are included. The dotted lines show the irradiance if they are excluded.
once moderate depths are reached, most of the visible light at wavelengths well beyond this blue-green peak attenuates at approximately the same rate as 480-nm light (figure 2.19). This is because most of the long-wavelength photons at depth did not originate as long-wavelength photons from the surface. Instead, these long-wavelength photons are primarily created via a fluorescence-like process known as Raman scattering, in which a small fraction of the 480-nm photons are converted into photons of lower energies and thus longer wavelengths (Marshall and Smith, ). This means that there is far more long-wavelength light at depth than is generally appreciated (by many tens of orders of magnitude for red light at several hundred meters). This light is still at least four orders of magnitude dimmer than the main 480-nm peak but is bright enough at moderate depths to be detected by the visual systems of animals living there, and it is potentially isolatable from the main peak via the filtering of the oil droplets found over the cones of certain aquatic birds and reptiles (see chapter 7). An additional factor in the upper waters is fluorescence from the chlorophyll in phytoplankton, which peaks at about 685 nm. Due to the presence of this fluorescence and of Raman scattering, one cannot simply extrapolate light levels at depth from values measured near the surface, as was done in many earlier works in ocean optics (e.g., Jerlov, ). Because few direct measurements of deep-sea irradiance exist, the best current method is to input vertical profiles of measured absorption and scattering coefficients (see chapter 9) along with chlorophyll concentration into optical modeling software designed for these sorts of calculations (e.g., Hydrolight, Sequoia Scientific Inc.; see Mobley, ). Scattering plays a minor role in the downwelling irradiance of the open ocean, but it critically affects the radiance distribution. Without scattering, the open sea in all
directions except toward the sun and sky would be black. Together, absorption and scattering eventually create what is known as an asymptotic radiance distribution at depth, which is fairly well developed by a few hundred meters in clear water (figure 2.20). In this distribution the radiance is always greatest directly overhead and, for 480-nm light, is far greater than the radiance directly below. This is true regardless of the position of the sun, because the direct downward path through the ocean has the shortest path length and thus the least attenuation. This difference in path length has a small effect near the surface but becomes increasingly important with depth. Raman scattering and chlorophyll fluorescence, in addition to greatly increasing the amount of long-wavelength light at depth, also affect the radiance distribution of these wavelengths. This is because both processes are isotropic, meaning that they emit light roughly equally in all directions (figure 2.20B). Therefore, the radiance distribution of these wavelengths at asymptotic depths is nearly uniform, which may have significant implications for open-ocean camouflage (see chapter 13). Although the blue ocean is by far the largest aquatic habitat, most biological studies occur in nonoceanic waters because of their easier accessibility. These can be either coastal marine habitats or fresh water. Even more so than with forests, these waters vary tremendously in their effects on light. Scattering and absorption coefficients can be far higher than those found in the open ocean and also have different wavelength dependencies that vary with season, tide, depth, proximity to substrate, current, and other factors. For this reason it is unwise in the extreme to make generalizations from one "standard" water type.
Jerlov's () water classification scheme provides a rough guide, but it does not remotely capture the variation and is not useful for fresh water or for marine waters within a few miles of the coast (where much biological research is performed). Although there is a general trend of water color shifting from blue to green to brown as interaction with land increases, this is only the loosest of guidelines and has many exceptions. That said, as a very general rule, the absorption and scattering in nonoceanic waters are usually primarily affected by three substances (aside from the water itself): phytoplankton, suspended sediment, and dissolved organic matter. Waters in which the optics are dominated by water and phytoplankton are known as Case I waters and are relatively simple to analyze and categorize because the main parameter is the chlorophyll concentration. These range from open-ocean water (with low chlorophyll) to eutrophic waters that are intensely green. However, much coastal water and nearly all fresh water have significant amounts of suspended sediment and dissolved organic matter. These are known as Case II waters and are harder to categorize. Even Case I waters can be challenging because different species of phytoplankton have different absorption and scattering characteristics. Thus, as with forests, it is important for aquatic visual ecologists to characterize the water of their study habitat. As any scuba diver knows, even the tide and recent rainfall can profoundly affect the optics of coastal and inland waters, reducing visibility from several meters to only a few centimeters and decreasing light levels at even moderate depths by orders of magnitude. Fortunately, the tools required to characterize a body of water have become less expensive and simpler to use in the last few decades, and large databases of optical measurements now exist.
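The dominance of Raman-scattered light at depth, described above, can be sketched with a toy two-channel model (ours; all coefficients are illustrative round numbers, not measured values): directly transmitted red light decays with the red absorption coefficient, while Raman-generated red light is produced from, and so decays with, the far more penetrating blue-green light.

```python
import math

def red_at_depth(z, a_blue=0.02, a_red=0.30, f_raman=1e-5):
    """Toy model of 600-nm downwelling light at depth z (m): the directly
    transmitted component decays as exp(-a_red * z), while the
    Raman-scattered component is generated from 480-nm light and so decays
    as exp(-a_blue * z). Coefficients are illustrative, not measured."""
    direct = math.exp(-a_red * z)
    raman = f_raman * math.exp(-a_blue * z)
    return direct, raman

direct, raman = red_at_depth(300.0)
# Crossover depth is ln(1/f_raman) / (a_red - a_blue), about 41 m with
# these numbers; far below that, Raman light dominates overwhelmingly.
print(raman / direct)
```

With these placeholder coefficients the Raman component wins by dozens of orders of magnitude at a few hundred meters, which is why the solid and dotted curves of figure 2.19 diverge so spectacularly at long wavelengths.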
Most discussions of aquatic optics begin with Snel’s window (yes, this is how his name is actually spelled: see Bohren and Clothiaux, ), but we end with it to emphasize that it is a relatively minor factor in most considerations of aquatic vision. As is true of every situation where a beam of light passes from a medium of one
Figure 2.20 Radiance as a function of viewing angle and depth in equatorial Pacific waters, with the sun 45° above the horizon. (A) 480-nm blue-green light. (B) 600-nm red light. Note that the radiance is greatest in the sunward direction (corrected for refraction at the air–water interface) at shallow depths but moves to the zenith as depth increases. The 600-nm light is dominated by Raman-scattered light; thus, the radiance is nearly equal in all directions.
refractive index to a medium with a larger index, the beam appears to bend toward the perpendicular to the plane of the interface. In the case of light going from air to water (refractive index ≈ 4/3):

θwater = sin⁻¹[(3/4) sin θair]  (2.1)

where θ is the angle of the beam relative to the zenith of the sky. Thus light from the horizon (θair = 90°) has an angle of about 48.6° in the water, and the entire hemisphere of the above-water world is compressed into an underwater circle about 97° across. This circle is known as Snel's window, and an underwater observer looking up cannot see the outside world outside this angle. Instead one sees a mirrored undersurface if shallow and scattered light if deeper. However, in reality Snel's window is seldom apparent, for several reasons. First, with the exception of small sheltered lakes and ponds, the surface is seldom smooth; instead it is broken up by waves of all sizes. These waves not only obscure the edges of Snel's window but also act as positive and negative lenses that create a highly dynamic light field with short bursts of intensity that can be a couple of orders of magnitude brighter than the average. Second, near the surface the downwelling light is far brighter than the light from other directions, and it is difficult to look up without being dazzled. If one goes deep enough to escape the lensing and the intense radiance, scattered light obscures the window. Finally, in most aquatic habitats Snel's window contains only the sky, which does not look dramatically different from the water outside the window. Thus, an obvious presence of Snel's window is usually limited to small, still ponds within forests. An archer fish, for example, would often have a splendid view of it.
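The Snel's-law geometry above can be checked in a few lines (a sketch of our own; the function name is ours):

```python
import math

def snel_angle(theta_air_deg, n_water=4.0 / 3.0):
    """Underwater zenith angle of a ray entering from air, from Snel's law:
    sin(theta_air) = n_water * sin(theta_water)."""
    return math.degrees(math.asin(math.sin(math.radians(theta_air_deg)) / n_water))

horizon = snel_angle(90.0)       # light arriving from the horizon
print(round(horizon, 1))         # 48.6
print(round(2.0 * horizon, 1))   # full width of Snel's window: 97.2 degrees
```

Note that the compression is strongly nonuniform: rays from near the zenith are barely bent, while the last few degrees above the horizon are squeezed into the rim of the window, which is one reason the window's edge looks so distorted even on a calm pond.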
Bioluminescence

Thus far we have dealt with illumination that comes, directly or indirectly, from stars, either our own or others. However, there are other sources of light in the biosphere. Many (e.g., lightning, lava, meteors) are unlikely to have biological relevance, and another (mechanoluminescence; see below) is only hypothetically relevant. Bioluminescence, however, is clearly important to many organisms. Although it is mostly confined to a few insects and fungi on land and nearly nonexistent in fresh water (for unknown reasons), bioluminescence is common in marine habitats, particularly in the mesopelagic realm of the ocean (200–1000 m depth). Although a reliable estimate is of course difficult to obtain (or even define), it appears that a substantial fraction of mesopelagic fish and crustaceans are capable of emitting light (Herring; Herring and Morin; Hastings and Morin). Bioluminescence is also nearly ubiquitous in mesopelagic cephalopods and gelatinous zooplankton but less common in certain mesopelagic taxa (e.g., copepods, amphipods) and rare or absent in others (e.g., heteropods, pteropods, chaetognaths, salps, doliolids) (Herring; Hastings and Morin; Haddock and Case). In marine benthic habitats, bioluminescence is less common, found in only a few percent of species, typically ophiuroids, sea pens, fish, and certain polychaete worms and hydroids (Morin). However, those species can often be abundant, so the total number of bioluminescent organisms in certain benthic habitats can be large. Unlike sunlight and starlight, which are due to thermal radiation, bioluminescence is a form of chemiluminescence, in which light is produced via chemical reactions.
This is a far more efficient way to produce visible light, and the only possibility for organisms, because producing even dim levels of visible light via thermal radiation would require a body temperature of at least several hundred kelvin. In the case of bioluminescence, the process appears to have evolved independently tens of times and always involves an enzyme-mediated and ATP-dependent oxidation of a small organic molecule. Although the actual substrate varies (see Widder for a review), it tends to have antioxidative properties and is always known as a luciferin. The enzyme is in turn always referred to as a luciferase. The reaction itself occurs in one of three places. In many fish and a few cephalopods the light is produced by symbiotic bacteria that live in a pouch. The bacteria produce light continually, so the animal uses a shutter to control whether the light actually exits the body. The connections between the bacteria and the host are deep and have been explored in detail by Margaret McFall-Ngai (reviewed by Nyholm and McFall-Ngai). In many other cases the light is produced by the animal itself inside cells known as photocytes. These cells are often part of beautifully complex organs that contain filters, mirrors, lenses, and other apparatus for controlling the color and direction of the light. Finally, some marine species perform the chemical reaction outside of their bodies, essentially vomiting the necessary reactants into the water. Certain shrimp and other crustaceans are masters of this, producing brilliant clouds of what is termed "spew bioluminescence." Although it is more efficient than thermal radiation at producing visible light, the process is nevertheless energetically expensive, and even the brightest bioluminescence is still just barely visible under room light. Thus, bioluminescence is limited to mesopelagic depths and/or night.
The rate of photon emission (integrated over all wavelengths and in all directions) varies over several orders of magnitude, even within the same individual. The spectra themselves generally take two forms. In most cases they have a Gaussian shape that can be defined by a peak wavelength and a spectral width parameter known as the full-width half-max (FWHM), the wavelength range over which the spectrum is at least one-half the peak value (figure 2.21). In oceanic species the light emission nearly always peaks in the blue and blue-green portions of the spectrum. However, longer peak wavelengths occur in coastal and terrestrial species, with many green emissions and a handful of yellow and red ones. The second form of bioluminescence spectrum is generally due to the addition of fluorescence. In this case the original emitted light is blue but is then converted to light of a longer wavelength via a co-occurring fluorophore such as green fluorescent protein (GFP). These spectra, because they are fluorescence emission spectra, tend to be narrower and often have a shoulder (note the peak and FWHM of the sea pens in figure 2.21). The functions of bioluminescence appear to be varied but are often poorly understood. A central problem is that bioluminescence is best developed in mesopelagic species, a set of animals that are difficult to collect in good condition (or even in poor condition) and that are about the worst lab rats imaginable. Therefore, direct behavioral confirmation of many of the proposed functions is lacking, and most hypotheses are based on morphology alone. The primary proposed functions include luring, defense, camouflage, communication, and illumination (figure 2.22), and they have been extensively reviewed by Haddock et al. The temporal dynamics of bioluminescent emissions are highly varied and depend on function. In many organisms, such as dinoflagellates, mechanical disturbance
[Figure 2.21 plots light emission width (FWHM, 20–120 nm) against light emission maximum (430–530 nm) for cephalopods, teleosts and sharks, and decapod crustaceans (each with and without counterillumination photophores), pelagic cnidarians, ctenophores, copepods, amphipods, mysids, sea pens, sea anemones, isidid corals, benthic decapod crustaceans, holothurians, asteroids, ophiuroids, and crinoids, with downwelling-irradiance depth contours from 50 to 500 m.]
Figure 2.21 Peak wavelength versus emission width (full-width half-max) for the light emissions from deep-sea benthic and mesopelagic species. White symbols represent photophores used for counterillumination (c). Blue symbols represent light emitted for other purposes in pelagic species. Red symbols represent light emitted by deep-sea benthic species. The green line gives the peak wavelength and spectral width of the downwelling irradiance in clear oceanic waters as a function of depth. All spectra are calibrated in energy units. (Modified from Johnsen et al., 2004, and Johnsen et al., 2012)
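The Gaussian description used here (a peak wavelength plus a FWHM) is easy to compute. A minimal sketch in Python (function name and example values are ours, not from the text), using the standard relation FWHM = 2√(2 ln 2)·σ:

```python
import math

def gaussian_emission(wavelength_nm, peak_nm, fwhm_nm):
    """Relative bioluminescence emission modeled as a Gaussian defined
    by its peak wavelength and its full-width half-max (FWHM)."""
    sigma = fwhm_nm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-0.5 * ((wavelength_nm - peak_nm) / sigma) ** 2)

# By construction the curve is 1.0 at the peak and 0.5 at the
# half-max wavelengths (peak +/- FWHM/2).
peak, fwhm = 475.0, 60.0  # hypothetical blue emission
print(gaussian_emission(peak, peak, fwhm))
print(gaussian_emission(peak + fwhm / 2.0, peak, fwhm))
```

Such a two-parameter model is all that is needed to reproduce most of the pelagic emission spectra summarized in figure 2.21.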
Figure 2.22. Examples of the various uses of bioluminescence. (A) Counterillumination of the ventral surfaces in the eye-flash squid Abralia veranyi. (B) Defensive spew bioluminescence in the shrimp Parapandalus sp. (C) The bioluminescent lure of the barbeled dragonfish Eustomias pacificus. (D) Possibly aposematic bioluminescence in the ophiuroid Ophiochiton ternispinus. (E) Bioluminescent searchlight under the eye of the threadfin dragonfish Echiostoma barbatum.
triggers a brief and presumably defensive flash of light. However, in some species (particularly gelatinous ones), repeated stimulation leads to more complex and longer-lasting patterns of emission including, for example, the rotating pinwheel emissions of the deep-sea coronate medusa Atolla, which are thought to attract higher-order predators that—in theory at least—will attack whatever is preying on the medusa. In fact, a simulation of the bioluminescent emissions of Atolla (using blue LEDs) was recently used to attract the giant squid Architeuthis to a submersible. In animals that use bioluminescence for luring or counterillumination (using ventral photophores to obscure the silhouette), the light emission tends to be steady, although in the latter case it adjusts to match the changing intensity of the downwelling light. The emissions of animals that spew bioluminescence for defense or leave bioluminescent secretions that mark a predator tend to fade slowly over tens of seconds. In the case of light emitted for communication, there is a vast gulf between what is known for fireflies and what is known for nearly all the remaining bioluminescent species. The patterns of light emission of fireflies are species and gender specific and well known. Aggressive mimicry has even been established in certain species (Lloyd). However, with few exceptions (the mating displays of the Bermuda glow worms Odontosyllis enopla, the flash patterns of the ostracods Oncea
and Vargula, and the circumoral light organ of brooding females of the pelagic octopus Japetella), communication via bioluminescence has not even been established in other species. The complex photophores of mesopelagic fish and cephalopods and the species-specific patterns of their distribution suggest communication, but behavioral proof of this has not been obtained.
Mechanoluminescence

A final source of light that is potentially relevant to vision is mechanoluminescence. In this process, light is produced by mechanical processes, including deformation (piezoluminescence), fracturing (triboluminescence), and crystallization (crystalloluminescence) (see Walton for a review and Sweeting for an accessible discussion of the relevant principles). The latter two have been suggested as being at least partially responsible for ambient light at deep-sea vents. A highly sensitive multichannel imaging system showed that the light from deep-sea vents cannot be entirely explained by thermal radiation due to their high temperature (Van Dover et al.) (figure 2.23). The emissions at the red end of the spectrum are brighter than would be predicted from thermal radiation, and—more importantly—the emissions at shorter wavelengths are far higher still. For example, one vent emitted many times more short-wavelength light than would be predicted by thermal radiation (White et al., 2002). The actual source of this light has been heavily debated but appears not to be due to bioluminescence. Active deep-sea
[Figure 2.23 plots photon flux against wavelength (400–1000 nm) for vent emissions imaged at 450, 550, 599, 652, 705, 753, 792, 870, and 947 nm, compared with a 300°C blackbody model.]
Figure 2.23 (A) Ambient light images of a deep-sea volcanic vent at nine visible and near-infrared wavelengths imaged by the ALISS camera system. (B) Spectrum of emitted light at four locations within vent (line colors match contour colors) compared to the spectrum of a blackbody radiator at the vent temperature. Note that there is far more short-wavelength light than predicted from thermal radiation. (From White et al., 2002)
vents are sites of powerful mechanical forces and intense crystallization as magma is quickly cooled by the surrounding cold water; thus, mechanoluminescence is the primary candidate. Interestingly, one of the prominent species at deep-sea vents is the shrimp Rimicaris exoculata, which has nonimaging eyes with high sensitivity (Van Dover et al.). Analyses of these unusual eyes show that they are most sensitive to blue-green light, suggesting that they are specialized to detect not the thermal emission from the vent but rather light produced via another process, such as mechanoluminescence.
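The claim that vent light exceeds the thermal prediction can be checked against Planck's law, expressed in photon rather than energy units. A sketch (constants and function name are our own; 573 K corresponds to the roughly 300°C vent fluid in figure 2.23):

```python
import math

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck_photon_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance in photon units: Planck's law
    divided by the energy per photon, hc/lambda."""
    energy_radiance = (2.0 * H * C**2 / wavelength_m**5) / \
        (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)
    return energy_radiance / (H * C / wavelength_m)

# A 573 K blackbody emits vastly more photons at 900 nm than at 450 nm,
# so any appreciable short-wavelength emission is thermally anomalous.
t = 573.0
ratio = planck_photon_radiance(900e-9, t) / planck_photon_radiance(450e-9, t)
print(f"900 nm vs 450 nm photon flux ratio at {t} K: {ratio:.3g}")
```

The ratio comes out many orders of magnitude above 1, which is why the blue-green light recorded at vents cannot be thermal in origin.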
3 Visual Pigments and Photoreceptors
At noon in the featureless depths of the open ocean, when this world is as bright as it ever can be, a silently cruising dragonfish sees only a dim blue glow overhead grading into darkness all around. As the afternoon progresses, the dragonfish and its invisible neighbors for many kilometers around slowly begin their daily migration toward the surface, following the irresistible call of the slowly dimming blue beacon above. Suddenly, in this monochromatic world, a flash of red bioluminescence appears in the gloom. The dragonfish instantly turns to reorient, alerted by the signal. None of the other fish, even those closer to the flash, respond—they all seem completely unaware of its existence. What allows this one fish to sense the signal, which in fact originated from another dragonfish? The explanation is simple: whereas all the vertically migrating fish have photoreceptors in their retinas that are sensitive to the blue downwelling light field (in fact, fabulously sensitive to such light, for reasons that are explored in depth in the chapters that follow), only the dragonfish also has visual pigments in a subset of its receptors that respond strongly to the red emissions of its conspecifics. In this chapter we examine why many visual adaptations depend ultimately on the properties of visual pigments and photoreceptors.
Visual Pigments: Opsins and Chromophores

In living things, photoreception inevitably begins with a photochemical event—a molecule intercepts a photon of light and is somehow changed. Various molecules, generally known as photopigments, perform this function in animals and plants. The molecules involved in vision are, not surprisingly, called visual pigments. In fact, it appears that in all animals, vision (though not necessarily other aspects of photoreception in animals) ultimately depends on a single family of proteins that have all descended from one common ancestor. These are the opsins. Opsins comprise a major class of a much larger ensemble of membrane-bound proteins called G-protein-coupled receptors, or GPCRs. All these molecules have a similar overall structure, with seven helical segments spanning the cell's plasma membrane, connected by extracellular and intracellular loop segments that may
Figure 3.1 Structures of vertebrate and invertebrate visual pigments, viewed from the plane of the plasma membranes of photoreceptor cells. These three-dimensional views are based on x-ray diffraction analyses by Palczewski et al. (2000) of bovine rhodopsin and by Murakami and Kouyama (2008) of squid rhodopsin. The seven transmembrane helical segments, typical of GPCRs, are colored identically in the two pigments for comparison (there is also an eighth helix, which does not cross the membrane, and the squid rhodopsin has a very short ninth helix as well). The position of the retinal chromophore is illustrated in white. Note that the proposed structures are different in detail, but overall they are generally similar. In this representation the portion of the molecule inside the cell (cytoplasmic region) is at the top of the figure. (Figure prepared by M. L. Porter)
interact with other molecules inside or outside the parent cell. Figure 3.1 provides three-dimensional images of two opsins. GPCRs are called receptor proteins because they become activated when they bind some sort of signal molecule; in the active state, they then couple to members of another class of molecules called G-proteins (giving GPCRs their name). The G-protein, in its turn, goes on to activate a specific enzyme cascade in the cell, which in the case of opsin in a photoreceptor cell leads to a neuronal signal. Overall, GPCRs are the most abundant and diverse types of cell-surface receptors. They handle and respond to an immense number of extracellular signaling compounds that regulate all sorts of cellular processes. Opsin, however, is very unusual—in fact unique—among GPCRs in that only opsin permanently binds an inactive form, or conformation, of its specific signaling molecule. The binding of this component, 11-cis-retinal (or a very close relative, discussed later), gives the two-part molecule a visible color, and the bound molecule is therefore called a chromophore (for "color bearer"). The bound chromophore has two very important properties that make vision effective: it provides the necessary signaling molecule in a ready-to-go inactive form, already bound to the opsin, and it shifts the absorption
of the complex into a range of the solar spectrum that is useful for seeing. This creates visual pigments that respond to important light stimuli extremely rapidly, allowing them to serve a sense aware of transient events in the outside world and enabling appropriately rapid behavioral responses. The critical event that triggers the entire process of vision occurs when the inactive 11-cis form of the chromophore absorbs a single photon of light. Almost always, the absorption snaps the molecule essentially instantaneously into a new conformation, all-trans-retinal (figure 3.2). This longer form of the chromophore, still bound to the opsin and confined by the palisade ring of seven helices, forces the protein as a whole to reshape itself, couple to a G-protein, and thus begin the chain of events that leads to the perception of light. Because the initial photochemical reaction requires only one photon, photoreceptors as a whole, loaded with opsin molecules ready to trigger, literally count photons. In fact, at low levels of light, careful recording of photoreceptor cell electrical properties shows a single small response (called a "quantum bump") for each arriving photon. This means that a photoreceptor cell can in principle respond to the minimum quantity of light that exists. For effective photoreceptor function, the visual pigments a cell contains must have spectral absorption properly placed to capture photons in the wavelength range that illuminates the cell, and they must be present in sufficient quantity to make the probability of capturing arriving photons reasonably high. As noted, visual pigments are unique among GPCRs in that they permanently bind an inactive form of their signal ligand, in this case 11-cis-retinal (or a close relative of this compound, as described shortly).
This certainly contributes to the speed of the photoresponse, because the light-generated formation of the all-trans chromophore already in the binding pocket permits signaling to begin almost instantaneously. Of
Figure 3.2 Structure of retinal, the most common chromophore found in visual pigments. The figure shows how the 11-cis conformation of the molecule is switched by the absorption of a photon of light into an all-trans form; the small arrow indicates the position of the bond where the molecule changes shape. The conventional numbering of the carbon atoms is also illustrated, showing that the 11th carbon atom is the site of the rotation. When the retinal is bound to an opsin, this event is the only direct effect of light on the visual pigment molecule.
the potential photopigments that might underlie vision, opsin-based visual pigments have another highly desirable property. They absorb in the region of the spectrum that is transmitted not only by the atmosphere, but also by water. The origin and early evolution of visual pigments still lie in the realm of speculation, but it is likely that their success and current universal presence in visual systems stem at least in part from their absorptive overlap with the transmission spectra of air and, more importantly, of water.
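Because photoreceptors count single photons, the number caught in any integration time follows Poisson statistics, and the "quantum bumps" described above set a fundamental noise floor. A minimal illustration (the mean catch of 2 photons is a hypothetical value of ours):

```python
import math

def poisson_pmf(k, mean):
    """Probability of counting exactly k photons when the mean photon
    catch per integration time is `mean` (Poisson statistics)."""
    return mean**k * math.exp(-mean) / math.factorial(k)

# With a mean catch of 2 photons, the receptor registers complete
# darkness (zero photons) on roughly 13.5% of integration windows,
# an inescapable consequence of photon noise at low light.
mean_catch = 2.0
for k in range(4):
    print(k, round(poisson_pmf(k, mean_catch), 3))
```

This photon-shot-noise limit reappears throughout the book wherever dim-light vision is discussed.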
Spectral Properties of Visual Pigments

Because all visual pigments are evolutionarily related, and all depend on similar or identical chromophores, they all absorb light in roughly the same way. In fact, about a half-century ago, H.J.A. Dartnall, of Sussex University in England, found that the absorption spectra of all visual pigments could be represented using a simple graphical technique (famously known to vision scientists as the "Dartnall nomogram"). More recently, mathematical means of computing visual pigment absorption spectra from their wavelengths of maximum absorption (the λmax, because the Greek letter λ commonly symbolizes wavelength) have been developed. Thus, all that is needed to predict accurately the absorption of any visual pigment (even of visual pigments that have never existed!) is a λmax value and the right computer program. This greatly simplifies visual ecological modeling using hypothetical visual systems. Standard spectra of visual pigments with λmax from 350 to 575 nm, generated using a common computational approach, are illustrated in figure 3.3. Note that these are all normalized
Figure 3.3 Typical absorption spectra of visual pigment molecules normalized to the same peak value. The λmax values range from 350 nm to 575 nm at 25-nm intervals and thus show the approximate range spanned by natural retinal-based pigments. Spectra are colored to show the appearance of light that would be maximally absorbed. Each spectrum is computed using standard visual pigment templates derived by Govardovskii et al. (2000). Notice the presence of a main peak (the α-band) reaching its maximum at the λmax and a much smaller peak (β-band) at ultraviolet wavelengths (very short-wavelength-absorbing visual pigments do not show a β-band). Also notice that the spectral width of the main peak increases regularly as visual pigments absorb at longer and longer wavelengths.
to an absorbance of 1.0 at their λmax. The mathematical expressions for generating the spectrum do not in fact predict the absolute absorption coefficient of the visual pigment; surprisingly, there has not yet been a systematic exploration of how the absolute absorption of light by visual pigments varies with λmax. All this discussion suggests that visual pigments vary in their spectral properties, but so far we have not really suggested why this might be so. In fact, there are two types of changes in visual pigments that shift their absorption—appropriately enough, one in each of their two components. The simpler of these is just to switch to a different chromophore (figure 3.4). Nearly all visual pigments employ retinaldehyde, usually called retinal (a noun, with the emphasis on the last syllable, not the identically spelled adjective with the accent on "ret"), but some opsins will bind 3-dehydroretinal (dehydroretinal, or retinal₂) in its place. This is common in freshwater vertebrates (e.g., fish or amphibians) and crustaceans (e.g., crayfish) and occurs in a few other animals. Take a look at the structures of retinal and 3-dehydroretinal in figure 3.4. You can see that the sequence of single and double bonds along the length of the carbon chain extends along one side of the six-carbon ring in retinal but continues around the ring of 3-dehydroretinal, right through the number-3 carbon (the numbering system is given in figure 3.2). For reasons beyond the reach of this book, the length of the single-double bond sequence determines the wavelength range absorbed by the chromophore; the longer the sequence, the longer the preferred wavelength range. Thus, the same opsin binding a 3-dehydroretinal chromophore creates a longer-wavelength-absorbing visual pigment than when bound to retinal itself. Figure 3.5 shows examples of this using data from a number of vertebrates and a freshwater crayfish.
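The template approach described above can be sketched in a few lines. The α-band of the A1 (retinal) template of Govardovskii et al. (2000) is a sum of exponentials in x = λmax/λ; the constants below are transcribed from the published template as we recall it, so verify them against the original before any quantitative use:

```python
import math

def govardovskii_a1_alpha(wavelength_nm, lmax_nm):
    """Normalized absorbance of the alpha-band of an A1 (retinal-based)
    visual pigment, following the Govardovskii et al. (2000) template."""
    x = lmax_nm / wavelength_nm
    a = 0.8795 + 0.0459 * math.exp(-(lmax_nm - 300.0) ** 2 / 11940.0)
    return 1.0 / (math.exp(69.7 * (a - x)) +
                  math.exp(28.0 * (0.922 - x)) +
                  math.exp(-14.9 * (1.104 - x)) + 0.674)

# The template is constructed so that absorbance is ~1.0 at the λmax
# and falls off on both spectral limbs.
print(round(govardovskii_a1_alpha(500.0, 500.0), 2))  # ~1.0
```

With only a λmax value, this single function generates the curves of figure 3.3 (apart from the small ultraviolet β-band, which the full template adds separately).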
Figure 3.4 The four types of chromophores known to be used in visual pigments. Retinal is the most common, but 3-dehydroretinal is also frequently used by animals and often is exchanged with retinal in the same opsin. 3-hydroxyretinal appears in many insects, primarily flies and butterflies, and 4-hydroxyretinal is known only for one species of deep-sea squid.
Figure 3.5 (A) The spectral effects on a visual pigment of changing from a retinal chromophore (absorption spectrum illustrated in green, λmax at 533 nm) to a 3-dehydroretinal chromophore (yellow curve, λmax at 567 nm). Also illustrated by a gray line is the expected absorption for a retinal-based visual pigment peaking at 567 nm; note that this curve is narrower than the yellow dehydroretinal-based curve and has a relatively lower peak in the β-band. (Data for the crayfish pigments from Zeiger and Goldsmith, 1989.) (B) Changes of the wavelength of maximum absorption when the same opsin is bound to an A1 or an A2 chromophore, with the trend line illustrated. As λmax for the A1 pigment increases, that for its A2 counterpart increases more rapidly. The red point in the upper right corner is for the crayfish data plotted in panel A.
The other two known chromophores are 3-hydroxyretinal, found in a number of insect species including all flies and butterflies, and 4-hydroxyretinal, so far detected only in a deep-sea squid. There is no obvious explanation for the use of these two chromophores in their host species, and as far as is known these compounds produce visual pigments absorbing essentially the same as ones using retinal itself. So the only chromophore exchange known to produce a significant change in visual pigment spectral location is that between retinal and 3-dehydroretinal. We return to this later in the context of spectral tuning. Changing chromophores offers a limited ability to shift the absorption spectrum of a visual pigment, and if it were the only mechanism available, animals would be restricted to a total of only two visual pigment classes (and mixes of these). Visual ecology and our perception of the world of color would be much less interesting. However, evolution has a much more powerful way to adjust the spectral absorption of visual pigments: changing the amino acids at critical positions in the opsin protein. In fact, spectral maxima of visual pigments based only on the most common chromophore, retinal, range in λmax from the deep ultraviolet to beyond 575 nm in the reds, as seen in figure 3.3. This extensive range is possible because amino acids that are near the chromophore can interact with it and thereby shift its ability to interact with particular wavelengths of light. Retinal itself, detached from opsin, absorbs maximally near 380 nm, but when bound to opsin and surrounded by the amino acids of opsin's transmembrane helices, it generally shifts its absorbance to much longer wavelengths. Because wavelength is inversely proportional to frequency, and the energy of a photon is directly proportional to frequency, a shift to longer-wavelength absorption corresponds to a lowering of
the energy required to activate the chromophore. This lowering of energy is thought to set the long-wavelength limit at which a visual pigment can function. With increasing wavelength, visual pigments become more susceptible to random thermal isomerizations, and eventually thermal noise swamps the ability to detect photons. Thus, the longest-wavelength visual pigments are mostly restricted to freshwater animals with limited ability to thermoregulate that inhabit cold environments (like frogs, fish, and crayfish). Even in these animals, the longest-wavelength pigments of all bind the 3-dehydroretinal chromophore and are normally formed only during the winter, apparently because the extremely cold water minimizes thermal noise. At the other end of the spectrum, many visual pigments are most sensitive to ultraviolet light, often at wavelengths shorter than those absorbed by the retinal chromophore itself. The molecular interactions between the opsin and the chromophore that produce such short-wavelength sensitivity are not well understood, but the advantages of sensing ultraviolet light constitute a theme repeated throughout this book.
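The wavelength-energy relation invoked here is E = hc/λ. A two-line numerical check (constants rounded; the function name is ours):

```python
H = 6.626e-34  # Planck constant (J s)
C = 2.998e8    # speed of light (m/s)
EV = 1.602e-19 # joules per electron volt

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon, in electron volts (E = hc/lambda)."""
    return H * C / (wavelength_nm * 1e-9) / EV

# Longer wavelength means lower energy: a 700-nm photon carries roughly
# 40% less energy than a 400-nm one, pushing red-sensitive pigments
# closer to the thermal noise floor discussed above.
print(photon_energy_ev(400))  # ~3.1 eV
print(photon_energy_ev(700))  # ~1.8 eV
```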
Polarizational Properties of Visual Pigments

The molecular structures of the chromophores bound to opsin provide an unexpected bonus. Take a look at these molecules as portrayed in figure 3.4. The extended series of single and double bonds ("conjugated" double bonds) extends out from the ring portion as a bent linear structure. Because it is this part of the molecule that interacts with light, a photon is most effective at producing photoisomerization when its electrical vector vibrates parallel to the molecule's axis. When bound to opsin, the chromophore is oriented so that this axis extends directly across the palisade ring of helices, as can be seen in figure 3.1. In biological membranes, where visual pigments (like all GPCRs) are confined, the helices extend across the lipid bilayer. Consequently, the chromophore is oriented more or less parallel to the plane of the membrane in which the opsin resides. If rays of light arrive perpendicular to this plane, the visual pigment molecule will be dichroic—it will preferentially absorb photons with electrical vectors parallel to the chromophore. This is illustrated schematically in figure 3.6, showing a case in which absorption for rays
Figure 3.6 Schematic illustration of dichroism in visual pigments. (A) A "wire diagram" of the structure of a typical vertebrate rhodopsin with the orientation of the retinal chromophore illustrated in black (compare to figure 3.1). The arrow traces the path of a ray of polarized light through the molecule, with the angle of polarization indicated at 0° and 45° to the axis of the chromophore. (B) The absorbance that would be measured at each polarization angle, with absorption being greatest when the polarization is parallel to the chromophore's axis and least when the polarization is perpendicular.
polarized parallel to the chromophore is three times as effective as for rays polarized perpendicular to it, with rays polarized at 45° being absorbed at an intermediate level. This property, fundamental to visual pigment molecules, can be exploited in a properly designed photoreceptor cell to detect the polarization of incoming light.
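The dichroic absorption described here can be modeled with a simple cos² (dipole absorber) law. A sketch assuming the dichroic ratio of 3 used in the example above (the parameterization and function name are ours, not from the text):

```python
import math

def dichroic_absorption(theta_deg, ratio_parallel_to_perp=3.0):
    """Relative absorption of polarized light by a chromophore whose
    long axis lies at theta_deg to the light's e-vector, modeled as a
    dipole absorber (cos^2 law) with the given dichroic ratio."""
    c2 = math.cos(math.radians(theta_deg)) ** 2
    # Interpolate between perpendicular (1 unit) and parallel absorption.
    return 1.0 + (ratio_parallel_to_perp - 1.0) * c2

print(round(dichroic_absorption(0), 3))   # 3.0: parallel, most effective
print(round(dichroic_absorption(45), 3))  # 2.0: intermediate
print(round(dichroic_absorption(90), 3))  # 1.0: perpendicular, least
```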
Visual Pigments in Photoreceptors

Visual pigments have been discussed to this point as if they were molecules in isolation, like the chemicals on a sheet of photographic film. In reality, these pigments reside in extremely specialized photoreceptor cells. It is the organization and biochemical properties of these cells, as well as subcellular modifications that alter the light reaching the visual pigments they contain, that determine how visual pigments perform in an actual retina. It was once thought that individual photoreceptor cells express only one type of opsin and thus contain only one visual pigment (although this opsin sometimes binds alternate chromophores, as already mentioned). We now know that single photoreceptors occasionally express two, and possibly even more, opsins. Although this would obviously affect receptor function, the actual significance of multiple opsin expression is not currently understood. Thus, the discussion presented here assumes only a single opsin class per cell.
Light Absorption: Absorbance versus Absorptance

Recall that opsin molecules are confined to cell membranes. By molecular standards visual pigments are good absorbers of light, but a single molecule of pigment, or a monolayer of such molecules in a plasma membrane, would trap only a minuscule portion of the photons passing through. To increase the opportunities for capturing light, all photoreceptor cells used in visual systems avail themselves of the same strategy: they add membrane layers shamelessly and thereby multiply the odds of absorbing arriving photons. This being true, one might think that it is always beneficial to build photoreceptors with as many membrane layers as possible. However, there are often good functional reasons for limiting the number. Before turning to the actual receptor cells of animals, we should consider how pigments of increasing concentration in photoreceptors absorb light. Absorbance of a substance, solution, or transparent material is defined as the logarithm, to the base 10, of the quotient I (incident light intensity) divided by It (transmitted light intensity). For example, a solution of any pigment that transmits 10% of incident light of a given wavelength (in other words, absorbing 90% of it) would have an absorbance of 1.0 at that wavelength. As illustrated in figure 3.3, visual pigments have characteristic absorbance spectra with a predictable shape, given the pigment's λmax. The great advantage of using absorbance to plot the spectral absorption of a substance is that the shape of the spectrum, when normalized, remains constant for any concentration of that substance; this is why template spectra are so useful for modeling visual pigment performance. However, plotting absorbance can be highly misleading when one considers how light is absorbed in a given situation, as in the multiple layers of a photoreceptor cell. To show this, units of absorptance are used.
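The two measures are related by absorptance = 1 − 10^(−absorbance). A quick numerical sketch (the function name is ours), using the absorbance series plotted in figure 3.7:

```python
def absorptance_from_absorbance(absorbance):
    """Fraction of incident light absorbed (absorptance = 1 - It/I),
    given absorbance A = log10(I/It)."""
    return 1.0 - 10.0 ** (-absorbance)

# An absorbance of 1.0 means 10% transmitted, i.e. 90% absorbed.
# Note that doubling absorbance does not double absorptance: the
# relation saturates, which is why thick photoreceptors develop the
# flat-topped absorptance spectra described in the text.
for a in (0.15, 0.3, 0.75, 1.5, 3.0, 7.5):
    print(a, round(absorptance_from_absorbance(a), 4))
```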
Absorptance is simply defined as the fraction of light absorbed at a given wavelength
Figure 3.7 Seemingly unimportant changes in absorbance can produce major effects on how a photoreceptor cell actually absorbs light. "Absorbance" is the standard term used in spectroscopy to refer to the absorption spectrum of a material; when normalized, its form is constant in shape for a given light-absorbing material. "Absorptance" is the actual fraction of light absorbed by a pigment in a given situation, as for the entire length of a photoreceptor cell (outer segment or rhabdom). Data in all four panels refer to the same series of absorbance spectra, representing a typical visual pigment with its λmax at 500 nm but with increasing peak absorbances of 0.15, 0.3, 0.75, 1.5, 3, and 7.5. Note how at low absorbances the absorptance spectral shape resembles the absorbance spectrum, but as absorbance increases the absorptance curve becomes increasingly broad and flat-topped.
and is therefore the same as (I − It)/I (or, more simply, 1 − It/I). Because both absorbance and absorptance are based on incident and transmitted light values, they are easily interconvertible. The best way to see how absorbance and absorptance spectra differ is to compare graphical representations of identical data. Examine figure 3.7 for examples. Panel A plots a series of absorbance spectra of a typical visual pigment with λmax at 500 nm. Peak absorbances were set to six values in successive curves: 0.15, 0.3, 0.75, 1.5, 3, and 7.5. (These values were selected to represent the range that exists across photoreceptors in animals, as explained in the subsequent section on photoreceptor structure.) Panel B shows all six curves normalized to their peaks and superimposed. The normalized curves are identical, so only a single curve—the
normalized curve of the visual pigment's absorbance spectrum—suffices. As already noted, the absorbance spectrum for a given material is constant when normalized, so this is just a graphical statement of the same thing. Absorptance spectra for the same series of data are illustrated in panel C. At low absorbances, the shape of the absorptance spectrum is much like that of absorbance, but as absorbance grows, the absorptance curve becomes increasingly broad; eventually, at a peak absorbance of 7.5, the spectrum looks almost rectangular, being nearly flat-topped across most of the spectrum before dropping precipitously at the long-wavelength end. When these curves are normalized so that the shapes of the spectra can be compared (panel D), it is obvious that different absorptance spectra have radically different shapes, changing most rapidly at moderate absorbance values.

What is going on here? At an absorbance of 1.0, by definition 90% of incoming photons are absorbed. Therefore, wherever absorbance exceeds this value, the absorptance curve approaches its upper limit of 1.0. Even for a peak absorbance of 1.5 (the fourth spectrum in the series), much of the spectrum is near or above an absorptance of 0.9; at a peak absorbance of 3, almost the entire width of the main peak exceeds this level, and when peak absorbance reaches 7.5, essentially the entire absorption spectrum save the extreme long-wavelength tail lies well above it. Thus, as absorbance increases, the regions of the absorption spectrum away from the peak become increasingly important in accounting for how the visual pigment captures light.

The flattening of absorptance at higher values of absorbance is called self-screening, and it is a useful property of visual pigments. When photon capture is the primary concern in photoreceptor design, obviously a high absorbance is desirable.
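The conversion from absorbance to absorptance is a one-line formula, and the self-screening effect is easy to reproduce numerically. The sketch below uses a simple Gaussian as a stand-in for a real visual-pigment template (real templates are more structured), together with the peak-absorbance series shown in figure 3.7:

```python
import math

def absorptance(absorbance):
    """Fraction of incident light absorbed, given absorbance (optical density)."""
    return 1.0 - 10.0 ** (-absorbance)

def template(wavelength_nm, lmax=500.0, width=60.0):
    """Gaussian stand-in for a normalized visual-pigment absorbance spectrum."""
    return math.exp(-((wavelength_nm - lmax) / width) ** 2)

# At low peak absorbance, absorptance falls off the peak much as absorbance does;
# at high peak absorbance, the off-peak absorptance catches up (self-screening).
for peak_a in (0.15, 0.75, 3.0, 7.5):
    at_peak = absorptance(peak_a * template(500.0))
    off_peak = absorptance(peak_a * template(560.0))
    print(f"peak A = {peak_a}: absorptance at 500 nm = {at_peak:.3f}, "
          f"at 560 nm = {off_peak:.3f}, ratio = {off_peak / at_peak:.2f}")
```

The printed ratio climbs toward 1 as peak absorbance grows, which is exactly the broadening and flattening seen in panels C and D of figure 3.7.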
Note, however, that this is not because a higher peak absorbance increases capture at the peak itself—once peak absorbance reaches 1.0, meaning that 90% of photons are absorbed, higher values can raise photon capture at the peak by only 10% at most. The benefit of the elevated peak absorbance is that the absorbance away from the peak also rises, broadening the region of the spectrum where photons are effectively captured. Increasing peak absorbance beyond moderate values provides diminishing advantages; by the time the peak is at 3 (the fifth curve in the series), almost all light across the main absorption band is already being absorbed. Further increases in visual pigment density add almost nothing to the total absorption. The consequence for photoreceptor design is that there is little or no need to extend photoreceptor size or pigment density much beyond this point.

Increasing visual pigment absorbance has other undesirable consequences, particularly when absolute photon capture is not at a premium (for instance, when vision is important in daylight). At high absorbances, the absorptance spectra of visual pigments having different λmax values are essentially identical, and there is almost no change in absorption of light across the spectrum. If it is desirable for the visual system to respond differently to different wavelengths, as for color vision, high absorbance in a single photoreceptor is counterproductive. To summarize these points, photoreceptors built for high sensitivity should have moderate peak absorbance, whereas those used for spectral discrimination should have quite low peak absorbance. Thus, the functional absorbance range for visual pigments is surprisingly restricted, varying over only a factor of three or so.

Now that we have considered the fundamental concepts of how visual pigments in cells absorb light, it is time to turn to the photoreceptor cells used by animals.
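The discrimination penalty just described can also be checked numerically. In this sketch (again using Gaussian stand-ins for real templates, with hypothetical λmax values of 500 and 550 nm), the contrast between two pigments' absorptances at a test wavelength nearly vanishes when peak absorbance is high:

```python
import math

def absorptance(a):
    return 1.0 - 10.0 ** (-a)

def template(wl, lmax, width=60.0):
    return math.exp(-((wl - lmax) / width) ** 2)

def pigment_contrast(wl, peak_a):
    """Michelson contrast between a 500-nm and a 550-nm pigment at one wavelength."""
    a500 = absorptance(peak_a * template(wl, 500.0))
    a550 = absorptance(peak_a * template(wl, 550.0))
    return abs(a500 - a550) / (a500 + a550)

print(pigment_contrast(470.0, 0.3))  # low absorbance: clearly different responses
print(pigment_contrast(470.0, 7.5))  # high absorbance: nearly identical responses
```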
Visual Pigments and Photoreceptors
Photoreceptor Cells

Photoreceptors are built on a huge diversity of plans, but those committed to vision have at least one attribute in common—the cells contain huge amounts of membrane, and the membranes are arrayed to form a series of layers arranged perpendicular to the expected direction from which light will arrive. Despite the diversity of these receptors across all animal taxa, the groups of animals most intensively used in visual ecological research rely on photoreceptors built on two fundamental plans, reflecting the two known molecular forms of opsin. Visual photoreceptors of vertebrates are derived from ciliated epidermal cells and are consequently called "ciliary receptors." All these receptors have their visual pigments localized to stacks of membrane layers piled on top of each other like stored linen. Those of arthropods (insects, crustaceans, etc.) and molluscs (scallops, octopus, squid, etc.) lack any trace of a cilium; their visual pigments instead exist in hordes of tiny, cylindrical microvilli, which usually extend out parallel to each other from the cell body like bristles from a toothbrush. Because the microvillar stack forms a rod-like structure, these microvillar receptors are sometimes called "rhabdomeric photoreceptors" ("rhabdom" means "rod"). A very few animals, for instance scallops, use both types of photoreceptors in their retinas for vision. Arrangements of membranes containing visual pigments in ciliary and rhabdomeric photoreceptors are illustrated in figure 3.8. Despite the fundamentally different arrangements of membranes in these receptor types, all photoreceptors absorb similar amounts of light per unit length, although rods and cones probably are slightly better light absorbers than microvillar photoreceptors. For each micrometer (μm, 1/1,000 of a millimeter) that light travels along a
Figure 3.8 Electron micrographs to show the arrangements of photoreceptor membranes in a vertebrate rod cell and an invertebrate rhabdom. (A) The rod disks can be seen clearly, with each disk containing two membrane layers that would be packed with visual pigment (inset). (Photograph by J. Besharse) (B) The invertebrate rhabdom illustrated here is layered, an arrangement that reveals the thin microvillar tubes seen both in profile and in cross section. Each microvillus has a structural core formed of filamentous actin molecules (arrowheads). (Photograph by C. King-Smith)
47
48
Chapter 3
rod or cone, only a small percentage of the photons is captured, even at the visual pigment's λmax; rhabdomeric receptors absorb somewhat less still per unit length. Happily, absorbances can be summed along the light path. So, to reach a usefully high total absorbance, a rod must be some tens of micrometers long, and for a microvillar photoreceptor to reach the same absorbance, its length must be several times greater—nearly half a millimeter. Receptors commonly reach these dimensions.

Rod and cone photoreceptors are found throughout the vertebrates, and the great majority of vertebrate species have both receptor classes in their retinas. Both cell types have two distinct functional regions. Their outer segments, which face the back of the retina, hold the stacks of membranes that contain the visual pigment (figure 3.9). These are joined to the inner segments by a thin neck, where the cilium that gives these cells their class name is located. Inner segments contain more typical cellular components: the nucleus, mitochondria, and the cellular machinery that maintains the cell and allows it to function biochemically. The innermost end of the cell forms the synapse that connects each photoreceptor with other cells of the retina.

Rods are probably derived evolutionarily from cones and are specialized for extreme light sensitivity. They function in dim-light vision ("scotopic vision"; "scoto-" refers to dark). As would be expected, rods are commonly long and cylindrical. Cones, devoted to bright-light vision ("photopic vision"; "photo-" of course refers to light), tend to be relatively short and, as their name suggests, have conical or tapered sets of membranes. To return to our discussion of absorbance and absorptance in the last section, rods tend to have elevated absorbance and thus are spectrally rather flat-topped in their sensitivity; cones have quite low absorbances and are consequently more finely tuned in their spectral response.
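Because absorbance is additive along the light path, total absorbance is simply the per-micrometer absorbance times receptor length. A minimal sketch of this arithmetic (the per-micrometer values below are assumed, order-of-magnitude figures for illustration only, not measurements from this chapter):

```python
def length_for_absorbance(target_total, absorbance_per_um):
    """Total absorbance = absorbance per micrometer x path length, inverted for length."""
    return target_total / absorbance_per_um

# Assumed, illustrative per-micrometer absorbances:
ROD_PER_UM = 0.015       # hypothetical value for a ciliary outer segment
RHABDOM_PER_UM = 0.0067  # hypothetical value for a microvillar rhabdom

target = 1.0  # a total absorbance of 1.0 captures 90% of photons at the peak
print(length_for_absorbance(target, ROD_PER_UM))      # ~67 um of outer segment
print(length_for_absorbance(target, RHABDOM_PER_UM))  # ~149 um of rhabdom
```

The same target absorbance demands a considerably longer microvillar receptor than ciliary receptor, which is why rhabdoms in particular can grow so long.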
Membranes in rod outer segments differ in another important way from those of cones: the membranes that contain visual pigments are formed into flattened disks, having membrane on both surfaces and being empty internally, much like pita bread. Thus, the visual pigment is actually located inside the outer segment, in disks surrounded by the outer cell membrane. The biochemical signals initiated when visual pigments absorb photons must eventually reach the cell's plasma membrane so that a nervous signal can be generated. In rods, therefore, the separation of the origins of visual chemistry from the outer membrane of the cell slows down the response, making rods relatively poor at signaling rapid changes in visual stimuli. Essentially, rods are high-tech cones that sacrifice speed (and often spatial discrimination as well) for ultimate sensitivity. Cones, on the other hand, respond relatively quickly to stimulus changes but are both less sensitive and noisier than rods. In other words, cones need to absorb many photons of light before the light signal becomes distinguishable from chemical noise (sometimes called "dark noise") in the receptor.

Unlike vertebrate rods and cones, microvillar photoreceptors occur in many animal groups that are not closely related, and their cellular forms are far more diverse in structure and arrangement. Often the microvillar portion—the part of the cell that captures light—extends along most of the length of the receptor cell, so there are no obvious analogues of the outer and inner segments of rods and cones (figure 3.10). The microvilli from one cell almost always extend into a central region where they meet, or lie near, microvilli from other cells. The group of receptors forms a cylinder, with the microvilli all in the center. This mass
Figure 3.9 The structure of typical vertebrate photoreceptors illustrated diagrammatically, showing the inner and outer segments connected by a ciliary neck. In both cases, light enters through the inner segment, that is, from the bottom of the figure. Outer segments of cones (C,D) are generally much shorter than those of rods (A,B). The receptor types differ in the shapes of the outer segments, but they have more important functional differences related in part to the arrangements of the membrane layers in the outer segments that contain visual pigments. Rod membranes are arranged in a series of disks that are contained within the plasma membrane of the outer segment, but in cones the membranes are folded layers of the plasma membrane itself. The more elaborate structure of rods is thought to have evolved from cone-like ancestors. (The detailed cutaway figures in B and D are after Young, 1970)
Figure 3.10 Structures of typical microvillar, or rhabdomeric, photoreceptors found in many invertebrates. Each photoreceptor cell has many layers of finger-like microvilli projecting to one side, forming a structure called the rhabdomere (A). In most animals with these photoreceptors, several cells occur in a circular arrangement with the rhabdomeres facing into a central region (B). (After Stowe, 1980) Commonly, the microvilli adhere at their tips and along their lateral margins to those of adjacent cells to form a composite structure called the rhabdom. In some rhabdoms, such as the one illustrated here, the microvilli form layers with their axes aligned at different orientations. In these cases, the photoreceptor cells produce sets of microvilli interrupted by gaps along the length of the rhabdomere (like a toothbrush).
of joined microvilli is the actual rhabdom, and it acts as an optical unit. Absorbance, and thus peak sensitivity and spectral tuning, depends mostly on the length of the microvillar segment. In crustaceans the microvilli are interrupted at regular intervals so that microvilli of adjacent cells can be inserted into the gaps when the rhabdom is formed (figure 3.10, right). This is the arrangement seen in figure 3.8—the microvilli that are parallel to the page and look like thin rectangles emerge from one cell, whereas those that are perpendicular and look circular come from a different cell. In some crustaceans and insects half the length of the rhabdom can be formed from the microvilli of one or a few cells, and the other half might contain microvilli contributed by a different set of cells. Photoreceptive microvilli extend across the path of incoming light, and each microvillus contains a skeletal core (indicated by arrowheads in figure 3.8) made of actin filaments for support. This core could have other roles in photoreceptor function and design, roles possibly related to enhancing response speed or selectivity.
Optical Specializations of Photoreceptor Cells: Focusing and Filtering

Placing visual pigments in specialized photoreceptor cells has obvious benefits for organizing them properly in the light path on the retina and for tuning absorption and response. Photoreceptors are often modified further to affect how visual pigments perform, and these modifications almost always play roles in ecological functions of vision. Excellent examples of these specializations are frequently seen in the inner segments of vertebrate cones, where brightly colored inclusions sometimes exist (figures 3.11, 3.12). Light enters the cone through the inner segment, so these colored components act as filters, removing selected spectral regions of incoming light
Figure 3.11 Optical modifications of vertebrate photoreceptors for specialized tasks. As illustrated here, light would enter from the bottom of the figure. (A) In nocturnal mammals, nuclei of rod cells (left) often have their chromatin reorganized to form tiny lenses that project light to the outer segments for greater sensitivity (Solovei et al., 2009). Cones of many vertebrate species contain strongly colored pigments in their inner segments that tune the spectral sensitivity of the cone by filtering light before it reaches the visual pigments. These are usually in the form of oil droplets (B) but may exist in other forms. (C) The yellow pigments found in the inner segments of some lungfish cones.
Figure 3.12 Filters in vertebrate cones and the spectra of light they transmit. The peacock retina (A,B), viewed in a microscope spread out flat so that cones are seen mostly end-on, contains cones with four types of oil droplet, termed clear, transparent (here green-colored), yellow, and red. The clear droplets transmit the entire visual spectrum of light, but the transparent, yellow, and red classes act as filters to remove short wavelengths from light before it reaches the outer segments of the cones, which are not visible in the photograph. (Photograph and data provided by N. Hart) Some cone classes in retinas of the Australian lungfish (C) have red oil droplets or yellow inner-segment pigments. (D) Spectra of light absorbed by these pigments. Like the red and yellow oil droplets of the bird, these cone pigments also act as “longpass filters,” blocking short-wavelength light and allowing longer wavelengths to reach the visual pigments contained in the cone outer segments.
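The longpass filtering described in this caption—droplet transmittance multiplying the outer segment's absorptance—can be sketched numerically. All spectra below are toy stand-ins (a Gaussian pigment at an assumed 460-nm λmax and a sigmoidal filter with an assumed 500-nm cutoff), chosen only to show the direction of the effect:

```python
import math

def absorptance(a):
    return 1.0 - 10.0 ** (-a)

def pigment(wl, lmax=460.0, width=60.0, peak_a=0.5):
    """Toy absorptance spectrum of a cone visual pigment (assumed values)."""
    return absorptance(peak_a * math.exp(-((wl - lmax) / width) ** 2))

def droplet(wl, cutoff=500.0, steepness=10.0):
    """Idealized longpass oil-droplet transmittance: blocks short wavelengths."""
    return 1.0 / (1.0 + math.exp(-(wl - cutoff) / steepness))

def effective(wl):
    """Sensitivity of the whole cone: filter transmittance x pigment absorptance."""
    return droplet(wl) * pigment(wl)

# The filter pushes the cone's wavelength of peak sensitivity well beyond
# the visual pigment's own lambda-max, long-shifting and narrowing its tuning.
best_wl = max(range(400, 651), key=effective)
print(best_wl)  # lies above 500 nm, far from the pigment's 460-nm peak
```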
and thereby tuning the spectral response of the receptor as a whole. Oil droplets or other inclusions containing yellow, orange, or red carotenoid pigments are found in cones of many vertebrates, from fish to marsupial mammals. These can dominate the volume of the cone inner segment, and despite their tiny size, only a few micrometers in diameter, they block light effectively to restrict the spectrum reaching the visual pigments (see figure 3.12). Usually oil droplets in cones are matched to the visual pigments of the outer segment, obviously playing a role in tuning color vision. Other pigments also sometimes occur in high concentrations in inner segments; a common adaptation in fish is to use the yellow cytochromes of mitochondria as filters.

A newer discovery is that rods, too, can be modified for improved function to meet particular ecological requirements. Even though the inner layers of the retina are transparent, the cell structures there scatter light, directing it away from the rod outer segments. Nocturnal mammals have modified rods in which the nucleus
has an arrangement of chromatin that is inverted from the normal situation, converting the nucleus from a scatterer into a miniature lens (Solovei et al., 2009; figure 3.11). The nuclei become distinctly more transparent than those of other mammals, and optical modeling shows that they can concentrate light into the outer segment. Examples of mammals with these "inverted" nuclei include mice, cats, deer, rabbits, and even some nocturnal primates.

Microvillar photoreceptors in many invertebrates have modifications similar to those just discussed for vertebrates. They lack inner segments, so when filter pigments are present they are located either in the cell body adjacent to the photoreceptor or directly in the rhabdom, separating groups of photoreceptive microvilli (figures 3.13 and 3.14). Those present in the rhabdom exist in only a few types of crustaceans, as far as we know. They operate much like cone oil droplets with the
Figure 3.13 Filters in invertebrate photoreceptors. (A) The diagram shows three sets of photoreceptors in a swallowtail butterfly, Papilio xuthus. The photoreceptive rhabdom is the grayish structure extending through the middle of the receptor cells. It is surrounded by yellow or red pigments that act as parallel, or lateral, filters for light in the rhabdom. (B) The diagram shows several filters in rhabdoms of a typical stomatopod crustacean, indicated by the colored regions—the rhabdoms are indicated by diagonally hatched regions. A common set of filters, illustrated here, includes four classes indicated by the colors of light they transmit: yellow, orange, red, and blue.
Figure 3.14 Photographs of invertebrate photoreceptor filter pigments and characteristic absorption spectra of the pigments. (A) Lateral filtering pigments in the fiddler crab Uca pugnax (red oily droplets aligned along the rhabdoms, seen from the side at low and high magnification) with (B) flat-topped absorbance spectra. (C) Lateral filter pigments seen end-on in the butterfly Papilio xuthus, where they are arranged in squares surrounding the small, transparent rhabdoms. (D) The red pigments have a more structured absorbance spectrum than that of the fiddler crab pigments. (E) A sampling of the serial filters in stomatopods, seen in cross section at the top and in long section in the bottom three photographs (the transparent regions above and below the filters are the rhabdoms). (F) Absorbance spectra of the four filters found in retinas of Siamosquilla sexava. (Butterfly photograph courtesy K. Arikawa)
exception that more than one set of pigments may occur in a single rhabdom (see figure 3.13). Recall that different receptor cells can contribute to the rhabdom at different levels. In these cases, filters are sometimes formed between the rhabdom's tiers. Thus, as light travels through the rhabdom it transits both intrarhabdomal filter pigments and visual pigments, becoming successively filtered along the way. Photographs and absorbance spectra of some of the intrarhabdomal filters are presented in figure 3.14.

In other cases the filters that modify light and tune spectral absorption within the rhabdom actually lie adjacent to it. Such an arrangement modifies light by lateral filtering, in which pigmented material near a very narrow photoreceptor shares light with the receptor and changes the spectrum of light within the receptor itself. Many arthropods use lateral filters in their photoreceptors. Swallowtail butterflies provide an outstanding example, and the retinas of these animals are quite beautiful (see the photograph in figure 3.14); the pigments tune the butterfly's color-vision system. Less elegant but effective lateral filters exist in many crustaceans, and in semiterrestrial crabs, such as fiddler crabs, they are particularly prominent (figure 3.14).

There is one final means by which accessory pigments in a photoreceptor alter the absorption of light by visual pigments in the same receptor. In this case, instead of being separated from the photoreceptive membranes, the pigments are dispersed among the visual pigments; in fact, for the system to work they can be separated from the visual pigments by no more than molecular dimensions. Such an arrangement permits energy transfer directly from one pigment to another, a process called sensitization. The first example was discovered in flies and is best explained with reference to figure 3.15.
Note that the spectral sensitivity of this fly photoreceptor is similar at medium to long wavelengths to the predicted spectral absorption of the visual pigment it is known to contain, but that the receptor is far more sensitive to ultraviolet light than would be possible with the visual pigment alone. The sensitivity spectrum in the ultraviolet is also much more complex in its details than would be expected from the absorption of a second visual pigment, which is what indicated sensitization to the researchers who discovered the phenomenon (Kirschfeld et al., 1977). In these receptors a second pigment present in the membranes of the photoreceptor absorbs ultraviolet light and passes its energy immediately to the main visual pigment by a process called radiationless transfer. Sensitizing pigments
Figure 3.15 Spectral sensitivity augmentation by sensitizing pigments. This example illustrates data from the blowfly Calliphora obtained by Hamdorf et al. (1992). The thick line is the spectrum of the visual pigment of Calliphora, peaking at 490 nm. The thin line shows the actual spectral sensitivity of the flies when they are fed vitamin A (retinol), revealing a very significant enhancement of ultraviolet sensitivity over what would be expected from the absorbance of the visual pigment alone.
are not thought to be common, but another example exists in vertebrates: a few species of deep-sea fish sense yellow or red light through a sensitizing process that occurs in their rods. Strangely, in these cases the sensitizing pigment is a modified bacterial chlorophyll molecule (Douglas et al., 1998), probably obtained from prey that have migrated down from surface waters.
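Sensitization amounts to adding a second absorption band whose captured energy is handed to the main pigment. A toy additive model makes the effect concrete (all numbers here are assumed for illustration: a 490-nm main pigment, a 350-nm sensitizer, and a guessed transfer efficiency):

```python
import math

def band(wl, lmax, width):
    """Gaussian stand-in for a normalized absorption band."""
    return math.exp(-((wl - lmax) / width) ** 2)

def sensitized_sensitivity(wl, transfer_efficiency=0.8):
    """Main visual pigment plus energy handed over from a UV sensitizer."""
    main = band(wl, 490.0, 60.0)        # assumed main-pigment band
    sensitizer = band(wl, 350.0, 30.0)  # assumed UV sensitizer band
    return main + transfer_efficiency * sensitizer

# In the UV, the sensitized receptor far outperforms the pigment alone:
print(sensitized_sensitivity(350.0))  # dominated by the sensitizer term
print(band(350.0, 490.0, 60.0))       # main pigment alone is nearly blind here
```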
Spectral Tuning of Visual Pigments

Early research quickly revealed that the visual pigments extracted from photoreceptors of different animals had different absorption spectra, so some of the earliest work on visual evolution focused on the ecological advantages of these variations. It was once popular to name visual pigments after the colors seen when looking through a solution containing them; thus, rhodopsin comes from "rhodo" ("red") because it looks red in solution. (Actually, typical solutions of rhodopsin look rather pink, a color that disappears almost immediately when the pigment is exposed to light. Rhodopsin is usually extracted from mammalian rod photoreceptors, and a common misconception is that the "rhod" refers to rods.) Porphyropsin, originally from retinas of freshwater fish, looks purplish in solution ("porphyry" from the Greek for "purple"). The color of a solution is what is transmitted, not what is absorbed: rhodopsin absorbs blue and some green and thus transmits red, whereas porphyropsin absorbs green and some yellow and thus transmits blue plus red, giving purple. Later, other visual pigments were extracted from the retinas of deep-sea fish and crustaceans that transmitted golden-yellow light because they absorbed the complementary color, blue; these were named chrysopsins ("chryso" refers to gold). Happily, the proliferation of these names eventually came to an end (but not before "iodopsin," violet, and "cyanopsin," blue, joined the family tree), as it became obvious that numerous opsins, differing in many important properties not related to their light absorption, can produce visual pigments with similar or even identical absorption spectra. The only term commonly remaining in use is "rhodopsin," which now signifies any visual pigment formed from opsin and retinal; "porphyropsin" is sometimes used for one with a 3,4-dehydroretinal chromophore.
Still, the discoveries of the early, named visual pigments led to speculation on why different visual pigments would exist at all. One idea, covered in detail in a later chapter, is that multiple visual pigments are required in a single retina to enable color vision. However, in many cases it seemed that only a single visual pigment existed in the retina, or at least that one pigment was expressed at far higher levels than others—particularly in fish, which were favored research subjects for visual pigment work. Denton and Warren (1957) noticed that retinas of deep-sea fish contained golden-colored pigments (they first suggested the name chrysopsin, "visual gold"), adding to the earlier observation that those of coastal fish contained rhodopsins ("visual red") whereas freshwater fish had porphyropsins ("visual purple"). They provided a visual ecological explanation for this observation, showing that the blue-absorbing chrysopsins were spectrally placed for effective absorption of the blue light of the deep sea, which is essentially the only remaining bit of sunlight. The dragonfish introduced at the start of this chapter is an interesting exception among deep-sea fish for its possession of red-absorbing visual pigments, making it capable of seeing its own private signals! Today, with data from many more deep-sea fish available, the clustering of the wavelengths of maximum absorption of their visual pigments near 480 nm, in the
Figure 3.16 Histograms showing values of λmax of visual pigments in photoreceptors of deep-sea animals. (Top) Rod photoreceptors of fish, n = 176 (Douglas et al., 2003). (Bottom) Rhabdoms of deep-sea crustaceans, n = 35 (Marshall et al., 2003a). Gray bars illustrate data from euphausiids, or krill; dark bars illustrate true shrimps.
blue region of the spectrum, is obvious (figure 3.16). Clearly, such pigments are well suited for absorbing the narrow-band downwelling blue light as well as the blue bioluminescence present at great depths in the ocean (a topic taken up again in later chapters). New techniques, available only in the last couple of decades, have made it relatively easy to learn how the opsin proteins of fish are adapted to produce these favorable absorption spectra by comparing amino acid variations at sites that could affect the chromophore. High-resolution three-dimensional models of opsin proteins have also simplified the search for critical spectral tuning sites. Deep-sea crustaceans, whose visual pigments have rather different structures from those of fish (being more like the squid type), provide similar data. On the other hand, visual pigment maxima in most species of true shrimps from the deep sea tend to occur at slightly longer wavelengths than expected (figure 3.16) for reasons that are not fully explained (see Marshall et al., 2003a). In contrast to the situation in fish, spectral tuning mechanisms in crustacean (or other invertebrate) visual pigments are only poorly understood at present.

The very simple spectral characteristics of light in the deep sea have made it the perfect laboratory for working out initial principles of visual pigment tuning to meet the ecological requirements for vision. Obviously, what we would like to have now are more general principles: Does knowing the photic environment in which an animal lives permit an accurate prediction of the spectral properties of the visual pigments in
its retina? It is reasonable to begin exploring this question by considering only the major pigment present—in most animals, even when multiple visual pigments exist in the retina, one spectral class is present in far greater amounts than the others. For example, in almost all vertebrates the visual pigment found in rod photoreceptors completely dominates, and rods are specialized for high sensitivity, so if there are general rules that connect visual sensitivity to environment, this is the place to look. A reasonable hypothesis is that rod visual pigments of a given species are spectrally placed to maximize photon capture in the natural environment. It was noted earlier that deep-sea fish have blue-shifted visual pigments compared to surface-living or freshwater species and that their visual pigments are in fact well matched to the light reaching ocean depths. Research on marine mammals by Fasick and Robinson (2000) shows that visual pigment λmax decreases with foraging depth in these vertebrates as well (figure 3.17). The one shallow-living, freshwater species included in their study (the West Indian manatee, Trichechus manatus) had rods with the longest-wavelength visual pigment detected (peaking at 502 nm). Another significant correlation was that the amino acids of these animals' opsins that produced the spectral shifts were at the same sites as those that tune visual pigments in fish rods.

All this is very interesting, but it has not really addressed the hypothesis we began with. An effective visual ecological approach is to model photon capture under typical natural lighting, for example, in daylight or under the forest canopy or underwater. Because visual pigment absorption spectra are mathematically defined once the λmax is assigned, it is easy to calculate the photons that would be absorbed by each of a series of visual pigments, advancing through successive spectral maxima.
Any relevant irradiance spectrum can be used, together with a reasonable value for receptor light capture. Because the question concerns vision in low light, typical spectra of interest are those at night, in deep shadow, or at depth underwater; typical peak absorbances for dim-light photoreceptors range from . to ., so 0.75 is a reasonable choice to
[Figure 3.17 panel labels. Rod λmax values: cow 501 nm; West Indian manatee 502 nm; harp seal 498 nm; humpback whale 492 nm; common dolphin 489 nm; Sowerby’s beaked whale 484 nm. Depth scale: 250–1,000 m.]
Figure 3.17 Visual pigments in rod photoreceptors of marine mammals that reach different maximum depths when foraging, compared with a terrestrial mammal (the cow). Each illustration indicates the foraging depth and λmax of the rod visual pigment of the named species. (This figure by E. Cronin is based on data published by Fasick and Robinson, 2000)
use in modeling. The choice of peak absorbance actually has little effect on the outcome of the modeling in any case, partly because natural spectra are usually broad and partly because of the flattening of visual pigment spectra at moderate concentrations (e.g., figure .). In most cases photon capture rises continually with visual pigment λmax (figure 3.18). This is true for natural sources of light—the sun, the moon (reflecting sunlight), and the night sky—and it even holds for the heavily filtered light passing through leaves of the forest canopy, which produces a clear peak in the green region of the
Figure 3.18 (A) Irradiance spectra in typical habitats occupied by animals and (B) an analysis of how photon capture in each irradiance condition changes with visual pigment λmax. The spectral distribution of irradiance changes throughout the day in the first four panels (daylight, twilight, full moon, starlight), which illustrate a terrestrial habitat open to the sky; the last two panels show irradiance under a forest canopy and at 18 m depth in seawater (relative irradiance vs. wavelength, 400–700 nm). The spiky spectrum in the “starlight” panel is a consequence of spectral lines in the spectrum of auroral emission in the upper atmosphere. A similar set of panels shows the relationship between normalized photon capture and visual pigment λmax (plotted against wavelength of maximum absorption), using template spectra and assuming a total visual pigment absorbance of 0.75 for the photoreceptor as a whole. In most cases the capture of photons increases monotonically with λmax. There is a minor exception for the spectrally narrow twilight sky, where a blue-sensitive visual pigment does best, but the upward trend of photon capture returns at long wavelengths. Only in the very confined, narrow spectrum of light in seawater away from the surface is there a rough match between the spectral irradiance peak and visual pigment λmax.
spectrum. Twilight illumination, dominated by the blue of the overhead sky, favors a broad spread of visual pigments absorbing maximally near nm, but the curve rises again at longer wavelengths. The only case where visual pigments absorbing at medium wavelengths are consistently the most effective is in the narrow blue spectrum transmitted by seawater. This result agrees with the finding discussed earlier where aquatic animals were found to express visual pigments that are evidently tuned to their habitats (figures . and .), but we are left with the paradox that, in nearly all other habitats, animals should have visual pigments absorbing maximally at quite long wavelengths, at least in their photoreceptors intended for use in dim light. No vertebrate, however, has rods with visual pigment λmax placed beyond about nm, and this extreme is highly exceptional; most rod absorbances peak near nm or less. Similarly, the eyes of insects, even night-active species, are never dominated by photoreceptors with λmax beyond nm—longer than that of rods but still far from optimal for best photon capture. The explanation for the consistent selection of seemingly suboptimal pigments almost certainly takes us back to the point mentioned earlier. As the wavelength of maximum absorption marches to longer wavelengths, the threshold for thermal activation decreases, increasing dark noise. Although not yet proven, it appears that the spectral positioning of visual pigments specialized for dim-light vision thus represents a compromise between the rate of spontaneous thermal activation and that of photon detection. Ectothermic (“cold-blooded”) animals, including many amphibians living in cool water, actually outperform humans at very low light intensities and do worse as temperature increases, suggesting that thermally induced dark noise is a functionally important limit for vision in nature (Aho et al., ).
Given the need for reliable detection of the rare arrivals of photons in dim light, each activating only a single visual pigment molecule, and discriminating these from thermal events that can affect any of the immense numbers of visual pigments in a single photoreceptor cell, it is not really surprising that thermal stability can trump photon capture at the limits of visual performance.
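The photon-capture modeling described above can be sketched numerically. The following Python sketch is illustrative only, not the authors' code: a simple Gaussian stands in for a real visual pigment absorbance template (actual analyses use Govardovskii-type templates), the two irradiance spectra are invented (a narrow blue "deep-sea" light and a broad red-rich one), and the 0.75 peak absorbance mentioned in the text is assumed.

```python
import math

def pigment_absorbance(wl, lmax, width=60.0, peak=0.75):
    """Crude Gaussian stand-in for a visual pigment absorbance
    template (real analyses use Govardovskii-type templates)."""
    return peak * math.exp(-((wl - lmax) / width) ** 2)

def photon_capture(irradiance, lmax):
    """Relative photon catch: irradiance times the fraction of
    incident light absorbed, 1 - 10^(-absorbance), summed over
    wavelength."""
    return sum(e * (1.0 - 10.0 ** (-pigment_absorbance(wl, lmax)))
               for wl, e in irradiance)

wls = range(400, 701, 5)

# Toy spectra, chosen only for illustration: a narrow blue
# "deep-sea" light and a broad, red-rich "terrestrial" light
deep_sea = [(wl, math.exp(-((wl - 480) / 30.0) ** 2)) for wl in wls]
red_rich = [(wl, (wl - 380) / 320.0) for wl in wls]

captures = {lmax: photon_capture(deep_sea, lmax)
            for lmax in range(420, 601, 20)}
best = max(captures, key=captures.get)
print(best)  # in the narrow blue spectrum, λmax at the 480-nm peak wins

caps2 = [photon_capture(red_rich, lmax) for lmax in range(440, 601, 40)]
print(caps2 == sorted(caps2))  # under the broad spectrum, capture
                               # rises monotonically with λmax
```

Under the narrow blue spectrum the best λmax sits at the irradiance peak, whereas under the broad red-rich spectrum photon capture simply keeps rising with λmax, mirroring the pattern described in the text.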
How Many Visual Pigments Are Useful in a Retina?

The huge majority of animal species have multiple classes of visual pigments in their retinas. There are many potential advantages for this—functionality in different light regimes (e.g., night vs. day, surface vs. underwater), extension of spectral range, ability to generate different behaviors in response to differently colored stimuli, or providing a retinal substrate for color vision, to name a few possibilities. Vision is about more than simply maximizing sensitivity by capturing photons. What allows an animal to see, to inspect and analyze its environment, is the ability to detect contrast—differences in visibility among objects. In a spectrally limited environment like the deep sea, sensitivity and contrast are often very much the same task because the residual sunlight provides only a narrow spectral band, and thus, variations in perceived brightness must arise from this simple illuminant. That, and the fortunate (and probably not coincidental) fact that visual pigments perform well at the wavelengths transmitted by water, together explain the relationship described earlier for aquatic animals. But in air, where even at night illuminants are spectrally broad and objects reflect a diverse set of spectra, multiple visual pigments obviously have the potential to provide sensory advantages.
Multiple visual pigments, of course, can also underlie new kinds of visual experiences for animals, such as color vision, ultraviolet sensitivity, secret signals (as in the dragonfish), or spatially specialized regions of vision, but the ecology of these visual submodalities is described in detail in later chapters of the book and is not covered here. Nevertheless, the great majority of species have multiple, different visual pigments in their retinas; vertebrates are rather exceptional in having only a single, highly specialized, class of photoreceptors—rods—devoted to dim-light vision. (Even in vertebrates, a few species of amphibians have two spectral types of rods.) Here, we consider some general questions concerning the visual advantages of particular combinations of photopigments. Many animals have only two classes of retinal photoreceptors specialized for bright-light (photopic) conditions, each expressing a different visual pigment. For example, two cone classes are often present in vertebrate retinas. The roles that multiple-pigment systems play in color vision are covered in a later chapter, but here we introduce the concept of optimal combinations of two-pigment systems. It seems obvious that such pigment pairs would be expected to vary among environments, which turns out to be true. The basic approach to exploring how ecological setting might influence the pigment pairs that occur in a retina generally models how different sets of pigments would perform in discriminating objects typically seen by animals living in a given habitat. John Lythgoe and Julian Partridge () measured the spectra of light reflected from objects collected in a forest, for instance leaves and soil litter, and then examined how well different pairs of sensitivity functions, produced by two photoreceptor classes containing different visual pigments, would discriminate among these spectra. 
Their results predicted that a blue-sensitive cone class should be paired with a green-sensitive one (λmax values near and nm), which compares very well with the cone types found in retinas of arboreal mammals. A similar but analytically more sophisticated attack on this problem by the same research team (Lythgoe and Partridge, ), in this case for seeing in the greenish marine waters near coasts, established that cone pairs in resident fish are well placed for seeing, this time with λmax near and nm. Later work by Chiao et al. (2000) used computer imaging and analysis to examine the advantages of different visual pigment pairs when viewing entire scenes, either underwater or in forests (figure 3.19). The approach was to use digital imaging to collect a series of spatially identical images, each at a different wavelength. This works only for stationary scenes, but it permits an extremely dense and unbiased view of thousands of spectra in a scene, and the modeling can be performed for any desired illumination spectrum. Similar optimal cone pairs were identified as in the earlier research, but the new analysis indicated that changes in light with depth would influence the receptor pair best suited for vision in water. The research of Chiao et al. (2000) also found that ratios of receptor sets in aquatic vertebrates matched predicted modeled optima nearly perfectly, but terrestrial mammals have far fewer blue-sensitive cones than expected (this paucity of blue-sensitive cones is illustrated well in figure 3.20). Ultimately, all these analyses are interesting and informative, and they tell us a bit about the ecological outcomes of the uses of certain receptor pairs. What is lacking in these general models at this point is a weighting of the salience of the features of the scenes being viewed. A rare but critically important object—food, a mate, a predator—is much more important visually than just another leaf or rock.
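The style of pair-evaluation analysis described above can be caricatured in a few lines. This is a hypothetical sketch, not the published method in detail: invented reflectance functions stand in for measured leaf, litter, and flower spectra, a Gaussian stands in for the pigment template, and each pair is scored simply by the variance of log quantum-catch ratios across objects (a larger spread means the pair separates the objects better chromatically). The candidate λmax pairs are those analyzed for the terrestrial scene in figure 3.19.

```python
import math

def template(wl, lmax, width=60.0):
    # Gaussian stand-in for a visual pigment absorbance template
    return math.exp(-((wl - lmax) / width) ** 2)

def quantum_catch(reflectance, lmax, wls):
    # Photons caught by a receptor with peak sensitivity lmax
    # viewing a surface with the given reflectance function
    return sum(reflectance(wl) * template(wl, lmax) for wl in wls)

def pair_score(pair, objects, wls):
    """Score a pigment pair by the variance of log quantum-catch
    ratios across objects: more spread = more discriminable
    chromatic signals (a deliberately crude criterion)."""
    ratios = []
    for refl in objects:
        q_short = quantum_catch(refl, pair[0], wls)
        q_long = quantum_catch(refl, pair[1], wls)
        ratios.append(math.log(q_short / q_long))
    mean = sum(ratios) / len(ratios)
    return sum((r - mean) ** 2 for r in ratios) / len(ratios)

wls = range(400, 701, 10)
# Toy stand-ins for measured reflectance spectra (entirely invented):
objects = [
    lambda wl: math.exp(-((wl - 550) / 40.0) ** 2),   # green "leaf"
    lambda wl: 0.1 + 0.8 * (wl - 400) / 300.0,        # brown "litter"
    lambda wl: math.exp(-((wl - 450) / 40.0) ** 2),   # bluish "flower"
]
pairs = [(430, 565), (430, 500), (500, 565)]
best_pair = max(pairs, key=lambda p: pair_score(p, objects, wls))
print(best_pair)
```

With realistic measured spectra and a proper discrimination model the ranking could of course differ; the point is only the shape of the computation, not the winner.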
Similarly, there may be a coevolution between visual receptor classes and
[Figure 3.19 panel labels. Terrestrial scene pigment pairs: 430/565, 430/500, 500/565 nm. Aquatic scene pairs: 430/530, 430/480, 480/530 nm.]
Figure 3.19 Color images of natural scenes in a temperate forest (top left), and a tropical coral reef (top right) represent scenes that were captured in a series of spatially identical images collected at 40 wavelengths from 403 to 696 nm, from which spectral reflectances of all objects in the scene could be obtained. The black-and-white squares in the lower left corner of the full-color images were used for calibration, and the outlined square in each image (128 pixels on each edge) represents the analyzed area. At the bottom are two series of analyzed images of these scenes. The top row, “receptor images” represent pixel-by-pixel the sums of light capture by pairs of visual receptors (λmax values given above each image pair; light captured by the shorter-wavelength receptor is coded in blue, and by the longer in green) in the retina of an animal with only two receptor classes (upper rows). Note that the receptor “brightness” image is completely dominated by the longer-wavelength receptor in these scenes. Below, “dichromatic images” represent normalized light captures of these two receptors (lower rows, coding as described above); because these images are normalized to the same total brightness at each pixel, they show color information separated from brightness, and the blue receptor’s input is visible. (Analyses were carried out by Chiao et al., 2000)
visual signals used by a species. We revisit these ideas frequently in later chapters, as they have played a major role in understanding the design of special visual functions, particularly color vision. It is important to recognize that most animals, whether in terrestrial habitats or in shallow water (whether freshwater, brackish, or marine) have visual systems with more than two photoreceptor classes, so the analyses discussed so far in this section would not apply to them. Attempts have been made to explain the spectral locations of receptors in trichromatic visual systems—those based on three spectral types of receptors—of honeybees and of higher primates. This research was prompted by analysis of color-vision systems, a topic properly discussed in its own chapter later on. Often, as the number of photoreceptor types increases in a retina, the additional classes take on other visual functions, much as the rods and cones of vertebrate retinas are specialized for vision under fundamentally different lighting conditions, so they rarely work together in a given visual task. Thus, their function is ultimately not related to their spectral positioning relative to each other.
Figure 3.20 The cone array in the retina of the chinchilla (Chinchilla lanigera). Cones have been stained with antibodies to the visual pigments they contain and false colored by spectral sensitivity class: short-wavelength-sensitive (“blue”) cones are illustrated in blue, and middlewavelength-sensitive (“green”) ones are in green. Note the far greater concentration of the green type. (Photograph by L. Peichl)
Visual Pigment Shifts during Development

Up to this point we have considered retinas as if the visual pigment or pigments they contain are constant over the life of an animal, and in most cases this is probably true. It is quite common, however, for the visual pigment complement to change over an animal’s life cycle, sometimes at the time of metamorphosis but frequently even within an individual as it grows or (in the case of chromophore shifts) with the seasonal cycle. These changes are not often easy to explain as ecological adaptations. For instance, in one extreme example, freshwater cichlid fish from African rift lakes have cone photoreceptors that can express up to seven different spectral classes of visual pigments (see Parry et al., ). Different sets are expressed among species in patterns that seem to be related to their ecological niches. This seems complicated
enough, but very recent work in Karen Carleton’s laboratory reveals that even in one species the relative quantities of different visual pigments can shift continuously and radically throughout development. The implications of such developmental shifts for vision are clearly profound, but the consequences for visual adaptation are rarely obvious. To conclude this chapter we introduce a few examples in which the functional advantages of developmental changes in visual pigments can be understood as visual ecological adaptations. In the s the technique of microspectrophotometry became feasible for the study of visual pigments within single photoreceptor cells. This approach projects a tiny beam of light through an individual cell visualized in a microscope and gets its name because the light can be scanned through a spectral range, making it possible to measure the amount by which each wavelength is absorbed by that cell. Paul Liebman and Gerald Entine, while at the University of Pennsylvania, developed an instrument that was fully practical for vision science. Using their microspectrophotometer to study developing frogs, they discovered the first example of visual pigment shifts. In this case the observed changes were not due to changing gene expression but simply to chromophore exchange—all photoreceptors in leopard frog (Rana pipiens) tadpole eyes absorbed at longer wavelengths than the corresponding classes in adults because the tadpoles utilized dehydroretinal as a chromophore, whereas the adults used retinal. Liebman and Entine () did not offer an environmental explanation, but the pattern they observed is likely to adapt tadpoles for vision in the murky, brown waters of puddles and ponds where they develop. The shorter-absorbing visual pigments of the adult frogs are like those of typical terrestrial vertebrates and therefore appropriate for their environment. 
Although this first study of development of visual pigments was not particularly concerned with environmental aspects of the changes, today a bit more is known concerning how life history events are associated with the coming and going of visual pigments in the retina. As for adult animals discussed earlier, the findings mainly relate to aquatic animals, partly because it is generally simpler to tease out the environmental influences on vision underwater and partly because many aquatic animals undergo metamorphic changes that separate developmental stages living in quite different environments or having very different light-related behavior. For instance, retinas of larval marine fish often contain ultraviolet- or violet-sensitive cone classes that probably correlate with their need to forage on plankton (Britt et al., ). A good example of how changes of visual pigments appearing in various developmental stages reflect ecological adaptation is seen in the hydrothermal vent crab, Bythograea thermydron (figure 3.21). Adults of this crab are abundant at hydrothermal vents in the Eastern Pacific Ocean. Their visual systems are poorly organized for high-quality spatial vision but instead (though it seems incredible) are capable of detecting infrared radiation emitted from the extremely hot water emerging from the vents, permitting the crabs to avoid life-threatening temperatures. Like most crabs, however, they begin life as planktonic larvae that live in open water, where they are exposed, not to infrared light, but to the typical deep blue present in ocean depths. As the early stages of the larvae approach metamorphosis, they must find their way to vent habitats, most likely using a combination of chemical and visual cues.
Such a life cycle is perfect for animals like these crabs that inhabit geologically ephemeral habitats, as the constant search for newly opened vents by the larvae permits the species to persist even as masses of adult crabs perish when aging vents shut down. But it demands that larvae sense the nearly monochromatic blue light of deep waters
[Figure 3.21 panels: normalized absorbance vs. wavelength (400–700 nm) for the zoea (larva), megalopa (postlarva), and adult.]
Figure 3.21 Developmental changes in visual pigments that occur during development of the hydrothermal vent crab, Bythograea thermydron. Jagged curves are average absorption spectra measured in single photoreceptors of the three developmental stages; smooth curves are best-fit template spectra to the raw data. The first larval stage, the planktonic zoea, acts as a dispersal stage; its visual pigment has λmax at 447 nm (top). The intermediate stage between the zoea and the adult, the megalopa, probably locates the proper habitat for the adult and has a visual pigment with λmax near 479 nm (middle). Finally, the adults that actually inhabit the vents have a peak near 489 nm (bottom). (Data taken from Jinks et al., 2002)
whereas adults attempt to spot infrared light emerging from patches of lethally hot water. The life stages thus require visual systems sampling opposite ends of the visual spectrum. As figure 3.21 shows, this is exactly what these animals have, transitioning from deeply blue-sensitive visual pigments in the planktonic zoeal stages of the young larvae to pigments peaking some 40 nm to longer wavelengths in the adults (see also Jinks et al., 2002). The shift of the visual pigments is accompanied by radical changes in the overall anatomy of the eyes, and in the next few chapters we also transition to considerations of the functional structure of eyes and how ocular anatomy reflects ecological demands.
4 The Optical Building Blocks of Eyes
Almost km below the turbulent surface waters of the open ocean, the softskin smooth-head, a small black fish, slowly swims through the inky darkness. Its tiny eyes pierce the blackness in search of the rare and fitful sparks of living light produced by other animals, the only light this fish will ever see. These tiny punctate flashes of light, brilliant against the shroud of darkness behind, may signal the arrival of a seldom-encountered mate or announce the presence of an equally rare meal. Localizing the point, and acting accordingly, will be a matter of urgency—it may be days or even weeks before such an opportunity arises again. But was that faint glimmer really a light? Did the fish’s eyes collect enough photons to be sure the flash was real? And where exactly was it? Are the fish’s eyes acute enough to be certain of where the flash was located? The answers to these questions ultimately depend on the optical structure of the eyes themselves. Only if the eye is sufficiently sensitive to light and has visual sampling stations adequately dense will the flash be localized clearly. And obviously this visual world—and the eye morphology best suited to it—is very different from that of a coral-reef fish experiencing a colorful and brilliantly lit extended scene. How eyes evolve to best provide their owners with reliable information thus depends on where and how the eyes are used. Even the most rudimentary of eyes—with appropriate levels of sensitivity and resolution—can be highly successful. How successful eyes are constructed is the topic of this chapter. Irrespective of their optical specializations—a topic we return to in chapter —all eyes have one thing in common: they collect and absorb light arriving from different places in the environment, thus giving animals information about the relative distribution of light and dark in the surrounding world, including the contrasts and positions of objects.
This information is used to support a variety of visual tasks, such as identifying and avoiding predators, detecting and pursuing prey or conspecifics, and orientating and navigating within the habitat. Although some animals use their eyes to perform more or less all of these tasks, others do not. As nicely summarized by Land and Nilsson (), all visual systems evolved within one of two main categories, being either “general purpose” or “special purpose.” The eyes of vertebrates and the more advanced invertebrates (such as arthropods and cephalopods) are of the first type: these eyes have evolved to perform many different visual tasks and are accompanied by the advanced brains necessary to analyze all of them. Animals
with “special purpose” visual systems—for example, jellyfish, worms, bivalves, and gastropods—have visual systems optimized for one primary purpose, such as to detect the shadow of an approaching predator. Their eyes, brains, and nervous hardware are often coadapted to perform this special task with maximum neuronal efficiency. But no matter whether general purpose or special purpose, all eyes must fulfill certain basic building requirements in order to provide their owners with reliable visual information. To see what these are, it is necessary to first define what an eye actually is or, possibly more importantly, what it is not.
What Is an Eye?

The simplest type of visual organ—found in many smaller invertebrates and larvae (notably of worms and insects)—is an aggregation of one or more photoreceptors on the body surface, shielded on one side by a pigment cell containing screening pigment granules. Such “eye spots” are unable to detect the direction from which light is incident (i.e., they do not possess spatial vision) and are therefore little more than simple detectors of light intensity. Because spatial vision, no matter how crude, is considered to be the hallmark of a “true eye,” eye spots are not considered true eyes. But for those invertebrates that possess them, eye spots are able to detect the presence or absence of light and to compare its intensity sequentially in different directions, thus allowing animals to avoid light, to move toward it, or to detect sudden changes in its brightness. In a “true eye,” photoreceptors need to be exposed to light in one or more directions and shaded from light in others, thus creating spatial vision. The easiest way to achieve this is to push the pigment-backed layer of light-sensitive photoreceptors into the epidermis, creating a photoreceptor-lined invagination or “pit.” Such “pigment-pit eyes” (figure 4.1) are common in many invertebrate lineages. Because the photoreceptors
Figure 4.1 Pigment-pit eye of the arc clam Anadara notabilis. (A) A semithin longitudinal section through two pigment pit eyes located on the edge of the clam’s mantle. Scale bar: 40 μm. (From Nilsson, 1994) (B) A schematic diagram of the pigment cup eye in A showing its everse invaginated retina of rhabdomeric photoreceptors (exaggerated in size for clarity; small stacked circles represent the microvilli). Each photoreceptor receives light from a different broad region of space (blue shading), thus endowing the eye with crude spatial resolution. (Redrawn with permission from Nilsson, 1994)
each occupy different positions in the pigment-lined pit, they are each able to receive light from different directions in space but are shaded in many other directions, thus creating crude spatial vision. In fact, the deeper and narrower the pit, the narrower the region of space from which photoreceptors receive light and the better the spatial resolution. As a result, pit eyes are considered to be true eyes. This brief discussion of eye spots and pigment-pit eyes highlights two design principles that are common to all true eyes: (1) each photoreceptor has a unique physical position in the retina and thus receives light from a restricted region of the outside world, and (2) light from that region reaches the photoreceptor through an aperture or “pupil” that restricts the entrance of light from other regions (i.e., shades the photoreceptor from unwanted light). This pupil not only affects the resolution but also sets the sensitivity because a larger pupil supplies more light to the retina. Even greater resolution and sensitivity are achieved from another watershed advance during the evolution of vision: the appearance of refractive lenses (Nilsson and Pelger, ). Such lenses—typical of compound eyes and camera eyes—focus an image of the outside world onto the underlying retina and have the potential to dramatically improve visual performance. Good spatial resolution and high sensitivity to light, arguably the two most highly desirable properties of any eye, generally trade off against each other: in an eye of given size, it is difficult to maximize one without compromising the other. Which of the two is maximized ultimately depends on what the animal has evolved to see and, in particular, what light level the animal is usually active in. What is very useful for our purposes is that this trade-off also reveals the basic design principles for constructing a successful eye.
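The claim that a deeper, narrower pit narrows each receptor's view follows from elementary trigonometry. In this hypothetical sketch a point receptor sits at the bottom of a pit and looks out through an aperture of width a at depth h above it, giving a full acceptance angle of roughly 2 arctan(a/2h); all dimensions are invented for illustration.

```python
import math

def acceptance_angle_deg(aperture_um, depth_um):
    """Full angular width of the region of space seen by a point
    receptor at the bottom of a pigment pit, looking out through
    an aperture of the given width at the given depth above it."""
    return math.degrees(2.0 * math.atan(aperture_um / (2.0 * depth_um)))

shallow = acceptance_angle_deg(40.0, 20.0)   # wide, shallow pit
deeper = acceptance_angle_deg(40.0, 200.0)   # same aperture, 10x deeper
narrow = acceptance_angle_deg(10.0, 200.0)   # deeper and narrower
print(round(shallow), round(deeper), round(narrow))  # prints: 90 11 3
```

Deepening the pit tenfold cuts the acceptance angle from 90° to about 11°, and narrowing the aperture shrinks it further, which is exactly the resolution gain described in the text (bought, of course, at the cost of light).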
Building a Good Eye: A Trade-off between Resolution and Sensitivity

What makes a successful eye? To answer this question, consider the simplified schematic camera eye shown in figure 4.2A. In analogy to its namesake, a camera eye has a retina of visual cells that senses an image formed by an overlying cornea and lens. These eyes are found in all vertebrates, including ourselves, as well as in many invertebrates, and we discuss this eye type in more detail in chapter . Even though we have used camera eyes as an example, the reasoning developed below applies to all types of eye. In most camera eyes the photoreceptors are packed into an orderly matrix around the back of the eye that collectively receives a focused image of the outside world. Each photoreceptor is responsible for collecting and absorbing light from a single small region in this image (and thus from a single small region of the outside world). In this sense, to borrow the digital camera term, each photoreceptor defines a single “pixel” in the visual image; two neighboring photoreceptors therefore define two neighboring “pixels.” And as is well known from digital cameras, the density of pixels sets the finest spatial detail that can be reconstructed: smaller and more densely packed pixels have the potential to reconstruct—or “sample”—finer details. However, this is true only if each pixel can collect enough light. If, in the quest for higher resolution, the pixel becomes too small—that is, if the region of space from which the pixel collects photons becomes too small—then it will collect too little light to reliably sample the image. This of course becomes more and more problematic as light levels fall: at any
[Figure 4.2 panel labels. A: eye viewing a visual scene, with lens, pupil (diameter A), receptors (diameter d), pixel, interreceptor angle Δφ, receptive-field width Δρ, half-width θ, nodal point N, and focal length f. B: percentage of sensitivity (0–100) vs. angle φ (deg, −6 to 6) for Macroglossum, Deilephila, and a Gaussian, with the receptive-field half-width Δρ marked.]
Figure 4.2 A schematic camera eye viewing a visual scene (the trunk and bark of a eucalyptus tree). (A) Array of photoreceptors in the retina, whose photoreceptive segments have a length l and a diameter d, receives a focused cone of light of angular half-width θ through a circular pupil of diameter A. Angular separation of two neighboring photoreceptors in the retina is the interreceptor angle Δφ. Each photoreceptor has a narrow receptive field (of angular width Δρ) and thus receives light from only a small part (or “pixel”) of the visual scene. The focal length f of the eye is defined as the distance between the optical nodal point N (in this case located at the center of the lens) and the focal plane at the distal tips of the photoreceptors. (B) Electrophysiologically measured receptive fields of single photoreceptors in the superposition compound eyes of two species of hawkmoths, the day-active Macroglossum stellatarum and the nocturnal Deilephila elpenor. The half-width of the receptive field Δρ, also known as the “acceptance angle,” is wider in the nocturnal species (3.0°) than in the diurnal species (1.3°). The receptive field is roughly Gaussian in shape, although in nocturnal species such as Deilephila, where the cone of incident light can be especially broad, optical cross talk (see text) may induce significant off-axis flanks (as evidenced here), thereby reducing spatial resolution. (B adapted from Warrant et al., 2003)
given light level the minimum visual pixel size (and the maximum resolution) will be limited by the sensitivity of the eye. Thus, inherent in the design of any eye is an unavoidable trade-off between resolution and sensitivity. In an eye that strives for higher resolution the pixels must become smaller and more densely packed. In doing this the photon sample collected by each, and the reliability of the light intensity measurement made by each, necessarily declines. For an eye adapted for vision in bright light, when photon samples are large, this is not likely to be a problem. But for a nocturnal eye the situation is probably quite different—larger and more coarsely packed pixels, each catching as many of the available photons as possible, are instead likely to be the preferred design. This trade-off between resolution and sensitivity is readily revealed by the few anatomical parameters shown in figure 4.2A. Let us begin by considering sensitivity.
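The trade-off can be made concrete with the parameters of figure 4.2A. In this rough sketch the numbers are hypothetical and the constants of the full sensitivity equation are omitted: the interreceptor angle is taken as the standard approximation Δφ ≈ d/f, and for an extended scene the photon catch per receptor is taken to scale as (A·d/f)².

```python
import math

def interreceptor_angle_deg(d_um, f_um):
    # Standard approximation: Δφ ≈ d/f radians, for contiguous
    # receptors of diameter d at focal length f
    return math.degrees(d_um / f_um)

def relative_photon_catch(A_um, d_um, f_um):
    # For an extended scene, catch per receptor scales as (A*d/f)^2
    # (proportionality only; absolute constants omitted)
    return (A_um * d_um / f_um) ** 2

f, A = 1000.0, 500.0   # hypothetical eye: 1-mm focal length, 0.5-mm pupil
for d in (1.0, 4.0):   # fine vs. coarse retinal "pixels"
    print(f"d = {d} um: interreceptor angle = "
          f"{interreceptor_angle_deg(d, f):.3f} deg, "
          f"relative catch = {relative_photon_catch(A, d, f):.2f}")
```

Quadrupling the receptor diameter coarsens the sampling grid fourfold but boosts the photon catch sixteenfold, the coarse-pixel nocturnal design alluded to in the text.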
Sensitivity

Why does a small photon catch limit the ability of photoreceptors to discriminate the contrasts of fine spatial details? The answer to this question is beautifully explained in the classic studies of Selig Hecht, Simon Shlaer, and Maurice Pirenne from years ago (Hecht et al., ; Pirenne, ). They were the first to recognize, on the basis
Chapter 4
of psychophysical experiments, that individual photoreceptors (rods) in the human retina are capable of detecting single photons, an ability probably shared by all animals. However, despite this apparently remarkable performance, they also recognized that this was insufficient on its own to discriminate objects in dim light. In fact, our threshold for detecting the mere presence of a light source requires at least 5–8 photons (Pirenne, 1948). It turns out that the problem for object discrimination in dim light (at least in part) lies in the random and unpredictable nature of photon arrival at the retina. Pirenne explained this point in his well-known illustration of a grid of photoreceptors receiving photons from the image of a perfectly dark circular object on a brighter background (figure 4.3; Pirenne, 1948). At the dimmest light level (1×: figure 4.3, upper left panel), the sporadic and random arrival of photons from the background results in their absorption by only six photoreceptors—in this situation the object remains indistinguishable. Even at a 10-fold higher background
[Figure 4.3 panels, left to right and top to bottom: 1×, 10×, 100×, 1000×]
Figure 4.3 Spatial resolution and light level. A matrix of 400 photoreceptors (small circles) image a black disk (large circle at center) at four different light levels that differ in intensity by factors of 10. Photoreceptors excited by light are shown as white; inactive photoreceptors are shown as black. (Adapted and redrawn from Pirenne, 1948)
Optical Building Blocks of Eyes
light level (10×: figure 4.3, upper right panel), the object is still disguised, hidden in the random “noise” of photon absorptions. After a further 10-fold increase in light intensity (100×: figure 4.3, lower left panel), the dark object becomes noticeable, but not without a degree of uncertainty. It is not until light levels are increased by yet a further 10 times that the object can be distinguished clearly (1000×: figure 4.3, lower right panel). Pirenne's famous illustration highlights one of the main problems for vision in dim light—noise arising from the random arrival of photons, sometimes referred to as photon “shot noise.” The effect of this noise is even more devastating for realistic visual scenes, which contain a great spectrum of contrasts from large to small, than it is for a simple high-contrast black-or-white binary scene like the one used by Pirenne. The reason, as predicted by Pirenne's demonstration, is that the smaller the contrast of an object (or of an internal detail of an object), the earlier it will be erased by shot noise as light levels fall. Thus, at any given light level, the smallest detectable contrast will be set by the level of photon shot noise. At about the same time as Pirenne, two scientists—the American physicist Albert Rose (1948) and the Dutch biologist Hugo de Vries (1943)—independently quantified the effects of photon shot noise on the discrimination of visual contrast. They recognized that because the arrival and absorption of photons are stochastic (and governed by Poisson statistics), quantum fluctuations will limit the finest contrast that can be discriminated. If we say that a photoreceptor absorbs N photons during one visual integration time, the uncertainty or shot noise associated with this sample will be √N photons. In other words, the photoreceptor will absorb N ± √N photons (Rose, 1948; de Vries, 1943).
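Pirenne's grid demonstration is easy to reproduce numerically. The sketch below is a minimal illustration, not a reconstruction of his actual figure: the grid size, disk radius, and photon rates are arbitrary choices, and photon catches are drawn from a Poisson distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def photon_catches(mean_catch, grid=20, disk_radius=5):
    """Poisson photon catches on a grid of receptors whose image contains
    a perfectly dark central disk on a uniformly lit background."""
    y, x = np.mgrid[0:grid, 0:grid]
    on_disk = (x - grid / 2) ** 2 + (y - grid / 2) ** 2 < disk_radius ** 2
    catches = rng.poisson(mean_catch, size=(grid, grid))
    catches[on_disk] = 0  # the dark object contributes no photons
    return catches

# Four light levels differing by factors of 10, as in figure 4.3:
for mean_catch in (0.01, 0.1, 1.0, 10.0):
    excited = np.count_nonzero(photon_catches(mean_catch))
    print(f"mean catch {mean_catch:>5}: {excited:3d} of 400 receptors excited")
```

At the lowest level only a handful of background receptors catch a photon, so the dark disk is indistinguishable from the noise; only at the highest level does the silent disk stand out clearly against an almost uniformly excited background.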
The visual signal-to-noise ratio, defined as N/√N (which equals √N), improves with increasing photon catch, implying that photon shot noise and contrast discrimination are worse at lower light levels. This is the famous “Rose–de Vries” or “square-root” law of visual detection at low light levels: the visual signal-to-noise ratio, and thus contrast discrimination, improves as the square root of the photon catch. To illustrate this, imagine two neighboring photoreceptors (which we will call P1 and P2) that sample photons from a contrast border in a scene. Let P1 sample photons from the brighter side of the border and P2 from the darker side. Let's also say that the contrast is such that the brighter side of the border reflects twice as many photons as the darker side. Consider first a very low level of illumination. Imagine that during one visual integration time P1 samples six photons from the brighter side of the contrast border and P2 samples three photons from the darker side. Due to shot noise and the square-root law, these two samples will have an uncertainty associated with them: 6 ± 2.4 photons and 3 ± 1.7 photons, respectively. In other words, the noise levels would be about half the magnitude of the signals, and in this case the samples would not differ significantly. Thus, at the level of the photoreceptors the contrast border would be drowned by noise and remain invisible. But in much brighter light it is a different story. Imagine instead that P1 samples 2 million photons, and P2 1 million. Now the samples are 2,000,000 ± 1,414 photons and 1,000,000 ± 1,000 photons. These noise levels are a minuscule fraction of the signal levels, and there is no question that at this light intensity the contrast border would easily be seen. Indeed, even borders of significantly lower contrast would likely be seen. Thus, the finest contrast that can be discriminated by the visual system depends on the light level.
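The square-root law can be checked with a short Monte Carlo sketch. The scenario below is the hypothetical one just described: two receptors straddling a two-to-one contrast border, each drawing a Poisson photon sample; the trial count is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(42)

def fraction_correct(mean_bright, trials=100_000):
    """Fraction of integration times in which the receptor viewing the
    bright side of a 2:1 contrast border catches more photons than the
    receptor viewing the dark side (both Poisson distributed)."""
    bright = rng.poisson(mean_bright, trials)
    dark = rng.poisson(mean_bright / 2, trials)
    return np.mean(bright > dark)

for mean_bright in (6, 60, 600, 6000):
    print(f"mean catch {mean_bright:>4}: SNR = {np.sqrt(mean_bright):7.1f}, "
          f"border detected in {fraction_correct(mean_bright):.3f} of trials")
```

As the photon catch grows 100-fold the signal-to-noise ratio grows only 10-fold, but even that is enough to turn an unreliable border into one that is detected in essentially every integration time.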
Moreover, photon shot noise also sets the upper limit to the visual signal-to-noise ratio (which in our simple example above is about 2.4 at the lower light level and about 1,400 at the higher light level). In reality, however, the signal-to-noise
ratio is always lower than this. This is because there are two further sources of noise that degrade vision in dim light even more. Shot noise is an unavoidable consequence of the properties of light and is thus an external or “extrinsic” source of noise. In addition to this there are also two internal, or “intrinsic,” sources of noise. The first of these, referred to as “transducer noise,” is inherent in the photoreceptor's response to single photons. Even though Hecht, Shlaer, and Pirenne recognized quite early that visual systems must be capable of responding to single photons, it was not until nearly two decades later that the American physiologist Stephen Yeandle actually recorded such responses from a photoreceptor in the horseshoe crab Limulus (Yeandle, 1958). These small but distinct electrical responses, which he termed “bumps,” are now commonly recorded from photoreceptors in both vertebrates and invertebrates (figure 4.4A). Six years later, together with Michelangelo Fuortes, Yeandle established that there is a 1:1 relationship between transduced photons and bumps (Fuortes and Yeandle, 1964): a single bump results from the absorption and transduction of no more than a single photon, and a single transduced photon leads to no more than a single bump. What also became apparent was that despite being the response to an invariant stimulus—a single photon—bumps were, in contrast, highly variable, changing in their latency, duration, and amplitude (see figure 4.4A). This inability of the photoreceptors to produce an
[Figure 4.4 scale bars: (A) 2 mV, 0.3 s (nocturnal bee); (B) 1 pA, 50 s (nocturnal toad)]
Figure 4.4 Quantum events in photoreceptors. (A) Single-photon responses recorded intracellularly in a photoreceptor of the nocturnal bee Megalopta genalis (measured in response to continuous dim light). (From Frederiksen et al., 2008) (B) Dark noise recorded in a rod photoreceptor of the nocturnal toad Bufo marinus in complete darkness, showing the two main components of the noise: (1) discrete “dark events” (red dots) indistinguishable from responses to real photons and (2) a continuous low-amplitude response fluctuation (e.g., within the green box). This suction electrode recording shows rod outer segment membrane currents at 22°C. (Adapted from Baylor et al., 1980) Note the much slower response speed in toads compared to bees (A).
identical electrical response to each absorbed photon introduces visual noise. This source of noise, originating in the biochemical processes leading to signal amplification, degrades the reliability of vision (Lillywhite, 1977, 1981; Lillywhite and Laughlin, 1979; Laughlin and Lillywhite, 1982). The second source of intrinsic noise, referred to as “dark noise,” arises because the biochemical pathways responsible for transduction are occasionally activated—even in perfect darkness (Barlow, 1956). Two components of dark noise have been identified in recordings from photoreceptors (figure 4.4B; Baylor et al., 1980): (1) a continuous low-amplitude fluctuation in measured electrical activity (sometimes called membrane or channel noise) and (2) discrete “dark events,” electrical responses that are indistinguishable from those produced by real photons. The continuous component arises from spontaneous thermal activation of rhodopsin molecules or of intermediate components in the phototransduction chain (such as phosphodiesterase) (Rieke and Baylor, 1996). The amplitude of this membrane noise is negligible in insects (Lillywhite and Laughlin, 1979) but can be quite significant in vertebrate photoreceptors, particularly cones. “Dark events” also arise due to spontaneous thermal activations of rhodopsin molecules. In those animals in which dark events have been measured, they are rare (e.g., insects, crustaceans, toads, and primates) (Lillywhite and Laughlin, 1979; Baylor et al., 1980, 1984; Dubs et al., 1981; Doujak, 1985; Henderson et al., 2000; Katz and Minke, 2012). At their most frequent they occur around once per minute at 20°C, although in most species they occur a lot less frequently. At very low light levels both components of dark noise can significantly contaminate visual signals (Rieke, 2008) and even set the ultimate limit to visual sensitivity (as found in the nocturnal toad Bufo bufo) (Aho et al., 1988, 1993a).
And because dark noise is the result of thermal activation of the molecular processes of transduction, it should come as no surprise that it is more pronounced at higher retinal temperatures. Thus, in cold-blooded animals living in cold environments such as the deep sea (where water temperatures are around 4°C), dark noise is probably quite low. Nevertheless, dark noise may still be sufficient to limit the visibility of bioluminescent flashes. These two sources of intrinsic noise—transducer noise and dark noise—further degrade visual reliability in dim light because they raise the contrast threshold for visual discrimination above that resulting from shot noise alone (Laughlin and Lillywhite, 1982). Taken together, intrinsic and extrinsic noise pose a serious threat to the reliability of vision in dim light. How then can visual reliability—and thus contrast discrimination—be improved in dim light? One way is to have an optical system that captures more light (e.g., a wider lens and larger pupil). Another way is to have retinal circuitry optimized to reduce noise and maximize the visual signal-to-noise ratio. The photoreceptors themselves can be optimized for signaling in dim light as in nocturnal insects (e.g., Laughlin, 1996; Frederiksen et al., 2008), or the underlying circuitry can filter out noise by using a strategy of nonlinear thresholding, as has so beautifully been demonstrated at rod-rod bipolar cell synapses in the retina of nocturnal mice (Okawa and Sampath, 2007; Okawa et al., 2010; Pahlberg and Sampath, 2011). Finally, an eye can simply give up the attempt to see fine spatial detail altogether and have larger “pixels,” instead improving the discrimination of coarser details. All of these solutions improve the reliability of vision at low light levels and are commonplace in animals that have evolved to see well in dim light (as we discuss in detail in chapter 11). How eyes maximize their sensitivity to light is the topic to which we turn next.
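The way an intrinsic noise source erodes the shot-noise-limited signal-to-noise ratio can be sketched with a toy calculation. The dark-event rate used below is an arbitrary illustrative number, not a measured value for any species.

```python
import math

def snr(photon_catch, dark_events=0.0):
    """Signal-to-noise ratio for a Poisson photon catch contaminated by
    Poisson-distributed dark events that are indistinguishable from
    genuine photon responses. With no dark events this reduces to the
    Rose-de Vries square-root law, SNR = sqrt(N)."""
    return photon_catch / math.sqrt(photon_catch + dark_events)

for n in (1, 10, 100, 10_000):
    print(f"N = {n:>6}: SNR = {snr(n):8.2f} (clean), "
          f"{snr(n, dark_events=5):8.2f} (with 5 dark events)")
```

Five spurious events per integration time cut the SNR noticeably when the real photon catch is 10 (from about 3.2 to about 2.6) but are irrelevant when it is 10,000, which is why dark noise matters only in very dim light.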
The Optical Sensitivity of Eyes to Extended Scenes and Point Sources

The features of eyes that maximize their sensitivity have been nicely summarized by Michael Land in his famous sensitivity equation (Land, 1981). This equation accounts for the most common type of visual scene experienced by animals: “extended” scenes. Unlike scenes dominated by point sources, such as those experienced by deep-sea animals that search the inky depths for tiny points of bioluminescence, extended scenes are characterized by objects visible as a panorama throughout three-dimensional space. Good sensitivity to an extended scene results from a pupil of large area (πA²/4) and photoreceptors each viewing a large solid angle (or “pixel”) of visual space ((π/4)(d/f)² steradians) and absorbing a substantial fraction of the incident light (kl/(2.3 + kl) for broad-spectrum terrestrial daylight). The optical sensitivity S of the eye to an extended broad-spectrum scene (in units of square micrometers times steradians) is then simply given by the product of these three factors (Kirschfeld, 1974; Land, 1981; Warrant and Nilsson, 1998):
S = (π/4)² A² (d/f)² [kl/(2.3 + kl)]    (broad spectrum, extended)    (4.1a)
In mesopelagic deep-sea habitats, where daylight is essentially monochromatic (blue light of around 480 nm wavelength), the following expression is more accurate (Land, 1981; Warrant and Nilsson, 1998):

S = (π/4)² A² (d/f)² (1 − e^(−kl))    (monochromatic, extended)    (4.1b)
In both equations A is the diameter of the pupil, f the focal length of the eye, and d, l, and k the diameter, length, and absorption coefficient of the photoreceptors, respectively (figure 4.2A). Wider pupils, shorter focal lengths, or larger photoreceptors (i.e., wider “visual pixels”) all increase S. These are common features of eyes adapted for vision in dim light (see chapter 11). The absorption coefficient k is a constant that describes the efficiency with which a photoreceptor absorbs light—a photoreceptor with a larger value of k absorbs a greater fraction of the incident light per unit length than a photoreceptor with a lower value. If the photoreceptor length has units of micrometers, then k has units of inverse micrometers. Thus, a photoreceptor with k = 0.01 μm⁻¹ absorbs a fraction of 0.01 (or 1%) of the propagating incident light for every micrometer of photoreceptor through which the light passes. Photoreceptor absorption coefficients vary over a roughly 10-fold range (Warrant and Nilsson, 1998) and tend to be smaller in invertebrates (ca. 0.005–0.01 μm⁻¹) than in vertebrates (ca. 0.01–0.06 μm⁻¹). The reason for this is most likely the fact that in vertebrate photoreceptors the ratio of photoreceptive membrane to cytoplasm is greater than it is in the photoreceptors of invertebrates. The total fraction of light absorbed by the photoreceptor (the final bracketed term in equations 4.1a and 4.1b) can now easily be calculated. If, for example, the photoreceptor is 100 μm long, then this means that for white light (equation 4.1a) the total fraction of light absorbed is 100 × 0.01/(2.3 + 100 × 0.01) = 0.3. That is, around 30% of the incident light is absorbed by this photoreceptor.
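Equations 4.1a and 4.1b translate directly into code. The sketch below uses the symbols of figure 4.2A; the numerical values in the example calls are illustrative only, not measurements of any particular eye.

```python
from math import pi, exp

def sensitivity_broad(A, f, d, l, k):
    """Equation 4.1a: optical sensitivity (um^2 sr) of an eye to a
    broad-spectrum extended scene. A = pupil diameter, f = focal length,
    d, l, k = photoreceptor diameter, length, absorption coefficient."""
    return (pi / 4) ** 2 * A ** 2 * (d / f) ** 2 * (k * l / (2.3 + k * l))

def sensitivity_mono(A, f, d, l, k):
    """Equation 4.1b: as above, but for a monochromatic extended scene."""
    return (pi / 4) ** 2 * A ** 2 * (d / f) ** 2 * (1 - exp(-k * l))

# A 100-um-long photoreceptor with k = 0.01 um^-1 absorbs
# kl/(2.3 + kl) = 1/3.3, about 30%, of broad-spectrum light:
print(0.01 * 100 / (2.3 + 0.01 * 100))             # 0.303...
print(sensitivity_broad(100, 150, 10, 100, 0.01))  # S in um^2 sr
```

Note that for the same photoreceptor the monochromatic fraction 1 − e^(−kl) always exceeds the broad-spectrum fraction kl/(2.3 + kl), because a monochromatic beam matched to the visual pigment is absorbed more efficiently than white light.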
To put all this into perspective, imagine an eye of optical sensitivity S μm² sr that views an extended scene of average radiance R photons μm⁻² s⁻¹ sr⁻¹. For a nocturnal terrestrial scene, R will be roughly 8 orders of magnitude lower than for the same scene illuminated by bright sunshine. The total number of photons N absorbed per second from the scene by a photoreceptor in the eye is then simply

N = R ⋅ S    (4.2)
Thus, eyes of greater optical sensitivity absorb more photons from an extended scene of given intensity. In other words, the highly sensitive superposition compound eyes of a nocturnal hawkmoth (see chapter 5) will absorb several hundred times as many photons every second as the apposition compound eyes of a day-active honeybee that views exactly the same scene. This example highlights the great variation in optical sensitivities found in the animal kingdom, which vary over many orders of magnitude and are fairly tightly correlated with the light levels in which animals are active (table 4.1). Even very small eyes, like those of nocturnal moths, can have high optical sensitivities. This is because the optical sensitivity is also dependent on the eye's “F-number,” a parameter that describes the eye's light-gathering capacity and does not depend on eye size. The F-number is simply the ratio of the focal length to the pupil diameter, f/A. F-numbers are quite familiar to photographers because camera lenses of lower F-number produce brighter images. The same is true in eyes, as an inspection of equation 4.1 can testify: a higher optical sensitivity results from a lower F-number (i.e., from higher A or lower f). And because f and A tend to be correlated (i.e., larger eyes tend to have longer focal lengths and larger pupils), even small eyes can have very high optical sensitivities. The tiny camera eyes of the nocturnal net-casting spider Dinopis subrufus—a formidable nocturnal hunter (Austin and Blest, 1979)—have an optical sensitivity that is over 100 times greater than that of our own much larger eyes (calculated for dark-adapted foveal vision). Finally, as we mentioned above, many deep-sea animals have eyes adapted to see pinpoints of bioluminescence against a profoundly dark background. The image of a point source on the retina is, by definition, also a point of light.
For a visual detector to collect all the light from this point, the “pixel” it represents need not be any larger than the image itself. Receptors viewing large solid angles of space do not improve the detection of a point source. In fact, one would predict that an eye adapted to reliably detect a point source of light against a dark background should need only a large pupil and long photoreceptors of high absorption coefficient. Thus, in the deep sea, the optical sensitivity (now with units of square micrometers) of an eye to monochromatic bioluminescent point sources is simply:

S = (π/4) A² (1 − e^(−kl))    (monochromatic, point)    (4.1c)
where all symbols have the same meanings as before (figure 4.2A). If accurate localization of the point source is also necessary, then one would predict the eye to have good spatial resolution as well. As we see in chapter 9, these conditions are fulfilled in the eyes of many deep-sea animals that rely on the detection of bioluminescence for survival and reproduction. For a human observer, with large eyes and large dark-adapted pupils (ca. 8 mm diameter), the night sky is ablaze with stars. This, however, is not the case for the
Table 4.1 The optical sensitivities S and photoreceptor acceptance angles Δρ of a selection of dark-adapted eyes (in order of decreasing S)

Species            Animal         Eye      A         d       f          l       S         Δρ
Cirolana           Isopod         LMApp    150       90      100        90      5,092     52°
Oplophorus         Shrimp         MSup     600       200^b   226        200^b   3,300^c   8.1°
Dinopis^e          Spider         NCam     1,325     55      771        55      101       1.5°
Deilephila         Hawkmoth       NSup     937^d     414^b   675        414^b   69        0.9°
Onitis aygulus     Dung beetle    NSup     845       86      503        86      58.9      3.3°
Ephestia           Moth           NSup     340       110^b   170        110^b   38.4      2.7°
Macroglossum       Hawkmoth       DSup     581^d     362^b   409        362^b   37.9      1.1°
Octopus            Octopus        ECam     8,000     200     10,000     200     4.2^c     0.02°
Pecten             Scallop        LMirr    450       15      270        15      4         1.6°
Megalopta          Sweat bee      NApp     36        350     97         350     2.7       4.7°
Bufo               Toad           NCam     5,550     54      4,714      54      2.41      0.03°
Architeuthis       Squid          LMCam    90,000    677     112,500^a  677     2.3^c     0.002°
Onitis belial      Dung beetle    DSup     309       32      338        32      1.9       1.1°
Planaria           Flatworm       Pig      30        6       25         6       1.5       22.9°
Homo               Human          DCam     8,000     30      16,700     30      0.93      0.01°
Littorina          Marine snail   LCam     108       20      126        20      0.4       1.8°
Vanadis            Marine worm    LCam     250       80      1,000      80      0.26      0.3°
Apis               Honeybee       DApp     20        320     66         320     0.1       1.7°
Phidippus^f        Spider         DCam     380       23      767        23      0.038     0.2°

Notes: This table is expanded from an original given by Land (1981). S has units μm² sr. Δρ is calculated using equation 4.4 and has units of degrees. A = diameter of aperture (μm); d and l = diameter and length of the photoreceptor, respectively (μm); f = focal length (μm). See chapter 5 for a full description of eye types. NSup = nocturnal superposition eye; MSup = mesopelagic superposition eye; DSup = diurnal superposition eye; DCam = diurnal camera eye; NCam = nocturnal camera eye; LMCam = lower mesopelagic camera eye; ECam = epipelagic camera eye; LCam = littoral camera eye; DApp = diurnal apposition eye; NApp = nocturnal apposition eye; LMApp = lower mesopelagic apposition eye; Pig = pigment cup eye; LMirr = littoral concave mirror eye.
^a Focal length is calculated from Matthiessen's ratio: f = 1.25A. ^b Rhabdom length quoted as double the actual length due to the presence of a tapetum. ^c S was calculated with k = 0.0067 μm⁻¹, using equation 4.1b for monochromatic light. All other values were calculated with equation 4.1a for broad-spectrum light. ^d Values taken from frontal eye. ^e Posterior medial (PM) eye. ^f Anterior lateral (AL) eye. Sources: Invertebrates (Warrant, 2006) and vertebrates (Warrant and Nilsson, 1998).
net-casting spider Dinopis. Despite its superior optical sensitivity for viewing dimly lit extended scenes, the spider is now at a distinct disadvantage when viewing stars—with a pupil area about 36 times smaller than that of humans, only a small fraction of the stars visible to us would be visible to the spider!
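Equation 4.1c, and the pupil comparison just made, can be sketched as follows. The pupil diameters come from table 4.1; the photoreceptor length and absorption coefficient are illustrative values only (they cancel out of the ratio in any case).

```python
from math import pi, exp

def sensitivity_point(A, l, k):
    """Equation 4.1c: optical sensitivity (um^2) of an eye to a
    monochromatic point source. A = pupil diameter; l and k =
    photoreceptor length and absorption coefficient."""
    return (pi / 4) * A ** 2 * (1 - exp(-k * l))

human_A, dinopis_A = 8000.0, 1325.0  # pupil diameters (um), table 4.1
ratio = sensitivity_point(human_A, 100, 0.01) / sensitivity_point(dinopis_A, 100, 0.01)
print(ratio)  # ~36: with identical receptors, the wider human pupil wins
```

For a point source only the pupil area and the absorbed fraction matter, so the human advantage here is simply the ratio of the pupil areas, (8,000/1,325)² ≈ 36.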
Resolution

As we have just seen, optical sensitivity is independent of eye size, which might lead the reader to wonder why animals would ever bother having bigger eyes. It turns out that small eyes are seriously limited in another property critical for vision—resolution.
Small eyes have small apertures and short focal lengths, and their retinas have limited space for photoreceptors. All of these limitations have serious implications for resolution. A higher physical packing density of photoreceptors, which is specified by their smaller angular separation, will increase the eye's spatial resolution. This angular separation is defined by the interreceptor angle Δφ (figure 4.2A), an angle that depends on the physical separation of photoreceptors in the retina (given by the photoreceptors' center-to-center spacing rcc) and the eye's focal length f:

Δφ = rcc/f    (radians)    (4.3a)
In many retinas the photoreceptors are so tightly packed that they touch. The retina is then said to be “contiguous,” meaning that rcc is the same as the photoreceptor's diameter d. The interreceptor angle is then simply given by:

Δφ = d/f    (radians)    (4.3b)
Spatial resolution can thus be increased by reducing the diameter of the photoreceptors or by increasing the eye's focal length. A downside of reducing d is that it decreases the photoreceptor's photon capture area—and thus the eye's optical sensitivity (equation 4.1)—and this may not be desirable in dim light. In fact, as discussed in chapter 11, photoreceptors are often pooled neurally to effectively enlarge the retinal “pixels” so that they can collectively capture more light in dim conditions. The rod photoreceptors of our own retina are typically pooled in the hundreds for precisely this reason. The downside of increasing the eye's focal length is that this is only possible by having a larger eye, and this may be impractical for many reasons. The limited body size of the animal itself, the constraints associated with its mode of locomotion, or the extra energy costs associated with running a larger eye are just some of the problems that may prevent the evolution of a larger eye. As we describe in chapter 11, some remarkable animals have nevertheless managed to pull this off, either by simply having much larger eyes than other closely related animals of the same size or by cramming a small piece of a much larger eye onto their otherwise small heads. In the absence of all other effects (such as the quality of the optical image focused on the retina; see below) the interreceptor angle Δφ would set the finest spatial detail that could be seen. In reality, however, the finest spatial detail is determined by the size of the photoreceptor's “receptive field,” that is, the size of the region of visual space from which the photoreceptor is capable of receiving photons. The diameter of this roughly circular receptive field is sometimes called the photoreceptor's “acceptance angle” Δρ (figure 4.2A), and in its simplest geometric form this is given by:

Δρ = d/f    (radians)    (4.4)
Values of Δρ in different eyes are given in table 4.1. Notice how eyes having higher optical sensitivity frequently have broader acceptance angles (and lower spatial resolution), revealing the unavoidable trade-off between resolution and sensitivity. This description of the receptive field as a circular region of space is somewhat simplistic: in reality the receptive field of a photoreceptor is equivalent to its sensitivity to light arriving from different angles of incidence φ (figure 4.2B). For axial
light (φ = 0°), sensitivity is maximal (100%), but for every other direction it is less, falling steadily as the angle of incidence is increased. A typical receptive-field profile is roughly Gaussian in shape: in three dimensions it resembles a bell. Notice that equations 4.4 and 4.3b are the same; that is, in a contiguous retina Δρ = Δφ. Thus, to achieve maximal spatial resolution, Δρ could be minimized in exactly the same way as Δφ, with the same downsides. However, in real retinas Δρ is not commonly smaller than Δφ. This is because eyes typically possess one or more optical limitations (e.g., aberrations) that blur the image formed in the retina. This blurring broadens Δρ and can coarsen spatial resolution to a value below that predicted by the photoreceptor matrix. This is readily seen in the two-dimensional receptive field of the nocturnal elephant hawkmoth Deilephila elpenor (figure 4.2B), measured using intracellular electrophysiology. This has a half-width (i.e., acceptance angle Δρ) of 3.0° (Warrant et al., 2003), which is considerably larger than the geometric value of 0.9° obtained using equation 4.4 (table 4.1). Moreover, the experimental curve is somewhat more flared than a Gaussian of the same half-width—the substantial flanks visible at higher values of φ are the result of optical cross talk (see below), an optical flaw that can seriously degrade spatial resolution.
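The geometric relations in equations 4.3b and 4.4 are simple enough to check directly. In the example call below, the focal length is a honeybee-like value from table 4.1, but the 2-μm receptor diameter is an assumed figure chosen only to illustrate the calculation.

```python
from math import degrees

def acceptance_angle_deg(d, f):
    """Equation 4.4: geometric acceptance angle (degrees) of a receptor
    of diameter d in an eye of focal length f (same length units). In a
    contiguous retina this also equals the interreceptor angle (eq. 4.3b)."""
    return degrees(d / f)

# Honeybee-like focal length (66 um, table 4.1) with an assumed 2-um
# rhabdom gives roughly the tabulated 1.7 degrees:
print(acceptance_angle_deg(2, 66))  # ~1.74 deg
```

Doubling the receptor diameter, or halving the focal length, doubles the acceptance angle, which is exactly the resolution-for-sensitivity trade the text describes.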
Optical Limitations on Image Quality

The first limitation is unavoidable and a property of all lenses, namely diffraction (figure 4.5). Diffraction results from an important physical characteristic of light itself—its wave nature. No matter how optically perfect an imaging system otherwise is, there is no escaping diffraction. To see its effects on image quality, imagine a lens that focuses the image of a distant star (figure 4.5A). Even though stars are infinitesimally small points of light, their images on the retina (or on the image sensor of a telescope) are far from infinitesimal and are always blurred due to diffraction. Light from the star reaches the lens as a series of straight parallel wavefronts, and as these strike the outer surface of the lens they are refracted. Those regions of the wavefront near the thinner edge of the lens pass through it rather quickly, but those entering near the thick center of the lens take longer, and by the time they exit they are out of phase with other regions of the wavefront. Instead of being straight, each exiting wavefront is curved—those regions of the wavefront arriving in phase at the image plane of the lens reinforce to create brighter image areas, whereas those arriving out of phase annihilate to create dimmer image areas. As a result, the image of the star turns out to be a complicated circular pattern consisting of a central bright peak surrounded by light and dark rings (figure 4.5B,C). This pattern is known as the Airy diffraction pattern, named for the British astronomer George Biddell Airy, who first described it in 1835. Its central bright peak is known as the Airy disk. Because in the worst cases the Airy disk may cover several photoreceptors, it leads to uncertainty regarding the exact position of the star and thus degrades spatial resolution. The size of the Airy disk depends on the wavelength of the stimulating light (λ) and the diameter of the lens aperture (pupil) through which this light has passed (A).
The half-width ΔρA of the Airy disk's intensity profile (figure 4.5A) is given by

ΔρA = λ/A    (radians)    (4.5)
and its diameter (i.e., the diameter of the entire central peak, measured to the center of the first dark ring) is 2.44ΔρA. Thus, in eyes (or telescopes) with large lenses (i.e., large
A
B
Monochromatic light λ
Pupil A
Lens
Relative intensity %
100
C 50
λ/A
Airy disk
0 Receptors Angle Figure 4.5 Diffraction. (A) Monochromatic light (from a point source at infinity) of wavelength λ passes through the aperture of a lens (defined by a pupil of diameter A). The propagating wavefronts are diffracted, and the resulting image on the retina is a symmetric pattern of light and dark rings called the Airy diffraction pattern, whose central peak (the Airy disk) has a half-width of λ/A. (B) The Airy diffraction pattern seen graphically in three dimensions. (C) The Airy diffraction pattern seen on the image plane from above, showing its characteristic light and dark rings and its conspicuous central peak. (B and C courtesy of Yakir Gagnon)
A), the effects of diffraction can be quite minor. But in eyes with very small lenses the width of the Airy disk could be large, with rather dramatic consequences for resolution. A good example can be found among very tiny insects, whose compound eyes may have ommatidial lens diameters of 10 μm or less. If we consider that such an eye is viewing a point source of monochromatic green light that has a wavelength of 500 nm (or 0.5 μm), then the diameter of the Airy disk would be 2.44 × 0.5/10 radians, or about 7° across. In a compound eye, the interreceptor angle is defined by the angular separation of the ommatidia (see chapter 5), and this could easily be a lot less than 7°. Thus, even though the packing of photoreceptors might predict a certain spatial resolution, it might not be realized because of diffraction. But even in eyes with large apertures, where the Airy disk is as small as, or smaller than, the receptor spacing (such as in large vertebrate eyes), there are other optical defects that may blur the image sufficiently that resolution is nonetheless lower than predicted by the retinal grain. Many lenses (and eyes) suffer from aberrations, notably spherical and chromatic aberration (figure 4.6), and curiously the effects of
these imperfections worsen as lenses become larger relative to their focal length—quite the opposite of diffraction. The first of these aberrations—spherical aberration—arises because of the simple inability of lenses to focus incoming parallel light rays (e.g., from a star) to a single point (figure 4.6A,B). In the case of positive spherical aberration, rays entering the periphery of the lens are focused to a position closer to the rear of the lens than rays entering near its center. The actual position of best focus is that for which the lateral spread of rays is least. In such an eye, our image of the star—already blurred by diffraction—would now be blurred even more. To make matters worse, the position of least lateral spread may not even coincide with the retinal surface. Spherical aberration can be corrected by making the lens surface aspheric, but this solution has limited success because generally only those rays entering the eye parallel, or nearly parallel, to its optical axis are corrected: rays entering at other angles are less well focused than they would be with a spherical lens. Nevertheless, this solution has arisen many times, and our own cornea is aspheric for this reason. Correction of spherical aberration can also be achieved by an appropriate gradient of refractive index within the interior of the lens (with refractive index falling from the lens center to its edge; see chapter 5), and this is frequently the situation found in camera eyes (e.g., spiders: Blest and Land, 1977; cephalopod mollusks: Sweeney et al., 2007; fish: Matthiessen, 1882). Even our own lens corrects spherical aberration in this manner. Chromatic aberration (figure 4.6C,D) arises because the transparent material of the lens is invariably dispersive; that is, its refractive index is higher for shorter wavelengths of light. Thus, light of shorter wavelength is refracted more strongly than light of longer wavelength.
This means that if parallel rays of white light are incident on the lens, the shorter wavelengths (e.g., ultraviolet light) will be brought to focus closer to the rear surface of the lens than the longer wavelengths (e.g., red light). As with spherical aberration, this variation in the position of best focus for different wavelengths leads to a reduction in the quality of the image through blurring. Again, the position of best focus is the position where the least lateral spreading of rays occurs. One way of correcting for chromatic aberration is to replace the single lens element with two or more separate elements, each with a different refractive index and shape, the exact combination carefully chosen to bring rays of different wavelengths to a common focal point. However, no known examples of this exist in nature. A remarkable solution that has evolved, notably in fish, is a lens constructed of several
Figure 4.6 Optical aberrations. (A) Spherical aberration occurs because rays of white light (from a point source at infinity) that enter the periphery of a lens are focused to a position closer to the lens than those entering at the center. The position of best focus coincides with the position of least lateral spreading of the rays—the “circle of least confusion” (colc: arrows). The focal plane (fp) for central rays lies further from the lens. The blurring effects of spherical aberration are evident in an image of a globular cluster of stars taken using a high-powered optical telescope (B, lower image). In the absence of this aberration (B, upper image), individual stars are much more clearly resolved (which is particularly obvious at the center of the cluster). (C) Chromatic aberration occurs because rays of white light (from a point source at infinity) that enter the lens suffer optical dispersion. This arises because the refractive index of the lens material is not the same for each of the constituent wavelengths. Consequently, rays of shorter wavelength are focused to a position closer to the lens than rays of longer wavelength. Chromatic aberration can be seen as rainbow-colored flare in the image of a weather vane (D). (Photograph courtesy of Tony Karp, www.timuseum.com)
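Returning to the diffraction limit discussed above: the scaling of Airy-disk blur with aperture can be sketched in a few lines of Python. The wavelength and the two lens diameters below are illustrative assumptions of ours, not values taken from the text:

```python
import math

def airy_disk_diameter_deg(wavelength_um, aperture_um):
    """Angular diameter (in degrees) of the Airy disk cast by a
    circular aperture: 2.44 * lambda / D radians, measured from
    first dark ring to first dark ring."""
    return math.degrees(2.44 * wavelength_um / aperture_um)

# Green light (0.5 um) through a 25-um ommatidial lens, a plausible
# size for a tiny insect (illustrative values):
blur = airy_disk_diameter_deg(0.5, 25.0)     # roughly 2.8 degrees

# The same light through a 5-mm vertebrate pupil (5000 um):
sharp = airy_disk_diameter_deg(0.5, 5000.0)  # hundredths of a degree
```

The thousandfold difference in blur angle illustrates why diffraction dominates resolution in tiny compound-eye lenses but is nearly negligible for large camera eyes.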
concentric shells of different refractive indices. Each shell of such a “multifocal lens” is responsible for bringing a specific band of wavelengths to a crisp focus on the retina (Kröger et al., ). Another possible correction actually exploits the aberration by placing photoreceptors of a certain spectral sensitivity at the position corresponding to the best focal plane of the corresponding wavelengths. This strategy results in a layered (or “tiered”) retina, with photoreceptors sensitive to shorter wavelengths lying in layers positioned distally to other layers containing photoreceptors sensitive to longer wavelengths. This strategy is apparent in the eyes of some jumping spiders (Blest et al., ), where it may play a role in accurate depth perception (Nagata et al., ). Finally, in some curious eyes, the optical image is as crisp and diffraction-limited as one could wish, but the retina is in totally the wrong place to receive a focused image (figure .)! Even though this is not common, it has independently evolved in several lineages, and in these cases it is invariably adaptive. The typical arrangement is to have the retina located too close to the backside of the lens so that the image plane is located a long way behind the retinal surface. If the eye again views a star, this means that the image of the star, although reasonably sharp some distance behind the retina, will be a wide blurry spot on the retinal surface. Such “underfocused” eyes are thereby poorly resolved. Good examples are insect ocelli (figure .A,B). These camera eyes, which are usually found as a triangular formation of three ocelli between the two compound eyes, are often used together as a kind of optical gyroscope to stabilize flight (see Mizunami, , for this and other functions of ocelli). Most (but by no means all) ocelli are heavily underfocused and are thereby capable only of detecting broad changes in light intensity. 
But this they do quickly and with great sensitivity, a perfect adaptation for monitoring body pitch and roll during flight, when the relative quantities of dark ground and bright sky that occupy the visual fields of the three ocelli are constantly changing (Wilson, ; Stange, ). The camera eyes of the box jellyfish Tripedalia cystophora (figure .C,D) are another excellent example. These jellyfish—which inhabit mangrove swamps in Central America—are remarkable in that they possess eyes of two different types distributed over four sensory clubs— or “rhopalia”—that hang from stalks close to the margin of the bell. Each rhopalium carries a small upper and a large lower camera eye as well as two pairs of simpler pigment-pit eyes (Nilsson et al., ). The upper camera eyes are particularly interesting. As their name suggests, these eyes have an upwardly directed visual field, and these are ideal for viewing the overlying mangrove canopy through the water surface. However, despite an exquisite gradient of refractive index that endows the lens with razor-sharp aberration-free optics, the focal plane of the lens is well below the retina, and the resulting image perceived by the eye is blurry. This blurry image means that only the coarsest details of the scene are registered, in this case the broad dark edge of the mangrove canopy seen against the bright sky. This is the only dorsally located object of interest for jellyfish, and the upper eyes—which act as spatial lowpass filters—ensure that this coarse detail is seen with maximum reliability. Jellyfish rarely stray from the canopy edge—out in the open waters of the lagoon they would likely starve—so holding station close to the mangroves with this reliable visual cue is vitally important (Garm et al., ). The two large and downward-pointing lower camera eyes have a similarly underfocused design, and serve a similar purpose: to reliably detect and avoid the broad dark mangrove roots. 
Thus, diffraction, aberrations, and image location together determine the quality of the optical image that is focused on the retina and can significantly widen the photoreceptor’s acceptance angle Δρ. This, however, is only half the story, because
Figure 4.7 Underfocused eyes. (A) The large median (MO) and lateral (LO) ocelli of the nocturnal bee Megalopta genalis are located on the dorsal head surface between the two compound eyes (CE). (B) The lens (l) of the median ocellus is astigmatic and has two focal planes (a and b) well below the retina (r) where images (in this case of a striped grating) are sharply focused (insets). (C) A single rhopalium of the box jellyfish Tripedalia cystophora showing the upper (ULE) and lower (LLE) lens eyes and the two pairs of pigment pit eyes (PE). (D) The spherical lens of the upper lens eye has a parabolic gradient of refractive index that brings incoming rays to a sharp focus below the retina. (A,B from Warrant et al., 2006; C,D from Nilsson et al., 2005, with kind permission. Scale bars in A, B, and C: 100 μm; B [insets] and D: 50 μm)
in the end the quality of the visual image—that is, the quality of the image actually perceived—depends not only on the optical properties of the lenses but also on those of the retina. Even if the eye’s optics can focus the crispest diffraction-limited image imaginable, the photoreceptors themselves may fail to preserve spatial details present in the image. The reason for this lies in the inability of photoreceptors to capture all of
the light that is intended for them because either (1) the angle of incident light rays is too great for the photoreceptor to capture by total internal reflection or (2) the photoreceptor is so thin that it functions as a waveguide and propagates light as one or more waveguide modes, much of the energy of which remains outside the photoreceptor. In both cases this leaked light can be absorbed by neighboring photoreceptors, which then (incorrectly) interpret the light as having arrived from their own receptive fields. Such "optical cross talk" between photoreceptors broadens their acceptance angle Δρ (i.e., widens their receptive fields) and degrades spatial resolution.
Optical Cross Talk between Photoreceptors

Whether or not a ray is totally internally reflected depends on how steeply the ray strikes the photoreceptor wall (figure .A): if it strikes too steeply the ray will pass out of the photoreceptor. For a given refractive index difference between the photoreceptor and its surround, there is a minimum critical angle of incidence (θc) for rays striking the wall such that they undergo total internal reflection: rays incident at the wall with angles less than θc will pass out of the photoreceptor. This critical angle also sets the maximum angle of incidence (θmax) with which rays can strike the distal tips of the photoreceptors and remain trapped by total internal reflection. θmax for meridional rays (rays passing through the axis of the photoreceptor) depends on the refractive indices of the photoreceptor ni and the external cellular medium no (Warrant and McIntyre, ):

sin θmax = √[(ni/no)² − 1]    (.)
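This equation can be evaluated directly. The refractive indices below are illustrative choices in the range typical of insect photoreceptors, not the text's exact figures:

```python
import math

def theta_max_deg(n_inside, n_outside):
    """Maximum angle of incidence (degrees, measured from the
    photoreceptor axis) for which a meridional ray is trapped by
    total internal reflection:
    sin(theta_max) = sqrt((ni / no)^2 - 1)."""
    s = math.sqrt((n_inside / n_outside) ** 2 - 1.0)
    return math.degrees(math.asin(min(s, 1.0)))

# Assumed indices: ni = 1.36 inside, no = 1.34 outside.
half_cone = theta_max_deg(1.36, 1.34)   # roughly 10 degrees
full_cone = 2.0 * half_cone             # total trapped cone width
```

Even this tiny index step (Δn = 0.02) traps a cone roughly 20° wide, which makes clear why high-F-number eyes meet the condition easily while the much wider focused cones of low-F-number eyes leak light.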
Thus, the greater the internal refractive index relative to the external refractive index the greater is θmax—in other words, the wider the focused cone of light rays that can be captured by the photoreceptor. How broad is this cone for real photoreceptors? In insects, typical values for ni are .–. and for no around . (Kirschfeld and Snyder, ; Nilsson et al., ), and what is immediately obvious is that the difference between them is only slight: Δn ≈ .–.. When these values are used in equation ., one obtains θmax ≈ °. This means that for total light trapping, the maximum allowable angular width of a cone of light rays perfectly (and idealistically) coincident at the center of the photoreceptor would be twice this angle (see figure .A), that is, °. In many eyes, especially those with high F-numbers adapted for vision in bright light, this condition is easily met, and all light incident on the photoreceptor is captured. However, in many eyes adapted for vision in dim light (with low F-numbers), the cone of incident light can be up to ° wide, which means that a tremendous quantity of light escapes from the photoreceptor, significantly broadening the photoreceptor’s receptive field and degrading spatial resolution. In some eyes—such as the superposition compound eyes of some dung beetles—this spread of light turns out to be the dominant determinant of spatial resolution, even though the image focused by the optics is actually quite good (Warrant and McIntyre, ). Not surprisingly, many solutions exist for overcoming this problem (figure .). The most common solutions involve some type of shielding of the photoreceptors, such that the escape of incident rays is prevented (Land, ; Warrant and McIntyre, ). Some eyes have no shielding whatsoever (as in figure .A), implying that the loss of spatial resolution caused by light spread in the retina is acceptable for normal
Figure 4.8 Solutions for optical cross talk in the retina. (A) A photoreceptive segment of refractive index ni that is surrounded by a cellular matrix of refractive index no will totally internally reflect incident light rays that have an angle of incidence less than or equal to θmax (ray 1). Angles of incidence greater than this (ray 2) result in the ray passing out of the segment. θc is the minimum (critical) angle with which a ray can strike the segment wall and still be internally reflected. (B–D) Three common solutions to cross talk in the retina. (B) A sheath of light-absorbing screening pigment. (C) A noncylindrical photoreceptive segment. (D) A sheath of reflective tapetal structures (reflective pigment granules or tracheoles). (Adapted from Warrant and McIntyre, 1993)
visual behavior. A better solution is a shield of light-absorbing pigment granules (figure .B). Although this solution eliminates light spread, it does so by absorbing light, so in eyes adapted for dim light it is usually not adopted. Another interesting solution, not involving shielding, occurs when photoreceptors have a noncylindrical shape, a feature of some moth and crustacean eyes (figure .C). A barrel-shaped photoreceptor, while still having only a slightly higher refractive index than its surroundings, can trap a much wider cone of incident rays. The final and most complete solution is to encase the photoreceptor (or groups of photoreceptors, as in some fish) in a reflective material that traps every ray incident on the photoreceptor, regardless of its incident angle (figure .D). These reflective structures—which are associated with a tapetum—are either reflective pigment granules (as in fish and crustaceans) or air-filled tracheoles (as in insects). Unlike screening pigments, a reflective shield also bounces the light back into the photoreceptor for further absorption by the visual pigment, enhancing both spatial resolution and sensitivity! A second serious problem for resolution arises when the outer segments of vertebrate rods and cones and the rhabdoms of invertebrates are very thin (less than about μm in diameter). Photoreceptors as thin as μm are commonly found in both vertebrates and invertebrates, but in some extreme cases they may be even thinner: the rhabdomere R in the male killer fly Coenosia attenuata has a diameter of only . μm (Gonzalez-Bellido et al., ). Such photoreceptor diameters approach
the wavelength range of visible light (.–. μm). In fact, the diameter of the killer fly R rhabdomere is similar to the wavelength of red light! When light propagates through a photoreceptor that has a diameter similar to the wavelength of the light, the observed optical phenomena can no longer be described by conventional geometric ray optics (as we used above). Rather, the phenomena will obey the principles of waveguide optics, and the photoreceptor will behave as a waveguide (Snyder, ). Waves of light propagating in a waveguide interfere, leading to the production of waveguide “modes,” stable patterns of light within the waveguide. Modes can be grouped into several orders—first- (fundamental), second-, third-, fourth-, . . . order modes (e.g., Snyder and Love, )—and the number that propagate depends on the diameter of the waveguide: fewer (and lower-order) modes are propagated in thinner waveguides. However, the fundamental mode is always propagated, regardless of the waveguide’s diameter. As the waveguide diameter is increased, more and more modes will be propagated until, eventually, the number of propagating modes becomes so large that the laws of geometric optics again become applicable. Waveguide modes have been observed in both vertebrate and invertebrate photoreceptors (e.g., Enoch and Tobey, ; Nilsson et al., ), and as we see in chapter , they play a critical role in the function of afocal apposition compound eyes. An important property of waveguide modes is that not all of their energy is propagated inside the waveguide: a proportion of the mode energy propagates outside the waveguide, and this proportion is greater the higher the order of the mode (Snyder, , ; Nilsson et al., ; Hateren, ; Land and Osorio, ). This property has a considerable influence on spatial resolution: the more light that is propagated outside the target photoreceptor, the more light that can be absorbed by neighboring photoreceptors, and the worse the resolution.
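The dependence of mode number on photoreceptor diameter can be sketched with the standard V-parameter from fiber-optic theory (a formalism not named in the text, and the numerical values below are our illustrative assumptions):

```python
import math

def v_number(diameter_um, wavelength_um, n_inside, n_outside):
    """Waveguide V-parameter from standard fiber-optic theory:
    V = (pi * d / lambda) * sqrt(ni^2 - no^2).
    When V is below ~2.405 only the fundamental mode propagates;
    larger V admits progressively more (and higher-order) modes."""
    return (math.pi * diameter_um / wavelength_um) * math.sqrt(
        n_inside ** 2 - n_outside ** 2)

# A 1-um rhabdomere in green light, with the small index step
# assumed earlier (ni = 1.36, no = 1.34):
v = v_number(1.0, 0.5, 1.36, 1.34)
single_mode = v < 2.405   # True: only the fundamental mode propagates
```

A ten-times-wider photoreceptor under the same assumptions gives V ≈ 15, comfortably in the many-mode regime where geometric ray optics again becomes a good description.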
Putting It All Together: Resolution, Sensitivity, and the Discrimination of Visual Contrast

A nocturnal spider straining to see the fleeting movements of its favorite prey in the dark has eyes in which the trade-off between resolution and sensitivity has been tipped in favor of sensitivity. The opposite is true of a soaring eagle, whose acute vision—which allows it to spot small rodents on the ground far below—is only possible in bright daylight. How has this unavoidable trade-off affected what these animals can actually see? The answer to this question really comes down to a matter of how fine a contrast they can see, that is to say, to their "contrast sensitivity." We have seen above that the density of photoreceptors in an eye (Δφ)—and more importantly, the size of the receptive field of each (Δρ)—sets the finest spatial detail that can be reconstructed. But as we have also seen, this is only true if the eye has sufficient sensitivity: even though a greater density of smaller "visual pixels"—to again borrow the digital camera term—has the potential for higher spatial resolution, the pixels risk being too insensitive to achieve it. This becomes more and more true as light levels fall. Not surprisingly, the eyes of animals active in dim light—such as nocturnal spiders—tend to maximize sensitivity by having larger and less densely packed photoreceptors (i.e., each with a wider Δρ). An unavoidable consequence of this is that vision becomes coarser. Reliable contrast discrimination is then confined to a smaller range of coarser
image details, with all finer spatial details drowned by visual noise. The opposite is true of animals active in bright daylight, such as eagles: the eyes of these animals tend to maximize spatial resolution by having smaller and more densely packed visual cells, and the abundance of light allows them to reliably discriminate a broad range of spatial details, from coarse to fine. Thus, it is not only the density of the visual pixels but also the light intensity and the optical sensitivity of the eye (and thus the relative magnitude of the noise) that sets the range of spatial details that can be seen. The ability of imaging devices to faithfully record the spatial details of a scene— whether they are eyes, cameras, or telescopes—has long interested vision scientists and optical engineers alike. But what exactly do we mean when we talk about “spatial details”? One of the most common ways to define spatial detail is in terms of a pattern of black-and-white stripes known as a “grating” (figure .A): coarser details are described by coarser gratings with wider stripes, and finer details are described
Figure 4.9 The modulation transfer function (MTF). (A) Four sinusoidal gratings of increasing spatial frequency (left to right): 0.9, 1.4, 2.0, and 4.0 cycles/°. (B) When a grating of 100% contrast is imaged by a lens, the contrast in the image falls as spatial frequency increases. The frequency at which image contrast becomes zero is the optical cutoff frequency νco.
by finer gratings with narrower stripes. How coarse or fine a grating is depends on its "spatial wavelength" λs, that is, on the distance between the centers of two consecutive black or white stripes. The inverse of this wavelength defines the grating's "spatial frequency" ν (in grating cycles per unit distance), with finer gratings having higher frequency. Obviously, the ability of an eye or camera to resolve the stripes of a particular grating depends on how far away it is, so in order to remove this ambiguity, spatial frequency is usually defined on the basis of the angle that a single stripe cycle subtends at the lens (i.e., grating cycles per degree): a given physical grating (of set λs) closer to the lens will have a lower spatial frequency than the same grating further away. The finest spatial detail that an imaging device can discriminate is then simply given by the highest spatial frequency that it can resolve. Lens manufacturers often quantify this for a lens by measuring its "modulation transfer function" (or MTF), and this is also useful for understanding the performance of eyes. In such a measurement, the lens images a series of gratings of maximal (100%) contrast at ever increasing spatial frequency. The contrast in the image, relative to that of the object, is then measured for each grating. The resulting plot of image contrast versus spatial frequency—the MTF (figure .B)—shows that gratings of lower spatial frequency are imaged with almost no loss of contrast, but as spatial frequency increases less and less image contrast remains. At and beyond a certain spatial frequency—the optical cutoff frequency νco—all contrast is lost, and the lens is incapable of resolving finer details. For high-quality diffraction-limited lenses where optical defects (e.g., aberrations) are minimal, the optical cutoff frequency can be quite high, which means that the lens is able to resolve very fine spatial detail.
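The angular definition of spatial frequency given above is easy to make concrete: convert the angle subtended by one stripe cycle into degrees and take its inverse. The grating dimensions below are illustrative, not from the text:

```python
import math

def cycles_per_degree(spatial_wavelength_mm, viewing_distance_mm):
    """Angular spatial frequency of a grating: the inverse of the
    angle (in degrees) subtended by one full stripe cycle at the
    given viewing distance."""
    cycle_angle_deg = math.degrees(
        2.0 * math.atan(spatial_wavelength_mm / (2.0 * viewing_distance_mm)))
    return 1.0 / cycle_angle_deg

# A grating with 10-mm stripe cycles (illustrative), viewed from
# 1 m and then from 2 m:
near = cycles_per_degree(10.0, 1000.0)   # lower angular frequency
far = cycles_per_degree(10.0, 2000.0)    # about twice as high
```

Doubling the viewing distance roughly doubles the angular spatial frequency of the same physical grating, which is exactly why the angular definition removes the distance ambiguity.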
The opposite is true for poorer lenses that lack correction for aberrations and other imperfections. Unlike commercial lenses, the highest spatial frequency that eyes can resolve depends not only on the quality of the optical image but also on the retina. The density of the photoreceptors, the optical cross talk between them, and the extent of visual noise all play their parts to further limit the finest spatial detail that can be seen, especially in eyes adapted for vision in dim light. As an example, the camera eye of the nocturnal net-casting spider Dinopis subrufus has a lens that forms a crisp aberration-free image, allowing it to resolve gratings of spatial frequency up to at least . cycles/° (Blest and Land, ). But this resolution is not preserved in the retina: due to the eye's low F-number (.), the cone of light focused on the retina is a hefty ° wide, causing significant cross talk. With an interreceptor spacing Δφ of .° this leads to photoreceptors with receptive fields that have acceptance angles Δρ of around 2.3° and significant side flanks (see figure .B; Laughlin et al., ). If we were to approximate this receptive field by a Gaussian of half-width Δρ = 2.3°, then the MTF would also be a Gaussian (figure .A; Snyder, ). Further, if we define νco as the frequency at which the MTF falls (say) to 1% of maximum, then νco ≈ 0.49 cycles/° (figure .B). This is only a little more than half the spatial frequency passed by the optics! In reality, the receptive field's side flanks (see figure .B) would probably reduce νco even more. The situation worsens further if one also accounts for noise. This can be seen if we calculate the number of photons N that Dinopis (with eyes of optical sensitivity S) can sample per second from a uniform extended object with radiance R. As we saw in equation ., N is simply given by the product of S and R. From table ., S for Dinopis is μm² sr. A typical starlight value of R is around photons μm⁻² s⁻¹ sr⁻¹ (in green light, λ = nm: Land, ).
Thus, when Dinopis views an object illuminated
Figure 4.10 The modulation transfer function (MTF) of an eye. (A) The range of spatial frequencies detectable by an eye depends on the size of the photoreceptor’s receptive field (inset). Narrower receptive fields allow a wider range and a higher optical cutoff frequency νco (green curves) than broader receptive fields (red curves). (B) Signal and (photon shot) noise in the eye of the nocturnal spider Dinopis subrufus, which has a photoreceptor acceptance angle of 2.3° and views a scene illuminated by starlight. The maximum detectable spatial frequency νmax is defined as the frequency at which signal equals noise. νco is here defined as the frequency at which the signal has fallen to 1% of maximum, but other criteria could also be used.
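The construction in figure 4.10 can be sketched numerically. The Gaussian MTF follows from the Fourier transform of a Gaussian receptive field specified by its half-width Δρ (the constant 3.56 ≈ π²/4 ln 2), and Δρ = 2.3° and the 1% criterion are taken from the caption; the photon catch N = 100 s⁻¹, however, is purely an illustrative assumption of ours:

```python
import math

def mtf_gaussian(nu, rho_deg):
    """MTF of a Gaussian receptive field of half-width rho (degrees);
    3.56 ~= pi^2 / (4 ln 2) comes from the Fourier transform of a
    Gaussian described by its full width at half maximum."""
    return math.exp(-3.56 * (rho_deg * nu) ** 2)

def nu_co(rho_deg, floor=0.01):
    """Frequency at which the MTF falls to `floor` (1% here, the
    criterion used in figure 4.10)."""
    return math.sqrt(math.log(1.0 / floor) / 3.56) / rho_deg

def nu_max(rho_deg, n_photons):
    """Frequency at which the signal N * MTF(nu) meets the
    frequency-independent shot noise sqrt(N)."""
    return math.sqrt(math.log(math.sqrt(n_photons)) / 3.56) / rho_deg

rho = 2.3     # acceptance angle from the figure 4.10 caption
n = 100.0     # photons absorbed per second (assumed, illustrative)
cutoff = nu_co(rho)         # ~0.49 cycles/deg without noise
detectable = nu_max(rho, n) # ~0.35 cycles/deg once noise is included
```

With these assumptions the noise-limited cutoff falls well below the noiseless one, reproducing the qualitative shape of figure 4.10: the constant noise floor truncates the declining signal curve before the MTF itself reaches zero.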
by starlight, each of its photoreceptors absorbs about photons every second. What noise will be associated with such a sample? One source, as we saw above, is photon shot noise, the noise arising from the random and unpredictable nature of photon arrival (where the noise is simply given by √N). Thus, in our example of Dinopis, this noise is then √N photons s⁻¹. Furthermore, unlike the "signal," which is the number of absorbed photons N multiplied by the eye's MTF (Snyder, ), this shot noise is independent of spatial frequency. Thus, if one plots the signal and shot noise together (figure .B), the signal steadily declines with spatial frequency due to the declining MTF while the noise level remains constant. Eventually a spatial frequency is reached at which the two curves cross—beyond this frequency the signal becomes lower than the noise, and above this "maximum detectable spatial frequency" (νmax) the contrasts of all finer spatial frequencies are drowned in the noise (Warrant, ). In our example of Dinopis, if one assumes a Gaussian MTF, then νmax is . cycles/° (figure .B), which is a reduction by about % compared to the cutoff without noise
(. cycles/°). In reality, νmax is likely to be even lower again because other sources of noise—particularly those inherent within the photoreceptors themselves—can be quite substantial, equaling or even exceeding shot noise levels (Laughlin and Lillywhite, ). Spatial resolution declines even further as light levels fall. At light levels times dimmer than starlight (i.e., N = photons s⁻¹, √N = . photons s⁻¹), νmax would fall to just . cycles/°. Highly sensitive eyes like those of Dinopis are well adapted for coarse but reliable spatial vision in dim light, and this of course is a prerequisite for being able to hunt prey at night. Much of the reason for this ability is the structure of the eye itself—the camera eyes of Dinopis have the potential to supply substantially more spatial information per unit mass than (say) the compound eyes of their arthropod relatives, the insects (Laughlin, ). This naturally highlights the importance of eye design in any discussion of visual performance, so before continuing with that particular topic any further, we next turn our attentions to the major eye types of the animal kingdom.
5 The Eye Designs of the Animal Kingdom
If we could put our submersible from chapter into a time machine and take another dive, this time into the oceans of half a billion years ago, we would encounter a world of strange and wonderful creatures—all new to us but somehow reminiscent of the animals around us today. In fact, one of the most spectacular events in the history of the animal kingdom has just occurred. In the space of just a few million years—a blink of an eye in geological terms—many of our familiar modern animal lineages suddenly appeared on the Earth. They typically had well-developed eyes, and they used them in matters of life and death. This explosion of new animal forms ushered in the Cambrian epoch and with it a dangerous new world of fast-moving predators. Vision became the survival sense par excellence. Bigger and sharper eyes not only improved a predator's chances of spotting its prey but also helped the prey to unmask the predator. This sensory arms race drove the rapid evolution of a sophisticated spectrum of eye types, each formed to outsmart an adversary, possibly by detecting the telltale movements of a predator or by deciphering the subtleties of near-perfectly camouflaged prey. Of course, over time eyes also evolved in response to much gentler—albeit no less urgent—influences. Animals rapidly specialized in shallower and deeper aquatic habitats and eventually became terrestrial. The properties of light and visual scenes experienced by animals thus became ever more diverse, and their eyes evolved accordingly. The featureless vastness of the deep ocean, where faint blue daylight can be seen above and animals signal each other using fleeting points of bioluminescence, or the bright open savannahs of Africa where animals scan the horizon for predators and prey, or the complex three-dimensional world of the dense green entanglement of a tropical rainforest are just three examples of habitats that have had a profound influence on the evolution of eyes.
And within these habitats, the search for mates—often involving complex visual signaling—has had a further but no less profound influence. In less than million years, the great variety of eyes we know today arose. In this chapter we look at them in detail. Today there are generally recognized optical eye types that have evolved in various branches of the animal kingdom. Whereas vertebrates possess only one of them, invertebrates possess all of them, from simple assemblies of photoreceptors that underlie phototaxis to advanced compound and camera eyes that support a sophisticated
range of visual behaviors. Some invertebrates even possess several eyes of more than one type. We have already mentioned some of these eye types in the context of sensitivity and resolution, namely pigment-pit eyes (figure .), compound eyes, and camera eyes. The last of these are characteristic of the vertebrates, although they are also commonplace among the invertebrates. The remaining nine eye types are found only within the invertebrates. Of these, several are various subtypes of the compound eye, the eye design that is possessed by the vast majority of species in the animal kingdom. The account that follows is a brief description of each eye type, including where in the animal kingdom each type can be found and which basic optical principles underlie each. These descriptions are necessarily brief, but full accounts can be found in several marvelous reviews and books on the subject. The most recent book is the highly recommended second edition of Animal Eyes by Michael Land and Dan-Eric Nilsson (), which lucidly treats this and many other topics in this chapter in considerably greater depth (they have also authored many classic reviews on the optics of invertebrate eyes, e.g., Land, and Nilsson, ). For a truly excellent overview of camera eyes throughout the vertebrates, nothing surpasses the remarkable The Vertebrate Eye and Its Adaptive Radiation by Gordon Walls from , which is still as insightful and relevant today as it was years ago. For a more optical treatment of vertebrate eyes, Hughes () is also recommended. We begin here by returning briefly to the eye type we started with in chapter : the pigment-pit eye. In one fascinating group of animals—the ancient and beautiful deep-sea cephalopods of the genus Nautilus—this eye type has reached its logical evolutionary endpoint, functioning somewhat like a pinhole camera.
The Pinhole Eye of Nautilus As we saw earlier, the simplest pigment-pit eyes are nothing more than crude photoreceptor-lined invaginations in the epidermis (figure .). Such an eye was probably ancestral to more advanced pigment-pit eyes (like those of Nautilus) as well as to camera eyes with lenses (Land and Nilsson, ). As we mentioned earlier, the higher-performance camera eyes such as those of other cephalopods most likely arose as the result of a pigment-pit eye acquiring a lens (Nilsson and Pelger, ). In the lineage that led to Nautilus, pigment-pit eyes became more and more spherical, and the pupil—the eye’s “pinhole”—became smaller. Each photoreceptor in the retina thus received light from a smaller receptive field, and spatial vision improved. Such an improvement is already evident in the small (.-mm diameter) pinhole eyes of the giant clam Tridacna maxima, whose photoreceptors have acceptance angles Δρ of around ° (Land, ). But as this evolution progressed, smaller receptive fields only came at the cost of having a smaller pupil and thus a less sensitive retina (see chapter ). The obvious solution to this trade-off is a lens—which provides better resolution and better sensitivity—but for unknown reasons this never occurred, as the pigment-pit eye evolved relentlessly along its path. Although lacking a lens, the eyes of Nautilus are large (about mm across) and well developed (figure .A,B). Remarkably, the pupil is also mobile and varies in size with light level, with a diameter that ranges from . mm to . mm (Hurley et al., ). Pinhole eyes are thought to work like a pinhole camera: the small pupil creates a dim, inverted, and somewhat blurry image on the retina, and smaller pupils produce sharper (albeit dimmer) images (figure .C). Like a pinhole camera the
Eye Designs of the Animal Kingdom 93
Figure 5.1 The pinhole eye of Nautilus. (A) Light rays entering the pupil (p) from different directions in space form a dim, blurred image on the retina (r). (Image courtesy of Dan-Eric Nilsson) (B) The cephalopod Nautilus showing the position of the right eye and the conspicuous pupil (arrowhead). (Photo credit Visarute Angkatavanich, 123rf.com Photo Agency) (C) Images of a Lizars eyesight test chart (left) photographed at a distance of 12.5 cm through a model Nautilus eye with its smallest pupil, 1 × 0.4 mm (right: pupil-to-image distance 9 mm). Resolution of the chart's top line represents a visual acuity of 4°, the second line 2.5°. Even with its smallest pupil, the smallest angular detail resolvable by the Nautilus eye is thus likely to be greater than 4°. (Reproduced from Muntz and Raj, 1984, with permission from the Journal of Experimental Biology)
eye is also likely to have a near-infinite depth of field, meaning that accommodation mechanisms are not required to reach best focus at different distances from the eye (as is needed, for example, in our own eyes). Muntz and Raj () discovered that in the Nautilus retina the interreceptor angle Δφ is around .°. However, even with a .-mm pupil, the photoreceptor's acceptance angle Δρ is still eight times greater than Δφ, which indicates that the retina has the potential to resolve much finer spatial detail than the pupil can actually supply (see figure .C). A similarly sized eye in a fish, with a wide pupil and a lens producing sharp diffraction-limited images, would
most likely result in a smaller Δρ than Δφ. Moreover, the image would be over two orders of magnitude brighter (Muntz and Raj, ). Thus, compared to camera eyes of the same size, the pinhole eye of Nautilus has rather poor sensitivity and resolution, which probably explains the rarity of this eye type in nature.
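The trade-off between resolution and sensitivity in a pinhole eye can be checked with simple geometry. The short Python sketch below uses the pupil-to-image distance (9 mm) and smallest pupil width (0.4 mm) quoted for the model Nautilus eye in figure 5.1C; the 5-mm comparison pupil is purely hypothetical, standing in for a lensed camera eye of similar size:

```python
import math

# Geometry of the model Nautilus pinhole eye (figure 5.1C):
image_distance_mm = 9.0   # pupil-to-retina distance
pupil_width_mm = 0.4      # smallest pupil (narrow axis)

# For a pinhole, the blur circle on the retina is roughly as wide as the
# pupil itself, so the angular blur is pupil width / image distance.
blur_deg = math.degrees(pupil_width_mm / image_distance_mm)
print(f"angular blur ≈ {blur_deg:.1f} deg")

# Image brightness scales with pupil area. A hypothetical lensed eye of
# similar size with a 5-mm pupil would enjoy a far brighter image:
lens_pupil_mm = 5.0
brightness_ratio = (lens_pupil_mm / pupil_width_mm) ** 2
print(f"brightness ratio ≈ {brightness_ratio:.0f}x")  # over two orders of magnitude
```

The roughly 2.5° geometric blur sits comfortably below the measured acuity limit of around 4°, and the brightness ratio illustrates why images in similarly sized camera eyes are described above as over two orders of magnitude brighter.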
Concave Mirror Eyes Another curious eye design involves the use of a concave mirror, rather than a lens, to focus an image into the retina. Like pinhole eyes, concave mirror eyes are rare in nature, but they have nonetheless evolved several times (see Land and Nilsson, ), notably in the bivalve mollusks and within at least three classes of crustaceans (Maxillopoda, Ostracoda, and Malacostraca). Remarkably, they have also appeared in a fish: the deep-sea spook fish Dolichopteryx longipes (Wagner et al., ). Of the crustaceans, the best-known examples are found among the amphipods, ostracods, and copepods, notable among them being the giant deep-sea ostracod Gigantocypris with its huge ocular reflectors covering about a third of its dorsal body surface. These eyes are likely used as highly sensitive light concentrators for detecting bioluminescent prey in the deep (Hardy, ; Land, ). The species in which concave mirror eyes were first described in detail—by Michael Land in the mid-s—is the scallop Pecten maximus, a bivalve mollusk. The mirror (or “argentea”) of Pecten is a multilayer reflector that most strongly reflects blue-green light (Land, a). Within the cells of the argentea, each reflective layer is formed from flattened membrane-bound vesicles densely packed with shiny guanine crystals (Barber et al., ). The mirror so formed is also perfectly spherical, which, as we will see below, is a prerequisite for undistorted image formation. Scallops possess a large number of little concave mirror eyes evenly spaced along the edge of each shell, on either side of the scallop’s open gape. In some species they look rather like tiny blueberries, with a characteristic glow from the mirror clearly visible through the pupil (figure .B). Each eye possesses a weak jelly-like “lens” (of low, homogeneous refractive index) and an underlying retina that is backed by the mirror (Barber et al., ). 
Unlike in a camera eye, the retina of a concave mirror eye is not separated from the lens by an optically homogeneous aqueous medium, a spacer component that is usually needed to provide the required focal distance between the lens and the image plane. This interesting fact, combined with the weak lens, means that incoming parallel rays of light are brought to a focus a long way below the back of the retina (indeed, a long way below the back of the eye!). Thus, had the eye relied solely on its lens, it would have been badly underfocused (Land, ). The mirror is the key to why, in reality, this is not the case. Like lenses, concave mirrors focus images by bending rays of light, not by refraction (as in a lens) but by reflection (figure .A). The mirror's image can be upright relative to the object and magnified (as in a shaving mirror), or it can be inverted and minified (as is the case in a concave mirror eye); exactly which combination of image orientation and size results depends on the location of the object relative to the focal plane of the mirror. This focal plane lies halfway between the spherical surface of the mirror and its center of curvature, that is to say the mirror's focal length f is simply 0.5r, where r is the radius of the spherical mirror surface. In the scallop's concave mirror eye the image of a distant object lies slightly closer to the mirror than this because the incoming light rays are weakly refracted by the lens. Not surprisingly, the retina is optimally
Figure 5.2 The concave mirror eye of the scallop Pecten. (A) Light rays are weakly refracted by a gelatinous lens (l ) and are reflected by a spherical concave mirror (m) lining the back of the eye. This mirror creates a focused image in a retina (r ) suspended above the mirror. (Image courtesy of Dan-Eric Nilsson) (B) Shells of the scallop Pecten gibbus (above, scale bar 10 mm) and three eyes along the mantle edge of a living but unidentified species of Pecten (below, scale bar 1 mm). (Upper image courtesy of the National Anthropological Archives, Smithsonian Institution, inventory number USNM 605016; lower image courtesy of Dan-Eric Nilsson)
located to receive this focused image, being separated from the mirror by a smaller (as in Pecten: Barber et al., ) or larger aqueous space (as in the sessile scallop Spondylus americanus: Speiser and Johnsen, ). Curiously, even though the image is eventually focused, light must first pass unfocused through the retina to reach the mirror. Because the photoreceptors are incapable of telling the incoming and reflected light apart, the visual contrast of the image will thus be somewhat degraded. The scallop retina itself is a remarkable structure divided into a proximal layer (closer to the mirror) and a distal layer (closer to the lens). In Pecten the focal plane of the mirror lies within the distal layer—following reflection, light first passes unfocused through the proximal layer before reaching the distal layer. In other species, such as the sessile scallop Spondylus, the aqueous space between the mirror and retina is larger than in Pecten, and it may be the case that the mirror’s focal plane lies
in the proximal layer, although this is yet to be determined with certainty (Speiser and Johnsen, ). Interestingly, in all species examined by Speiser and Johnsen the interreceptor angle Δφ is quite small in both layers (ranging from ° to °), and in the proximal layer Δφ seems to be correlated with swimming strength: species that are stronger swimmers tend to have a smaller proximal Δφ than species that are sessile, which suggests a role for vision during swimming. In Pecten, which is a reasonably good swimmer, such small interreceptor angles seem well matched to the receptive field sizes of the distal photoreceptors, which are capable of reacting to the images of moving targets as small as ° in size (Land, b). Remarkably the two retinal layers also house two entirely different types of photoreceptors. In the distal layer, the light-sensitive parts of the photoreceptors are constructed of cilia and hyperpolarize in response to light (Hartline, ), properties reminiscent of those found in the outer segments of vertebrate photoreceptors (chapter ). In contrast, the light-sensitive parts of proximal-layer photoreceptors are constructed of microvilli and depolarize in response to light—such properties are similar to those found in the majority of invertebrate photoreceptors. Interestingly, each layer also expresses a different visual pigment, with the result that the spectral sensitivity of photoreceptors in the distal layer is shifted to longer wavelengths than the sensitivity of photoreceptors in the proximal layer (Speiser et al., a). In some species this may offset chromatic aberration produced by the weak lens because shorter wavelengths would be focused closer to the mirror than longer wavelengths. The curious mix of rhabdomeric and ciliary photoreceptors in the same retina has a particular functional significance. 
Due to its hyperpolarizing response properties, a distal-layer photoreceptor experiences a dark target that moves into its receptive field as a sudden removal of light and responds accordingly with a brisk volley of action potentials, that is, with a vigorous “OFF response” (Land, b; McReynolds and Gorman, ; Wilkens, ). Behaviorally, the sudden appearance of a dark target within the visual field of a sedentary scallop could potentially signal the presence of a predator, and with both shells open while filter feeding, scallops are particularly vulnerable. Not surprisingly, in Pecten the intrusion of even a small (~° wide) dark target is sufficient to influence the animal to close its shells completely (Buddenbrock and Moller-Racke, )—and, as we saw above, this is also sufficient to generate an off response in the distal layer receptors. In Pecten these receptors are thus thought to be specialized for signaling approaching predators, whereas the proximal layer receptors—generating “ON responses” to general changes in light level—are thought instead to be used for guiding scallops to brighter and darker areas of their habitats (Land, b). The large number of concave mirror eyes found in a single scallop, evenly spaced along the edge of each shell, probably act collectively as a highly efficient “burglar alarm” for detecting predators (Nilsson, ). In fact, each point in space is viewed by one receptor in each of about eyes, so the likelihood of a predator surprising a scallop unawares is probably very small (Land, )!
Camera Eyes Even though camera eyes are not the most common eyes in the animal kingdom—a title overwhelmingly held by the compound eyes—they are arguably among the most widespread taxonomically (Land and Nilsson, ). Sometimes also referred to as the simple eye or lens eye, the camera eye is the principal eye type of all vertebrates.
Camera eyes are also found in a large number of invertebrate taxa, including mollusks, annelids (notably polychaetes), crustaceans (only in pontellid copepods, e.g., Labidocera), cnidarians (e.g., cubozoan jellyfish), arachnids (spiders, scorpions, ticks, and mites), and even in insects (as ocelli and larval eyes). Per unit mass (and therefore size), camera eyes are capable of supplying more spatial information than any other type of eye (Laughlin, ), and this evolutionary advantage is possibly one of the main reasons they have become so widespread. Their various optical and retinal components are also surprisingly malleable to the forces of evolution, and this has led to a remarkable variability in their forms. Some, for instance, have evolved for exquisite spatial vision in bright daylight, others are incredibly sensitive for use at night or in the deep sea, and yet others can undergo drastic but reversible optical changes to optimize vision during a sudden transition between air and water. We explore many of these camera eyes (and their visual ecological functions) in the chapters that follow. Here we describe their optics and basic ground plan.
Optics The feature that unites all camera eyes is the presence of a single optical unit (consisting of an internal lens and very often an external cornea) that focuses images of the external world onto an underlying retina of visual cells (figure .A). As we saw earlier in chapter , in some camera eyes—such as the ocelli of insects or the lens eyes of box jellyfish—the focal plane of this image is well behind the retinal surface, and deliberately so in order to optimize specific visual tasks by reducing the spatial information sampled by the retina. Most other camera eyes, in contrast, have a well-focused image either on or in the retinal layer. In all camera eyes, focused or not, the image is inverted on the retina. In the camera eye of a terrestrial vertebrate in air there are two main refractive elements that collectively focus light onto the retina (figure .A): the curved outer surface of the cornea and the internal lens. Like a convex glass lens, the curved corneal surface is able to focus light because it separates two optical media of very different refractive indices n: the outside air (n = 1.00) and the aqueous interior of the eye (which has a refractive index close to that of water, n ≈ 1.33). Without this refractive index difference the considerable refractive power of the cornea, which accounts for around two-thirds of the total refractive power of the eye, would be eliminated, and the lens, which is responsible for the remaining one-third of the eye's refractive power, would be incapable on its own of focusing an image. This is why we experience an incredibly blurry world if we try to see underwater without a facemask: the watery medium on both sides of the cornea eliminates its refractive power. Only by inserting a layer of air in front of the eyes—the chief role of a diver's facemask—can underwater vision be restored. How then do fish, marine mammals, and mollusks all manage to see underwater, when the refractive power of their corneas is negligible?
The answer lies in the lens, which in aquatic camera eyes is typically spherical and powerful (figure .B)— underwater vision is permitted by a remarkable gradient of refractive index (figure .C). This gradient, first discovered in the early s by the German physicist and zoologist Ludwig Matthiessen (), is roughly parabolic and falls radially in every direction from the center of the lens to its edge, the result of a continuous decline in the concentration of proteins (known as crystallins) that make up the lens. The
Figure 5.3 Camera eyes. (A) A terrestrial camera eye. The cornea (c) is responsible for a significant (and often major) fraction of the eye's focal power in air. The flattened lens (l) provides the remaining refractive power, and together both optical elements create a focused image on a retina (r). (B) An aquatic camera eye for use in water. Because the cornea (c) lacks refractive power, the focal power of the eye resides in the spherical lens, whose parabolic gradient of refractive index provides a crisp spherical-aberration-free image on the retina (r). (Images in A and B courtesy of Dan-Eric Nilsson) (C) Gradient of refractive index found in the spherical lens of the cichlid fish Astatotilapia burtoni from the lens center (r = 0) to its edge (r = 1). 1 = the refractive index in the constant index zone of the lens; 2 = the refractive index of the lens capsule. (From Gagnon et al., 2012) (D) Continuously bent light paths of thin laser beams focused by the large spherical lens of a swordfish (Xiphias gladius). Scale bar = 1 cm. (Eric Warrant and Kerstin Fritsches, unpublished data)
refractive index of the lens falls from a central value of n ≈ 1.55 (the value for a dry protein lens crystallin) to an edge value approaching that of the surrounding aqueous medium (n = 1.33). Such gradients bend light rays continuously within the lens (figure .D), and remarkably they also eliminate spherical aberration in the resulting image (which would have been a major problem had the lens been homogeneous in refractive index—see chapter ). And because of the continuous bending of light, the graded-index lens ends up bending the light much more than would have occurred in a homogeneous lens (resulting in a much shorter focal length). It is this single feature that accounts for the superior refractive power of the lens in marine camera eyes and overcomes the loss of the cornea. Matthiessen also discovered another curious fact about these lenses: whether from a fish, a cephalopod, or a marine mammal, the focal length (f) is invariably about 2.55 lens radii (r), that is f/r ≈ 2.55, a ratio today known as Matthiessen's ratio. Interestingly, graded-index lenses are even found in terrestrial camera eyes, their benefits for resolution (reduced spherical aberration) and sensitivity (a short focal length and low F-number) being the major reason. We mentioned earlier that even our own lenses (and indeed the lenses of all other mammals) contain a weak gradient in order to improve resolution (Pierscionek and Chan, ). Many spider lenses also have gradients, but instead to improve sensitivity. The chitinous lenses of the nocturnal net-casting spider Dinopis subrufus, whose eyes have among the lowest F-numbers known in the animal kingdom (0.58), are a good example (Blest and Land, ). As we see below, graded-index lenses are also found in some groups of compound eyes, but for very different reasons. Of course, not all animals live permanently in either air or water. Some live sporadically or permanently in both optical media.
Marine mammals, diving birds, and even certain kinds of fish spend considerable time both above and below the water surface. These amphibious animals have evolved several remarkable optical tricks to overcome the problems of seeing well in two media.
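The refractive bookkeeping behind the cornea and lens story above can be sketched with the single-surface power formula P = (n2 - n1)/R. In the Python fragment below, the corneal radius and tissue index are assumed human-like values (not taken from this chapter), and the lens radius is likewise illustrative; the 2.55 is the commonly cited value of Matthiessen's ratio:

```python
# Power (in diopters) of a single refracting spherical surface:
# P = (n2 - n1) / R, with the radius of curvature R in meters.
def surface_power(n1, n2, radius_m):
    return (n2 - n1) / radius_m

# Assumed human-like corneal values (for illustration only):
R_cornea = 0.0078   # radius of curvature, ~7.8 mm
n_cornea = 1.376    # refractive index of corneal tissue

print(f"{surface_power(1.00, n_cornea, R_cornea):.0f} D in air")     # ~48 D
print(f"{surface_power(1.333, n_cornea, R_cornea):.0f} D in water")  # ~6 D: power nearly lost

# Matthiessen's ratio for spherical graded-index lenses: the focal
# length is about 2.55 lens radii, measured from the lens center.
lens_radius_mm = 2.0   # assumed
print(f"focal length ≈ {2.55 * lens_radius_mm:.1f} mm")
```

Submersion thus strips roughly nine-tenths of the corneal surface's power at a stroke, which is exactly the deficit that the powerful spherical Matthiessen lens makes good.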
Amphibious Vision with Camera Eyes Many aquatic animals—such as flying fish, frogs, and seals—spend a significant fraction of their lives in air. And just as curiously, there are many terrestrial animals that instead spend a large fraction of their lives in water. Several remarkable species of diving birds, many turtles, and even some groups of humans are all notable examples. What these animals all share is a need to see well in both media, and optically this is far from trivial. The major problem for such "amphibious vision" is the presence of the cornea. As we discussed above, the cornea is a powerful lens that is responsible for a substantial fraction of the refractive power of a terrestrial eye. Without its contribution, as would be the case when submerged, the eye becomes poorly focused, and spatial vision badly compromised. The simplest evolutionary solution to this problem has been to permanently reduce the cornea's power in both media, allowing reasonably good vision in either air or water. Instead of being curved (and thus optically significant in air), the cornea in many amphibious animals is a flattened window. Seals and penguins are excellent examples of such animals. Of course with a flattened cornea the lens needs to be powerful, and in seals this is indeed the case: with near-spherical multifocal Matthiessen lenses (Sivak et al., ; Hanke et al., ), seals have a visual acuity in air that is similar to that in water (Hanke and Dehnhardt, ). The downside of having a
flattened cornea is that the visual field of the eye is somewhat reduced. Certain fish—notably the flying fish Cypselurus heterurus (Baylor, ), the Galápagos four-eyed blenny Dialommus fuscus (Munk, ), and the four-eyed rockskipper Mnierpes macrocephalus (Graham and Rosenblatt, )—have overcome this problem by having a raised cornea of flattened windows oriented in different directions (figure .A). An alternative to having a flattened cornea is to drastically accommodate during the transition from air to water, increasing the eye's optical power by so much that it actually compensates for the sudden loss of the cornea. This is a common strategy among seals and among diving birds such as cormorants and diving ducks and among amphibious reptiles such as turtles. Optical power is commonly measured in diopters (D) and is equivalent to the reciprocal of the focal length measured in meters: a 1-D lens has sufficient optical power to bring parallel light rays to a focus at 1 m, a 2-D lens sufficient power to bring them to a focus at ½ m, and so on. The increase in optical power produced by underwater accommodation can approach D in cormorants (Walls, ) and around 100 D in the European pond turtle Emys orbicularis (Heine, ; Munk, ). (As a comparison, the maximum accommodation measured in humans is around D, in infants.) The eyes of these animals are characterized by a heavily muscularized iris supported by a ring of tiny bones called scleral ossicles. During accommodation the ciliary muscles of the iris contract, squeezing the soft lens through the narrow pupil (figure .E). The section of the lens that is forced through the pupil bulges outward, and the anterior surface of this bulge acquires a much more pronounced curvature and thus much greater optical power. Many diving birds chase fish at high speed under water, and for this hunting task the acute underwater vision potentially provided by this powerful accommodation mechanism might be essential.
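The diopter arithmetic is easily checked. In the sketch below the 100-D accommodative boost is the value reported for the pond turtle (figure 5.4E), while the 60-D resting power of the eye is an assumed, human-like figure used only for illustration:

```python
# Optical power in diopters (D) is the reciprocal of focal length in meters.
def focus_distance_m(power_D):
    return 1.0 / power_D

print(focus_distance_m(1.0))  # a 1-D lens focuses parallel rays at 1.0 m
print(focus_distance_m(2.0))  # a 2-D lens focuses them at 0.5 m

# Adding ~100 D under water (as in the pond turtle) shortens the eye's
# focal length dramatically. The 60-D resting power is assumed.
resting_power_D = 60.0
print(f"{1000.0 / resting_power_D:.1f} mm relaxed")                  # ~16.7 mm
print(f"{1000.0 / (resting_power_D + 100.0):.2f} mm accommodated")   # 6.25 mm
```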
Strangely though, recent measurements in the great cormorant Phalacrocorax carbo reveal that their underwater visual acuity is actually rather poor (White et al., ), suggesting that these birds can detect and pursue prey only at close range (less than m). Another fascinating (although rather less dramatic) example involves our own species. Certain human tribes, such as the Moken people who live along the coasts of Southeast Asia, have for generations relied on their ability to forage for food from the sea floor. This task is usually reserved for children, who can dive to depths of more than m to visually search for mollusks and crayfish without the aid of a facemask (figure .B). Psychophysical measurements have shown that these children have around double the underwater visual acuity of a European child (Gislén et al., ). This ability results from a combination of maximal accommodative power ( D) and the acquisition of an unusually small pupil while underwater. The smaller pupil significantly reduces image blur and thus improves acuity. This ability, however, has not been inherited—even European children after several weeks of training can learn to constrict the pupil and maximally accommodate in the same fashion as the Moken children (Gislén et al., ). And finally, we consider two rather unique examples of how animals have solved the problem of amphibious vision. The first of these is the South American river-dwelling "four-eyed" fish of the genus Anableps, which spend their lives literally at the water surface (figure .C) and have eyes that are constructed to see in air and water simultaneously (figure .D). In the light-adapted state, each eye has two pupils created by two touching "iris flaps" lying exactly at the water surface that divide the original pupil into an upper and lower half, one half admitting light from the air above, the other admitting light from the water below. The light admitted by the two pupils is focused onto two
Figure 5.4 Amphibious vision in camera eyes. (A) Two-faced flattened cornea of the Galápagos four-eyed blenny Dialommus fuscus. Scale bar = 5 mm. (Reproduced with permission from Munk, 1980) (B) Underwater image of a Moken child from Southeast Asia. (From Gislén et al., 2003) (C) Four-eyed fish Anableps dowi. (Original image by Trudy Nicholson, courtesy of Natus Neurology—Grass Technologies brand products) (D) A schematic cross section through the eye of the four-eyed fish, showing the ellipsoidal lens (l). Light passing through the air pupil (ap) is focused (green rays) on the air retina (ar). Simultaneously, light passing through the water pupil (wp) is focused (blue rays) on the water retina (wr). p = corneal pigment band; i = iris flaps. (Drawn from various sources) (E) Accommodated (A) and relaxed (R) eye of the European pond turtle Emys orbicularis. Note how the lens bulges through the pupil in the accommodated state, providing the eye with an additional 100 D of refractive power when in water. (Reproduced from Walls, 1942) (F) Eyes of the chiton Acanthopleura granulata, whose birefringent aragonite lenses provide a focused image in the retina both in air and water. (Reproduced from Speiser et al., 2011b, with permission from Elsevier)
separate retinas—a dorsal “water retina” and a ventral “air retina”—each of which sends information to a separate part of the optic tectum, a primary visual center in the brain (Schwassmann and Kruger, ). Interestingly, recent work has shown that although both retinas possess an ultraviolet-light-sensitive opsin, each retina possesses a second unique opsin: the dorsal retina has an additional long-wavelengthsensitive (L) opsin, whereas the ventral retina has a middle-wavelength-sensitive (M) opsin (Owens et al., ). This is a nice adaptation to the fish’s habitat because river water, into which the dorsal retina is looking, preferentially transmits light of longer wavelengths than in air due to the presence of dissolved organic matter (known as “gelbstoff ”), which tends to color the water green-yellow (chapter ). The lens too is interesting. It has an odd ellipsoid shape, with its long (and optically more powerful) axis oriented to collect light from the water. The optically weaker short axis instead collects light from the air. This unusual lens thus ensures that a sharp image is formed simultaneously on both retinas, giving the four-eyed fish excellent vision in both air and water. The second and final example concerns a marine mollusk—Acanthopleura granulata—a chiton that lives on rocks close to the shoreline, which, depending on the tide, can be found above or below the water surface. The shells of these animals are embedded with hundreds of small camera eyes (ocelli), each of which remarkably has a lens made out of aragonite, a feature unknown anywhere else in the animal kingdom (figure .F: see Speiser et al., b). Aragonite is birefringent, and a property of birefringent materials is that they have two refractive indices (nα and nβ) and thus two focal planes: for aragonite nα = . and nβ = .. If one models the chiton eye in air, the two focal planes are found to lie quite close together near the distal surface of the retina (figure .F). 
In water (when the power of the external surface of the lens is severely reduced), the two focal planes are quite separated: even though the focal plane for nβ is below the retina (i.e., underfocused), that for nα is located in the proximal retina. Thus, the chiton eye receives a focused image both in air and water, an impressive fact supported by behavioral studies showing that chitons have the same spatial resolution in both media (Speiser et al., b), allowing them to respond defensively to the slight decreases in illumination that are induced by fast-moving targets as small as ° in size.
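A toy calculation shows why birefringence buys the chiton two usefully placed focal planes. The Python sketch below treats the lens as a homogeneous sphere, a crude assumption (the real lens is not spherical and sits behind a curved outer surface), and uses the approximate refractive indices of aragonite:

```python
# Effective focal length (measured from the lens center) of a homogeneous
# sphere of index n and radius r immersed in a medium of index n0:
#   f = n * r / (2 * (n - n0))
def sphere_focal_length(n, n0, r=1.0):
    return n * r / (2.0 * (n - n0))

n_alpha, n_beta = 1.53, 1.68  # approximate refractive indices of aragonite

# In air (n0 = 1.0) the two focal planes lie fairly close together...
print(sphere_focal_length(n_alpha, 1.0))   # ~1.4 lens radii
print(sphere_focal_length(n_beta, 1.0))    # ~1.2 lens radii

# ...but in water (n0 = 1.33) they separate widely:
print(sphere_focal_length(n_alpha, 1.33))  # ~3.8 lens radii
print(sphere_focal_length(n_beta, 1.33))   # ~2.4 lens radii
```

Qualitatively this reproduces the pattern just described: nearly coincident focal planes in air, and widely separated ones in water, one of which can still fall within the retina.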
The Pupil In camera eyes, changes in image brightness that result from changes in ambient illumination are partially compensated by a variable pupil whose size depends on light level. In the dark the pupil of our own camera eye has a diameter of around 8 mm, reducing to around 2 mm in bright daylight. This represents a change in area (and image brightness) of 16 times, which of course is a lot lower than the roughly 100-million-fold change in light level that occurs from night to day. Nevertheless, even though our pupil is clearly incapable of fully compensating for this daily change, it turns out that its diameter at different natural light levels maximizes the eye's information capacity by optimizing the compromise between image blur induced by a wider pupil (caused by aberrations) and the extra sensitivity that this wider pupil affords (Laughlin, ). The camera eyes of nocturnal animals such as cats and geckos often have pupils that are capable of considerably greater changes in area than the pupils of our own eyes. This is simply because they can be much larger in the dark and much smaller in bright light. A tiny pupil has a real advantage because it can allow even a very sensitive nocturnal eye to
Figure 5.5 The dark-adapted (DA, left) and light-adapted (LA, right) pupil of the nocturnal helmet gecko, Tarentola chazaliae. (Reproduced from Roth et al., 2009, with permission from the Association for Research in Vision and Ophthalmology (ARVO).)
operate during the day. The slit pupil of the domestic cat is a superb example. In the dark, it is large and round with a diameter of around mm. As light levels increase the pupil constricts, not to a small circle as in our own eyes but to a thin vertical slit that almost completely closes. The change in pupil area is at least times (Wilcox and Barlow, ). In nocturnal geckos the change is even greater. In the Tokay gecko Gekko gecko (which has an all-rod retina), the area change is at least times (Denton, ), with the pupil constricting from a large ellipse to a curious vertical slit with four pinhole-sized circular holes evenly spaced along its length. A similar pupil shape is found in the helmet gecko Tarentola chazaliae, although the area change is smaller, around – times (Roth et al., ; see figure .). The light-adapted pupil shape of vertebrate and cephalopod camera eyes is thus highly variable—in addition to singular and multiple circular pupils and vertical slit pupils, there are horizontal slit pupils (e.g., in horses and reindeer), ring-shaped pupils (e.g., catfish), and W-shaped pupils (e.g., some cephalopods). The functional significance of this great variety is still a matter of conjecture, although it has been suggested that vertical slit pupils allow nocturnal animals with multifocal lenses (chapter ) to focus images corrected for chromatic aberration (Malmström and Kröger, ), and multiple circular pupils in geckos might aid in distance estimations (Murphy and Howland, ).
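Because image brightness scales with pupil area, these area ratios follow directly from the pupil dimensions. The figures in the Python sketch below are illustrative round numbers, not measurements from any particular species:

```python
import math

def pupil_area(diameter_mm):
    return math.pi * (diameter_mm / 2.0) ** 2

# A circular pupil dilating from 2 mm to 8 mm changes its area 16-fold:
print(pupil_area(8.0) / pupil_area(2.0))  # ~16

# A slit pupil that closes from a 10-mm circle down to four 0.1-mm
# pinholes (gecko-like, values assumed) spans a far greater range:
dilated = pupil_area(10.0)
constricted = 4 * pupil_area(0.1)
print(dilated / constricted)  # ~2500
```

The near-complete closure of a slit or multi-pinhole pupil is what lets an extremely sensitive nocturnal retina remain usable in full daylight.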
The Retina The retinal morphologies of animals with camera eyes are also quite variable. In invertebrate eyes (of all optical types), the visual cells of the retina are exclusively photoreceptors. In more advanced invertebrate visual systems, such as those of mollusks and arthropods, the next stages of processing occur in the optic lobes, regions of the brain placed adjacent to the eyes and specialized for higher visual processing (see the octopus eye in figure .). For instance, in arthropods, signals carried by the photoreceptors are first analyzed in the lamina, the most peripheral neuropil of the optic lobe, whose role, among other things, is to optimize visual contrast. The massively complex
Figure 5.6 Camera eyes of the octopus and the cod (Gadus morhua). Despite a similar size (10 mm diameter) and gross morphology, these eyes have very different retinas. Visual cells of the octopus retina consist exclusively of rhabdomeric microvillar photoreceptors of a single type that point outward toward incoming light. Cod retina consists of many cell types important for the early processing of visual information. Two types of ciliary photoreceptors—rods and cones—point inward away from the incoming light. (Reproduced from Land and Nilsson, 2012, with the permission of Oxford University Press)
medulla, which receives input from both the lamina and the retina, is responsible for the early processing of color, polarization, and motion information. The analysis of optic flow and the detection of moving targets are essential tasks of the lobula and lobula plate, the most central neuropils of the optic lobe (see chapter ). Many of these visual processing tasks—which take place outside the retina in invertebrates—are instead performed inside the retinas of vertebrates. This is possible because neurons in the vertebrate retina are not just photoreceptors. Many other
nerve cell types—including bipolar cells (to which the photoreceptors connect), amacrine cells, horizontal cells, and ganglion cells—together create a complex and still only partially understood visual processing layer within the eye itself (figure 5.6). To add to this complexity, vertebrate retinas also contain two classical types of photoreceptors (chapter 3): rods, which are responsible for sensitive low-resolution monochromatic vision in dim light; and cones, which are responsible for high-resolution polychromatic vision in bright light. Recently a third type of photoreceptor has also been identified in the vertebrate retina. This is actually a special class of retinal ganglion cell, containing the visual pigment melanopsin, which measures ambient light levels and is responsible for setting the circadian rhythm, regulating pupil dilation, and controlling melatonin levels in the body (e.g., Hattar et al., 2002). In this book, however, we restrict our discussion of vertebrate photoreception to that occurring in the rods and cones. The neural circuits of the vertebrate retina are responsible for the elementary processing of contrast, color, and motion, visual features first extracted only in the lamina and medulla of the invertebrate optic lobe. Signals that leave the vertebrate retina via the optic nerve (which is dominated by the axons of ganglion cells) for the visual cortex of the brain thus do so in highly processed form. A final curious difference between photoreceptors in the retinas of vertebrates and invertebrates is that they are oriented in opposite directions with respect to the incoming light. This can be readily seen by comparing the camera eyes of an octopus and a fish (figure 5.6). At a gross morphological level they seem remarkably similar.
Both eyes have roughly the same form, and both have spherical lenses (with exquisite gradients of refractive index) that produce crisp aberration-free images on the underlying retina—without doubt one of the most beautiful examples of convergent evolution in the animal kingdom. However, if one looks more closely at the retina, an important difference becomes apparent: the photoreceptive layer (the rhabdoms) of the octopus retina points outward toward the incoming light, whereas the corresponding layer of the fish retina (the rod and cone outer segments) points inward away from the incoming light. This difference arises from significant developmental differences between the two animal groups. Even though the vertebrate construction seems illogical, it nonetheless works, thanks to the fact that the overlying retinal cell layers are very transparent and, in the most important part of the retina—the high-resolution fovea—are also thinned out to the bare minimum.
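The "exquisite gradients of refractive index" in these spherical lenses have a well-known quantitative signature: in fish and cephalopod lenses the focal length is about 2.55 lens radii (Matthiessen's ratio, a classical value from the comparative optics literature rather than a figure stated in this chapter). A minimal sketch of what that implies for image brightness:

```python
# Fish and cephalopod spherical lenses follow Matthiessen's ratio: focal
# length is about 2.55 lens radii, a consequence of the internal refractive
# index gradient. The 2.55 figure is the classical textbook value, assumed
# here for illustration, not a number taken from this chapter.

MATTHIESSEN_RATIO = 2.55   # focal length in units of lens radius

def f_number(lens_radius_mm):
    """F-number = focal length / aperture diameter for a spherical lens
    whose full diameter acts as the aperture."""
    focal_length = MATTHIESSEN_RATIO * lens_radius_mm
    aperture = 2.0 * lens_radius_mm
    return focal_length / aperture

print(round(f_number(5.0), 3))   # ~1.275, independent of lens size
```

Because both focal length and aperture scale with lens radius, the F-number works out to about 1.27 for every Matthiessen lens, large or small: a bright image, which is part of what makes these camera eyes so effective in dim water.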
Compound Eyes

Simply on the basis of the sheer number of species that possess them, compound eyes are by far the most common eye type in the animal kingdom. For the general public they are most firmly associated with insects, but they are also commonplace among crustaceans. They are even found, often in quite rudimentary form, in some chelicerates (e.g., the horseshoe crab Limulus), annelids (e.g., sabellid worms), and bivalve mollusks (e.g., ark clams). Unlike the eye types we have already discussed, compound eyes, as their name suggests, are constructed of many individual optical units. Known as "ommatidia" (figure 5.7A), these units are tubular in shape and consist of one or more lenses—typically an outer transparent cuticular "corneal lens" and an inner "crystalline cone"—that supply light to a number of photoreceptor cells (or "retinular cells")
assembled underneath. The exact number of retinular cells within a single ommatidium varies across taxa, but a very common number is eight (these are known as retinular cells R1 to R8). Primary and secondary pigment cells (holding dark granules of screening pigment) are also present within each ommatidium and ensheath the ommatidial tube. The external corneal surfaces of most compound eyes are marked by a crystalline matrix of hexagonal "facets," each being the curved external surface of a single corneal lens that supplies light to the underlying retinular cells. In some compound eyes, notably the reflecting superposition eyes (see below), the facets are instead square, and for good optical reasons. Each ommatidium receives light from a small region of space—for most compound eyes no more than a few degrees across—and two neighboring ommatidia receive light from two neighboring such regions. Thus, the greater the number and density of ommatidia in a compound eye, the more finely sampled is visual space. A large dragonfly may have over 30,000 ommatidia in each of its two compound eyes, thus dividing visual space into over 30,000 "visual pixels," to again borrow the digital camera term (figure 5.7B). Dragonflies, like many insects, have almost complete wrap-around vision, being able to view nearly the entire 360° panoramic sphere of visual space around them, with the exception of a small blind spot immediately behind (where the body is). This ability to see in almost every direction simultaneously is unrivaled by any other eye type and is without doubt one of the major advantages of the compound eye design. Even though dragonflies have many fewer "pixels" than one would wish for in a camera, they nonetheless have formidable visual powers that allow them, among other things, to catch prey on the wing (typically other flying insects—see chapter 10). At the other end of the scale are some ants, whose compound eyes may contain only a tiny handful of ommatidia.
An extreme case is presented by soldiers of the tropical army ant Eciton burchellii, whose compound eyes appear to consist of a single giant ommatidium, although the structure of the retina suggests that this "ommatidium" may instead function as a camera eye (Werringloer). Each of the retinular cells possesses a microvillous photosensitive rod-like region known as a "rhabdomere," which it contributes, together with those of the ommatidium's other retinular cells, to a collective "rhabdom" (figure 5.7C,D). The rhabdomeres are either fused to create a "fused rhabdom" (figure 5.7D, the most common arrangement) or remain separated to create an "open rhabdom" (figure 5.7C, typical of Dipterans [flies] and most Hemipterans [bugs]). The rhabdom (or an individual rhabdomere) is a light-guiding structure that houses the rhodopsin molecules and receives and absorbs the incoming light. Even though the lenses of the ommatidium focus an image of the outside world on the tip of the rhabdom, the spatial information present in this image is lost as the light propagates through the rhabdom. Thus, an individual rhabdom is capable only of signaling the mean intensity of the light that forms the image. And because, as we saw in chapter 3, the particular rhodopsin molecule resident in the photoreceptor sets the range of wavelengths that will be absorbed with greatest efficiency, the rhabdom also provides the information necessary for determining the color of this light. Each ommatidium thus signals the average intensity and color (and in some cases polarization) of light incident from the small region of space (or visual field) that it views. In this way the matrix of ommatidia together creates an erect view of the world, formed much like a "mosaic," to quote the German physiologist Johannes Müller, who in 1826 was the first to recognize this mechanism of imaging in compound eyes (Müller, 1826).
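Müller's mosaic can be given rough numbers. If N ommatidia sample a visual field of a given solid angle more or less uniformly, each "pixel" covers about field/N steradians, and the mean angular spacing between ommatidial viewing directions is roughly the square root of that patch. The sketch below uses illustrative values (a dragonfly-like 30,000 ommatidia over nearly the full sphere, and a hypothetical small eye of 100 ommatidia over a hemisphere), not measurements from this chapter:

```python
import math

def interommatidial_spacing_deg(n_ommatidia, field_sr=4 * math.pi):
    """Mean angular spacing (degrees) when n_ommatidia sample a field of
    field_sr steradians roughly uniformly: each ommatidium covers
    field_sr / n sr, and spacing ~ the square root of that patch."""
    patch_sr = field_sr / n_ommatidia
    return math.degrees(math.sqrt(patch_sr))

# Dragonfly-like eye pair: ~30,000 ommatidia over nearly the full sphere.
print(round(interommatidial_spacing_deg(30_000), 2))   # just over 1 degree
# Hypothetical small eye: 100 ommatidia covering a hemisphere.
print(round(interommatidial_spacing_deg(100, field_sr=2 * math.pi), 1))
```

At 30,000 ommatidia the mosaic's pixels sit roughly a degree apart—coarse next to the arcminute resolution of a human fovea, but evidently ample for a hunting dragonfly.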
A little more than 60 years after Müller, in 1891, the Austrian physiologist Sigmund Exner published a groundbreaking monograph concerning the optical
Figure 5.7 Ommatidia. (A) Schematic longitudinal section (and an inset of a transverse section) through a generalized Hymenopteran ommatidium, showing the corneal lens (c), the crystalline cone (cc), the primary pigment cells (pc), the secondary pigment cells (sc), the rhabdom (rh), the retinular cells (rc), the basal pigment cells (bp), and the basement membrane (bm). Left half of the ommatidium shows screening pigment granules in the dark-adapted state; right half shows them in the light-adapted state. (Redrawn from Stavenga and Kuiper, 1977) (B) Compound eyes of an unidentified species of dragonfly. (C) Schematic transverse section through the open rhabdom of a fly, showing the seven distal retinular cells with their separated rhabdomeres. (D) Schematic transverse section through the fused rhabdom of the Collembolan Orchesella, showing the eight retinular cells with their apposed rhabdomeres. (Redrawn from Paulus, 1975)
functions of compound eyes, recognizing for the first time that they fall into two broad subtypes (figure 5.8): apposition eyes and superposition eyes. Despite coming under significant scrutiny in the decades that followed, Exner's brilliant insights endure. Not only do these subtypes exist, but we now know that within each there are several further subtypes. It is to these subtypes that we now turn our attention, beginning with apposition eyes.
Apposition Compound Eyes

An essential difference between apposition eyes and superposition eyes lies in the number of lens units that provide light to a single rhabdom. In apposition eyes only one is involved: light rays entering a corneal facet lens are focused exclusively onto the rhabdom within the same ommatidium. In superposition eyes many lens units are involved: a single rhabdom instead receives light rays that enter a large number of corneal lenses (usually several hundred). A second essential difference between the two subtypes is that in apposition eyes the rhabdoms run the length of the ommatidium to join the proximal tips of the crystalline cones, whereas in superposition eyes the rhabdoms are compressed into a proximal layer of the eye: a wide, optically homogeneous region called the "clear zone" separates this layer from the crystalline cones. In apposition eyes (figure 5.8A), rays are prevented from passing to neighboring ommatidia by a sleeve of light-absorbent screening pigments that entirely encases each ommatidium. Because a single facet lens is only a few tens of micrometers across, the amount of light that each can supply is very limited. Not surprisingly, apposition eyes are typically found in animals active in bright light, such as flies, butterflies, bees, dragonflies, and fiddler crabs. But amazing exceptions do exist—there are several species of nocturnal and deep-sea arthropods, some with quite extraordinary visual abilities, that have apposition eyes (see chapter 11). Even though apposition eyes have restricted sensitivity, the ommatidia of day-active species tend to have high F-numbers and narrow rhabdoms and thus the potential for good spatial resolution. In large apposition eyes (such as those of dragonflies and mantises), experimentally
Figure 5.8 The two broad subtypes of compound eyes. (A) Apposition eyes (in this case a focal apposition eye). (B) Superposition eyes (in this case a refracting superposition eye). cz = clear zone. (Images courtesy of Dan-Eric Nilsson)
measured acceptance angles Δρ (see chapter 4) can be well under 1°, among the lowest values measured in compound eyes. Typically, the quality of the image supplied by a diurnal apposition eye is close to the diffraction limit. Apposition eyes come in three main forms, although there are many other rare and interesting variants (see Land and Nilsson, 2012). The first of the three most common forms is the "focal apposition eye" (figure 5.9A), a widespread design among insects and crustaceans. These eyes are referred to as focal because the focal point of the optical system lies at the distal tip of the rhabdom. The second form, the "afocal apposition eye," has distinctive crystalline cones with tapering apical "stalks," and the distal rhabdom tip is not located in the focal plane (figure 5.9B). So far, afocal apposition eyes are known only from papilionoid butterflies, where they are of general occurrence. The third form is the "neural superposition eye," and despite its name, it is an apposition eye in which signals from individual retinular cells in seven neighboring ommatidia—which all view exactly the same small region of space—superimpose on the same second-order cell of the lamina, the first optic neuropil behind the eye. This interesting design—which results in a sevenfold boost in sensitivity for no loss in spatial resolution—is characteristic of higher flies (Kirschfeld, 1967). In the majority of focal apposition eyes the external curved surface of the corneal lens provides the refractive power needed to focus an inverted image on the distal rhabdom tip. The crystalline cone, with its low and homogeneous refractive index, acts merely as a watery spacer (figure 5.9A). This is not the case in all focal apposition eyes, however, such as those of the horseshoe crab Limulus polyphemus.
In Limulus the surface of the corneal lens is flat and thus without optical power, and the lens itself is internally elongated to form an inwardly pointing cone-shaped extension—the crystalline cone is absent. The focal power of this elongated corneal lens is instead provided by a moderate internal gradient of refractive index that brings rays to a focus on the distal rhabdom tip (Exner, 1891; Land). The situation is somewhat different in afocal apposition eyes (figure 5.9B): instead of being entirely homogeneous in refractive index, the crystalline cone possesses at its proximal end a cylindrical "cone stalk" containing a powerful radial gradient of refractive index from the axis of the cone stalk to its periphery (Nilsson et al.; Nilsson). Parallel rays focused by the external corneal surface are brought
Figure 5.9 Focal and afocal optics in apposition compound eyes. (A) In focal apposition eyes, light (gray) is focused by the corneal facet lens (c) directly onto the distal tip of the rhabdom (rh); the crystalline cone (cc) is watery and acts merely as an optical spacer. (B) In afocal apposition eyes, light is brought to an intermediate focus at the entrance of the powerful cone stalk (cs), which then recollimates the light and directs it to the rhabdom. The gray shading in the cone stalk schematically indicates the presence of the refractive index gradient. pc = pigment cell.
to an intermediate focus at the distal entrance to the stalk and are then recollimated by the refractive index gradient. This stalk, which acts as a powerful second lens, plays a key role in the waveguide optics of the rhabdom, acting as a "mode coupler" that significantly improves visual performance (figure 5.9B). The rhabdoms of butterfly afocal apposition eyes are usually quite thin, with widths that begin to approach the wavelength of light (0.5 μm for blue-green light). We mentioned in chapter 4 that when this happens the rhabdom begins to behave as a "waveguide" and to propagate waveguide "modes," stable patterns of light traveling along the waveguide. The wider the rhabdom, the greater the number of different mode orders that can propagate. The bell-shaped fundamental mode LP01 always propagates, no matter how thin the rhabdom. As the rhabdom widens, LP01 is first joined by the second-order mode LP11, and at greater widths still by the third-order modes LP21 and LP02. A rhabdom as narrow as that of the butterfly Argynnis paphia propagates only the fundamental mode LP01. So too does the equally thin proximal cone stalk joining the rhabdom. However, in the slightly wider conical distal stalk, both LP01 and LP11 can propagate. These two modes are excited to propagate by the Airy diffraction pattern of light produced by the corneal facet (see above). As these two modes propagate proximally through the narrowing cone stalk, the power of LP11 is entirely transferred—or coupled—to LP01, which alone is capable of propagation in the rhabdom. This mode coupling greatly improves the efficiency of light transmission to the rhabdom, significantly increasing both spatial resolution and sensitivity (van Hateren and Nilsson).
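The relationship between rhabdom width and the number of propagating modes follows standard step-index waveguide theory: the dimensionless parameter V = (πd/λ)·NA determines which LP modes fit, with LP11 cut off below V ≈ 2.405 and LP21/LP02 below V ≈ 3.832. The refractive indices below (rhabdom ~1.36 against surrounding cytoplasm ~1.34) are illustrative assumptions, not values given in this chapter:

```python
import math

# LP-mode cutoffs for a step-index cylindrical waveguide (zeros of Bessel J):
LP11_CUTOFF = 2.405       # below this V, only the fundamental LP01 propagates
LP21_LP02_CUTOFF = 3.832  # above this, LP21 and LP02 join LP01 and LP11

def v_number(diameter_um, wavelength_um=0.5, n_core=1.36, n_clad=1.34):
    """Waveguide parameter V = (pi * d / lambda) * NA for a step-index guide.
    n_core/n_clad are illustrative values for rhabdom vs. cytoplasm."""
    na = math.sqrt(n_core**2 - n_clad**2)
    return math.pi * diameter_um / wavelength_um * na

def propagating_modes(diameter_um, **kw):
    """List the lowest-order LP modes a guide of this diameter supports
    (modes beyond the first four are ignored in this sketch)."""
    v = v_number(diameter_um, **kw)
    modes = ["LP01"]                      # always propagates
    if v >= LP11_CUTOFF:
        modes.append("LP11")
    if v >= LP21_LP02_CUTOFF:
        modes += ["LP21", "LP02"]
    return modes

for d in (1.5, 2.5, 4.0):                 # rhabdom diameters in micrometers
    print(d, propagating_modes(d))
```

With these assumed indices, a rhabdom much under about 1.6 μm wide carries only LP01, which is why a thin butterfly rhabdom (and the proximal cone stalk feeding it) is effectively single-moded.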
Superposition Compound Eyes

Unlike the ommatidia of apposition eyes, those of superposition eyes are not optically isolated from each other by screening pigments (except in the extreme light-adapted state in some species): light rays entering many corneal lenses are focused onto a single rhabdom in the retina below. This is facilitated by the presence of a wide, optically homogeneous "clear zone" (figure 5.8B), which separates the crystalline cones from the rhabdoms. Exactly how this focus of light is achieved is remarkable in itself, and it is this optical trick that is probably the most important feature distinguishing superposition eyes from apposition eyes. In a focal apposition eye, when a bundle of parallel light rays enters the corneal lens at an angle to its optical axis, the rays are brought to a focus (at the distal rhabdom tip) on the opposite side of the axis. This is why the image of a distant scene is inverted at the focal plane of the corneal lens. This image also occurs multiple times, once in each facet of the eye. In superposition eyes, however, the situation is very different. Due to remarkable specializations in the crystalline cones, a bundle of parallel light rays that enters the cornea-cone lens pair at an angle to its common optical axis is instead redirected to the same side of this axis. This redirection is also accompanied by a recollimation of the light beam: rays that enter the lens pair parallel also leave it parallel. The thin beam of parallel rays that results is then fired across the clear zone toward a single rhabdom in the retina far below. In superposition eyes, typically hundreds of such thin pencils of light, leaving equally many crystalline cones, are targeted at one and the same rhabdom. At the focal plane of the eye—which coincides with the targeted rhabdom—these beams superimpose to form a single erect image. That all
Figure 5.10 The three different types of superposition optics found in the Arthropoda, showing the paths of incident light rays focused by the cornea into the crystalline cones (shown schematically). (A) Refracting superposition optics, in which incident light rays are refracted by powerful gradients of refractive index within the cornea and crystalline cones. (B) Reflecting superposition optics, in which incident light rays are reflected by the flat tapering sides of each homogeneous box-shaped crystalline cone. (C) Parabolic superposition optics, in which incident light rays are reflected and collimated by the parabolic sides of each homogeneous crystalline cone. ct = cone tract, lg = light guide.
these beams of light end up at the same place seems almost miraculous, and indeed, the degree to which they superimpose is a major determinant of image quality and spatial resolution in this type of compound eye. But the fact that each single photoreceptor receives light not from one but from hundreds of facets means that sensitivity is boosted tremendously. Not surprisingly, superposition eyes are common in insects and crustaceans active in dim light, such as nocturnal moths and beetles as well as deep-sea crustaceans. But remarkable exceptions do exist, particularly among day-active moths, skipper butterflies, and scarab beetles. How do the crystalline cones manage this marvelous act of superposition? It turns out that they do so in one of three possible ways, each having evolved independently in one or more branches of the arthropods and each giving rise to a completely unique subtype of superposition eye (figure 5.10). The first—functioning by refraction (figure 5.10A)—was discovered in fireflies by Sigmund Exner in the 1880s, but it took a further century before the other two were discovered. In the mid-1970s Klaus Vogt discovered that the superposition eyes of crayfish and lobsters function by reflection (figure 5.10B), and over 10 years later Dan-Eric Nilsson discovered a superposition eye in a crab that functions by a combination of both refraction and reflection (figure 5.10C). However, by far the most studied superposition eyes are those discovered by Exner, namely refracting superposition eyes, the type found almost universally among insects with superposition eyes, such as dung beetles (see figure 5.11), as well as in some groups of crustaceans, for example, krill (Land et al.). In refracting superposition eyes—the type of superposition eye shown in figures 5.8B and 5.12C—each ommatidium possesses a corneal lens and a crystalline cone that together function as a Keplerian telescope (figure 5.11D; Exner, 1891; Cleary et al.; Land; Caveney and McIntyre; McIntyre and Caveney, 1985).
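The Keplerian arrangement just mentioned—two lenses separated by the sum of their focal lengths—can be verified with paraxial ray-transfer (ABCD) matrices. The focal lengths below are arbitrary illustrative values, not measured ommatidial optics:

```python
# Paraxial ray-transfer (ABCD) sketch of a Keplerian telescope: two thin
# lenses (focal lengths f1, f2) separated by f1 + f2. Rays are (height, angle)
# vectors; matrices compose right-to-left in the order traversed.

def mat_mul(m, n):
    return [[m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]],
            [m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]]]

def thin_lens(f):
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def gap(d):
    return [[1.0, d], [0.0, 1.0]]

def keplerian(f1, f2):
    # Lens 1, then the gap of f1 + f2, then lens 2.
    return mat_mul(thin_lens(f2), mat_mul(gap(f1 + f2), thin_lens(f1)))

m = keplerian(f1=2.0, f2=1.0)
# C-element ~ 0: an incoming parallel bundle leaves the telescope still parallel.
print(abs(m[1][0]) < 1e-12)
# D-element = -f1/f2: ray angles are scaled and flipped to the other side
# of the axis -- the redirection that builds an erect superposition image.
print(m[1][1])   # -2.0
```

The zero C-element confirms recollimation (parallel in, parallel out), and the D-element of −f1/f2 shows the exit angle flipped across the axis and magnified—the redirection on which superposition imaging depends.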
Such telescopes contain two lenses separated by the sum of their focal lengths: a parallel bundle of light rays incident on the first lens is brought to an intermediate focus between the two lenses and recollimated into a parallel bundle by the second lens (figure 5.12C). In such an optical system the ray bundle exits on the same side of the optical axis as it entered, thus forming an erect image. To cope with large angles of incidence, the second lens would need to be quite large, but in refracting superposition eyes the Keplerian optics is preserved in miniature form via a remarkable mechanism. Instead of being homogeneous as in an apposition eye, the
Figure 5.11 Refracting superposition eyes of the nocturnal dung beetle Onitis aygulus (A, scale bar 5 mm), which consist of a smaller dorsal eye (de) and a larger ventral eye (ve) that are separated by a chitinous canthus (ca) on each side of the head (B). The beetle’s right antenna (an) is also visible. (C) A longitudinal section through the ventral eye showing the cornea (co), crystalline cones (cc), clear zone (cz ), retinular cell extensions (re), rhabdom layer (rl ), basal membrane (bl ), and canthus (ca). Scale bar 100 μm. (D) A close-up of the cornea and crystalline cones shown in C. sp = screening pigment, Scale bar = 20 μm. (E) A close-up of the retina from C showing the rhabdoms (rh). Scale bar = 20 μm. (Panels C–E used with permission from McIntyre and Caveney, 1985)
corneal lenses and crystalline cones possess powerful gradients of refractive index (figure 5.12A). Sigmund Exner predicted these gradients in 1891, and they were finally measured (using interference microscopy) in the meal moth Ephestia kühniella, the animal in which Exner's superposition theory was finally proved correct after a long period of contention (Kunze; Kunze and Hausen; Hausen; Vogt; Cleary et al.). Since then, refractive index gradients have been measured in the lenses of many arthropod superposition eyes (e.g., dung beetles: figure 5.12A). The corneal lens and distal cone focus an incoming ray bundle into the cone's waist region. The proximal cone then recollimates the bundle, allowing it to exit from the proximal cone tip as a narrow parallel beam on the same side of the optical axis that it entered (figure 5.12C). Due to the inherent (roughly spherical) curvature of the eye surface, facets further from the central axial facet must accept parallel light rays at larger angles of incidence. Their exit angles are larger too, and in a classical spherically concentric superposition eye, the ratio of the two angles is
Figure 5.12 Graded refractive index optics in the refracting superposition eyes of the nocturnal dung beetle Onitis aygulus. (A) Refractive index as a function of radial distance r in the distal (d), waist (w), and proximal (p) regions of the crystalline cone (r = 0 denotes the axis of the cone: inset). (B) Superposition aperture of an unidentified moth revealed by its "eye glow." (C) Theoretical ray traces showing how parallel light rays incident at different angles of incidence (0–20°) on the corneal surface are bent and recollimated by internal gradients of refractive index present in the corneal facet lenses and crystalline cones. In O. aygulus the maximum angle of incidence allowed (before rays escape the side of the crystalline cone to be absorbed by surrounding screening pigment) is around 27°. (Panels A and C modified with permission from McIntyre and Caveney, 1985)
roughly constant across the eye, taking the value needed to create a best focus (i.e., a superposition) of exiting light beams, from all the ommatidia of the superposition aperture, at about halfway between the layer of lenses and the center of curvature of the eye. In practice this ratio of exit angle to entrance angle—known as the optical system's angular magnification—is usually a little greater than this ideal value, which places the best focus slightly closer to the layer of lenses. Not surprisingly, the retinal surface lies close to this position (McIntyre and Caveney, 1985). Another consequence of an ever-increasing angle of incidence of parallel light rays at the corneal surface is that beyond a certain maximum angle the cornea and cone can no longer bend the incoming beam sufficiently to allow it to safely exit the cone tip and begin its journey across the clear zone toward the retina (figure 5.12C). Instead, the beam exits the side of the cone, where it is absorbed by screening
pigments. Because the angle of incidence of light steadily increases as one moves further from the central axial facet in all directions, the maximum acceptable angle defines the diameter of the region of facets that accepts light for superposition on a single rhabdom. In refracting superposition eyes this region is a large circular aperture of corneal facets known as the "superposition aperture," and, as we mentioned above, it may contain hundreds or (in extreme cases) thousands of facets. A larger superposition aperture implies greater sensitivity to light, and larger apertures are indeed found in more nocturnal species, as has been nicely documented in dung beetles (McIntyre and Caveney, 1985). The superposition aperture is readily visualized in a nocturnal moth or beetle with a reflective tapetum (see chapter 4): illumination of the eye from the same direction as it is viewed reveals a bright round "eye glow" whose area is approximately equivalent to the superposition aperture (figure 5.12B). Remarkably, despite the superposition of hundreds of individual beams of light on the retina, refracting superposition eyes usually have quite decent spatial resolution. By using an ophthalmoscope to image the retina, Land discovered that image quality in the superposition eyes of skipper butterflies and agaristine noctuid moths is only marginally worse than that due to diffraction at an individual corneal facet. This truly impressive result implies that these superposition eyes—with hundreds of imaging facets—can have the same optical resolution as an apposition eye having a single imaging facet of the same size.
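The superposition geometry can be checked with a toy two-dimensional ray construction. Assume a spherical eye of unit radius with its lens layer at the top, and let each cornea-cone pair redirect an incoming axial ray to the same side of the eye's axis so that it leaves at twice its angular eccentricity (a mirror-like flip about the local ommatidial axis—one simple geometry consistent with the statement above that the beams focus about halfway between the lens layer and the center of curvature):

```python
import math

def crossing_depth(alpha_deg, ratio=2.0, radius=1.0):
    """2-D toy superposition eye. A facet sits at angle alpha from the central
    axis on a sphere of the given radius (center at the origin, lens layer at
    y = radius). A ray arriving parallel to the axis is redirected to the same
    side of the axis, leaving at ratio * alpha from the vertical. Returns the
    height y at which the redirected ray crosses the central axis."""
    a = math.radians(alpha_deg)
    px, py = radius * math.sin(a), radius * math.cos(a)      # facet position
    ex, ey = -math.sin(ratio * a), -math.cos(ratio * a)      # exit direction
    t = px / -ex                                             # steps to reach x = 0
    return py + t * ey

# Beams from facets 5-25 degrees off-axis all cross the axis near y = 0.5,
# i.e. about halfway between the lens layer (y = 1) and the center (y = 0).
for alpha in (5, 10, 15, 20, 25):
    print(alpha, round(crossing_depth(alpha), 3))
```

Beams from facets up to ~25° off-axis cross the axis between about y = 0.50 and y = 0.55: nearly, but not perfectly, superimposed. The residual drift toward the lens layer at larger entrance angles is one geometric reason the superposition of hundreds of beams is never perfect, echoing the point that the degree of superposition limits image quality.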
In an optical model of the superposition eye of the day-active dung beetle Onitis westermanni, McIntyre and Caveney (1985) calculated the destinations of a large number of traced light rays focused by a superposition aperture of 109 facets (figure 5.13B) and discovered that nearly all of them fell neatly over a single flower-shaped rhabdom (figure 5.13A); this again demonstrates how sharply focused superposition eyes can be. In the closely related crepuscular O. alexis (figure 5.13C: superposition aperture of 241 facets) and the nocturnal O. aygulus (figure 5.13D: superposition aperture of 647 facets), the image becomes progressively worse but is still quite well focused despite the increasingly larger and more sensitive superposition aperture. But as we saw in the previous chapter, even an exquisitely well-focused image on the retina is no guarantee of a sharp visual image. This is especially true in eyes of low F-number, such as superposition eyes, where the angles of incidence of individual light rays on the retina can be very large. Without some form of shielding between the rhabdoms, such rays will simply spread throughout the retina, degrading contrast and spatial resolution. Indeed, in the crepuscular dung beetle O. alexis, a species that lacks rhabdom shielding, the shape and width of the electrophysiologically measured receptive field can be entirely explained on the basis of this light spread (Warrant and McIntyre). Not surprisingly, the actual receptive field is much broader than the distribution of light on the retina (see figure 5.13C). A rather different situation is found in the remarkable nonspherical superposition eye of the diurnal hawkmoth Macroglossum stellatarum, a species possessing a reflective tapetal sheath around the full length of each rhabdom (Warrant et al.). Such a sheath fully contains the cone of focused light by total internal reflection and prevents light from spreading to neighboring rhabdoms.
The resulting receptive field is the narrowest ever measured from a photoreceptor in a superposition eye, with a Δρ of just over 1° (Warrant et al.). Many apposition eyes, such as those of flies and bees, have values of Δρ that are close to double this.
Figure 5.13 Theoretical retinal image quality in the refracting superposition eyes of three species of dung beetles from the genus Onitis: the diurnal O. westermanni (B), the crepuscular O. alexis (C), and the nocturnal O. aygulus (D). After an optical model of the eye of each species had been constructed, parallel light rays (incident on the surface of the eye in a square grid with spacing 1.5 μm) were traced to their destinations on the retina. In transverse section the rhabdoms of dung beetles are flower shaped (electron micrograph in A); in all three species a density contour map (pink) showing the destinations of rays targeted at a single rhabdom reveals that most rays fall on their target, although neighboring rhabdoms may also be intercepted (color scale shows rays per 1 μm²). Image is sharpest in the diurnal species (109 contributing facets), followed by the crepuscular species (241 facets), and then the nocturnal species (647 facets). The rhabdom diameter in all three species is 14 ± 1 μm. Scale bar in A = 5 μm. (Panels B–D redrawn from McIntyre and Caveney, 1985)
Thus, considering their formidable sensitivity to light, superposition eyes can have impressive spatial resolution. It is no doubt this combination that has allowed many nocturnal insects with superposition eyes to perform spectacular visual behaviors at night, including the abilities to see color, to orient using the faint pattern of polarized light formed around the moon, and to avoid obstacles during high-speed flight through a forest. These, however, are topics for a later chapter (chapter 11).
6 Spatial Vision
An endless shimmering expanse of tinder-dry yellow grass, broken only by an occasional grove of ancient gnarled eucalypts, bakes under the relentless Australian noonday sun. The heat, rising in great invisible shards from the ground, is palpable. Without warning, one of the world's largest birds of prey—the wedge-tailed eagle Aquila audax—lifts slowly and deliberately from its aerie in a nearby gum tree, its huge wings climbing ladders of air to gain height. Arching leftward, it enters a vertical chimney of heat and surges heavenward in wide spirals. Atop this thermal, far above the ground, this massive bird begins to scan the immense ochre sweep of the Monaro High Plains, a vast undulating grassland stretching westward toward Mt. Kosciuszko. The eagle is on the lookout for rabbits, now its favorite food thanks to their introduction by European settlers in the nineteenth century. Its eyes, though evolved for other quarry, have astonishing visual acuity, in fact among the best in the animal kingdom. Although at this altitude the wedge-tailed eagle is an indiscernible speck in the sky for rabbits on the ground, they themselves, and their telltale scurrying through the grass, are seen with brilliant clarity, allowing the eagle to select its target. Even though the eagle's descent is long, the rabbit stands little chance. In fact, it seems oblivious to the danger descending rapidly from above. The outcome of the eagle's stoop on its prey is almost certain. The eagle's visual prowess is due to a remarkable feature of its retina: the deep dimple of the foveal surface acts as a negative lens—effectively akin to a telephoto component—that magnifies the image of the rabbit on the eagle's retina. In contrast, the eyes of the rabbit—with their long horizontal streak of high resolution—are trained on the horizon, from which it expects land-bound predators such as foxes.
With their very different eyes, eagles and rabbits underscore the major role predation has played in the evolution of spatial vision, for predator and prey alike. The same can be said for the detection and pursuit of mates. In fact, every aspect of animal life has profoundly affected the way spatial vision has evolved. Exactly how is the topic of the present chapter. We humans are accustomed to seeing the world in high resolution—in fact, this is arguably our best-developed visual capacity. Compared to many other animals, our eyes are not particularly sensitive to light; nor is our sense of color especially good.
The undoubted splendors of nature's ultraviolet colors are totally invisible to us, as are the world's rich natural sources of polarized light. But when it comes to discerning fine spatial detail, few animals come close to us. There are other primates that share our spatial abilities (e.g., macaques: see De Valois et al., ), but only some groups of birds—notably large birds of prey—significantly exceed them. In the case of wedge-tailed eagles, their large eyes, tightly packed photoreceptors, and the telephoto optics mentioned above endow them with a visual acuity that in bright light is two to three times higher than our own (Reymond, ). But such birds are among the exceptions. What then do we make of the great majority of animals whose spatial resolution is considerably lower than ours—such as the dragonfly, whose two apposition compound eyes sample almost ° of visual space with just , ommatidia? Do these few “visual pixels”—about two orders of magnitude fewer than our own—reduce the world of dragonflies to an incomprehensible blur? Anyone who has taken the time to watch dragonflies hunting and defending their territories around the edges of a sunny pond will know that the answer to this question is clearly “no.” Dragonflies have astounding spatial vision. Not only can they pluck a fly from the air following a precise high-speed aerial pursuit (Olberg, ; see chapter ), they can also actively home in on a single fly in a swarm without being distracted by the movements of others, revealing a selective attention mechanism akin to that in mammals (Wiederman and O’Carroll, ). And as we see in chapter , there are many nocturnal birds and mammals with at least times as many “visual pixels” as the dragonfly that have forsaken vision almost entirely and instead rely on olfaction and mechanoreception for the tasks of daily life.
Thus, the sheer number of visual sampling stations possessed by the retina of an eye is a poor indicator of the ability of animals to discern and react to spatial details in a scene. Then of course there is the nature of the spatial task that has to be solved. Some animals need to discern the arrangement of objects in an extended scene. Others need to see and localize a tiny point of light against a dark background (like a deep-sea fish viewing a bioluminescent flash) or to detect a tiny dark silhouette against a bright background (like a dragonfly chasing a fly). Some animals of course need to do both. And just as we saw earlier in chapter , to maximize sensitivity, eyes adapted for viewing extended scenes must be built differently from those adapted for viewing point-source scenes. As we will see, the same can be said for the maximization of spatial resolution. But regardless of whether the visual task is to follow a tiny target or to keep track of the physical arrangements of objects in a scene, all aspects of animal life have steered the evolution of spatial vision, particularly the distribution of an eye's sampling stations in visual space. The pursuit and interception of mates and prey, the avoidance of predators, the negotiation of a complex three-dimensional world during locomotion, and the physical layouts of different habitats have all played critically important roles in how an eye's sampling matrix has evolved. Exactly how these ecological forces have shaped the sampling matrices of animal eyes is the major theme of this chapter. We restrict our discussion to the two major eye types of the animal kingdom—camera eyes and compound eyes—as these are by far the best understood in terms of the ecology of spatial vision. Moreover, except where necessary in the service of visual ecology, we do not discuss the relationship between image quality and the sampling matrix, topics we introduced in the context of eye specialization in chapters and .
For a deeper discussion of this admittedly fascinating topic, the interested reader is referred to several superb accounts, notably those of Wehner (), Snyder (), Hughes (), Land (), and Land and Nilsson (). Before we discuss how the ecologies of animals have driven the evolution of their sampling matrices, we begin by describing the individual sampling stations themselves and the ways in which variations in the sampling matrix have been achieved. We start by defining the parameter of most importance for this description, namely the interreceptor angle Δφ, and we do so first in camera eyes.
The Sampling Stations of Camera Eyes

Camera eyes are possessed by vertebrates as well as by a number of invertebrates (chapter ), for instance mollusks (e.g., gastropods and cephalopods) and arachnids (e.g., spiders and scorpions). In chapter we saw that in a camera-type eye, where the sampling stations (or “receptors”) are tightly packed (and are thus touching, or “contiguous”), the interreceptor angle (Δφ) can be described by the following expression (equation .b; figure .A):

Δφ = d/f (radians)  [camera eyes]  (6.1)
where d is the diameter of the receptor (μm) and f is the focal length of the eye (μm). Larger camera eyes with longer focal lengths will tend to have smaller values of Δφ and thus higher spatial resolution (incidentally, they will also tend to have larger pupils and higher sensitivity and thus better vision overall). Even in eyes of constant focal length, areas of the retina with smaller receptors will also have smaller values of Δφ and higher resolution. In this latter case such a local area of high resolution is
Figure 6.1 Definition of the interreceptor angle Δφ in camera eyes (A, assuming contiguous photoreceptors) and the interommatidial angle Δφ in compound eyes (B). d = interreceptor distance (equal to the receptor diameter in a contiguous retina), f = focal length of the eye, D = distance between the centers of two adjacent corneal facet lenses (equal to the facet diameter), and R = local radius of the eye surface. (Schematic drawings of the two eyes courtesy of Dan-Eric Nilsson)
referred to as an “area centralis” (vertebrates) or an “acute zone” (invertebrates). If in addition the retinal surface within the area centralis is inwardly dimpled to create a pit, the area is referred to as a “fovea.” Foveae result from a local absence of blood vessels and a thinning of the retinal layers overlying the photoreceptors—this thinning reduces the scattering of the incoming light that passes through these layers prior to its absorption. Simian primates (like ourselves) possess a fovea, although the pit is rather shallow. In other species foveae are often deeply pitted. These deep foveae, called “convexiclivate” by Walls (), occur in a wide variety of vertebrate eyes, including those of birds of prey, swallows and kingfishers, chameleons, pipefish, seahorses, and in many species of deep-sea fish. An analogous structure is also found in the retinas of jumping spiders. As we will soon see, deep convexiclivate foveae have a special role to play in spatial vision.
Vertebrate Camera Eyes

In vertebrate camera eyes the tightly packed primary photoreceptors (i.e., the rods and cones; chapter ) represent the first matrix of sampling stations that receive the focused image. In the eyes of diurnal primates such as ourselves, the cones are smallest, densest, and thus most numerous at the exact center of the fovea (figure .). In the central fovea of our own eye, for instance, the cone outer segment diameter d is around . μm. Because the focal length f of the human eye is . mm (= , μm), equation . reveals that the interreceptor angle Δφ is . × – radians, or .° (1 radian = 180°/π ≈ 57.3°). This is an extremely small value—it is little wonder that our foveal vision is so acute! In fact this value of Δφ is similar in size to the smallest detail we can discriminate in bright light (when our pupil is about mm in diameter), implying that our spatial vision is limited only by diffraction. In the language of spatial frequencies (see chapter ), this means that the finest pattern of black-and-white stripes that our photoreceptor matrix can reconstruct (with Δφ = .°) has a spatial frequency of around cycles/°. This turns out to be roughly the same value as our optical cutoff frequency at the diffraction limit in green light (see figure .). Diffraction-limited optics matched to the sampling matrix is also a feature of other anthropoid primates as well as large birds of prey. However, in other vertebrates image quality can be much poorer than implied by the photoreceptor sampling matrix. For instance, the small (-mm-diameter) rod-dominated eyes of the nocturnal brown rat Rattus norvegicus have Δφ = .° (foveal rods). When rat eyes are fully dark-adapted (pupil diameter around . mm), the smallest spatial detail resolvable in the optical image is .° wide— times wider than Δφ (Clarke and Ikeda, ; Hughes and Wässle, )! Even in bright light, when the pupil is eight times narrower, the smallest resolvable detail is only a third smaller.
This means that the matrix of rods is able to reconstruct much finer spatial details than the optics can supply, indicating that either aberrations and other optical imperfections (rather than diffraction) limit the performance of the eye or that a coarser matrix of underlying ganglion cells provides the limit. As we discuss below, the ganglion cells usually pool signals from large groups of photoreceptors, especially in the retinal periphery, but in nocturnal animals such as the rat this can occur even in the fovea, where it increases sensitivity (see chapter ). Indeed, Δφ for the foveal matrix of ganglion cells is about .° (Hughes, ), six times the value for the rod matrix.
Figure 6.2 Densities of rods and cones in the human retina as a function of distance (eccentricity in millimeters) from the fovea (eccentricity = 0 mm). (A) Transverse sections through the mosaic of photoreceptor inner segments at different eccentricities. The center of the fovea (0 mm) is free of rods, and the cone inner segments (seen here) are extremely narrow. At an eccentricity of 0.66 mm (moving temporally), the cone inner segments have become much wider and less densely packed (large profiles), and this trend increases with increasing eccentricity. Rod inner segments (small profiles), narrow and relatively few at 0.66 mm, increase in number and prominence with increasing eccentricity. Scale bar = 10 μm. (Compiled from images in Curcio et al., 1990, and reproduced with permission of the publisher, John Wiley & Sons) (B) The relative densities of rods and cones along a nasal-temporal transect passing through the fovea (0 mm). Notice how rods and cones change their relative dominance at different locations in the retina. (Adapted from Rodieck, 1998, from data originally published by Østerberg, 1935)
However, this is still less than half the size of the smallest spatial detail resolvable in the optical image. Thus, the bottleneck for spatial vision is the optics, a conclusion that in fact manifests itself in the rat's behavior—the smallest spatial detail it can discern in dim light is around .° in size (Birch and Jacobs, ; see figure .), very close to the smallest passed by the optics. Interestingly, even in our own eyes, when the pupil is wider than about mm (at moderate light levels), image quality also worsens due to spherical and chromatic aberration, with a corresponding loss in acuity (Charman, ). But let us again return to the human retina. As one moves away from the foveal center, the cones increase rapidly in size, and their density falls (figures .A and .C), plateauing to a steady low density throughout the remainder of the retina. At a distance of mm (i.e., at an “eccentricity” of mm) from the foveal center, d has
increased from . μm to around μm (figure .A: because the cones are not contiguous, d is the distance between cone centers). Thus, Δφ becomes correspondingly larger, around .°, or over seven times larger than in the central fovea. Interestingly, the density profile for rods is completely different (figure .B). The rods instead are totally absent from the foveal center but quickly rise in number and density to become most numerous at some distance away from the center (in humans around – mm away). Beyond this peak, rod density declines slowly throughout the remainder of the retina. As we see in chapter , this distribution of rods and cones is radically different in the eyes of nocturnal primates such as the owl monkey Aotus, and (as we will also see below) for good reason. But regardless of species, the interreceptor angles (Δφ) for rods and cones in vertebrate eyes will vary markedly from one part of the retina to another. Moreover, because the fovea is cone dominated and the peripheral retina is rod dominated, this variation in Δφ will also differ for the two photoreceptor types. However, despite their large variations, the matrices of rods and cones do not define the visual sampling stations of the vertebrate retina. Instead, the sampling stations are defined by the retinal ganglion cells, the cells to which the photoreceptors ultimately converge (figure .A; Hughes, ; Rodieck, ; Collin, ; Peichl, ). In mammals each cone is connected to two kinds of cone bipolar cells—an ON cone bipolar cell and an OFF cone bipolar cell. These in turn respectively connect to two kinds of ganglion cells—an ON ganglion cell and an OFF ganglion cell. As their names suggest, the ON cells create an ON pathway (which is activated by an increase in illumination), whereas the OFF cells create an OFF pathway (which is activated by a decrease in illumination). The rods also connect to bipolar cells, specifically to rod bipolar cells (which are all of the ON type). 
These each connect to an amacrine cell (see figure .A), the so-called AII amacrine cell, which in turn connects to one ON and one OFF cone bipolar cell. As before, these connect respectively to one ON ganglion cell and one OFF ganglion cell. Thus, in a sense, the rods converge onto ganglion cells by piggybacking onto the cone circuitry. But irrespective of the specific circuitry that underlies the convergence of rods and cones onto the underlying ganglion cells, and irrespective of the complicating fact that the ganglion cells effectively form two parallel matrices (an ON matrix and an OFF matrix), a critically important principle remains: each ganglion cell receives input from one or more photoreceptors, the exact number depending intimately on the eccentricity where the ganglion cell is located. At the very center of the fovea, one single “midget” ganglion cell receives input from one single cone. As its name suggests, the receptive field of a midget ganglion cell—which is defined by the extent of its dendritic field—is very small. In fact, it is no larger than the receptive field of the cone itself. But as soon as one moves away from the foveal center, the number of photoreceptors supplying single ganglion cells grows. Within a couple of millimeters of the foveal center many tens of cones are supplying each ganglion cell (with a dendritic field that has expanded appropriately). In the retinal periphery, which is dominated by rods, thousands of rods may converge onto single ganglion cells. Not surprisingly, the dendritic fields of peripheral ganglion cells (especially those of the “parasol” type) are enormous. Because the dendritic fields of the ganglion cells tend to “tile” the retina in a regular mosaic with little overlap (Rodieck, ; figure .B), an increase in the size of the dendritic (receptive) field implies a decrease in ganglion cell density (figure .D). Thus, at any one location in the retina, the dendritic field size of the ganglion cells
Figure 6.3 Retinal ganglion cells, the sampling stations of the vertebrate retina. (A) Main cell types of the vertebrate retina, showing the retinal ganglion cells that send information processed by the retina to the brain for further processing. In this drawing, light enters the retina from below. (From Hubel, 1995, with permission from Macmillan Publishers Ltd.) (B) A drawing of a network of parasol ganglion cells from the human retina, showing their packing arrangement. (From Rodieck, 1998, using data originally published by Dogiel, 1891, with permission from Sinauer Associates) (C,D) The spatial density of cones (C) and all classes of ganglion cells (D) in a flat-mounted human retina (color scale in D applies to both panels). In the fovea the density of the two cell types is similar, and at its very center there is a 1:2 convergence of cones onto (midget) ganglion cells (see text). However, at increasing eccentricity, the density of ganglion cells falls much more quickly than that of cones, implying that ever-larger pools of cones converge onto single ganglion cells. Scale bar = 10 mm. (Adapted from Rodieck, 1998, from data originally published by Curcio et al., 1990)
(now determining d in equation .) defines the local Δφ: the smaller the dendritic fields of the ganglion cells (and the higher their density), the smaller the value of Δφ and the higher the local spatial resolution. Ganglion cell dendritic fields—and values of Δφ—are thus smallest in the fovea. As one moves away from the fovea, the dendritic fields of the ganglion cells gradually become larger (with a concomitant increase in Δφ), and the local spatial resolution falls. Take the midget ganglion cells as an example. In the human retina these constitute % of all ganglion cells and are found throughout the retina—most of the remaining % is comprised of “parasol” ganglion cells (with larger dendritic fields), bistratified ganglion cells, and photosensitive ganglion cells involved in the control of the circadian rhythm. As mentioned above, midget ganglion cells have their smallest receptive fields at the center of the fovea, where they collect inputs from single cones. Thus, as for the cones, the interreceptor angle Δφ for human foveal midget ganglion cells will be .°. However, at an eccentricity of mm, where Δφ for the cones is .° (see above), midget ganglion cell dendritic fields are about μm across (Dacey, ). This results in Δφ = .°, a value nearly five times greater than the value for cones. At the same eccentricity, Δφ for the parallel matrix of parasol ganglion cells would be more than three times larger again. Not surprisingly, density plots for cones and ganglion cells (of all types) across the human retina reveal similar high densities near the fovea, but everywhere else the density of the ganglion cells (i.e., the density of the retina’s sampling stations) is much lower than that of the cones (compare figures .C and D).
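The arithmetic running through this section is compact enough to sketch numerically. The snippet below is a minimal illustration, not taken from the book: the receptor diameter, focal length, and pooling factor are assumed round numbers of roughly primate-foveal scale. It computes Δφ = d/f, the finest stripe pattern a matrix with that spacing can reconstruct (the sampling, or Nyquist, limit of 1/(2Δφ) cycles per degree), and the coarsening of Δφ when many photoreceptors converge onto a single ganglion cell.

```python
import math

def interreceptor_angle_deg(d_um, f_um):
    """Interreceptor angle, delta-phi = d / f (radians), returned in degrees."""
    return math.degrees(d_um / f_um)

def nyquist_limit_cpd(delta_phi_deg):
    """Finest stripe pattern the sampling matrix can reconstruct: 1 / (2 * delta-phi)."""
    return 1.0 / (2.0 * delta_phi_deg)

# Assumed illustrative values: receptor diameter ~2 um, focal length ~16.7 mm.
dphi_cone = interreceptor_angle_deg(2.0, 16700.0)

# Ganglion-cell pooling: if ~100 receptors converge onto one ganglion cell, the
# effective sampling diameter grows by sqrt(100) = 10, and delta-phi with it.
pool = 100
dphi_ganglion = interreceptor_angle_deg(2.0 * math.sqrt(pool), 16700.0)

print(f"cone delta-phi     = {dphi_cone:.4f} deg")
print(f"Nyquist limit      = {nyquist_limit_cpd(dphi_cone):.0f} cycles/deg")
print(f"ganglion delta-phi = {dphi_ganglion:.4f} deg")
```

Pooling a hundred receptors widens the effective sampling diameter tenfold, so Δφ grows tenfold and the resolvable spatial frequency falls tenfold—the ganglion-cell matrix, not the photoreceptor matrix, then sets the limit, exactly as in the retinal periphery described above.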
Invertebrate Camera Eyes

In invertebrate camera eyes the situation is a lot simpler thanks to the existence of only one visual cell type in the retina, the photoreceptor. In this case spatial vision could potentially be subserved by the full density of the photoreceptor matrix (and not by an underlying and more dilute matrix of other cells such as the ganglion cells we discussed above). In squid and octopus—whose camera eyes can be quite massive (see figure .)—the spatial resolution could potentially be extremely high. In the enormous eyes of the giant deep-sea squid Architeuthis dux (which can reach around cm in diameter; see chapter ), the rhabdom diameter d is around μm, and the focal length f of the eye is around , μm (see table .). The interreceptor angle Δφ is then about .° (equation .), almost five times smaller than the smallest foveal value in our own eye (and probably the smallest value in the animal kingdom). Whether this deep-sea creature, which inhabits the inky depths about km below the surface of the sea, actually achieves this resolution is hard to say—the optical quality of the image might be much worse than that needed to match the full resolution of the retina (due to aberrations and other optical problems), as we saw above for rats. Even more likely is that the photoreceptors are summed together into groups to increase sensitivity, a strategy that compromises spatial resolution (by effectively widening Δφ). In smaller cephalopods (with smaller eyes) living in bright light (such as Octopus) image quality could be better and summation unnecessary, in which case a much higher resolution might be attained. Nevertheless, in Octopus vulgaris (whose eyes are – mm across and where Δφ is around .°) the smallest detail that can be
Figure 6.4 The eyes of jumping spiders. (A) A portrait of the male jumping spider Portia fimbriata, which has the highest acuity known in spiders. One eye of each of the four pairs of eyes is labeled. AM = anteromedial eyes, AL = anterolateral eyes, PM = posteromedial eyes, PL = posterolateral eyes. (Image reproduced with the kind permission of Jürgen Otto) (B) A schematic transverse section through the right-side head and eyes of a jumping spider showing the visual fields of each eye (red lines). The long tubular AM eyes are the largest and have a retina (r) at the base consisting of four layers (I–IV). Six muscles (1–6) scan each AM eye tube over a wide arc (horizontal extent is shown by green arrow), and thus compensate for their narrow visual fields (5° horizontally by 20° vertically). (After Land and Nilsson, 2012) (C) A transverse light microscope section through retinal layer I in an AM eye of the jumping spider Phidippus johnsoni showing its elongated dorsoventral shape. Notice how the density of photoreceptors (and thus the spatial resolution) increases dramatically toward the center of the retina. d = dorsal, v = ventral, m = medial, o = outer. (From Blest et al., 1988)
discriminated behaviorally is about five times larger than Δφ (Muntz and Gwyther, ), implying that lens aberrations could indeed be a limitation to resolution in cephalopods. The same conclusion can be drawn for the camera eyes of most web-building spiders, whose eyes are usually poorly resolved, and which instead rely on mechanosensory cues to catch prey. Nonetheless, some spiders (e.g., the nocturnal wolf spiders and net-casting spiders or the day-active jumping spiders Salticidae) hunt visually. Of these, the jumping spiders have stunningly well-resolved eyes. The fringed jumping spider Portia fimbriata, a native of Southeast Asia and Australia, is an excellent example. Like most spiders, Portia has eight eyes arranged around the front, sides, and top of the head carapace (figure .A): two anterolateral (AL) eyes, two posterolateral (PL) eyes, two posteromedian (PM) eyes, and two anteromedial (AM) eyes. The first three pairs are collectively referred to as the “secondary eyes,” whereas the AM eyes—which in jumping spiders are overwhelmingly the largest— are referred to as the “principal eyes.” The AM eyes are also the most acute—their tubular form houses a long focal length, a sharp lens, and the presence of a deep image-magnifying convexiclivate fovea (see figure .A), all of which ensure a crisp image. The fovea itself has tightly packed photoreceptors (figure .C), which in Portia are separated by an astonishingly small interreceptor angle of just .°, the smallest value known in any spider and quite possibly in any arthropod. This is only five times coarser than in our own fovea, but in an eye thousands of times smaller! As we will also see for the tubular eyes of owls and deep-sea fish (chapter ), the downside of the tubular shape and high resolution of the AM eyes is that they have a restricted field of view (figure .B): in the jumping spider Phidippus johnsoni (Land, ) this is only ° horizontally and ° vertically (compared to ° in their nontubular PL eyes). 
To compensate for this, six muscles in two bands scan the ocular tube behind the stationary lens, allowing spiders to scan prey and conspecifics and to build up a high-resolution image over time (see chapter ). The secondary eyes, in contrast, with their lower resolution and broader fields of view, are specialized for alerting the spiders to movements at the side. Once registered, these movements lead to a rapid jump of the spider to face (and then scan) the object of interest with the AM eyes.
Negative Lenses and Telephoto Components

Both jumping spiders and birds of prey such as the Australian wedge-tailed eagle extend their spatial resolution even beyond that of their already tightly packed sampling stations by taking advantage of the deeply curved inner surface of their foveal pits (figure .A,B). Because the retinal tissue has a slightly higher refractive index than the vitreous (. vs. .), the foveal surface acts as a negative (diverging) lens, which effectively lengthens the focal length of the eye and magnifies the image within the fovea, very much like a telephoto lens in a camera (figure .C). In the small heads of jumping spiders this is particularly useful as it effectively mimics a longer and better resolved eye but in a smaller format (albeit at the cost of a reduced visual field). In Portia, the powerful foveal lens magnifies the image by over one and a half times, increasing the finest spatial frequency that the matrix of layer
Figure 6.5 Negative lenses in camera eyes. (A,B) Foveal pits in the retinas of the tubular AM eye of the jumping spider Portia fimbriata (A with inset) and the red-tailed hawk Buteo jamaicensis (B with inset). Scale bar in A = 50 μm. Ray paths for the foveal negative lens in B are shown in black. (Photo credits: Jürgen Otto and Nialat: 123RF.com photo agency. Histological sections by Richard Dubielzig DVM courtesy of Ivan Schwab, from Schwab, 2011, with permission from Oxford University Press) (C) Enlargement of an image by the diverging back element of a telephoto system (in this case foveal pit f, which separates the vitreous v [of lower refractive index n] from the retina r [of higher n]). By virtue of the foveal back element, the lens focuses an image i of an object o that is magnified relative to the image (i′) that would have been formed had the element not existed. (Adapted from Williams and McIntyre, 1980, with values of n from Land and Nilsson, 2012) (D) The veiled chameleon Chamaeleo calyptratus. (Photo credit: Cathy Keifer: 123RF.com photo agency) (E) Image formation in the eye of Parson's chameleon, Calumma parsonii. The cornea c acts as a positive (converging) lens, and the curiously shaped internal lens acts as a negative (diverging) lens. Together the cornea and lens act as a Galilean telescope, effectively increasing the focal length of the eye and magnifying the image. A deep convexiclivate fovea magnifies the image even further. Other abbreviations as in C. (Image adapted from artwork by Tim Hengst with permission from Ivan Schwab; from Schwab, 2011, with permission from Oxford University Press)
photoreceptors can reconstruct from around . cycles/° (without the foveal pit) to . cycles/° (Williams and McIntyre, ). Similar conclusions can also be drawn for birds of prey. The foveal pit of the hawk (figure .B) also magnifies the image by about one and a half times (Snyder and Miller, ), and with the same benefits. Despite having eyes about the same size as our own, the finest spatial frequency hawks can discriminate behaviorally is around cycles/° (Reymond, ), more than twice our upper limit ( cycles/°). Even though this gain is mostly due to the diverging foveal pit, the hawk's denser packing of foveal photoreceptors and three-times-wider daytime pupil (which improves the diffraction limit) also contribute. An even stranger negative lens can be found in the curious eyes of chameleons (figure .D,E). Chameleons are well known for their completely decoupled turretlike eyes that independently scan different regions of space in search of insect prey, only recoupling for a final frontal attack, which involves a lightning-fast and well-aimed firing of their sticky tongue (figure .D). Each eye is also capable of a fast and powerful accommodation that allows precise monocular depth perception for accurate estimation of prey distance (Ott and Schaeffel, ), an ability that relies on the accurate focusing of a magnified image. This magnification is achieved by the oddly shaped internal lens which, unlike in every other vertebrate eye known, is negative rather than positive (figure .E). In combination with the strongly positive cornea, this negative lens increases the focal length of the overall optical system (much as a camera's telephoto lens does) and thus magnifies the image. The image is magnified even further by the chameleon's deep convexiclivate fovea.
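The telephoto principle can be put in the same Δφ terms used throughout this chapter. Here is a short sketch with assumed values—a hypothetical small eye with 2-μm receptors and a 1-mm focal length; only the roughly 1.5× magnification is taken from the text:

```python
import math

def delta_phi_deg(d_um, f_um):
    # delta-phi = d / f (radians), converted to degrees.
    return math.degrees(d_um / f_um)

# Hypothetical small camera eye: 2-um receptors behind a 1-mm focal length.
d_um, f_um = 2.0, 1000.0
without_pit = delta_phi_deg(d_um, f_um)

# A diverging foveal pit (or a chameleon-style negative internal lens) that
# magnifies the image m-fold behaves like an m-fold longer focal length,
# shrinking delta-phi (and raising the resolvable spatial frequency) by m.
m = 1.5  # magnification of roughly one and a half times, as for Portia and the hawk
with_pit = delta_phi_deg(d_um, m * f_um)

print(f"delta-phi without pit: {without_pit:.3f} deg")
print(f"delta-phi with pit:    {with_pit:.3f} deg")
```

The gain comes for free in terms of eye length, which is why the trick is so valuable in the tiny heads of jumping spiders—though, as noted above, the magnified image covers a correspondingly smaller field of view.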
The Sampling Stations of Compound Eyes

The interreceptor angle Δφ of a compound eye is simply defined by the density of ommatidia (per unit visual angle). This is nicely illustrated by considering two extreme examples: large aeshnid dragonflies, which may possess as many as , ommatidia in each of their apposition eyes, and some groups of primitive ants that may possess fewer than . If the eyes of both insects view the same solid angular region of visual space, then the dragonfly will sample that region with vastly greater spatial resolution, simply because of its much higher density of sampling stations (i.e., ommatidia). This density is directly related to the local “interommatidial angle” Δφ (figure .B): smaller values of Δφ indicate a greater sampling density and a higher spatial resolution. The interommatidial angle depends primarily on two anatomical parameters, the facet diameter D and the eye radius R:

Δφ = D/R (radians)  [compound eyes]  (6.2)
A larger local eye radius (i.e., a flatter eye surface) or a smaller facet produces a smaller interommatidial angle. However, there is a limit to how much Δφ can be narrowed by decreasing the size of the facet because, as we saw in chapter , the size of the Airy disk (λ/D radians, where λ is the wavelength of light) will become unacceptably large if D becomes too small. Nevertheless, it is possible to have a region of the eye with such a large radius of curvature that an extremely small Δφ is still possible without having to sacrifice facet size. In fact in many apposition eyes the facets in these regions can actually be much larger than in other regions of the eye
with double the Δφ (which is better for both diffraction and sensitivity)! This can readily be seen, for example, in the fly Syritta (Collett and Land, ), where in the flattest region of the eye (with R = . mm) the facets are at their maximum size (D = μm). Here Δφ is just .° (equation .). In another region of the eye, where the facets are less than half the size (D = μm), Δφ is two and a half times wider (.°). This is simply because the eye surface here is highly curved, having an eye radius
Figure 6.6 Acute zones and bright zones in apposition compound eyes. (A) The blowfly Chrysomyia megacephala (male) showing the sudden enlargement of corneal facets in the dorsal part of the eye that indicates the presence of a bright zone (used for detecting and intercepting females). (B) The head of a long-legged fly (Family Dolichopodidae) with huge anterior corneal facet lenses that indicate the presence of a frontal acute zone (used in prey capture). (Photo credit: Laurie Knight: www.laurieknight.net) (C,D) Unknown species of dragonfly (D) with a distinct dorsal acute zone of enlarged corneal facet lenses—note the sudden transition to these larger facets from the ventral to the dorsal halves of the eye (which is also accompanied by a sudden change in corneal pigmentation). This transition (indicated by the white box in D) is shown schematically in C. (Photo credit: Nattapol Sritongcom: 123RF.com photo agency. C adapted from Trujillo-Cenoz, 1972)
over five times smaller (R = . mm). The flattened frontal eyes of a long-legged fly (figure .B), which have massive facets and very narrow Δφ, provide another good example. Thus, particularly in apposition eyes, facet diameter and eye radius can both vary dramatically within a single eye, which means that the local interommatidial angle can also vary dramatically—some directions of visual space can thereby be sampled much more densely than others. Such high-resolution “acute zones” (Horridge, ) are common among insects and crustaceans and, just as with the functionally equivalent foveae of vertebrates, the placement and size of an acute zone reflect the habitat and ecological needs of the eye’s owner. The dramatic increase in facet size in the dorsal eye regions of many dragonflies (figure .C,D) is an excellent example. As we see below, this distinctive region, which is easily visible to the naked eye, is an acute zone specialized for the detection of prey. Interestingly, however, an eye region of large facets does not always signal the presence of an acute zone. In the males of a couple of notable species of flies—the blowfly Chrysomyia megacephala (figure .A) (van Hateren et al., ) and the hoverfly (dronefly) Eristalis tenax (Straw et al., )—the dorsal eye regions have enlarged facets but maintain large interommatidial angles and wide rhabdoms. In other words, the enlarged facets are not associated with increased resolution, as in an acute zone, but with increased sensitivity. These regions have thus been aptly named “bright zones.” Because they have so far been found only in males, bright zones are probably used primarily for the detection and pursuit of mates. The chief advantage of a bright zone is that it supplies more photons, increasing the visual signal-to-noise ratio and thus the contrast sensitivity. For the detection of a small moving target this strategy could be quite useful, especially in dimmer light.
Indeed, the dronefly’s bright zone enhances the speed and contrast sensitivity of responses from motion-sensitive cells in its lobula, improvements beneficial for both the detection of small moving targets and the control of hovering. Although common in apposition eyes, the spherical shape (i.e., constant eye radius) and strict optical design of superposition eyes exclude the creation of acute zones. One remarkable exception does, however, exist: the highly nonspherical refracting superposition eyes of the hummingbird hawkmoth Macroglossum stellatarum (figure .A). Somehow during development the facet lenses and rhabdoms have become decoupled to produce many more rhabdoms than facets. The rhabdoms have then been arranged to form local acute zones, their local angular packing (figure .C,D) bearing no resemblance to that of facets in the overlying cornea. There are two acute zones, one frontal and slightly ventral, where there are up to four rhabdoms per facet, (figure .F) and another providing improved resolution along the equator of the eye, with over two rhabdoms per facet (figure .E). Moreover, the size of the facets and the area of the superposition aperture (figure .B) are both maximum over the frontal retinal acute zone. By having larger facets, a wider aperture, and denser rhabdom packing, Macroglossum’s frontal acute zone provides the eye with its sharpest and brightest image and samples it with the densest photoreceptor matrix. It is this eye region that Macroglossum uses to fixate flower entrances during hovering and feeding and is no doubt one of the most important parts of its eye. How the eye pulls off this optical feat is still unknown, but it clearly provides better visual performance—at least in the frontal visual field—than a conventional superposition eye of the same size (Warrant et al., ).
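The facet-size/eye-radius trade-off described above is easy to sketch numerically. The values below are purely illustrative (not measurements from Syritta or any real eye): the local interommatidial angle follows Δφ ≈ D/R, while diffraction blurs each facet’s image over roughly λ/D.

```python
import math

def interommatidial_angle_deg(facet_diameter_m, eye_radius_m):
    """Local interommatidial angle, Δφ ≈ D/R (radians), in degrees."""
    return math.degrees(facet_diameter_m / eye_radius_m)

def airy_halfwidth_deg(wavelength_m, facet_diameter_m):
    """Angular half-width of the Airy disk, λ/D (radians), in degrees."""
    return math.degrees(wavelength_m / facet_diameter_m)

wavelength = 500e-9  # green light

# Hypothetical flat region: large eye radius AND large facets.
flat = {"D": 40e-6, "R": 2.0e-3}
# Hypothetical curved region: small eye radius, smaller facets.
curved = {"D": 20e-6, "R": 0.4e-3}

for name, region in [("flat", flat), ("curved", curved)]:
    dphi = interommatidial_angle_deg(region["D"], region["R"])
    airy = airy_halfwidth_deg(wavelength, region["D"])
    print(f"{name}: Δφ = {dphi:.2f}°, Airy half-width = {airy:.2f}°")
```

On these numbers the flat region combines larger facets with a smaller Δφ and a narrower diffraction blur—exactly the arrangement seen in acute zones.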
Figure 6.7 An acute zone in the retina of a superposition compound eye. (A) The hummingbird hawkmoth Macroglossum stellatarum. (B) Unlike in a classical superposition eye, the moth’s superposition aperture (equivalent to its greenish eye glow) varies in size and is largest in the frontal eye, the region used for stabilizing the flower entrance while hovering during feeding. (Images courtesy of Michael Pfaff) (C,D) The retinal sampling matrix of rhabdoms is coarser in the lateral (C) than in the frontal (D) part of the eye. The small circles represent the rhabdom centers. (E,F) The number of rhabdoms per facet along the dorsoventral meridian (longitude 0°; E) and along the posterior-anterior equator (latitude 0°; F). D = dorsal, V = ventral, A = anterior, P = posterior. (Panels C and D from Warrant et al., 1999)
The Ecology of Spatial Vision

Despite their great benefits, foveae and acute zones rarely view more than a small fraction of the eye’s visual field. The reason for this lies in an unavoidable visual constraint—its energetic cost. The energy required per unit solid angle of visual space is much higher in an acute zone or fovea simply because the density of energy-craving photoreceptors—not to mention the cost of additional processing power required in the central nervous system—is much greater than in other parts of the eye. And as shown in flies (see below), acute-zone photoreceptors can be faster, sharper, and code more information than those in other eye regions, which elevates the energetic cost even further (Burton et al., ). Not surprisingly, the longer eye radius—a proxy for a larger eye—and the extra energetic expense are simply unsustainable in more than just a small region of the eye. Thus, the presence of a fovea or acute zone invariably implies that the region of the world being so finely and expensively sampled is of utmost ecological importance. What then might be so important? It turns out that many aspects of animal life have manifested themselves in the layout of the eye’s sampling stations and in the extent and location of foveae and acute zones. The urge to find a partner, the endless search for food, and the vigilant watchfulness required to spot predators and avoid being devoured are three such aspects. A fourth is the physical layout of the habitats where animals live because this determines the probabilities with which relevant objects can be seen in different directions. A fifth aspect, and the one we discuss first, is “optic flow”—the way the spatial details of the world flow past an animal as it moves through its environment.
All these aspects of animal life have invariably led to the evolution of sensory “matched filters,” a concept first introduced by Rüdiger Wehner in , whereby the spatial layout of the photoreceptor matrix is matched to the spatial demands of the specific task to be solved. Of course to perceive “the world through such a matched filter,” to quote Wehner himself, “severely limits the amount of information the brain can pick up from the outside world, but it frees the brain from the need to perform more intricate computations to extract the information finally needed for fulfilling a particular task” (Wehner, ). Nowhere is this truer than in the ecology of spatial vision.
Locomotion and Optic Flow

Flying insects such as butterflies, flies, bees, grasshoppers, and dragonflies have equatorial gradients of spatial resolution that are adaptations for forward flight through a textured environment (Land, a). As we discuss further in chapter , when an insect (or any animal) moves forward through its surroundings, its eyes experience an optic flow of moving features (Gibson, ; Wehner, ). Features directly ahead appear to be almost stationary, whereas features to the side of this forward “pole” appear to move with a velocity that becomes maximal when they are located at the side of the eye, ° from the pole. If the photoreceptors sample photons during a fixed integration time Δt, the motion of flow field images from front to back across the eye will cause blurring. An object moving past the side of the eye (with velocity v°/s) will appear as a horizontal spatial smear whose angular
size will be approximately vΔt°. This effectively widens the local optical acceptance angle (Δρ, see chapter ) to a new value of √(Δρ² + (vΔt)²) (Srinivasan and Bernard, ). The extent of this widening is worse at the side of the eye (higher v) than at the front (lower v). In order to maintain an optimum sampling ratio of Δρ/Δφ (Snyder, ), the equatorial increase in Δρ posteriorly should be matched by an increase in Δφ. This indeed seems to be the case in many flying insects. For instance, in the Empress Leilia butterfly Asterocampa leilia, Δφ increases smoothly along the equator from the front of the eye to the side, from .° to .° in males and from .° to .° in females (Rutowski and Warrant, ). An accurate real-time analysis of optic flow is critical for the control of flight and is the major task of wide-field motion-detecting neurons in the lobula and lobula plate regions of the insect’s optic lobe. We return to this topic in chapter .
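The matching of Δφ to motion blur can be sketched as follows, using Srinivasan and Bernard’s expression for the motion-widened acceptance angle and assuming, for forward flight, that image velocity grows as the sine of the azimuth from the frontal pole (all numerical values here are illustrative, not measured):

```python
import math

def effective_acceptance_deg(delta_rho_deg, v_deg_per_s, dt_s):
    """Motion-widened acceptance angle: sqrt(Δρ² + (vΔt)²), in degrees."""
    return math.sqrt(delta_rho_deg**2 + (v_deg_per_s * dt_s)**2)

delta_rho = 1.5   # static acceptance angle (degrees), hypothetical
v_max = 300.0     # image velocity at the side of the eye (°/s), hypothetical
dt = 0.02         # photoreceptor integration time (s), hypothetical

for azimuth in (0, 30, 60, 90):  # degrees from the frontal pole
    v = v_max * math.sin(math.radians(azimuth))
    eff = effective_acceptance_deg(delta_rho, v, dt)
    print(f"azimuth {azimuth:3d}°: v = {v:6.1f}°/s, Δρ_eff = {eff:.2f}°")
```

Because Δρ_eff grows several-fold toward the side of the eye, a matched eye can afford correspondingly wider interommatidial angles there.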
Sex

The urgency to reproduce has led to some of the most spectacular visual adaptations found in nature, specifically among the invertebrates and particularly in males. This sexual dimorphism can reach remarkable proportions in the external designs of male eyes. Even the underlying visual circuitry has not escaped: in males there are entire pathways of finely tuned neurons that ascend from the eyes to steer sexual behavior. Visual sexual dimorphism, if it exists at all, is curiously hidden in vertebrates. But in insects it is often glaringly obvious (figure .), with males sometimes possessing entirely separate eyes exclusively devoted to sex. In march flies and mayflies, for instance, the eye has become bilobed, with the upper lobe heavily flattened to drastically increase the retinal sampling density within a narrow upward field of view (Zeil, a). It is here, silhouetted against the brighter sky, that females and rivals will appear as small dark moving spots (Zeil, b). In some species of mayflies each male eye is divided into a huge dorsal “turbanate” superposition eye and a smaller ventral apposition eye. Because mayflies typically swarm at dusk and dawn, the male’s superposition eyes likely provide the extra light catch and greater contrast sensitivity needed for spotting small dark females against the backdrop of a dim crepuscular sky. It is easy to see why good contrast sensitivity (rather than high spatial resolution per se) is important for detecting small dark targets: the ability of a photoreceptor to detect a tiny dark spot passing through its receptive field is actually limited by the smallest amount of dimming that it can just discriminate against the background.
It turns out that this smallest amount of dimming can occur when the angular size of the spot is much smaller than the photoreceptor’s acceptance angle Δρ—the drone honeybee Apis mellifera (Δρ = .°), with huge dorsal acute zones, will take flight and chase a queen when she subtends as little as .° at the eye. Such a target will reduce ommatidial illumination by only % (Vallet and Coles, ). This indicates that good contrast sensitivity—and not simply high spatial resolution—is critically important for detecting small moving targets. Sexual dimorphism in eye design need not be as brazen as in march flies and mayflies. In many species of brachyceran flies sexual dimorphism is subtler: the males have eyes that nearly (or completely) touch along the midline of the head, whereas in females the eyes remain widely separated (figure .A,B). The extra piece of eye possessed by the male—which is amusingly referred to as a “love spot”—is used exclusively for the detection and high-speed pursuit of females (Land and Eckert, ).
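The dimming argument above can be made concrete with a crude solid-angle estimate: a small dark disk of angular diameter a, centered in a photoreceptor’s receptive field of angular diameter Δρ (treated here as uniform rather than Gaussian), blocks roughly (a/Δρ)² of the light. The numbers below are illustrative, not the measured drone-bee values:

```python
def fractional_dimming(target_deg, acceptance_deg):
    """Fraction of receptive-field light blocked by a small centered dark disk
    (uniform-field approximation; real acceptance functions are ~Gaussian)."""
    return (target_deg / acceptance_deg) ** 2

# A dark target one quarter the width of the acceptance angle dims the
# photoreceptor's input by only ~6%.
print(fractional_dimming(0.3, 1.2))  # → 0.0625
```

Even a target a quarter the width of the receptive field dims the input by only about 6%, which is why contrast sensitivity, not raw spatial resolution, limits the detection of such targets.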
Figure 6.8 Double eyes and sexual dimorphism in insects. (A) Male march fly Dilophus febrilis (Family Bibionidae) with its enormous flattened dorsal (d) apposition eyes and smaller ventral (v) apposition eyes (inset) copulates with a small-eyed female. (Image used with permission of the photographer, Dr. Ray Wilson, UK) (B) Unknown species of male mayfly showing the enormous “turbanate” dorsal superposition eyes and the smaller ventral apposition eyes. Two small bulbous eyes below the dorsal eyes are the ocelli. (Photo credit: Laurie Knight: www.laurieknight.net)
Love-spot ommatidia are distinguished by their extra-large facet lenses—depending on the species these subserve either an acute zone (as in the blowfly Calliphora and the hoverfly Volucella pellucens, figure .C) or a bright zone (as in the dronefly Eristalis and the blowfly Chrysomyia, figure .A). The acute zone is clearly seen in the male hoverfly Volucella, which has large love spots located frontally, ° above the equator (figure .C). The interommatidial angle Δφ here falls to just .°. The
Figure 6.9 Optical sexual dimorphism in the apposition eyes of flies. (A,B) Male (A) and female (B) blowfly heads (Calliphora erythrocephala). Note how the eyes of males almost touch, whereas those of females are quite separated. The extra eye surface—or “love spot”—of males (dotted white line) provides the input to a sophisticated neural pathway for detecting and chasing females. (Images from Strausfeld, 1991, with kind permission from Springer Science+Business Media) (C,D) In the hoverfly Volucella pellucens the male love spot is a large dorsofrontal acute zone where interommatidial angles are small (C) and facet diameters are large. The visual fields of the left eyes of the two sexes and interommatidial angles shown by isolines are projected onto spheres. Females have a much smaller frontal acute zone (compare the shaded regions, where Δφ < 1.1°). D = dorsal, V = ventral, A = anterior, L = lateral. (From Warrant, 2001)
size of the acute zone (the eye region where, say, Δφ < .°) occupies deg of the visual field (shaded area in figure .C). In females there is also an acute zone, but instead it is directed frontally (figure .D). Δφ only falls to .°, and the area of the acute zone (Δφ < .°) is a mere % as large as that of males ( deg: shaded area in figure .D). The acute zones of male flies are not restricted to the eye surface. Below the eye there is an intricate neural pathway that is specific to males. First, the connections of photoreceptor axons to the lamina are quite different in the acute zone compared to both the female’s eye and the rest of the male’s eye (e.g., houseflies: Franceschini et al., ). In most ommatidia, six of the eight photoreceptors (R–R) are green sensitive and together synapse onto a single second-order neuron in the lamina (the first optic ganglion of the optic lobe). The remaining two (R and R) are instead ultraviolet sensitive and send their axons straight through the lamina to synapse in the second optic ganglion, the medulla. Curiously, however, in the ommatidia of the male love spot, R mimics R–R, being green sensitive and (with the others) synapsing onto the same second-order lamina cell rather than continuing on to the medulla. This neural trick boosts the visual signal-to-noise ratio by % and thereby increases the contrast sensitivity for small moving targets. This is not the only peculiarity of love-spot photoreceptors. In houseflies they are also % faster than the photoreceptors of females and much more acute, with acceptance angles (Δρ) that are a little more than half those of females (Hornstein et al., ). This is reflected in a spatiotemporal contour plot of photoreceptor response gain as a function of spatial and temporal frequency (figure .A): there is no spatial or temporal frequency where female photoreceptors outperform those of males. 
The diagonal lines are lines of equal angular velocity (°/s) and highlight why male love-spot photoreceptors are so much better suited to the high-speed pursuit of a rapidly turning small target such as a female fly. At the level of response shown in figure .A, a spatial frequency of . cycles/° moves at °/s in males but at only °/s in females. Put another way, a small target moving at high speed may easily be seen by males when totally invisible to females. The reason spatial resolution is so much better in love-spot photoreceptors is that the overlying facet lenses are large (good photon catch and a narrower Airy diffraction pattern) and the rhabdomeres are unusually thin. The improved response speed is achieved by a faster transduction mechanism and a tuned voltage-activated conductance that enhances the membrane’s frequency response. In fact, in the blowfly Calliphora vicina, this translates into an information rate (in bits/s) in male photoreceptors that is up to % higher than that in females (Burton et al., ). All these photoreceptor modifications dramatically improve the visibility of a small rapidly moving target. The neural contrast of such a target—both in space and in time—is significantly enhanced in the responses of male love-spot photoreceptors compared to the responses of female photoreceptors (figure .C,D). Of course, as we mentioned above, all this comes at a cost—the extra tuned conductance (which involves the passage of ions through dedicated channels in the photoreceptor membrane) and the production of larger lenses are energetically expensive. Higher up in the fly’s optic lobe, the sexual dimorphism introduced in the eyes is maintained. Here, in the lobula of males, large male-specific visual cells respond maximally to small dark objects moving across the frontal-dorsal visual field corresponding to the love spots (Strausfeld, ; Gilbert and Strausfeld, ; Gronenberg and Strausfeld, ).
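The signal-to-noise benefit of the love spot’s R7 rewiring can be estimated from photon statistics alone: if photon shot noise dominates and the signals of N photoreceptors viewing the same point are summed, the signal grows as N while the noise grows as √N, so SNR grows as √N. A minimal sketch:

```python
import math

def snr_gain_from_pooling(n_after, n_before):
    """Relative SNR gain when pooling photon-noise-limited receptors:
    SNR ∝ √N, so the gain is √(n_after / n_before)."""
    return math.sqrt(n_after / n_before)

# R7 joining R1-R6 on the same lamina cell: seven inputs instead of six.
gain = snr_gain_from_pooling(7, 6)
print(f"SNR gain: {(gain - 1) * 100:.1f}%")  # → SNR gain: 8.0%
```

On this simple shot-noise argument, the seventh receptor buys roughly an 8% improvement in contrast sensitivity—a worthwhile edge when the target is a distant, fast-moving fly.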
Such cells are clearly suited to the detection of females (that
Figure 6.10 Neural sexual dimorphism in the apposition eyes of flies. (A) Spatiotemporal resolution in photoreceptors from the frontodorsal eye regions of male and female houseflies (Musca domestica). In males the “love spot” is located in this eye region, and photoreceptors here possess both higher spatial and higher temporal resolution (red curve) compared to the photoreceptors of females (green curve). Contours show photoreceptor response gain at 1/√2 maximum, and dotted blue lines are lines of equal angular velocity (values in degrees per second). (Adapted from Hornstein et al., 2000) (B) Responses to small moving bars (0.8° wide) of small-field small-target movement detectors (SF-STMDs) in the lobula of male and female hoverflies (Eristalis tenax). Bars of different angular heights (h) were moved at 50°/s through the receptive field centers of SF-STMDs (coincident with the frontal and lateral acute zones of females and the dorsofrontal bright zone of males). Male SF-STMDs were tuned to targets 1–3° high, whereas those of females were tuned to targets around 8° high. Cells in neither sex responded to broad-field gratings (G). (Drawn from data in Nordström and O’Carroll, 2009) (C,D) Neural images (from the frontodorsal eye region) in male (C) and female (D) houseflies reconstructed from photoreceptor responses to a dark 3.44°-wide target moving at 180°/s that show the instantaneous voltage responses of individual photoreceptors separated at angles appropriate for males (Δφ = 1.6°) and females (Δφ = 2.5°). The wider “love spot” facets of males and their superior “love spot” photoreceptor performance (see A) allow males to detect small moving targets with much greater spatial, temporal, and voltage contrast. Crosses indicate the current position of the target. (From Burton and Laughlin, 2003, with kind permission of the Company of Biologists)
incidentally lack this circuitry). One such group of cells—the small-field small-target movement detectors (or SF-STMDs)—are an excellent example (figure .B). Even though found in both sexes, the SF-STMDs of females have broad receptive fields covering large parts of the eyes, whereas those of males are confined to the part of the visual field viewed by the love spots. Both respond to small moving targets, but those of males prefer very much smaller targets than those of females—around –° tall,
compared to around ° (Nordström and O’Carroll, ). This difference no doubt reflects the sexual differences in spatial and temporal resolution already established in the retina.
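Tuning to angular target size translates directly into detection distances. For an object of linear height h viewed at distance d, the angular height is θ = 2·arctan(h/2d); a quick sketch with a hypothetical fly-sized body shows how rapidly a conspecific shrinks with range:

```python
import math

def angular_height_deg(height_m, distance_m):
    """Angular height of an object: θ = 2·arctan(h / 2d), in degrees."""
    return math.degrees(2 * math.atan(height_m / (2 * distance_m)))

body = 0.006  # a ~6 mm fly body, hypothetical
for d in (0.05, 0.10, 0.50):
    print(f"at {d * 100:4.0f} cm: {angular_height_deg(body, d):.2f}°")
```

On these hypothetical numbers, a 6 mm fly subtends only a few degrees at 10 cm, so a detector tuned to targets a few degrees tall is effectively tuned to conspecifics at chasing distances.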
Hunters and the Hunted

Cells similar to the SF-STMDs of flies are also found in the dragonfly brain, not for spotting females but for spotting prey. These insects have among the best-developed dorsal acute zones found in apposition eyes (figure .A,B), with values of Δφ falling to an incredibly low .° in Anax junius (Sherk, ). The facet diameters here are huge ( μm), and this, together with a yellow screening pigment between the ommatidia, which is transparent to the light that reconverts rhodopsin (Labhart and Nilsson, ), helps to ensure high sensitivity. The acute zone—clearly visible even to our own naked eye (figures .D and .A)—has its region of highest resolution distributed in an elongated dorsal strip (figure .B). And just as in male flies, this region eventually feeds its high resolution and sensitivity to large specialized cells in the brain, which behave as matched filters for fly-sized prey objects against a bright background (figure .C: see Olberg, , ; O’Carroll, ; see also chapter ). In the large Australian dragonfly Hemicordulia tau, the CSTMD cell, a large-field small-target-detecting cell from the lobula, has a response that is tuned to very small moving targets (figure .C), around square degree in size (or even less, as recent recordings indicate). When the target size increases, the response of CSTMD drops dramatically. Another set of neurons—eight pairs of target-selective descending neurons (TSDNs) located in the brain—use the information processed by cells like CSTMD to calculate a continuously updated mean vector to the target and alter the dragonfly’s flight path accordingly, thus allowing it to “lock onto” and eventually intercept the prey being pursued (Gonzalez-Bellido et al., ). The large flattened forward-facing eyes of praying mantises (Kral, ) and long-legged flies (figure .B), or the highly resolved principal eyes of jumping spiders (figure .), are further examples of invertebrate eyes that possess well-developed acute zones for hunting.
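The TSDN computation has not been reduced to a simple published algorithm, but the idea of steering along a continuously updated target vector can be caricatured as pure pursuit (real dragonflies actually intercept, aiming ahead of the target, so this is a deliberately simplified toy model, with all values hypothetical):

```python
import math

def steer_toward(pursuer_xy, target_xy, speed, dt):
    """One pure-pursuit step: move the pursuer along the current
    line-of-sight vector to the target (a toy model, not the TSDN circuit)."""
    dx = target_xy[0] - pursuer_xy[0]
    dy = target_xy[1] - pursuer_xy[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return pursuer_xy
    step = min(speed * dt, dist)
    return (pursuer_xy[0] + step * dx / dist,
            pursuer_xy[1] + step * dy / dist)

# Target flies sideways at 1 m/s; the faster pursuer (3 m/s) closes in.
pursuer, target = (0.0, 0.0), (0.0, 2.0)
for _ in range(200):
    pursuer = steer_toward(pursuer, target, speed=3.0, dt=0.01)
    target = (target[0] + 1.0 * 0.01, target[1])
# Residual separation after 2 s of simulated pursuit -- essentially zero.
print(math.hypot(target[0] - pursuer[0], target[1] - pursuer[1]))
```

Re-aiming at every time step is all that is needed for capture when the pursuer is faster; interception strategies converge sooner still.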
The pressures of hunting, or being hunted, are even detectable in the eyes of vertebrates. A classic example is seen in the placement and visual fields of the two eyes (figure .). Predators such as cats, foxes, and owls typically have their eyes placed frontally on the head, each eye sharing a very large part of the visual field of the other (Walls, ; Hughes, ). This large “binocular overlap”—which in cats occurs over ° of the frontal visual field—is an important requirement for stereopsis, the ability to discriminate depth, that is, to calculate the distance to objects in three-dimensional space. To be a successful predator it is probably highly beneficial to have good stereopsis in the frontal visual field in order to judge the distance to prey during pursuit and to adjust pursuit and interception tactics accordingly. The price paid for this benefit is the large part of the world that remains invisible—in cats, this region is half of all visual space, the ° of the world that lies directly behind them. Some predators compensate for this loss by being able to turn their heads. Many owls, for instance, can turn their heads by up to °! Rabbits and woodcock, in contrast, are classic prey animals, with laterally placed eyes possessing a panoramic view of the world with a minimal blind spot (Hughes, ). Usually rather sedentary, these animals scan the ° horizon around them, constantly on the lookout for cursorial predators. In rabbits this has resulted in a
Figure 6.11 The dorsal acute zone of dragonflies. (A) An unknown species of dragonfly with eyes possessing distinct dorsal acute zones with larger facet lenses. (Photo credit: Tanya Consaul: 123RF.com photo agency) (B) Map of spatial resolution in the eye of the dragonfly Anax junius (expressed as the number of ommatidial axes per square degree). Density of ommatidia (and thus spatial resolution) increases rapidly in the dorsal (D) visual field. V = ventral, L = lateral, A = anterior. (Redrawn from Land and Nilsson, 2012, with data from Sherk, 1978) (C) Responses (peristimulus time histograms) of CSTMD4, a large-field small-target-detecting cell from the lobula of the dragonfly Hemicordulia tau. In the part of the cell’s receptive field corresponding to its dorsal acute zone the cell is most sensitive to small, dark, and reasonably slow-moving targets (as a fly might appear during a highly fixated pursuit). It is insensitive to large bars, edges, and gratings. (From O’Carroll, 1993, reprinted with permission from Macmillan Publishers Ltd.)
remarkable matched filter for sampling the full arc of the horizon (figure .)—a so-called horizontal “visual streak” of high ganglion cell density (and high spatial resolution). Even in cats, which need most of their resolving power in the frontal direction, a weak horizontal streak is evident. This is not so surprising—their prey, after all, are constrained to the same horizon. Of course, the horizon—and the great variety of visually important events that are constrained there—have not only led to the evolution of visual streaks in predators
Figure 6.12 Visual fields of predators and prey. Cats (upper panels) have receptive fields typical of predators with frontally directed eyes, a large 180° blind spot to the rear, and large frontal binocular overlap for accurate distance estimations during prey pursuit. A temporal fovea and a mild visual streak (seen as a ganglion cell density map in cells/mm² on a flattened retinal mount) emphasize the importance of a frontally directed gaze coupled with enhanced prey detection around the horizon. Rabbits, in contrast (lower panels), are typical of prey animals, with about 340° of surround vision, a small rear blind spot, and modest frontal binocular overlap. Their ganglion cell maps (in thousands of cells/mm²) reveal a highly developed horizontal visual streak for optimal predator detection around the horizon. N = nasal, T = temporal. Scale bars = 10 mm. (All panels created using data in Hughes, 1977. Photo credits: Oleg Zhukov and Eric Isselee: 123RF.com photo agency)
and prey. In fact horizontal visual streaks are common in all types of animals that inhabit flat environments (Hughes, ). Indeed, the spatial layout of animal habitats—whether flat or highly complex—has profoundly influenced the way the eye’s sampling matrix has evolved.
The Physical Layout of Terrestrial Habitats

A horizontal visual streak is a common feature in the apposition compound eyes of insects and crustaceans inhabiting flat environments such as open featureless deserts (e.g., desert ants), water surfaces (e.g., water striders), and intertidal mud flats (e.g., fiddler crabs). The tall barrel-shaped eyes of fiddler crabs are particularly extreme in this respect (figure .; Zeil and Hemmi, ; Smolka and Hemmi, ).
Figure 6.13 Horizontal acute zones in compound eyes. (A) Male fiddler crab Uca vomeris, with the large right claw typical of its sex and two prominent eye stalks. (Photo courtesy of Jeff Wilson) (B) The tall cylindrical eye, which is up to 2.3 mm high and 1.4 mm wide. (Photo courtesy of Ajay Narendra) (C) The optical axes of all ommatidia in the compound eye (black dots), where the edge of the eye’s visual field is shown as a thick dark line. Insets show relevant behavioral tasks in specialized areas: individual recognition frontally (female carapace pattern), burrow defense laterally (male with large claw), and predator avoidance dorsally (terns). (From Smolka and Hemmi, 2009, with kind permission of the Company of Biologists)
Everything in the life of a fiddler crab is defined by the single dominating feature of its extremely flat habitat—the horizon. This fact manifests itself in the way the eyes are constructed. Not only does each eye sample the entire ° arc of the horizon, over one third of all ommatidia have their fields of view crammed into the narrow strip of space that corresponds to this single feature (figure .C). The horizon is thus sampled very finely as well as panoramically. Because the corneal surface is much flatter in the dorsal direction than the horizontal, the eye samples vertical space (Δφv) much more finely than horizontal space (Δφh). In the acute zone Δφv falls to around
Figure 6.14 Habitat-related ganglion cell topographies in terrestrial mammals. Ganglion cell densities are given in thousands of cells/mm² (all other conventions as in figure 6.12). (A) The red kangaroo Megaleia rufa, with a distinct horizontal visual streak. (B) Doria’s tree kangaroo Dendrolagus dorianus, with no obvious retinal specialization. (C) The two-toed sloth Choloepus didactylus, with a vertical visual streak. V = ventral; scale bar = 5 mm. (From Collin, 1999, with references therein, reprinted with kind permission from Springer Science+Business Media)
.°—at the same location Δφh is about ° larger. This optical arrangement breaks up the eye into three main zones—a dorsal zone, a highly resolved equatorial zone, and a ventral zone. Each of them has a special ecological significance, more or less functioning as a matched filter for a specific set of tasks. Anything seen above the line of the horizon in the dorsal zone is interpreted as a predator and induces the crab to make a rapid retreat to its burrow. Anything seen equatorially is interpreted as a conspecific, either a mate or a rival, whereas anything seen in the ventral zone implies the close proximity of a conspecific and the possibility for the discrimination of social signals (Smolka and Hemmi, ). Vertebrates also show a similar response to the horizon. Rabbits, as we have seen, have a horizontal visual streak of increased ganglion cell density (figure .). So do many fish that inhabit broad flat sandy seafloors, such as the red-throated emperor and several species of benthic sharks (Collin and Pettigrew, ; Lisney and Collin, ). Many animals inhabiting open grassy plains likewise have horizontal visual streaks (Hughes, ), such as various ungulates (e.g., cows and deer), elephants, and marsupials. The visual streak of the red kangaroo is particularly distinct (figure .A), but in tree kangaroos it is completely absent (figure .B). This large difference in retinal design is directly related to habitat, a fact that led Austin Hughes to develop his “terrain theory” of vision (Hughes, ): the physical layout of sampling stations in a retina reflects the physical structure of the animal’s normal habitat. 
In the case of the red kangaroo, which inhabits open plains that are dominated by the horizon, a horizontal visual streak was the evolutionary response to this particular “terrain.” For tree kangaroos, where the horizon is obscured by a complicated three-dimensional habitat full of trees and vines, and where no single feature dominates, the evolutionary response to this “terrain” has instead been a circularly concentric gradient of ganglion cell density. Interestingly, another tree dweller, the two-toed sloth, instead has a vertical visual streak (figure .C), which it possibly uses to align its head with the branch from which it hangs upside down.
Chapter 6
The Physical Layout of Aquatic Habitats
As we saw in chapter 2, the optical properties of water have important consequences for aquatic visual environments. With increasing depth, the color, intensity, polarization, and directionality of light change dramatically, and depending on water clarity—which may vary quite significantly from one body of water to another—the veiling haze of scattered space light may be more or less pronounced. All of these properties have had major effects on the evolution of spatial sampling in the eyes of aquatic animals. The same is true of the optical properties of the water surface itself. Because water has a higher refractive index than air, the entire 180° dome of the world above the water surface is compressed to a 97.6° cone of light underwater. Within this cone of light—called “Snel’s window”—all the features of the terrestrial world above can be found, including the flat water surface, which is located at the edge of the cone (chapter ; Walls, , p. ). By looking upward along the edge of the cone, an animal just below the water surface would have a periscopic view of the water surface above and could see anything, including prey, trapped on it. One such animal is the backswimmer Notonecta glauca, which stalks prey on the water surface while suspended underneath. The water surface is an important horizon for the backswimmer, and the ventral part of the eye possesses a well-developed visual streak (figure 6.15A) that watches the surface along the edge of Snel’s window (Schwind, ). This matched filtering does not stop at the optics of the eye: in the optic lobe there are cells that have their visual fields coincident with the visual streak and that respond maximally to prey-sized objects on the water surface (Schwind, ). But the water surface is not the backswimmer’s only horizon. Frontward, the backswimmer can also see the environment of the pond and any item of interest that might be located there.
There is a second visual streak that views this direction as well (Schwind, )! Remarkably, exactly the same type of visual adaptation has convergently evolved in the surface-feeding fish Aplocheilus lineatus (figure 6.15B), whose camera eyes also have two visual streaks, one viewing the boundary of Snel’s window, the other looking into the water horizontally. As we noted earlier, horizontal visual streaks can also be found in several species of benthic fish and sharks. As one descends deeper in the open ocean, the dramatic decline of light intensity and its increasingly dorsal direction of incidence have a profound effect on the nature of the visual scene encountered at different depths. In terrestrial habitats visual scenes are said to be “extended”; that is, light reaches the eye from many different directions at once. Nonetheless, terrestrial animals also experience sources of light that are much smaller in spatial extent. Stars, for instance, are point sources. So too are the flashes of bioluminescence produced by nocturnal fireflies. The tiny dark silhouette of a female fly passing across a bright sky will also be point-like to a male fly in hot pursuit. However, it is within the depths of the sea that this distinction between extended sources and point sources has its greatest influence on the visual systems of animals. In the shallower depths, where scattered daylight produces an even blue space light and where the sea floor may be clearly visible, visual scenes are extended in all directions. But at greater depths, where the space light is diminished, bioluminescent point sources also begin to appear, especially from below, where the space light is up to times dimmer than that coming from above. Upward and even laterally, the scene is still extended. But downward the scene begins to be dominated by point sources. At still deeper levels bioluminescent point sources can be seen in all directions. In these
Spatial Vision
[Figure 6.15 panels (A) and (B): interommatidial angle Δφ plotted against direction of view; details in the caption below.]
Figure 6.15 Vision through Snel’s window, where the 180° view of the world above the water surface is compressed by refraction into a 97.6°-wide cone below the water surface. (A) The backswimmer Notonecta glauca hangs upside down at the water surface, with the ommatidia in the ventral regions of its apposition eyes looking upward (positive directions of view in the left panel). At precisely the boundary of Snel’s window (red dashed lines), there is a sudden decrease in Δφ indicating enhanced spatial resolution for objects (prey) on the horizontal water surface above. In the horizontal direction below the water surface (0°: green dashed lines) Δφ is also minimal, indicating the presence of a second horizontal acute zone. (B) Two horizontal visual streaks are also found in the surface-feeding fish Aplocheilus lineatus, one for the horizontal water surface above (red dashed lines) and one for the horizontal underwater world around it (green dashed lines). (Adapted from Wehner, 1987, with kind permission from Springer Science+Business Media, with data and images from Schwind, 1980, and Munk, 1970. Photo credit: Eric Isselee: 123RF.com photo agency)
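The 97.6° width quoted in the caption follows from the law of refraction: rays grazing the water surface from above are refracted down to the critical angle, arcsin(1/n), so the whole sky fills a cone of twice that half-angle. A minimal sketch (1.33 is the usual round value for the refractive index of water; the caption's 97.6° evidently assumes a slightly different n):

```python
import math

def snels_window_width(n_water=1.33):
    """Full angular width (degrees) of Snel's window: the cone into which
    the 180-degree hemisphere above the water surface is compressed.
    Its half-angle is the critical angle arcsin(1 / n_water)."""
    critical_angle = math.asin(1.0 / n_water)  # grazing rays refract to this
    return 2.0 * math.degrees(critical_angle)

print(round(snels_window_width(), 1))  # → 97.5
```

Salinity and temperature nudge n by a percent or so, which is why published values for the window's width differ by a few tenths of a degree.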
mesopelagic depths between m and m, the scene is semiextended, becoming less extended and more point-like as the space light diminishes with increasing depth. Below m, where daylight no longer penetrates, the visual scene is entirely point-like in nature. Not surprisingly, the eyes of deep-sea animals have evolved in response to this changing nature of visual scenes with depth, being optimized for dim extended daylight, for the detection of bioluminescent point sources, or for both (Warrant, ; Warrant and Locket, ). As we saw in chapter , accurately localizing a narrow point of bioluminescence in the darkness of the deep sea requires a different eye design from that which is optimal for a dim extended scene. The image of a point source on the retina is, by definition, also a point of light (assuming that aberrations and diffraction do not blur the image too much). For a visual detector to collect all the
light from this point, its receptive field need not be any larger than the image itself. Receptors viewing large solid angles of space or performing spatial summation—so important for improved sensitivity to a dim extended scene (see chapter )—are useless for improving detection of a point source. In fact, one would predict that eyes built to see point sources of light against a dark background should have (1) a wide pupil to collect sufficient photons to detect the point source (equation .c) and (2) good spatial resolution to then accurately localize it. This is precisely what one sees in the eyes of deep-sea fish as one goes deeper in the ocean, from the dim extended world of the mesopelagic zone to the dark point-source world of the bathypelagic zone (figure 6.16A; Warrant, ; Warrant and Locket, ). Consider a plot of the smallest angular separation of ganglion cells as a function of depth for some species of deep-sea fish (figure 6.16A). A smaller separation of ganglion cells results in a greater anatomical resolution. Two trends are obvious. First, the eyes of fish on average become sharper with depth, with the eyes of bathypelagic fish being the sharpest, typically having the potential to resolve details subtending just minutes of arc. This is perfect for detecting point-source bioluminescence, the only light source at these depths. Second, the variation across species in ganglion cell separation (and thus resolution) is large in the brighter upper levels (figure 6.16A: error bars) but gradually declines with depth, with minimal variation in the bathypelagic zone (separation = . ± . arc minutes). The small variation in the bathypelagic zone is easy to understand: here the only light sources are point sources, and the best strategy involves little summation and high resolution. The large variation in the mesopelagic zone reflects its semiextended nature, with some species adapted to point sources, some to extended sources, and others to both (Warrant, ).
[Figure 6.16 panels (A) and (B): ganglion cell separation (arc min) plotted against depth bins 0–400, 400–800, 800–1200, and >1200 m, and a retinal map of Rouleina attrita; details in the caption below.]
Figure 6.16 Deep-sea eye structure and the changing nature of visual scenes with depth. (A) The finest separation of ganglion cells found in a survey of 20 species of deep-sea fish living at different depths, showing mean separation (in arc minutes) ± SD (with the number of included species indicated above each histogram bar). Deeper-living fish tend toward sharper retinas, a reflection of the increasing dominance of bioluminescent point-source illumination with depth. (B) Rouleina attrita has a retinal design that is common in bathypelagic fish, with deep convexiclivate temporal foveae containing densely packed ganglion cells (in thousands of cells/mm²). This design is ideal for localizing bioluminescent point sources in the frontal visual field. (From Warrant, 2000, with data from Wagner et al., 1998)
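The conversion behind plots like figure 6.16A, from ganglion-cell packing to anatomical resolution, can be sketched in a few lines. The cell density and lens radius below are illustrative stand-ins rather than measurements from any surveyed species; the focal length assumes Matthiessen's ratio (f ≈ 2.55 × lens radius), a common approximation for fish lenses:

```python
import math

def anatomical_resolution_arcmin(density_per_mm2, focal_length_mm):
    """Angular separation (arc minutes) of neighboring ganglion cells,
    assuming a square lattice, so linear spacing = 1 / sqrt(density)."""
    spacing_mm = 1.0 / math.sqrt(density_per_mm2)
    delta_phi_rad = spacing_mm / focal_length_mm  # small-angle approximation
    return math.degrees(delta_phi_rad) * 60.0

# Hypothetical fovea: 30,000 cells/mm^2 behind a lens of 2 mm radius.
focal_length = 2.55 * 2.0  # Matthiessen's ratio
print(round(anatomical_resolution_arcmin(30_000, focal_length), 2))  # → 3.89
```

Denser packing or a longer focal length both shrink this number, which is the sense in which the deeper-living fish of figure 6.16A have "sharper" retinas.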
The bathypelagic Rouleina attrita (figure 6.16B) well exemplifies the trend toward sharp vision at depth. Living near sea floors between . and . km below the sea surface (Wagner et al., ), Rouleina has sharp frontally directed deep convexiclivate foveae possessing up to , ganglion cells/mm², giving them a resolution of around minutes of arc. The same arguments apply to dorsally directed eyes that accurately detect the small dark silhouettes of other animals in the dim downwelling daylight. The large apposition eyes of hyperiid amphipods (Land, b, ) and the superposition eyes of euphausiids (Land et al., ) become increasingly dorsal in their field of view—and increasingly well resolved—with increasing depth. Near the surface, where scenes are as extended as they are on land, the evolutionary pressures on spatial vision are very similar to those in terrestrial habitats. What varies more rapidly with depth, however—even in the shallows—is the spectrum of natural daylight. This, it turns out, has also had a significant effect on the evolution of vision—particularly with regard to the perception of color, the topic to which we turn next.
7 Color Vision
From a small hole in a tree, a bee starts her morning journey. She flies over an emerald-green field looking for nectar and pollen from today’s flowering of yellow buttercups. Their positions and colors are clearly visible to the bee, as her compound eyes sample light from three regions in the spectrum: green, blue, and ultraviolet (UV). As she lands on each flower, her UV photoreceptors help pick out nectar-guides that lead her to the treasure within. At the other end of the field, a farmer is giving his horse some apples for a snack. The farmer also samples three regions of the spectrum, but these are shifted to longer wavelengths than the bee’s, responding best to blue, green, and yellow light. Both animals, the bee and the human, are trichromatic, but their evolutionary histories and ecological needs have given them different sets of color channels. Like his primate ancestors, the farmer is good at finding ripe fruits in his orchard for his horse. The eyes of the horse, on the other hand, contain only two color channels, blue and green, and it cannot easily see the red apples among the green leaves. Chapter 3 examines how photoreceptors and the visual pigments they contain are structured and how they absorb light of different wavelengths. Now we look at why animals have evolved different numbers, ranges, and placements of spectral channels in their color-vision systems and also examine the factors, such as water transmission, visual task, phylogeny, and activity patterns, that drive the evolution of such diverse modes of seeing color.
The Birds and the Bees: The Basics of Color Vision
Without color vision, it is still possible to notice a charging bull and climb up the apple tree shown in figure 7.1. Even in the absence of any color sense, the tree is still visible, as most of the information in natural scenes can be gained from achromatic cues alone. Color vision, however, gives an animal more information, allowing it to make quicker and more informed decisions. For this reason, most animals possess some degree of color sensitivity, mediated by to color channels (figure 7.2). As mentioned above, we, as Old World primates, have three color channels, and with these we can distinguish over two million colors. However, we have only one view
A
B
Figure 7.1 An apple tree image optimized for human color vision (A) using our three-color (RGB) printing process and with the color stripped from the image (B). Note that in B the red apples are harder to see, although other features, such as the tree against the sky, remain obvious. (Photo: Incrediblesnaps.com)
of the world among many. One of the objectives of this chapter is to disentangle our own experience of color from that of other animals to provide an objective measure of what color vision is and how evolution has molded its variety of forms.
A Quick Color Vision Primer
The color of an object perceived by a visual system is defined by the spectrum of light reflected from the object and the absorption spectra of the photoreceptors viewing it. The feathers of a parrot (see figure 7.3A) reflect quite different light spectra at various places on the parrot’s body, resulting in the spectacularly colorful plumage we are all familiar with. For any one of these colors, our three classes of cone photoreceptors differentially absorb the light according to their respective absorption spectra—blue feathers strongly stimulate our short-wavelength-sensitive (S) cones, whereas our medium- (M) and long-wavelength-sensitive (L) cones are much less strongly stimulated. The opposite is true of deeply red feathers, where the responses of L cones would be greater than those of M cones and considerably greater than those of S cones. Although this ratio of excitations in the three cone types is essential for our perception of color, it is not the whole story. In fact, these signals provide merely the input to a complicated neural circuit of retinal cells that add and subtract the cone signals before color information leaves the retina for further processing in the brain. This neural circuitry is still a long way from being fully understood. However, it turns out that the addition of cone signals—particularly from the M and L cones (M+L)—probably provides the basis for luminance vision. This luminance channel is subserved by the wider dendritic fields of the parasol ganglion cells (see chapter ) that receive the summed signals of M and L cones from diffuse bipolar cells. These ganglion cells provide the origins of the magnocellular division of higher visual processing.
Like those of all ganglion cells, the receptive fields of the parasol ganglion cells have a circular center and a concentric doughnut-shaped surround, either of which can be ON or OFF in their receptive field properties (meaning that light placed on the appropriate part of the receptive field either excites [ON] or inhibits [OFF] the response of the cell). Thus, an ON-center parasol ganglion cell has an excitatory
Chapter 7
[Figure 7.2 panels A–N: relative sensitivity (0–1.0) plotted against wavelength (300–700 nm), invertebrates in the left column and vertebrates in the right; species listed in the caption below.]
Figure 7.2 Varied spectral sensitivities of vertebrates and invertebrates. (A) Cuttlefish, Sepia lessoniana. (B) Dolphin, Tursiops truncatus, including the rod spectral sensitivity (dotted line), as this may be involved in color vision. (C) Shrimp, Systellaspis debilis. (D) Dog, Canis familiaris. (E) Honeybee, Apis mellifera. (F) Human, Homo sapiens. (G) Spider, Cupiennius salei. (H) Damselfish, Abudefduf abdominalis. (I) Giant clam, Tridacna maxima. (J) Triggerfish, Sufflamen bursa. (K) Butterfly, Papilio xuthus, showing the five spectrally distinct photoreceptors out of eight subclasses. (L) Lizard, Platysaurus broadleyi. (M) Stomatopod, Neogonodactylus oerstedii. (N) Bird (blue tit, Cyanistes caeruleus). (A–D) Spectral sensitivities of visual pigment alone with no short-wave removal from ocular media or other filters. (Modified from Marshall et al., 1999; Losey et al., 2003; Kelber, 2006; Cronin and Marshall, 1989, and data therein; Fleishman et al., 2011; Chung and Marshall, unpublished data [A])
center and an inhibitory surround and responds best to a bright spot (that fills the receptive field center) on a dark background. The opposite is true of an OFF-center cell. In addition to this luminous contrast channel there is an analogous color contrast channel subserved by the smaller dendritic fields of the midget and small bistratified ganglion cells (which together provide the origins of the parvocellular division of higher visual processing). These two ganglion cell types instead receive cone signals that have been subtracted from each other and thereby create so-called “opponent” processing pathways. The center-surround receptive fields of these ganglion cells are stimulated by two “opposing” colors. The midget ganglion cells, for instance, have red-green opponency: a red ON-center (+L–M) cell has a receptive field center that is excited by red light, but inhibited by green light. The surround, in contrast, is excited by green light, but inhibited by red light. Thus, such a cell would be a perfect red apple detector: when the image of a red apple fills the receptive field center and its green leafy background fills the surround, the cell would react vigorously. A yellow-blue opponency is instead provided by the small bistratified ganglion cells. The center of a blue ON-center (+S–L–M) ganglion cell is excited by blue light but inhibited by signals from both L and M cones (yellow light). Again, the surround has the opposite properties. Such opponent receptive field structures are also necessary for color constancy. If the eye suddenly encounters a change in illumination spectrum (for instance if the animal suddenly enters a forest—see chapter ), then both the centers and the surrounds of opponent receptive fields will be filled with this new light. Because this light will cause equal amounts of excitation and inhibition in the two receptive field regions, the sudden change will be canceled at the ganglion cell level and remain unnoticed. 
For instance, in the red ON-center (+L–M) cell mentioned above, the greener spectrum of a forest will excite its surround but will also inhibit its center, canceling the new potential color bias—our red apple will still be seen as red as ever! Although the description above is based on our knowledge of primate trichromats like ourselves, opponent mechanisms are also the basis of color vision in the rest of the animal kingdom. Color opponency is found in vertebrates and invertebrates, as well as in dichromats, tetrachromats, and potential pentachromats.
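The opponent arithmetic described above can be made concrete with a toy +L–M cell. The cone signals here are arbitrary numbers chosen for illustration, not physiological data:

```python
def red_on_center_response(l_center, m_center, l_surround, m_surround):
    """Toy red ON-center (+L-M) opponent cell: the center is excited by
    L-cone signals and inhibited by M-cone signals; the surround has the
    opposite signs. Inputs and output are in arbitrary units."""
    return (l_center - m_center) + (m_surround - l_surround)

# A red apple filling the center, with green leaves filling the surround,
# drives the cell strongly:
print(red_on_center_response(0.9, 0.2, 0.3, 0.8))

# A uniform shift to greener illumination fills center and surround alike,
# so excitation and inhibition cancel (the color-constancy argument above):
print(red_on_center_response(0.3, 0.8, 0.3, 0.8))  # ≈ 0
```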
Color Spaces
One issue with color vision is how to quantify and analyze it. Unlike brightness and contrast, which can be denoted by single numbers (see chapter ), color, because it compares multiple channels, is by definition multidimensional. Several color spaces have been developed to address this issue. The most straightforward is what is known as the Maxwell triangle. Consider an animal with three color channels, for example, our honeybee. Using the methods outlined in chapters and , one can calculate the photon catch of each channel. Usually one simply determines the photon catch for a single receptor cell of each class, but the exact number of cells considered is unimportant because it is the relative photon catch among the three channels that matters, not the absolute values. Then, using simple formulas (see Kelber et al., ) one can plot this on a triangular graph where each vertex is labeled with a specific color channel (figure 7.3). The closer a point (often called a locus) is to a given vertex of the triangle, the larger the fraction of the total catch (summed over all three channels) contributed by that channel. A point in the center of the triangle has equal photon catches for
[Figure 7.3 panels A–F: reflectance spectra (300–700 nm) in A–D; Maxwell triangles with S, M, and L vertices in E and F; details in the caption below.]
Figure 7.3 The colors of natural objects and their perception by the color-vision systems of animals. The reflectance spectra of (A) Eclectus parrot plumage (Eclectus roratus), (B) angelfish (Pygoplites diacanthus), (C) assorted European flowers, and (D) freshwater cichlid (Metriaclima zebra) indicate that the colors of objects vary widely both between and within objects. The colors of M. zebra (D) are plotted (and colored according to graph and approximate colors in life) in a Maxwell triangle, as seen by the human (E) and cichlid (F) visual systems. Black lines in each triangle plot the monochromatic loci of pure wavelengths from 300 to 700 nm. S, M, and L stand for short-, middle-, and long-wavelength sensitivities that are approximately blue, green, and red in humans and UV, blue, and green in the cichlid.
all three channels and is said to be spectrally neutral (i.e., gray). One can also construct analogous spaces for dichromatic and tetrachromatic visual systems, the shape being a line in the former case and a tetrahedron in the latter. Regardless of dimension, a useful feature of this color space is that the locus of a mixture of two colors lies on a straight line between the loci of the two colors.
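As a sketch of the bookkeeping involved, relative photon catches and a Maxwell-triangle locus can be computed as follows. The five-band spectra are invented for illustration, and the particular embedding (vertices S, M, and L at fixed corners of an equilateral triangle) is only one of several equivalent conventions:

```python
import math

def photon_catch(reflectance, illuminant, sensitivity):
    """Relative photon catch: sum over wavelength bands of R * I * S."""
    return sum(r * i * s for r, i, s in zip(reflectance, illuminant, sensitivity))

def maxwell_coords(q_s, q_m, q_l):
    """Map relative catches to a point in an equilateral triangle with
    vertices S = (0, 0), M = (1, 0), L = (0.5, sqrt(3)/2). Only the ratios
    of the catches matter, so the inputs need not be normalized."""
    total = q_s + q_m + q_l
    fm, fl = q_m / total, q_l / total
    return fm + 0.5 * fl, fl * math.sqrt(3) / 2

# Toy 5-band spectra (equal-width bins); numbers are illustrative only.
illuminant = [1.0, 1.0, 1.0, 1.0, 1.0]       # flat "white" light
leaf       = [0.05, 0.10, 0.60, 0.20, 0.10]  # green-reflecting surface
s_sens     = [0.9, 0.5, 0.1, 0.0, 0.0]
m_sens     = [0.0, 0.3, 0.9, 0.3, 0.0]
l_sens     = [0.0, 0.0, 0.3, 0.9, 0.5]

q = [photon_catch(leaf, illuminant, s) for s in (s_sens, m_sens, l_sens)]
print(maxwell_coords(*q))  # locus pulled toward the M vertex for this surface
```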
There are two important things to note about this color space. First, it considers only color; the intensity of the signal has been factored out. In other words, it gives information about hue (position in the triangle) and saturation (distance from the closest vertex), but not brightness. Second, and this is critical, the space is not perceptually uniform. In other words, just because two pairs of colors are equally far apart when plotted on a Maxwell triangle does not mean that they are equally discriminable. This is particularly important as one approaches any of the vertices of the triangle. This issue was addressed by Vorobyev and Osorio (), who developed a metric for color distance that incorporates the fact that some pairs of colors are more discriminable than others. The number generated gives one a better sense of true color distance but unfortunately is difficult to visualize because the underlying space is many-dimensional, even for trichromatic vision. An excellent review of the use of color spaces, their calculation, and their limitations can be found in Kelber et al. ().
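For a trichromat, the receptor-noise-limited distance of Vorobyev and Osorio takes (in its log-contrast form) roughly the shape below. The Weber fractions e are illustrative; in real applications they are derived from the noise and relative abundance of each receptor class:

```python
import math

def rnl_distance(qa, qb, e):
    """Receptor-noise-limited color distance between stimuli A and B for a
    trichromat. qa, qb: photon catches in the three channels; e: channel
    noise (Weber fractions). Result is in just-noticeable differences."""
    df = [math.log(a / b) for a, b in zip(qa, qb)]  # log catch contrasts
    e1, e2, e3 = e
    num = (e1 * (df[2] - df[1])) ** 2 \
        + (e2 * (df[2] - df[0])) ** 2 \
        + (e3 * (df[0] - df[1])) ** 2
    den = (e1 * e2) ** 2 + (e1 * e3) ** 2 + (e2 * e3) ** 2
    return math.sqrt(num / den)

# Identical stimuli are 0 JNDs apart; a modest spectral shift is several JNDs.
print(rnl_distance([0.2, 0.5, 0.8], [0.2, 0.5, 0.8], [0.1, 0.07, 0.07]))   # → 0.0
print(rnl_distance([0.2, 0.5, 0.8], [0.25, 0.5, 0.7], [0.1, 0.07, 0.07]))
```

By convention, a distance above about 1 JND counts as discriminable under the assumed viewing conditions.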
Grades of Color Vision, Behavior, and Ecology
Four levels of color vision have been proposed, based mainly on behavior. These are, in increasing order of complexity, (1) color taxes or light-environment seeking, (2) wavelength-specific behaviors directed toward objects, (3) color learning through neural representation of color, and (4) color appearance, including color categorization (Kelber and Osorio, ; Kelber et al., ). This ethological ranking of color vision types often implies an increasing complexity of spectral channels and their neural processing. However, as we see below, some animals perform simple color tasks with a large number of spectral sensitivities, and others perform complex tasks with only two channels. One reason for this is that not all color channels are necessarily compared at all levels of processing. Thus, in this chapter, we use di-, tri-, and tetrachromacy to refer both to animals that possess the corresponding number of spectral channels and to behavioral outcomes that require that number of underlying spectral channels. We also use “color vision” in its broadest sense to cover all behaviors relating to color. In a broad survey of color vision systems within the animal kingdom, it is notable that trichromacy is common. One likely reason for this is that three spectral sensitivities permit the three basic variables of color to be disambiguated. These are hue (the spectral location), saturation (the spectral width), and intensity (the spectral height). A dichromatic eye cannot unambiguously separate these three dimensions, and for many colors such an eye often confuses hue and saturation. A trichromatic system has far fewer confusion points. Tetrachromacy is also common, especially among animals that are sensitive to light from to nm. Thus, we first explore trichromacy and tetrachromacy, using bees and birds as respective models, and then go on to examine other animals and visual systems that do things differently.
Noise and Its Effects on Channel Number
Barlow () examined why so few animals go beyond trichromacy by analyzing the radiance spectra of natural objects and the spectral shapes and spacing of photoreceptor sensitivities. Barlow asked a simple question: “If I know the properties of the colors in the world that an animal has to deal with, can I predict the number of spectral sensitivities needed to distinguish those colors?” Depending on channel width and spacing,
his analysis predicted between three and six channels. Barlow, and those who have followed this approach (notably Maloney, , and more recently Vorobyev, a,b) argue that, above a certain number, it is not worth having more spectral channels. Another factor to consider is noise (see chapter ). Like other neurons, photoreceptors have signal-to-noise characteristics that determine how efficiently they transmit information. For example, in chapter we noted that the upper limit to the spectral range of visual pigments seems to be set by their thermal noise. In addition, light itself is noisy at dim levels because the number of photons that arrive at any given time varies. Simple linear models can be used to predict the behavioral performance of a color vision system based on knowledge of photoreceptor spectral sensitivities, photoreceptor number, and noise characteristics alone (Vorobyev and Osorio, ). This approach is useful in visual ecology because it allows parameters such as illumination levels and spectral reflectances to be used to examine the number of spectral channels needed in any particular circumstance. For example, such analysis shows that in dim light conditions, a dichromatic animal may discriminate more colors than a trichromat. This could explain why spider monkeys and a few other species of New World monkeys have an odd mixture of dichromatic and trichromatic individuals in the same population (often, in the same family group) (Jacobs, ). Forest conditions can be light limited, but not all foraging goes on under the canopy, so it is good to have friends to help find food when the light changes. 
For a color sense to be reliable, the visual system should see a given color as the same under different conditions of illumination, a property called “color constancy.” Color constancy ensures that a banana is always perceived as being yellow, irrespective of whether it is seen at noon, when the illumination spectrum of daylight is quite white, or at sunset, when the spectrum is red-shifted. Analyses using noise-based models indicate that in bright light, and with the spectral range restricted from to nm, three receptors are enough to endow a visual system with color constancy. If the sampling range is extended to to nm, four receptors are needed. Beyond these, increasing the number of spectral channels makes little difference to color constancy. It should be no surprise, then, that these two optimal solutions are often observed (figure .). As might be expected, further analyses have shown that evenly spread spectral sensitivities expand the color space and may permit better discrimination of colors within it (figure .). Thus, bees (figure 7.2E) possess a nearly optimal set of photoreceptors, allowing them potentially to discriminate four times as many colors as humans. Indeed, a large number of insects, from diverse habitats and lifestyles, have UV, blue, and green channels like those of bees (figure 7.4). Thus, with the usual few exceptions that biology delivers, the individual needs or visual ecology of a given species seems to have had little influence on this basic plan, at least in insects (Briscoe and Chittka, ).
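One classical mechanism that yields approximate color constancy is von Kries adaptation, in which each channel's catch is rescaled by its catch for a white reference viewed under the same illuminant. This is a textbook sketch with invented numbers, not the specific noise-based model analyzed in the studies cited above; the "sunset" catches were constructed by scaling each channel of the "noon" values independently, which is exactly the case von Kries scaling corrects:

```python
def von_kries(catches_object, catches_white):
    """von Kries adaptation: rescale each channel by its response to a white
    reference under the same illuminant. If a change of illuminant scales
    each channel independently, the rescaled signal is unchanged."""
    return [obj / white for obj, white in zip(catches_object, catches_white)]

# Invented S, M, L catches for a 'banana' and a white card under noon
# daylight, and under a red-shifted sunset (each channel scaled differently).
noon_banana,   noon_white   = [0.2, 0.6, 0.8], [0.5, 0.5, 0.5]
sunset_banana, sunset_white = [0.1, 0.45, 0.88], [0.25, 0.375, 0.55]

print(von_kries(noon_banana, noon_white))     # same triplet under both lights
print(von_kries(sunset_banana, sunset_white))
```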
For example, parrots, most passerines, and gulls are UVS birds, whereas ducks, raptors, pigeons, and most seabirds other than gulls are VS (figure 7.5). Aside from this dichotomy there is remarkably little variation in spectral sensitivity among bird species, but what is striking about this group is the presence of intraocular filters that narrow the sensitivity range of each channel.
[Figure 7.4: a histogram of number of photoreceptors against λmax (300–600 nm), with S, M, and L spectral ranges indicated.]
Figure 7.4 Histogram of the distribution of known spectral sensitivity peaks in 40 hymenopteran species. (From Peitsch et al., 1992)
The intraocular filters found in bird eyes (and in reptiles, lungfish, stomatopods, and certain butterflies) take the form of colored oil droplets that narrow the spectral sensitivities of their respective photoreceptors by removing the short-wavelength limbs of their visual pigment sensitivities (figures . and ., and see chapter , figure .). Such filtering generally causes a severe sensitivity reduction for the photoreceptor, decreasing its signal-to-noise ratio, and therefore is found only in diurnal animals. The advantage of filtering is an expansion of the animal’s color space (figure 7.6), and such filtering can increase the number of distinguishable colors by around two times in a trichromat and by almost three times in a tetrachromatic system (Govardovskii, ; Vorobyev, b; Vorobyev and Osorio, ; Vorobyev, ). To summarize, bee and bird color vision appear to be near-optimal solutions. The former works in the nm to nm range, and the latter, with the help of filters, functions from nm to nm (Hart, ). We now consider animals that depart from these two plans.
Other Ecological and Behavioral Solutions in Color Vision
Specific Tasks, the Achromatic–Chromatic Compromise, and Long-Wavelength Sensitivities
The previous section noted that color vision seems to perform best with well-spaced spectral sensitivities. Why, then, do receptor classes in primates, fish, and invertebrates often have spectral sensitivities with irregular spacings (figure .)? It turns
[Figure 7.5 panels A–F: normalized sensitivity versus wavelength (300–700 nm) in A, B, D, and E; histograms of peak sensitivities in C and F.]
Figure 7.5 The spectral sensitivities of tetrachromatic birds and fish. (A) The violet-sensitive (VS) peafowl (Pavo cristatus). (B) The ultraviolet-sensitive (UVS) blue tit (Cyanistes caeruleus). Single cones, black curves; double/twin (DT) cones, dotted curve. (C) Histogram of bird peak spectral sensitivities calculated from microspectrophotometry of visual pigment and oil droplets. (Hart and Vorobyev, 2005) (D) Juvenile lungfish (Neoceratodus forsteri) spectral sensitivities, also calculated from visual pigments and oil droplets. (Hart et al., 2008) (E) Spectral sensitivities of goldfish (Carassius auratus), which, like most fish, lack oil droplet filtering. (Douglas, 2001) Several freshwater fish spectral sensitivities may come close to the bird tetrachromacy optimum. (F) Histograms of all currently known fish cone sensitivities from marine (n = 598, average 494 nm) and freshwater (n = 144, average 511 nm) habitats. (Data courtesy of Ellis Loew)
Color Vision
B
Without oil droplets Normalized sensitivity
Normalized sensitivity
A 1.0
0.5
0 375
475 575 675 Wavelength (nm)
C
775
With oil droplets
1.0
0.5
0 375
475 575 675 Wavelength (nm)
D
M
775
M 600 550
550 600 500 450
S
650 700
L
650 450
S
700
L
Figure 7.6 Spectral sensitivities and resulting trichromatic color space (figure 7.3) of adult lungfish modeled without (A) and with (B) oil droplets (see figures 3.11 and 3.12). Monochromatic locus in each chromaticity diagram (C and D, black line) encloses the area of possible discriminable colors (light gray). The number of possible colors that can be distinguished is dramatically expanded with oil droplet filtering (D). Colors from the lungfish environment, such as weeds or logs, and lungfish colors are shown as black dots. Note how distances between these increase with oil droplet filtering. (Modified from Marshall et al., 2011)
out that there are specific color-based tasks that can make the resulting overlap of multiple channels adaptive. For instance, the heavily overlapping sensitivities of red and green cones in many primates may have evolved to optimize both specific colordriven tasks, such as fruit finding against leaves, and spatial aspects of luminance vision (Nagle and Osorio, ). Primates combine signals from their red and green cones for luminance (achromatic) vision. Many other animals, including insects, primates, and fish, use longwavelength photoreceptors both in achromatic vision and chromatic vision, although the details of how achromatic and color information is encoded by photoreceptors vary among species. In dichromatic mammals the longer-wavelength sensitivity provides luminance as well as color information. Behavioral evidence also indicates that bees locate flowers primarily using luminance vision, which appears to be mediated by the long-wavelength channel. Butterflies possess different types of ommatidia and assign luminance vision to the medium-wavelength channel. Although trichromatic mammals use M and L receptors for both achromatic and color vision, flies, birds, and some reptiles use separate sets of photoreceptors for the two tasks. In birds and some fish species, for instance, a subpopulation of cones, called “double” or “twin” (DT) cones, have spectral sensitivities that completely or
partially overlap with those of single cones (figures . and .). Here, it is reasonable to assume that double cones are used for tasks other than color vision. Chapter 3 describes rod and cone photoreceptors in more detail. In many animals DT cone sensitivities are well placed to optimize photon capture (figure .B). On land, photon capture is optimized when sensitivity peaks lie in a range around – nm. Underwater, the sensitivity peak varies to match specific water transmission characteristics (figures .B and .) but is often close to nm. Although it is tempting to categorize DT cones as a photopic luminance channel, this needs species-by-species examination. Fish may possess only one type of single cone (S) plus one double cone, which together contribute one M and one L channel (figure .). For instance, the double cones of the reef triggerfish Rhinecanthus aculeatus are used for color tasks and possess an M–L opponency (Pignatelli et al., ). DT cones are the most frequent cone type in many vertebrates, sometimes comprising over % of all cones, although they are absent in eutherian mammals and sharks. Their high proportion in the retina supports the idea that they function in achromatic spatial vision. As just mentioned, R. aculeatus may use its DT cone (M and L) sensitivities for both spectral and luminance tasks, depending on how they are wired together downstream of the photoreceptors. In addition, species differences exist on the reef (see figures . and .), making the function of DT cones in fish more variable than seems to be the case in birds. It is interesting that in many reef fish the M and L sensitivities overlap considerably (figures . and .), like those of primates, suggesting a luminance/color-vision compromise. As even this brief survey shows, animals use their visual channels for both achromatic and chromatic information in a number of ways.
Because of this, many animals have spectral sensitivities that depart from the evenly spaced plan seen in birds and bees.
Multiple Spectral Channels for Simple Color Taxis

Other animals do have evenly spaced color channels but do not appear to use the classic opponency system described previously. The water flea Daphnia, for example, packs four spectral receptor classes into a compound eye containing only 22 ommatidia. Although these diminutive crustaceans are not known for colorful ornamentation or habitats, Daphnia's evenly spaced spectral sensitivities (peaking at nm, nm, nm, and nm) might suggest a complex color analyzer (Smith and Macagno, ). In fact, however, each spectral channel (or simple combinations of them) is dedicated to a specific behavioral task. It is possible that in Daphnia and other "simple" organisms, photoreceptors directly drive motor output or behavior without further neuronal analysis. A swarm of Daphnia will migrate up and down in different wavelength regimes, for example, to avoid damaging UV wavelengths (Leech and Williamson, ). They also swim toward yellowish water, perhaps because it is likely to be rich in algae. When first observed, these behaviors were charmingly labeled "color dances"; they are responses to particular color-dominated areas of the habitat (Smith and Baylor, ). Color-guided phototaxis is found in other animals but is best known in insects. Tsetse flies, for example, prefer blue light, possibly an attempt to find cooler shadows, which are dominated by blue light from the sky. The whitefly Trialeurodes vaporariorum is attracted to UV light, a response that induces migratory behavior.
Innate or Wavelength-Specific Behaviors with More than Four Spectral Sensitivities

Other animals have even more channels but still do not appear to have what we would call color vision. For example, the small white butterfly, Pieris rapae, possesses seven different spectral sensitivities distributed among eight photoreceptor types. It exhibits stereotyped color behaviors: an open-space reaction to violet/UV, a feeding reaction to blue, and an egg-laying reaction to green. Behaviors such as these are often labeled "wavelength-specific" or innate because, like phototaxes, they appear to be triggered by a specific waveband and involve no wavelength discrimination or learning. In other words, the animal is not judging the difference between colors, just reacting to a particular set point in the spectrum. Wavelength-specific behaviors differ from taxes in that they relate to specific objects, such as a leaf or wing, rather than to colored environments. They are generally associated with more acute eyes that can see smaller objects. However, not all seven color channels of P. rapae subserve particular behaviors: P. rapae and other butterflies are also able to learn color discrimination tasks and use another subset of spectral sensitivities to do this. At the pinnacle of terrestrial color vision, Japanese swallowtail butterflies, Papilio xuthus, possess eight spectral channels based on five different opsins (λmax = , , , , and nm) that are matched with different colored filters (Kinoshita et al., ; figure .). However, they do not appear to have eight-dimensional color vision. In fact, nectar sources such as flowers appear to be chosen by papilionids via a tetrachromatic mechanism. The remaining four channels are used for designated innate tasks, a finding that could explain the proliferation of color channels in some other animals.
And as described later in this chapter, a subdivision of the eye into different zones or directions of view for specific tasks also plays a role in this diversity.
Coevolved Colors and Spectral Sensitivities: Examples from Honeybees and Fish

Another factor that may influence the number and placement of color channels is coadaptation with color signals. To illustrate this, we first return to bees and then to fish. Flowers that are easy to detect benefit from efficient pollen dispersal, while bees (and other nectar feeders) need to make rapid and reliable decisions when determining which flowers are the best source of nectar and pollen. This mutual need quickly leads to flower preference in bees and possible coevolution, or at least coadaptation (figure .). Many natural colors have reflectance spectra that are peak-shaped (green, for example) or step-shaped (yellow, orange, and red, for example) or combinations of these (see figure .). If one compares the spectral radiances of flowers to the areas of best discrimination in bee color vision, a clear correlation is visible, with color steps falling between photoreceptors, in areas of maximal spectral discrimination (figure 7.7; Vorobyev a,b). This has been suggested as an example of coadaptation between signal and receiver (see Chittka and Menzel, ; Vorobyev and Menzel, ). However, the fossil record indicates that bees may pre-date the flowering plants (Chittka, ), and it is thus possible that flowers did not influence the evolution of bee color vision, but rather that flowers adapted their colors to an existing set of spectral sensitivities in bees (and other insects: Briscoe and Chittka, ). As a result, although we may not be certain what photoreceptor set ancient insects had, it may be that hymenopteran color vision has driven the evolution of flower color (Dyer et al., ). Similar relationships between color spectra and color vision exist in fish (Marshall et al., c; figure 7.7B), and again it is not certain how the correlation evolved. In both cases the fact that there is not a single spectral sensitivity set but a spread of them, and also a spread in the position of color steps, suggests caution in assigning spectral sensitivities to specific color tasks. Where it involves discrimination, color vision in most animals has evolved for a number of different tasks (Vorobyev and Menzel, ; Marshall et al., c).

Figure 7.7 Bee color vision and flower colors; reef fish color vision and fish colors. (A) Histogram of 180 flower colors, showing where the reflectance changes rapidly from low to high in a step function. Shaded blocks demark the λmax positions of UV, blue, and green photoreceptors from 40 hymenopteran species (see figure 7.4). (Data from Chittka and Menzel, 1992) (B) Histogram of fish color step functions and single-cone spectral sensitivities (coloring as in A). Dark-shaded block shows the spectral distribution of double/twin cones. Lines show reef habitat illumination at 0.5 m (solid line) and 5 m (dotted line). Note how DT cones are optimized for light capture in the dimmer, deeper habitat.
Ultraviolet and Far-Red Channels

Animals from most taxa (particularly insects, birds, and fish) have UV photoreceptors, so it is actually exceptional not to sample this spectral region. Those that lack UV sensitivity either do not possess short-wavelength receptors or remove UV light via filtering compounds in the lens, doing so to minimize photodamage and longitudinal chromatic aberration (Siebeck and Marshall ; Leech and Johnsen, ). At the other end of the spectrum, thermal noise limits the ability of visual pigments to peak at long wavelengths (chapter ). Several animal groups, including birds, insects, reptiles, and crustaceans, possess red filters that move peak visual sensitivities close to nm (Cronin and Marshall, ; Douglas and Marshall, ; Kondrachev, ). Lungfish use red oil droplets to create a cone class with a spectral sensitivity maximum at nm, the longest vertebrate sensitivity peak yet described (Hart et al., ; Marshall et al., ). Not surprisingly, the benefit of spectrally extending color vision is almost always to gain extra contrast or to increase the discriminability of special stimuli, such as objects or scenes that contain UV, red, or far-red information. In certain cases these channels may simply provide a way to isolate spectral stimuli for wavelength-specific behavior (Douglas et al., ; Kelber, ). In aquatic environments UV signals may be truly covert: they are detectable to eyes with UV vision at close range, but because of the strong attenuation of short-wavelength light (see chapter ), they are quickly obscured (Bennett et al., ; Andersson and Amundsen, ; Cuthill et al., ; Hausmann et al., ). Interestingly, smaller fish, generally interested in stimuli at closer ranges, let UV through to the retina and often have UV-specific photoreceptors, whereas larger predatory fish often have UV-absorbing ocular media. Certain smaller fish, such as swordtails and some damselfish, show behavioral responses to conspecific UV markings (figure 7.8). Many other animals, including mammals, have ultraviolet photoreceptors. In general this additional channel simply expands the color space.
However, many cases have been documented in which UV vision is involved in discriminating and interpreting species-specific ultraviolet signals, often for sexual selection. These instances are often treated as something special (because humans cannot see them), but they are no different from any other set of animal colors, a topic that is addressed in chapter .
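The short working range of underwater UV signals is just Beer–Lambert attenuation at work. In the sketch below the two attenuation coefficients are illustrative guesses, not measured values for any real water type:

```python
import math

def surviving_fraction(c, d):
    # Beer-Lambert: fraction of signal radiance surviving a path of
    # d metres at beam attenuation coefficient c (per metre).
    return math.exp(-c * d)

# Assumed coefficients: UV attenuates far faster than green in most waters.
c_uv, c_green = 0.35, 0.08

# Over the ~0.5 m of a close courtship display the UV signal is barely
# degraded, but at a predator's viewing distance of 5 m it is largely gone,
# while a green signal at 5 m still carries most of its radiance.
close_uv = surviving_fraction(c_uv, 0.5)
far_uv = surviving_fraction(c_uv, 5.0)
far_green = surviving_fraction(c_green, 5.0)
```

With these assumed numbers the UV signal is private to nearby viewers while remaining invisible at range, which is the "covert channel" argument in quantitative form.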
Color Vision in Aquatic Animals: Fish and Crustaceans

Water Color and Spectral Sensitivity in Fish

The most diverse spectral sensitivities among the vertebrates are found in fish. Correlating fish color vision with the spectral characteristics of the underwater light field has been one of the cornerstones of visual ecology. Lythgoe (, ), Lythgoe and Partridge (, ), Loew and Lythgoe (, ), McFarland (), Loew and McFarland (), McFarland and Munz (), Bowmaker (), Bowmaker and Loew (), Douglas (), Douglas and Partridge (), Marshall and Vorobyev (), and Marshall et al. (b,c) have all tackled this surprisingly vexing problem. Fish generally possess one to four color channels, with some also using double or twin cones. Relatively little sharpening of fish spectral sensitivities by filtering occurs, although the lens may filter out the shorter wavelengths of light (figure .). Compared to birds, both marine and freshwater fish have diverse color channels, for reasons that often elude even the most careful examination.

Figure 7.8 UV patterns and contrast made visible to the human visual system by photography through UV (350–400 nm) filters. (A,B) The ambon damselfish Pomacentrus amboinensis, showing body and facial markings visible only with extended UV sensitivity. (Siebeck, 2004) (C,D) The aster Lasthenia sp. (Photo, Alex Holovachov) (E,F) A reef scene photographed with a green (500-nm peak) filter (E) and a UV filter (F). Note the added contrast against the bright background in F. (From Losey et al., 1999)

Lythgoe and others have compared the cone sensitivities of reef, shallow coastal, deep coastal, and freshwater fish. All show an overall shift toward longer wavelengths as they move to water that is either shallower or more laden with short-wavelength-absorbing organic substances (figure .). In deeper species the spectral range of vision is constrained by the decreasing spectral range of the waters they live in, in either fresh or marine environments (see chapter ). Although the sensitivity peaks of the long-wavelength cones generally follow light availability, the peaks of the shorter-wavelength cones surprisingly show no clear trend. Fish from all habitats may possess either UV- or short-wavelength-sensitive cones. Single cones also vary in both
number and peak sensitivity, even among species from the same habitat. Lythgoe () suggested that some of the variability in spectral sensitivity may reflect the need to adapt to water color changes mediated by changing seasons and other factors. Thus, individual species may alter their color channels (for example, by chromophore replacement; see chapter ) on a seasonal or life-stage basis to reflect different waters and different needs.

Figure 7.9 The λmax positions of cones and rods in marine fish arranged by habitat, from estuary through inner, middle, and outer reef. Fourteen snapper (Lutjanus) species and life stages from coastal through outer-reef waters are compared: L. kasmira, L. bohar, L. quinquelineatus, L. adetii, L. carponotatus, L. russelli (adult and juvenile), L. malabaricus, L. sebae, L. erythropterus, L. fulviflamma (juvenile), L. johnii (adult and juvenile), and L. argentimaculatus. Rod sensitivities, vertical bars; single cones, filled circles; DT cones, semicircles. Vertical lines plot the zones of greatest sensitivity for each water type for visual pigments within at least 90% of optimum. Note how DT cones fall within this range. Reef DT sensitivities are comparable to those in figure 7.7B. (From Lythgoe et al., 1994)
Why Is There So Much Variability in Fish Color Vision?

We still must face the question: why are cone sensitivities so variable in fish, especially in fish from apparently similar habitats? Figure 7.10 shows the spectral sensitivities of four species of reef fish, all of which share the same basic light environment. Although their depth ranges are distinctive, these species spend most of their time in and around shallow reefs. Despite this ecological similarity, their vision varies in both the number and position of their color channels. Because niche and microhabitat illumination differences are relatively minor, the best explanation for these differences may be found in behavior and lifestyle. Aulostomus chinensis (depth range – m) feeds on small fish and crustaceans; Forcipiger flavissimus (depth range – m) feeds on a wide range of invertebrates and fish eggs. Pervagor spilosoma (depth range – m) also feeds on benthic invertebrates and some algae. The damselfish species Abudefduf abdominalis (depth range – m) is another benthic
algal and planktivorous species. Whether these differences, or other color tasks associated with food or sexual selection, or perhaps differing phylogeny, are enough to account for the differences among species is not clear.

Figure 7.10 The spectral sensitivities of four species of reef fish from the same photic microhabitat. (A) Trumpetfish, Aulostomus chinensis. (B) Longnose butterflyfish, Forcipiger flavissimus. (C) Fantail filefish (leatherjacket), Pervagor spilosoma. (D) Banded damselfish, Abudefduf abdominalis. Normalized sensitivities include filtering by ocular media in non-UV-sensitive species, often close to 400 nm. Note the diversity of peak positions, the number of spectral sensitivities, and the mix of single and DT cones, and, in Abudefduf abdominalis, the presence of DT and single cones with similar spectral sensitivities. Rods, black; single cones, red; DT cones, blue curves.

In an attempt to control for phylogeny, lifestyle, and habitat, three species of cardinalfish of the family Apogonidae (Apogon fragilis, A. leptacanthus, and Rhabdamia gracilis) have been examined (N. J. Marshall, unpublished). All three species live on the reef in multispecies schools associated with coral heads (figure 7.11). All experience the same light microhabitat, are from the same family, and follow similar lifestyles. They are nocturnal planktonic foragers and conduct social, sexual, and other behaviors during the day. All appear to possess UV, violet, or blue markings around the head that may function in social interactions, but mostly their body colors seem to match the background, presumably to avoid predation. Rhabdamia gracilis differs in that its camouflage is based on transparency. All three species are likely trichromats, but the positions of their three channels are quite different (figure 7.11). Perhaps the variation seen in these apogonids is of no consequence for their designated tasks in life; all perform equally well, and therefore their color vision may not be under strong selective pressure. Figure 7.11 also shows the result of modeling an important
task for all three fish: distinguishing a coral trout (a voracious predator) against the reef background. The relative color distance between trout and background color loci is similar for all three species, and small, despite their visual-system diversity. Given that apogonids are a favored food of coral trout, this close match to the background may be no accident but the result of selective pressure for good camouflage. The diversity of apogonid color vision suggests that if specific and salient behavioral tasks can be performed equally well (or indeed poorly), the exact placement of spectral sensitivities may be allowed to wander around a "good enough" locus.

Figure 7.11 Color vision in cardinalfish (family Apogonidae). (A) Spectral sensitivity peak positions of rods (black), single cones (blue), and DT cones (red) of three species that often share the same coral head on a reef (photograph right). Species are Apogon fragilis (A.f.), Apogon leptacanthus (A.l.), and Rhabdamia gracilis (R.g.). A UV photograph of Rhabdamia gracilis is inset in the main photograph, showing how this species becomes conspicuous when viewed at short wavelengths. (B) Spectral reflectances of reef fish colors, reef background, and radiance in water (thick blue line; same as figure 7.7B). (C) Maxwell triangle chromaticity diagrams (see figure 7.3 for explanation) of the three species viewing the reef colors in B. Circles are reef fish colors, the brown square indicates the reef background color (an average of 255 measurements: Marshall et al., 2003c), and the blue square is the water radiance "color." The colors of the coral trout's brown body and blue spots lie under the loci of these backgrounds (respectively) in all three species, despite shifts in other colors.

Compared to marine systems, freshwater habitats vary considerably in water color and in the depth profile of intensity (see chapter ). Fish in these habitats can be divided into tetrachromatic or trichromatic surface dwellers; mainly dichromatic deeper-living or crepuscular species; and monochromats, such as catfish, that are also often crepuscular and/or inhabit turbid or deep water (Lythgoe, ). The color vision of a few model species of freshwater fish has received much attention in recent years, and as a result we know more about the color vision of the tetrachromatic goldfish (Carassius auratus auratus) than perhaps about any other vertebrate besides humans (Neumeyer, , , ). Unfortunately, we know little about their ecology or how their color vision is used. The zebrafish Danio rerio is also tetrachromatic and a genetically manipulable vertebrate laboratory species. Again, however, we know more about its visual system than its visual ecology. The case is different with African cichlids. The wonderfully colorful species in this group are ideal subjects for studying both evolution and visual ecology (Carleton, ).
Several hundred closely related species are found in three large rift lakes: Lakes Malawi, Tanganyika, and Victoria, the waters of which provide diverse optical environments, from clear blue (Malawi) to more turbid green and brown (Victoria). The colors of many cichlid species are sexually dimorphic, and microspectrophotometry (MSP) indicates that their retinas generally contain three functional cone visual pigments: two longer-wavelength sensitivities in a double cone and a short-wavelength-sensitive single cone. Because cichlids are colorful, sexually dimorphic, and blessed with a variety of mating behaviors, it has been assumed that sexual signaling has driven the evolution of their color vision. Lake Victoria fish are often red and green, and their eyes usually possess the long-wavelength-shifted visual pigments typical of freshwater fish. In contrast, Lake Malawi fish are often yellow and blue and possess the short-wavelength-shifted visual pigments typical of marine fish (figure .). The transmittance of signals through the water appears to determine the set used in each lake, but little is known about the coevolution of color and color vision within single-lake systems. In addition to these between-lake variations, within-lake habitat differences produce retinal differences between subpopulations. In Lake Malawi the clear-water, rock-dwelling "Mbuna" species are usually short-wavelength-shifted trichromats with a single cone peaking between nm and nm and a double cone that peaks between nm and nm. The slightly deeper sandy-bottom species are long-wavelength-shifted, with peaks for single cones of about nm and for double cones between and nm (figure .). However, the photic habitat is not markedly different between these shallow habitats, although the sandy-bottom areas can be more turbid. This, along with behavioral differences, may drive some of the variability between UV-based and non-UV-based trichromacy in these fish.
Overall, however, as with reef fish, we are left with greater variability in cichlid color vision types than might be predicted from the light environment or known behaviors alone. Precise tuning in search of a cause might suggest that several solutions solve the same
problem, or that the solutions we see are compromises driven by several tasks, some of which we have yet to quantify. When cichlid visual pigment genes are examined, it turns out that most (if not all) cichlids express seven spectrally distinct cone pigments in their retinas: one LWS, one SWS1, two SWS2, and three from the RH2 gene set (figure 7.12A). Not all of these opsins, however, are expressed at a level high enough to be visually useful, and most evidence points to the idea that individual cichlid species usually express only three of the seven available options (figure 7.12B,C). They are able to mix and match different potential trichromacies depending on environmental and behavioral needs. The variation in expression even extends to single species living in different lake areas, which express their opsins differentially. For example, shallow-living Copadochromis eucinostomus show higher LWS expression than their deeper conspecifics, and deep-living Tropheops gracilior are more variable in expression than their shallow-living counterparts. This flexibility is perhaps one reason for the great variety of color vision systems that we see in cichlids.

Figure 7.12 African cichlid eyes collectively contain all seven cone opsins known in fish. (A) Absorbance spectra of the seven classes of visual pigments determined from molecular sequence, from left to right: SWS1, SWS2B, SWS2A, RH2B, RH2A, RH2Aα, LWS. (B,C) Histograms of cone classes in two Lake Malawi cichlids (measured using MSP): Tramitichromis intermedius (B) and Pseudotropheus acei (C). Distributions of peak sensitivities in single and double cones are shown diagrammatically. Opsin gene classes are color coded as above. The two species show differential expression of three of the seven available cone opsins and inhabit rock (P. acei) and mud-bottom (T. intermedius) lake zones. (From Bowmaker, 2008, and see Carleton, 2006) See chapter 3 for an introduction to visual pigments and opsin genetics.
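Chromaticity comparisons of the kind plotted in the Maxwell triangles of figures 7.3, 7.6, and 7.11 come down to a short computation: integrate each spectrum against the cone sensitivities, normalize the resulting quantum catches, and project into the plane of the triangle. The sketch below uses invented Gaussian sensitivities and reflectance spectra, not data from any of the figures:

```python
import numpy as np

wl = np.arange(400, 701, dtype=float)  # wavelength axis, nm

def sens(lmax, width=40.0):
    # Hypothetical Gaussian cone sensitivity.
    return np.exp(-0.5 * ((wl - lmax) / width) ** 2)

cones = [sens(450), sens(520), sens(580)]  # a generic trichromat (assumed)

def chromaticity(spectrum):
    # Quantum catches, normalized to discard intensity, then projected
    # onto 2-D Maxwell-triangle coordinates.
    q = np.array([(s * spectrum).sum() for s in cones])
    q /= q.sum()
    return np.array([(q[2] - q[0]) / np.sqrt(2),
                     (2 * q[1] - q[0] - q[2]) / np.sqrt(6)])

brown = 0.1 + 0.3 * (wl - 400) / 300           # dull, long-rising reflectance
green = np.exp(-0.5 * ((wl - 550) / 30) ** 2)  # peaked "leaf" reflectance

# Euclidean distance between the two loci in the triangle.
dist = np.linalg.norm(chromaticity(brown) - chromaticity(green))
```

Because the catches are normalized before projection, overall brightness cancels out; chromaticity encodes only the relative stimulation of the channels, which is what the triangle diagrams in this chapter display.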
Dichromatic Solutions to Aquatic Color Vision: Matched and Offset Cone Classes

Dichromacy is favored as the photic envelope narrows with increasing depth. In fact, two spectral sensitivities can adequately encode all of the spectral information available in an environment where the illumination spectrum extends only from to nm (Barlow, ; Vorobyev, a). Furthermore, large predatory species look through relatively long path lengths of water as they search for prey. Even near the surface, this means that the spectrum of signals narrows and becomes more like the background illumination over distance (figure 7.13; see chapter ). This narrowed spectrum drives detection systems toward dichromatic solutions and may explain why many large predators, such as snapper or barracuda, appear to operate with only two spectral classes of cones (figure 7.14).
Figure 7.13 The offset hypothesis (Loew and Lythgoe, 1978; Lythgoe, 1979). (A) Spectral radiance of a gray target in coastal (greenish) waters at 0, 1, 2, 4, and ∞ m (where the target becomes indistinguishable from background spacelight). (B) Relative contrast of object and background at the same distances; at ∞ the target is invisible. A, B, and C are λmax positions of hypothetical matched (B) and offset (A and C) spectral sensitivities. Gray bars represent the ranges of single and DT cones from figure 7.9.
Figure 7.14 The barracuda Sphyraena helleri (A) is associated with reefs and lives in a shallow photic habitat that could accommodate more than its known two spectral sensitivities. Viewing prey, such as the anemonefish Amphiprion percula (D), over long distances underwater may drive such predatory fish toward the offset, dichromatic solution (B) of contrast over color (figure 7.13). Below the fish and its cone spectral sensitivities are RGB true-color (D) and false-color (C) images of A. percula, the false-color image being a calculated contrast representation of how the barracuda sees its prey. (From Marshall et al., 2006)
Lythgoe suggested that underwater color vision is best encoded by an offset dichromatic system, with one visual pigment maximally sensitive to the background light and the other offset from it (figure 7.13; Loew and Lythgoe, 1978; Lythgoe, 1979). This is a case where functional dichromacy may be more valuable for detection purposes than trichromacy, because the matched pigment allows the backlighting to silhouette foreground fish (figure .E,F). The offset pigment, being less sensitive to the spectrum of the background light, allows well-lit objects near the surface (where the illuminant is spectrally broad) to stand out. The combination produces high contrast for
dark, backlit objects at all depths and also for bright, colorful objects near the surface, each being viewed by a single, spectrally specialized cone type.
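The logic of the matched pigment can be made concrete with a toy radiative-transfer model: a target's radiance relaxes toward the background spacelight as exp(−cd), fastest at wavelengths where the attenuation coefficient c is large, so the channel centered where c is smallest (i.e., matched to the spacelight) holds target contrast over the longest distances. All spectra and coefficients below are invented for illustration:

```python
import numpy as np

wl = np.arange(400, 701, dtype=float)  # wavelength axis, nm

back = np.exp(-0.5 * ((wl - 540) / 50.0) ** 2)  # greenish spacelight (assumed)
c = 0.2 + 0.004 * np.abs(wl - 540)              # attenuation, minimal at 540 nm

def target_radiance(d):
    # Dark gray target (reflecting half the spacelight) seen through d
    # metres of water: its image relaxes toward the background as exp(-c d).
    t = np.exp(-c * d)
    return 0.5 * back * t + back * (1.0 - t)

def sens(lmax, width=40.0):
    return np.exp(-0.5 * ((wl - lmax) / width) ** 2)

def contrast(s, d):
    # Weber contrast of the target against the background in channel s.
    qb = (s * back).sum()
    return ((s * target_radiance(d)).sum() - qb) / qb

matched, offset = sens(540), sens(620)
# At zero range both channels see the same -0.5 contrast; at a few metres
# the matched channel still sees a strong dark silhouette while the offset
# channel's contrast has decayed much further.
```

At very long range contrast vanishes in every channel, which is why figure 7.13 shows the target becoming invisible at infinite distance regardless of pigment placement.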
Color Vision in Aquatic Crustaceans

Crustaceans live alongside fish in all marine and freshwater habitats. They possess highly variable eye designs, including apposition and superposition compound eyes, simple eyes, and ocelli (chapter ; Land and Nilsson, ). The variability seen is largely optical and structural, as vision in most crustaceans is spectrally simple. Most are monochromatic or dichromatic and invest as much in polarization vision as in color vision (see chapter ). However, Daphnia has four channels (as mentioned above), and stomatopods (described below) are the world record holders with 12 channels. Although the optical environment is known for many crustaceans, the behaviors of all but a handful are poorly understood. Thus, the potential contribution of feeding or mating behaviors to color vision is mostly speculative. Those with two color channels usually possess a UV/violet channel peaking near 400 nm and a blue-green channel peaking at about 500 nm (figure 7.15B). This dichromatism across multiple optical environments suggests that, as in insects, factors other than visual ecology have limited the scope of crustacean color vision, although possibly, like dichromatic fish, many crustaceans may adopt the offset solution described above. Indeed, the visual pigments in the main rhabdoms (retinular cells 1 to 7, R1–7) of crustaceans are relatively well matched to the background irradiance or bioluminescence, and the UV/violet sensitivity of the eighth retinular cell (R8) may provide the required sensitivity offset. The crustacean visual systems we understand best are those of decapods (crabs, lobsters, crayfish, and deep-sea shrimp) and stomatopods. This eclectic mix of species, largely based on animal size and historical accident, may not be a good representation of a diverse subphylum of many tens of thousands of species.
As in fish, there is a general spectral sensitivity shift toward shorter wavelengths in oceanic or deep-sea crustaceans compared with freshwater or coastal species, apparently driven by the wavelengths available in the underwater light field. In some deep-sea species the short-wavelength channel is discarded entirely, and the R8 cells degenerate or are lost. And again as in fish, despite a general trend toward maximal photon catch, the visual pigments in the main rhabdoms of some deep-sea shrimp peak about  nm short of the  nm expected for oceanic water (chapter ). Nor do they perfectly match the average bioluminescent emission peak of  nm (figure .; Marshall et al., ). However, the long photoreceptors of crustaceans make exact sensitivity matching unnecessary. The main rhabdoms are commonly longer than  μm, and even R8 cells approach this length in mesopelagic decapods. The result is a spectrally broadened absorptance function through self-screening (chapter ) that lessens the need for an exact match between spectral sensitivity and available light (figure .A). Crabs inhabit more complex and diverse habitats than the mesopelagic crustaceans, and it is surprising that their spectral sensitivities seem especially conservative. They also possess more blue-shifted color channels than a sensitivity match would predict. The blue-shifted light of twilight may be one explanation: in the ocean, predators abound at dawn and dusk, taking advantage of the shift from day to night vision, so it is reasonable to optimize vision for this time period (figure .C).
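The photon-catch arithmetic behind self-screening is simple to sketch. In the toy model below, a Gaussian curve stands in for a real visual-pigment absorbance template, and the peak absorbance coefficient (0.008 per μm) and the rhabdom lengths are illustrative assumptions rather than values from the text; the point is only that absorptance, 1 − e^(−kl), saturates at the spectral peak as length l grows, flattening and broadening the sensitivity curve:

```python
import math

# Self-screening sketch: a Gaussian curve stands in for a visual-pigment
# absorbance template (an illustrative assumption, not a real template),
# with peak absorbance coefficient k (um^-1) and rhabdom length (um).
def absorptance(wl, peak=500.0, halfwidth=80.0, k=0.008, length=100.0):
    """Fraction of photons absorbed at wavelength wl (nm): 1 - exp(-k*a(wl)*l)."""
    rel = math.exp(-((wl - peak) / halfwidth) ** 2)   # normalized absorbance
    return 1.0 - math.exp(-k * rel * length)

def fwhm(length):
    """Full width at half maximum of the absorptance curve, found by scanning."""
    wls = [300.0 + 0.5 * i for i in range(801)]       # 300-700 nm
    vals = [absorptance(w, length=length) for w in wls]
    half = max(vals) / 2.0
    above = [w for w, v in zip(wls, vals) if v >= half]
    return above[-1] - above[0]

# A short rhabdom is spectrally narrow; a long one absorbs nearly all
# photons at its peak, which flattens and broadens the curve.
print(round(fwhm(20), 1), round(fwhm(400), 1))
```

With these assumed numbers, lengthening the photoreceptor twentyfold widens the absorptance band by more than 60 nm, which is why a mesopelagic shrimp need not match the ambient spectrum exactly.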
Figure 7.15 Crustacean spectral sensitivities. (A) Spectral sensitivities of the mesopelagic decapod crustacean Systellaspis debilis, R8 dashed curve, R1–7 solid curve. (Data from Cronin and Frank, 1996; Marshall et al., 1999) Vertical black lines encompass the region in which most bioluminescent emission maxima are found. (B) Photoreceptor spectral sensitivities of most crustaceans studied up to 2003. Except for peak values close to 400 nm and 500 nm, the illustrated diversity is mainly due to stomatopods. (C) Photic environment at midday (solid line) and twilight (dotted line) at Eniwetok Atoll. (Data from Munz and McFarland, 1973) Vertical lines encompass 90% of crustacean R1–7 sensitivity maxima and suggest that their placement is a compromise between daytime and twilight sensitivity matching. (From Marshall et al., 2003c)
Another parallel to fish is the unpredictable presence or absence of UV sensitivity. Many species in shallow water, where UV light is plentiful, dispense with their UV channels for unknown reasons. On the other hand, several decapod species, for example Systellaspis debilis (Cronin and Frank, ; Frank et al., ), retain near-UV sensitivity even at depths of several hundred meters. Various hypotheses have been proposed for this, but the most likely is that these deep-sea crustaceans use their UV channel as part of a dichromatic visual system that can discriminate various hues of bioluminescence (Cronin and Frank, ; Frank et al., ). A final parallel is the shift of crustacean spectral sensitivity with season or life stage, as noted in chapter . Recent genetic analysis has shown that many—perhaps
most—crustaceans express two visual pigments in one photoreceptor cell. The reasons are not understood, but one possibility is that coexpression provides ready plasticity as light environments change (chapter ). Such plasticity would be useful in aquatic species, living as they do where illumination varies with season or habitat. Semiterrestrial species, such as fiddler crabs, are dichromats (Detto, ). As these animals are active under broad-spectrum bright light, there must be reasons other than offset contrast for their dichromatic vision. Uca species, such as U. pugilator and U. mjoebergi, famously wave their major claws in sexual and territorial signaling, and the shallow-water blue crab Callinectes sapidus also uses its colorful claws in mate choice. Not surprisingly, both crabs have true color vision (Hemmi et al., ; Detto, ; Baldwin and Johnsen, , ). Although molecular techniques suggest that more than two visual pigments exist in the retinas of U. pugilator (chapter ), the most parsimonious explanation for color choice in all these crabs is dichromacy mediated by comparison of the R8 and R1–7 spectral mechanisms.
Stomatopod Color Vision: An Extreme Example of Multiple Channels

Stomatopod vision is far from simple. These hoplocarid crustaceans have been following their own evolutionary branch (and ecological strategies) for close to  million years, and a combination of phylogenetic isolation, complex behavior, and a number of unique visual adaptations has resulted in an unusual proliferation of photoreceptor types. Most stomatopods possess  anatomical photoreceptor classes. These produce a total of  information channels:  for linear polarization,  for spatial vision,  for circular or elliptical polarization, and 12 for color vision (Marshall, ; Cronin and Marshall, ; Marshall et al., ). Stomatopods divide their eyes into three functional subsections: a midband of six ommatidial rows separating two hemispherical ommatidial regions (figure .). The midband's first four rows of ommatidia contain the color channels, with peak sensitivities ranging from just above  nm to over  nm in shallow-dwelling species. Almost every photoreceptor has a unique visual pigment, and the anatomically tiered arrangement, together with the presence of photostable color filters in the second and third ommatidial rows (chapter ), results in sharply tuned sensitivities (Cronin and Marshall, ; figure .). Mantis shrimps are unique among animals in having multiple color channels that peak in the UV portion of the spectrum. Although their shallow-water world is rich in these wavelengths, we remain ignorant as to why they analyze this spectral region in such detail (Marshall and Oberwinkler, ). Taken together, all these features suggest an unusual color-analysis system. The stomatopod color sense is optically arranged to sample a narrow strip in space, viewing the same slim field as the corresponding strips in the flanking hemisphere regions and as the polarization receptors in the two most ventral midband rows.
To combine this visual strip with the extended visual fields of the rest of the eye, stomatopods scan their eyes in slow, deliberate swaths (chapter ). This scanning technique has evidently coevolved with the stomatopods' multichannel sampling of information—in fact, it probably makes it possible. A color sense that extends from  nm to beyond  nm is worthwhile only in spectrally broad habitats such as the waters of shallow tropical seas. Species found in deeper waters change their spectral sampling in a number of ways. Stomatopods of the superfamily Squilloidea often live below  m in muddy benthic environments.
They have reduced their channel number to one or two, the minimum for this group. The spectral sensitivities of squilloid R8 cells are unknown, but the main rhabdoms of these species have peak sensitivities that cluster around  nm. Other superfamilies of mantis shrimps, however, do not surrender their color vision. Instead, as might be predicted, the spectral sensitivities of the peripheral retinal R1–7 cells shift from about  nm near the surface to about  nm at depth. Simultaneously, the midband photoreceptors operating at longer wavelengths move their spectral
sensitivities to shorter wavelengths by changing the diversity, density, number, or length (and thus the cutoff wavelengths) of the overlying filters (figure .). Doing so brings the resultant sensitivities within the environment's spectral envelope (Cronin et al., ; figure .). Whereas the long-wavelength end is adjusted by changing filters, the short-wavelength receptors of the first and fourth midband rows adapt by long-wave shifting their visual pigments (Cronin et al., ). Some individual mantis shrimp species, such as Haptosquilla trispinosa and Gonodactylellus affinis, occupy a broad depth range, from near the surface to deeper than  m. Here, spectral tuning changes color vision within a single species. Rather than shed color channels or switch visual pigments, these species adjust the density or spectral position of the photostable filters such that deeper individuals blue-shift their peak sensitivities by up to  nm (figure .). This constrains the long-wavelength receptors to remain within, or follow, the long-wavelength limits of the restricted deep-water photic envelope (Cronin et al., ). The long-wavelength end of the spectrum changes the most over depth (chapter ), so UV and shorter wavelengths may remain useful in clear water to considerable depths (Frank and Widder, ). One would nevertheless expect a long-wavelength shift in UV sensitivity with depth; this has not been found. The moderately deep Odontodactylus species, for example, possess the same deep-UV sensitivities as the shallow Neogonodactylus. It seems that stomatopods do their best to retain a full complement of spectral sensitivities rather than simply reducing to one or two. One current hypothesis concerning stomatopod color vision explains both the superabundance of photoreceptor classes and the tendency to retain all spectral channels. The optical arrangement of the eye seems to act as a line-scan sampler, with the spectrum segmented into 12 bins.

Figure 7.16 Spectral tuning with depth both between and within stomatopod species. (A,B) The photic habitat of shallow and deeper species. (C,D) Comparison of 8 of the 12 spectral sensitivities in shallow- (Pseudosquilla ciliata) and deep-living (Lysiosquilla sulcata) species, showing a "gathering in" of peak sensitivity at both ends of the spectrum with depth. Lysiosquilla sulcata (deep) also inhabits more turbid waters, where the spectral envelope is also restricted by water quality. (E,F) Tuning of spectral sensitivities in Gonodactylellus affinis from shallow (E) and deep (F) water to show how adjustment of intrarhabdomal filter spectrum and density narrows the spectral range sampled by deeper-living individuals. (Cronin et al., 2001)
Figure 7.17 Behaviorally determined wavelength discrimination functions (Δλ) in stomatopods (Haptosquilla trispinosa) compared to various animals: human, goldfish, butterfly, honeybee. Based on the bandwidth and frequency of stomatopod spectral sensitivities, the theoretical Δλ in stomatopods (green dashed line) is close to 1 nm over the whole spectrum, but in behavioral tests (red dots) it averages 15–20 nm. (From Koshitaka et al., 2008; Thoen and Marshall, unpublished data)

Color is measured directly by monitoring the
excitation pattern of the spectral zones as the midband passes over objects, rather than by opponency between pairs of spectral sensitivity types. Such a color sense is analogous to the way the cochlea of the mammalian ear examines sound frequencies in auditory space. This unusual way of encoding color requires that the spectrum be sampled by many nonoverlapping channels; thus, the high number of channels in stomatopods may not imply any need to resolve color in great detail (Osorio et al., ). Behavioral testing supports this hypothesis: the wavelength discrimination of the system is in fact quite coarse (figure .). At least for now, it is comforting to hope that we can avoid analyzing a dodecahedrachromic visual system!
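A toy version of this "binned" color code shows why discrimination should be coarse. Below, twelve Gaussian channels at 25-nm spacing (both the spacing and the bandwidth are illustrative assumptions, not measured stomatopod values) label each wavelength by the most strongly responding channel; two wavelengths are distinguished only when their labels differ, so the just-discriminable step averages roughly half the channel spacing, far coarser than an opponent comparison could achieve:

```python
import math

# Toy version of the proposed stomatopod "spectral bin" code. Twelve
# narrow channels (Gaussian shapes, 25-nm spacing, 30-nm FWHM; all
# illustrative assumptions, not measured stomatopod values) label a
# wavelength by whichever channel responds most strongly.
PEAKS = [315 + 25 * i for i in range(12)]                # 315 ... 590 nm
SIGMA = 30.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))    # FWHM -> sigma

def best_channel(wl):
    """Index of the most strongly excited channel (winner-take-all)."""
    responses = [math.exp(-((wl - p) ** 2) / (2 * SIGMA ** 2)) for p in PEAKS]
    return responses.index(max(responses))

def threshold(wl, direction, step=0.5):
    """Smallest shift (nm) in one direction that changes the winning label."""
    d = step
    while best_channel(wl + direction * d) == best_channel(wl) and d < 100:
        d += step
    return d

def delta_lambda(wl):
    """Mean of the upward and downward just-discriminable steps."""
    return (threshold(wl, +1) + threshold(wl, -1)) / 2.0

# Discrimination is limited to roughly half the channel spacing (about
# 12-13 nm with these assumed numbers), far coarser than the ~1 nm that
# fine-grained opponency could deliver.
print([delta_lambda(w) for w in (450, 500, 550)])
```

The step size that falls out of this sketch is on the same order as the 15–20 nm measured behaviorally, consistent with a scanning, winner-take-all readout rather than pairwise opponency.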
Retinal and Visual Field Regionalization

Normally, receptors can serve color vision only if they examine the same location in the visual field. Surprisingly, even in retinas without specific areal modifications, photoreceptors are rarely evenly (or randomly) spaced and thus sample oddly shaped patches of visual space (figure .). Some mammalian cone arrangements show dorsoventral variability associated with sky and ground visual fields: both the terrestrial world and much of the aquatic world are divided into a bright dorsal and a dark ventral region. In the honeybee drone the ventral half of the eye contains all three of the typical insect photoreceptor types (UV, blue, and green), whereas the dorsal half has only UV and blue. This dorsal zone of dichromacy is thought to be useful for spotting queen bees against the sky, recalling the contrast-biased offset dichromatic systems of fish. The dragonfly Sympetrum has a clearly defined dorsally directed region in its compound eye, also dominated by UV and violet photoreceptors and with particularly high spatial acuity, both thought to be adaptations for hunting prey against the sky (chapter , figure .). Like drone bees, most butterfly species do not array their photoreceptor types evenly over the eye (figure .). In fact, the extension of butterfly spectral sensitivities from the basic insect three types to as many as eight accompanies a rearrangement of photoreceptors such that various combinations look in different directions for different color-driven tasks. Pierid and papilionid butterflies often exhibit a pronounced dorsoventral difference in ommatidial types: more UV-sensitive types exist in sky-directed dorsal ommatidia, whereas violet-sensitive types abound in ventral areas. Red- and green-sensitive photoreceptors, which may be specialized for flower, foliage, or mate recognition, are concentrated in medial and ventral ommatidia.
Papilio xuthus, with eight spectral channels, arrays these in color-vision subsets, producing tetrachromatic vision anteriorly and in the center of the eye and two dichromatic subsets for dorsally and ventrally directed tasks. Fish also frequently have differentiated retinal regions. The archerfish, a species closely associated with the water surface and famous for spitting insect prey off leaves, divides its retina into three areas (figure .). The dorsal retina, which looks into the water, contains single and twin cones with spectral sensitivities peaking at  nm and  nm, respectively, the latter a good match to the upwelling underwater light. This sets up a matched-and-offset dichromacy, ideal for object detection. The ventronasal retina, which looks out of the water above the fish, has a -nm single cone and a -nm twin cone, and this time it is the single cone that matches the downwelling skylight and the twin that provides the offset. The ventrotemporal retina, whose job
it is to aim the spit, possesses three cone types, a -nm single cone and a /-nm double cone. Trichromacy for this more complex task of insect identification and evaluation may be of use to the fish (Temple et al., ). Several retinal subregionalizations, including those of butterflies, birds, and fish, involve colored filters (Stavenga et al., ; figure .). Pigeon eyes, as well as those of various seabirds such as the shearwater, have a dorsal concentration of red oil droplets called the "red zone" associated with far-red vision. This downward-directed zone may help in viewing objects against green leaves. Yellow filters in fish, as discussed in chapters  and , may also be spread unevenly over the retina or cornea. Suggested functions include breaking counterillumination camouflage in the mesopelagic realm (chapter ) and contrast enhancement over short viewing distances (chapter ). Newly discovered yellow retinal filters in mesopelagic lanternfish are thought to serve bioluminescence-spotting functions in several directions, an ability that may be correlated with species-specific bioluminescent communication (figure .; de Busserolles et al., ).

Figure 7.18 Regionalization of spectral sensitivities in the eyes of vertebrates and invertebrates. (A) Schematic representation of long- (L) and short- (S) wavelength-sensitive cones in whales, seals, and some nocturnal terrestrial species (upper circles: note absence of S cones) and shrews and mice (lower circles) that show concentrations of S cones in the ventral, sky-directed retina and L cones with a dorsotemporal concentration for forward focus. (From Peichl, 2005) (B) Views into the freshly dissected eye cups of two species of myctophids (lanternfish) showing a concentration of yellow pigments in different retinal regions in different species. (From de Busserolles et al., 2013) (C) Eye shine from two butterfly species (Bicyclus anynana, left, and Heliconius melpomene, right) indicates differential perirhabdomal filters and photoreceptor distribution; this is not visible in normal illumination. (From Stavenga, 2002) (D) The compound eye of the stomatopod Odontodactylus scyllarus showing the division between dorsal and ventral hemispheres and the midband. It is the midband that contains the color and many of the polarization receptors (chapter 8).

Figure 7.19 Vision in the archerfish (Toxotes chatareus), a fish that spits at prey located above the water surface. (A) Cone spectral sensitivities of retinal subregions that view different areas of visual space. The environmental light spectrum impinging on each region is plotted as gray-filled curves. (From Temple et al., 2010) (C,D) Medioventral cone square mosaic (C) and dorsal DT cone line mosaic (D) frequently found in fish. Scale bar = 10 μm.
The Evolution of Color Vision

Vision is thought to have evolved most of its present forms during the Cambrian explosion, some half-billion years ago. In the shallow aquatic environments where many early animals and their visual systems evolved, ripple-induced flickering of light on the bottom and in the water column is a serious visual challenge. Maximov () has suggested that, in solving this problem, eyes eventually evolved the basic requirements for color vision. At a single point in shallow water on a sunny day, intensity varies by about two orders of magnitude, at frequencies well above visual flicker fusion rates (Sawicka et al., ). Under such conditions, distinguishing an object from its surroundings is challenging, even if the object differs from them in spectral content. It is possible that opponency between two spectrally distinct photoreceptors first evolved to eliminate this problem (figure .). The ratio of the two channels viewing the scene would encode the chromatic contrast of an object independent of flicker in the overall illumination.
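Maximov's idea can be sketched numerically: multiplicative flicker scales both spectral channels equally, so a ratio (equivalently, a log-difference) of the two channels is flicker-invariant, while any single intensity channel is not. The quantum catches used below are made-up illustrative numbers, not measurements:

```python
import math

# Numerical sketch of Maximov's proposal. Caustic flicker multiplies the
# illumination, and hence both spectral channel catches, by a common
# factor m(t). The channel catches below are made-up illustrative numbers.
object_catch = (0.60, 0.20)   # e.g., a yellow target: (long, short) channels
ground_catch = (0.35, 0.35)   # e.g., the green background

def signals(base, m):
    """Photoreceptor quantum catches under flicker factor m."""
    return (base[0] * m, base[1] * m)

for m in (0.1, 0.9, 3.7, 10.0):   # ~two orders of magnitude of flicker
    s_obj = signals(object_catch, m)
    s_bkg = signals(ground_catch, m)
    # Raw intensity confounds object and flicker, but the opponent
    # (log-ratio) signal cancels m exactly:
    opp_obj = math.log(s_obj[0]) - math.log(s_obj[1])
    opp_bkg = math.log(s_bkg[0]) - math.log(s_bkg[1])
    print(round(opp_obj, 4), round(opp_bkg, 4))
# Object (log 3, ~1.0986) and background (0.0) keep distinct, constant
# opponent values at every flicker level, while raw intensities vary 100-fold.
```

This is exactly the situation cartooned in figure 7.20: addition of two like channels inherits the flicker, whereas subtractive opponency between unlike channels removes it.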
Figure 7.20 Hypothetical origin of color vision as a way to remove caustic flicker produced by ripples in water in shallow Cambrian aquatic habitats. (A) Intensity fluctuation measured at a single point in shallow water. (B) Two 500-nm-sensitive photoreceptors viewing a yellow object on a green background and wired together return the same intensity for object and background, making it difficult to distinguish. (C) With different spectral sensitivity photoreceptors and subtractive opponent processing, flicker is subtracted out, leaving the object’s contrast against the background distinguishable. (Figure courtesy of Nathan Hart, based on Maximov, 2000)
Once dichromatic color vision evolved, the gradual addition of more channels to solve particular challenges or to permit particular behaviors within the available spectral envelope may have given animals selective advantages. The earliest vertebrates are now thought to have been tetrachromatic (Collin et al., ). An interesting wrinkle in this story is the lungfish, an ancient fish lineage closely related to terrestrial vertebrates. Lungfish possess lungs and lobed fins and are known to be able to wiggle over damp land when required. Retinas of the Australian species, Neoceratodus forsteri,
contain colored oil droplets and have cones with four spectral sensitivities as juveniles and three as adults (chapter ; figure .). This combination, including a UV-sensitive ( nm) cone in juveniles that is lost in adults, gives this fish a bird-like tetrachromacy, the closest any fish comes to the optimal bird plan described at the start of this chapter (figure .). Unfortunately, lungfish seem to be ecologically disappointing: they have few behaviors and live in turbid Queensland rivers. Why they require tetrachromatic vision is a mystery, although they may have retained it for their brief forays onto land. The retinal complexity and other body characteristics of Australian lungfish suggest that highly tuned tetrachromatic vision evolved before vertebrates emerged on land. This is supported by the apparent complexity of the lamprey retina and by the presence of oil droplets in other basal fish lineages, including the other (African and South American) lungfish, the bowfin Amia, and the coelacanth. It is thus possible that early terrestrial vertebrates were tetrachromatic. Humans, so proud of our ability to experience color, should remember that our earliest mammalian ancestors lived in a world of impressive visual predators. Dinosaurs (like today's birds, to which they gave rise) likely had tetrachromatic vision enhanced by a suite of oil-droplet filters. By escaping into the dark of night, when dinosaurs were nearly blind, the first mammals gained a toehold on survival and a chance for diversification. The cost was the loss of most of their ancestors' cone types and associated oil droplets—leaving their descendants to inherit a relatively impoverished world of color.
8 Polarization Vision
In the South African savannah a well-fed elephant deposits a pile of dung like a scattering of messy brown spheres. Soon dung beetles arrive and busy themselves, rolling balls of the nutritious stuff away to a safe place where they can deposit an egg inside. Each performs a strange dance before starting, climbing to the top of its ball and twirling in a pirouette. Dung beetles compete for resources, and once a prize ball is constructed, the best way to escape other beetles is to roll it in a straight line directly away from the dung heap. The direction chosen is irrelevant, but the line must be straight to get away quickly. To select and remember a direction, the pirouetting beetle surveys the pattern of polarization in the sky overhead. Because dung beetles can see this pattern, each is soon rolling its ball away on a straight course. Arthropods are famous for their polarization sensitivity, but other animals, including vertebrates, are also capable of it. A remarkable feature of some insect systems is that the sky pattern is genetically imprinted into the neural arrangements, all the way through to the central nervous system. However, celestial navigation is not the only use to which animals can put polarization vision; other functions may include communication, contrast enhancement, and camouflage breaking. This chapter examines how polarization sensitivity is achieved in animals and how it is used in natural behavior.
The Physics of Polarized Light

Light is an electromagnetic wave with an electric field that oscillates perpendicular to the direction of travel. Thus, in addition to its wavelength (or frequency) spectrum, light has an oriented electric field (figure .). The axis of this field—the e-vector—generally traces out an ellipse as the beam travels forward (Johnsen, ). However, biologists are most concerned with the two extreme forms of an ellipse: a line and a circle. When the ellipse is flattened to a line, the e-vector is constant, and the light is said to be linearly polarized. Fully linearly polarized light is entirely described by the angle its e-vector makes, and we refer to it as vertically polarized, horizontally polarized, or any angle in between.
Figure 8.1 Two representations of various states of polarized light starting as a nonpolarized light beam (right) and its polarization through a linear polarizing filter and then conversion to circular polarization through a ¼-wave retarder. (A) Each state of polarization is represented as a cross section through the beam at an instant in time. (B) The e-vector axis and helical circularity are depicted in a three-dimensional representation. (From Wikipedia)
If the e-vector traces out a circle as the beam travels, then the light is said to be circularly polarized, either right-handed or left-handed (Johnsen, ). Unfortunately, different fields of science use different naming conventions for handedness. Although circularly polarized light exists in nature, and a few animals can detect it, the vast majority of biological research involves linearly polarized light. Thus, for the rest of the chapter, unless noted otherwise, we use the term polarized light to denote linearly polarized light. The previous two paragraphs describe fully polarized light. In nature, however, most light is partially polarized, with a polarization percentage ranging from 0% (unpolarized light) to 100%. This percentage is known as the degree of polarization. Partially polarized light can be considered a mixture of completely polarized light and completely unpolarized light. So, in some ways, color and polarization are analogous. Just as color has the three defining characteristics of hue, saturation, and intensity, partially linearly polarized light has the three characteristics of angle (e-vector axis), degree (described above), and intensity (which it shares with color). Like color, polarization provides information about a stimulus that brightness alone does not. Although this analogy is useful, the physics of polarized light and the ways in which it is analyzed by animal eyes create important differences between the two visual modalities. First, the degree of polarization of light reflected from a surface
(e.g., the surface of a pond) often depends strongly on its orientation relative to the incident light. This can make polarization information less reliable than color. The waxy leaves of many plants reflect polarized light, hang at different angles, and wobble in the wind; if color were reflected the way polarization is, all these leaves would continually change color. Second, unambiguously determining the three parameters of linear polarization requires input from exactly three receptor sets, each maximally sensitive to a different e-vector angle. Any three angles will do as long as no two are ° apart. As we see below, the polarization sensitivity of many animals is based on only two sets of receptors, which—as with dichromatic vision—can lead to confusion. Because three receptor sets completely analyze linear polarization, polarization sensitivity is not enhanced by expanding dimensionality beyond three channels, another difference between polarization and color vision.
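The three-receptor argument can be made concrete with Malus's-law algebra, using the standard Stokes-parameter description of partially linearly polarized light. In the sketch below, the analyzer angles of 0°, 60°, and 120° are a convenient symmetric choice made for illustration, not a claim about any particular animal's receptors:

```python
import math

def analyzer_intensity(s0, s1, s2, theta):
    """Intensity behind an ideal linear analyzer at angle theta (radians),
    for a beam with linear Stokes parameters (s0, s1, s2)."""
    return 0.5 * (s0 + s1 * math.cos(2 * theta) + s2 * math.sin(2 * theta))

def recover(i0, i60, i120):
    """Recover total intensity, degree, and angle of linear polarization
    from measurements at 0, 60, and 120 degrees (a symmetric choice of
    three analyzer angles, used here purely for illustration)."""
    s0 = (2.0 / 3.0) * (i0 + i60 + i120)
    s1 = (2.0 / 3.0) * (2.0 * i0 - i60 - i120)
    s2 = (2.0 / math.sqrt(3.0)) * (i60 - i120)
    degree = math.hypot(s1, s2) / s0
    angle = 0.5 * math.degrees(math.atan2(s2, s1))   # e-vector angle, degrees
    return s0, degree, angle

# A partially polarized test beam: intensity 1.0, 60% polarized at 30 degrees.
aop = math.radians(30.0)
s1, s2 = 0.6 * math.cos(2 * aop), 0.6 * math.sin(2 * aop)
meas = [analyzer_intensity(1.0, s1, s2, math.radians(a)) for a in (0, 60, 120)]
print(recover(*meas))   # recovers (1.0, 0.6, 30.0) up to rounding
```

With only two analyzer channels one of the three unknowns is undetermined, which is the polarization analogue of the confusions a dichromat suffers in color space.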
Polarized Light in Nature

The Production of Linearly Polarized Light in Nature

Polarized light stimuli are abundant in nature, a fact beautifully documented in a marvelous book by Können (); see also reviews by Cronin and Marshall () and Johnsen (). Although no important source of light (sun, moon, or bioluminescence) is polarized, light may become polarized when it is scattered or reflected (figures . and .). These two fundamental principles produce abundant polarized light in natural scenes, which explains why polarization vision is so common. Light reflected from a smooth surface can become polarized, the degree depending on the angle between the surface and the light (Johnsen, ; figure .A, right). Consequently, living or nonliving smooth surfaces such as water, leaves, skin, or scales all have the potential to reflect polarized light (figure .). The other major way in which polarized light is generated is by scattering (see Johnsen, ). Light scattered at 90° to the original direction of a photon's travel is highly polarized, with an e-vector perpendicular to the plane that includes the axes of the original ray and of the scattered ray. Scattering at other angles produces partially polarized light (figure ., top left), and the degree of polarization decreases if the ray is scattered additional times before reaching an eye. This is not often critical in air on sunny days but quickly becomes limiting underwater. In addition to the scatter-mediated polarization of the blue sky and of clear waters, small vesicles or other cellular structures (for example, muscle fibers) in the tissues of a few animals can scatter polarized light.
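The angular dependence of scattering polarization follows the standard single-scattering (Rayleigh) relation, p(γ) = p_max sin²γ / (1 + cos²γ), where γ is the angle between the original and scattered rays. In the sketch below, the ceiling p_max, set to 0.75, is an illustrative stand-in for depolarization by multiple scattering, not a measured atmospheric value:

```python
import math

def rayleigh_degree(gamma_deg, p_max=0.75):
    """Degree of linear polarization of singly scattered light at angle
    gamma_deg from the original ray. p_max mimics depolarization by
    multiple scattering; 0.75 is an illustrative ceiling, not a measured
    value."""
    g = math.radians(gamma_deg)
    return p_max * math.sin(g) ** 2 / (1.0 + math.cos(g) ** 2)

# Polarization vanishes in the forward and backward directions and peaks
# at 90 degrees from the source -- for skylight, the band 90 degrees from
# the sun that insects read as a compass.
for gamma in (0, 45, 90, 135, 180):
    print(gamma, round(rayleigh_degree(gamma), 3))
```

Applied to the sky, γ becomes the angular distance from the sun, which is why the most strongly polarized band runs overhead when the sun sits on the horizon.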
The use of this predictable sky e-vector pattern as a compass for navigation, in particular by insects, is a major foundation for our current understanding of polarization sensitivity in animals. In insects, as we see later, receptor e-vector sensitivities and neural coding are specialized for the analysis of this celestial pattern. Underwater, the polarization pattern in the sky is compressed into Snel’s window and may be used by aquatic animals for orientation or contrast enhancement (figure .), particularly near the surface. Outside this window there is a second polarization pattern caused by the scattering of light from water molecules. Its degree
Polarization Vision 181
A
B
C
D
Figure 8.2 Sources of linearly polarized light in nature. (A) Scatter from particles and reflection from water or other dielectric interphases. (From Wehner, 2001) (B–D) Polarization reflected from animals. (B) Stomatopod (Odontodactylus latirostris) showing polarizing activity in pink antennal scale. Arrows indicate the e-vector orientation for each image. (C) Butterfly (Heliconius cydno) wings. (From Douglas et al., 2007) (D) Cuttlefish (Sepia sp.) arms photographed naturally (left) and converted to a false color image (right) to show degree of polarization (Scale 0–100%). Yellow arrow in D shows the position of the polarizing stripe in Sepia. (From Chiou et al., 2007)
182
Chapter 8
A
B
C
D
Figure 8.3 Polarized light in air and water. (A) Celestial e-vector patterns for solar (black dot) positions near the horizon (left) and higher in the sky (right). Directions and sizes of black bars denote e-vector angle and degree, respectively. (From Wehner, 1983) (B) Fisheye photograph of the sky at dusk through a polarizing filter arranged orthogonal to sky polarization, producing a dark band running N to S. The falsecolor image (right) indicates the degree of polarization (key, 0–100%). (C) Fish underwater with the sun (white arrows) overhead (left) or toward the horizon (right). With the sun overhead, light entering the water is scattered toward the fish with largely horizontal polarization, symbolized by the black double-headed arrows and the band around the fish. This band tilts near sunrise and sunset, but for much of the day the background light is roughly horizontally polarized. (From Hawryshyn, 1992) (D) False-color image (right) of the e-vector angle, as detailed in key. The red water background indicates that polarization is mostly horizontal. Left is the original reef scene as seen through vertical and horizontal polarization filters.
of polarization is generally greatest in the horizontal viewing direction. The angle of polarization depends on the position of the sun, but it becomes more and more horizontal with increasing depth. The degree of polarization, however, rarely exceeds %, more typically being near %. In water the greatest degree of polarization normally occurs at medium wavelengths (~ nm; Cronin and Shashar, ). Because this large field pattern is predictable in its movement through the day, it is usable for navigation. However, the variable nature of the water’s surface and water clarity, not to mention the effect of depth, would make such a navigation system very difficult to use, and there are no known examples of this.
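The predictability of this scattered-light pattern can be illustrated with the standard single-scattering (Rayleigh) relation, in which the degree of polarization depends only on the angle between the viewing direction and the sun. This is a minimal sketch, not taken from the text; the parameter d_max is a hypothetical factor standing in for the depolarizing effects of multiple scattering, surface waves, and turbidity.

```python
import math

def rayleigh_degree(scatter_angle_deg, d_max=1.0):
    """Degree of linear polarization for single Rayleigh scattering.

    scatter_angle_deg is the angle between the sun and the viewing
    direction; d_max (<1 in real skies and water) accounts for
    depolarization by multiple scattering.
    """
    g = math.radians(scatter_angle_deg)
    return d_max * math.sin(g) ** 2 / (1.0 + math.cos(g) ** 2)

# Polarization is greatest 90 degrees from the sun and vanishes
# toward and away from it:
print(rayleigh_degree(90))                     # 1.0 for an ideal atmosphere
print(round(rayleigh_degree(45, d_max=0.75), 3))  # 0.25
```

The e-vector itself lies perpendicular to the plane containing the sun, the scattering point, and the observer, which is what produces the banded celestial patterns shown in figure 8.3A.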
Natural Sources of Circularly Polarized Light

Unlike linear polarization, circular polarization is rare in nature. In fact, only one mechanism of producing environmental circular polarization has been proposed: the generation of elliptically polarized light by internal reflection from the air/water interface (Ivanoff and Waterman, ; figure .A). However, circularly polarized light is reflected from a few animals and plants. The fruits of the African marble berry Pollia condensata are the only plant tissue known to reflect circularly polarized light (Vignolini et al., ). The striking, bright blue reflections from Pollia are a mixture of right and left circular polarization, making it unlikely that the polarization state is important to animals. The first animals proven to reflect circularly polarized light were scarab beetles (figure .B), related to the dung beetles that began this chapter. Michelson noted in the early twentieth century that scarabs reflect circularly polarized light, and since his time no other beetles have been found with this feature. Almost all scarabs reflect left circular polarization (Goldstein, ), and it is not clear whether or not the circularly polarized reflection has any biological significance. The polarized reflections of both the fruit of Pollia and the scarabs are produced by helicoid layers in their cuticles (Caveney, ; Vignolini et al., ). The other animals that commonly reflect circularly polarized light are the stomatopods, or mantis shrimps, many species of which decorate themselves with both circular and linear polarization patterns (figure .C). The mechanism by which they produce this effect is undescribed, but it apparently involves a dichroic polarizer (i.e., a polarizing filter), which first linearly polarizes the light, and a retarder, which then converts it to circular polarization (as in figure .B). Finally, there is one animal that is known to emit circularly polarized light.
The larvae of Photuris fireflies emit left and right circularly polarized bioluminescence from their left and right lanterns, respectively, for unknown reasons (reviewed by Horvath and Varju, ).
Photoreceptor Design and the First Stage of Neural Processing

The Basics of Polarized-Light Absorption in Biological Membranes

A visual pigment molecule's chromophore must absorb a photon to photoisomerize and start the cascade of vision (chapter ). The roughly linear chromophores of visual pigment molecules make them dichroic, most likely to absorb a photon whose
Figure 8.4 Sources of elliptically polarized light in nature. (A) Internal reflections of scattered, linearly polarized light underwater become elliptically polarized. (From Ivanoff and Waterman, 1958) (B) The cuticle of several insect species, including scarabs (here, Chrysina beyeri), reflects circularly polarized light. L and R circles here and in (C) indicate the handedness of the filter used in front of the camera for each image. (C) The thoracic and abdominal carapace of the stomatopod Odontodactylus cultrifer is strongly linearly polarized (arrows denote polarization orientation in each photograph). In males, the central carina of the telson, outlined on the right and enlarged below, is circularly polarized. (Photographs by Roy Caldwell)
e-vector is parallel to the chromophore’s long axis. Visual pigment molecules are oriented in the lipid bilayers of photoreceptors. Their chromophores are nearly parallel to the plane of the membrane, maximizing the possibility of intercepting light. This orientation, together with the angle at which light strikes the membrane, determines whether or not a cell can respond differentially to polarized light of different e-vector orientations. Thus, polarization vision ultimately depends on the geometry of photoreceptor cell membranes (figure .).
Rods and Cones and Rhabdoms

Visual pigments in vertebrate photoreceptors float in sheets of membrane (rod disks or cone lamellae) oriented perpendicular to light passing through them (chapter ), rotating freely in the membrane's plane with some limitations (see Roberts et al., ). Because their orientation does not favor any particular e-vector axis, they are unresponsive to changes in the polarization of light when axially illuminated (figure .A). If illuminated from the side, they preferentially absorb light with e-vectors parallel to the membrane plane. There are a few examples of retinas in which rod
Figure 8.5 The orientation of visual pigment chromophores and resulting dichroism within photoreceptor membrane. (A) Rod or cone disks, or stacked plate membranes, with randomly oriented chromophores. All e-vectors of axial rays are absorbed equally; only light arriving from the side of such a structure is absorbed preferentially by e-vector. (B) Rhabdoms of invertebrates are made of microvilli, tubes of membrane with randomly or specifically oriented visual pigment. Even a random arrangement results in some preferential absorption along the long axis of the tube due to the side-on profile of the membrane. For high polarization sensitivity, however, organization of the chromophores must be achieved within the cell. (From Bradbury and Vehrencamp, 2011)
disks or cone lamellae are not perpendicular to incoming light (reviewed in Roberts et al., ), and these could have a limited polarization sensitivity. In general, however, polarization sensitivity in vertebrates involves subtle, poorly understood mechanisms. No such limits are set on rhabdomeric photoreceptors, with their microvillar stacks (chapter ). Even if visual pigment molecules were free to rotate in the membrane, the lateral margins of the microvilli would still have intrinsic polarization sensitivity, selectively absorbing light polarized parallel to the microvillar axis (figure .B, left image). However, based on the high degrees of polarization sensitivity in some species, it seems likely that the visual pigment orientation is constrained in some way (figure .B, right image; see review in Roberts et al., ). In general, though, any photoreceptor with microvilli aligned perpendicular to the incident light is bound to be polarization-sensitive, and in fact, many invertebrates with microvillar receptors go to some lengths to destroy their intrinsic polarization sensitivity.
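The geometric argument above can be made quantitative with a simple Beer–Lambert model. Here a receptor's polarization sensitivity (PS) is taken as the ratio of its absorptances for e-vectors parallel versus perpendicular to the microvillar axis. This is an illustrative sketch only; the absorption coefficients and lengths are hypothetical values chosen to give an intrinsic dichroic ratio of 2, not measurements from the text.

```python
import math

def absorptance(k, length_um):
    # Beer-Lambert absorptance of a receptor of given length (um)
    return 1.0 - math.exp(-k * length_um)

def polarization_sensitivity(k_para, k_perp, length_um):
    """PS = ratio of absorptances for e-vectors parallel vs
    perpendicular to the microvillar axis."""
    return absorptance(k_para, length_um) / absorptance(k_perp, length_um)

# Hypothetical coefficients (per um) with a dichroic ratio of 2:
print(round(polarization_sensitivity(0.02, 0.01, 10), 2))   # 1.9
print(round(polarization_sensitivity(0.02, 0.01, 300), 2))  # 1.05
```

Note that the long receptor retains far less of its intrinsic dichroism: the favored e-vector is stripped out in the first layers of pigment, so deeper membrane absorbs increasingly depolarized light. This self-screening problem, and the structural tricks that circumvent it, are taken up in the next section.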
Making and Breaking Polarization Sensitivity

Despite the fundamental limitations of vertebrate photoreceptor geometry, many vertebrates show responses to polarized-light fields, and there is direct optical evidence of differential absorption of polarized light in fish cones (Roberts and Needham, ) as well as neurophysiological evidence of polarization sensitivity in fish (Hawryshyn et al., ; reviewed by Kamermans and Hawryshyn, ). The underlying mechanisms are hypothetical, diverse, and in general only weakly documented (reviewed by Kamermans and Hawryshyn, ). In fact anchovies are the only vertebrates with a well-understood retinal design for detecting polarization. They have special cones that lie flat, tangential to the retina's outer border and thus present their lamellae parallel to the incident light (Fineran and Nicol, ). These fish have two sets of cones of mutually orthogonal retinal orientation and identical spectral sensitivity, making polarization analysis relatively straightforward. Most invertebrate photoreceptors have the opposite problem. The microvilli in their rhabdoms are aligned, which is the best way to build a dense photoreceptor (try to pack straws in a box in a random arrangement and you will see why), giving them polarization sensitivity by default. This can wreak havoc with color vision—imagine trying to pick the real colors out of the maze of polarizations in figure .. There are several ways out of this conundrum, all of which seem to be used by animals. A solution used by some insects, and described for honeybees, is simply to twist the rhabdom along its length, so that not all microvilli are parallel (Wehner and Bernard, ). Another solution is for a single cell to produce microvilli that are mutually orthogonal. This solution is used in many short-wavelength receptors, for example, the R cells in crustaceans. Finally, a cell can produce microvilli that cross over each other or are splayed out like a fan.
This is seen in some receptors of stomatopod crustaceans (Marshall et al., ). Of course, it is also possible to remove polarization information at higher levels by combining signals of different receptors. And finally, some insects seem happy to consider color and polarization as aspects of a single stimulus, doing nothing to reduce the polarization sensitivity of at least some of their receptors (Kelber et al., ).
Figure 8.6 Terrestrial and aquatic scenes showing differences in polarization information. (A) Leaves of a bush with waxy surfaces, converted to false-color images showing degree (B) and angle (C; scale encodes angle) of polarization. (D) Degree of polarization in an underwater scene containing a fish and a cuttlefish against a coral reef background. Keys are as in figure 8.2C,D.
So far, we have considered classical polarization receptors, the types that were known at the time of Waterman's () masterful review, all based on absorption of light entering at the receptor's end. We close this section with a look at a truly spectacular departure from conventionality, a polarization-sensitive eye that bases its sensitivity both on microvillar alignment and on dielectric reflection of light. This eye, described by Dacke et al. (), is one set of eyes in a spider and is the first example of a polarized-light receptor in any arachnid (and the only eye known to use polarization enhanced by reflection). The creature is a ground-dwelling spider, Drassodes cupreus. Being an arthropod, Drassodes has rhabdomeric photoreceptors. The unusual feature of the polarization photoreceptors is that they are aligned and contained in a tapetal mirror-box structure. This allows them to be simultaneously illuminated both directly from overhead and indirectly by reflection from the side (figure .). The microvillar axes extend parallel to the sides of the mirror box and perpendicular to its reflected light. Because reflection polarizes light parallel to the reflecting surface, light that arrives at the tapetum already polarized parallel to that surface is reflected most strongly (see figure .A, bottom), and the intensity of the light that enters the rhabdomeres from the side therefore varies with the incoming polarization angle. The outcome is that these eyes are extremely sensitive to variations in the angle of polarization. Since D. cupreus is active at dusk, when the celestial polarization pattern is strongly oriented north-south (figure .B), the two orthogonal eyes make a perfect pair for the spider's needs.
Figure 8.7 Drassodes cupreus and its polarization-sensitive posteromedial (PM) eyes. (A) The spider and two close-up views of its PM eyes taken through orthogonal polarizing filters (e-vector direction represented by double-headed arrows). Note changes in tapetal reflection dependent on filter orientation (Scale: top, 2 mm; bottom, 200 μm). (B) Retinal details. (Top) Light and transmission electron micrograph images of microvilli. (Bottom) Line drawing of spider’s head showing position of PM eyes and diagram of the reflective tapetal plates and microvilli. (From Dacke et al., 1999)
Grouping Receptors for Linear Polarization Sensitivity

Having a single type of polarization receptor is not useless; it could be a "matched filter" to an expected external polarization stimulus. Alternatively, the animal could turn its body to note changes in stimulation, which would provide a simple form of polarization analysis. Or it might be compared to a single polarization receptor in a different eye, or a different ocellus, solutions we encounter later in the chapter. But the preferred solution, used almost universally, is to group polarization receptors together. This not only simplifies wiring further down the information stream but also enhances polarization sensitivity. This is necessary because long polarization receptors suffer from self-screening, analogous to the absorptance spreading in long photoreceptors (chapter ). The first layers of visual pigments remove most of the light of the favored polarization, sending increasingly depolarized light down the receptor. One solution is to use only short polarization receptors, which is probably the solution used by the anchovies that lay their polarization-sensitive cones on their sides, giving the light only one photoreceptor diameter to transit. Rhabdomeric photoreceptors use two effective solutions that allow them to retain length for overall sensitivity without suffering the usual problems of self-screening. The first solution is used by many insects, which place the microvilli of their receptors in a fused rhabdom built something like a very tall, sliced pie (see figure .B for examples). At every level, light encounters microvilli of various orientations (and often containing visual
Figure 8.8 Ommatidia in the dorsal rim area (DRA) and main eye of various insects. (A) Left, transverse section of a DRA rhabdom in Gryllus. Center, scanning electron micrograph of DRA and other ommatidia from the cricket, Gryllus campestris. Right, transverse section of a DRA rhabdom of the desert ant Cataglyphis bicolor. Scale bars, 2 μm. (B) Diagrammatic transverse sections through ommatidia, distal and proximal, in the dorsal rim and main eye of various species: Sympetrum (Odonata), Gryllus (Orthoptera), Melolontha (Coleoptera), Apis and Cataglyphis (Hymenoptera), and Drosophila (Diptera). Note that the rhabdoms of the dorsal rim are always orthogonal units. Colors suggest varying spectral sensitivities in some species, with violet representing UV. (From Labhart and Meyer, 1999)
pigments of differing absorption spectra as well). The fused receptor maintains both spectral clarity and polarization sensitivity by a process called "lateral filtering." The basic idea, first presented by Snyder et al. (), is that light is shared among all rhabdomeres in the fused rhabdom. Each component absorbs its favored stimulus, be it peak wavelength, polarization angle, or both, and continues to share the remaining light among all receptors in the group. Thus, by removing its preferred stimulus and sharing the remainder, each rhabdomere retains high selectivity to its preferred stimulus along the full length of the receptor. Plus, the design has the advantage of sampling a single visual "pixel" for all analyzed attributes (see chapter ). This is bioengineering of a high order! Crustaceans build fused rhabdoms differently, more like a square layer cake than a sliced pie (figure .A). At each layer, cells on opposing sides of the square send in rhabdomeres from each side, all parallel. At the layer below, the same thing happens, but a different set of retinular cells makes the contributions in the orthogonal direction. The entire rhabdom is like a tall square tower, constructed of hundreds of levels,
Figure 8.9 Rhabdom construction in crustaceans and cephalopods. (A) Generalized crustacean apposition compound eye, ommatidium, and three-dimensional rhabdomere showing orthogonal microvilli made by opposite rhabdomeres and resultant e-vector sensitivities (double-headed arrows, buff and blue colored). (From Kirschfeld, 1972; Stowe, 1983; Marshall et al., 1991; courtesy Mike Bok) Micrograph shows detail of orthogonal microvilli. (B) Top, cephalopod eye and retinal details showing horizontal and vertical projections of the inner orthogonal microvillar directions, including a retinal whole-mount with underlying orthogonal units indicated. (From Talbot and Marshall, 2011) Bottom, retinal three-dimensional diagram, rhabdom and microvillar directions with resultant e-vector sensitivities (buff and blue arrows). (From Moody and Pariss, 1961; Yamamoto et al., 1965)
each with microvilli perpendicular to the ones above and below. The amount of light removed in each layer is small, so successive layers retain high polarization sensitivity, slicing out little bits of the stimulus but affecting it only slightly as it passes through. Consequently, the whole stack retains excellent polarization sensitivity. Think of each retinular cell as a toothbrush, with the brush part being the microvilli. Then each set of bristles nudges against a parallel set from an opposing toothbrush, and gaps between sets of bristles are filled with bristles set perpendicularly and passing in from the side. To make a crustacean rhabdom, you would need seven toothbrushes, one twice as wide as all the others (see figure .A). As in the insect design, the entire mass of microvilli is illuminated by light from a single direction in space. Before leaving crustacean photoreceptors, we need to discuss one exceptional case. Unlike insect ultraviolet receptors, those of the crustaceans (the R cells, chapter ) are not specialized to detect polarization. On the contrary, their structure specifically eliminates any chance of their being polarization sensitive, having microvilli that extend from all four sides of the rhabdom produced by a single R cell (Waterman, ). This is true throughout all the R classes of the stomatopod crustaceans as well, with one important pair of exceptions (Marshall et al., ). In the two most ventral rows of ommatidia in the midband, the R cells have rhabdomeres with elliptical cross sections and unidirectional microvilli parallel to the minor axis of the ellipse. In Rs of the fifth row of ommatidia, the microvilli extend parallel to the row, whereas those of the sixth, most ventral, row extend perpendicular to it (Marshall et al., ). These cells can be over μm long, enough to suffer some loss of polarization sensitivity by self-screening, but this seems to be the stomatopod approach to building a two-channel ultraviolet polarization analyzer. 
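The advantage of this layer-cake arrangement over one long unidirectional block can be sketched numerically. The model below is an illustration only, with hypothetical absorption coefficients giving an intrinsic dichroic ratio of 2: it propagates horizontally and vertically polarized light through alternating orthogonal layers and pools the signal of the horizontally aligned cell set. Many thin layers preserve nearly the full polarization sensitivity, while the same total microvillar length in thick blocks loses much of it to self-screening.

```python
import math

def pooled_ps(layer_len, n_layers, k_para=0.01, k_perp=0.005):
    """PS of the horizontally aligned cell set in a rhabdom built of
    alternating orthogonal microvillar layers (crustacean-style)."""
    def run(I_h, I_v):
        total = 0.0
        for i in range(n_layers):
            horiz = (i % 2 == 0)   # even layers: microvilli horizontal
            kh, kv = (k_para, k_perp) if horiz else (k_perp, k_para)
            a_h = I_h * (1 - math.exp(-kh * layer_len))
            a_v = I_v * (1 - math.exp(-kv * layer_len))
            if horiz:              # pool signal of the horizontal cells
                total += a_h + a_v
            I_h, I_v = I_h - a_h, I_v - a_v
        return total
    return run(1.0, 0.0) / run(0.0, 1.0)

# Same total horizontal microvillar length (100 um) in both cases:
print(round(pooled_ps(2.0, 100), 2))   # 1.99: thin alternating layers
print(round(pooled_ps(100.0, 2), 2))   # 1.61: two thick blocks
```

Because each thin layer removes only a small fraction of the light, the stimulus reaching deep layers is still nearly unpolarized, so every level sees almost the intrinsic dichroic contrast.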
The other group of animals with polarization-sensitive rhabdomeric photoreceptors is the cephalopod molluscs. Virtually all species in this class lack color vision, instead having excellent polarization vision. They too build fused rhabdoms, but these are different from the insect ones because each retinular cell sends off parallel microvilli on two opposite sides (figure .B). Microvilli from four cells, each contributing half their total number of microvilli, meet in a fused rhabdom similar to the one found in an insect ommatidium.
Grouping Receptors for Circular Polarization Sensitivity

Rhabdomeric photoreceptors work together to enhance their sensitivity to linearly polarized light, but even without their optical associations, they would have a fundamental polarization sensitivity (figure .). To gain sensitivity to circularly polarized light, however, they must operate as a unit. At least, this is true of the only mechanism yet discovered in animals that possess circular polarization sensitivity, stomatopod crustaceans (Chiou et al., ). These animals use their ready-made linear polarization receptors together with a quarter-wave retarder that converts incoming circularly polarized light into linearly polarized light. The system is illustrated in figure .. The essential trick that stomatopods discovered derives from a property of microvilli that we have not mentioned before—they are birefringent. Birefringence is found in optically anisotropic materials, that is, those that have different refractive indices for polarized light of different e-vector orientations passing through them in a particular direction. It is actually pretty common; if you have ever noticed the odd color patterns you see in car windows when you wear polarized sunglasses, they result from
birefringence in the stressed safety glass. Calcite crystals are birefringent, and because calcite is often deposited in arthropod cuticle, the cuticle itself is frequently birefringent (although the circular polarization reflected from scarab beetles has a different optical origin, as described earlier). Birefringence in microvilli is produced by their long, thin structure with ordered lipids in their bounding bilayers (see Roberts et al., for more about this). In a fairly long photoreceptor with parallel microvilli throughout, the entire structure can produce a significant phase delay, easily on the order of a quarter-wave. This is exactly the situation in the ventral midband rows of mantis shrimps—remember that their long R rhabdoms have microvilli in a single direction. These cells overlie a typical crustacean rhabdom, with the retinular cells building the standard stacked microvillar polarization analyzer. The entire unit with an R delay section linked to a two-channel linear polarization detector functions as a perfect CPL analyzer (Chiou et al., ; Roberts et al., figure .). The R cells in these ventral midband rows are both linear analyzers in the ultraviolet and phase-delay structures for visible light (which is the spectral region sampled by the main rhabdoms), a spectacular use of parallel microvilli for two completely different visual functions. It appears that in
Figure 8.10 Circular polarization detection in stomatopods. (A) Diagram of a longitudinal section through stomatopod ommatidia including all six midband rows and representative ommatidia from dorsal and ventral hemispheres (peripheral regions; DH, VH). Rows 5 and 6 detect circularly polarized light (CPL). (From Marshall et al., 1991) (B) As CPL passes through R8 cells, it is converted by ¼-wave retardation (see figure 8.1) to linearly polarized light in one of two directions, depending on CPL handedness. The R1–7 cells of these rows absorb this light, being set at 45° to the R8 cell’s fast axis. (From Chiou et al., 2007)
some mantis shrimps, the lengths of the R cells produce other phase delays, tuning the receptors to polarized light of a particular ellipticity. No known function exists for such tuning.
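The retarder-plus-analyzer scheme can be checked with textbook Jones calculus. The sketch below is not the authors' model but a standard optics calculation under one common sign convention: a quarter-wave retarder with a horizontal fast axis (standing in for the R8 rhabdomere) converts left- and right-circular light into orthogonal linear polarizations, which ideal analyzers at ±45° (standing in for the R1–7 microvillar sets) then separate.

```python
import numpy as np

# Jones vectors for left- and right-circular polarization
# (sign conventions vary between texts; this is one common choice)
LCP = np.array([1, 1j]) / np.sqrt(2)
RCP = np.array([1, -1j]) / np.sqrt(2)

# Quarter-wave retarder, fast axis horizontal (the R8 cell's role)
QWP = np.array([[1, 0], [0, 1j]])

def analyzer(theta):
    """Ideal linear polarizer at angle theta (an R1-7 microvillar axis)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]])

def detected(jones_in, theta):
    """Intensity passed by an analyzer after the quarter-wave retarder."""
    out = analyzer(theta) @ (QWP @ jones_in)
    return float(np.sum(np.abs(out) ** 2))

for name, j in (("LCP", LCP), ("RCP", RCP)):
    print(name, round(detected(j, np.pi / 4), 3),
          round(detected(j, -np.pi / 4), 3))
# LCP 0.0 1.0
# RCP 1.0 0.0
```

Each handedness excites only one of the two 45° channels, so comparing the pair of linear analyzers behind the retarder reads out the sign of the circular polarization, just as in figure 8.10B.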
Processing of Polarized-Light Signals in the Distal Retina

By definition, photoreceptors devoted to polarized-light sensitivity show differential responses to stimulus lights of different e-vector orientations. Figure . illustrates typical responses from crustacean, insect, cephalopod, and fish polarization receptors (the fish example is a summed response measured using electroretinographic recordings). Note that in the invertebrates, the sensitivity tends to have a ° modulation. This is true for most fish as well, but the figure shows an unusual species with strong evidence for ° modulation. Remember that the ideal polarization analysis system has three receptor sets, so it is certainly possible that damselfish have the ability to fully analyze linearly polarized light. In the retina the typical optical unit for polarization reception contains orthogonal pairs of receptor sets like the ones in figure .A–C. In insects the pairs are at each level in the rhabdom (most receptor sets in figure .B have orthogonal microvilli, although a few are at ° or ° angles). In crustaceans orthogonality is rigid between successive layers of microvillar stacks (figure .A). And in cephalopods perpendicular microvilli occur in each quartet of rhabdomeres (figure .B). In these invertebrates the receptor groups are optically coupled, so placing them in opponency (i.e., subtracting the signal of one set from that of the other) is an obvious next step. Recordings from primary interneurons on which the receptors synapse nearly always reveal such opponency, set up as in the hypothetical crustacean example illustrated in figure .A. An excellent example of opponent processing in rhabdoms of an insect that are devoted to analyzing sky polarization in the dorsal rim area (described in the next section) is illustrated in figure .B.
The arrangement transfers ready-made polarization information to the next processing level (although having only two channels necessarily simplifies polarization information). In cell pairs such as the R polarization receptors of mantis shrimps, which exist in different ommatidia, it remains to be seen if signals are compared or processed independently.
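A toy model of such an opponent pair, assuming Malus-law (cos²) receptors with a polarization sensitivity of 2 (all values hypothetical, not from the text), shows both properties mentioned above: the opponent signal modulates with a 180° period, and two channels alone confound the angle and degree of polarization.

```python
import math

def receptor(theta_deg, axis_deg, d, ps=2.0):
    """Response of a receptor with microvilli along axis_deg to light of
    e-vector angle theta_deg and degree of polarization d (0-1)."""
    dphi = math.radians(theta_deg - axis_deg)
    pol = (ps * math.cos(dphi) ** 2 + math.sin(dphi) ** 2) / (ps + 1)
    return d * pol + (1 - d) * 0.5  # unpolarized light excites equally

def opponent(theta_deg, d):
    """Horizontal-minus-vertical opponent signal: d * cos(2*theta) / 3."""
    return receptor(theta_deg, 0, d) - receptor(theta_deg, 90, d)

print(round(opponent(0, 0.6), 3))        # 0.2
print(round(opponent(26.565, 1.0), 3))   # also ~0.2: angle/degree confounded
print(round(opponent(0, 0.6) - opponent(180, 0.6), 6))  # 0.0: 180-deg period
```

The ambiguity in the second line is why a full analysis needs a third channel: partially polarized light at one angle and fully polarized light at another can drive a single opponent pair identically.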
Polarization-Related Behavior

Polarization sensitivity is potentially useful to animals for general aspects of vision, such as object recognition, contrast enhancement, camouflage breaking, or signal detection. In this regard the sensitivity is a truly visual capacity, like color or intensity vision. On the other hand, many animals take advantage of the special optical situations that create polarization, analyzing light's polarization for special purposes. In general the visual aspects of polarization-related behavior are useful for dealing with discrete visual objects, including other animals (e.g., predators, prey, conspecifics). The more general aspects deal with extended polarization sources, where the polarization is produced by reflection from flat surfaces or scattering in the atmosphere and carries special information useful for orientation or navigation (see also chapter ). We begin with this second class of biological applications.
Figure 8.11 Responses of polarization receptors. (A) Stomatopod crustacean (Odontodactylus scyllarus) photoreceptor intracellular recording, showing different depolarizations to flashes of light delivered through a polarizing filter rotating from 0° to 360°. (B) Responses of ultraviolet-sensitive (R8) cells in the two ventral rows of the midband of Odontodactylus scyllarus. (A and B after Kleinlogel and Marshall, 2009) (C) Average depolarizations in photoreceptor cells in the simple eye of a diving beetle larva (Thermonectus marmoratus) stimulated by polarized light of different e-vector orientations. Two cell types are present with peak sensitivity to horizontally (H) or vertically (V) polarized light. (After Stowasser and Buschbeck, 2012) (D) Spike trains recorded from axons of a photoreceptor cell in the retina of a squid, Doryteuthis pealei, showing maximal polarization sensitivity to an e-vector orientation near 0°. The raised bar shows the 0.5-s stimulus. (After Saidel et al., 2005) (E) Polarization sensitivity in damselfish Dascyllus melanurus measured using electroretinogram (ERG) responses to UV (360 nm) light. Note the response peaks at 60° intervals. (After Hawryshyn et al., 2003)
Figure 8.12 Polarization processing. (A) Model for opponent processing by crustacean polarization receptors. Four retinular cells with vertical microvilli feed into a common excitatory input onto a polarization-encoding interneuron, here labeled Pol. Three cells with horizontal microvilli provide inhibitory input onto the same cell, setting up the opponency (How and Marshall, unpublished data). (B) Response of a polarization-sensitive interneuron in the cricket Gryllus (like those in figure 8.8) showing opponency at 90° intervals. The interneuron receives primary input from orthogonally arranged blue-sensitive receptors in the dorsal rim area, diagrammed in the inset. (After Labhart and Meyer, 2002)
Behavior Related to Extended Sources of Polarized Light

As a relatively flat dielectric surface, water reflects polarized light (figure .A), most strongly at Brewster's angle (for an air/water interface, ° from the normal). Because of this, any animal flying over water encounters a field of horizontally polarized light. Many insects exploit this, taking extended visual fields of horizontally polarized light as proxies for water. The behavior was discovered by Schwind (, ) in a bug, the backswimmer Notonecta glauca. This predator has regionalized ultraviolet-sensitive polarization receptors with differently oriented rhabdomeres (figure .). In most of the eye, microvilli in the two ultraviolet receptors are parallel, but these switch to an orthogonal arrangement in the ventral part of the eye. When Notonecta hangs upside down from the water surface tension, the orthogonally oriented rhabdomeres enable the backswimmer to analyze polarized light along the surface, possibly for locating floating food (Schwind, ; figure .A,B; see also chapter ). On the other hand, the division between the parallel and the orthogonal microvillar sets is ideal for detecting polarization reflections from the water surface. If the bug is stimulated by bright, potentially horizontally polarized light in the anteroventral region of the eye when flying, it initiates a characteristic plunge reaction, adjusting its body angle to ° just before diving in (Schwind, ; figure .C,D). This reorientation is thought to optimize polarization contrast between water and land, so that Notonecta lands in the right place. Wehner () presents this elegantly designed spatial and polarization vision system as an example of a dual-purpose matched filter. A number of other insect species, including mayflies, dragonflies, caddis flies, tabanids, and chironomid midges, show landing behaviors toward large, usually horizontal, polarizing objects.
Dispersal flights, seeking out ponds, lakes, and rivers to feed near, mate on, and lay eggs in, are ecological explanations for such behaviors (Horvath and Varjú, ).
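The water-proxy cue can be made quantitative with the Fresnel equations: for unpolarized incident light, the degree of (horizontal) polarization of the reflected beam rises to exactly 1 at Brewster's angle. The sketch below is a standard optics calculation, not a model from the text, assuming a refractive index of 1.33 for water.

```python
import math

def reflected_dop(theta_i_deg, n1=1.0, n2=1.33):
    """Degree of polarization of light reflected from a flat water
    surface (Fresnel equations, unpolarized incident light)."""
    ti = math.radians(theta_i_deg)
    tt = math.asin(n1 * math.sin(ti) / n2)   # Snel's law
    rs = ((n1 * math.cos(ti) - n2 * math.cos(tt)) /
          (n1 * math.cos(ti) + n2 * math.cos(tt))) ** 2
    rp = ((n1 * math.cos(tt) - n2 * math.cos(ti)) /
          (n1 * math.cos(tt) + n2 * math.cos(ti))) ** 2
    return (rs - rp) / (rs + rp)

brewster = math.degrees(math.atan(1.33))    # from the normal
print(round(brewster, 1))                   # 53.1
print(round(reflected_dop(brewster), 3))    # 1.0: fully polarized reflection
print(round(reflected_dop(20), 3))          # 0.192: weak cue near the normal
```

The steep fall-off away from Brewster's angle is consistent with Notonecta's pre-landing tilt: the bug reorients so that its analyzing eye region views the surface near the angle where the horizontal-polarization signature is strongest.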
Figure 8.13 Polarization vision in the backswimmer Notonecta glauca. This bug hunts while suspended from the water’s surface (A,B). The ultraviolet-sensitive cells in its rhabdom form two rhabdomeres, parallel throughout the eye above the plane extending 70° below the animal’s horizontal axis and perpendicular below it (in B, C, and D, the red line indicates the plane of this division, and the insets diagram the UV-sensitive rhabdomeres and microvilli). During hunting, the border between the regions lies almost exactly at the edge of Snel’s window, permitting polarization analysis at the water’s surface (A). (Photograph by Pavel Krasensky) (C) During flight, the border looks down at the water, well above Brewster’s angle (53° from the vertical), but before landing (D) Notonecta briefly tilts upward so that the analytical region looks just below this angle, permitting a judgment of the polarization qualities of the reflected light. (After Schwind, 1983, 1984; Wehner, 1987)
In determining the environmental conditions that trigger the plunge reaction, Rudolf Schwind collected Notonecta by getting them to dive at large sheets of UV-reflecting plastic and polarizing material. Many other insects are misled by polarization reflections. Mayflies lay eggs on wet roads or cars, mistaking them for water, and many insects are attracted to windows and even oil spills. Anthropogenic sources of highly polarized reflections, it seems, confuse insects, just as artificial lights do at night. Horváth et al. () call this "polarized light pollution." The other great natural extended source of strongly polarized light is the sky (figures .A and .A,B). Other very useful light stimuli are present in the sky, such as brightness, color, and sun position, but insects (in particular) have settled on analysis of the sky's polarization pattern for orientation when traveling, either on foot or on the wing. This is probably because some part of the sky generally remains visible even when the sun itself is hidden by foliage or partial cloud cover. Navigating insects need to see only a small patch of sky, and disruption or alteration of the polarization pattern they view seriously disorients them. All of these navigators use a remarkably similar mechanism,
Polarization Vision 197
Figure 8.14 Organization and function of dorsal rim area (DRA) ommatidia. (A) Area sampled by right and left DRAs in the field cricket Gryllus campestris, indicated in gray. (B) E-vector tuning of rhabdoms in the DRA of Gryllus, indicated by letter “T”s as illustrated in the isolated ommatidium at lower right. (C) POL neurons in an optic lobe of the cricket. The neuronal tuning is largely restricted to three e-vector angles near 0°, 60°, and 120°. (After Labhart and Meyer, 2002)
based on a very restricted patch of ommatidia along the dorsal margins of their compound eyes, the dorsal rim area (DRA) of figure .. These ommatidia are aimed across the top of the head, viewing a small, contralateral patch of overhead sky (figure .A). Dorsal rim microvilli are invariably orthogonal in each ommatidium, but their spectral sensitivities vary among species depending on their habitat and the time of day they are active (figure .). An additional, perhaps unexpected, property of dorsal rim ommatidia is that they have extremely poor optics. Inclusions in the cornea or elsewhere in DRA ommatidia significantly broaden their acceptance angles (Δρ, chapter ). In each part of the dorsal rim, the orientations of the rhabdoms are splayed out such that every region has ommatidia with many orientation axes (figure .B). Thus, there is no polarization map of the sky in these rhabdoms; each patch of sky is analyzed by many polarization classes of ommatidia. Ommatidia with similar e-vector orientations are collected into a common signal, which, with further neuronal processing, ends up in the POL neurons of the optic lobe (Labhart and Meyer, ; figure .C). Here, finally, we see the required three-axis polarization analysis system mentioned much earlier; the POL neurons encode the strength of polarization on axes near 0°, 60°, and 120°. The result: the overhead sky is fully analyzed for angle of polarization, degree of polarization, and overall intensity. This is all the insect's brain needs to know to work out a course. Much of what we know about the work the brain does to get the insect home (or wherever it is going) has emerged from Uwe Homberg's laboratory in Marburg, Germany. At present the best-understood system is that in the locust, Schistocerca gregaria. This animal's DRA is organized similarly to that of the cricket, and from there the flow of information has been painstakingly traced into the brain, to a region known as the central complex (figure .).
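The three-axis analysis can be made concrete. If the three POL-channel responses are treated as intensities behind ideal linear analyzers at 0°, 60°, and 120°, the total intensity, degree, and angle of polarization can all be recovered. The sketch below illustrates the underlying math only; it is not a model of the actual neural circuitry, and the function name is ours:

```python
import math

def polarization_from_three(i0, i60, i120):
    """Recover total intensity, degree, and angle (degrees) of linear
    polarization from intensities behind ideal analyzers at 0°, 60°, 120°.
    Based on I(theta) = 0.5 * (S0 + S1*cos(2*theta) + S2*sin(2*theta))."""
    s0 = (2.0 / 3.0) * (i0 + i60 + i120)           # total intensity
    s1 = (2.0 / 3.0) * (2.0 * i0 - i60 - i120)
    s2 = (2.0 / math.sqrt(3.0)) * (i60 - i120)
    dolp = math.hypot(s1, s2) / s0                  # degree of linear polarization
    aop = 0.5 * math.degrees(math.atan2(s2, s1))    # e-vector angle
    return s0, dolp, aop

# Light of unit intensity, 50% polarized, e-vector at 30°, gives analyzer
# readings of 0.625, 0.625, and 0.25:
print(polarization_from_three(0.625, 0.625, 0.25))  # ≈ intensity 1.0, DoLP 0.5, angle 30°
```

Three analyzer axes are the minimum that works: the stimulus has three unknowns (intensity, degree, and angle of polarization), and two axes alone confound a weakly polarized bright patch with a strongly polarized dim one.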
Both the nomenclature and the neuroanatomy are complex, so here we present just a superficial look at the circuitry and at a model that represents how polarization information is encoded (see Homberg et al., , for the details). The information flows from POL neurons in the medulla (Me in figure .A) through the anterior lobe of the lobula (Alo) and passes through a
Figure 8.15 Organization of polarization information coming from the compound eyes of the locust, Schistocerca gregaria. (A) Information from the DRA neurons transits a series of waystations in the lamina (La) and the medulla (Me), reaching polarization-coding neurons (similar to the POL neurons of crickets) in the anterior lobe of the lobula (Alo1). From there, axons eventually reach the central body (CB). (B) Central body neurons feed into an arching structure, the protocerebral bridge. Here, columns of neurons encode e-vector orientation for the two eyes, suggested by the double-headed arrows at the top of the figure. Asterisks (*) indicate higher-level processing centers. (After Homberg et al., 2011)
number of processing stages on its way to the central body (CB), where it is analyzed as seen in figure .B. The e-vector pattern becomes encoded in columns of cells in another region of the central complex, organized by input coming from the central body (the green neurons in figure .B). From here, information is available at higher levels for behavioral tasks.
We understand in some detail how the polarization-analysis systems of insects operate to interpret the celestial e-vector pattern. Vertebrates also orient to sky polarization, but here, the neural mechanisms (and even, for the most part, the polarization receptors) are simply not known. The only exception, regarding receptors, is the flopped-over cones of anchovies (Hess, ; Novales Flamarique, ). We have no great confidence in any mechanism available for other vertebrates, although the arguments over these make for interesting reading. Suffice it to say that fish definitely orient to polarized light, and migrating birds use celestial polarization at sunset (the pattern seen in figure .B) to set their magnetic compass for the next day's travel (Muheim et al., , ; Muheim, ). Because both birds and fish have double cones in their retinas (chapter ), current hypotheses focus on mechanisms for polarization detection involving these cones. We finish this section with a quick look at orientation under overhead polarization by a small selection of animals (figure .). The top image pair in this figure shows
Figure 8.16 Behavioral orientation and navigation under celestial e-vector patterns or artificial polarizers. (Top left) In the field, dung beetles roll balls of dung in straight paths, orienting to sky polarization to avoid contacting other beetles. (From Dacke et al., 2013b) (Top right) The desert ant Cataglyphis makes tortuous foraging trips over hundreds of meters, but once food is found it can return directly to the nest entrance (courtesy R. Wehner). This return is oriented using the dorsal rim system. (Bottom left) Fish, such as salmonids, orient to overhead polarization orientation in the laboratory. It is possible that they use the celestial e-vector pattern viewed through Snel's window in nature. Dots show individual fish swimming directions; the black triangles show the orientation of the e-vector of the polarizer. (From Hawryshyn, 1992) (Bottom right) The spider Drassodes cupreus (figure 8.8) uses separate and orthogonally oriented retinas in the PM eyes to navigate around objects in the environment (black squares) back to the nest area (bottom quadrant). In nature, this behavior is likely to occur at dawn and dusk, when the sky pattern is strong and predictable. The arrowheads show the e-vector orientation of a polarizer placed on the lamp illuminating the arena. (From Dacke et al., 1999)
animals orienting in nature. Dung beetles, met at the start of the chapter, roll their dung balls in as straight a line as they can manage. We meet them again in chapter , but for now, take a look at the quality of their vectors, produced while they are rolling a ball of rather lumpy dung backward and almost upside down, pushing it with their back feet. To the right is a plot of the outward foraging path and return home by a desert ant, another animal that recurs in chapter . Its homeward orientation is entirely under the control of its celestial polarization tracker. The bottom two frames show animals orienting in the laboratory under controlled conditions of polarization. The fish orients parallel to the e-vector pattern, but it is unknown whether this is relevant to its natural movements. Because salmonids make lengthy migrations both in shallow streams and in rivers and the open ocean, sky polarization would rarely be useful, but it could be significant in the spawning grounds. Similarly, the spider Drassodes shows good orientation under polarized light, exploring a large arena, passing by the nests that are not its own (the black squares), and returning home accurately. When an unpolarized light was used to illuminate the arena, homing suffered significantly.
Behavior Related to Discrete Sources of Polarized Light

We have just spent some time with large-field orienting responses. We now turn to tasks that require an ability to recognize polarization patterns (or at least, the presence of polarization) in individual, discrete stimuli—animals, signals, or other objects of special interest. We begin with a task that is, in a sense, a mixture of large- and small-field stimuli because it is based on the contrast of objects against the background polarization field underwater. Squids and cuttlefish hunt in open water, where they face the generally horizontally polarized spacelight (figure .D). These animals are well known for their polarization vision, and they use posture and rotational eye movements to hold their polarization receptors oriented rigidly horizontal and vertical with respect to the aquatic world (figure .B). Under laboratory conditions, cuttlefish prey more effectively on fish that reflect polarized light than on those whose polarization reflections have been artificially reduced (Shashar et al., ), and there is little doubt that this occurs in nature. Even in freshwater environments, it is likely that animals with good polarization vision that hunt at the air/water interface, such as backswimmers and water beetle larvae, use polarization contrast vision to spot their prey (see Stowasser and Buschbeck, ). Mantis shrimps, prowling the world of coral reefs, might similarly use their high-quality polarization vision (Marshall et al., ) in predation, although this has never been demonstrated behaviorally.
In addition to contrast produced by dark, unpolarized prey against the bright polarized field, or that of animals that reflect a contrasting polarization from shiny scales, it has long been suspected that predators with excellent polarization vision detect transparent prey, relying on the polarization contrast created by depolarization of background light or by the scrambling of background polarization by optically active tissues within these prey. Shashar et al. (), for instance, found that squid prey more effectively on birefringent glass beads against a horizontally polarized background than on polarization-transparent beads and also that predation on plankton was more effective with a polarized than a depolarized background. Although these results are consistent with the hypothesis that polarization makes transparent prey vulnerable, more recent work with zooplankton under natural conditions tends to reduce support for it (Johnsen et al., ). Intensity contrast produced by scattering
by the same tissues that are birefringent outweighs the polarization contrast, and of course both values are low for animals that are effectively transparent. So in the end, this appears to be yet another example of the slaying of a beautiful hypothesis by an ugly fact (Thomas Henry Huxley, /). Another potential use of polarization to spot objects is seen in strong and reproducible evasion responses of several types of animals to a looming stimulus visible only in polarization. The first experiments attempting this used a transparent, physical target that removed linear polarization from a polarized background, creating a visual contrast almost entirely in the polarization domain. Crayfish were far more likely to initiate avoidance responses when the target approached against polarized rather than depolarized backgrounds (Tuthill and Johnsen, ). Although these experiments were designed to test the limits of polarization vision in crayfish, the results suggest that visualized polarization contrast can be used to detect objects. Other research has avoided potential artifacts produced by physically moving objects by creating virtual polarization targets using the polarizing properties of liquid crystal displays. Examples of experiments using this approach are illustrated in figure .. Fiddler crabs, Uca vomeris, readily attempt to escape large approaching objects. How et al. () exploited this tendency by placing the crabs on a spherical treadmill (a styrofoam ball floating on a gentle airstream, figure .A) and subjecting them to virtual looming stimuli visible only in polarization—a human viewing the polarization monitor could not detect the stimulus. The crabs’ responses were recorded by video. As is obvious in figure .B, crabs did their best to get away. In fact they showed remarkably fine discrimination of polarization contrast, escaping from e-vector differences just over ° apart. In similar experiments monitoring startle responses of cuttlefish, Temple et al. 
() found responses to polarization differences of about ° (figure .C). While these responses are strong and reproducible, it’s difficult to know how they fit into the visual ecology of these creatures. Pure-polarization stimuli, of course, do not occur in nature. In addition, these polarization contrasts were at nearly % polarization, so the thresholds would be quite a bit higher in most natural polarization conditions. At present, the virtual approach is outstanding for probing the potential of polarization vision, and naturalistic behavioral experiments are still needed to understand what these animals, and others with polarization vision, really do with their exquisite ability to detect moving areas of polarization contrast. So, we know what animals might do when using their polarization vision for seeing objects. What do we know about what they actually do? Surprisingly, after all this time, the answer has to be “not very much.” There are many strong suggestions and loads of circumstantial evidence, but hard data about animals in their ecological settings are very scarce. Of course the situation is not so different for color vision. Do female cardinals really prefer redder males (see figure .)? Nobody can say for sure, although it seems likely. Chicks certainly find color-contrasted seeds more quickly than camouflaged ones, but they probably do not need tetrachromatic color vision to do that. Anyway, evaluating what animals “see” or what they’re looking for when viewing a polarization scene is not all that clear. Of course we are fully confident that the orientational and navigational abilities described in the previous section are real and biologically relevant, but orienting to polarization patterns in the sky or to the polarization reflected from water does not really feel like “seeing” to visual animals like us. 
If only from sensory envy, we feel intuitively that there must be something more to polarization vision than just wandering under the desert sky or dancing on top of a dung ball.
Figure 8.17 Responses of animals with polarization vision to virtual looming stimuli. (A,B) Experiments with the fiddler crab Uca vomeris. (After How et al., 2012) (A) The animals are placed on a spherical treadmill and subjected to a polarization loom from an LCD monitor. (B) Direction of sprinting during attempted escapes, as a histogram of the number of escapes in each direction. (C) Normalized strength of startle responses in the cuttlefish Sepia plangon to e-vector differences (in degrees) of polarization-only looms. Animals showed significant responses to differences as low as ~1°. (After Temple et al., 2012)
Kelber’s () work with swallowtail butterflies provides one solid piece of evidence that animals really can “see” a polarized-light stimulus, although even this example requires combining color with polarization, in military terms, “sensor fusion.” Swallowtail butterflies (Papilio aegeus) ovposit under shiny green leaves. Evidently they use a combination of polarization and perceived color to make their choice, as they strongly prefer horizontally polarized targets over other choices (figure .). They even switch from their favored green to a yellow-green target if the green one is vertically polarized, and the yellow-green is polarized horizontally. Here is a situation in which the visual system combines color and polarization information to make a behavioral choice. It would be interesting to know how these butterflies evaluate polarized versus nonpolarized objects, but the experiments of Kelber () did not investigate this. If butterflies can recognize the polarization of leaves, it stands to reason that they evaluate the polarization of butterflies. There is good evidence that they do. Tropical
Figure 8.18 Choices of female Australian orchard swallowtail butterflies, Papilio aegeus, viewing color/polarization targets, 5-cm circular glass filters each paired with a linear polarizer (the e-vector angle is suggested by the parallel lines on each circle). Filters, presented in pairs, were blue, green, and yellow-green, coded by the colors of the circles and the histograms, which show the choice frequency (%) for each member of the pair. Given blue filters, females strongly prefer horizontally polarized stimuli. When both color and polarization are provided, females prefer green targets over yellow-green ones if both are polarized horizontally but switch to the yellow-green one if it is offered with a vertically polarized green target. (After Kelber, 1999)
butterflies often have iridescent wings that reflect partially polarized light (figure .C). In a particular species pair in the genus Heliconius, females of one species (H. cydno) produce a strong polarization pattern, whereas those of H. melpomene are polarizationally featureless (figure .). Sweeney et al. () set out wings of females of both species where they would be visited by conspecific males, using filters to control polarization. Males of H. cydno visited female wings at a significantly higher rate when the polarization was visible, whereas male H. melpomene chose equally in the two conditions. This supports a role for polarization as a species-specific mating signal for butterflies. In a large selection of tropical nymphalid butterfly species, Douglas et al. () found that species inhabiting more heavily forested areas were more likely to have polarizing reflections than those in open, sunny habitats. This was interpreted as an adaptation to living in shade, where polarization is rare and thus more noticeable than in the polarization noise of sunny places (e.g., figure .). The first example of the possible use of polarization for signaling was itself discovered in an environment where background polarization is low—not under the rainforest canopy but in marine waters. Cuttlefish and squid produce many patterns on their skin for camouflage, but it was a surprise to learn that they also have dynamic patterns, invisible to us, based on the reflection of linearly polarized light (Shashar et al., ; see figures .D and .). The polarization results from active reordering of high-refractive-index plates of a protein called reflectin in iridophores lying in the skin under the more colorful chromatophores (the cells that produce the long-recognized intensity patterns). The chromatophores only weakly depolarize the polarization reflections from the iridophores (Mäthger and Hanlon, ), so the putative
signals can proceed while the cephalopod stays camouflaged. Unfortunately, these must be termed "putative" for now, although circumstantial evidence for their function as signals is strong. They are produced during mating behavior or aggressive encounters, they exist in animals with outstanding polarization vision, and they are actively modulated, in less than a second in many cases. Nevertheless, the crucial
Figure 8.19 Linear polarization patterns likely to be involved in signaling. (A) Polarized-light patterns in Heliconius cydno (left) and H. melpomene (right). Each animal’s left wing is illustrated in its natural color; the right wing is a polarization difference image for horizontal versus vertical polarization. Variations in polarization show up as brightness patterns, visible in H. cydno but not H. melpomene. (After Sweeney et al., 2003) (B) “Red” polarizers in a uropod of the mantis shrimp Busquilla plantei, viewed through horizontal (left) and vertical (right) polarizers (indicated by the double-headed arrows). Left image is lighter, showing strong reflection of broadband polarization. The red areas in right image indicate that longer-wavelength light has only weak polarization. (C) “Blue” polarizers in the first maxilliped of another mantis shrimp, Haptosquilla trispinosa, poking its anterior end from a natural burrow (same layout and symbols as in B). Note strong, horizontally polarized blue reflections, which are almost fully extinguished by the vertical polarizer. (Photographs in B and C by Roy Caldwell)
link—observing that the behavior of a signal receiver is modified in some appropriate way—is missing (Mäthger et al., ). Despite intensive study, the question of polarization communication in cephalopods remains a tantalizing possibility, one of those maddeningly unfinished stories in the biology of polarization vision. The kings (and queens) in the realm of polarization-signals-in-waiting are the mantis shrimps. These creatures, their eyes packed with polarization receptors (including some for viewing circular polarization), include species that are literally covered with patterns of linear, and often circular, polarization (figures .B, .C, and .B,C). The polarization is produced by at least three different optical mechanisms: scattering (producing "blue" polarizers; Cronin et al., ), absorption by dichroic molecules (producing "red" polarizers; Chiou et al., b), and (probably) dichroism mated to quarter-wave retarders (producing circular polarizers). Unlike in cephalopods, there is no evidence that the patterns are internally controlled, but mantis shrimps are enthusiastic about wiggling or waving their polarizers in front of potential mates or competitors (Cronin et al., ; Chiou et al., a,b), modulating the polarization signal. Once again, we have polarizers used actively and eyes that are highly capable of polarization analysis. This time we even have a behavior that is adaptively changed in the presence of polarization signaling; females of some species (including the one in figure .C) behaviorally prefer males with functional polarizers. But the door is still open a crack! Although destruction of the polarizing properties of the blue polarizers deters female interest in males (Chiou et al., a), the process unfortunately also extinguishes their beautiful blue color.
So we cannot yet be sure that it is the polarization that is compelling, and it is certainly possible that the animals visually interpret their conspecifics' signals as butterflies do leaves, combining polarization with color for the most effective communication. Thus, we conclude this chapter with a long list of partially answered questions, but obviously there is lots of room for new investigators and new ideas. Polarization vision seems exotic to our species because we have so little empathy with it, but most animals do not worry about the philosophical questions and go about their lives using whatever information they can gather from their sensory worlds. Polarized light is in that sense just another way of seeing familiar things. In another view, however, it is exotic because the physical processes that create it are unique, mostly being quite distinct from the pigmentary and structural mechanisms used in color production. Blue skies and bright waters are polarization phenomena accessible to many, many eyes but, unfortunately, not ours.
9 Vision in Attenuating Media

A young herring is swimming in a large school of other herrings in murky, green water somewhere off the western coast of Europe. Looking around, it is reassured by the presence of its fellow fish. The nearby herring are shiny, and our fish can see the details of each of their scales. However, the herring that are a meter away do not look quite so clear. They are still easily visible, but they do not look as shiny, and the details of their scales, even though they are still large enough to resolve, are too faint to see. The more distant fish also look greener, like the water behind them. The herring that are more than a meter away are just pale, green shadows, barely darker than the background. Our herring swims outside of the school for a moment and suddenly sees a large barracuda. Rather than first appearing as a small fish in the distance and growing larger as it approaches, the predator is suddenly only a meter away and at full size, as if it had decloaked. The last things the small herring sees are the crystal-clear details of the barracuda's white teeth.

Unless it is a foggy day, air is essentially invisible, with its only effects on vision occurring over distances of kilometers (Killinger et al., ). However, many species live in water, which can appear murky even over distances as short as a few centimeters (Mobley, ). Even in the clear waters of tropical reefs, light that is not blue is strongly absorbed by the water, and light of all wavelengths is scattered to some extent (figure ., upper). This has profound effects on vision that John Lythgoe called "some of the most incapacitating and intractable difficulties that an animal has to face" (Lythgoe, ). Most underwater animals and objects, unless they are small, fade into the background long before they become too small to see. How quickly this happens depends not only on the water itself but also on the direction in which the viewer is looking (Duntley, ).
The absorption and scattering of light by natural waters also make the underwater world darker than the terrestrial world. In truly murky waters one can go from day to night simply by descending a meter or two below the surface. In contrast, visible light decreases by only about % after traveling through the entire Earth's atmosphere. Nevertheless, even on land, visibility can be dramatically limited during periods of fog and heavy rain (figure ., lower). Therefore, it is important to understand vision through attenuating media, especially for visual ecologists studying aquatic species. Few areas of visual ecology are filled with more misconceptions, however. Common myths include (1) that visibility
Vision in Attenuating Media
Figure 9.1 (Top) Underwater photograph taken at the Great Barrier Reef off the coast of Australia. Note that close objects have higher contrast and a wider color range than more distant objects, which mostly appear blue. (Bottom) Fog on a pond in Durham, NC. Again, distant objects have less contrast, but they do not turn a particular color.
Chapter 9
is degraded entirely by light scattering, (2) that water is blue due to light scattering, and (3) that one can see farther underwater using yellow lens filters because they screen out the blue, scattered light. This chapter dispels these myths and explores how vision is affected by both water and fog. It requires a bit more math than is found in some of the other chapters, but little of aquatic visual ecology makes sense without understanding how water affects visibility.
First, Some Coefficients

Before we can discuss how vision works in attenuating media, we need to define four terms, all of which are coefficients found in exponents. As with the Four Horsemen of the Apocalypse, they are not always pleasant to deal with but nevertheless are a part of life. As light passes through an attenuating medium such as water or fog, some of it is absorbed, and some of it is scattered (figure .). First, let us imagine a case in which light is absorbed far more than it is scattered: a good example would be a beam of red light in clear oceanic water. Because the absorption of this light by the water molecules occurs randomly, the radiance of the beam diminishes exponentially (Duntley, ):

L(d) = L₀ e^(−ad)    (9.1)
where L₀ is the initial radiance of the beam, d is the distance the beam has traveled through the water, and a is called the absorption coefficient (note that the inverse-square law of decreasing radiance is valid only for point sources such as stars and small bioluminescent events: see Johnsen, ). However, many media also scatter light. If the scattering is moderate, then the radiance of the beam is:

L(d) = L₀ e^(−(a + b)d)    (9.2)
where b is called the scattering coefficient (Duntley, ). What we mean by “scattering is moderate” is that the light is scattered at most once or twice before reaching the viewer’s eye. Whether this is true or not depends on how much scattering there is relative to absorption and how far the beam travels. Fortunately, many waters are clear enough and viewing distances are short enough that multiple scattering often can safely be ignored. However, if large fuzzy halos surround objects viewed in the habitat, then multiple scattering is significant. This occurs in
Figure 9.2 A beam of light is both absorbed and scattered as it passes through an attenuating medium such as water or fog. Only a portion of the beam continues unhindered.
murky waters and fog and when viewing bright lights at long distances in even clear water. Multiple scattering is a more complicated situation that is discussed at the end of this chapter. Until then, we assume that scattering is moderate and that equation 9.2 is accurate. Because the attenuation of radiance depends on the sum of a and b and not their individual contributions, people often use a third coefficient c, which is the sum of a and b. This is known as either the beam attenuation coefficient or the extinction coefficient. We use the former term because it is more commonly found in the aquatic literature. The inverse of c is referred to as the attenuation length and is a good measure of how clear the medium is. After a light beam travels one attenuation length, its radiance is 37% of what it was before (0.37 ≈ 1/e, where e is the base of the natural logarithm). As a general rule of thumb, a large, black object can just barely be seen from about 4.8 attenuation lengths away in brightly lit situations (Zaneveld and Pegau, ). For example, the beam attenuation coefficient for blue-green light in the open ocean is about  m⁻¹. This translates to an attenuation length of  m, meaning that a large pile of black rocks can be seen from about  m away in daylight by humans and animals with good contrast vision (how good is described later). The coefficients a, b, and c all deal with the attenuation of radiance, but the final one we need deals with irradiance. As we discussed in chapter , irradiance is defined in a couple of ways, but visual ecologists usually are interested in the number of photons from all directions that strike a surface (such as the cornea of the eye). Irradiance also decreases as one moves away from the light source in an attenuating medium, the archetypal example being a descent into the ocean on a sunny day. Conveniently enough, this irradiance E also decreases exponentially with depth:

E(d) = E₀ e^(−Kd)    (9.3)
where Eo is the initial radiance and K is called the diffuse attenuation coefficient (Mobley, ). The diffuse attenuation coefficient is different from the others because it depends on more than just the clarity of the medium, being affected by the position of the light source(s), the extent of the medium, and the presence of any reflecting or absorbing surfaces. For example, the diffuse attenuation coefficient of downwelling irradiance in the ocean is different at noon than it is at sunset (Mobley, ). The difference is not usually large though, and most databases of this coefficient are corrected so that they assume that the sun is high in the sky. These four coefficients (a, b, c = a + b, and K) are the primary parameters that determine how visible an animal or object is when viewed in an attenuating medium. However, it is critical to realize that all four can be wavelength dependent. In the case of fog, absorption is essentially nonexistent, and scattering is high and roughly equal at all wavelengths. This is why fog looks white. In the far more commonly encountered case of water, though, wavelength is critical. The primary reason for this is that absorption of light by water itself (and also by chlorophyll and dissolved organic matter) is strongly wavelength dependent (figure .). This wavelength dependence of absorption contributes to the wavelength dependence of both the beam attenuation coefficient c and the diffuse attenuation coefficient K. The scattering coefficient, however, depends less on wavelength than is commonly assumed. For example, in oceanic water the scattering coefficient at nm is less than double what it is at nm (figure .). This is far different from the inverse-fourth-power relationship of Rayleigh scattering that people often assume exists in the ocean, which would demand that the
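This bookkeeping is easy to sketch in code. A minimal Python example (the value of c is illustrative, not a measurement) computes an attenuation length and the fraction of beam radiance remaining after a given path:

```python
import math

def attenuation_length(c):
    """Attenuation length (m) is the inverse of the beam attenuation coefficient c (1/m)."""
    return 1.0 / c

def beam_radiance(L0, c, d):
    """Radiance of a collimated beam after traveling distance d (m): L(d) = L0 * exp(-c * d)."""
    return L0 * math.exp(-c * d)

# Clear oceanic water at blue-green wavelengths: c is roughly 0.1 per meter (illustrative).
c = 0.1
print(attenuation_length(c))        # 10 m per attenuation length
print(beam_radiance(1.0, c, 10.0))  # ~0.37: 1/e of the initial radiance remains
```

The same two lines of arithmetic underlie every visibility estimate later in the chapter.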
Chapter 9
Figure 9.3 Scattering and beam attenuation coefficients at the surface and at 100 m depth in clear oceanic water (wavelength, 400–700 nm, versus scattering or extinction coefficient, 0–0.5 m⁻¹). Note that, at both depths, the primary factor affecting total attenuation above 550 nm is absorption and that the scattering coefficients are relatively independent of wavelength. Also note that water at greater depths tends to be clearer due to a lower concentration of phytoplankton, particulates, and dissolved organic matter.
scattering be over nine times as great at 400 nm as at 700 nm. The only water that actually scatters light in this highly wavelength-dependent fashion is extraordinarily pure distilled water, which is found only in certain laboratories. Therefore, the reason that the ocean is blue (and that the veiling light between an underwater object and the viewer is also blue) is that blue light is absorbed the least, not that it is scattered the most. Connected with all this is the fact that photoreceptors do not operate as spectrometers, sampling one wavelength at a time, but instead collect photons over a wide spectral range, weighted by a visual sensitivity function (see chapter 3). Therefore, the correct form of equation 9.1 (for example) for an actual visual system is the following integral:

Q(d) = ∫ (from λ_min to λ_max) L_o(λ) e^(−c(λ)d) S(λ) dλ    (9.3)
where Q is the quantum catch and S(λ) is the spectral sensitivity of the visual channel (including corrections for the optics of the eye, etc.). Equation 9.3 is more complex than equation 9.1, and the added information can obscure the basic structure, so for the rest of this chapter we use the simpler formulation, with the underlying assumption that all terms are wavelength dependent and that weighted integrals may be needed to evaluate the situation for vision purposes. One thing to keep in mind,
however, is that the integral of a number of exponential functions is almost never an exponential function. Therefore, although the radiance attenuates exponentially at each wavelength, the quantum catch—which is integrated over many wavelengths— does not attenuate exponentially. In fairly monochromatic environments such as the ocean, the attenuation will be close to exponential, but this is not guaranteed.
Path Radiance

Now that we have discussed the relevant coefficients and the exponential nature of light attenuation, we can examine what happens when something is viewed through an attenuating medium. Suppose a mackerel is being viewed horizontally by a shark at a distance d against a water background in a pelagic environment (figure 9.4). The radiance in the direction of the mackerel (as seen by the shark) is composed of two parts: (1) the radiance of the mackerel itself and (2) the radiance of the light scattered into the path between the mackerel and the shark (referred to hereafter as path radiance). As the shark moves away from the mackerel, the viewed radiance of the latter decreases and the path radiance increases, both exponentially. In other words:

L(d) = L_o e^(−cd) + L_b(1 − e^(−cd))    (9.4)
where L_o is the inherent radiance of the mackerel (i.e., its radiance at zero viewing distance) and L_b is the background radiance of the water (Duntley, 1963; Mertens, 1970). As can be seen from this equation, as d increases, the path radiance eventually dominates the total radiance, and the mackerel becomes indistinguishable from the background light and is invisible to the shark. Because the filling in of path radiance is entirely due to light scattering, it may seem odd that the coefficient in the second half of the equation is also the beam attenuation coefficient c and not the scattering coefficient b. This can be proven rigorously, but a simple argument shows that the exponent must be c. Suppose that the mackerel has an inherent radiance that matches the background radiance, making it invisible at close range (some well-camouflaged fish come close to this). If the shark were then to move away from the mackerel, the latter would still be invisible (nothing invisible at one wavelength ever becomes visible at the same wavelength as you move away from it). Given the form of equation 9.4, the only way for this to happen is if the path radiance increases at exactly the same rate as the mackerel radiance decreases, which can only happen if the exponents match. By definition, the first exponent is c, so they both have to be c. Of course, viewing may not always be horizontal. The shark could instead be below, above, or at any angle relative to our mackerel. In that case the background radiance is no longer constant as the shark moves away from the mackerel. Instead, it also decreases exponentially with its own coefficient K_L:

L_b(d) = L_b(0) e^(−K_L d cos θ)    (9.5)
where L_b(0) is the background radiance when the viewer is right next to the fish and θ is the viewing angle (0° is looking up, 180° is looking down). The cos θ term is there because radiance attenuation coefficients of this sort are usually defined relative to increasing depth, and d cos θ denotes how far down (or up) you have gone as you
Figure 9.4 (A) Viewing a mackerel (Grammatorcynus bicarinatus) in water. White lines represent the radiance of the mackerel, which is reduced by both scattering and absorption; red lines represent the path radiance, which consists of light scattered into the path between the viewer and the fish. With increasing distance, the path radiance dominates the total radiance and eventually matches the background radiance. (B) The object, path, and total radiance for an object that is twice as bright as the background viewed horizontally in oceanic water at 480 nm (distance, 0–30 m, versus radiance in arbitrary units with background radiance = 1).
move away by a distance d. You can get the attenuation coefficient K_L from measurements or optical models, but once you get to depths where the shape of the light field is relatively constant (see chapter 2), you can replace this number for any viewing angle with the diffuse attenuation coefficient K, for which enormous databases exist (e.g., the Worldwide Ocean Optics Database). Combining equations 9.4 and 9.5, and
again using the argument that an invisible object at zero viewing distance must stay invisible at all longer distances, we get:

L(d) = L_o e^(−cd) + L_b(0)(e^(−K_L d cos θ) − e^(−cd))    (9.6)

(Duntley, 1963). If instead you want to use the background radiance as the viewer sees it, at its position away from the fish—L_b(d) instead of L_b(0)—then the correct equation is:

L(d) = L_o e^(−cd) + L_b(d)(1 − e^(−(c − K_L cos θ)d))    (9.7)
The two equations are equivalent, but for simplicity's sake we assume that the background radiance is always the radiance viewed by the observer—L_b = L_b(d)—for the rest of the chapter. Equations 9.6 and 9.7 look worse than equation 9.4, but all they really do is add a term that accounts for the change in background radiance as the viewer changes depth. Remember, however, that all these terms are wavelength dependent and that integrals must be performed if one wants to examine anything from the point of view of a real visual system. This is especially important for upward viewing, because then the background radiance depends on c − K_L (because the cosine of 0° is 1), which varies in odd ways with wavelength, as we show later. Equation 9.7 is the fundamental equation of vision in attenuating media, from which everything else follows. If you remember only one equation from this chapter, remember this one.
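For horizontal viewing, the interplay of attenuated object radiance and growing path radiance can be sketched in a few lines of Python (the radiances and the coefficient are illustrative, not measurements):

```python
import math

def viewed_radiance(L_obj, L_bg, c, d):
    """Horizontal viewing: L(d) = L_obj * exp(-c*d) + L_bg * (1 - exp(-c*d)).
    First term: attenuated object radiance. Second term: path radiance."""
    t = math.exp(-c * d)
    return L_obj * t + L_bg * (1.0 - t)

# Illustrative numbers: object twice as bright as the background, c = 0.1 per m.
L_obj, L_bg, c = 2.0, 1.0, 0.1
for d in (0, 10, 30, 100):
    print(d, viewed_radiance(L_obj, L_bg, c, d))
# The viewed radiance slides from 2.0 toward the background value of 1.0.
# An object whose inherent radiance equals the background stays at the
# background radiance at every distance -- it remains invisible.
print(viewed_radiance(1.0, 1.0, c, 50.0))
```

The last line is the numerical version of the camouflaged-mackerel argument: when inherent and background radiance match, the loss of object radiance and the gain of path radiance cancel exactly at every distance.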
Contrast

As discussed at the beginning of the chapter, the visibilities of many objects in attenuating media depend more on their contrast than on their size. This is particularly true in aquatic habitats, where even small objects and animals often fade from view before they become too small to resolve. Thus, contrast is central to understanding visibility in many species. One of the first considerations in any analysis of visibility is whether to use Weber or Michelson contrast. Going back to our fish from the previous section, if L(d) is its radiance viewed from a distance d, and L_b(d) is the radiance of the background (as seen by the viewer), then the fish's Weber contrast (Mertens, 1970) is:

C_W(d) = [L(d) − L_b(d)] / L_b(d)    (9.8)
Similarly, if L_1(d) and L_2(d) are the radiances of two parts of an object (e.g., light and dark stripes on our fish), then the Michelson contrast of this pattern (Mertens, 1970) is:

C_M(d) = [L_1(d) − L_2(d)] / [L_1(d) + L_2(d)]    (9.9)
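The two definitions are trivial to compute; a short sketch:

```python
def weber_contrast(L_obj, L_bg):
    """Weber contrast of an object against a large background: (L - Lb) / Lb.
    Ranges from -1 (black object) to infinity (luminous object on black)."""
    return (L_obj - L_bg) / L_bg

def michelson_contrast(L1, L2):
    """Michelson contrast of a two-part pattern: (L1 - L2) / (L1 + L2).
    Ranges from -1 to 1."""
    return (L1 - L2) / (L1 + L2)

print(weber_contrast(2.0, 1.0))      # 1.0: object twice as bright as background
print(weber_contrast(0.0, 1.0))      # -1.0: black object
print(michelson_contrast(1.0, 0.5))  # 1/3: moderate stripe contrast
```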
One should use Weber contrast when dealing with an object on a large background and Michelson contrast when dealing with patterns within an object. Weber contrast goes from −1 for a black object on any nonblack background to infinity for a
luminous object on a black background. Michelson contrast goes from −1 to 1 for the same conditions. One can of course call the object radiance L_1(d) and the background radiance L_2(d) and use the Michelson formulation for the contrast of objects on large backgrounds as well, but we advise against it. The reason is that the Weber formulation for the decrease of contrast as a function of viewer distance is the simple exponential equation:

C_W(d) = C_W(0) e^(−cd), or C_W(0) e^(−(c − K_L cos θ)d) for nonhorizontal viewing    (9.10)
Compare this to the unwieldy equation for the same situation using the Michelson contrast formulation:

C_M(d) = C_M(0) e^(−cd) / [1 − C_M(0)(1 − e^(−cd))]    (9.11)
Therefore, we use the Weber contrast formulation whenever we are talking about an object on a large background, such as a fish in the ocean or a deer in the fog. Sadly, the attenuation of Michelson contrast in its correct usage (for patterns within objects) depends on the background radiance as well and is the supremely ugly:

C_M(d) = C_M(0) [1 + 2L_b(e^(cd) − 1) / (L_1(0) + L_2(0))]^(−1)    (9.12)
This is only for horizontal viewing, where the background radiance is constant. For nonhorizontal viewing this last equation is even worse, so pray that you never have to study the contrast of fish stripes viewed at an angle. These last two equations are included only for completeness, and we do not return to them again.

Instead, let us go back to equation 9.10. It encapsulates much of what is needed to understand visibility in water and other attenuating media. In essence, to determine the contrast of an object at a distance at one wavelength, you do not need to know any of the radiances. You only need to know the contrast at zero distance and one (or at most two) attenuation coefficients. The contrast at zero distance is usually called the inherent contrast, and we refer to it hereafter as C_o for clarity. Remember, however, that to do this correctly for an actual visual system, you need to replace the background and object radiances in equation 9.8 with actual quantum catches. Although this can in theory make everything messier, fortunately visual ecologists are usually looking at two types of attenuating media: fog and water. The attenuation in fog is almost entirely due to scattering and is approximately wavelength independent (see later in this chapter), so the contrast is attenuated at nearly the same rate at every wavelength. In water, attenuation is highly wavelength dependent, but for this exact reason the underwater world tends to be nearly monochromatic. It has been shown by numerous researchers (e.g., Zaneveld and Pegau, 2003) that if you use the radiances and the attenuation coefficient at the peak wavelength of the underwater light (e.g., 480 nm for the open ocean), you will usually get close to the right answer for contrast and its attenuation, as long as the spectral sensitivity of the viewer also peaks at approximately the same wavelength. That is:

C(λ_peak, d) ≈ [Q_o(d) − Q_b(d)] / Q_b(d)    (9.13)
where λ_peak is the peak wavelength of the underwater light, and Q_o and Q_b are the quantum catches of the object and background as seen by the viewer. All bets are off, though, if the viewer has a spectral sensitivity that peaks far from the peak wavelength of the underwater light. Then some truly strange things can happen that are not possible at a single wavelength, such as the contrast going from positive to negative as the viewing distance increases (with the object becoming briefly invisible as the contrast changes sign). Even if the peak spectral sensitivity of the viewer matches that of the underwater light, you can occasionally be unlucky. For example, it is possible for a fish to have a high contrast relative to the background at 480 nm but actually be invisible to a visual system that also peaks at 480 nm if the spectral radiances of the fish and the background are just right. So use this approximation with caution. Another important use of quantum catches is for the calculation of chromatic contrasts. For simplicity's sake, we have confined ourselves to achromatic contrast in this chapter, but the effects of attenuating media on color are equally important, especially in environments that still have illumination over a broad range of wavelengths (e.g., fog, near-surface clear waters). Color contrasts are modeled in a number of ways (see chapter 7), but they all ultimately require quantum catches. As a general rule, and as might be expected, the colors of objects viewed in attenuating media approach the color of the background radiance as viewing distance increases (figure 9.5; see also the washed-out colors in figure 9.12); thus, all the same processes that we describe for achromatic vision are still at work. If we go back to achromatic contrast at one wavelength and also restrict ourselves to horizontal viewing, equation 9.10 shows us that the attenuation of achromatic contrast at one wavelength depends entirely on the beam attenuation coefficient c.
Although c is the sum of the absorption and scattering coefficients a and b, the attenuation of contrast is not affected by how much of that sum is due to each component. Therefore, two bodies of water that have the same c will attenuate contrast at the same rate, even if one is highly scattering and weakly absorbing and the other is the opposite. More importantly, this also means that contrast at certain wavelengths can attenuate more quickly than at others, even if scattering is lower at those wavelengths. As we showed in the section on coefficients, the increase in scattering at shorter wavelengths in real water is actually less than the inverse-fourth-power law that people often assume. An even bigger issue, however, is that although red light is scattered less than blue light, it is absorbed far more. Thus, contrast is attenuated more rapidly at red wavelengths than at blue ones, which has critical implications for how we think about yellow filters in the eyes of aquatic animals, a topic we will come to soon. The other implication of equation 9.10 is that how quickly the contrast of an object attenuates depends on where the viewer is relative to it. If the viewer is looking horizontally, then the coefficient is just the beam attenuation coefficient c, but if the viewer is below the object and looking straight up at it, then the coefficient of attenuation is c − K_L. For reasons we will not go into, the attenuation coefficient of background radiance K_L is always less than c, so c − K_L is always positive. This is a relief, because otherwise objects viewed from below would become more visible as you moved away from them! However, c − K_L is always less than c, so contrast attenuates more slowly in the view from below. For example, consider a tuna that is below a haddock, slowly sinking while looking up at it. As the tuna descends and moves away from the haddock, the direct light from the haddock is lost to scattering and
Figure 9.5 The GretagMacbeth Colorchecker modeled to show how it would appear if viewed horizontally in clear oceanic water at a depth of 5 m and at viewing distances of 2, 4, 8, 16, and 32 m. Note that achromatic contrasts decrease and that all the hues approach the hue of the background water.
absorption. However, because the tuna is getting deeper, the background radiance that it sees is also decreasing, and so the contrast is preserved. The background radiance decreases more slowly than the radiance from the object, so the contrast does eventually go to zero, but it can take a while getting there, especially in clear water where c and KL are similar in magnitude. If the tuna were above the haddock and looking straight down at it, then the attenuation coefficient would be c + KL, which can be almost double the value of c alone. So, the contrast of objects viewed from above decreases rapidly. The central question, though, is how far away can an object be seen? To understand this, we first have to understand contrast thresholds.
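This directional dependence can be sketched numerically. In the toy example below (all coefficient values are illustrative assumptions, not measurements), Weber contrast is propagated with the coefficient c − K_L cos θ, where θ = 0° is looking up:

```python
import math

def contrast_at(C0, d, c, K_L, theta_deg):
    """Weber contrast after viewing distance d (m). theta = 0 deg is looking up,
    90 deg is horizontal, 180 deg is looking down; the attenuation coefficient
    is c - K_L * cos(theta)."""
    coeff = c - K_L * math.cos(math.radians(theta_deg))
    return C0 * math.exp(-coeff * d)

# Illustrative clear-ocean values at a blue-green wavelength (assumptions).
c, K_L, C0 = 0.10, 0.04, 0.5
for theta in (0, 90, 180):  # up, horizontal, down
    print(theta, contrast_at(C0, 20.0, c, K_L, theta))
# Looking up (coefficient c - K_L) preserves contrast longest; looking down
# (c + K_L) loses it fastest, as in the tuna-and-haddock example above.
```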
Contrast Thresholds

As mentioned above, as a viewer moves away from an object (or an object moves away from the viewer), the radiance of the object decreases to zero, and the path radiance increases to match that of the background. Thus, at some distance, the viewer will
Figure 9.6 The minimum detectable radiance difference as a function of the adapting illumination in humans (both in lumens/m²/sr; scotopic, mesopic, and photopic ranges indicated). Note that for high light levels (upper dashed curve fit) it is proportional to the illumination (ΔL ∝ L), and for low light levels (lower dashed curve fit) it is roughly proportional to the square root of the illumination (ΔL ∝ √L). (Data from Blackwell, 1946)
no longer be able to separate the object from the background. The question then is whether the limiting factor is the contrast of the object against the background or the difference between the radiance of the object and the radiance of the background. It turns out that which factor matters depends on the level of the adapting illumination. For "bright" light levels, which for most diurnal terrestrial animals means moderate room light and above, the minimum detectable radiance difference between an object and the background is linearly proportional to the background radiance (figure 9.6). So, for example, if a songbird can just see an aphid on a green stem that is a given percentage darker than the stem when standing outside in daylight, then it can also just see the same aphid if you take the aphid and the stem it is sitting on indoors, even though the radiance difference in the latter case is a couple of orders of magnitude lower than in the former. This is known as the Weber-Fechner law and is the reason why contrast is a more useful measure of visibility than radiance difference. However, like all biological laws, it has limited jurisdiction. At lower light levels the minimum detectable radiance difference is roughly proportional to the square root of the adapting illumination due to photon noise (see chapter 11). Therefore, the minimum detectable contrast is not constant but instead is approximately inversely proportional to the square root of the adapting illumination. However, because contrast is such a useful term, visual ecologists have stuck with it for all light levels, with the caveat that the minimum detectable contrast (known hereafter as the contrast threshold) increases at lower light levels as roughly the inverse square root of the adapting illumination (figure 9.7). Unfortunately, the contrast threshold depends on more than just the light level. Another major factor that affects the threshold is spatial frequency—in other words,
Figure 9.7 The contrast threshold (1 = 100%) as a function of the adapting illumination (lumens/m²/sr) in humans. Note that for high light levels it is constant, and for low light levels it is roughly inversely proportional to the square root of the illumination. (Data from Blackwell, 1946)
the fineness of the detail in the viewed image. Most visual systems appear to detect contrast best at a certain spatial frequency, with the contrast threshold rising at both higher and lower frequencies (figure 9.8). The increase in threshold at higher frequencies is thought to be due to optical imperfections and diffraction in the visual system, whereas the increase at lower frequencies is thought to be the result of lateral inhibition in the retina. Note from figure 9.8 that there is a surprising amount of interspecific variation, not all of which is readily explainable. For example, the best contrast threshold in pigeons is about 10%, which is far worse than the 1–2% found in primates and cats. This relatively poor contrast performance has also been found in other birds (e.g., Lind et al., 2012; Lind and Kelber, 2011) and is perplexing given the otherwise excellent performance of avian visual systems. Interestingly, though, a fivefold reduction in the gray levels of a typical indoor scene is barely noticeable (figure 9.9), suggesting that high contrast sensitivity may not be crucial for diurnal terrestrial animals, where scenes generally have high contrast. The contrast threshold can also be affected by the polarity of the contrast, although this appears to have been studied only in humans, where Adrian (1989) showed that negative contrast (a dark object on a lighter background) is more easily perceived than positive contrast. The difference is not large under bright illumination but can be as much as a threefold decrease in contrast threshold for small objects viewed under dim illumination. Contrast threshold is also affected by the size of the object, but only when objects are small—close to or below the resolution limit of the visual system once spatial summation is factored in (reviewed by Adrian, 1989). Finally, contrast threshold depends on how it is measured. A number of different techniques have
Figure 9.8 The contrast threshold (1 = 100%) as a function of spatial frequency for a number of vertebrate species (human, cat, owl monkey, goldfish, pigeon, hooded rat, and albino rat). Note that although the frequency and value of the lowest contrast threshold vary greatly among species, all the curves (with the exception of the albino rat Rattus norvegicus) have approximately the same shape. (Data from Ulrich et al., 1981)
Figure 9.9 Historical room in Bäckaskog castle in Kristianstad, Sweden. (Left) Image using the full 255 gray levels typical of digital images. (Right) The same image reduced to 50 gray levels to simulate the fivefold increase in contrast threshold of birds relative to humans and certain primates and fish. Note that the two images are essentially indistinguishable.
been used, all of which give slightly different thresholds (reviewed by Douglas and Hawryshyn, 1990). So, how does one pick a contrast threshold for a given viewer, object, and lighting conditions? In the ideal world one would have data on all the factors described above, but this is rare unless your viewer is a primate, cat, goldfish, or possibly a honeybee.
Lacking that, one is left guessing. It appears that fish with well-developed eyes looking at other fish-sized objects from ecologically relevant distances in bright light have contrast thresholds on the order of 1–2% (reviewed by Douglas and Hawryshyn, 1990), so this number is often used in studies of aquatic visibility. One might suspect that all animals with well-developed eyes have similar contrast thresholds in bright light, but as we saw above, birds do significantly worse. Unfortunately, the choice of contrast threshold does matter, especially for viewing low-contrast objects, so one cannot be too casual about it. It is our hope that more data will be collected in the future on this crucial aspect of animal visual systems.
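In practice, modelers often encode the two regimes described above as a simple rule: a constant threshold in bright light and an inverse-square-root rise in dim light. The toy function below does exactly that; the 2% bright-light value (a figure often used for fishes) and the knee luminance are illustrative choices, not measured data:

```python
import math

def contrast_threshold(L_adapt, C_bright=0.02, L_knee=1.0):
    """Toy two-regime contrast threshold: constant (Weber-Fechner regime) above
    an adapting luminance L_knee, rising as the inverse square root of the
    adapting luminance below it (photon-noise regime). All parameter values
    are illustrative assumptions."""
    if L_adapt >= L_knee:
        return C_bright
    return C_bright * math.sqrt(L_knee / L_adapt)

print(contrast_threshold(100.0))  # photopic: 0.02 (2%)
print(contrast_threshold(1e-4))   # dim light: roughly 2.0, i.e., only very
                                  # high-contrast objects remain detectable
```

A 10,000-fold drop in adapting luminance raises the threshold 100-fold in this model, which captures the qualitative shape of figure 9.7 without claiming to fit any particular species.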
Sighting Distance

Assuming that you do have a contrast threshold that you can trust, it is now possible to calculate the maximum distance at which something can be detected in an attenuating medium. This distance is, of course, extremely useful for understanding aquatic predator-prey and intraspecific interactions, as well as possibly for short-distance navigation, so we will explore it carefully. From earlier in the chapter, we know that contrast at one wavelength attenuates exponentially in the following way:

C(d) = C_o e^(−(c − K_L cos θ)d)    (9.14)

where C_o is the inherent (initial) contrast. To determine the maximum distance at which the object is still detectable, we substitute the contrast threshold C_min for C(d) and solve for d. This gives us:

d_sighting = ln(|C_o| / C_min) · 1/(c − K_L cos θ)    (9.15)
Thus, this distance (which we refer to hereafter as the sighting distance) is the product of two factors. The first is the natural logarithm of the ratio of the absolute value of the initial contrast to the contrast threshold (it has to be the absolute value, because you cannot take the logarithm of a negative number). The second factor, 1/(c − K_L cos θ), is determined by the clarity of the water and the direction the viewer is looking. In the case of horizontal viewing, equation 9.15 reduces to:

d_sighting = ln(|C_o| / C_min) · 1/c    (9.16)
Because the first factor is logarithmic, it is harder to change than the second. In other words, doubling the inherent contrast of a fish may not change its sighting distance much (figure 9.10), but doubling the clarity of the water that it is swimming in doubles the sighting distance. The second factor deserves more scrutiny because it affects whether a viewer can use filters to see farther in attenuating media such as water. Even in relatively well-behaved media such as oceanic water, this factor acts in unusual ways (figure 9.11). First of all, as we discussed above, it is much larger when looking up than when
Figure 9.10 Sighting distances depend on the product of two factors. The first factor, plotted here as a function of inherent contrast (C_o) and the contrast threshold of the viewer (C_min), is the natural logarithm of the ratio of the inherent contrast to the contrast threshold and is thus dimensionless (contour bands from 0–1 up to 5–6). This graph shows how changing both C_o and C_min affects this first factor and thus sighting distance. Notice that even doubling the inherent contrast has a relatively small effect.
looking down. Second, the factor for upward looking in the ocean is greatest in the green portion of the spectrum even though the underwater illumination in oceanic water peaks in the blue portion. This occurs because—in this case at least—the difference between c and K_L is minimal in this portion of the spectrum (remember that upward looking depends on c − K_L). Third, the factor is uniformly low in the red and orange portions of the spectrum even though scattering is lowest at these wavelengths. This again shows that visibility is often determined more by light absorption than by light scattering. We discuss the implications of this later. However, as usual, we need to finish by mentioning that—to be accurate for vision—you need to integrate over the spectral sensitivity function of the viewer. As we said before, one can often get away with just using the radiances and coefficients for the peak wavelength, but in situations where one cannot (such as for photoreceptors that are sensitive to light far from the peak wavelength of the background light), it is not possible to invert the contrast attenuation equation to get sighting distance. Instead, one must calculate contrast using quantum catches and determine by trial and error when it falls below the contrast threshold of the viewer. There is no closed analytical solution.
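Both routes can be sketched in code: the closed-form sighting distance for a single wavelength, and a bisection search for the quantum-catch case where no closed form exists. The search below assumes the contrast magnitude declines monotonically with distance, and all numerical values are illustrative:

```python
import math

def sighting_distance(C0, C_min, c, K_L=0.0, cos_theta=0.0):
    """Closed-form sighting distance at a single wavelength:
    d = ln(|C0| / C_min) / (c - K_L * cos_theta). Horizontal: cos_theta = 0."""
    return math.log(abs(C0) / C_min) / (c - K_L * cos_theta)

def sighting_distance_numeric(contrast_fn, C_min, d_max=1000.0, tol=1e-6):
    """When contrast must be computed from quantum catches there is no closed
    form; bisect for the distance at which |contrast| falls to the threshold.
    Assumes |contrast_fn(d)| decreases monotonically and crosses C_min."""
    lo, hi = 0.0, d_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if abs(contrast_fn(mid)) > C_min:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check: for a single wavelength the two methods agree.
C0, C_min, c = 0.5, 0.02, 0.1
d1 = sighting_distance(C0, C_min, c)
d2 = sighting_distance_numeric(lambda d: C0 * math.exp(-c * d), C_min)
print(d1, d2)  # both about ln(25)/0.1, roughly 32.2 m
```

For a real visual system, `contrast_fn` would wrap the quantum-catch calculation (spectral integration at each candidate distance), which is precisely the trial-and-error procedure described above.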
Figure 9.11 The second factor affecting sighting distance, given here in meters (numbers on the plot) as a function of wavelength (400–700 nm) and viewing angle (θ; 0° is looking up) for oceanic water at 100-m depth.
Seeing Farther Underwater: Can Color Filters Help?

It should be obvious by now that the effects that water and fog have on light propagation greatly limit visibility. Although it is easy to see a mountain many kilometers away in clear air, the same mountain can be invisible at a distance of a few hundred meters in fog or water, even accounting for its vastly increased angular size. Even in a clear coral reef habitat, a massive black outcropping cannot usually be seen from more than a few tens of meters away (Zaneveld and Pegau, 2003). We can say from experience with both SCUBA and submersible work that it is disconcerting to see entire landscapes appear as if out of nowhere as one moves, even in relatively clear water. Therefore, one might assume that selection favors animals that have evolved some method of seeing farther through attenuating media. One obvious possibility is to lower the contrast threshold. This is theoretically possible because the signal-to-noise ratio that ultimately limits contrast sensitivity is proportional to the square root of the quantum catch (see chapter 11). However, as mentioned above and for poorly understood reasons, contrast thresholds do not seem to get much lower than about 1%, even in perfect lighting conditions. Perhaps we
Figure 9.12 The inherent contrast of an underwater object is greatly affected by wavelength. Here, a green turtle photographed in the blue-green waters of the Great Barrier Reef is viewed in color and in the three individual channels. Note that the contrast of the turtle against the background is far higher in the red channel.
will discover animals with substantially lower thresholds, but this has not yet been documented. The second possibility is to boost inherent contrast and reduce contrast attenuation using color filters in the eye. It is well known that many fish have compounds in their lenses that filter out the shorter visible wavelengths, and it has been suggested that these yellow lenses might allow the fish to see farther in the water (reviewed by Lythgoe, ). This hypothesis is tempting because the background light in the ocean is blue, as is most of the path radiance. Indeed, the contrast of a “gray” underwater object (i.e., one that reflects all wavelengths equally) is much higher at longer wavelengths (figure .). For example, if you look down at a gray object that is at m depth in the ocean, its inherent contrast at nm is over seven times its contrast at nm. Unfortunately for animals wanting to see farther underwater, this boost in inherent contrast at longer wavelengths is more than offset by a substantial increase in contrast attenuation at these same wavelengths. So putting on a pair of yellow filters underwater will definitely make everything more contrasty (in fact, it is quite impressive), but that contrast will fade away even more quickly. Putting it all together (figure .) shows that—with the exception of looking upward at depth—sighting distance is longest at the wavelengths of maximal transmission in the habitat. The primary reason for this lack of increased visibility at longer wavelengths despite an extreme increase in inherent contrast goes back to the fact that sighting distance is determined
Figure 9.13 Sighting distances for a large object that diffusely reflects 50% of the light that strikes it, as a function of depth, wavelength, and viewing angle: (A) looking up, (B) looking horizontally, (C) looking down. Curves are plotted for depths of 0, 10, 20, 40, 60, and 100 m across wavelengths of 400–700 nm. The viewer is assumed to have a contrast threshold of 2% for all wavelengths and depths.
by the logarithm of inherent contrast. So even a big boost in inherent contrast at longer wavelengths does not help things much. The situation is even worse than figure . shows because that figure assumes that the contrast threshold is a constant % for all wavelengths and at all depths. Given how rapidly longer wavelengths attenuate as depth increases, the contrast thresholds will go up at these wavelengths, making yellow filters even less useful. Another contrast-enhancing function that has been suggested for the yellow lenses in animals is to reduce intraocular light scattering, that is, scattering by the cornea, lens, and other intraocular media. The assumption behind this hypothesis is that intraocular light scattering is primarily due to Rayleigh scattering and thus is far higher
at short wavelengths. Careful spectral measurements of intraocular light scattering have been performed only on humans, rabbits, and a few other vertebrate model systems, and these have shown that the scattering is biased strongly toward short wavelengths only when the total scattering is very low (e.g., Coppens et al., ). When scattering becomes large enough to degrade vision, it is approximately wavelength independent. This is similar to what we discussed earlier in regard to water. The purest water does scatter blue light far more than red, but it scatters light of either color hardly at all. This is because Rayleigh scattering is an extremely weak process and requires a tremendous number of scatterers to appreciably change the color of the light (Bohren and Huffman, ). In fact one of the few places in nature that you will actually observe color due to Rayleigh scattering is in the blue sky. Even in this case the skylight becomes whiter nearer the horizon due to multiple scattering (Bohren and Clothiaux, ). Thus, it is unwise to invoke Rayleigh scattering and its associated inverse-fourth-power law in biological matters. The wavelength dependence of scattering in situations where scattering actually matters is usually minor. In the case of intraocular scattering, the idea of a blue-biased scattering was likely boosted by the fact that human lenses absorb more and more short-wavelength light as they age, which reduces sensitivity and thus contrast at these wavelengths. Again, it is absorption that matters more than scattering.
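The logarithmic dependence of sighting distance on inherent contrast is easy to see in numbers. In this sketch the apparent contrast decays as C(d) = C₀·exp(−A·d), so the sighting distance at threshold Cth is d = ln(|C₀|/Cth)/A: logarithmic in the inherent contrast but linear in 1/A. The contrasts and attenuation coefficients below are illustrative assumptions, not measurements.

```python
# Hedged sketch: why a sevenfold boost in inherent contrast at long
# wavelengths is swamped by the faster contrast attenuation there.
import math

def sighting_distance(c0, attenuation, threshold=0.02):
    """d = ln(|C0| / Cth) / A for exponentially decaying contrast."""
    return math.log(abs(c0) / threshold) / attenuation

blue = sighting_distance(c0=0.1, attenuation=0.06)  # low contrast, slow attenuation (assumed)
red  = sighting_distance(c0=0.7, attenuation=0.60)  # 7x the contrast, fast attenuation (assumed)
print(round(blue, 1), round(red, 1))
```

Doubling the inherent contrast adds only ln(2)/A meters of sighting distance, whereas halving the attenuation coefficient doubles it, which is why vision at the wavelengths of maximal transmission usually wins.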
Seeing Farther Underwater: Can Polarization Sensitivity Help?

Although colored filters do not appear to allow animals to see farther underwater, the ability to sense the polarization of light can, at least in theory, do so. The reason for this is that the underwater light field is polarized, particularly when one is looking horizontally in clear water (see chapter ). More importantly, the path radiance—the light that is degrading the image—is also polarized. In fact, to a good approximation, the degree of polarization of the path radiance is equal to the degree of polarization of the background radiance in the same viewing direction. As we discussed above (see equation .), path radiance slowly fills in as the distance from the viewer to the object increases. Because this path radiance is polarized, we can determine how much filling in has occurred by measuring the degree of polarization of the viewed object (assuming that light from the object itself is unpolarized). Once we know this for every wavelength, we can then remove it and “dehaze” the image. Yoav Schechner and colleagues (Schechner et al., ; Schechner and Karpel, ) developed a simple algorithm based on this idea that is remarkably effective at increasing visibility both underwater and in air, so long as the background spacelight is polarized (i.e., blue sky or relatively clear water). In technological terms the algorithm requires two images of the scene. The first is taken through a polarizing filter that is oriented so that the background radiance is maximized (this background needs to be the blue sky or water). This image is known as Imax. The second is taken through a polarizing filter that is oriented so that the background radiance is minimized and is known as Imin. Idehaze =
[(Imax + Imin) − Lpath] / (1 − Lpath/L∞)

(.)
where the path radiance Lpath at every point in the image is estimated as (Imax − Imin)/P∞. L∞ and P∞ are the radiance and degree of polarization of the background, respectively. The infinity symbol is used in both to emphasize that these are the values at an infinite distance. Using this equation for every pixel in the image does an amazing job of both removing the path radiance and undoing the radiance attenuation of the objects themselves (figure .). In addition, it does so without switching vision to wavelengths where contrast attenuation is higher, so it can actually extend visual range as well as boost inherent contrast. The central question of course is whether any animals with polarization sensitivity have actually evolved an algorithm of this sort. Although equation . at first appears complex, it is not substantially different from the neural algorithms involved in color constancy, edge detection, and visual adaptation. Directly determining whether such an algorithm is actually being employed would be challenging, however. Simpler polarization-based algorithms can also increase sighting distance. Perhaps the simplest for an aquatic species is to have a visual system that is most sensitive to vertically polarized light. Because the e-vector of the underwater light field is roughly horizontal for much of the day (see chapter ), a visual system that is most sensitive to a vertical e-vector will darken the background. If the target animal or object is brighter than the background (and unpolarized), this increases the inherent contrast (see Johnsen et al., , for details). A slightly more complex algorithm (but less
Figure 9.14 (A,B) Underwater scene photographed through vertically oriented (A) and horizontally oriented (B) polarizers. (C) Image after polarization-based dehazing. Note that the dehazing significantly improves the image even though the difference between the two original images is not large.
complex than Schechner’s) can also zero out the background light, boosting contrast dramatically. Interestingly, the retinas of both crustaceans and cephalopods, which make up a substantial proportion of pelagic visual predators, are known to possess orthogonally arranged photoreceptors (Marshall et al., , ; see also chapter ). Where known, these are usually arranged with half the microvilli oriented vertically and the other half horizontally (Talbot and Marshall, ), precisely the required orientation for the algorithms discussed. These animals are also capable of rotational eye movements that could theoretically optimize the signal difference between channels to create the Imax and Imin images needed. Opponency, or signal comparison, between the horizontal and vertical channels is known, at least for the crustaceans and cephalopods (Saidel et al., ; Sabra and Glantz, ; Glantz, ), pointing toward a neural correlate of the suggested algorithms. Although testing any of this neurophysiologically or behaviorally presents a serious challenge, the potential is there for polarization-based methods for seeing farther underwater and on land.
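The per-pixel correction behind Schechner's dehazing can be sketched in a few lines. The pixel values, background radiance (l_inf, standing in for L∞), and background degree of polarization (p_inf, standing in for P∞) below are illustrative assumptions; in practice the background values are typically measured from a target-free patch of sky or water, and the function names are ours, not Schechner's.

```python
# Hedged sketch of polarization-based dehazing: two exposures through a
# polarizer oriented to maximize (i_max) and minimize (i_min) the background,
# plus the background radiance and degree of polarization at infinite distance.

def dehaze_pixel(i_max, i_min, l_inf, p_inf):
    l_path = (i_max - i_min) / p_inf      # estimated path radiance at this pixel
    total = i_max + i_min                 # total radiance across both exposures
    # Remove the path radiance, then undo the attenuation of the object itself.
    return (total - l_path) / (1.0 - l_path / l_inf)

def dehaze(img_max, img_min, l_inf, p_inf):
    return [[dehaze_pixel(a, b, l_inf, p_inf)
             for a, b in zip(row_max, row_min)]
            for row_max, row_min in zip(img_max, img_min)]

# A pixel with no path radiance (i_max == i_min) is returned as its total radiance:
print(dehaze_pixel(0.30, 0.30, l_inf=1.0, p_inf=0.4))
```

A pixel whose two exposures differ (polarized path radiance present) has that haze subtracted and its remaining radiance scaled back up, which is why the method both removes the veil and restores the object's own brightness.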
Attenuating Media in Terrestrial Environments

Even though many visual ecologists work on terrestrial species, we have only a small section on contrast attenuation in air because it is a far less significant issue in nearly all cases. Air does indeed absorb and scatter light. However, it does so to a much, much lesser extent than water does. Absorption of light in air is truly minimal at ecologically relevant distances. As we discussed in chapter , the atmosphere does absorb a significant fraction of the ultraviolet and infrared portions of sunlight, but this is accomplished only by a full pass through the atmosphere—from outer space to the Earth's surface. Ultraviolet absorption in particular primarily occurs in the ozone layer, which is at an altitude of to km and well above any macroscopic life forms. Absorption of visible light is negligible, even over impressively large and ecologically irrelevant distances (Killinger et al., ). This leaves scattering, which does exist in the atmosphere and affects visibility at long distances. The scattering coefficient in this case is primarily composed of two parts (figure .). The first is scattering by gas molecules (Bohren and Fraser, ). This scattering does obey the inverse-fourth-power law and is thus responsible for the blue sky. However, at altitudes less than about m, atmospheric light is primarily scattered by small particles called aerosols (such as water droplets). These particles scatter light in a relatively wavelength-independent way and are responsible for the lower visibility that we experience at sea level. The density of the particles and their scattering properties vary tremendously, so visibility also varies greatly, at least for those of us who do not live in pristine deserts. Anyone who lives near a large mountain can attest to this variation—some days the mountain is visible; some days it is not.
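That variation can be put in rough numbers with the classical Koschmieder relation, which ties the horizontal visual range of a large black object seen against the horizon sky to the beam attenuation coefficient c and the contrast threshold: V = ln(1/Cth)/c, or about 3.9/c at the conventional 2% threshold. The coefficients below are illustrative assumptions, not measurements of any particular atmosphere.

```python
# Hedged sketch: the Koschmieder relation for horizontal visual range.
import math

def visual_range(attenuation_m, threshold=0.02):
    """V = ln(1/Cth) / c, for a black object against the horizon sky."""
    return math.log(1.0 / threshold) / attenuation_m

print(round(visual_range(2e-4) / 1000, 1))  # clear air, c ~ 2e-4 per m -> 19.6 (km)
print(round(visual_range(0.05), 1))         # fog-like c ~ 0.05 per m   -> 78.2 (m)
```

A few-hundredfold change in the aerosol scattering coefficient moves the visual range from tens of kilometers to tens of meters, which is the mountain's appearing and disappearing in numbers.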
This is about the extent of it though, as far as visual relevance goes, because the total beam attenuation coefficient is orders of magnitude lower than what is found in water. Perhaps navigating birds care about the clarity of landscape features that are many kilometers distant, but it is doubtful that any other animals do. The only case in which the atmosphere becomes a significantly attenuating medium at ecologically relevant distances is during fog (or—equivalently—inside clouds). As anyone who lives in a fog-prone region is well aware, its effect on visibility is highly variable. Even within the same fog event, visual range can increase from a few meters to over a hundred meters in a matter of seconds. However, even in the thickest
Figure 9.15 Scattering coefficients in a standard atmosphere as a function of altitude (0–30 km) at 400, 550, and 700 nm, with the molecular and aerosol components shown separately (scattering coefficient axis: 10⁻⁸–10⁻³ m⁻¹). The strong wavelength dependence of the molecular component is responsible for the blue sky, but total scattering is dominated at altitudes less than ~3000 m by relatively wavelength-independent scattering from small particles (mostly water droplets). (Data from Overington, 1976)
fog, visible light absorption is not significant, and the effects on visibility are nearly entirely due to scattering. An implication of this is that contrast attenuation in fog is nearly wavelength independent. First, the strong wavelength dependence of water absorption does not play a significant role. Second, the water droplets in fog are usually much larger than the wavelengths of the light being scattered by them, and so they scatter light of all visible wavelengths roughly equally (see Johnsen, ). Finally, even if the particles were small enough to scatter blue light more than red, dense fog would still look white because it is a multiple-scattering medium, which brings us to the next section.
Multiple Scattering and the Effect of Attenuating Media on the Resolution of Fine Details

The monster in the attic of the House of Optics is multiple scattering. Although we can safely ignore it for many visually relevant situations, it always lurks at the edge of our perception and is daunting when faced directly. Up until now we have assumed that each photon between the target and its viewer is scattered at most once. This means that photons that originally left the target's surface are scattered once and never come back to the viewer's eye and that background light scattered into the path does not get scattered back out of it. When this is true, all the equations we have discussed work, and—more importantly—increasing the viewing distance only affects the contrast of the viewed target. The image itself does not become blurry or lose
detail; it just fades away. However, if photons are scattered more than once on their trip from the target to the viewer, then there is the chance that a photon is scattered out of the path and then scattered again back toward the viewer's eye (figure ., top and middle panels). When this happens, the photon no longer looks as though it came from the target, or at least not from the same spot on the target. This situation, known as multiple scattering, creates halos around objects (figure ., bottom) and blurs detail. If the viewing distance is far enough, the shape of the original object can be completely obscured, even though its contrast is still high enough to see. There is good news and bad news about multiple scattering. The good news is that you do not have to worry about it in many cases. This is not because water and air do not multiply scatter light—they do. However, two things mitigate the effect of this. First, in water at least, absorption is so strong at many wavelengths that most photons are absorbed before they have a chance to be multiply scattered. Craig Bohren developed a nice demonstration to show that absorption affects multiple scattering. Put two coins in the bottoms of two black containers filled with water that has just enough milk in it to make it cloudy. Then add some food coloring to one of the containers. You will notice that the coin becomes darker but also clearer. Moving back to biology, this is why tannin-stained bodies of fresh water appear so clear. Some of these waters have high scattering coefficients, and some do not, but the photons are absorbed so quickly that they do not have a chance to find out, so the water looks clear. The second mitigating factor is that underwater sighting distances are usually low. Most aquatic species have relatively low inherent contrasts to begin with (see chapter ) and so can only be seen from a couple of attenuation lengths away. At these distances multiple scattering is not yet a big issue.
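Bohren's demonstration can be caricatured with the single-scattering albedo, w = b/(a + b): the probability that any given attenuation event is a scattering rather than an absorption. The chance that a photon survives long enough to be scattered n times falls off roughly as wⁿ, so adding an absorber (dye, tannins) suppresses multiple scattering steeply. The coefficients below are illustrative assumptions, not measurements of milk or food coloring.

```python
# Hedged sketch: how absorption suppresses multiple scattering.

def albedo(absorption, scattering):
    """Single-scattering albedo w = b / (a + b)."""
    return scattering / (absorption + scattering)

def p_multiple(absorption, scattering, n=2):
    """Rough probability of surviving n scattering events (~ w**n)."""
    return albedo(absorption, scattering) ** n

milky      = p_multiple(absorption=0.05, scattering=1.0)  # cloudy, weakly absorbing
milky_dyed = p_multiple(absorption=1.0,  scattering=1.0)  # same cloudiness plus dye
print(round(milky, 2), round(milky_dyed, 2))  # 0.91 0.25
```

The dyed container's coin is darker (more photons absorbed) but sharper, because far fewer of the photons that do arrive have been scattered more than once.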
For multiple scattering to be significant, you need objects with very high inherent contrasts that can be seen from many attenuation lengths away. Bioluminescence is the major example of this. However, most bioluminescence occurs in the deep sea, which is extraordinarily clear, and the emitted light becomes too dim to see before multiple scattering is a big issue. Bioluminescence in coastal waters is another story and could be significantly impacted by multiple scattering. Multiple scattering in air really matters only during fog because scattering is high and absorption is negligible. The bad news about multiple scattering is that it is not easy to deal with. This is because its effect on visibility depends not only on the absorption and scattering coefficients but also on something that we have ignored so far, which is the direction the light is scattered. Different waters have different probabilities of scattering light in different directions, and this parameter, known as the volume scattering function, is not easy to measure. Even if you can and do measure it, you then need to use Monte Carlo techniques (i.e., billions of virtual photons racing through your computer) to figure out what will happen to visibility. This is a “cross-this-bridge-when-you-come-to-it” sort of thing that we do not discuss further (see Johnsen et al., , for details). Although we recognize that correctly working with multiple scattering requires substantial computer time, it is nevertheless possible to make a few generalizations. First, because photons that scatter out of the path to the viewer have a chance to return, things are theoretically visible from farther away than in a single scattering situation. We say “theoretically” because these returning photons often do not look as though they came from the original object and instead form a large blurry cloud around it that may not be terribly visible.
The only condition we know of where this effect is obvious is for bright lights underwater—multiple scattering allows the general glow from the light to be seen from a much greater distance than would be
Figure 9.16 (A, upper) The only light that reaches the viewer's eye in this case is light that has never been scattered. The rest of the light has been scattered once and leaves the beam. The image is dim but does not have a halo. (A, lower) Some of the light that reaches the viewer's eye has been scattered more than once. In addition, some light that originally started in another direction has now been scattered toward the viewer. Together, these multiple scatterings create a halo around the bulb because it looks as if light is coming from directions other than directly from the bulb. (B) An underwater scene where multiple scattering is just beginning to be significant, as can be noted by the halos around the yellow air tanks.
possible if light could only be scattered once (Duntley, ; Mertens, ). As mentioned above, though, it is unknown whether any bioluminescence is bright enough to gain a significant advantage from this effect. Second, multiple scattering tends to reduce whatever wavelength dependence exists. In other words multiple-scattering media tend to become whiter because differences in scattering get washed out when scattering becomes abundant. A good natural example of this is the whitening of the sky as it approaches the horizon. This can of course be enhanced by haze and so on, but even perfectly dry and clear skies become a paler blue near the horizon because light is scattered more than once (Bohren and Fraser, ). Finally, multiple scattering destroys detail because some photons that reach the viewer look as though they came from a different place than they actually started from. For example, in a striped fish, photons from the lighter stripes—if multiply scattered—may look as if they came from the darker stripes. People often assume that attenuating media blur images, but they really do so only in the presence of multiple scattering (or in the less common cases of small refractive index perturbations due to turbulence or transparent zooplankton). In the end, multiple scattering may let you see something from farther away, but what you see will be pale, blurry, and possibly unrecognizable.
In Conclusion

As promised, this chapter has more math than most, but there are really only a few basic messages. First, attenuating media such as water have a huge effect on vision. Second, this effect is due as much to light absorption as it is to light scattering. Third, how far something can be seen depends not only on the object being viewed and the clarity of the water but also on the viewing direction and the contrast sensitivity of the viewer. Finally, it is not known whether any animals have developed methods to reduce the effect of attenuating media on their vision. Color filters do not help. Polarization sensitivity may help in attenuating media that are polarized, but it is not known whether any animals take advantage of this potential. Thus, Lythgoe's “incapacitating and intractable problem” remains a fruitful area for research.
10 Motion Vision and Eye Movements
In a grassy, open field in southern Spain, the sun is shining brightly. A sharp-eyed observer—one who knew exactly where to look—might spot a tiny killer fly, Coenosia attenuata, perched near the tip of a blade of grass, facing the open sky in the attitude of an insect kind of alertness. The fly is much smaller than a common housefly. Yet its minuscule dark-red eyes must be surveying the overhead light field with astonishing acuity—as a tiny gnat buzzes through the air above the attentive predator, the killer fly launches itself like a dipteran missile directly into the gnat's path. Swooping into an inverted posture at the last instant, the fly envelops the gnat in the basket formed by its legs—not unlike the capture of small birds by the talons of a falcon—and returns to its perch to devour its prey. Minutes later, it resumes its posture of surveillance of the open air in search of its next meal. The behavior of the killer fly is an example of motion vision in action, reduced to its essentials—a miniature predator, with a visual system and brain that seem to be impossibly tiny, exhibiting a complex, visually oriented aerial hunting behavior reminiscent of that of vertebrates thousands of times its size (see Gonzalez-Bellido et al., ). Killer flies are such effective predators on small, flying insects that they can depopulate an area of prey within a few days, making them perfect biological control agents in greenhouses. The episode just described illustrates how deeply fundamental is the ability to detect, analyze, and act on motion stimuli in visual systems—a fact that Gordon Walls () noted in his encyclopedic coverage of vertebrate eyes. In this chapter we explore the general features and properties of motion vision that apply to all animals and examine how these are tuned for the visual ecological requirements of different species—from small creatures such as fruit flies and jumping spiders to falcons and their prey.
As will be seen, far more is known of motion vision in tiny animals than in any of the larger and more familiar species.
Motion Stimuli

The visual stimulus for motion is a shift in the position of an image, or portion of an image, on the retina. Normally, we think of stimuli in nature as arising and existing in the external world—the approach of another animal, the passage of a bird, the
shifting positions of clouds in the sky or leaves on trees, objects blowing in the wind. In reality, however, in almost all animals the great majority of image shifts on the retina are generated by the animal itself, through its own movements. Any change in the position of the eye, the head, or the body is likely to generate motion on the retina (see chapter ). Only completely sessile animals with utterly stable eyes escape a sense of motion dominated by self-generated stimuli. As a consequence most animals have rather complex subsystems within vision to deal with two classes of visual motion stimuli. The first set of movements is as just described, created by the animal itself. These must be analyzed for their inherent information content or actively opposed by stabilizing responses to create a constant visual scene. The second class arises in the external world and must be attended to so that appropriate responses can be made. In the technological world that we now inhabit, where our most common visual targets are video display screens and computer monitors, the concept that most motion stimuli are self-generated might seem odd. Thinking about this for another moment, however, brings the realization that even when sitting, seemingly immobilized, in front of these displays we must be constantly adjusting the positions of our eyes as we scan the screen, change posture, shift our position, reach for other objects, and perform all the other little (or large) movements that sitting permits. For an animal in its natural setting, as for our ancestors (and us, too), the visual world is a constantly shifting scene. It is astonishing that we are so little aware of this. Obviously, we cannot know the visual experiences of other animals, but it seems likely that they also combine self-generated with other-generated motion stimuli into a seamless internal representation of the world and the objects in it.
Actually, in most animals, self-generated motion stimuli are required to build such a representation. When animals move, their retinal images experience what is called optic flow, a series of shifts of elements of the image that provide an enormous information content regarding type of movement, direction of movement, speed, and—counterintuitively—the locations and distances of fixed objects in the external world. We consider these patterns of optic flow (referred to as flow fields) next and then turn to the visual mechanisms used to detect motion. Later we present some behaviors that occur in the context of flow fields.
Optic Flow and Flow Fields

As their name suggests, flow fields can emerge from flowing currents in the environment, such as might be experienced when an animal is surrounded by a steady stream of water or a smooth airflow in a breeze. Most commonly, however, flow fields—which produce optic flow on the retina (chapter )—are induced by rotations and/or translations of the position of the eye. Optic flow produced by eye rotations contains little or no information about object location that is not already known from static images of the same scene, but it does provide the critical input needed for visual stabilization systems to operate. A typical rotational optic flow field is schematized in figure .A, where the arrows suggest vectors produced by the movements of elements in a scene (alternatively, they represent the vectors of movements of scene elements on the retina). In pure rotation the vectors all move parallel to each other, and all have identical magnitudes. As long as no objects in the scene are themselves moving, their retinal displacements will all be identical no matter how far any object is located from the animal itself. Such a pattern of optic flow indicates that the eye (or possibly the external world) is rotating. Experimentally, as we shall see, rotations of an external
Figure 10.1 Representations of optic flow fields projected onto a retina (these vectors also could represent the motions of objects in visual space). (A) A flow field produced by rotation alone, in which all points in visual space move parallel by equal amounts. (B) A purely translatory flow field has object vectors generally proportional to their angular distances from the point toward which the eye is moving. (C) A similar field with individual objects moving within it. Notice how easy it is to spot the points that move in the “wrong” way. (D) The translatory flow field seen perpendicular to the direction of motion. Objects at different distances have different angular velocities, with the nearest objects apparently moving the fastest. Note how this provides much more information about the structure of space around the animal than pure rotation.
scene are used to study visual stabilization systems. But these are rare—generally nonexistent—in the real world. Optic flow patterns generated by translational movements are far more interesting, and they contain enormous amounts of information. How this information is used by animals is itself a significant question for visual ecologists. For instance, in most animals the eyes can translate only when the animal itself, or at least its head, moves, but animals with stalked eyes can change the eye position without producing any other body movements. It is likely that translational movements of stalked eyes can be visually important, even in a motionless world, but no research has been directed at this question. Here, we consider only body (or head) movements. The pattern of optic flow arising from forward movement is illustrated in figure .B, where each arrow represents a movement vector at a given location. This type of flow field is well duplicated by the “star field” screen savers so popular in the early days of personal computers. The representation applies equally well to the perceived movements of location in the visual field or to the pattern of displacements of images on the retina (or across the ommatidia of compound eyes). The “pole” of the field, representing the central point from which all vectors emerge, represents the point in the visual field toward which the animal is directly moving. So the field immediately provides knowledge of the exact direction of movement and of possible objects to be encountered en route. (Note that a similar pattern of optic flow would arise from the approach of a large object, producing what is called a “looming stimulus,” so the animal must use other cues about its own motion to separate ego motion from external motion in this case.) Although not shown in the figure, the pole of the field opposite to the source of the flow vectors is a point of convergence, where vectors flow back
together, indicating the location directly opposed to current movement. Note that the presence of an overall flow field does not rule out the ability to dissect out movements of individual objects having self-motion within the flow. As illustrated in figure .C, movement vectors that differ from the flow as a whole can be quite prominent, even when they are small in magnitude. Near the pole of the field, movement vectors are similar for all objects, independently of their distances from the viewer, making this location unambiguous. As the optic flow moves further from the pole, vectors generated by nearby objects gather increasing velocities relative to objects in the same direction but further away. Thus, if an animal is traveling over or near the ground, vectors below it will be large, while those above the horizon will be comparatively small, giving a sense of proximity to the substrate. On the other hand, if the creature is flying through open space, almost everything will be far away, and vectors will be small everywhere (but still largest perpendicular to the axis of travel). Most flying animals, and many swimming ones, pass through a variety of obstacles or other nearby objects, and the flow fields they experience as they travel have a complex structure, providing a rapidly changing pattern as objects approach and then pass the eye. Figure .D attempts to illustrate the flow vectors in such a field as seen perpendicular to the axis of travel. As described later, fields like these provide essential information for all sorts of animals as they move.
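The geometry behind these translational flow fields can be made concrete. For an observer translating at speed v, a stationary point at distance d and at angle theta from the direction of travel sweeps across the retina at an angular velocity of roughly (v/d)·sin(theta): zero at the pole of the field, fastest abeam, and faster for nearer objects. The speeds and distances below are illustrative assumptions.

```python
# Hedged sketch: angular velocity of a stationary point in a purely
# translational flow field, w = (v / d) * sin(theta).
import math

def flow_speed_deg(v, distance, theta_deg):
    """Retinal angular velocity (degrees per second) of a fixed point."""
    return math.degrees((v / distance) * math.sin(math.radians(theta_deg)))

# Observer moving at 1 m/s (illustrative):
near = flow_speed_deg(1.0, 1.0, 90.0)   # 1 m away, abeam
far  = flow_speed_deg(1.0, 10.0, 90.0)  # 10 m away, abeam
pole = flow_speed_deg(1.0, 1.0, 0.0)    # directly ahead, at the pole
print(round(near, 1), round(far, 1))    # 57.3 5.7
```

The tenfold difference between the near and far points is the distance information that pure rotation cannot supply, and the zero at the pole is why the point of expansion marks the exact direction of travel.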
Sensing Motion

Motion sensing is central to visual perception, perhaps surpassed in importance only by the sensation of light’s presence or absence. In many animals the perceptions of form and of motion are so intimately linked that they may in fact form a single sensory experience. Yet photoreceptor cells are not inherently sensitive to motion—they can respond only to changes in light level, which might correspond to motion but obviously have many other possible causes. It is possible to imagine a photoreceptor cell with an extended receptive field that somehow encodes motion across its sensory surface, but no such cell has ever been found. Instead, so far as is known, all animals sense motion as temporally correlated changes in stimulation in adjacent sets (minimally, pairs) of primary photoreceptors. Surprisingly, a deep understanding of motion analysis in the retina, or at least at the most fundamental levels of visual processing, exists for only three types of animals—flies, rabbits, and mice. From this meager phylogenetic sample, generalizations have been made to explain motion detection in general, but we do not yet know whether these generalizations are valid. Perhaps the elusive single-cell motion detector exists somewhere! Of the animals that have been studied, the best knowledge of the basic circuitry of motion detection comes from flies (see Borst et al., 2010), so we consider them first. Based on the idea that spatially separated correlations in brightness changes can be interpreted as motion, Werner Reichardt proposed an “elementary motion detector” (EMD) that requires a single pair of photoreceptor cells feeding into a simple network (Reichardt, ). His hypothetical arrangement, now called the “Reichardt detector” or some similar term involving his name, is diagrammed in figure 10.2A. It operates this way.
Assume that a simple stimulus—a small spot of light—travels across the receptive fields of the two photoreceptors in the direction of the large arrow
Figure 10.2 (A) A diagrammatic view of a Reichardt detector, sensitive to motion in the direction of the large open arrow. The stimulus is first detected by the left receptor (R), and the nervous impulse passes ahead to a delay filter (D) and also crosses to a synapse on the path from the right receptor without delay at M (which stands for “multiplier” because impulses are combined at that point). The impulse on the left emerges from D after a short wait and passes to the point M on the left. Meanwhile, if the motion reaches the rightmost receptor at the correct time, the lateral path from this receptor arrives at the same location, where the impulses sum and continue to the output synapse S on the left, stimulating an interneuron and signaling the detection of motion. Motion in the opposite direction, shown by the dotted arrow, causes a similar train of events in the right-side receptor’s path, but in this case the summation leads to inhibition of the interneuron (indicated by the minus sign). (B) Apparatus often used to study motion perception in flying insects. A fly is attached to a wire that leads from a torque meter or some other device sensing the fly’s tendency to turn right or left. The visual surround can be rotated as desired, or patterns within it can be independently moved either physically or by using a computer screen in place of the rotating cylinder. (After Borst et al., 2010)
(the “preferred direction” of the detector). It will first stimulate the receptor (R) on the left, initiating a nerve impulse traveling down the receptor axon to the delay filter (D) and also along the collateral axon extending to the right without any delay. (A delay filter is some component that slows the rate of travel of the impulse, perhaps a slow synaptic connection or, alternatively, an elongated stretch of thin axon—the original conception of the detector was not based on a known neural circuit.) After a short time the light will reach the receptive field of the photoreceptor on the right and will stimulate it, causing a similar impulse to be initiated. Note, however, that because the collateral axon of this receptor synapses onto the same site as the time-delayed axon of the leftmost receptor, if the timing is right, the two impulses will arrive at this site (M) simultaneously, sum together, and excite the next component in the circuit (S), presumably in a postsynaptic layer of the retina, generating a spike or train of impulses sent further into the nervous system. This is the only event that can cause strong excitation. Motion in the opposite direction (the “null direction”) will cause inhibition of the postsynaptic cell (indicated by the minus sign), and other combinations of stimuli will have little effect on it.
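The delay-and-correlate logic just described is easy to simulate. The sketch below is a minimal model of our own devising (the receptor spacing, receptive-field width, and delay constant are arbitrary illustrative choices, not values from the text): a two-receptor Reichardt detector viewing a moving spot produces a positive mean output for preferred-direction motion and a negative one for the null direction.

```python
import numpy as np

def reichardt_response(stimulus_pos, t, x_left=0.0, x_right=1.0,
                       sigma=0.2, tau=0.1):
    """Mean output of a correlation-type (Reichardt) detector viewing a
    bright spot whose position over time is given by stimulus_pos(t)."""
    dt = t[1] - t[0]

    # Receptor signals: Gaussian receptive fields sampled by the moving spot.
    def receptor(x):
        return np.exp(-((stimulus_pos(t) - x) ** 2) / (2 * sigma ** 2))

    left, right = receptor(x_left), receptor(x_right)

    # A first-order low-pass filter stands in for the delay element D.
    def delay(sig):
        out = np.zeros_like(sig)
        for i in range(1, len(sig)):
            out[i] = out[i - 1] + dt / tau * (sig[i - 1] - out[i - 1])
        return out

    # The delayed left signal correlates with the direct right signal;
    # the mirror-image branch is subtracted, giving an opponent output.
    return float(np.mean(delay(left) * right - delay(right) * left))

t = np.linspace(0.0, 2.0, 2001)
rightward = reichardt_response(lambda s: -1.0 + 2.0 * s, t)  # left-to-right
leftward = reichardt_response(lambda s: 2.0 - 2.0 * s, t)    # mirror-image path
# Preferred-direction motion drives the output positive; the same motion
# in the null direction drives it negative by the same amount.
```

Because the two branches are exact mirror images, the model's output for the reversed trajectory is the negative of its output for the original one.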
The Reichardt arrangement can certainly detect motion, but it has less obvious properties. For instance, an identical response can be caused by the movement of a chain of lights spaced closely together—more closely than the separation of the receptive fields—drifting slowly across the receptor pair. If the speed of the stimulus and the separation of the lights are right, the output will be identical to that caused by a single light moving more quickly. In other words, the Reichardt detector responds to a certain frequency of stimulation of the two receptors in sequence, not to a particular speed. In the laboratory these sorts of stimuli can be presented to animals (commonly blowflies or fruitflies), with a common arrangement being a series of black and white stripes on the surface of a rotating drum (figure 10.2B) rather than a series of lights. The exact cell types that make up the Reichardt detector in fly retinas are still—frustratingly—unknown, but it is possible to model the performance of an array of detectors and to look for evidence of their effects in other cells, and examples of such cells are now well studied. The outputs of the elementary motion detectors are passed through several layers of neural circuitry in the fly optic lobes, ultimately reaching neurons in a portion known as the lobula plate, where their projections form a topological representation of space. Spread out parallel to the plane of the lobula plate are several dozen very large neurons with extended dendritic fields, most of which sample inputs from a large part of the visual field; because of their flat, planar extensions they are termed “tangential cells” (see Hausen, , for a full description). The structures of these cells predict their properties—they respond to events that occupy a large area of visual space (figure 10.3).
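The earlier point that a correlation detector signals temporal frequency rather than speed has a corollary that can be checked numerically: two drifting gratings whose speeds and spatial periods differ fivefold, but whose temporal frequencies match and whose inter-receptor phase differences are equivalent, produce literally identical receptor signals. All parameter values below are illustrative assumptions, not from the text.

```python
import numpy as np

dx = 1.0                              # receptor separation (arbitrary units)
t = np.linspace(0.0, 10.0, 5001)

def receptor_signals(wavelength, speed):
    """Signals at two receptors, dx apart, as a sine grating drifts past."""
    def signal(x):
        return np.cos(2 * np.pi * (x - speed * t) / wavelength)
    return signal(0.0), signal(dx)

# A coarse grating moving quickly ...
a1, a2 = receptor_signals(wavelength=4.0, speed=1.0)
# ... and a grating five times finer moving five times more slowly:
b1, b2 = receptor_signals(wavelength=0.8, speed=0.2)

# Temporal frequency (speed / wavelength) is 0.25 Hz in both cases, and the
# inter-receptor phase difference (90 deg vs. 450 deg) is equivalent modulo
# 360 deg, so the receptor pair, and any circuit it feeds, cannot tell the
# two stimuli apart.
```

This spatial aliasing is exactly the chain-of-lights ambiguity described above: a fine pattern drifting slowly is indistinguishable, at the level of two sampled points, from a coarse pattern moving fast.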
In particular, they respond to large-field visual motion, depolarizing and often producing a chain of action potentials in response to such motion along a particular axis and hyperpolarizing in the presence of the opposite motion. Other sets of tangential cells respond to local motion within their extensive receptive fields and are therefore perfectly suited to detect small objects moving differently from the background (Egelhaaf, ). The physiological and visual ecological functions of these various classes of interneurons are discussed later, but for now the important point is that their responses can be accurately modeled using a simple array of Reichardt detectors arranged to sample the cell’s extended receptive field (figure 10.4; Borst et al., 2010). This is extremely powerful evidence for the presence of these postulated simple neural circuits. What about motion detection in vertebrate retinas? Some years ago Barlow and Hill (1963) showed that certain retinal ganglion cells of rabbits respond to directional motion. These direction-selective ganglion cells (or DSGCs) respond much like the tangential cells of flies. Motion in the preferred direction excites DSGCs to produce a rapid train of action potentials, whereas contrary motion inhibits the cells, leading to a significantly reduced rate of firing (figure 10.5). Unlike in flies, however, the response does not seem to originate in Reichardt-like detectors; in fact, it took nearly half a century for the vertebrate motion-sensing mechanism to be worked out, and many details of the neural circuitry remain obscure. Because in vertebrates ganglion cells are the only retinal cells that project to higher visual centers, it is clear that motion sensing at higher levels emerges from the DSGCs. Motion-sensitive ganglion cells have been found in a number of vertebrates, but the mechanisms underlying their motion sensitivity have been closely examined only in mammals.
Figure 10.3 Wide-field cells in the lobula plate of Drosophila. The left panel shows the projections of three interneurons onto the surface of the lobula plate, a collection of cells in the visual pathway largely devoted to motion analysis. The three graphs on the right show the region in visual space to which each neuron responds, coded as in a heat map, where blue is the weakest response and dark red the strongest. Note how each cell specializes for motion in a region on its side of the body, ranging from the dorsal area (HSN) through the equator (HSE) to the ventral field (HSS). (After Schnell et al., 2010)

Figure 10.4 Modeling wide-field sensing in flies. (A) The output of a model simulating responses from cells like those in figure 10.3 based on inputs from an array of Reichardt motion detectors. (B) Actual responses of such cells in a blowfly. The arrows show the direction of movement of the visual surround. A relatively simple model of motion sensing predicts cell properties accurately. (After Borst et al., 2010)

Figure 10.5 Schematic showing typical responses of a motion-sensitive ganglion cell. Vertical lines on the horizontal trace indicate action potentials recorded near the axon of this cell in the optic nerve. The ramp and arrows show the timing and direction of a motion stimulus. This cell is excited by upward and inhibited by downward motion.

Figure 10.6 Motion sensing in vertebrate retinas. (A) Diagram of a starburst amacrine cell (SBAC) showing responses in dendrites on the right side of the cell to motion in the directions indicated by the arrows. (After Fried and Masland, 2006) Because SBACs release inhibitory neurotransmitter at their terminals, the depolarization leads to inhibition in the corresponding postsynaptic direction-sensitive ganglion cell (DSGC). (B) Motion sensitivity in an array of DSGCs all synapsing onto the same starburst amacrine cell, with each ganglion cell shown in the color of the motion direction to which it responds most strongly (the “preferred direction”). A single SBAC can signal multiple motion directions to its set of DSGCs. (After Taylor and Vaney, 2003)

The key component turns out to be a particular type of retinal interneuron, named the starburst amacrine cell due to its radial array of dendritic extensions (Taylor and Vaney, 2003; Fried and Masland, 2006). As illustrated in figure 10.6A, each pie-slice sector of a starburst cell is stimulated by the outward movement of light along the radius of the sector to produce a growing wave of depolarization, which leads to a strong release of inhibitory neurotransmitter at the dendritic terminals. The sectors of the starburst cells synapse onto DSGCs, as seen in figure 10.6B. Thus, centrifugally moving light inhibits the ganglion cell at that sector, producing the direction selectivity. A particularly elegant aspect of this arrangement is that a single starburst cell fosters a family of preferred directions in the ganglion cells surrounding it. Because there is a network of interconnected starburst cells and DS ganglion cells, the system acts synergistically to cover the retina with a mosaic of direction selectivities, all initiated by the special features of the starburst amacrine cells. Still, mysteries remain (Demb, ; Zhou and Lee, ). Amacrine cells are not themselves light sensitive, and they receive their input from bipolar cells that connect
them to cone photoreceptors. How this circuitry is organized, and whether or not the bipolar cells themselves contribute directly to motion sensitivity (possibly via Reichardt-type delay filters), remains to be learned. Visual ecologists, however, are concerned less with proximate mechanisms than with ultimate ones. For instance, how do motion-selective systems vary according to ecological requirements? Although the vertebrate picture remains obscure, answers to questions like these do exist for a number of invertebrates, as we shall see. One final point about motion vision before we move to examples of how animals deal with the motion they experience in their natural visual worlds: in most cases motion processing is not handled by nervous pathways that are aware of color. Thus, animals with excellent color vision, including bees and humans, are functionally color-blind when seeing motion. In fact, many of these animals use only one of the available receptor channels (most often a middle-wavelength-sensitive class) to detect moving stimuli. Even in cases in which other receptor types are added to motion detection (see, for instance, Wardill et al., ), there is no evidence that the animal itself is aware of the color of the moving stimulus.
Eye Movements and Their Relation to Motion Stimuli in Animals

Except for the tiny movements of visual tremor, animals move their eyes for only three reasons: (1) to stabilize vision when they themselves move; (2) to deal with moving stimuli in the external world; or (3) to generate a new direction of gaze (Land, ). So although eye movements are obviously controlled by animals’ nervous systems, they can be either responses to external events or self-initiated. For convenience these movements can be placed into a few general classes, all of which are seen in most animals with well-developed eyes (remember that animals can also move their eyes by moving their heads or their bodies). First, there are the smooth, generally slow movements that serve to stabilize vision in a shifting visual scene, for example, when an animal rotates or when it looks sideways while traveling forward. These are generally termed “optomotor responses.” All other types of eye movements, which are faster and often less smooth, are generally associated with the presence of a fovea or acute zone. A second class of eye movements is tracking movements. These, whether slow or fast, stabilize just a small portion of the scene—typically only a single target. Tracking can involve smooth shifting of gaze to follow the target, or it can be achieved via a series of saccades—quick relocations of eye position, each occurring in a small fraction of a second, during which vision is normally blind. The final two classes of movements, both extremely rapid, are saccadic as well. Fixation of visual attention on a significant object (a simple example is one that suddenly moves at an unexpected location) involves a saccade. The last type consists of nonspecific saccades to various locations in the visual field for surveillance, vigilance, or just for random refixations.
Eye movements of an additional type, seen in a very few animals, are interesting because of their visual ecological significance. These are the scans—smooth, stereotypical eye movements (or in some cases, movements of the retinas of the eyes) that slide images of scenes or objects relatively slowly over the photoreceptor array. These
unusual movements are covered later on, but an important point to keep in mind is that except for the scans, animals never “slide their eyes” over any object or scene of interest.
Eye Movements in Mantis Shrimps—A Case Study

Only one group of animals is known to use all these types of eye movements, including scans—the mantis shrimps, or stomatopod crustaceans (see figures . and .). Their eyes are unusual in many ways, some of which have been touched on in earlier chapters. Importantly for this chapter, mantis shrimps are very attentive to moving stimuli, and many species are visually adept in using their stalked eyes to spot and watch these and other objects that they find interesting. The eyes are extremely mobile, more so than other stalk-eyed systems, with up to ° of movement on both the horizontal and vertical axes and ° of rotation. The eyes can move independently; they can even look in opposite directions at the same time, a disconcerting behavior for humans to watch. This independent mobility is coupled with, and perhaps demands, a surprisingly casual control system, making it sometimes hard to know just what the animal is watching. Because mantis shrimps exhibit the full repertoire of animal eye movements, they can serve as an entry into understanding how animals use their eyes to deal with motion.
Figure 10.7 Eye movements in mantis shrimp. The species in the photographs is Gonodactylus smithii. See the text for discussion of each example. (A) Optokinesis. The arrows suggest slow optokinetic movements, with the thin lines showing quick returns. (B) Saccadic tracking with both eyes, showing independent saccades. (C) A monocular fixation saccade by the left eye. (D) Scanning movements by both eyes. (Photograph by R. L. Caldwell)
Figure 10.8 Eye movements in mantis shrimps. Movements of the right eye are in green; those of the left eye are in red. For discussion, see the text. (A) Optokinesis. The dotted lines show the rotation of a striped drum surrounding the animal. (B) Tracking, with the angular position of the target (arrow in panel C) illustrated by the dotted line. Note the fixation saccades by both eyes at the beginning of target motion. (D) Scans (mixed with other eye movements). The illustration of the animal in E shows how the linear array of receptors in the midband is positioned, and the scans occur mostly orthogonal to the orientation of this array. (After Cronin et al., 1988, 1991; Land et al., 1990)

In the midst of all their mobility and independence, mantis shrimps do reveal characteristic optical stabilization behavior when subjected to a moving visual field (figures 10.7A, 10.8A). If restrained in the center of a rotating cylinder, analogous to the situation of the fly in figure 10.2B, their eyes follow the rotation, albeit in a fairly nonrigorous fashion, and in doing so the paths of the eyes describe a sawtooth pattern (Cronin et al., ). This behavior, common among animals stabilizing vision in drifting scenes, is called “optokinetic nystagmus.” In nystagmus, the eyes drift with the motion of the drum steadily and reasonably slowly, snapping backward rapidly from time to time (suggested by the zigzag arrows in figure 10.7A). When the drum reverses (near the middle of figure 10.8A), the sawteeth go the other way, almost as if the eyes are irresistibly dragged along by the stripes of the drum. An unusual aspect of mantis shrimp optokinesis is that the two eyes describe quite different patterns; they both generally move to stabilize vision, but their smooth movements and flickbacks occur with different frequencies and at different positions. Notice that before the drum begins rotating, and after it stops, irregular and similarly uncoordinated eye movements are also apparent.

Demonstrating their aggressive interest in moving objects, mantis shrimps readily track small targets moved in their visual fields (figures 10.7B and 10.8B; see Cronin et al., ). Tracking movements here are rarely smooth; instead, these creatures use a series of saccades (suggested by the short sets of arrows in figure 10.7B), repeatedly fixating their target with their acute zones. Figure 10.8B beautifully shows that each eye makes a targeting saccade to the location of the target just as it begins to move; both eyes then follow the path of the target irregularly as it passes to the right and left, using a series of smooth movements and tracking saccades frequently interrupted by saccades to other locations. Tracking is thus somewhat sloppy and is also independent in the two eyes, which continually switch between tracking the target—particularly as it approaches the midline of view—and seemingly ignoring its presence. Overall, a tracking session contains a mixture of smooth movements, tracking saccades, fixation saccades, and apparently random saccades to other locations in the visual field and thus can exhibit multiple classes of eye movements in rapid sequence. As was noted, mantis shrimps are the only creatures known to use scans in addition to all other types of eye movements. The discovery of scanning in these animals was quite a surprise because every previously known case involved eyes that have strip retinas—arrangements of only one or a very few rows of photoreceptors that extend in one direction across the visual field. With such a design, scans are obviously useful for shifting the receptor array through the spatially extended image, much like the raster line scanning the photosensor of an old-fashioned television camera. On the other hand, scanning movements ruin the gaze stabilization required for the optokinetic or tracking behaviors just described. But mantis shrimp eyes do have a linear feature buried in the extended array: the midband, the ocular region that detects color and some aspects of polarization.
By sweeping this part of the eye through the extended visual field seen by the rest of the ommatidial array, scans offer a way to analyze color and polarization. This requires time sharing between the small, slow scans and the large, fast movements used either for stabilization (on the one hand) or for saccadic tracking and target acquisition (on the other). Perhaps the characteristically sloppy way in which stomatopods handle their ocular movements is a reflection of this functional conflict. Eyes of mantis shrimps are extraordinarily active, so it is not easy to recognize the scans, buried as they are in the midst of many other spontaneous and driven eye movements. However, their presence can be discerned in records of eye movements over time. Figure 10.8D illustrates the paths of all the eye movements made by a single animal over a -minute period, during which it was intermittently tracking a small moving target (Land et al., 1990). The target itself moved horizontally out to about ° to each side, and each eye swung out to about that angle on its own side, much like the animal in figure 10.8B. However, most of the movements are much smaller excursions of the eye along axes perpendicular to the plane of its midband (seen in figure 10.8E). These scans are relatively small (about ° long) and slow (~°/s). The saccadic traces in the same record extend up to ° in length and reach angular velocities of nearly °/s (the animal in figure 10.8B, of a different species, produced saccades with angular velocities approaching °/s). The slow scans are well matched to the temporal properties of individual photoreceptors, making visual information continually available throughout each scan.
Despite their tiny brains, mantis shrimps apparently can integrate the color and polarization information gained during their scans with the spatial information available from the ommatidial array as a whole and somehow combine all this with their eye position information to maintain a sense of the overall organization of the world around them. Although the eye movements of mantis
shrimps are in many ways unusual, these movements provide them with a functional system that is adaptable to a very broad range of situations and visual habitats.
Eye Movements and Motion Vision in Other Animals

Although mantis shrimps are perfect for introducing eye movements and responses to the moving external world, their independent, tripartite eyes produce unusual oculomotor behaviors. We now turn to animals that face specific visual ecological tasks, to see how the basic classes of eye-movement behavior vary across a range of taxa. We begin with optokinetic responses because these are absolutely essential for stabilizing vision in a changing visual world and are thus fundamental for nullifying the visual degradation caused by whole-field retinal blur.
Mechanisms of Image Stabilization

It seems likely that any animal with a decent imaging system has image-stabilizing mechanisms to reduce motion blur. Cubozoan jellyfish, with excellent optics but very poor spatial resolution and no central nervous system, are an exception, marking the divide between simple and more specialized optical stabilizing behavior. Only a sampling of creatures has been examined in detail, of course—the killer fly described at the beginning of this chapter and some of its dipteran relatives have given us some of the best examples. Among vertebrates, fish are particularly interesting because their eyes often sit on the sides of the head with little or no overlap of their visual fields. Despite this, many fish species have bilaterally coordinated patterns of optokinetic nystagmus. When gently restrained and presented with a rotating visual field consisting of vertical dark and light bars (much like the situation of the fly in figure 10.2B), butterflyfish—flat-bodied inhabitants of coral reefs—exhibit very similar sequences of eye movements in both eyes (figure 10.9A). These classic patterns of nystagmus have quick returns (upward on the graph) alternating with smooth, sloping movements drifting with the striped pattern (the diagonal dark lines). On the other hand, pipefish, relatives of sea horses with turret-like eyes (resembling those of chameleons), have only weakly coordinated bilateral patterns of optokinesis (figure 10.9B), and in fact their drifts poorly match the rotating pattern that surrounds them. Such responses are reminiscent of those of mantis shrimps (figure 10.8A), surely because both animals use their independent eyes primarily to look for objects around them. Unlike butterflyfish, mantis shrimps and pipefish often sit in burrows or hold on to stable objects as they survey their surroundings with their independent eyes. Cuttlefish are lens-eyed invertebrate swimmers with a classic pattern of nystagmus (figure 10.9C).
Each eye’s response is nearly a perfect sawtooth, with the slow movements having angular velocities equal to, or slightly slower than, the surround’s (Collewijn, 1970). The movements are of limited extent compared with those of fish and most other animals, and the peaks are much more rounded (particularly considering the slow time scale), perhaps a reflection of the slower swimming speeds and reduced mobility of cuttlefish compared with true fish. But even the slowest cephalopod of them all, Nautilus, with its lensless eyes, rotates synchronously with a striped drum. The drum must move slowly and the stripes must be broad, and the animal rotates with its whole body, but the response nevertheless is proper optomotor behavior (Muntz and Raj, ).
Figure 10.9 Examples of optokinesis in aquatic animals. The movement of the striped drum surrounding each animal is indicated by the slanted lines; movements of the right eye are in green, and those of the left eye in red. Butterflyfish (A) show largely coordinated optokinetic movements of both eyes, whereas pipefish (B) have less synchronized movements. (After Fritsches and Marshall, 2002) For the cuttlefish (C), a marine cephalopod, only the movements of the right eye are shown, but the patterns are much like those of butterflyfish. (After Collewijn, 1970)
It is hard to imagine a situation in nature in which an animal faces a steadily rotating visual field (unless the animal itself is slowly revolving on its axis, something you do not see very often). So what do these experiments on optokinesis tell us about what animals do with their eyes in the real world? Experiments with freely moving animals in natural (or at least naturalistic) environments are technically difficult but not impossible, and they reveal eye movements much like the ones just discussed. For instance, swimming fish actually do turn in circles, or at least swim on curved paths, and when doing so they counterrotate their eyes in exactly the pattern seen during optokinesis (figure 10.10B). The eyes synchronously rotate against the turn made by
the body, remaining almost fixed in their gaze angle, and abruptly flick forward to their starting position relative to the body. In the case of the experiment in figure 10.10B (Fernald, 1985), the fish was swimming in a tank with featureless walls, so visual stabilization was brought about mostly by vestibular, not visual, cues. Similar eye movements occur under visual control when fish swim in visually complex environments (e.g., Easter et al., ). Walking rock crabs, their eyes perched high on stalks, show beautiful, bilaterally coordinated ocular stabilization as they wander about in a naturalistic setting (figure 10.10A; see Paul et al., 1990). Note that as the crab turns this way and that, the eyes remain fixed at a nearly constant angle relative to the external scene, rapidly flicking to a new, stable position as the crab makes a turn. Thus, despite the meandering path taken by the animal, the eyes maintain stable visual fields. Obviously, systems for maintaining visual-field stabilization are widespread among animals, reemphasizing the importance of resisting movements of images across the retina.

Figure 10.10 Ocular fixation movements in action. These are unrestrained animals moving freely; right eyes are plotted in green and left eyes in red, while the black line shows the body axis of the animal. (A) A walking crab, Pachygrapsus marmoratus. Note that as the animal turns during its meanders, the eyes make bilateral saccades to a new fixation point, which they maintain until the crab suddenly turns again. (After Paul et al., 1990) (B) The African cichlid fish Haplochromis burtoni, swimming in a tank. The fish turns smoothly, while the eyes fixate and counterrotate to maintain each fixation. (After Fernald, 1985)

In animals with particularly mobile eyes, nonvisual sensory systems contribute to keeping the eyes fixed. In vertebrates, as in the example of the cichlid fish cited above, vestibular inputs (from inner-ear organs) sense body or head rotation and initiate countering eye movements. This vestibulo-ocular reflex, or VOR, cannot achieve the precision of direct visual input, but its significance is that it works in parallel with visual stabilizing systems, providing a second level of control. Animals without semicircular canals bring other sensory systems into the stabilization mechanism. Wonderful examples are found among crabs, animals with omnidirectional visual fields that are famous for their nonchalance in choosing which part of the body will lead when walking. They unify input from eyes, statocysts (equilibrium detectors), and proprioceptors (monitoring leg movements) to maintain eye orientation as they traverse various obstacles (Nalbach et al., ). The contribution of each sensory system to overall stability varies with the habitat. Beach-living species of crabs, viewing a flat world with consistent visual scenes, primarily use vision for stabilization, with the parts of the eye observing the lateral visual field (which generates the greatest motion flow) contributing the most. Crabs that crawl over rocks and obstacles rely much more on leg proprioceptors, and fully aquatic species, lacking both contact with the substrate and a high-contrast visual scene, are most dependent on their statocysts. Keeping the eyes stable in a changing world is what matters. In the visual control of optokinesis, eye movements are driven by a mismatch between the current speed of the eye and the movement of the visual surround. As a consequence, eye speed can never exactly match the rate of movement of the visual scene—if it did, the signal generating the movement would cease.
Gains for optokinesis (the ratio of eye speed to visual scene speed) are typically high, approaching but never reaching 1.0, and cannot get much closer to unity without causing the control system to become unstable.
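The logic of this feedback loop is easy to sketch. The toy model below is our own illustration (the proportional-control form and the gain value are assumptions, not measurements): the eye's velocity command is proportional to retinal slip, which shows at once why the gain approaches but never reaches 1 and why some slip must always remain.

```python
def closed_loop_gain(k):
    """Optokinetic gain (eye speed / scene speed) for a proportional
    controller in which eye velocity = k * retinal slip."""
    # slip = scene - eye and eye = k * slip, so eye/scene = k / (1 + k)
    return k / (1.0 + k)

def residual_slip(k, scene_speed):
    """Retinal slip remaining at steady state; it can never vanish,
    because it is the very signal that drives the eye."""
    return scene_speed / (1.0 + k)

# A high internal gain k gives an optokinetic gain near (but below) 1:
# k = 9 yields a gain of 0.9, and a 10 deg/s scene still slips across
# the retina at 1 deg/s.
```

Raising k pushes the gain toward 1, but in a real loop the neural delay between photoreception and motor output turns a very high k into oscillation, the instability mentioned above.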
Visual Tracking Behavior

Given the power of all these stabilizing systems, the wonder is that animals manage to unlock their gaze from the visual scene at all. Yet they must, if only to be vigilant or to look for food in unexpected places. To shift their gaze they must transiently uncouple the eyes from the systems that stabilize them, through some sort of centrally generated inhibition. The eye can then initiate a saccade to a new location of primary visual attention (as when the moving target is sighted near the starting points in figure .B). In humans (and probably animals in general), visual input ceases during the saccade, giving a temporary loss of stabilizing input and avoiding the momentary sensation of uninterpretable blur. Thus, making a targeting saccade, or just looking around for the sake of vigilance, is not mechanistically difficult; the eye simply needs to have the stabilizing systems disconnected temporarily and brought back online after the saccade.

A much more challenging problem is visually following a relatively small object as it moves against a stable (or, for that matter, a moving) visual background. The systems that operate for visual stabilization must be overridden if a single object is to be followed—large-field optokinetic forces would strongly resist eye movements for single objects. If the tracking involves locomotion or postural changes, vestibular and proprioceptive countering reflexes would constantly drag the eye off the position of the target as well, even if purely visual reflexes were inhibited. Consequently, tracking requires passing the control of eye movements to a separate subsystem—one that is driven by visual interneurons responding to small items within the larger visual field. In flies these
cells are found in the lobula plate, not far from the tangential cells described earlier; because they respond primarily to small objects in particular visual locations, they are called “figure-detection” or FD cells. These cells drive turning responses much as the tangential cells do, but in response to objects, not backgrounds (see Egelhaaf et al., ). Cells like these serve for orientation to locally moving objects, but they could also play roles in tracking behavior. How specific cells of the nervous system of flies, or of any other animal for that matter, actually place the eye onto the target of interest is not really clear.

Nevertheless, animals certainly find some stimuli to be irresistible. Toads rarely move their eyes, but by rapidly changing head position or posture they snap their attention onto objects that look like dark rectangles moving along their long axes—much as a worm does (Ewert, ). Praying mantids agree with toads in this one particular: they too are most attracted to rectangles moving along their long axes (Prete et al., ). For animals that prey on small, live invertebrates, such visual cues are reasonable. Many flying insects, from houseflies to dragonflies, instead respond instantly to small dark spots moving at an appropriate speed against the sky. Killer flies and dragonflies are aerial insect predators, and their tracking behavior involves some complex planning, so we return to them shortly. Other insects are more interested in socialization, and males (in particular) will chase almost anything moving if it is small and dark. In the case of male houseflies, Fannia canicularis, chasing behavior is relatively easy to describe (figure 10.11A)—on spotting the appropriate stimulus, the pursuer tracks the flight path of its target, duplicating its moves and seemingly following them with great fidelity. The sequence displayed in figure 10.11A lasted only a second, emphasizing the nearly instantaneous decisions made by the pursuer (Land and Collett, 1974).
The pursuing fly alters its course proportionally to the current angle between its own trajectory and the position of the leader, with a lag of only a few tens of milliseconds, evidently set by all the delays in the nervous system from photoreception to motor command—a nearly incredible performance. Slightly more comprehensible are the predatory (sometimes sexual) chases of the water flea Polyphemus pediculus. These also are short and sweet, lasting only a second or two (Young and Taylor, 1988), but because the chaser is in water it is much slower, and its body shape makes it easier to record the body-axis angle (figure 10.11B). Polyphemus, named for the Greek cyclops because of its single huge compound eye, can rotate this eye inside the head capsule, so the body angle does not always indicate the angle of sight; still, the chasing behavior overall is very much like Fannia’s in that the pursuer’s angular velocity is proportional to the error angle in its current heading (the angle between its axis and the location of the visualized prey—or potential mate). Unlike the fly, the water flea has the option of pausing during the chase, seen in both chases illustrated in figure 10.11B. The pauses occur when the target approaches the pursuer and are thought to be responses to expansion of the perceived size of the target; the pursuer waits until the target passes and then resumes the chase, terminating it with a burst of speed.

After this short trip underwater, we return to the flies. Flies like to chase all sorts of things, but for males the most interesting thing of all is a female fly. Males of many fly species have a region of enlarged facets, fostering enhanced resolution and sensitivity, facing forward and often slightly upward (chapter ).
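As an aside, the error-angle rule just described for Fannia and Polyphemus (turning rate proportional to the bearing error) can be sketched in a few lines. This is a toy simulation of our own; the gain, speeds, and geometry are invented for illustration, not taken from the chase recordings.

```python
import math

def simulate_chase(k=10.0, dt=0.01, steps=300):
    """Pursuer turns at a rate proportional to the angle between its
    heading and the bearing of the target (turn rate = k * error)."""
    tx, ty = 0.0, 1.0                        # target starts off to one side
    cx, cy, heading = 0.0, 0.0, 0.0          # pursuer at origin, heading east
    target_speed, chaser_speed = 1.0, 1.5    # pursuer is faster
    for _ in range(steps):
        bearing = math.atan2(ty - cy, tx - cx)
        # bearing error, wrapped into [-pi, pi]
        error = math.atan2(math.sin(bearing - heading),
                           math.cos(bearing - heading))
        heading += k * error * dt            # the proportional rule
        cx += chaser_speed * math.cos(heading) * dt
        cy += chaser_speed * math.sin(heading) * dt
        tx += target_speed * dt              # target flies straight
    return math.hypot(tx - cx, ty - cy)      # final separation
```

With these invented values the pursuer closes nearly all of the initial separation within the simulated 3 s; inserting a sensory-motor delay before the turn, as in the real fly, makes the simulated track loop and overshoot.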
This patch of ommatidia is sometimes called the “bright zone” because of its greater sensitivity, but the sensitivity is there primarily to give the males greatly improved motion vision in this visual location by delivering more photons to the underlying photoreceptors. (For a discussion of the optics of the bright zone—also known as the “love spot”—see chapter .) When a male fly is after a female, his tracking responses keep her image on bright-zone ommatidia. This gives him a decided visual advantage because he can see her at distances where she has no inkling of his existence and can track her movements with extreme speed and fidelity. In male hoverflies, and probably other flies with bright zones, this part of the visual field is served by motion-detecting neurons (SF-STMDs; see chapter ) that respond best to moving objects only ~1° in diameter (about the visual angle of one ommatidium) and detect them faithfully even when the background is moving and filled with clutter (Nordström et al., ). Although we cannot enter the mind of a sex-obsessed hoverfly, it seems unlikely that the fly responds to the moving stimulus as a distinct object. Instead, the processing networks that handle the visual input produce motor outputs that drive the fly to orient toward the target. When and how (and if) the fly comes to realize that it is actually tracking a female is still a mysterious process. Nevertheless, the male is well supplied with tracking neurons that are sensitive, quickly responding, and object specific. Night-active flies have much slower photoreceptors, and it is unfortunate that no one has yet been sufficiently adventurous to take up a study of visual tracking in nocturnal species of flies.

Figure 10.11 Tracking and chasing. The path of the tracker/chaser is plotted in red, with the animal being tracked plotted in black. Numbers adjacent to each track indicate corresponding times. (A) A housefly, Fannia canicularis, chases another, duplicating the target’s flight with great fidelity; points are plotted at 20-ms intervals. (After Land and Collett, 1974) (B) Chases by a water flea, Polyphemus pediculus; points are plotted at 100-ms intervals. During the chase in the bottom part of B, the chasing animal mostly watches its target swim by, backing off a bit as the target gets close (around position 10). (After Young and Taylor, 1988)

Mantis shrimps track moving objects with a mixture of smooth tracking movements, driven by target error angle, interspersed with frequent saccadic fixations.
Flies, in contrast, smoothly track almost all the time, switching to saccadic tracking only when the target is moving so quickly that the chaser barely has time to make course corrections. Their smooth tracking ability is unaffected by the presence of a visually complex and confusing background. Praying mantids, insects that can fly but that are sit-and-wait ambush hunters like mantis shrimps (they share a name because both capture prey with quick, grabbing movements), not surprisingly show tracking behavior that is somewhere in between. Praying mantids foveate their prey quickly and accurately, with the necessary head movements driven by object detectors in the periphery of the compound eyes that generate targeting saccades (Rossel, 1980). An object of interest is first fixated by the ommatidia making up the foveal region and then enthusiastically tracked with head movements (see figure 10.12). If the target appears against a featureless background and moves fairly slowly, tracking is smooth, as seen in figure 10.12A, probably driven by the perceived velocity of the target and not its positional error. However, when the target moves very quickly or is seen against visual clutter, such as grass stalks or even a pattern of random dots, tracking becomes saccadic (figure 10.12B). Essentially, errors accumulate while the animal remains in a head-fixed posture until the head jumps again to the position of the target. Such tracking is really a sequence of repeated target fixations, providing a way to remain aware of the position of a potential prey item when it is hard to see.

Figure 10.12 Tracking in the praying mantis. (A) Smooth tracking against a featureless background, with the target motion shown by the solid line and the animal’s head position plotted as a dotted line. The diagram to the right shows the angles being plotted. (B) Saccadic tracking against patterned backgrounds. (After Rossel, 1980)

Interestingly, mantids also use motion vision for target ranging. They can determine the distance to a nearby visual object using binocular cues, but to range more distant targets they sway their bodies back and forth, moving their heads from left to right and back again. This generates a visual flow field in which objects at different distances move laterally at different speeds, a phenomenon called “motion parallax” (see figure .D), allowing the mantid to gauge the distance to its target.
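The peering trick reduces to simple trigonometry. The sketch below is our own (the function name and example values are invented): a known lateral head movement plus the measured angular shift of the target's image yields its range.

```python
import math

def range_from_parallax(head_shift, image_shift_deg):
    """Distance to a stationary target straight ahead, given a lateral
    head translation (same units as the returned distance) and the
    angular shift of the target's image produced by that translation."""
    return head_shift / math.tan(math.radians(image_shift_deg))

# A 2 cm sway that swings the target's image through 45 degrees puts
# the target 2 cm away; the same sway moving the image only ~11 degrees
# puts it roughly five times farther off.
```

The geometry also explains why peering only works for a stationary (or slowly moving) target: any target motion during the sway contaminates the parallax signal.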
Aerial Interceptors—Capturing Prey on the Wing

We introduced aerial acrobats in the last section: flies that expertly follow and recreate the flight paths of interesting objects, and male flies that use their love spots to track down (or up!) females. Flies follow their targets; by constantly steering toward a stimulus, and by flying faster than it, they will eventually impact the target and can decide what happens next. This can lead to long detours in the flight path if the target passes by and must be chased. Other, more impressive aerial predators waste no effort in following; instead, they head straight for the point of interception with their intended prey.

Dragonflies, though they are ancient insects in the fossil record, are nevertheless the supreme aerial hunters among insects. Judging from their rates of success—well over 90% of attacks lead to prey captures, a far higher rate than that of falcons, the vertebrate experts discussed later—they are arguably the most effective flying predators on Earth. Research on the neurobiology underlying prey capture in dragonflies has focused on the libellulids, perching dragonflies that launch themselves into flight with the passage of a small flying insect (much like the killer fly that introduced this chapter). Often seen on their favored perches beside small ponds, these dragonflies make rapid takeoffs, execute sharp orienting turns, and intercept their prey in much less than a second, entirely under visual control (Olberg et al., ). The control system involves a very small number (perhaps as few as eight pairs) of giant interneurons extending into the ventral nerve cord, the target-selective descending neurons (TSDNs). TSDNs have small receptive fields, preferred motion directions, and direct control over steering movements of the paired wings. To make their interceptions, dragonflies use a simple but robust strategy—they keep the image of the prey at a constant visual angle.
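Before unpacking why this works, it is worth seeing that it does. In the toy model below (entirely our own construction; speeds and starting positions are invented), the attacker steers so that the line-of-sight bearing to the prey never rotates, and the two paths duly meet.

```python
import math

def intercept(prey_speed=1.0, attacker_speed=2.0, dt=0.001, steps=4000):
    """Constant-bearing interception: the attacker nulls any rotation of
    the line of sight to the prey, which guarantees a collision course."""
    px, py = 0.0, 2.0          # prey passes overhead, flying along +x
    ax, ay = 0.0, 0.0          # attacker launches from its perch
    separations = []
    for _ in range(steps):
        los = math.atan2(py - ay, px - ax)   # line-of-sight angle
        beta = 0.0 - los                     # prey heading relative to LOS
        # null LOS rotation: match velocity components perpendicular to LOS
        alpha = math.asin((prey_speed / attacker_speed) * math.sin(beta))
        heading = los + alpha
        ax += attacker_speed * math.cos(heading) * dt
        ay += attacker_speed * math.sin(heading) * dt
        px += prey_speed * dt
        separations.append(math.hypot(px - ax, py - ay))
    return separations
```

Note that the attacker uses only the bearing and its rate of change, never the range to the prey.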
As long as this angle is acute and the dragonfly’s speed exceeds that of the victim (necessary for a constant angle in any case), the trajectories of the two animals must intersect. Thus, the predator need not know the precise distance of the prey when it launches an interception. On the other hand, dragonflies never take off after aircraft or birds at distances where their visual sizes are similar to those of nearby insects, so they must have at least a crude way to judge the range to the target. This judgment may be made by translations of the head, similar to motion-parallax rangefinding in praying mantids. Dragonflies simplify their visual problems by hunting over open spaces such as ponds, where they have an unobstructed view of the sky. Technical hurdles have so far limited research to perching species, but it is very likely that cruising dragonflies use a similar interception strategy, visually trawling with their upward-looking acute zones and locking onto a constant visual angle when chasing potential prey.

Some of the newest work on dragonfly motion vision and its role in flight control is being carried out in a completely artificial flight arena (see figure 10.13). Dragonflies are offered a place to perch, where a tiny capacitor they carry on the thorax is recharged. The capacitor powers a miniaturized, dragonfly-portable data transmitter. Fruit flies in the arena serve as natural prey, and when the dragonfly launches into flight, the electrical activity in one or more TSDNs is relayed wirelessly to a recording station while the flight path, the positions of predator and prey, head and body posture, and wing action are all simultaneously monitored by high-speed video cameras. The research exploits the behavior of a free-flying predator operating in a controlled, naturalistic system to learn how its target-tracking and flight-control systems interact to create a complete behavioral sequence.

Figure 10.13 Flight room at the Janelia Farm research center in Virginia for the study of the neurobiology underlying motion sensing and flight control in dragonflies. The room is designed to be naturalistic, with sunlight-intensity lighting and temperatures at which dragonflies naturally hunt. The walls are lined with high-speed video cameras for motion and behavioral tracking and with wireless receivers that collect information from the miniature data transmitters carried by the dragonflies, relaying signals from visual interneurons and flight muscles. The platform in the center serves as a perch for the dragonflies and also allows controlled release of the fruit flies used as flying prey. (Photo courtesy Anthony Leonardo)

Dragonflies may be the ultimate aerial predators of small flying creatures, but at larger prey sizes, falcons rule the roost. Visual tracking during their impressive, high-speed pursuits is unusual because of a particular feature of their vision. Falcons are, of course, famous for their extremely acute vision, probably the best of all animals. Less well known is that they have two foveas in each eye, each sampling a spot in the anterior visual field—and somewhat surprisingly, it is the more lateral-facing deep fovea (aimed roughly 45° from the axis of the head) that gives these raptors their legendary acuity. Falcons can barely move their eyes in the sockets (O’Rourke et al., ), and they must keep their heads pointed more or less straight ahead for streamlining in high-speed dives. Consequently, the only way they can both fly extremely fast and keep the prey centered in the deep fovea of one eye is to approach the prey on a logarithmic spiral path (Tucker et al., 2000). Figure 10.14 shows a number of flights by the same peregrine falcon preying early in the day on songbirds flying over a lake.

Figure 10.14 Aerial tracking by a peregrine falcon at a cliff site in Colorado (Dead Horse Lake), showing spiral flight paths made to intercept prey over the lake. Crosses show perches favored by the hunting falcon. The dark trace is a logarithmic spiral, the shape of the flight path that would be followed by the raptor if it kept prey fixated on the deep fovea of its right eye. Most attacks closely follow this track. (After Tucker et al., 2000)

The plus signs in the figure indicate perches the falcon used while searching for prey, and the dark curve is the ideal spiral flight path, which is a good predictor of most flight courses (not all of which ended at the same interception point). This falcon did launch a few straight chases, most of which were shorter than the spiral flights, and it always favored its right eye. The bias could have been related to the topography of the site or to individual preference, but a more intriguing possibility is that, like the Red Baron, the bird was using the glare of the sun in the east for concealment. The main point here is that the visual axis, the need for extraordinarily acute vision (songbird prey were sighted and tracked from up to a mile away), and the requirement for streamlining during the attack phase combine to produce an unusual and effective prey-capture behavior.
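The spiral itself follows from the geometry: holding a fixed target a constant angle off the flight direction makes the range shrink at the constant rate v cos(offset), tracing a logarithmic spiral. A quick numerical check (our own sketch; the 45° offset, speed, and starting distance are illustrative, not the falcon's measured values):

```python
import math

def spiral_approach(offset_deg=45.0, speed=1.0, dt=0.001, steps=5000):
    """Fly at constant speed while holding a stationary target a fixed
    angle off the flight direction; the path is a logarithmic spiral."""
    x, y = 10.0, 0.0                        # start 10 units from the target
    offset = math.radians(offset_deg)
    radii = []
    for _ in range(steps):
        to_target = math.atan2(-y, -x)      # bearing of target (at origin)
        heading = to_target + offset        # keep target off-axis by `offset`
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        radii.append(math.hypot(x, y))
    return radii

# Range falls steadily at speed * cos(offset), independent of distance,
# while the path curls around the target: the signature of a
# logarithmic spiral.
```

With a 45° offset the bird gives up only about 29% of its closing speed (1 − cos 45°) in exchange for keeping the prey on its sharpest retinal region; heading straight at the prey would close faster but would take the image off the deep fovea.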
Predation Controlled by a Scanning Eye

The presence of a strip retina in an eye is generally coupled with the use of scanning movements to cover visual space (Land and Nilsson, ). Except in mantis shrimps, scanning movements are rarely used to fill in image details. Instead, animals scan to pick out spot-like objects or other critical shapes in their environments, often for mating or prey capture. In some marine pelagic molluscs and pontellid copepods, either the retina scans inside the eye or the eye turns about its center; there is no overall translation of the eye’s position. Some dragonflies with huge compound eyes also hunt using a sort of scanning behavior: they have a strip-shaped acute zone aimed upward and ahead, and they use their forward motion to scan the sky above for the dark silhouettes of prey. The only animals known to use retinal scans to look for specific visual patterns are jumping spiders, whose boomerang-shaped retinas are used in part to recognize the splayed legs of other spiders. However, there is one group of predatory insects that scans while hunting, apparently to recognize prey, check its size, and range its distance in preparation for a strike: the larvae of predacious diving beetles.

Larvae of the beetle Thermonectus marmoratus (figure 10.15) hang head-downward in water, using their frightening anterior array of eyes and jaws to search for and ultimately seize prey such as larvae of mosquitoes or Chaoborus midges (glassworms). Two pairs of large, tube-shaped lens eyes face forward, and each contains a beautiful example of a strip retina (figure 10.15C). Because the eyes are fixed in the head, the only way to extend the field of this linear photoreceptor array is to swing the head up and down—which is just what these larvae do when sighting prey. They prefer to strike near the center of their worm-like prey and track its location with vertical scans while slowly approaching (Buschbeck et al., ). Once within a few millimeters of the prey item, they pounce to seize the victim in their jaws. Although it is not proven, it is possible that the distance to the prey at the instant of the strike is also determined by the scans using motion parallax; mechanical or other cues could be involved as well. Note that the scans have a slow downward phase alternating with quick upward returns, implying that visual information is gathered as the head swings downward.

Figure 10.15 Scanning behavior in larvae of the water beetle Thermonectus marmoratus. The color photographs show the whole larva (A) and a close-up of its head (B), with two pairs of large tubular eyes and jaws for seizing prey. The black-and-white photomicrograph (C) shows the strip retina in one of these eyes; the fan-shaped gray arrays extending above and below the line of dark pigment are the photoreceptors. Graphs (D,E) show the eye movements made as a larva approaches its prey, with slow downward scans and rapid upward returns. (Photographs and data courtesy Elke Buschbeck)

Thermonectus larvae are the only animals known to scan visual objects in this way, but head-swinging behavior, of course, is used to generate motion parallax in many animals that have more conventional extended retinas. Whether scanning or generating motion parallax, eyes must move slowly enough to avoid motion blur: each receptor’s view must remain within its receptive field for at least the time required to generate a signal that can be uniquely localized in space. This is why scanning movements, whether of the retina, the eye, or the whole head, are always much slower than saccades.
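The blur limit has a simple form: a receptor with acceptance angle Δρ and integration time Δt can tolerate scan speeds only up to about Δρ/Δt. A minimal sketch (the example numbers are illustrative, not measurements from any particular eye):

```python
def max_scan_speed(acceptance_angle_deg, integration_time_s):
    """Fastest angular scan (deg/s) that keeps a point within one
    receptor's view for a full integration time: the blur limit."""
    return acceptance_angle_deg / integration_time_s

# Receptors accepting 2 degrees with a 50-ms integration time limit
# scanning to about 40 deg/s, far below typical saccadic speeds of
# hundreds of degrees per second; this is why scans and saccades are
# so easy to tell apart in eye-movement records.
```

The same trade-off explains why dim-light eyes, with their longer integration times, must scan even more slowly.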
Active Vision—Living with Flow Fields

We began the chapter with a discussion of motion stimuli, and there we introduced the most common of them all: the patterns of optic flow caused by self-motion. We then examined specific types of eye movements that occur in animals either at rest or in motion. It is time to return to whole-field motion stimuli, produced whenever an animal moves its eye, head, or body. The textbook example of a flow field is what we experience when being passively moved, as when looking out of a train window or enduring the frightening view from the front car of a roller coaster. But perhaps a more useful example is the flow field experienced when driving a car, where optic flow is constantly monitored for the distances to objects, the shape of a curve, the speed of the car both absolutely and in comparison to other moving vehicles, the direction of travel, and the rate of approach to an object slowing down or stopped in front of us. All these perceptual aspects of optic flow seen from a car are obvious with just a little thought. Yet we seldom think about them in connection with the more natural actions of running or walking or even just sitting and looking around. In some cases one does not even have to move to experience some of the most biologically relevant flow fields. Whether they are aware of it or not, animals rely on optic flow to get around in the world.

As with so many other aspects of motion vision, the best-studied examples are in the insect world. Whereas flies are favored subjects for physiological research on motion vision, the insects that have been most thoroughly studied in actual motion are honeybees, Apis mellifera.
The visual world flowing past the compound eyes of flying bees informs them about their flight speed, the distance they have traveled, the absolute and relative distances of nearby objects they fly past, the point at which they should extend the proboscis to probe a flower for nectar, the location of the center of the nest entrance, their height above the ground or above objects extending up from it (as well as the relative heights of such objects), the instant to extend their legs for a smooth landing, and more. It is obviously impossible to review the research behind all these findings here (Srinivasan, , has already done this for us), but it is instructive to consider a couple of particularly illuminating studies. Here we examine how bees manage to fly through gaps without hitting the walls and how they use motion cues to discriminate the relative elevations of objects over which they fly.

One reason bees are such attractive subjects is that they are highly motivated to remember and return to a location where they have found food, especially sugar water. Once the reward has been found, a bee will enthusiastically fly between the sugar solution and the hive, making it fairly easy to arrange experiments that affect its motion perception en route. Happily, bees are willing to fly down tunnels at such times, and the walls of the tunnels can be patterned in various ways to examine the cues that influence flight speed and orientation (see figure 10.16). This approach can be used to learn how bees fly past obstacles and transit gaps without difficulty. Bees were trained to fly down a tunnel to reach their sugar reward. The tunnel walls were decorated with simple patterns—black and white stripes oriented vertically—that provided strong optic flow as a bee flew past; obviously, the faster the bee flies, the greater the flow. The clever manipulation the investigators added was to move the pattern on one wall of the tunnel either with or against the flight direction of the bee, providing different flow stimuli on the two sides (Kirchner and Srinivasan, ). Consider a pattern on the bee’s left side that moves in the direction of flight. The bee will experience a reduced speed of stimulation on that side compared to the static pattern on its right. To balance the flow speed on both sides, she moves closer to the moving pattern, re-establishing bilaterally equal stimulation and flying down the tunnel to the left of center. The opposite holds if the left-side pattern moves against the direction of flight—the bee has to move to its right to cause a relative increase in the stimulation on its right side.
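The balancing rule is easy to write down. For a wall at lateral distance d, the angular flow it produces is (flight speed − wall-pattern speed)/d, and the bee settles where the two sides match. The sketch below is our own formulation (function names and numbers invented), not the authors' analysis:

```python
def equilibrium_position(tunnel_width, flight_speed, left_wall_speed=0.0):
    """Distance from the left wall at which angular flow from the two
    walls is equal; flow from a wall at distance d is
    (flight_speed - wall_speed) / d."""
    left_rel = flight_speed - left_wall_speed   # relative speed, left wall
    right_rel = flight_speed                    # right wall is static
    # solve left_rel / d = right_rel / (tunnel_width - d) for d
    return tunnel_width * left_rel / (left_rel + right_rel)

def speed_for_constant_flow(flow_setpoint, wall_distance):
    """Flight speed that holds angular flow at a set value: as the
    walls close in, the same rule automatically slows the bee down."""
    return flow_setpoint * wall_distance

# With static walls the equilibrium is the tunnel midline; a left-wall
# pattern moving with the bee lowers the flow on that side, so the
# equilibrium shifts toward that wall.
```

Note how the second function anticipates the tapered-tunnel result described next: holding flow constant converts wall distance directly into a safe flight speed.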
In the real world, balancing the optic flow on the two sides will neatly take the bee between obstacles; she need do nothing more than keep the flow equal on both sides. The nature of the patterns themselves is completely irrelevant, neatly avoiding the problem of incorrectly judging fine textures to be farther away than coarse ones. In an experiment similar to the one just described, bees flew through a tunnel whose walls tapered to a narrow gap. Although the bees negotiated the gap without difficulty, the interesting observation was that they slowed their flight to maintain a constant rate of optic flow as the walls encroached (Srinivasan et al., ). This has the desirable benefit of allowing the bee to travel as fast as it likes in open space, where everything is far away, while imposing a speed limit in crowded places where collisions are much more likely. Thus, the bee decelerates as she approaches a narrow space, equalizes optic flow on both sides (which naturally centers her in the opening ahead), and accelerates back to cruising speed once through the gap. If a wind is introduced into the tunnel, the bee compensates by adjusting her airspeed to keep optic flow constant.

Bees use optic flow not only to monitor their speed; as with the speedometer-odometer of a car, the same information gives them the distance traveled. We explore this more in chapter . Before leaving bees, however, let us look at another kind of information that a bee acquires via optic flow—the relative heights of objects above the ground over which it flies. Traveling in a three-dimensional world, bees need to know how far beneath them the flowers on which they forage are located and—equally importantly—which flowers are higher and which are lower.
Bees were tested for their ability to make this evaluation by first training them to feed from the highest in an array of artificial flowers—black or colored disks on stalks, with sugar water on the highest (Lehrer et al., 1988). They were then allowed to fly over a new “flower arrangement” with flowers of various heights (figure 10.17). The bees strongly favored the highest flower, showing that they could discriminate its height when flying over the group. The bees could also be trained to land above flowers that they could not actually access on the floor or at an intermediate height, because the artificial flowers had been placed on layered sheets of transparent plastic, proving that the ability to judge height was not simply a result of choosing the flower producing the greatest optic flow. Flying honeybees monitor the heights of objects below them and use this information in foraging.

Figure 10.16 A flight tunnel used in experiments on motion vision of flying honeybees. The hive is located at one end of the tunnel, and sugar-water food is available at the opposite end. The photograph shows the patterned walls and some bees making their foraging flights through the apparatus. Overhead cameras record their flights. (Photograph courtesy Marie Dacke)

Figure 10.17 Schematic drawing of the apparatus used to test the ability of honeybees to judge the relative heights of “flowers” using motion vision. The black disks are artificial flowers on which drops of sugar water can be placed as rewards. The bees are later tested for their ability to recognize the vertical position of the flowers containing food. (After Lehrer et al., 1988)

Motion cues are used widely, perhaps universally, among flying insects to regulate speed and for directional control and object avoidance. One great advantage of monitoring optic flow rather than the positions of individual objects is that only the drift of patterns across the retina needs to be tracked; an animal does not have to keep watch on every item in view. It is no surprise, therefore, that birds—with brains thousands of times larger than those of honeybees—use essentially identical strategies to maneuver themselves successfully, and at high speed, through a forest world crammed full of obstacles (Bhagavatula et al., ). By monitoring the relative speeds of objects on the retina, they balance optic flow to pass neatly through gaps and around tree trunks and branches. Of course, birds walk as well as fly, and optic flow is probably one cue walking birds use not only for localizing objects they pass but also for finding and maintaining fixation on food items on the ground. Tall birds such as herons and cranes search for live food (fish, insects, small vertebrates) near their feet as they stalk through shallow water or grass. Their stalking is accompanied by a series of head bobs: successive fixations on the ground alternating with rapid projections of the head (and eyes) forward to the next fixation point (figure 10.18).
The rate of head bobbing varies with walking speed. When hunting, cranes use a walking speed that produces a fixed duty cycle between the fixation phase and the projection phase, presumably permitting them to notice small moving objects while stabilized and to obtain spatial knowledge of their visual surroundings via optic flow while on the move to the next fixation. The crane in figure 10.18 had already spotted a food item on the ground and refixated during each projection phase to keep its image centered on both retinas. Walking birds are the only animals known to have found a way to utterly stabilize their vision—albeit intermittently—while steadily making progress through the world and taking full advantage of the information in the self-generated flow field. Flexible necks and legs also provide birds with fully stabilized vision even when grasping a flexible, moving perch (Katzir et al., ; Zeil et al., ).
Motion Vision and Eye Movements 259
Figure 10.18 Head movements of a foraging whooping crane searching for food on the ground. In this example, the crane knew the location of the food item (a piece of fish). It fixated the location of the food (shown by the ×) while making a series of head saccades timed to its stepping behavior. Each movement generates visual flow, while the subsequent fixation keeps the food’s location in sight. (A) Successive positions of the head (blue, the bill angle is indicated by the lines), the center of the body (black), and the location of the lower right (green) and left (red) legs and feet. (B) The graph shows the timing of movements of each body region, with letters indicating successive fixations.
Most cases of large-field optic flow are like those we have been discussing, generated by the motion of the animal itself. There are special cases in which the stimulus is produced externally to the creature sensing the flow, and one of the most important examples is a looming stimulus. Looming refers to the rapidly growing size of an approaching object, perceived as an expansion outward of the object’s edges. Animals are supremely sensitive to the strong visual flow produced because it generally indicates either a predatory attack or an imminent collision. Virtual looming stimuli, often reduced to just a rapidly expanding black disk on a white computer screen, are useful in studies of visual perception. Animals regard such stimuli as highly threatening, making them useful for neurophysiological and behavioral work on perception. Cells involved in the processing of visual stimuli in many insects, including dragonflies, reliably fire at a predictable time before expected “contact” of the approaching virtual object, independent of its rate of growth, clearly acting as collision alarms. A good example is seen in figure ., illustrating firing of a cell called DIT3, found in the ventral nerve cords of dragonflies. This cell fires off a train of action potentials about
Figure 10.19 Responses of DIT3, a dragonfly interneuron that responds to looming stimuli, signaling estimated time to contact. Three examples of electrical responses are shown, with the stimulus indicated on the left. The actual stimulus was an expanding circle on a computer screen, with its growth indicated by the arrows. The responding cell fired off action potentials about 150 ms before the expected contact, with similar timing independent of the object’s contrast with background—darker in the top case, equal (on average) in the center, and brighter at the bottom. (Figure courtesy of Robert Olberg)
150 ms before the expected time of contact and is relatively insensitive to the contrast of the target on its background. For example, in the top panel the target is darker than the gray background, in the middle equal (on average), and in the bottom it is brighter, but the response is launched at about the same time in all three situations. Looming detectors, acting as collision sensors, are commonly found in insects and are probably widespread among animals.
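One standard account of such rate-invariant timing is the "tau" model, in which the ratio of an object's angular size to its rate of angular expansion approximates the time remaining to contact, independent of the object's physical size or speed. The sketch below illustrates that model; it is offered as an assumption, not as the demonstrated mechanism of the dragonfly cell.

```python
import math

# "Tau" time-to-contact sketch: for an object of radius r approaching at
# constant speed v from distance d, angular size / expansion rate ~ d / v,
# regardless of r and v individually. The expansion rate is estimated
# numerically over a short interval dt.

def time_to_contact(r, d, v, dt=1e-4):
    angular_size = lambda x: 2 * math.atan(r / x)
    expansion_rate = (angular_size(d - v * dt) - angular_size(d)) / dt
    return angular_size(d) / expansion_rate

# Two objects at the same distance and speed but tenfold different sizes
# yield nearly the same tau (about 0.5 s here):
small = time_to_contact(r=0.01, d=1.0, v=2.0)
large = time_to_contact(r=0.10, d=1.0, v=2.0)
assert abs(small - large) < 0.01
```

A neuron with access to an object's retinal size and its expansion rate could therefore signal an impending collision at a fixed lead time without knowing anything about the object itself.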
Motion Vision and Visual Ecology
This chapter has discussed how motion is perceived and processed by animals and has provided a series of examples of how animals use their eyes to manage perception of motion in their environments. Before ending, let us consider some general points about the visual ecology of motion vision. Thus far, only one review of this topic has appeared, by Eckert and Zeil (). They noted that animal motion vision is largely a function of the properties of the nervous system, the geometry of events and motions around an animal (including those generated by its own behavior), and the nature of motion in the visual environment itself. Little research, however, has examined the formal principles of visual function in natural motion worlds. Here, we have mainly covered the receptor and neural side of the story, describing how neural circuitry and visual motion relate to the motion ecology of an animal and, to a lesser degree, to its interactions with its environment (including mates, potential prey, and other animals). Research on the other side of the story, the properties of image motion in the natural world, is now under way, with much of the focus on the motion context for animal signals that are themselves motion based. Peters et
al. (), for instance, considered how wind-generated motion noise would affect signal detection by an Australian lizard, Amphibolurus muricatus, an animal that uses tail flicks and other body movements as signals. Not surprisingly, the work found that in calm weather, motion signaling is nearly unambiguous. As wind speeds pick up, only some parts of the lizards’ displays would be easily discriminated from environmental motion, and in strong winds the motion signal could easily be confused with background events. The lizards apparently compensate by using signals intermittently in strong winds, perhaps to draw attention to them against a background of more continuous motion. The lizards might also change their location depending on the background conditions at the time of display. Previous sections have relied heavily on motion vision in flies, and it is appropriate to return to them to conclude this chapter with a nice example of a neural system that matches its ecology. The flies we usually have to cope with zip around at high speed in the bright light of day, but many flies bumble around effectively in much dimmer light. By characterizing photoreceptor function in a diverse selection of dipteran species from many light environments ( species active from midday to twilight), Laughlin and Weckström () were able to examine how the properties of their photoreceptors varied. Day-active flies traveling at high speed (and thus generating high retinal velocities) in bright light had “fast” photoreceptor cells, reaching peak responses in milliseconds after a light flash. The twilight species, far more cumbrous in their flight, had receptors that responded up to an order of magnitude more slowly.
Differences in response kinetics could be explained by changes in potassium conductance in the photoreceptor cells—the high-speed receptors of diurnal flies rapidly chewed through energy resources due to the need to maintain membrane potentials in the face of large ion currents, whereas the seemingly awkward nocturnal animals had receptors that were fully functional at far lower energy demand. Putting it another way, the nocturnal species economized on photoreceptor energetics by waiting out the time required to capture sufficient photons to produce a useful response. Here, at the close of our tour through motion vision, we see the visual ecology of motion vision reduced to the function of a single class of ion channel.
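The economics of this speed-for-reliability tradeoff can be sketched numerically; the photon rates and integration times below are illustrative assumptions, not values from the study.

```python
import math

# Sketch: a photoreceptor integrating for time T catches N = rate * T
# photons, and photon shot noise limits its signal-to-noise ratio to
# about sqrt(N). A slow receptor trades temporal resolution for
# reliability in dim light, at far lower metabolic cost than a fast one.

def photon_snr(photons_per_s, integration_time_s):
    return math.sqrt(photons_per_s * integration_time_s)

fast_day = photon_snr(1e6, 0.01)    # bright light, fast receptor: SNR 100
slow_night = photon_snr(1e3, 0.1)   # dim light, slow receptor:    SNR 10
fast_night = photon_snr(1e3, 0.01)  # dim light, fast receptor:    SNR ~3
```

In dim light, the tenfold-longer integration of the slow receptor triples the signal-to-noise ratio of the fast one while demanding far smaller sustained ion currents.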
11 Vision in Dim Light
On Barro Colorado Island, in the middle of the Panama Canal, the day is drawing to its inevitable close. The sun’s last rays spread gold across the steamy rainforest canopy, and a relentless chorus of bush crickets begins to crank to life. The howler monkeys, cementing their territories with a final volley of hoots, settle down for the night. An impenetrable twilight slowly shrouds the understory. Concealed by the growing cover of darkness, other animals begin to stir, making ready to forage for food and to find mates. One, the nocturnal sweat bee Megalopta genalis, emerges from her hollowed-out stick in search of night flowers. It is incredibly dark, but like all bees, she turns midflight to face her nest, flying in wide slow arcs to reacquaint herself with local visual landmarks before flying off through the trees into the fading twilight. On return, she will use these landmarks to pinpoint her nest stick in the tangled forest undergrowth. By then the darkness will be profound, but her tiny eyes and brain somehow allow her to fly through vines, trees, and bushes, recognize the landmarks she saw earlier, and enter her nest laden with pollen. How does she do all this when at the same light levels a human—with large eyes and brain—would essentially be blind? The same question can also be asked of animals living in the vast darkness of the deep ocean, where often the only things that can be seen are the fitful sparks of living light emitted by animals themselves. The question’s answer—which reveals the remarkable adaptations that have evolved for vision in dim light—is at the heart of this chapter. Darkness provides excellent advantages for a wide variety of animals, for the simple reason that vision—a primary sense for predators and foragers alike—becomes severely disabled when faced with a paucity of light.
Thus, in a fiercely competitive rainforest like the one inhabited by Megalopta, the cover of night provides respite from visually dependent predators and competitors, a fact that has encouraged the evolution of nocturnal activity in many different taxa. In the endlessly dim world of the deep ocean, the cover of darkness is instead permanent, and vision is relentlessly pressed at the limits of the physically possible. In some species the eyes have evolved extreme adaptations for extracting the most fleeting of visual cues. Others have given up the fight altogether, their eyes having regressed to mere vestiges. But in the world’s
dimmest habitats, vision nonetheless plays a surprisingly important role in the lives of animals. Humans rarely consider the visual world of animals that live in very dim light. Indeed, many of us have the rather misguided belief that because we are unable to see well in dim light no animal can. But in reality nothing could be further from the truth. Many animals—some of them surprisingly familiar—have exquisite vision in dim light. Many are able to distinguish color, orient using the faint pattern of polarized light formed by moonlight, and navigate using landmarks or the positions of stars, abilities that have been particularly well studied in nocturnal arthropods (see below). Although the visual abilities of deep-sea animals are more poorly understood, their highly developed eyes and visual systems argue that they are likely to be formidable, at least for a restricted range of ecologically crucial tasks. This prediction is underscored by the fact that the visual system typically consumes a sizable fraction of an animal’s total energy budget (Laughlin et al., ), and in the deep sea, where food is seldom encountered, the possession of a well-developed visual system invariably indicates a heavy reliance on vision. Irrespective of the habitat or light level, the main task of vision is to reliably distinguish objects against a background, thus allowing the animal to react accordingly. For such reliable discrimination the object must have a sufficiently high physical contrast against the background. This contrast—defined by the object’s relative luminance, color, or polarization—is determined by the spectrum of illumination, the physical properties of the object and its background, and the nature of the medium through which light reaches the eye. 
In a terrestrial habitat the slightly redder spectrum of illumination provided by a starlit sky (chapter ) may slightly alter the colors of objects, but otherwise object contrasts are pretty much the same at night as they are during the day. However, in ocean habitats, where the illumination spectrum depends on depth and the clarity of seawater varies, the same object may differ markedly in physical contrast from one location to another (a fact that has been elegantly exploited by marine animals for camouflage and signaling—see chapter ). However, at any one mesopelagic location in the ocean, the physical contrast of an object—just as in terrestrial habitats—will differ little from night to day, over light levels that vary by at least eight orders of magnitude. Even though the physical contrasts of objects vary little with light level, their visual contrasts—the contrasts of objects experienced by the visual system—will vary significantly, declining steadily with decreasing intensity. The reason this happens is twofold. First, as light levels fall, animal eyes struggle to absorb sufficient photons to support reliable contrast discrimination. Second, as we discussed in detail in chapter , this declining visual “signal” exposes an increasingly significant visual “noise.” As the visual signal-to-noise ratio plummets, the finer contrasts of the world are soon erased, and the visual impression becomes increasingly restricted to a smaller range of higher contrasts (which are invariably associated with coarser details). Thus, in the end the task of seeing well in a dimly lit world reduces to one of extracting reliable information from what is inherently an unreliable visual signal. The eyes of nocturnal and deep-sea animals tend to perform this task very well, thanks to an impressive suite of both optical and neural adaptations, many of which we will discuss in this chapter. 
But before we do, let us briefly explore how well these animals—and specifically those better-understood species active at night—can actually see.
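The shot-noise logic behind the loss of fine contrast in dim light can be made concrete with a standard square-root argument; the reliability criterion k and the photon counts below are illustrative assumptions, not values from the text.

```python
import math

# Sketch: detecting a Weber contrast C against a background delivering
# N photons per receptor per integration time requires the contrast
# signal C * N to exceed the shot noise k * sqrt(N), giving a minimum
# detectable contrast of roughly k / sqrt(N).

def min_contrast(photons_per_integration, k=2.0):
    return k / math.sqrt(photons_per_integration)

bright = min_contrast(1e6)  # 0.002: subtle 0.2% contrasts detectable
dim = min_contrast(25)      # 0.4: only coarse, high contrasts survive
assert dim > bright
```

As the photon catch N collapses from daylight to starlight values, the threshold contrast rises by orders of magnitude, which is why the dim-light world is reduced to its boldest, coarsest features.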
The Remarkable Visual Abilities of Nocturnal Animals
The visual performance of nocturnal animals in dim light is a relatively new area of behavioral research, but many studies over the last – years have arrived at the same undeniable conclusion—nocturnal animals that rely on vision for the tasks of daily life invariably see extremely well. Even though there are now a growing number of studies on vertebrates (particularly birds and primates), most recent work has been restricted to nocturnal arthropods (Warrant, a; Warrant and Dacke, , ) and cephalopods (Allen et al., ), the groups to which we turn first.
Arthropods
Nocturnality has arisen in many lineages of arthropods, even among those that are predominantly diurnal, such as the Hymenoptera (bees, ants, and wasps). Despite their tiny eyes and brains, nocturnal arthropods can perform truly impressive visual feats at night (Warrant, a; Warrant and Dacke, , ). Exactly how they do it is still largely unknown, but recent research has shown that their visual abilities can rival those of their diurnal relatives. The best-known nocturnal insects are moths and beetles, which are readily attracted to lights at night. As we saw in chapter , most of these have refracting superposition compound eyes. Compared to an apposition compound eye (the eye type typical of diurnal insects), an equal-sized superposition eye is capable of supplying each photoreceptor with hundreds of times as much light due (among other things) to the presence of a large visual aperture that is created by an internal clear zone and crystalline cones with graded-index optics (see figures ., ., and .). This extra sensitivity has allowed many nocturnal moths and beetles to perform essentially the same visual tasks as their close diurnal relatives. The elephant hawkmoth Deilephila elpenor is a wonderful example. This beautiful and strictly nocturnal moth searches for flowers at night and feeds from them in flight while hovering in front of them, much like a hummingbird. By training these moths to associate a sugar solution with certain colors (figure .A), it was possible to show that they possess trichromatic color vision at nocturnal light levels (figure .B,C; Kelber et al., ). They also demonstrate color constancy, the ability to continue distinguishing colors correctly despite slight changes in the illumination spectrum. These abilities are undoubtedly useful for finding flowers at night. Until this discovery nocturnal color vision was unknown, although it is likely to be widespread.
Indeed, it has recently been confirmed in geckos (Roth and Kelber, ) and in nocturnal bees (see below). Curiously, the visual abilities of nocturnal moths are indirectly evidenced by an entirely separate group of arthropods—the nocturnal orb web spiders. The nocturnal orb web spider Neoscona punctigera uses two small pale dots on the ventral side of its abdomen to lure moths to its web (Blamires et al., ). Despite being small, these dots are broad-spectrum reflectors of moonlight and have high contrast (Chuang et al., ). Why the dots are so attractive remains unknown, but the large number of moths they attract is testament to the superior sensitivity (and possibly even spatial resolution) of moth superposition eyes.
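The "hundreds of times" advantage of superposition optics follows directly from aperture areas. Below is a deliberately simplified sketch: the facet and aperture diameters are illustrative assumptions, and the full sensitivity equation's focal-length and photoreceptor-absorption terms are omitted.

```python
import math

# Photon capture scales with the area of the aperture serving each
# photoreceptor. In an apposition eye that aperture is a single facet
# lens; in a superposition eye, the clear zone lets an effective
# aperture many facets wide serve each receptor.

def aperture_area(diameter_um):
    return math.pi * (diameter_um / 2) ** 2

facet = aperture_area(25)           # one 25-um facet (apposition)
superposition = aperture_area(500)  # a 500-um superposition aperture
gain = superposition / facet        # ~400-fold: "hundreds of times" more light
```

Because capture grows with the square of aperture diameter, an aperture only twenty facets wide already delivers a several-hundred-fold gain in photon catch per receptor.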
Figure 11.1 Nocturnal color vision in insects. (A–C) Color discrimination by the nocturnal hawkmoth Deilephila elpenor. In color discrimination experiments (A), the moth is required to choose a learned colored target from a selection of targets of various other colors. At starlight intensities, Deilephila is capable of discriminating a learned colored target (blue B or yellow C) from eight different shades of gray (both brighter and darker) but not from different shades of the same color. (From Kelber and Roth, 2006) (D) The giant nocturnal Indian carpenter bee Xylocopa tranquebarica, which has color vision at night despite possessing apposition compound eyes. Scale bar = 10 mm. (Photograph reproduced with kind permission of Nicolas Vereecken)
Among beetles, the nocturnal dung beetle Scarabaeus zambesianus (figure .A) is capable of using the dim polarization pattern produced by moonlight as a compass cue to roll a ball of dung away from the dung pile along a straight line (which they must do in order to most efficiently avoid competition: figure .C). Proof of this ability can be seen when a linearly polarizing filter is placed over a beetle while rolling under the night sky (figure .D). If the polarization transmission direction of the filter is oriented perpendicular to the dominant direction of linearly polarized light emitted from the sky as the moon is rising, beetles will correspondingly turn by about ° (either left or right), tricked by the sudden change in compass information. When the filter’s transmission direction is instead oriented parallel to the night sky’s dominant polarized light direction, the beetles continue to roll in their original direction. A specialized region in each dorsal compound eye—known as the “dorsal rim area”—is responsible for analyzing celestial polarized light and providing the information necessary for the beetles to orient (figure .B). The best-known examples of insects that possess a dorsal rim area are all diurnal (see chapter ), and these are well known to use the bright polarization pattern produced by scattered sunlight as a compass cue for straight-line navigation during the day (see chapter ). In contrast, S. zambesianus is the first nocturnal species known to be capable of using celestial polarization cues for this purpose at night (Dacke et al., a). Considering the large number of animals that navigate at night, this ability is again probably widespread. Remarkably, a close relative—the dung beetle Scarabaeus satyrus (figure .A)—is even capable of using the broad stripe of starlight in the Milky Way as a directional cue for straight-line navigation on moonless nights (figure .E; Dacke et al., a).
Even though the paths are not quite as straight as they are when the moon’s pattern of polarized light is present (compare figures .C and .E), they are much straighter than when the beetle’s view of the starry sky is obscured (figure .F). Again, the ability to orient with respect to the Milky Way was previously unknown among animals. Even though superposition eyes are clearly the “eyes of choice” for a nocturnal insect, there are some exceptional species with apposition eyes—nearly all of which are ants, bees, and wasps—and these have no less impressive visual powers. Many of these species, such as the nocturnal bee Megalopta genalis mentioned above (figure .A,B), have become nocturnal only during recent evolutionary history, almost certainly in order to avoid predation and competition (Wcislo and Tierney, ). Even though their apposition eyes have evolved somewhat higher sensitivity to light (around times higher than in their diurnal relatives: Warrant, b), it is nonetheless remarkable how well they see. The giant nocturnal Indian carpenter bee Xylocopa tranquebarica (figure .D), for instance, has trichromatic color vision (Somanathan et al., a). Moreover, like diurnal bees, both Xylocopa and Megalopta are able to learn landmarks around the nest entrance and along the foraging route and to use them to fly through a dark forest and find their way home after a long foraging trip (Warrant et al., ; Somanathan et al., b; Baird et al., ). The large nocturnal bull ant Myrmecia pyriformis (figure .C) is also capable of impressive nocturnal homing behavior (Reid et al., ). Using both the dim nocturnal pattern of celestial polarized light and the panorama of visual landmarks in the surrounding terrain, Myrmecia is able to find its way to and from the nest during nightly foraging trips, long journeys that include climbing up and down tall trees (figure .D)!
Among the arachnids, many spiders, scorpions, and camel spiders are nocturnal and have large and sensitive camera eyes (see figure .). Several species—such as the Central American wandering spider Cupiennius salei (Land and Barth, ; Fenk
Figure 11.2 Nocturnal navigation using celestial cues. (A) The closely related nocturnal South African dung beetles Scarabaeus zambesianus and Scarabaeus satyrus. (B) A false-color scanning electron micrograph of the left dorsal and ventral eyes. A canthus (can), here cut open to allow easier orientation, separates the two eyes. The blue region in the upper half of the dorsal eye indicates the dorsal rim area (or DRA), the region of the eye whose rhabdoms are specialized for the analysis of polarized light (see chapter 8). In other parts of the dorsal eye and throughout the ventral eye, the rhabdoms are unable to process polarized light (eye regions colored green). ant = anterior. Scale bar = 0.5 mm. (C) Top view of paths of dung beetles (S. zambesianus) rolling outward from the center of a circular arena (3 m diameter, large black circle) that was shaded from the light of a full moon (path length: 137 ± 2 cm). The beetles use the dim pattern of polarized light formed around the moon as a compass cue to roll along straight paths. (D) Average angles of turn made by 22 rolling S. zambesianus (from C) when a perpendicularly polarizing filter is suddenly placed over them (red circles, binned in 5° intervals): left turns –77.0° ± 14.7°; right turns 87.9° ± 9.3°. Under a parallel polarizing filter, beetles did not deviate from their original direction (green circles). (E) Top view of rolling paths of dung beetles (S. satyrus) under a clear starlit (moonless) sky (same arena as in C; path length 185 ± 12 cm). (F) Rolling paths of S. satyrus when the view of the starlit sky is obscured by a cardboard cap connected to its head (path length 475 ± 75 cm). (B adapted from Dacke et al., 2003b; C and D from Dacke et al., 2003a; E and F from Dacke et al., 2013a)
Figure 11.3 Nocturnal homing in arthropods. (A,B) The nocturnal Central American sweat bee Megalopta genalis, which is able to learn visual landmarks around its nest entrance at night (a small hollowed-out stick in the undergrowth) and to use them to find home after a foraging trip. The head of this bee is seen face-on in B. Its two antennae (seen in front of the two large compound eyes) house mechanosensory and olfactory sensillae, and three prominent ocelli are visible on the dorsal surface of the head between the two compound eyes. The large mandibles are used for digging out a nest tunnel (with side chambers for brood) in a suitably straight stick suspended above the ground in the rainforest undergrowth. Scale bars 5 mm (A) and 1 mm (B). (C) A worker of the nocturnal Australian bull ant Myrmecia pyriformis, which at night leaves its hole in the ground and forages for food. Scale bar = 1 mm. (Photo courtesy of Ajay Narendra) (D) Bull ant workers (M. pyriformis) use local visual landmarks (in this case a tree and a bush) to guide foraging trips from the nest to the tree (gray lines, foraging trails seen from above). When displaced to the left or the right of the normal route (R), ants are still able to return to their normal foraging route (black lines), showing that they use landmarks to compensate for their displacement. (From Reid et al., 2011, with permission from the author) (E) The Namibian white lady spider Leucorchestris arenicola, which undergoes long migrations from its burrow on moonless nights and very likely uses a combination of terrestrial landmarks and path integration to find its way home again. Scale bar = 10 mm.
and Schmid, , )—have been shown to have excellent vision, which they use in both prey capture and navigation. For instance, the Namibian white lady spider Leucorchestris arenicola (figure .E) and the wolf spider Lycosa tarantula (Ortega-Escobar and Munoz-Cuevas, ; Reyes-Alcubilla et al., ) have been shown to visually home back to their burrows after making long nocturnal excursions. Male white lady spiders prefer moonless starry nights to search for females and can travel hundreds of meters from their nest on a single night. Their return to the nest is mediated entirely by vision, and the males appear to perform path integration and to use local landmarks in order to find their way home (Nørgaard et al., ). Thus, despite their small eyes and brains, nocturnal arthropods have astonishing visual abilities, including the discrimination of color and the ability to home, both in flight and on foot. Neural strategies at higher levels in the visual system are almost certainly responsible, and although we have some well-supported ideas on how these might work, we still have some way to go before we can prove they exist (see below).
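Path integration itself, whatever its neural implementation in the spider, is a computationally simple idea: continuously sum the displacement vectors of the outbound journey, and the home vector is the negative of that sum. A minimal sketch (the step data are invented for illustration):

```python
import math

# Each outbound step is logged as a (heading, distance) pair; the home
# vector is the negative of the running vector sum of all steps.

def home_vector(steps):
    dx = sum(dist * math.cos(heading) for heading, dist in steps)
    dy = sum(dist * math.sin(heading) for heading, dist in steps)
    return (-dx, -dy)

# An outward wander of 3 m east then 4 m north leaves home 5 m away,
# back toward the southwest:
hx, hy = home_vector([(0.0, 3.0), (math.pi / 2, 4.0)])
assert abs(math.hypot(hx, hy) - 5.0) < 1e-9
```

Because errors in each logged step accumulate, animals that rely on path integration typically also use landmarks, as the white lady spider appears to, for the final approach to home.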
Cephalopods
As we saw in chapter , cephalopods (the squids and octopuses) have large and well-developed camera eyes. In deep-living species, these eyes can endow cephalopods with extraordinarily high sensitivity (as we discuss below for giant deep-sea squids), but even in shallow-living species, such as cuttlefish, the eyes can permit excellent vision in dim light. This was first noticed in the giant cuttlefish Sepia apama, which each year amass in huge numbers in shallow waters off the southern Australian coast to spawn during the day. At night they become sessile, settling on the bottom, after which they use pigment-filled chromatophores in their skin to produce a body surface pattern that creates a near-perfect match to the substrate on which they are resting (Hanlon et al., ). Cuttlefish use this trick to camouflage themselves from visual predators, and despite being monochromats they achieve this remarkable feat visually by scanning the nearby substrate with their eyes to determine the best camouflage pattern needed (Marshall and Messenger, ). That Sepia apama is able to do this at night is impressive and implies that it has excellent nocturnal vision, a fact that has now been confirmed in its near relative, the common cuttlefish Sepia officinalis, which has recently been shown to match substrates at starlight levels of illumination (Allen et al., ).
Terrestrial Vertebrates: Amphibians, Mammals, and Birds
In comparison to arthropods, only a few studies have systematically explored the visual abilities of nocturnal vertebrates in dim light, although observations in the wild suggest that many of them see very well. As discussed in chapter , the nocturnal toad Bufo bufo has exquisitely sensitive vision. With its slow visual system and sit-and-wait predatory lifestyle, this toad is able to snap up small prey items that pass nearby at very low levels of illumination (Larsen and Pedersen, ). This behavior is limited only by the thermal noise present in the toad’s photoreceptors (chapters and ; Aho et al., , a), and because the toad is a cold-blooded animal this performance improves at lower body temperatures. A similar visual predation performance has also been reported in the nocturnal squirrel
treefrog Hyla squirella (Buchanan, ). In two other nocturnal anuran species—the European tree frog Hyla arborea (Gomez et al., ) and the túngara frog Physalaemus pustulosus (Cummings et al., )—visual signals (from a brightly colored vocal sac) are critical during nocturnal courtship. Thus, many nocturnal frogs and toads—possibly due to having a cold body at night—have exquisite sensitivity and excellent nocturnal vision. The same may be true for other cold-blooded amphibians and reptiles. Indeed, the helmet gecko Tarentola chazaliae is so far the only nocturnal vertebrate known to possess color vision in dim light (Roth and Kelber, ). However, even among the warm-blooded mammals and birds there are many species with excellent nocturnal vision (see Warrant, a, for a full review). This is especially true of those species that rely heavily on vision for locomotion and prey capture. Among primates these include tarsiers (Castenholz, ; Collins et al., ), bushbabies (Charles-Dominique, ), and owl monkeys (Wright, , ; Bearder et al., ), all of which rely on visual cues to accurately leap from tree to tree in the dark, to recognize conspecifics, and to locate and manipulate insect and vertebrate prey. Good examples among birds include owls, frogmouths, nightjars, oilbirds, pauraques, the letter-winged kite, and various water birds, all of which are active fliers and many of which take prey on the wing (see Martin, ; Warrant, a). Of course there are also many nocturnal birds and mammals that have abandoned vision as a primary sense and instead rely more heavily on other senses such as hearing, olfaction, and mechanoreception. This is particularly obvious among flightless nocturnal birds such as the kiwi (Apteryx spp.). These birds have very small eyes relative to body mass (Brooke et al., ), the smallest visual fields known for any bird, and an optic tectum that is markedly reduced (Martin et al., ).
In contrast, the olfactory and tactile senses of kiwis are greatly enhanced, with the long bill acting as a highly sensitive olfactory and mechanosensory probe that is used to search the leaf litter for food (Martin et al., ). Similar conclusions can also be drawn for nocturnal primates (Charles-Dominique, ). For example, the small-eyed lorisines (e.g., the potto and the angwantibo) slowly climb trees and traverse branches in the forest canopy, relying heavily on olfaction and audition for capturing slow or sedentary insect prey. Even for nocturnal birds and mammals that are primarily visual, the difficulties associated with seeing at night have resulted not only in enhanced visual sensitivity but also in enhanced sensory sensitivity in general. In actively flying nocturnal birds, for instance, not only is vision enhanced but so too are hearing (e.g., owls), olfaction (e.g., oilbirds), and touch (e.g., frogmouths). The oilbirds are especially interesting. Not only do these large-eyed cave-dwelling birds apparently have enhanced olfactory abilities (Snow, ; Bang and Wenzel, ; Martin, ), they also use echolocation to find their way around within the pitch-dark cave interior (see Martin, ). And like many nocturnal Caprimulgiformes, oilbirds also have long “rictal bristles” that project from the base of the beak (see figure .A). These bristles are assumed to aid in the detection of food, for instance by bending when they come into contact with prey, thus providing a mechanosensory cue for guiding the beak and triggering a peck. Although many anecdotal field observations exist attesting to the visual prowess of nocturnal birds (especially owls) and mammals (particularly primates), exceedingly few studies have been made to behaviorally measure their visual performance quantitatively. 
There are several studies in which behavioral visual performance in nocturnal species has been measured in bright light, but very few where it has been measured in more ecologically relevant dim light.
Vision in Dim Light
Surprisingly, the absolute sensitivity of vision (i.e., visual threshold) has only been measured in a handful of nocturnal vertebrates. These include domestic cats (Bridgeman and Smith, ; Gunter, ), rats (e.g., Munoz Tedo et al., ), mice (e.g., Herreros de Tejada et al., ; Okawa et al., ), toads (Aho et al., , a), frogs (Aho et al., b), and owls (Hecht and Pirenne, ; Martin, ). The behavioral visual thresholds of owls and cats are roughly . and times lower than human threshold, respectively, whereas the thresholds of rats and mice are similar to that of humans. The fact that the visual sensitivities of owls and cats—both quintessential nocturnal hunters—are only slightly higher than our own seems quite surprising. Because the rod sensitivities in owls, cats, and humans are probably similar (being able to respond to single photons of light), any differences in sensitivity are likely to be largely optical. Consider for example the eyes of humans and tawny owls. Both eyes have a similar axial length ( mm and . mm, respectively), but the owl has a significantly larger pupil (. mm vs. mm, fully dark adapted). This endows the tawny owl with a lower F-number (. vs. .: chapter ) and thereby a retinal image that is (./.)² = . times brighter than in humans, an improvement in optical sensitivity entirely consistent with the . times greater absolute sensitivity measured behaviorally. A similar argument can be made for cats (which have a dark-adapted F-number of .). Of course, differences in the convergence ratios of rods onto the underlying retinal ganglion cells are also likely to account for some of the differences between species (see below). Nevertheless, despite being modest, the extra sensitivity of the owl eye has allowed distinct visual behaviors at night.
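The F-number arithmetic in this passage is easy to verify: for an extended source, retinal image brightness scales as the inverse square of the F-number. The sketch below uses approximate dark-adapted F-numbers from the optics literature (about 2.1 for humans, 1.3 for the tawny owl, and 0.9 for the cat); these are illustrative values, not the elided figures in the text above.

```python
def image_brightness_ratio(f_number_a: float, f_number_b: float) -> float:
    """Relative retinal image brightness of eye A versus eye B for an
    extended source: brightness scales as 1/F^2, where F is the F-number
    (focal length divided by pupil diameter)."""
    return (f_number_b / f_number_a) ** 2

# Approximate dark-adapted F-numbers (illustrative literature values):
F_HUMAN, F_TAWNY_OWL, F_CAT = 2.1, 1.3, 0.9

print(f"owl vs. human: ~{image_brightness_ratio(F_TAWNY_OWL, F_HUMAN):.1f}x brighter image")
print(f"cat vs. human: ~{image_brightness_ratio(F_CAT, F_HUMAN):.1f}x brighter image")
```

On these assumed values the owl's image is roughly 2.6 times brighter than the human one, a modest factor of the same order as the behavioral threshold differences discussed above.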
The Eurasian eagle owl Bubo bubo, for example, reveals a bright patch of UV-reflective feathers on its chest at dusk and dawn and on moonlit nights, which has been shown to be a crucial visual signal during sexual and territorial interactions (Penteriani et al., ). During the breeding season, these owls even use the white feathers of their prey and their own white feces to mark territorial boundaries, thereby providing a nocturnal visual signal to other owls (Penteriani and del Mar Delgado, ). Even though several nocturnal vertebrates are cone monochromats and thus incapable of color vision (e.g., the owl monkey Aotus azarae and the bushbaby Otolemur crassicaudatus, see below), many others may have dichromatic, trichromatic, or even tetrachromatic color vision. A good number of nocturnal mammals are cone dichromats (e.g., the tarsier Tarsius spectrum), and owls—such as the tawny owl Strix aluco (Bowmaker and Martin, )—have at least three visual pigments (two cone, one rod) as well as colored oil droplets in the inner segments of the photoreceptors (which in diurnal animals are known to improve color vision: see chapters and ; also Vorobyev, ). Thus, multidimensional color vision is quite likely in many nocturnal vertebrates. However, with the exception of the nocturnal helmet gecko (see above), the ability of nocturnal vertebrates to see color in dim light has strangely never been tested (Kelber and Lind, ). Many nocturnal vertebrates, such as the tawny owl (Martin, ), have been found to have color vision in bright light, but we still have no idea whether this ability is retained in dim light. The only complete behavioral measurements of acuity in dim light that have been made in nocturnal vertebrates (figure .) are in the great horned owl (Fite, ), the barn owl (Orlowski et al., ), Bourke’s parrot (Lind et al., ), the owl monkey (Jacobs, ), the brown rat (Birch and Jacobs, ), and the domestic cat (Pasternak and Merigan, ).
Compared to humans, all these species have lower acuity at brighter light levels, but at the dimmest intensities most exceed (or are likely to exceed) human acuity.
Chapter 11

[Figure 11.4 appears here: log acuity (arc min⁻¹, 0 to −2) plotted against log luminance (cd/m², −7 to 4), with ambient-light landmarks from overcast starlight to daylight and curves for the barn owl, great horned owl, brown rat, owl monkey, Bourke’s parrot, domestic cat, and human.]
Figure 11.4 Visual acuity in nocturnal birds and mammals measured behaviorally as a function of luminance. The visual acuity of man (circles), the great horned owl Bubo virginianus, and the owl monkey Aotus trivirgatus (all adapted and redrawn with kind permission from Martin, 1990), with data added for the barn owl (Orlowski et al., 2012), Bourke’s parrot (Lind et al., 2011), the brown rat (Birch and Jacobs, 1979), and the domestic cat (Pasternak and Merigan, 1981). At each intensity, acuity is calculated from the minimum angular width w (in minutes of arc) of a stripe, from a uniform black-and-white square-wave grating, that is just visible to the observer. Acuity = log(w⁻¹). (Photo credits: Roger Hall, Dmitry Maslov, Jill Lang, and Ammit, 123RF.com photo agency)
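For readers recomputing points on figure 11.4, the caption's acuity measure converts a just-resolvable stripe width directly into log units; a one-function sketch:

```python
import math

def log_acuity(stripe_width_arcmin: float) -> float:
    """Acuity as plotted in figure 11.4: log10 of the reciprocal of the
    minimum resolvable stripe width w, in minutes of arc."""
    return math.log10(1.0 / stripe_width_arcmin)

print(log_acuity(1.0))   # a just-visible 1-arcmin stripe  -> 0.0
print(log_acuity(10.0))  # a just-visible 10-arcmin stripe -> -1.0
```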
Thus, the little we know about the visual performance of nocturnal vertebrates in dim light suggests that compared to humans they have lower absolute thresholds, the likelihood of color vision, and, at the dimmest light levels, superior spatial resolution. In arthropods, nocturnal visual performance is likely to be better still. But how does this come about? How do eyes enable animals to see well in dim light? We now explore the answers to these questions in more detail, beginning with camera eyes.
Nocturnal and Deep-Sea Camera Eyes As we saw in chapter , the optical sensitivity S of an eye—that is, its ability to capture light from an extended luminous source—heavily depends on two important features of the eye’s morphology (equation .): the area of the pupil through which light enters the eye and the size of the solid angle (or receptive field or “pixel”) of visual space viewed by each visual channel (i.e., by each photoreceptor or by each pooled group of photoreceptors). Larger pupils supplying light to wider visual channels result in increased optical sensitivity, and both strategies are commonplace in nocturnal and
deep-sea eyes of all types and are particularly obvious in nocturnal and deep-sea camera eyes. They are, for instance, readily seen in the eyes of nocturnal birds (Hall and Ross, ), primates (Kirk, ), and spiders (Land, ) as well as in the eyes of deep-sea fish (Wagner et al., ; Warrant and Locket, ).
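The two features named here, pupil area and receptive-field solid angle, combine in the standard optical-sensitivity expression (in the form popularized by Land) that this chapter's equation references point to. The sketch below uses that general form with entirely hypothetical dimensions; the absorption term kl/(2.3 + kl) assumes a broadband extended source.

```python
import math

def optical_sensitivity(pupil_um: float, receptor_um: float, focal_um: float,
                        k_per_um: float = 0.0067, length_um: float = 100.0) -> float:
    """Optical sensitivity S (um^2 sr) of an eye to an extended source:
    S = (pi/4)^2 * A^2 * (d/f)^2 * kl/(2.3 + kl),
    with pupil diameter A, receptor diameter d, focal length f, and
    absorption coefficient k acting over receptor length l."""
    kl = k_per_um * length_um
    return (math.pi / 4) ** 2 * pupil_um ** 2 * (receptor_um / focal_um) ** 2 * kl / (2.3 + kl)

# Hypothetical eyes with identical pupils and receptors, differing only in
# focal length, i.e., in F-number (f/A):
s_low_f = optical_sensitivity(pupil_um=1000, receptor_um=10, focal_um=600)   # F = 0.6
s_high_f = optical_sensitivity(pupil_um=1000, receptor_um=10, focal_um=2100)  # F = 2.1
print(f"sensitivity ratio: ~{s_low_f / s_high_f:.1f}x")
```

With everything else held fixed, sensitivity scales as 1/F², so the low-F eye here gains (2100/600)² ≈ 12 times, which is why nocturnal camera eyes push their F-numbers so low.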
Optical Adaptations for Vision in Dim Light Compared to the camera eyes of diurnal vertebrates, those of nocturnal species frequently have a large, highly curved and powerful cornea and a significantly thickened lens that shortens the eye’s focal length and shifts the lens closer to the retina (figure .C,D). Typically, the shorter focal length leads to a lower F-number (the ratio of focal length to pupil diameter) and thus to a greater sensitivity (equation .). As discussed in chapter , an eye of lower F-number receives a brighter retinal image and is thus better adapted to vision in dim light. Indeed, a survey of F-numbers in nocturnal species shows that they are typically in the range .–., whereas in diurnal species they are commonly greater than . and usually higher (Warrant, a). For instance, our own dark-adapted eye has an F-number of .. The demand for high sensitivity via a large pupil has resulted in the evolution of very large eyes relative to head size in nocturnal and deep-sea animals (Walls, ; Collin et al., ; Brooke et al., ; Garamszegi et al., ; Kirk, ; Howland et al., ; Thomas et al., ; Hall and Ross, ). Nocturnal spiders, tarsiers, and owl monkeys are excellent examples: in the latter, pupils can reach cm in diameter. The posterior median eyes of the net-casting spider Dinopis subrufus—which completely dominate its head and have a staggeringly low F-number of .—have the largest single lenses known in arthropods (up to . mm in diameter: figure .E). Not surprisingly, their optical sensitivity is almost times higher than that in the anterior lateral eyes of the diurnal jumping spider Phidippus johnsoni (lens diameter . mm, F-number = .; table ., equation .). In tarsiers (figure .A), skillful hunters of nocturnal insects (Collins et al., ), the skull is dominated by enormous eye sockets (figure .B). 
Incredibly, each eye has a volume exceeding that of the brain (Collins et al., ), and the eyes are believed to be the largest relative to body mass of any mammal (Polyak, ). Moreover, investigations of the brain reveal that the primary visual cortex (V) of the tarsier Tarsius spectrum is unique among primates in that it occupies the largest proportion of the neocortex yet reported for this group (Collins et al., ). This finding no doubt reveals the importance of sensitive high-resolution vision for the tarsier’s demanding predatory behavior and its locomotory habit of leaping from tree to tree. The massive eyes of deep-sea cephalopods, like the colossal squid Mesonychoteuthis hamiltoni and the giant deep-sea squid Architeuthis dux (figure .), are also large in order to provide exquisitely sensitive vision in dim light. These enormous animals patrol the boundary between the mesopelagic and bathypelagic zones at a depth of around m, where essentially no downwelling daylight remains. At around cm in diameter, their eyes are among the largest that have ever existed and are much larger than the next biggest marine camera eyes—those of the similarly heavy and equally deep-diving swordfish (Xiphias gladius), which have eyes about cm across (Fritsches et al., ). It turns out that the extremely large eyes of deep-sea squid are probably not for detecting other objects (e.g., prey and conspecifics) illuminated by the dim downwelling daylight. For this purpose eyes no larger than those of swordfish
[Figure 11.5 appears here: panels A–E; the label "Corneal diameter" marks the schematic eye comparisons in panel C.]
Figure 11.5 Optical adaptations for nocturnal vision in camera eyes. (A) The Western tarsier Tarsius bancanus, whose huge eyes (each larger than its brain!) are likely to support excellent nocturnal vision. (From Rowe, 1996) (B) The skull of a tarsier (Tarsius sp.), with huge eye sockets relative to head size. Scale bar = 10 mm. (Image courtesy of © Bone Clones, www.boneclones.com) (C) A schematic comparison of eye morphology in the nocturnal bushbaby Galago (left) and the diurnal marmoset Callithrix (right). The larger corneal diameter of the bushbaby allows a larger pupil and greater sensitivity, and its thicker and more proximally placed lens shifts the posterior nodal point (roughly at the lens center) closer to the retina. This shortens the focal length, lowers the F-number, and increases sensitivity. It also leads to a smaller and thus less well-resolved image (as shown by the arrows). (Redrawn with kind permission from Kirk, 2004) (D) Schematic cross sections through the eyes of the diurnal swan Cygnus olor (left) and the nocturnal owl Bubo bubo (right). Note the large tubular form of the owl’s eye and the more proximal position of the lens. (Diagram adapted from Walls, 1942) Scale bar = 10 mm. (E) The large posterior median eyes of the nocturnal net-casting spider Dinopis subrufus, which have an F-number of 0.58. (Adapted from Sinclair, 1985) Scale bar = 1 mm.
Figure 11.6 Fresh head of a giant squid (likely from the genus Architeuthis). The clearly visible eye has a 90-mm pupil. The squid was caught on February 10, 1981, by fisherman Henry Olsen about 10 miles offshore from Kahana Bay, Oahu, Hawaii, and the picture was taken by Ernie Choy at the pier. Scale = 10 cm (calibrated by the standard fuel hose across the pupil). (Reprinted from Nilsson et al., 2012, with permission from Elsevier)
would be needed—eyes any larger would provide little extra performance for the much greater cost of having a larger eye (Nilsson et al., ). Instead, the benefit of such giant eyes seems to be the detection of the faint extended fields of planktonic bioluminescence triggered by the swimming of their major enemy, the sperm whale. By detecting these terrifying predators using the bioluminescent emissions that they trigger—at a range of up to around m—their giant eyes could endow squids with an “early warning system” and the best chances of escape. Another common adaptation to life in dim light is an eye of tubular shape, and this is found in many nocturnal birds and deep-sea fish. The large forward-pointing eyes of owls are a good example (figure .D; Walls, ; Murphy et al., ; Martin, , ), as are the dorsally pointing eyes of mesopelagic deep-sea fish that strain to see the small silhouettes of food and prey against the dim downwelling daylight above (figure .; Collin et al., ; Wagner et al., ; Warrant and Locket, ). A tubular form allows a portion of a larger (and thus more sensitive) spherical eye to fit onto a smaller head (Walls, ). This reduces the weight (payload) of the eye and thus the energetic cost of maintaining visual function while at the same time allowing a maximal pupil diameter (which improves sensitivity) and a comparatively longer focal length (which improves resolution). The price paid for this is a restricted visual field: the long tubular optics reduces the region of space from which light reaches the retina. In birds such as owls this is potentially a problem because in terrestrial habitats there is much to be seen in all directions. To get around this, many owls have the remarkable ability to turn their heads by an extraordinary ° and thus direct their gaze in new directions. 
The bizarre mesopelagic barreleye fish Macropinna microstoma—whose two huge dorsal tubular eyes peer through the transparent fluid-filled dome of its head (figure .E)—solves the same problem in a completely different manner. It can instead rotate its eyes frontally and thus view the region in front of its mouth (Robison and Reisenbichler, )! In other deep-sea fish, the tubular eye’s narrow dorsal field of view—which may exclude potential dangers, or even food sources, lurking in the unseen parts of the fish’s
[Figure 11.7 appears here: panels A–E, with the numbered labels (1–11) and lettered labels (lp, c, l, ir, a, m) explained in the caption.]
Figure 11.7 Tubular eyes in deep-sea fish. (A) A dorsal tubular eye with a laterally placed retinal diverticulum. 1 = accessory retina, 2 = main retina, 3 = retina of the retinal diverticulum, 4 = epidermal window, 5 = reflective sheet of guanine crystals, 6 = the eye’s visual field, 7 = the retinal diverticulum’s visual field. (B) Frontal view of the head of a scopelarchid showing the visual fields of the two eyes. 8 = dorsal binocular visual field, 9 = visual field of the right eye, 10 = total visual field for light reaching the accessory retina through the lens pad (lp), 11 = ventral extension of the visual field provided by the lens pad. (A and B adapted from Munk, 1980. Reprinted from Warrant and Locket, 2004, with permission from John Wiley & Sons) (C) A hatchetfish (unknown species) with large dorsally directed tubular eyes. (© Monterey Bay Aquarium, photo by David J. Wrobel) (D) Transverse sections through the tubular eye of the hatchetfish Opisthoproctus soleatus (showing the main retina m, the accessory retina a, the spherical lens l, and the iris ir). The cornea c extends between the arrowheads. Scale bar = 0.5 mm. (From Collin et al., 1997. Reprinted from Warrant and Locket, 2004, with permission from John Wiley & Sons) (E) The mesopelagic barreleye fish Macropinna microstoma showing the transparent fluid-filled dome of the head through which the two dorsal tubular eyes are directed. The two small dark structures on the anterior surface of the head are olfactory organs. (Image courtesy of Bruce Robison and used with permission from the Monterey Bay Aquarium Research Institute)
immediate surroundings—is compensated by the presence of several rather clever adaptations (figure .A). The first of these is a second “accessory retina” lining the wall of the tubular eye ( in figure .A, a in figure .D). Even though it is located far too close to the lens to receive a focused image, this accessory retina nevertheless receives light signals that originate near the side of the fish (figure .A), effectively extending the visual field of the tubular eye by up to ° laterally. A movement or a bioluminescent flash would be sufficient to trigger the photoreceptors of the accessory retina, thus alerting the fish to the fact that it was not alone. In the pearleyes (fish of the family Scopelarchidae), an even greater extension of the visual field is provided by the presence of a “lens pad” (Locket, , ; Munk, ), a transparent light-guiding structure on the side of the eye (lp in figure .B). In other fish an extension of the eye’s visual field is provided by a small eye-like structure—the “retinal diverticulum” (Pearcy et al., )—that bulges outward from the lateral side of the tubular eye, just proximal to the lens (figure .A). Light originating from the side of the fish and up to ° ventral can be caught by the slender photoreceptors of the diverticulum. This light is reflected from an appropriately placed mirror of shiny guanine crystals and reaches the photoreceptors after passing through a clear epidermal window (figure .A). It had always been assumed that this light is unfocused. However, in one remarkable species—the mesopelagic spookfish Dolichopteryx longipes—this mirror of shiny crystals is arranged as a stack of narrowly spaced guanine plates that systematically change orientation along the length of the mirror (Wagner et al., ), allowing it to focus images on the diverticular retina. This is the only known example of an ocular image being formed by a mirror in a vertebrate. 
There are many deep-sea fish that do not have dorsal tubular eyes but instead have eyes placed on the side of the head in the manner more typical for fish. For such eyes, which have large lateral visual fields, a potential problem arises frontally, the direction a forward-swimming fish is most likely to encounter and pursue prey: lateral receptive fields restrict the extent and sensitivity of frontal vision. Frontal binocular overlap—and distance discrimination—is also compromised. This problem has been
[Figure 11.8 appears here: panels A and B, with retinal areas labeled n (nasal), t (temporal), and c (central).]
Figure 11.8 Rostral aphakic gaps in the eyes of deep-sea fish. (A) A rostral aphakic gap (arrow) in the eye of Bathytroctes microlepis. Together with sighting grooves in the snout, the rostral aphakic gaps entirely expose the lens of each eye in the frontal visual field. The temporally placed foveae that view the same frontal field are then assured of maximum light capture. (Reproduced with kind permission from Munk, 1980) (B) A rostral aphakic gap (arrow) allows full illumination of only the central (c) and temporal (t) areas of the retina. Unfortunately, the nasal (n) areas of the retina also receive unfocused light. (Adapted from Locket, 1977. Reprinted from Warrant and Locket, 2004, with permission from John Wiley & Sons)
overcome in many deep-sea fish by a large “rostral aphakic gap,” a frontal elongation of the pupil far beyond the margin of the lens (figure .A; Munk and Frederiksen, ). This, together with sighting grooves along the snout, allows the full diameter of the lens to collect light frontally and to focus it onto the temporal (i.e., posterior) part of the retina (figure .B). Without an aphakic gap, only a fraction of the lens would have been exposed frontally, severely limiting light capture. As described in chapter , the temporal retina frequently possesses an acute fovea or an area centralis, a region with densely packed ganglion cells, and the rostral gap ensures that it receives a bright image. However, light incident more laterally has the chance to leak unfocused into the eye through the aphakic gap, contaminating the lateral image. But this price is apparently worth the gain in frontal visual performance. Finally, in addition to sensitive optics, many nocturnal and deep-sea animals also have a reflective layer within the retina, behind the photoreceptors, called the tapetum lucidum (or simply “tapetum”) that allows incident light to be reflected back through the retina (figure .A). The familiar bright glow of animal eyes illuminated by car headlights at night (figure .B) or the round “eye glow” of superposition compound eyes (see figure .B) are both testimony to the action of a tapetum. This reflective layer can be made of diverse materials including shiny guanine crystals (e.g., nocturnal spiders and deep-sea fish), riboflavin (e.g., bushbabies), collagen (most ruminant mammals), and air-filled cuticular tracheoles (e.g., beetles and moths). The tapetum allows a second chance for absorption of light that was not absorbed during its first passage through the photoreceptors, thus effectively doubling the path length for photoreception. 
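The "second chance" provided by a tapetum can be put in numbers with Beer-Lambert absorption. A minimal sketch, assuming a perfectly reflective tapetum and illustrative values for the absorption coefficient and outer-segment length:

```python
import math

def fraction_absorbed(k_per_um: float, length_um: float, tapetum: bool = False) -> float:
    """Fraction of incident photons absorbed in a photoreceptor of length l:
    1 - exp(-k*l). A perfectly reflective tapetum doubles the effective
    path length to 2*l (reflector losses ignored)."""
    path_um = 2.0 * length_um if tapetum else length_um
    return 1.0 - math.exp(-k_per_um * path_um)

# Illustrative values: k = 0.01 per um, 30-um rod outer segment
single = fraction_absorbed(0.01, 30.0)
double = fraction_absorbed(0.01, 30.0, tapetum=True)
print(f"without tapetum: {single:.2f}; with tapetum: {double:.2f}")
```

Because absorption saturates with path length, the tapetal gain is always somewhat less than a full factor of two.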
Despite this improvement in sensitivity, the unconstrained reflection provided by a flat tapetum can degrade spatial resolution (Walls, ; Munk, ; Nicol, ; Warrant and McIntyre, ; Kirk and Kay, ). This may be the reason why many nocturnal animals do not possess them: even though tapeta are common among the nocturnal Strepsirrhine primates—lemurs, lorises (figure
Figure 11.9 The tapetum lucidum and the eye glow of animal eyes. (A) The freshly excised and dissected eye of a pig showing the retinal eye cup and the blue-green tapetum lucidum. (Photo courtesy of Pam Gregory, Tyler Junior College, Tyler, Texas) (B) The bright tapetal eye glow of a male pigmy slow loris (Nycticebus pygmaeus), a native of Indochina, caught in the flash of a camera. (Photo courtesy of K.A.I. Nekaris, taken in the Seima Protection Area, Cambodia)
.B), bushbabies, pottos and aye-ayes (Charles-Dominique, )—they are absent in the nocturnal Haplorhine primates (tarsiers and the owl monkey Aotus). Tapeta are also uncommon in the eyes of nocturnal birds, being found only in the eyes of some Caprimulgiformes, notably the nightjars (Nicol and Arnott, ) and the common pauraque (Rojas et al., ). In contrast, tapeta are very common in nocturnal arthropods and in deep-sea fish.
Neural Adaptations for Vision in Dim Light In the camera eyes of vertebrates, a common evolutionary response to life in dim light has been a drastic change in the relative proportions of the rod and cone photoreceptors. Even though the rods—responsible for vertebrate vision in dim light—are the dominant photoreceptor type in most vertebrate retinas, they are particularly dominant in the retinas of nocturnal and deep-sea species. For instance, in many deep-sea fish the retina is populated more or less entirely by rods (Wagner et al., ), and in amphibians, birds, and mammals, the ratio of rods to cones is much greater in nocturnal than in diurnal species. Among birds, an extreme case is the oilbird Steatornis caripensis (see figure .), where there are rods for every cone and million rods/mm² (see Martin et al., ), the highest rod density recorded in any vertebrate. In diurnal birds the ratio is considerably lower, with the number of cones typically exceeding that of the rods: in the diurnal cattle egret Bubulcus ibis, the rod:cone ratio is .: (Rojas et al., ). The same pattern can be found in mammals (Peichl, ). At an eccentricity of mm temporal to the fovea (– mm in figure .C,D), the density of rods in the nocturnal owl monkey Aotus azarae is about times greater than the density of cones, but in the diurnal capuchin monkey Cebus apella the ratio is closer to : (Ogden, ; Wikler and Rakic, ; Yamada et al., ; Silveira et al., ). Moreover, in Cebus, rods are absent altogether in the central fovea; instead, there is a massive peak in cone density, a characteristic of many diurnal vertebrates, including primates
[Figure 11.10 appears here: (A) a flat-mounted retina mapped in thousands of ganglion cells/mm², with N, T, D, V orientation markers and a 1-mm scale bar; (B) schematic grouped retina with rod outer segments (OS) in guanine cups (g); (C,D) rod and cone densities (cells/mm², 10³ to 10⁶, log scale) versus retinal eccentricity (−15 to 15 mm) in Aotus and Cebus.]
Figure 11.10 Retinal specializations for dim-light vision in vertebrates. (A) The distribution of retinal ganglion cells in the retina of the mesopelagic lanternfish Lampanyctus macdonaldi, which lives at a depth of between 550 and 1100 m. The retina is shown as a flat mount, and cell densities are given in thousands of cells/mm2. The large dot represents the exit of the optic nerve. T = temporal (with frontal visual field), N = nasal (with posterior visual field), D = dorsal, V = ventral. (Adapted from Wagner et al., 1998) (B) The grouped retina of a deep-sea fish, seen schematically in tangential view at the level of the inner-outer segment junctions (left). Rod outer segments (os) are assembled as groups (seen in a longitudinal view, right) within cups of reflective guanine crystals (g). This spatial summation of rod signals significantly improves sensitivity to the dim extended space light. (A and B reproduced from Warrant, 2004, with kind permission from Springer Science+Business Media) (C,D) Photoreceptor densities as a function of retinal eccentricity in the nocturnal owl monkey Aotus (C) and the diurnal capuchin monkey Cebus (D). Note the larger rod density and lower cone density at all eccentricities in the owl monkey and the great reduction of cones in its fovea (at eccentricity = 0 mm). Nasal and temporal eccentricities are represented by positive and negative values, respectively. (Adapted and redrawn from Yamada et al., 2001)
like ourselves. In Aotus, however, this large peak is missing—the rods, absent in the central fovea of Cebus, dominate the fovea of Aotus (figure .C,D). Part of the reason for the great reduction of cones in Aotus is that it has lost two of the three classes of cones typical of diurnal primates like Cebus or ourselves (Wikler and Rakic, ). Aotus has lost its S-cone due to a defect in the S-class opsin gene (Wikler and Rakic, ; Jacobs et al., , ), but retains a single M/L cone class with an absorption peak at nm. The owl monkey Aotus is thus a cone monochromat. In oilbirds, the explanation for the enormous rod:cone ratio (:) is a remarkable retinal adaptation that is also found in deep-sea fish: the rods of these animals are
[Figure 11.11 appears here: panels A–D, with rod layers (r, banks 1–3), outer segments (OS), and distal/proximal markers (d, p) explained in the caption.]
Figure 11.11 The banked retina of the nocturnal oilbird and deep-sea fish. (A,B) Side view (A) and frontal (B) view of the head of the Oilbird Steatornis caripensis, showing the prominent eyes. Note the bird’s rictal bristles in A. (C) A section through the banked retina of the oilbird, showing the rod outer segments (r) arranged in three layers. Scale bar = 6 mm. (D) A schematic banked retina (modeled from a deep-sea fish) consisting of three banks of rods (1, 2, 3). Such a retina is an adaptation for increasing rod convergence onto single ganglion cells and for increasing the path length for light absorption. os = rod outer segment, d = distal, p = proximal. (A–C from Martin et al., 2004, reproduced from Warrant, 2008a, with the author’s kind permission and with permission from Elsevier. D reproduced from Warrant, 2004, with kind permission from Springer Science+Business Media)
arranged in layers, one above the other, to create a so-called “banked” or “tiered” retina (figure .C,D; Locket, ; Martin et al., ). In oilbirds (figure .A,B) there are three such layers through which the incoming light must pass (figure .C), and in the deep temporal foveae of the alepocephalid deep-sea fish Bajacalifornia drakei, there are no fewer than layers of rod outer segments (Locket, )! Stacking rods on top of each other in this fashion has two main advantages. First, the path length of light traveling through the retina is greatly increased. Second, the convergence of rods onto each underlying ganglion cell (see below) can be dramatically enhanced. Both of these strategies maximize photon absorption and endow the eye with a significantly higher sensitivity. Another impressive retinal adaptation for maximizing sensitivity is well known from the eyes of several taxa of deep-sea fish: instead of being isolated, rod outer segments are bundled into groups of or more within round cups of retinal epithelium cells filled with reflective guanine crystals (figure .B; Locket, ). These reflective cups optically isolate the receptor groups from each other, effectively turning the rod bundle into a kind of “macroreceptor” with the outer segments in close contact,
so that light entering the cup is trapped and shared by all of them. This will result in a much wider—and presumably more sensitive—receptive field than is achievable by a single rod. In the pearleye Scopelarchus guntheri, rods, each approximately . μm wide, fill a cup that is μm wide (Locket, ), coarsening spatial resolution from .° between rods to approximately ° between groups. Thus, this new macroreceptor has a receptive field over seven times wider than a rod, increasing sensitivity by times. As we see below, such “spatial pooling” of visual signals for improved sensitivity can also be mediated neurally, at a higher level of processing. Interestingly, grouped macroreceptors have recently been discovered in the eyes of a freshwater electric fish, although it is the cones that are grouped rather than the rods, and instead of enhanced sensitivity, the main advantage is improved vision in turbid water (Kreysing et al., ). In nocturnal and diurnal species, even the rods themselves seem to differ. As we saw in chapter , in a wide variety of nocturnal and crepuscular mammals the rod nuclei function as tiny lenses that more effectively collect and focus the incoming light into the rod outer segment, where it is absorbed (Solovei et al., ). However, perhaps the greatest neural determinant of light capture, resolution, and sensitivity in a vertebrate camera eye is the extent of convergence (or summation), via the bipolar cells, of photoreceptor signals onto the underlying ganglion cells. As we saw in chapter , in all vertebrate retinas, photoreceptors typically converge in large numbers onto single ganglion cells, the exact “convergence ratio” depending on the distance (eccentricity) from the center of the fovea (or area centralis). The central fovea, as we might recall from chapter , is the retinal location where the convergence ratio is the lowest. 
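The sensitivity bookkeeping behind the grouped-rod "macroreceptors" described above is simple solid-angle scaling: for small angles, a receptive field's solid angle grows with the square of its angular width. The angular widths below are hypothetical, chosen only to mirror the roughly sevenfold widening mentioned in the text.

```python
def pooling_gain(single_deg: float, pooled_deg: float) -> float:
    """Sensitivity gain to an extended source when a visual channel's
    receptive field widens: photon catch scales with solid angle, i.e.,
    with the square of the angular width for small angles."""
    return (pooled_deg / single_deg) ** 2

# Hypothetical: grouping widens the field from 0.4 deg to 2.8 deg (7x wider)
print(f"~{pooling_gain(0.4, 2.8):.0f}x more sensitive, at 7x coarser resolution")
```

The same square law governs neural spatial pooling: whatever widens the effective receptive field, optically or by summation, trades resolution for sensitivity at the same quadratic rate.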
Indeed, in the central foveae of diurnal anthropoid primates, signals from a single cone photoreceptor are divided between two “midget” (parvocellular) ganglion cells, one OFF type and one ON type (Wässle and Boycott, ). As one moves in any direction away from the fovea, the convergence ratio increases and, in the periphery of the retina, can become very great indeed, with large numbers of photoreceptor signals converging onto single ganglion cells. A larger photoreceptor pool builds a larger and more sensitive ganglion cell receptive field, and in regions of the retina where this occurs, ganglion cell density is low, and, as a consequence, so too is the local spatial resolution. Vertebrates active in dim light typically have higher convergence ratios of photoreceptors (i.e., rods) to ganglion cells than found in diurnal vertebrates, even in the fovea (Hughes, ; Kirk and Kay, ). For example, in the nocturnal owl monkey Aotus, the convergence ratios of rods to ganglion cells at an eccentricity of – mm (see figure .C,D) are around six to seven times greater than those in the diurnal capuchin monkey Cebus (Silveira et al., , ; Lima et al., ; Yamada et al., ). In fact, at any one eccentricity, the ganglion cells of Aotus have receptive field areas that are about five times larger than those of Cebus, resulting in three times fewer ganglion cells (in total) within the eye. These differences are particularly obvious in the fovea: Aotus has fewer than % of the ganglion cells that are found in the fovea of Cebus (Silveira et al., ). Thus, the higher convergence ratios found in Aotus compared to Cebus, together with monochromacy and low ganglion cell densities (especially centrally) suggest that the retinas of owl monkeys are adapted for coarser but more sensitive vision at very low light levels. The same kinds of adaptations are also found in mesopelagic deep-sea fish straining to see extended objects in the dim downwelling daylight. 
These fish often have a rather uniform and low-density distribution of ganglion cells and lack a fovea (in
contrast, foveae are typical of deep-sea fish that need to accurately localize pinpoints of bioluminescence—see chapter ). Good examples include lower mesopelagic lantern fish in the genus Lampanyctus (Wagner et al., ). These have a uniformly low density of ganglion cells and presumably high rod convergence throughout the retina (figure .A). With only to ganglion cells/mm², visual acuity is no better than approximately .°, a value only marginally better than that of the compound eyes of insects! On the other hand, the sensitivity of the eye can be very high. In the lantern fish Lampanyctus macdonaldi, which has a pupil diameter of . mm, the sensitivity S of the eye to an extended source is μm² sr (equation .), a very high value for a vertebrate eye, in fact approximately times more sensitive than that of a nocturnal toad (see table .; Warrant and Nilsson, ; Warrant and Locket, ).
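The optical sensitivity S referred to here comes from the standard sensitivity equation for an extended source. One common monochromatic form is S = (π/4)² A² (d/f)² (1 − e^(−kl)), where A is the pupil diameter, f the focal length, d and l the photoreceptor diameter and length, and k its absorption coefficient. The sketch below uses invented parameter values and a k often quoted for arthropod rhabdoms; it shows the form of the calculation, not any particular eye from the text:

```python
import math

def optical_sensitivity(A_um, f_um, d_um, l_um, k_per_um=0.0067):
    """Optical sensitivity S (um^2 sr) of an eye to an extended scene.

    S = (pi/4)^2 * A^2 * (d/f)^2 * (1 - exp(-k*l))
    A: pupil diameter, f: focal length, d: receptor diameter,
    l: receptor length, k: absorption coefficient (0.0067 um^-1 is a
    value often used for arthropod rhabdoms; vertebrate values differ).
    """
    return (math.pi / 4) ** 2 * A_um ** 2 * (d_um / f_um) ** 2 \
        * (1 - math.exp(-k_per_um * l_um))

# Hypothetical, illustrative parameters (not taken from the text):
print(round(optical_sensitivity(A_um=500, f_um=1250, d_um=10, l_um=50), 2))
```

Note how S grows with the square of both the relative aperture (A/f) and the receptor's angular subtense (d/f): wide pupils and wide, long receptors dominate the sensitivity budget.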
Nocturnal and Deep-Sea Compound Eyes

Many of the same kinds of mechanisms used in camera eyes to improve sensitivity to a dim extended scene are also found in compound eyes (Meyer-Rochow and Nilsson, ; Warrant and Dacke, ). Remarkably, this can even be the case in apposition compound eyes (figure .A), the design more typical of insects and crustaceans living in bright habitats. As we saw above, there are several notable examples of nocturnal insects—such as the bees Megalopta genalis (figure .A,B) and Xylocopa tranquebarica (figure .D), several wasps (e.g., Apoica pallens: Greiner, ), and various mosquitoes (Land et al., , )—that all display outstanding visual behavior at night despite this eye design. Part of the secret lies in the construction of their ommatidia. Compared to its day-active relatives, Megalopta has very large corneal facet lenses and among the widest rhabdoms recorded for an insect apposition eye (figure .B), around μm in diameter (compared to around μm in diurnal bees: Greiner et al., a). Nonetheless, the larger pupil and much wider receptive fields that result bestow only about times greater sensitivity, far short of the million times difference in light level experienced by nocturnal and diurnal bees. It turns out that this shortfall is bridged by slow photoreceptors of high gain (Frederiksen et al., ) and extensive summation of light in space and time (see below). Similar photoreceptor adaptations for vision in dim light are also found in flies (Weckström and Laughlin, ) and cockroaches (Weckström et al., ; Heimonen et al., , ; Salmela et al., ). Another good example is the benthic isopod Cirolana borealis (now known as Natatolana borealis). This species, found as deep as m in clear ocean waters or in shallower waters that are dark and murky, has immensely sensitive apposition eyes.
Compared to the ommatidia of relatives living in much shallower or clearer water (figure .A), those of Cirolana are gigantic, with huge corneal lenses ( μm wide) of short focal length ( μm), and huge photoreceptors ( μm wide × μm long) sitting in cups of reflective pigment (Nilsson and Nilsson, ). These features endow Cirolana with record high sensitivity for an apposition eye: μm² sr. This very high sensitivity comes only at the cost of resolution. In Cirolana the ommatidia have visual fields of approximately °, but for the scavenging lifestyle that this animal leads, sensitivity is probably of greater importance. It is, however, the other major class of compound eyes—the superposition eyes (figure .B)—that are better known for their high sensitivity. As we saw in chapter , their large “superposition apertures” can supply hundreds (or even thousands) of
times more light to each photoreceptor than is possible in a similarly sized apposition eye. It turns out that the size of the superposition aperture—and thus the sensitivity of the eye—is adapted to the light intensity that the eye normally encounters. This can be seen in the superposition eyes of dung beetles from the single genus Onitis (McIntyre and Caveney, : figure .C). Individual species fly in search of dung at different times of day. The superposition apertures of nocturnal species (width A = μm in O. aygulus) are considerably larger than those of crepuscular species (A = μm in O. alexis). These in turn are more than twice as large as those of diurnal species (A = μm in O. belial). Moreover, the nocturnal species O. aygulus has huge contiguous rhabdoms ( μm wide × μm long) compared to the diurnal species O. belial where they are small (. × μm) and widely spaced. And unlike O. aygulus, diurnal species like O. belial also have sheaths of screening pigment around their rhabdoms, which cuts down light flux even more (Warrant and McIntyre, ). These differences are reflected in the optical sensitivities (S) of their eyes (table .): S = μm² sr in O. aygulus but only . μm² sr in O. belial. Of course, there are many deep-sea superposition eyes that also have exquisite sensitivity. The shrimp Oplophorus (Land, , ), an animal that lives at a depth of about m, has an F-number of just ., a reflective tapetum, and enormous rhabdoms ( × μm). These eyes thus attain a very high sensitivity— μm² sr, one of the largest values known in the animal kingdom. This high sensitivity comes only at the cost of spatial resolution: Oplophorus would not be able to distinguish point light sources closer than about ° apart (Land, ). Despite these impressive optical sensitivities, compound eyes may still struggle to collect sufficient light for vision in dim light. This is particularly likely for apposition eyes.

Figure 11.12 Specializations for dim-light vision in compound eyes. (A) Ommatidial structure in the benthic isopod Cirolana borealis (left) and the coastal crab Callinectes ornatus (right). Receptive fields, apertures, and rhabdoms are much larger in the deep-living Cirolana borealis, an indication of the greater sensitivity of this eye (which is reinforced by the presence of a reflective pigment tapetum, Re). C = corneal facet lens, CC = crystalline cone, Rh = rhabdom. Scale bar = 100 μm. (After Land, 1984, using diagrams from Nilsson and Nilsson, 1981, and Waterman, 1981. Reproduced from Warrant and Locket, 2004, with permission from John Wiley & Sons) (B) A transmission electron micrograph showing a distal transverse section through the rhabdom of the nocturnal bee Megalopta genalis, which is one of the widest rhabdoms found among insect apposition eyes. Eight photoreceptor cells are visible, each of which contributes microvilli to the rhabdom. Scale bar = 2 μm. (C) The size of the superposition aperture in three species of dung beetles, the nocturnal Onitis aygulus (upper panel), the crepuscular Onitis alexis (middle panel), and the diurnal Onitis belial (lower panel). The circular superposition aperture is indicated in white on the surface of each eye. The dashed circles refer to “effective apertures,” theoretically derived apertures in which each facet contributes light equally. In reality, facets near the edge of the real aperture contribute less light than those near the center. Note how the superposition aperture is smaller in beetles from brighter habitats. (Adapted from McIntyre and Caveney, 1998, and reproduced from Warrant, 2008a, with permission from Elsevier) Scale bar = 0.5 mm.
As we saw above, despite their very wide facets and large rhabdoms, the apposition eyes of the nocturnal bee Megalopta genalis are not particularly sensitive. In fact, when these bees are skillfully negotiating trees in the rainforest at night, each photoreceptor is absorbing fewer than five photons per second (Warrant et al., ). How
is their remarkable visual performance nonetheless possible with so few photons? The answer lies in the neural summation of photons in space and time, a strategy that resides in the neural circuits processing the incoming visual signal (Snyder, , ; Laughlin, , ; Warrant, ).
Higher Neural Enhancement of Sensitivity: Spatial and Temporal Summation

When light gets dim, the visual systems of nocturnal and deep-sea animals can improve visual reliability by responding more slowly, either by having slower photoreceptors or by neurally integrating signals at a higher level in the visual system. A longer response time in dim light increases the signal-to-noise ratio and improves contrast discrimination by suppressing photon noise at temporal frequencies that are too high to be resolved reliably (van Hateren, ). A long visual integration time is a common feature of photoreceptors in nocturnal and deep-sea animals. In the dark-adapted nocturnal toad Bufo bufo it can be especially long—up to around seconds (Aho et al., a). The nocturnal spider Cupiennius salei (Pirhofer-Walzl et al., ) also has very long visual integration times—up to ms in darkness (in day-active houseflies integration times are more than times shorter!). In the deep sea, several crustaceans have been shown to have dark-adapted integration times in the range – ms, although in one exceptional isopod (Booralana tricarinata) it reaches ms (Frank et al., ). Such temporal summation, however, comes only at a price: it can drastically degrade the perception of fast-moving objects. Too much temporal summation could be disastrous for a fast-flying nocturnal animal that needs to rapidly judge the presence of approaching obstacles! Not surprisingly, slowly moving animals such as toads are more likely to employ temporal summation. Eyes can also improve visual reliability by summing photons in space. Instead of each visual channel collecting photons in isolation (as in bright light), the transition to dim light can activate specialized lateral neurons that couple the channels—defined by ganglion cells in vertebrate camera eyes or the ommatidia in arthropod compound eyes—together into groups.
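Both strategies rest on the same statistics: photon arrivals are a Poisson process, so a count of N photons carries shot noise of √N, and the signal-to-noise ratio grows as the square root of the number of photons collected. Lengthening the integration time therefore buys reliability at the square-root rate. A minimal illustration — the 5 photons/s rate is an assumption chosen to echo the dim-light catch rates discussed above, not a measured value:

```python
import math

def photon_snr(rate_per_s, integration_s):
    """Signal-to-noise ratio of a photon count under Poisson statistics.

    With N = rate * dt photons expected per integration time, shot noise
    is sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
    """
    n = rate_per_s * integration_s
    return math.sqrt(n)

# Illustrative only: a photoreceptor catching 5 photons per second.
for dt in (0.02, 0.1, 1.0):   # integration times in seconds
    print(f"dt = {dt:5.2f} s -> SNR = {photon_snr(5, dt):.2f}")
```

A fiftyfold increase in integration time (20 ms to 1 s) improves the SNR only about sevenfold, which is why slow vision must often be combined with spatial pooling as well.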
Evidence of lateral neurons exists in the first optic ganglion (lamina ganglionaris) of the nocturnal bee Megalopta genalis (Greiner et al., b). The bee’s four classes of lamina monopolar cells (or L fibers, L1–L4) are housed within each neural “cartridge” of the lamina, a narrow cylinder of lamina tissue that resides below each ommatidium. Compared to the L fibers of the diurnal honeybee Apis mellifera, those of Megalopta have lateral processes that extensively spread into neighboring cartridges (figure .A): cells L2, L3, and L4 spread to , , and lamina cartridges, respectively, whereas the homologous cells in Apis spread, respectively, to only , , and cartridges (Greiner et al., b, ). Such spreading has also been found in the L fibers of nocturnal cockroaches (Ribi, ), fireflies (Ohly, ), and hawkmoths (Strausfeld and Blest, ) and has been interpreted as an adaptation for spatial summation (Laughlin, ). Each summed group—itself now defining a single visual channel—could collect vastly more photons over a much wider visual angle, that is, with a greatly enlarged receptive field. This “spatial summation” results in a simultaneous and unavoidable loss of spatial resolution. Despite being much brighter, the image becomes necessarily coarser.
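The trade-off can be made concrete: pooling n channels multiplies the photon catch by n (and the SNR by √n), while the grid of independent channels becomes n times sparser, so the finest resolvable spatial detail coarsens by roughly √n (because channels tile two dimensions). A sketch with invented numbers:

```python
import math

def spatial_summation(rate_per_channel, n_channels):
    """Effect of pooling n neighboring visual channels into one.

    Pooling multiplies photon catch by n (SNR improves by sqrt(n)), but
    the array of independent channels is n times sparser, so the finest
    resolvable spatial frequency drops by about sqrt(n), since channels
    tile a two-dimensional visual field.
    """
    catch = rate_per_channel * n_channels
    snr_gain = math.sqrt(n_channels)
    resolution_factor = 1 / math.sqrt(n_channels)
    return catch, snr_gain, resolution_factor

# Hypothetical example: pooling 12 lamina cartridges at 5 photons/s each.
catch, snr_gain, res = spatial_summation(5, 12)
print(f"catch {catch} photons/s, SNR x{snr_gain:.1f}, resolution x{res:.2f}")
```

The optimum pooling extent is thus a compromise, set by how much spatial detail the animal's lifestyle can afford to give away at each light level.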
Figure 11.13 Spatial summation in nocturnal bees. (A) Comparison of the first-order interneurons—L-fiber types L2, L3, and L4—of the Megalopta genalis female (left) and the worker honeybee Apis mellifera (right). Compared to the worker honeybee, the horizontal branches of L-fibers in the nocturnal halictid bee connect to a much larger number of lamina cartridges, suggesting a possible role in spatial summation. L = lamina, M = medulla. Reconstructions from Golgi-stained frontal sections. (Adapted from Greiner et al., 2004b; Ribi, 1975) (B,C) Spatial and temporal summation modeled at different light intensities in Megalopta genalis (B) and Apis mellifera (C) for an image velocity of 240°/s (measured from Megalopta genalis during a nocturnal foraging flight: Warrant et al., 2004). Light intensities are given for 540 nm, the peak in the bee’s spectral sensitivity. Equivalent natural intensities are also shown. The finest spatial detail visible to flying bees (as measured by the maximum detectable spatial frequency, νmax) is plotted as a function of light intensity. When bees sum photons optimally in space and time (solid lines), vision is extended to much lower light intensities (non-zero νmax ) compared to when summation is absent (dashed lines). Note that nocturnal bees can see in dimmer light than honeybees. Gray areas denote the light intensity window within which each species is normally active (although honeybees are also active at intensities higher than those presented on the graph). (Figure reproduced from Warrant, 2008a, with permission from Elsevier)
Even though summation compromises spatial and temporal resolution, the gains in photon catch are so enormous that vision in dim light can be greatly improved. This is especially true in small eyes such as those of arthropods. If a locust, an insect with apposition eyes, employs summation optimally, it has the potential to see reliably at light intensities up to , times dimmer than those at which it would normally become blind (Warrant, ). The same conclusion can be drawn for bees. If one investigates this theoretically (Warrant, ), then both Megalopta (figure .B) and Apis (figure .C) are able to resolve spatial details in a scene at much lower intensities with summation than without it (Theobald et al., ). These theoretical results assume that both bees experience an angular velocity during flight of °/s, a value that has been measured from high-speed films of Megalopta flying at night. At the lower light levels where Megalopta is active, the optimum visual performance shown in figure .B is achieved with an integration time of about ms and summation from about ommatidia (or cartridges). This integration time is close to the photoreceptor’s dark-adapted value (Warrant et al., ), and the extent of predicted spatial summation is very similar to the number of cartridges to which the L and L cells actually branch (Greiner et al., b), thus strengthening the hypothesis that the lamina monopolar cells are involved in spatial summation. Even in the honeybee Apis, summation can improve vision in dim light (figure .C). Interestingly, the Africanized race, Apis mellifera scutellata, and the closely related Southeast Asian giant honeybee Apis dorsata, both forage during dusk and dawn and even throughout the night if a moon half-full or larger is present in the sky. Behavioral experiments show, however, that even the strictly day-active European honeybee is capable of seeing coarse habitat features such as large pale flowers at moonlight intensities.
This ability can be explained only if bees optimally sum photons over space and time (Warrant et al., ), and this is also revealed in figure .C (for an angular velocity of °/s). At the lower light levels where Apis is active, the optimum visual performance shown in figure .C is achieved with an integration time of about ms and summation from about three or four cartridges. As in Megalopta, this integration time is close to the photoreceptor’s dark-adapted value, and the extent of predicted spatial summation is again very similar to the number of cartridges to which the L and L cells actually branch. Thus, in the quest to extract reliable information from a very dim scene, the eyes of nocturnal and deep-sea animals have been forced to dramatically sacrifice spatial and temporal resolution. But as we have seen, these sacrifices have not been in vain—nocturnal animals are able to discriminate color and to perform complex navigational feats. The latter—visual navigation at night—is especially demanding. In fact, it is demanding even in bright daylight. Exactly how animals navigate is the topic of the next chapter.
12 Visual Orientation and Navigation
It is a sunny late afternoon on a beach on the west coast of Italy, not far from Pisa. With sunset approaching, the sandhoppers are on the move! Sandhoppers, smallish semiterrestrial amphipods that look like nothing more than animated brown beans (figure .), hop up or down the beach as they commute between the damp sands where they burrow for safety or hide for the day and the patches of sand at the water’s edge or the zones of debris where they forage. Look carefully—you can see that they are making their way up or down the beach eastward and westward, perpendicular to the north/south axis of the water’s edge. As the sun dips below the horizon, clouds cover the sky. Many sandhoppers have buried themselves in the sand, but the movements of the ones still visible in the gloom are haphazard and seemingly disoriented. A few hours later the sky has cleared once more, and the full moon is riding well above the horizon. It illuminates a scene where we see that the active little creatures have reemerged to the surface of the sand. They hop along, just as well oriented as they were during the day. On another beach, not far away, the shoreline runs east and west, and sandhoppers here are cheerfully making their commutes, but they hop mainly north and south when the sun or moon is out. The sandhoppers, Talitrus saltator, are demonstrating their ability to use the sun or moon as a compass. Like most animals, these amphipods have a hierarchy of mechanisms that assist them as they move about on the beach, but one of the most reliable is the position of the sun or moon in the sky. Orienting like this is by no means simple—as Juliet herself pointed out to Romeo, the moon is inconstant. Although she was referring to the phases of the moon, the moon and sun both travel continuously, crossing the full width of the sky in hours. Any animal that relies on their position must take this into account.
Nevertheless, the sun has directed the movements of animals almost from the beginnings of vision itself, so we will return to these sandhoppers in a few pages to consider how sun compass orientation works. Orientation refers to an animal’s ability to move or posture itself in a desired direction relative to its environment. The ability to orient is virtually a universal feature of animal life. Many animals go a step further and navigate through the environment, finding their way from their current location to a specific destination that might be meters or kilometers away. Orientation mechanisms, and even more those that
underlie navigation, are often complex and multimodal, involving not only visual cues but also sensory information about gravity, magnetic fields, chemical stimuli, mechanical and auditory cues, and even internal stimuli.

Figure 12.1 A sandhopper, Talitrus saltator, on a sandy beach in Italy. The animal is just under 1 cm in length. (Photograph provided by Riccardo Innocenti)

Earlier chapters have mentioned feats of orientation, such as how dung beetles roll their balls of treasure on straight paths, oriented by celestial cues such as the polarization of the sky or the axis of the Milky Way. The field of research on how animals orient and navigate is enormous, even just considering visual aspects of orientation and navigation, so we here offer what we hope is an appetizing sampler. As for so many other aspects of visual ecology, many of the critical observations have involved invertebrate animals, but work on vertebrates is very active as well. The main advantage of invertebrates is that they are motivated to do simple behaviors predictably. Vertebrates tend to mix simple behavior with more complicated activities, making interpretation difficult. We begin with a look at some of the very simplest mechanisms of orientation involving visual cues. These fundamental behaviors go back to the origins of photoreception and are still common today—even in some quite complex animals.
Simple Means of Visual Orientation: Photokinesis and Phototaxis A “kinesis” is a response to a stimulus that leads to a change in locomotion, either in speed or in rate of turning. Photokinesis, of course, means that the response is to a light stimulus. Although it is definitely a stretch to call a kinetic response an “orienting”
response, such an extremely simple behavior can produce something that looks like orientation, even in single-celled bacteria. If a creature stops moving or slows down appreciably when it perceives light, it will tend to spend more time in lighted places than in dark ones. Similarly, creatures that move in straight lines in the dark but initiate a series of turns in the light will tend to become concentrated in patches of light in an otherwise dark world. So even in the absence of the simplest possible imaging device, such beings do show a sort of ability to end up in the right place. Taxes (pronounced “tax-eez”), in contrast, are true orienting responses. Phototaxis just means a directional response to light. In general, positive phototaxis means moving toward a light source, and negative phototaxis produces a response in the opposite direction. Many animals demonstrate simple phototaxes—every diver who has switched on her dive light in a dark sea knows that within seconds the lamp becomes enveloped in a swarm of tiny creatures unable to resist the siren song of a bright light source. But clearly, there is more to the story of the lives of these plankton than this, because the same animals do not find themselves trapped at the surface of the ocean when the sun rises or the moon shines. Consequently, many behaviors that have been ranked as taxes by photobiologists do not reflect what animals do in their natural lives. Dive lights did not exist when phototactic marine animals evolved, and we will quickly leave them behind. But first we should take a short look at a couple of animals that are themselves biological dive lights—fish that inhabit the dark, mesopelagic zone of the ocean. Figure . 
shows two deep-sea fish (and one shallow-water one) that use their own light—bioluminescence—to attract prey (full disclosure: these animals have never been observed doing this, for the obvious reason that seeing rare behavior in rare fish deep in a dark sea, depending only on bioluminescence, is virtually impossible). The light organ of the grim-faced angler fish, an undescribed species, extends like a brilliant torch over its mouth, bringing planktonic prey or other small swimming creatures to the predator through their own strongly phototactic behavior. This does risk bringing in a predator instead of prey, of course, so the angler must be judicious in using the lamp. The dragonfish Bathophilus nigerrimus uses its light lure more cautiously, suspending it on a long, dangling appendage below the jaw, reducing the chance that an encounter with a predator will be lethal. Other deep-sea fish, and even some shallow-water ones like the flashlight fish Photoblepharon, also have well-placed lights that surely serve as lures for unsuspecting prey (Morin et al., ). Photoblepharon can turn its bacterial light source on or off at will with a retractable shutter (figure .C,D). A good question is why so many planktonic creatures are devotedly phototactic, given the risks involved. It appears that the response is an unavoidable byproduct of a desirable behavior, the ability to regulate depth or to engage in other simple photobehaviors that evolved in the context of natural light fields. Planktonic larvae of crabs have jewel-like little compound eyes (figure .) and show strongly positive phototaxis to bright lights. The light response is useful for determining their spectral sensitivity and thus for modeling their behavior in the field (figure .). 
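Returning to photokinesis for a moment: the claim that a purely non-directional response can concentrate animals in lit patches is easy to demonstrate in simulation. The toy random walk below is not a model of any particular organism — the world size, lit patch, and 0.1 movement probability are arbitrary assumptions — but it shows a walker with no sense of direction at all ending up spending most of its time in the light simply because it slows down there:

```python
import random

def photokinesis_sim(steps=20000, world=100, lit=range(40, 60), seed=1):
    """Toy 1-D photokinesis: a random walker that slows down in the light.

    The walker moves every step in darkness but only occasionally inside
    the lit patch. With no directional response whatsoever, it still
    accumulates time in the light.
    """
    rng = random.Random(seed)
    x = 0
    time_in_light = 0
    for _ in range(steps):
        in_light = x in lit
        time_in_light += in_light
        # Move with low probability in light, always in darkness.
        if rng.random() < (0.1 if in_light else 1.0):
            x = (x + rng.choice((-1, 1))) % world
    return time_in_light / steps

print(f"fraction of time in the lit 20% of the world: {photokinesis_sim():.2f}")
```

Although the lit patch covers only a fifth of this toy world, the walker spends well over half its time there — aggregation without orientation.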
The consistently strong phototaxis suggests that they should congregate at the water’s surface during the day, but in fact they migrate downward at sunrise and only move to shallow waters again in the evening (Forward et al., ; reviewed in Forward, ). The paradox was resolved when larvae were tested for their light responses in downwelling light fields that have angular distributions like those actually occurring in natural waters (chapter ). In a natural light field, larvae show responses that duplicate
their behavior in the field (Forward, ). The behavior is simple but extremely flexible, in that environmental variables such as temperature, salinity, and hydrostatic pressure interact with phototaxis to generate vertical movements that are adaptive for survival in nature. One particularly impressive phototactic behavior shown by crab larvae and many other zooplankton is the shadow response. When subjected to a sudden decrease in light intensity, these animals instantly sink, initiating a strong negative phototaxis (figure .). The response is adaptive for escaping predators, and in fact the strength of the response increases in water that contains the chemical odors of common predators such as ctenophores or fish. So even the supposedly simplest orientation responses to light can generate quite sophisticated behavior. In the end, though, phototaxis has its faults, most notably when it leads victims straight to the mouth of a deep-sea anglerfish.

Figure 12.2 Top two photographs are of deep-sea fish with bioluminescent lures. (A) An undescribed species of anglerfish with a luminescent lure above its mouth. (B) Bathophilus nigerrimus displaying a lure that it places on the end of a long trailing appendage below the chin. It also has a bioluminescent patch under the eye. (Photographs by E. Widder, copyright 1983, 1999 Harbor Branch Oceanographic Institution) (C,D) The flashlight fish Photoblepharon, which has a bright luminescent patch directly under the eye; this animal feeds on many forms of zooplankton, all of which are positively phototactic. (The patch is closed with a shutter in frame D.) The other bright patches are reflections from the strobe flash used to photograph the fish; they are not self-luminescent. (Photographs by J. Morin)
Figure 12.3 (A) The third zoeal stage of the larva of the crab Rhithropanopeus harrisi. (B) Phototaxis to various wavelengths of light by early-stage zoea larvae, showing two spectral sensitivity peaks near 380 and 500 nm. (C) Descent by these larvae (a mixture of negative phototaxis and passive sinking) on a sudden decrease in light intensity in clean water and in water that had contained ctenophores, a predator on larvae. Note that the responses occurred in much smaller light decreases when ctenophore odor was present. (Graphs modified from Forward, 2009)
Compasses, Landmarks, and Panoramas

Orientation Based on Celestial Cues

Beaches are dangerous places—exposed to sun and wind, inundated by tides and waves, and infested with predatory crabs, birds, and small mammals. Sandhoppers cope with these threats by foraging carefully and by burying themselves where they minimize the physical and biological threats of beach life. Moving up and down the beach perpendicular to the tide line to reach the water, their feeding region, or the best place to bury themselves, they are guided by many visual orienting cues, a major one being sun compass orientation. This is a basic form of celestial orientation, useful
because the positions of astronomical objects are physically determined. Having obtained the solar angle, sandhoppers innately “know” the right direction to go. When animals from beaches with different orientations are tested in the lab in sunlight, they continue to orient as if they are on the beach where they were collected, and thus head off in a direction specific to their home beach (figure .). The knowledge of direction clearly is under heavy selection, not surprisingly, given that a mistake can easily be lethal. Thus, even inexperienced animals generally choose the right orientation when raised in the lab and tested away from any cues other than the sun (figure .; Pardi and Scapini, ). See Scapini () for a review of the biology of sun compass orientation in sandhoppers.

Figure 12.4 Genetic inheritance of orientation in different populations of Talitrus saltator on the west coast of Italy. (B) Circles show the directions of hopping under a natural sky, with each dot showing the orientation angle of one individual and the arrow indicating the mean direction of travel of all animals. Triangle outside the circumference of the orientation circle shows the seaward direction at the home beach of each population, also indicated in the satellite view of the region (A). Both experienced adults and naive immature animals that had been raised in the laboratory oriented in the proper direction to the sun. (Modified from Pardi and Scapini, 1983) (Image data for A: Google, DigitalGlobe, European Space Imaging)

Simple sun compass orientation will point a sandhopper in the right direction but by itself cannot send the animal to a specific location. To do this, the animal would also need to know where it is at the start, what path angle is required to reach the goal, and how far it needs to go. The zonal orientation of sandhoppers seems to be all they need—it takes them to the water for feeding and hydration and to a damp location up the beach where they can bury themselves safely out of sight. Dung-ball rolling by beetles uses many cues to maintain a bearing, including the sun when it is visible (and the Milky Way at night; chapter ), but successive runs by individual beetles vary considerably in their orientation to the sun (or other cues) (figure .). Not surprisingly, if a beetle and its ball are picked up and then placed back on the ground, the beetle takes off in about the same direction as before. On the other hand, if the beetle constructs a new dung ball after the displacement, its next path’s bearing is quite random (figure .; Baird et al., ).
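The mean-direction arrows and the scatter of dots in orientation circles like these are summarized with standard circular statistics: each bearing is treated as a unit vector, and the vectors are averaged. The mean resultant length R is near 1 for tightly clustered headings (well-oriented sandhoppers) and near 0 for scattered ones (beetles rolling newly made balls). A minimal sketch — the bearings below are invented for illustration:

```python
import math

def mean_vector(bearings_deg):
    """Mean direction and mean resultant length R of a set of bearings.

    Standard circular statistics: each bearing becomes a unit vector and
    the vectors are averaged. R near 1 = tightly clustered headings;
    R near 0 = scattered headings.
    """
    n = len(bearings_deg)
    sx = sum(math.cos(math.radians(b)) for b in bearings_deg) / n
    sy = sum(math.sin(math.radians(b)) for b in bearings_deg) / n
    mean_deg = math.degrees(math.atan2(sy, sx)) % 360
    r = math.hypot(sx, sy)
    return mean_deg, r

# Hypothetical bearings for illustration:
clustered = [275, 280, 285, 290, 295]   # well-oriented animals
scattered = [10, 95, 170, 260, 330]     # disoriented animals
print(mean_vector(clustered))   # mean near 285 deg, R close to 1
print(mean_vector(scattered))   # R close to 0
```

Averaging the vectors, rather than the raw angles, is essential: the arithmetic mean of 10° and 350° is 180°, exactly the wrong answer, whereas the vector mean correctly returns 0°.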
Figure 12.5 Orientations of ball-rolling dung beetles, Scarabaeus nigroaeneus, in the field, South Africa. Each small black circle shows the direction an individual beetle selected. Left panels show the orientations of individual beetles in the morning and afternoon, oriented so that the sun is at the top (indicated by the arrow) in both cases. Note that each beetle selected an apparently random direction for rolling. On the right are examples of directions selected by individual beetles after being placed back in the dung pile on the same ball (top), or removed from the previous ball and obligated to construct a new one. In these panels, the arrow indicates the direction rolled with the first dung ball. Animals continued along the same axis if continuing to roll the same ball but selected a random rolling direction after making a second ball. (After Baird et al., 2010)
Other animals orient to the sun with much more subtlety and sophistication. Honeybees use a sun compass (or the polarized-light pattern in the sky—chapter — as a backup when the sun is hidden in clouds) to get the reference angle to fly to their foraging patches and back home again. Knowing the angle to the sun, together with an innate knowledge of the offset angle from the solar azimuth to the goal and the use of optic flow (chapters and ) to measure distance as they fly to it (addressed
later on), honeybees can travel kilometers to get to a given destination. Similarly, the monarch butterflies inhabiting the eastern parts of the United States migrate all the way to central Mexico to overwinter, getting directional information in part from a sun compass. Birds, some of the greatest migrants on Earth, are famous for their use of both solar and star-based orientation systems (Sauer and Sauer, ; Emlen, ). Their knowledge of the nighttime sky is not innate; instead, it depends on experience with seeing a rotating field of stars. Indigo buntings, for example, will orient to any polestar when trained in a planetarium. To these animals, "north star" is defined as "the star that rotates the least" (Emlen, ). Their knowledge of constellations is limited (if it exists at all), but they have an innate mechanism to learn the direction to fly based on any concentric, moving pattern of stars in the sky. Birds that have never encountered a rotating sky (even if only in a planetarium) are disoriented when tested with a normal night sky, one that presents no problems to birds that have viewed the sky in motion.

Sun compass (and star compass, and moon compass) orientation is useful because the positions of astronomical objects are physically determined and fully reliable. Of course, their reliability is compromised by their apparent movements through the sky caused by the Earth's rotation. Animals such as sandhoppers, honeybees, and monarch butterflies must possess a mechanism that compensates for the changing positions of the sun or moon caused by the Earth's rotation. Because solar (or lunar) azimuth is mostly determined by time of day, the mechanism requires (1) internal knowledge of the required bearing to the goal, (2) an internal clock, and (3) knowledge of the angle to take from the sun to achieve the desired bearing at each time.
It is not difficult to establish that sandhoppers “know” the right direction to go and thus meet requirement ()—if they are artificially entrained to a light:dark cycle that is offset from the natural cycle, they confidently head off in the predictably wrong direction when they see the real sun (see similar results below for monarch butterflies). Not surprisingly, given that the sun and moon move with independent time courses, they maintain separate internal clocks for the sun and moon compasses (Ugolini et al., ; Scapini, ). Like the sun compass clocks of sandhoppers, those of monarch butterflies also produce incorrect bearings if the monarchs are entrained to an artificial light:dark cycle that is offset from the natural one. For example, if the artificial cycle is advanced by hours relative to the natural one, the real sun at noon would have moved ° past its expected position, which is internally near its position around sunrise, at : a.m. Therefore, if the butterflies want to fly south but perceive the sun to be in the east, they head about ° to the right of the correct bearing (figure .). Most insect clocks tick away in their brains, but an odd thing about monarchs is that their sun compensation clock resides in their antennae. Thus, antennaless butterflies completely lose their sense of time (figure .; Martin et al., ). In fact, the antennae have their own sets of photoreceptors. If they are covered with black paint, the butterflies maintain a sense of time, but their internal clock drifts away from the natural cycle of the solar day. Sandhoppers, too, have special photoreceptor sets for measuring time, different from those used for sun compass orientation; but in these little crustaceans both receptor types are in the compound eyes (Forward et al., ). Being able to orient may be all that an animal needs to survive in its niche, but many animals need more. They have to reach a destination that is specific and necessary for their survival. 
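The requirements above can be captured in a toy model of a time-compensated sun compass. The sketch below deliberately assumes a simplified linear ephemeris (azimuth moving a constant 15° per hour, due south at noon), which is only an approximation of the real, nonlinear solar path; the 6-hour clock delay mirrors the monarch experiment:

```python
import math

DEG_PER_HOUR = 15.0  # toy linear ephemeris: azimuth moves 15 deg/h, due south (180 deg) at noon

def sun_azimuth(hour):
    """Apparent solar azimuth (deg clockwise from north) under the toy linear ephemeris."""
    return (180.0 + (hour - 12.0) * DEG_PER_HOUR) % 360.0

def compensated_heading(desired_bearing, true_hour, clock_shift=0.0):
    """Heading actually taken: the animal measures the real sun's azimuth but
    compensates for time of day using its (possibly shifted) internal clock."""
    internal_hour = true_hour + clock_shift
    # angle to hold relative to the sun, computed from the internal clock
    sun_angle = (desired_bearing - sun_azimuth(internal_hour)) % 360.0
    # that sun-relative angle applied to the *real* sun gives the realized heading
    return (sun_azimuth(true_hour) + sun_angle) % 360.0

# With an unshifted clock, a southward (180 deg) bearing is held at any hour
assert compensated_heading(180.0, true_hour=9.0) == 180.0

# Clock delayed 6 h (as in the monarch experiment): at real noon the internal
# clock reads 06:00, the sun is misjudged by 90 deg, and the animal heads
# 90 deg to the right of south, toward the west
shifted = compensated_heading(180.0, true_hour=12.0, clock_shift=-6.0)
print(shifted)  # 270.0
```

The linear 15°-per-hour ephemeris is our own simplification for illustration; as discussed later in the chapter, real azimuth motion is strongly nonlinear and animals correct for that as well.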
A sun compass is a valuable tool to have in such a task, as it can give a very accurate estimate of the vector the animal must maintain during its
Figure 12.6 The orientations during flight outdoors near noon of intact and antennaless monarch butterflies, Danaus plexippus, before and after being subjected to a 6-hour phase delay in the light:dark cycle. Animals on the left were in their normal light cycle, whereas those on the right had been entrained to the shifted cycle. The arrows show the mean vector for each group; the longer the arrow, the stronger the orientation. Intact animals showed time compensation, treating the southward sun as if it had been in the west. Butterflies that had their antennae removed were unable to entrain to the light:dark cycle and thus unable to orient. (After Martin et al., 2009)
travel. The same is true of the various orienting mechanisms based on celestial polarization patterns discussed in chapter , which produce accuracies every bit as good as those of direct sun (or moon) orientation. Having clock compensation makes these compasses valuable at any time of day and over any duration of travel. The azimuth of the sun’s path across the sky, and the polarization patterns it produces, are not linear functions of time. They change most slowly near sunrise and sunset, and the rate of change varies with time of year and latitude. Many animals that are superb navigators correct very accurately for these irregularities, implementing a function called the “solar ephemeris” (see Wehner and Lanfranconi, ). As a result, they are able to determine which way to go—and maintain that course—at any time of day or year with outstanding accuracy. But navigation is more than holding an accurate course. An animal must also know where it is when it starts, what absolute course is required to reach its goal, what route it should use to get there, how far it has gone as it moves along the way, and even how to recognize its goal when it gets there. We turn to the last question first—just how does an animal come to know that it is in the right place?
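The uneven pace of the solar azimuth noted above falls straight out of the standard equatorial-to-horizontal coordinate conversion. Here is a minimal sketch; the latitude (35°) and the equinox declination (0°) are illustrative assumptions, not values from the text:

```python
import math

def solar_azimuth(hour_angle_deg, latitude_deg, declination_deg=0.0):
    """Solar azimuth in degrees clockwise from north, from the standard
    equatorial-to-horizontal conversion. Hour angle is 0 deg at local noon
    and negative before noon (15 deg of hour angle per hour)."""
    H = math.radians(hour_angle_deg)
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    az = math.atan2(-math.cos(dec) * math.sin(H),
                    math.sin(dec) * math.cos(lat)
                    - math.cos(dec) * math.sin(lat) * math.cos(H))
    return math.degrees(az) % 360.0

# Azimuth change over one hour (15 deg of hour angle) at latitude 35 deg, equinox:
early = solar_azimuth(-75, 35) - solar_azimuth(-90, 35)   # 06:00 -> 07:00
midday = solar_azimuth(0, 35) - solar_azimuth(-15, 35)    # 11:00 -> 12:00
print(round(early, 1), round(midday, 1))  # 8.7 25.0
```

Near sunrise the azimuth crawls at under 9° per hour, while around noon it sweeps roughly three times faster, which is exactly the irregularity a solar-ephemeris mechanism must absorb.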
Visual Landmarks

Many animals commute between their homes, nests, or hives and places where they have learned to find food. Their behavior is not so different from our commutes between the house and the grocery store. They are very familiar with the endpoints and know their environs in great detail, but the terrain along the way is largely ignored except for a few crucial waypoints. Animals such as bees and wasps make their commute by air, and in many cases the actual travel is simply a matter of distance and direction. As mentioned above, direction generally comes from the celestial compass,
and we will get to distance measurements later in the chapter. At the endpoints of the trip, orientation is handled by a thorough knowledge of local landmarks. These flying insects inspect the home environment visually and make a similar inspection at the destination as they prepare to land at the food supply. This inspection involves visual scans of the locales using flight patterns that are both stereotyped and highly varied in detail. Because these scans are made on departure from home and are particularly pronounced the first time a bee leaves a newly discovered food source, the behavior has been called "turn back and look": the departing bee stops, reverses its direction of view to face the reward (or nest opening), and enters a very characteristic swaying flight as she examines the surrounding features (Lehrer, ). Among other things that she learns while "turning back and looking" are the shapes, colors, and locations of landmarks near the point of interest. But what, exactly, is a landmark? Humans have a pretty clear idea of what makes a good landmark, but our view is not necessarily shared by other animals. Furthermore, although we mostly choose landmarks by their visibility, many animals use chemical or auditory landmarks in their travels. Here, as visual ecologists, we restrict our focus to visual landmarks. It seems obvious that a good landmark should be prominent and distinctive, but how these properties are evaluated by various species will certainly vary. Zeil et al. () suggest that good landmarks have three properties: salience, permanence, and relevance. Salience refers to visual-system-specific cues such as shape, size, or color and to good discriminability from background. Permanence (or reliability) means that the landmark should be adequately long-lasting and fixed in place. Relevance refers to a clear connection between the landmark and the place of interest (or a waypoint on the way to it).
Short-lived animals such as many insects can clearly judge the salience of a landmark on first encountering it, but it is not clear how they evaluate its other properties without experience. We first consider landmark learning in flying insects. To be honest, these, together with their cousins the ants, are really the only animals in which the process has been investigated in detail, simply because it is not difficult to reconstruct both the geometry of their walks or flights and their lines of sight as they make their inspections. Figure . illustrates one process of visually learning a landmark, the turn-back-and-look flights of honeybees (Lehrer, ). On the first departure from a newly discovered food source (the canonical one, sugar water), the bee immediately turns around—so quickly that it has happened by the first video frame. It then makes a number of up-and-down, back-and-forth (and side-to-side, not visible in this projection) flights, mostly facing the opening to the food source but also looking around the area, eventually flying home at the upper right. The second departure, after the bee has returned once for food, is a reduced version of the first: this time the bee moves a bit further away before turning around and completes many fewer scans. The third departure is even more cursory, with a straight flight out, a few small scans, and a flight away. Once bees are fully familiar with the target's surroundings and appearance, they just shoot straight out of the opening on their way home, like miniature darts (figure ., lower right). Visual learning flights by wasps are much like those of bees, not a surprise considering their kinship. Solitary, ground-dwelling wasps of the genus Cerceris were encouraged to carefully memorize the nest opening each time they left home, by partially concealing the nest and thus making it difficult to find when they returned (Zeil, ).
On the next departure, they engaged in a very specific and characteristic nest-orientation behavior, somewhat reminiscent of the turn-back-and-look behavior just
Figure 12.7 “Turn-back-and-look behavior” in foraging honeybees, Apis mellifera. Bees’ positions at video-frame intervals (40 ms) are indicated by a dot for the head and a line for the body axis. Number above each illustration indicates the number of flights the bee has made to the feeding station, the hole in the center of the disk (the first visit is number 1; n means “many visits,” after the turn-back-and-look behavior phase has been completed). (After Lehrer, 1993)
described for bees. The wasp walked out from the nest entrance and turned around. Then she flew backward and side to side in a series of arcs, growing steadily in angular extent and also increasing in range from the nest entrance (figure .). This behavior lasted for many seconds, often more than a quarter-minute. This initial series can lead to larger circles at a greater altitude around the nest, and often around the surrounding area, before she finally departs. During the nest-inspection period, the wasp does not fixate the nest on the frontal part of the eye, but rather to the side by around °, probably as a compromise between seeing the nest and watching where she’s flying. When a landmark is present, she consistently faces toward it across the nest entrance (figure .). All this behavior is thought to provide a series of snapshots of the view of home for comparison with the view when the wasp returns from foraging (Zeil, ). The snapshots could include both geometric cues and image-motion cues derived from motion parallax (see chapter ). Having gathered information about the view of home or of the surrounds of a feeding location, what do bees learn? It turns out that this visual data gives extremely robust information about how to find the desired destination. If a bee learns the position of a food source relative to a single landmark, this is all that is required to search
Figure 12.8 Nest orientation in the solitary wasp Cerceris rybyensis on leaving its nest hole (marked by a cross). Position of the wasp's head is indicated by a dot, and its body axis by a line. Positions are indicated at intervals of 40 ms and numbered at intervals of 1 s. The position of a landmark (a cylinder 2.2 cm wide × 6.3 cm tall) is indicated by the dark circle. (After Zeil, 1993)
Figure 12.9 Landmark learning in honeybees. Vertical axis indicates the time spent in each square of the test area by a trained bee; black circles indicate positions of landmarks (cylinders 4 cm wide × 40 cm tall). Position where the bee has been trained to find food is at the center of the area, indicated by the arrows on two sides, but during testing no food was provided. (Based on Cartwright and Collett, 1982)
in the proper location. This is seen in figure ., on the left (from Cartwright and Collett, ). The landmark was a black cylinder, cm tall, and during training the food was placed some cm away. On removal of the food, the bee initiated a search centered on the position of the food, so the earlier visits clearly had given her quite precise information about its location. Most bees were able to judge the proper distance from the landmark even if its size were changed, apparently through motion parallax (Lehrer and Collett, ). Giving a bee three landmarks (figure ., right) permits extremely precise searches—note how much sharper the search peak is in this case. Significantly, moving the three landmarks away or even changing their sizes did not disrupt the search at all—the bee consistently searched at the point where the angles from the (missing) food source to the landmark set were the same as during training. This led Cartwright and Collett () to propose the “snapshot model” of
Figure 12.10 The “snapshot model” of landmark orientation in honeybees. The bee sees the indicated scene of landmarks (outer circle, 1) at position X1 and compares it to a “snapshot” in visual memory (inner circle). It then moves to position X2 in order to create a match between the viewed scene and the snapshot. (After Cartwright and Collett, 1982)
visual orientation (figure .): the bees store a representation of the appearances of the positions of the landmarks in memory and attempt to match their current view to the stored snapshot. Modeling experiments using the snapshot model reproduce bee-havior very well. One other case of landmark orientation is particularly illuminating. The tropical bee Megalopta, which we met in chapter , lives in the Panamanian rainforest and does all its foraging at night. Its nests are tiny holes in twigs found among foliage under the rainforest canopy. Anyone who has attempted to walk without artificial lighting in a rainforest at night knows that this is an extraordinarily challenging situation—you literally cannot see your hand in front of your face, much less a twig with a hole in its end down in the bushes. Yet Megalopta routinely forages away from home in this level of darkness for many minutes, unfailingly returning to the nest and entering immediately. When leaving, the bee does an orientation flight similar to those described already, facing back toward the nest hole and orienting to landmarks around it (figure .). Given an array of sticks, she chooses the one in the right position every time. This is a truly impressive visual feat for near-total darkness. What makes a landmark salient? For bees, an important quality is its color. This is not so surprising in day-active honeybees (Cheng et al., ), which generalize learned colors when returning to a set of learned landmarks with subtly shifted hues. Now that you have heard about Megalopta, it should not be a shock to learn that a nocturnal bee, in this case an Indian carpenter bee, Xylocopa, also recognizes the colors of landmarks—in their case lit only by starlight (Somanathan et al., a). This should come as no surprise; we saw in chapter that nocturnal color vision, once thought to be rare, is reasonably widespread after all. 
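Returning to the snapshot model of Cartwright and Collett described above, its core idea can be caricatured in a few lines of code: store the bearings of the landmarks as seen from the goal, then move so as to reduce the mismatch between the current view and the stored snapshot. The toy below is our own simplification (bearing-only views, greedy descent over eight compass directions, invented coordinates), not the authors' sector-based matching scheme:

```python
import math

def bearings(pos, landmarks):
    """Compass bearings (radians) from pos to each landmark."""
    return [math.atan2(ly - pos[1], lx - pos[0]) for lx, ly in landmarks]

def mismatch(pos, landmarks, snapshot):
    """Total angular disparity between the current view and the stored
    snapshot, pairing each remembered bearing with the nearest current one."""
    seen = bearings(pos, landmarks)
    total = 0.0
    for s in snapshot:
        total += min(abs((b - s + math.pi) % (2 * math.pi) - math.pi)
                     for b in seen)
    return total

def home_by_snapshot(start, landmarks, snapshot, step=0.1, iters=400):
    """Greedy descent: repeatedly step in whichever of 8 directions most
    reduces the view/snapshot mismatch; stay put if none improves it."""
    pos = start
    for _ in range(iters):
        best, best_err = pos, mismatch(pos, landmarks, snapshot)
        for k in range(8):
            a = k * math.pi / 4
            cand = (pos[0] + step * math.cos(a), pos[1] + step * math.sin(a))
            err = mismatch(cand, landmarks, snapshot)
            if err < best_err:
                best, best_err = cand, err
        pos = best
    return pos

landmarks = [(0.0, 4.0), (3.0, -2.0), (-3.0, -2.0)]
goal = (0.0, 0.0)
snapshot = bearings(goal, landmarks)            # view memorized at the goal
end = home_by_snapshot((1.0, 0.8), landmarks, snapshot)
print(end)  # lands within a step or two of the goal
```

As in the experiments, the searcher converges on the one point where all the remembered landmark bearings match at once, without ever representing its own coordinates.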
The other group of animals in which landmark orientation has been extensively studied is the ants. Taxonomic fellows of bees, adult ants have quite a different task to
Figure 12.11 Nocturnal learning and recognition of landmarks by the Panamanian bee, Megalopta genalis. (Left) A departure from the nest twig, videotaped from below, with an artificial landmark, a white square card, placed nearby. Bee's position is indicated by a dot (head) and line (body axis) at intervals of 40 ms. (Right) Bees that have learned the position of their nest in an array of similar twigs return to the nest at the learned position, not the actual nest (marked with a star). (After Warrant et al., 2004)
face when orienting. They have neither the ability to move in three dimensions nor an easy way to inspect a complex set of landmarks rapidly. Living on surfaces, and endowed with highly developed chemical senses, ants live in a world of scents and tastes. Their chemosensory abilities are less useful, however, in deserts and on hot, tropical soils, because any scent cues that they might deposit are quickly degraded or vaporized. The ants that have attracted the attention of visual ecologists inhabit just such places—the flat, empty deserts of Tunisia and the brushy scrubland of the Australian outback. We now visit these arid environments to see how ants manage their travels.

Rüdiger Wehner, with many students and colleagues, has been studying vision, orientation, and navigation in the Tunisian desert ants of the genus Cataglyphis for nearly half a century. The work has revealed many unsuspected abilities in insects, making this elegant ant (figure .) an invertebrate model organism second only to the honeybee for studies of navigation biology (reviewed in Wehner, ). Cataglyphis is a trim, long-legged creature well adapted for keeping its body high above the hot sands of the daytime desert and for traveling fast and far. In fact, it is a daytime forager, whose ability to tolerate the heat and direct sunlight lowers the risk of being picked off by a wandering predator. Nevertheless, the animal needs to make quick, wandering excursions in search of food. Once a portable morsel of food is encountered (or bitten off), the ant returns on a bee-line (ant-line?) home, even over distances exceeding m, directly to the vicinity of the nest, which is only a small hole in the sand. The task is accomplished by keeping accurate track of the vector home throughout the search by use of a compass and odometer.
The compass is—not surprisingly—celestial, based robustly on the position of the sun and equally on the skylight polarization pattern (using ultraviolet-sensitive dorsal rim polarization receptors). With it, the ant knows throughout each leg of its meandering outward journey which
Figure 12.12 Landmark orientation in the desert ant Cataglyphis bicolor, an individual of which is shown at the top left. Blue dots are paint marks used to identify individual ants. (Photo courtesy R. Wehner) Photograph at the right shows an array of artificial landmarks used to study orientation in this species in its actual habitat in Tunisia, arranged on the desert floor. (From Wehner, 2003) Graphs at the bottom show search behavior as paths (left) or time in a single grid square (right; darker indicates more time) by ants searching for food they have been trained to find midway between two landmarks. (From Möller, 2001)
way it is heading. The odometer is not visual but is thought to be based on mechanical information generated while walking. Each vector during the meander is handled by a central path integrator (no small task for a .-mg brain), and when the ant is ready to head home the length and solar (or polarizational) angle of the home vector are already known. The ant then runs down the vector, and if it does not find its nest at the end of the run it initiates an efficient search pattern to locate it.
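The path integrator described above reduces, in the abstract, to a running vector sum: each leg's compass heading and odometer distance are accumulated, and the home vector is simply the negation of the total. The sketch below is a toy illustration (headings and leg lengths are invented, and a real ant integrates noisy compass and odometer signals continuously, not in discrete legs):

```python
import math

def path_integrate(legs):
    """Accumulate (heading_deg, distance) legs into a running position.
    Headings use the math convention (degrees counterclockwise from +x).
    Returns the distance and heading of the home vector, which is just
    the negation of the summed outward displacement."""
    x = y = 0.0
    for heading, dist in legs:
        x += dist * math.cos(math.radians(heading))
        y += dist * math.sin(math.radians(heading))
    home_dist = math.hypot(x, y)
    home_heading = math.degrees(math.atan2(-y, -x)) % 360.0
    return home_dist, home_heading

# A meandering outward trip collapses into a single straight run home
legs = [(0, 10), (90, 5), (200, 8), (330, 4)]
home_dist, home_heading = path_integrate(legs)
```

Running the home vector from the trip's endpoint lands the forager back at the origin, which is why a single accumulated vector suffices no matter how tortuous the outward search was.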
The presence of any landmarks near a food item helps the ant to find it again precisely, and ants well remember the visual locations of landmarks and their sizes as seen from the position of the food (see figure ., top right). Unlike bees and wasps, ants do not know the distances of landmarks from the target destination. If trained on two small cylinders, with the food source midway between them, they will search for it in the trained location (figure ., bottom). However, if these small cylinders are moved further apart, the ants still recognize them as landmarks but do not know where to search for the food source. Doubling the distances of the landmark cylinders while simultaneously doubling their sizes produces a precise search back in the middle (Möller, ; original data from Wehner and Räber, ). The implication is that ants do not use motion parallax or other potential cues to measure distance, only perceived size. Perhaps this is because motion parallax generated by walking over irregular terrain is not as reliable as that available in flight. Cataglyphid ants use landmarks not only to locate the food but also as waypoints along the route to the food (Collett et al., ).

Mammals, of course, use landmarks (as you well know because this is probably how you find your way to the library from home). A special feature of the mammalian sense of space is that mammals (at least rodents, bats, and possibly primates) have a class of cells in the hippocampus of the brain called "place cells." These fire when an animal is in a particular location in open space and could provide a way for the animal to know where it is, even in the absence of obvious external stimuli. Thus, they seem to act a little like internal landmarks. (A related cell type in the primate hippocampus is called a "spatial view cell" because these are active when an animal inspects an object in a particular absolute direction, independent of its own posture.) Place cells are at least in part fixed by visual cues.
If a particular visual landmark, a "cue card," is moved to a new location in an arena, place cells in a rat's hippocampus reorient as if the arena had rotated with the card. They fire in the same relative location, with the cue card serving as the reference
Figure 12.13 Place cells in the rat, Rattus norvegicus. Figure shows a small arena within which the rat could wander freely while the activity of a single cell in its hippocampus was monitored for spiking activity. Spiking rate is indicated by color: bluer and darker colors indicate the highest levels of activity. The arena had a white “cue card” placed along one edge (indicated by the arc), which was moved to a different location for the recording session on the right. Note how the favored locations of this place cell tracked the location of the visual cue. (After Muller and Kubie, 1987)
for this location (figure .). This is fairly strong evidence that the sense of place, at least in rodents, is established by vision.
Panorama Orientation and Canopy Orientation

The sky is not the only source of potential compass information. The visual panorama itself is useful as long as the panorama is generated primarily by objects and terrain at a distance large enough to remain stable during locomotion, or at least during the travel between waypoints used along a route. Wood ants, Formica rufa, have a much richer visual environment than ants from deserts, and they use both local landmarks and views of the panorama to orient their trips to a food source (Graham et al., ). Ants were trained to find a feeder m away from their starting point and were provided with a limited view of the laboratory as well as of a prominent black cylinder, which served as a landmark (figure .A). Ants that found the feeder were allowed to continue training for several trials and were then tested in the same task, both with and without the landmark. With the landmark present, ants routinely headed in its direction and then diverted to the direction of the feeder (figure .B). Somewhat surprisingly, even when there were no landmarks present, the ants continued to follow a path that initially directed them toward the [absent] landmark before turning back toward the feeder. Even when the landmark was placed on the "wrong" side, almost all ants continued to visit the location of the missing landmark rather than changing loyalties to the new one (figure .C). The most likely explanation for the use of the dog-leg during travel is that the ants had learned to use the panorama of the laboratory for guidance, and because this was identical with and without the presence of the landmark, they could continue to orient properly throughout their familiar path (Graham et al., ).
Panorama use seems sensible for wood ants, but deserts like those of Tunisia often offer nothing to see in the panoramic view, especially from the height of an ant (see figure .), although Cataglyphis does use local landmarks when available, as noted above. Other deserts do not offer such dire prospects. The scrub desert of central Australia is abundantly provided with visual landmarks such as gum trees, brush, and shrubs, and the Australian red honey ant Melophorus bagoti thrives there, foraging during the day in the southern summer. Like Cataglyphis, Melophorus is capable of using a celestial compass and odometer information to support a path-integration system, but, with landmarks generally available, it often prefers to use them for laying out regular travel routes to and from the nest (also a hole in the ground). Ken Cheng's research group has used this tendency to inspect how ants might combine landmark information with views of the distant panorama to orient their foraging runs (Wystrach et al., ). After being trained to visit a feeder, ants would enthusiastically run home via a characteristic route through a string of landmarks of varying sizes and shapes provided between the feeder and the nest (figure .). If the landmarks were shifted among their individual positions, the ants showed some disorientation on the homeward run, as if encountering the "wrong" landmark confused them, but still found their way home (figure .). Ants even managed fairly well with no landmarks at all, suggesting that the presence of landmarks was useful to them, but views of the overall panorama were sufficient to get them headed in the right direction. Wystrach et al.
Figure 12.14 Landmark learning and use in the ant Formica rufa. (A) The platform used to train the ants, showing the starting position, position of the black landmark, and location of the training feeder. (B) Tracks of a single ant trained with the landmark to the left, showing the tendency to head toward the landmark and then redirect toward the food. This behavior by trained ants continued during testing with no food and with the landmark removed (C, tracks of several trained ants). The ants continued the route they had learned when the landmark was present. (D) Even if the learned landmark was placed to the right, almost all trained ants continued to use the route they had learned initially. (Axes in B–D: X- and Y-position relative to start, in cm.) (After Graham et al., 2003)
() concluded that the two sets of visual cues interact, and that the views of the landmarks and panorama together set up a series of visual objectives that move the ants to the nest. In , Bert Hölldobler (), working with the delightfully named African stink ant (Palthothyreus tarsatus), reported an entirely new form of panorama orientation that involved only the overhead features of the visual surround. These hymenopterans live in forests south of the Sahara and construct an enormous nest (up to some square meters in area) with multiple entrances. Each worker ant uses only a single exit, which it must unfailingly find on return from a foraging excursion of several meters into the forest. The sky overhead is obscured by trees, and the sun (and even the polarization pattern of the sky) is seldom visible due to heavy cloud cover in these equatorial forests. Although chemical trails almost surely assist them, removing the soil from their favored routes does not disrupt their homeward orientation, even if they are randomly placed at any point along the newly scraped trail, and even if their view of all surroundings is obscured by a screen that is used to surround them as they
Figure 12.15 Landmark use in the Australian desert ant Melophorus bagoti, an individual of which is shown carrying a reward of a cookie crumb in the photograph on the upper left. Ants were trained to return from a feeder to the nest (indicated by a star on the graphs) on a route (middle photograph) that involved a number of movable landmarks (“Training,” left graph). The trained ants rapidly threaded their way through the landmark sequence (“Trained”). When landmarks were transposed, as in the panel marked “Transposition test,” the ants were disoriented only slightly from their learned paths (far right panel). Melophorus apparently combines its views of landmarks and the visual panorama of distant objects (top right photographs, the blurred version approximates an ant’s view of the scene) to find its way through an unfamiliar sequence of landmarks. (Modified from Wystrach et al., 2011; ant photograph courtesy P. Schultheiss, others courtesy A. Wystrach)
travel. Instead, it became clear that they were fully oriented only when they could see the treescape above them. Rainforests have distinctive patterns of foliage seen against the sky (figure .), and Hölldobler () demonstrated that these were sufficient to produce excellent orientation in the laboratory. He termed this “canopy orientation,” and it is likely to be a common orientation mechanism for animals inhabiting forest floors.
Figure 12.16 Images of the Panamanian rainforest canopy taken at intervals along a trail through the forest. Note that each view is quite distinctive and that the sequence of views could serve for canopy orientation as a series of landmarks for a forest floor animal.
We end this section on panorama orientation with examples taken from opposite extremes of the animal kingdom. Box jellyfish, or cubozoa, have eyes in four groups, of which eight (two per group) have superb (if defocused) lens optics (Nilsson et al., ). These simple animals have no central nervous system to which the eyes connect, only a rather diffuse nerve net serving a single nerve ring that circles the animal, with a ganglion found near each group of six eyes. Because they swim under the mangrove canopy in marine swamps, they are at risk of rapid deportation if they find themselves drifting into the main streams of mangrove channels. They use their eyes to orient effectively to the mangrove canopy, rapidly swimming toward cover whenever they are placed up to nearly m from the canopy edge. This is not the finely oriented canopy orientation of stink ants, but considering the reduced nervous system present in jellyfish, it is an extraordinarily impressive orientation behavior for a simple organism.

The second example is from a vertebrate—specifically, hatchlings of the loggerhead sea turtle (Caretta caretta), when they emerge onto the sands of their natal beach. Although they hatch at night, these little creatures are at extreme risk of predation by stalking birds and roving mammals while on land. They are so motivated to get going that they look more like tiny wind-up toys than living creatures, furiously racing to the water's edge. Being oriented properly is a critical survival skill, and even though they have never seen a beach before (or even been out in the open), they normally have no problem heading in the right direction, even when the sea is not visible from their nest. The key to orientation is the elevated horizon found toward the landward direction, formed by dunes or beach vegetation.
If hatchling turtles are released in an area with one side partially occluded by an elevated, dark horizon, they head away in the opposite direction (figure 12.17; Salmon et al., 1992). The orientation remains true even if the sky above the artificial, raised horizon is brighter than the open, opposite side (a situation that would occur in nature if, for instance, the moon hung in the sky opposite the sea). A simple type of panorama orientation,
Orientation and Navigation
Figure 12.17 Seaward orientation by hatchling loggerhead turtles, Caretta caretta. The photographs at the top show hatchlings on their first emergence from the nest (left) and making their way down the beach to the ocean (right). (Photographs courtesy of J. Wyneken) When tested with a surround that was raised by 8° on one side, the hatchlings oriented away from the occluded side, whether the light (indicated by a sun graphic) was toward the open side (left) or the occluded side (right). The dots show the orientations of individual hatchlings. The gray arrows show the average orientation in each case. (After Salmon et al., 1992)
it nevertheless gets a completely naive little turtle quickly into the relative safety of the ocean.
Visual Odometry
Above, we noted that ants are thought to use mechanical cues, most likely tied to their stepping movements, to gauge how far they have walked. If an animal uses landmarks, this is all the better, because the "odometer" can be reset at each landmark, and
a new compass heading and rezeroed odometer can handle the next phase of the trip. An animal like Cataglyphis does not often have the luxury of landmarks, and its path-integration system has to convert each compass bearing and odometric reading into a running calculation of the direction and distance to home—which it does with uncanny accuracy. But what about a flying animal? Surely it does not count wingbeats, beating at hundreds of cycles a second? Or does it monitor its energy consumption, its "gas mileage"? The answer is more interesting, taking us back to vision and a familiar topic, optic flow. We read in chapter how honeybees and other flying animals use optic flow to monitor flight speed and to maneuver safely among obstacles. For the same reasons that make these useful insects amenable to studies of so many kinds, some already covered in this chapter, they have been the animals of choice for work on odometry in flight. The techniques follow some of the protocols that have been successful for other studies of their motion vision, flying the bees through tunnels (Srinivasan et al.). The bees are first trained by placing a reward a certain distance down a tunnel lined with vertical stripes running perpendicular to the flight path (figure 12.18, bottom). If the reward is later removed, the bees are quite good at knowing where it was, and they will search in the vicinity of the trained distance (figure 12.18, top). That this is due to monitoring of optic flow is shown by flying the bees in a tunnel with stripes of similar thickness and spacing but oriented parallel to the flight axis. In this situation, the bees almost completely lose their ability to estimate distance and fly back and forth throughout most of the length of the tunnel in their search (figure 12.18, top).
When bees are flown in tunnels narrower than the one they were trained in, the apparent velocity of lateral optic flow is larger at the same flight speed (chapters , ), and they underestimate the distance to the reward, searching short of the trained location; wider tunnels (with stripes flowing more slowly at the sides) produce overestimates (figure 12.18, middle). Later experiments allowed the bees to fly through open air before reaching a feeder placed in a tunnel (Srinivasan et al.). The bees "reported" to their nestmates how far they had flown when they performed their famous dances back at the hive. Bees that had flown to an outdoor feeder some m from the hive, with the food at the start of the tunnel (so they did not have to fly down it), reported that the food was far closer than when they had to fly down a tunnel for m to reach food only m from the hive. The bees had used the sum of all the optic flow they encountered in flight to give them the distance to food. Outdoors, the optic flow is slow because it comes from faraway objects (like watching a distant mountain as you drive past it). The tunnel generates far more flow, and the bees treat this as a large distance (one wonders how fast they thought they were flying!). Srinivasan et al. could even calculate how fast the odometer turns, based on the rate of optic flow. More recent work by Eckles et al. shows that bees that forage by flying into the forest canopy use optic flow to estimate how far up they have flown. Obviously, this is important information for a bee that has to ascend up to m to find flowers.
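The tunnel-width effect has a simple geometric core that can be sketched in a few lines of code. The model below is our simplification, not Srinivasan et al.'s actual analysis: it assumes the odometer integrates the angular image speed of the side walls, ω = v/(W/2), so the total flow accumulated over a distance d in a tunnel of width W is 2d/W radians, independent of flight speed. The 3.2-m trained distance and the 7-, 14-, and 22-cm widths follow figure 12.18; the function names are ours.

```python
# Toy model of honeybee visual odometry as accumulated lateral optic flow.
# Assumption (illustrative): the odometer integrates the angular image speed
# of the tunnel walls, omega = v / (W/2), so the flow accumulated over a
# distance d is 2*d/W radians regardless of flight speed v.

def accumulated_flow(distance_m, tunnel_width_m):
    """Total lateral image motion (radians) after flying distance_m."""
    return 2.0 * distance_m / tunnel_width_m

def search_distance(trained_distance_m, trained_width_m, test_width_m):
    """Distance at which the odometer reads the trained flow total."""
    trained_flow = accumulated_flow(trained_distance_m, trained_width_m)
    return trained_flow * test_width_m / 2.0

# Bee trained to food 3.2 m into a 14-cm-wide tunnel, then tested in
# tunnels of three widths:
for w in (0.07, 0.14, 0.22):
    d = search_distance(3.2, 0.14, w)
    print(f"test width {w * 100:.0f} cm -> search at {d:.2f} m")
```

A narrower test tunnel doubles the flow per meter, so the trained flow total is reached at half the physical distance; a wider tunnel produces the opposite error, matching the pattern in the middle panel of figure 12.18.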
Orientation, Navigation, and Multimodality
It is not easy to orient. Animals like sandhoppers only have to decide whether to go up or down the beach, and to do this they bring in a host of overlapping, redundant visual analyses: sky brightness, sky color, sun position, location of vegetation and trees, and possibly even polarization pattern. To this they add nonvisual cues such
[Figure 12.18 graphs: relative search frequency versus distance along the tunnel, for cross stripes versus axial stripes (top) and for test tunnels 7, 14, and 22 cm wide (middle); the training tunnel, with the reward 3.20 m from the entrance, is diagrammed at bottom.]
Figure 12.18 Visual odometry in honeybees trained to find food at a particular distance into a tunnel lined with vertical stripes. The food's location is marked as "Reward" (bottom image) and was at the same distance from the entrance in all experiments. The bees were then tested in tunnels lacking food and of various widths (middle graph). "Relative frequency" indicates the normalized proportion of bees in each test searching at the indicated distance. The top panel shows results from the control tunnel (diagrammed at the bottom) and from one with longitudinal stripes. In the presence of stripes parallel to the flight direction, which therefore offer no motion stimuli, the bees search throughout almost the entire length of the tunnel. (After Srinivasan et al., 1997)
as beach slope, gradients of wetness, and chemicals present in the near-ocean air and in the sand. For an animal to navigate successfully is a challenge well above that of simple orientation; its only real hope of success is to bring in every possible sensory modality in a web of signal mixing and multimodal analysis. In this chapter so far, we have isolated a single sensory system, vision, in keeping with the subject of the book, but we end this chapter with an example of how vision itself is part of a multimodal sensory system, magnetoreception.
Original formulations of how animals sense the Earth's magnetic field were based on the use of internal magnets or some sort of sensory interaction involving ferromagnetic materials in the body. In some animals this certainly could be the foundation of magnetoreception, but another hypothesis has gained ascendancy in recent years, based on processes that require light to operate. The mechanism involves the interactions of excited states of molecules with Earth-strength magnetic fields and is called the radical-pair hypothesis. For a discussion of how this could work, see Ritz et al. The relevance of the radical-pair mechanism to this chapter is that a reasonable way to create a radical pair involves photoexcitation. Paraphrasing Ritz et al., the system can work if it contains a fixed array of radical pairs that are functionally linked to light-sensitive pigments. Thinking of fixed arrays of photopigments leads one straight to the retina, and in the last decade or so, experimental support has been found for the notion that birds have a magnetic compass located in their retinas (Ritz et al.; Mouritsen et al.; Heyers et al.). If a compass based on the radical-pair mechanism is in fact present in bird retinas, it appears that the photopigment involved is not an opsin but instead the nonvisual light-sensing pigment cryptochrome. Cryptochrome is expressed in a class of retinal ganglion cells in avian retinas, and it has the photodynamics required to produce the radical pairs needed for magnetoreception. The retina is obviously itself an array, although how cryptochrome is oriented to provide the required directional sensitivity is not clear, and it is possible that bird head movements add to the sense of direction. Nevertheless, there is strong evidence that the cryptochrome-containing retinal ganglion cells send connections to visual centers in the brain (Heyers et al.).
It is quite possible that birds literally “see” their way to their migratory destinations. With these observations, we have completed our all-too-brief tour of how vision is involved in keeping animals going in the right direction and taking them where they want to go. We now bravely proceed to an even higher-level view of visual function: the role of vision in advertising and hiding!
13 Signals and Camouflage
A juvenile swordfish is slowly cruising the depths of the Sargasso Sea when it spots a small object silhouetted against the dim blue downwelling light. It is hard to see clearly, but the object resembles the body of a small fish drifting sideways, as if it were injured, sick, or dying. The swordfish, which has been traveling the empty sea for over a day without seeing any suitable food, turns upward and begins to approach this easy prey with ever-increasing speed. Just as it is about to strike, the swordfish briefly notices a much larger, blue cylindrical object surrounding the silhouette of the fish. Suddenly everything vanishes, and the swordfish feels a stabbing pain in its right flank, where a circular plug of flesh has been neatly extracted. The small sickly fish and the blue cylindrical object are nowhere to be seen. The swordfish has just been outwitted by the cookie-cutter shark (Isistius brasiliensis), a -cm-long and relatively sluggish animal with a unique feeding strategy: it uses a form of bioluminescent camouflage known as counterillumination to hide the bulk of its silhouette while leaving a small portion of its ventral surface visible as a lure (Widder; figure 13.1). This allows it to prey on animals that are much larger and faster. As we have seen in this book, vision serves many purposes, but two of the most important are predation and reproduction. It is therefore not surprising that a diverse assemblage of species has evolved clever ways to hide and equally clever ways to signal information to both conspecifics and heterospecifics. Camouflage is usually used either to avoid being eaten or to improve the chances of getting within striking range before being detected, although it can also be seen in conspecific interactions (e.g., "sneaker" males).
Signals are used to communicate with conspecifics, most often for reproductive and social purposes, and to either lure heterospecifics or warn them of chemical or other defenses. Both camouflage and signals often need to function in a number of different optical environments and against visual systems with varying abilities, many of which are under strong selective pressure to either break the camouflage or eavesdrop on the signal. In certain cases, an individual needs to signal clearly to a conspecific while remaining inconspicuous to other species. This evolutionary arms race between hider and seeker, the potential need to send private signals, and the intimate connection with the optical environment make camouflage and signaling exciting areas of research for the visual ecologist.
Figure 13.1 (Top) The silhouette of the cookie-cutter shark (Isistius brasiliensis), as viewed from below without counterillumination. (Bottom) The silhouette of the same animal with its many small ventral photophores turned on. Note that a small patch near the mouth still casts a silhouette that is thought to act as a lure.
The study of animal camouflage and signaling has grown tremendously since Hugh Cott's and Abbott Thayer's pioneering work during the first half of the twentieth century (Thayer and Thayer; Cott). A complete review of the subject would extend beyond optics, physiology, and morphology into perceptual and cognitive psychology and would easily fill several large volumes (e.g., Fox and Vevers; Ruxton et al.; Stevens and Merilaita). Because this book focuses primarily on the connection between visual systems and the optical environment, this chapter concerns itself with the various mechanisms of camouflage and signaling and their success in different habitats, leaving the perceptual, cognitive, and evolutionary aspects for others to review (e.g., Stevens). It also focuses on the principles underlying the various mechanisms rather than attempting to review every biological instance of each.
Detection versus Recognition
Before we discuss the mechanisms of camouflage, a few terms need to be defined. The fields of camouflage and signaling are famously dense with conflicting and complex definitions that we will not reproduce here (but see the review by Stevens and Merilaita). However, it is important to distinguish detection from recognition. A signal or animal is detected if it can be visually separated from the background; it is recognized if it can be identified as belonging to a particular category relevant to the viewer (in most cases, as being either a conspecific or a potential competitor, predator, or prey). In relatively featureless habitats such as the open ocean and the sky, reducing detection is often the central goal of camouflage, since most detected objects will be approached and investigated in more detail, regardless of their initial familiarity. In these habitats we see camouflage strategies that reduce the contrast of the object as much as possible, such as transparency, mirroring, countershading, and counterillumination (figure 13.2, left). We also see signals that are relatively simple in optical terms, such as bioluminescence. However, in more complex habitats such as forests and coral reefs, an animal's camouflage may make it detectable but not recognizable. The primary camouflage strategy in these habitats involves patterning of the surface via pigments and structural colors. These patterns either break up boundaries and three-dimensional form, making the animal difficult to recognize as a single object, or they disguise the animal as some component of the habitat that is innocuous to the relevant viewers (e.g., algae,
Figure 13.2 (A) Animals in featureless environments, such as this snub-nosed dart (Trachinotus blochii), tend to employ strategies that reduce contrast and thus detectability. (B) In contrast, animals in complex environments, such as the giant Australian cuttlefish (Sepia apama), tend to employ strategies that reduce recognition, usually via color patterns and in some cases texture.
rocks; see figure 13.2, right). Thus, the camouflaged animal is seen but not identified as important, or is identified as important but placed in the wrong category. The perceptual mechanisms underlying recognition are exceedingly complex, and—as is well known to biologists searching for animals in their native habitats—experience greatly improves the odds of finding previously unrecognizable animals. Thus, this area of camouflage, which is related to the field of machine vision, is one of the great unsolved problems of visual cognition. Complex optical habitats also make signaling more challenging, and signals must often be multimodal to be clearly separated from the background. Thus, the steady glow of a deep-sea bioluminescent octopus is replaced with the flashing color patterns found on Heliconius butterflies in flight. No discussion of camouflage and signaling is simple, however, and one could argue that a fish hiding in the open ocean is not reducing detectability but instead mimicking water, or that a leafy sea dragon (Phycodurus eques) is not mimicking algae but rendering itself undetectable on a noisy background. However, we feel that the distinction between strategies that (generally) reduce detection in simple environments and those that reduce or enhance recognition in complex habitats is sharp enough to guide a discussion of the visual ecology of camouflage and signals. The remainder of this chapter is organized by mechanism, based on the four ways that matter can interact with light: (1) emission (bioluminescence), (2) transmission (transparency), (3) reflection (mirroring), and (4) absorption (pigmentary colors) (figure 13.3). The first three mechanisms, at least when used as camouflage, are found primarily in pelagic species. The last is found in nearly all habitats and has the most varied expression. This division is not perfect, because pigments require reflective tissues beneath them and some mirrors are colored.
Nevertheless, it serves as a useful way to organize a discussion of camouflage and signaling mechanisms.
Camouflage via Emitted Light—Counterillumination
In both terrestrial and aquatic habitats, fewer photons travel upward than downward. On land this is because most natural surfaces reflect less than half the light that strikes them. In aquatic habitats it is because most down-traveling photons are
Figure 13.3 The four primary mechanisms of camouflage and signaling are based on the four ways in which light can interact with matter. (A) Emission: counterillumination in the squid Abralia veranyi. (Courtesy of Edith Widder) (B) Transmission: transparency in the heteropod mollusk Carinaria sp. (C) Reflection: the reflective chrysalis of a common crow butterfly (Euploea core). (Courtesy of Pon Malar) (D) Absorption: colored patterns due to pigments in the reef stonefish (Synanceia verrucosa).
absorbed before they have a chance to be scattered back upward. The latter effect is much stronger than the former, so although the ground is about / to / as bright as the sky on land, the upward radiance in natural waters is only / to / the value of the downward radiance (Mobley; Bohren and Clothiaux). Because it is this upward radiance, either from the ground or from the watery depths, that illuminates the ventral surface of an animal, its low value relative to the downward background radiance means that animals viewed from below are nearly always seen as black silhouettes. This is true whether the ventral surface is colored black or white (Johnsen). For example, birds, unless they reflect direct sunlight down to the viewer, nearly always appear black when flying far overhead, and scuba divers swimming above are black regardless of the color of their wetsuits (again, unless they reflect direct sunlight to the viewer). Calculations of the contrasts of ventral surfaces in underwater light fields typical of ocean and coastal water have shown that they are essentially independent of the color of the ventral surface, always appearing black (Johnsen; Johnsen and Sosik). This black silhouette is especially problematic for aquatic animals because water is an attenuating medium, and—as discussed in chapter —contrast is attenuated slowly when viewed from below, so the silhouette can be visible from a large distance. Add to this the fact that the upward viewing direction provides by far the most light, and that many deep-sea species have eyes that look directly upward (figure 13.4), and this ventral silhouette becomes a substantial vulnerability. For opaque animals there appear to be only two solutions to this problem. The first, which is discussed in the section on mirroring, is to have no ventral surface,
Figure 13.4 The barreleye spookfish (Opisthoproctus soleatus), one of many deep-sea fish with eyes that look directly upward, presumably to catch the most light and thus improve vision.
replacing it with a keel-like silvery structure that reflects horizontal light downward. The second is for the ventral surface to emit light via bioluminescence (see chapter ). This solution, named counterillumination to parallel the concept of countershading, has been adopted by species from a diverse array of taxa. It is particularly common and well developed in mesopelagic squid, fish, and crustaceans (Young and Roper; Herring; Widder; Johnsen et al.), but counterillumination is also a hypothesized function for the ventral photophores found in shallow-water and benthopelagic taxa such as the midshipman fish Porichthys (Harper and Case), leiognathid fishes (McFall-Ngai and Morin), and sepiolid squid (e.g., Euprymna scolopes) (Jones and Nishiguchi). Interestingly, although counterillumination would also be useful for crepuscular and nocturnal insects, it has never been documented in them, nor has it been hypothesized for any other terrestrial animal. Although the central concept of counterillumination—matching the downwelling light to hide one's silhouette—is straightforward, the strategy is poorly studied and involves more subtleties and limitations than is often appreciated. The first limitation is intensity. Bioluminescence requires ATP at a rate of roughly kcal per mole of photons emitted (for fireflies; kcal/mole for the ostracod Cypridina), and Young and Roper estimated that daytime counterillumination by the squid Histioteuthis heteropsis at a depth of m in clear oceanic water accounted for .–.% of the animal's resting metabolic rate. Because the downward radiance they need to match goes up by a factor of about for every -m decrease in depth, this percentage of resting metabolic rate would rise to –% at m and –% at m. Thus, Young and Roper concluded that daytime counterillumination is limited to animals deeper than – m in clear water.
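Young and Roper's depth limit follows from the exponential decay of downwelling light: to stay matched, photophore output (and hence metabolic cost) must scale with the ambient radiance. The sketch below uses assumed values throughout—a diffuse attenuation coefficient of 0.04 m⁻¹ (plausible for clear oceanic water near 480 nm) and an arbitrary 1% cost at a 700-m reference depth—so only the scaling behavior, not the particular numbers, should be trusted.

```python
import math

# How counterillumination cost scales with depth, assuming downwelling
# radiance decays exponentially: L(z) = L0 * exp(-Kd * z). A matched
# photophore output must scale the same way.
KD = 0.04          # diffuse attenuation coefficient (1/m), assumed
REF_DEPTH = 700.0  # reference depth (m), illustrative
REF_COST = 0.01    # cost at REF_DEPTH as a fraction of resting metabolism, assumed

def relative_cost(depth_m):
    """Counterillumination cost at depth_m, scaled to the reference depth."""
    return REF_COST * math.exp(KD * (REF_DEPTH - depth_m))

for z in (700.0, 650.0, 600.0):
    print(f"{z:.0f} m: ~{100 * relative_cost(z):.1f}% of resting metabolism")
```

Under these assumptions, each 50-m ascent multiplies the required output by e^(0.04×50) ≈ 7.4, which is why daytime counterillumination quickly becomes unaffordable at shallower depths.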
In addition to producing enough light, counterilluminating animals have to match the brightness of the downwelling light. The visual feedback mechanism by which this is accomplished is poorly understood and is likely to be interesting, because many counterilluminators cannot see their own ventral photophores (although some have photophores that emit light into their own eyes). A further complication is that, ideally, the emitted light has to match the background light from a number of viewing directions, not just from directly below. In other words, the animal has to reproduce the angular distribution of the underwater light field. This has been tested only in the deep-sea hatchetfish (Argyropelecus affinis) and the viperfish (Chauliodus sloani), both of which use an ingenious arrangement of mirrors within the photophores to match the shape of the light field (Denton; Denton et al.). Whether they can change this distribution to match different light fields (for example, at shallower depths) is unknown. A third limitation is the acuity of the viewer's visual system. With the exception of a few species (mostly squid), the ventral surface is not evenly lit but is instead studded with a relatively small number of discrete light organs (figure 13.5). Because oceanic water is so clear (see chapter ), the light from these separate organs is not blurred together, even at large distances (see Johnsen et al.). Therefore, viewers with acute vision can break the camouflage. Because of spatial summation (see chapter ), acute vision is probably rare in the nocturnal and deep waters where counterillumination occurs, but it may nevertheless occur and be used to break this form of camouflage. The final limitation is that the counterilluminator ideally needs to match the spectrum of the downwelling light. This is advantageous even though many animals of the deep sea have monochromatic visual systems.
If the spectrum of the bioluminescence does not match the spectrum of the downwelling light, then the perceived brightness may not match either. This is because even monochromatic visual systems vary in their spectral sensitivity, and so their perception of the relative brightness of the background and the photophores also varies (see chapter ) unless the underlying spectra themselves match. In addition, some deep-sea animals have yellow filters in their lenses that can enhance the difference between the background and the emitted light. Indeed, the spectrum of counterillumination is often broader than that of the downwelling light, and Muntz showed that the filters in the eyes of many mesopelagic fish are well suited to maximizing this difference. A final issue is that the spectrum of the downwelling light is not constant—it is broader near the surface and in coastal waters than it is in the depths of the open
Figure 13.5 (Left) The euphausiid shrimp Meganyctiphanes norvegica. (Right) The deep-sea hatchetfish Argyropelecus aculeatus. Each animal is directly above an image of the emissions of its ventral photophores. Note their relatively wide spacing compared to that found in the squid Abralia (figure 13.3). (Courtesy of Edith A. Widder)
Signals and Camouflage
ocean. Thus, an animal that moves from one water type to another, for example via diel vertical migration, needs to adjust the spectrum of its counterillumination. This has been studied in only a handful of cephalopod genera (Abraliopsis and Abralia), which were found to employ a clever solution (Young and Mencher). When tested in cold water, these squid emitted blue light matching that found in the deep sea. When tested in warm water, however, they emitted light with a broader spectrum, better matching that found near the surface. The authors concluded that the squid were using water temperature as a proxy for depth and adjusting the emission of their ventral photophores accordingly. In sum, counterillumination is a marvelously developed camouflage mechanism. However, much of our knowledge comes from only a small fraction of the species that employ it, and many central questions remain unanswered.
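The spectral-matching argument can be made concrete with a toy calculation. A monochromatic viewer's perceived brightness is the radiance spectrum weighted by its spectral sensitivity; if a photophore's spectrum is broader than the background's, an intensity match tuned for one eye fails for an eye with a different sensitivity peak (or a yellow lens filter). All spectra and sensitivities below are illustrative Gaussians, not measured data.

```python
import math

# Perceived brightness = sum of radiance x spectral sensitivity over wavelength.
# Two spectra matched for one monochromatic eye can mismatch for another.

def gaussian(lam, peak, width):
    return math.exp(-0.5 * ((lam - peak) / width) ** 2)

def perceived(spectrum, sensitivity_peak, width=40.0):
    """Brightness seen by a monochromat with a Gaussian sensitivity curve."""
    return sum(spectrum(l) * gaussian(l, sensitivity_peak, width)
               for l in range(400, 601))

background = lambda l: gaussian(l, 480, 15)  # narrow deep-sea downwelling light
photophore = lambda l: gaussian(l, 480, 40)  # broader bioluminescent emission

# Scale the photophore so that a 480-nm-peaked eye sees a perfect match:
scale = perceived(background, 480) / perceived(photophore, 480)

# A viewer whose sensitivity peaks at 520 nm (e.g., behind a yellow filter)
# sees the photophore as brighter than the background -- camouflage broken:
ratio_480 = scale * perceived(photophore, 480) / perceived(background, 480)
ratio_520 = scale * perceived(photophore, 520) / perceived(background, 520)
print(f"480-nm viewer: photophore/background = {ratio_480:.2f}")
print(f"520-nm viewer: photophore/background = {ratio_520:.2f}")
```

The mismatch grows with the difference between the emission and background bandwidths, which is consistent with Muntz's point that yellow lens filters are well placed to exploit overly broad counterillumination.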
Signaling via Emitted Light—More Bioluminescence
With the exception of its use in counterillumination, illumination, and certain startle responses, bioluminescence is used as a signal. Its hypothesized signaling functions are diverse and range from luring to warning to sexual signaling (figure 13.6). Some of these functions, such as sexual signaling in fireflies, are well established (reviewed by Lewis and Cratsley). Others, such as luring in deep-sea anglerfish, are considered intuitively obvious but have not been experimentally verified. Still others, such as the burglar-alarm hypothesis—in which light is emitted to attract higher-order predators that then deter (or eat) the animal preying on the bioluminescing organism—are one step above fantasy (though see Mensinger and Case for an experimental test). The primary difficulty in verifying most hypothesized functions is that most bioluminescing organisms are deep-sea species that are not amenable to behavioral assays. Thus, we know a great deal about sexual signaling in fireflies and ostracods but next to nothing about the possible signaling functions of the complex photophores found on many deep-sea squid. It is known, however, that bioluminescence is a highly efficient optical signal. The primary reason for this is that it is nearly always emitted against a black background, which has two advantages. First, the contrast of the signal is effectively infinite (see chapter ), so the distance at which it can be detected is limited only by the absolute sensitivity of the viewer (see chapter ). This is why stars can be seen at night but a sparrow flying at m during the day cannot, even though the former are inconceivably more distant than the latter. The light from photophores is, of course, much dimmer than that of stars, but even in an attenuating medium such as water, a typical flash can be seen by a typical deep-sea fish at distances up to m (Warrant and Locket).
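That detection range is set by absolute sensitivity rather than contrast can be illustrated with a point-source calculation: the photons reaching a pupil from a flash fall off with geometric spreading and attenuation together, N(r) = E·A/(4πr²)·exp(−cr). Every parameter value below is an assumption chosen for illustration (flash output, pupil size, attenuation coefficient, threshold), not a measurement from the literature.

```python
import math

# Detection range of a bioluminescent flash seen against a black background.
# N(r) = E * A_pupil / (4*pi*r^2) * exp(-c*r), with all values assumed.
E = 1e10          # photons emitted per flash (assumed)
PUPIL_D = 0.008   # pupil diameter in m (assumed; a large deep-sea eye)
C = 0.05          # beam attenuation coefficient (1/m), clear deep water (assumed)
THRESHOLD = 10.0  # photons at the pupil needed for detection (assumed)

def photons_at(r):
    """Photons intercepted by the pupil from one flash at range r (m)."""
    area = math.pi * (PUPIL_D / 2) ** 2
    return E * area / (4 * math.pi * r ** 2) * math.exp(-C * r)

# Step outward until the flash drops below the detection threshold:
r = 1.0
while photons_at(r) >= THRESHOLD:
    r += 0.5
print(f"approximate detection range: {r:.1f} m")
```

With these assumptions the range comes out at a few tens of meters; because the background contributes nothing, lowering the viewer's threshold (more spatial summation, a bigger pupil) directly extends the range.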
The second advantage of emitting light in the dark is that the background is simple. Even the complex geometry of a tropical forest is simplified in the dark, allowing firefly signals to be seen from a substantial distance as long as no structure lies directly between the beetle and the viewer (figure 13.7). In contrast, the yellow light organ of a firefly sitting in the canopy during the day would likely remain undetectable at distances greater than a meter, and likely closer still for animals with less acute vision than humans. Another advantage of bioluminescent signals is that they can be turned on and off. Certain signals based on pigmentary and structural colors can be covered and uncovered, but this requires movement and overlapping structures. In contrast, the ease
[Figure 13.6 diagram: functions of bioluminescence.
Defense—startle (dinoflagellates, squid, stern-chaser myctophids); counterillumination (many: crustaceans, fish, squid); misdirection/smoke screen (many: crustaceans, polychaetes, scyphozoans, chaetognaths, squid, tube-shoulder fishes, ctenophores, siphonophores, larvaceans?); distractive body parts (Octopoteuthis squid, brittle stars, polychaetes, siphonophores); burglar alarm (dinoflagellates, jellies, others?); sacrificial tag (pelagic sea cucumbers, jellies, polychaetes); warning coloration/deter settlers (jellies, brittle stars? tube worms, clams).
Offense—lure prey or attract hosts for bacteria (anglerfishes, siphonophores, cookie-cutter shark, squid?); lure with external light, evaluate habitat? (sperm whale? megamouth shark?); stun or confuse prey (squid, headlamp myctophid?); illuminate prey (flashlight fish, dragonfishes).
Mate attraction/recognition, swarming cue (ostracods, Japetella octopus? lanternfish, flashlight fish, anglerfish? syllid polychaetes, others?).]
Figure 13.6 Various established and hypothesized functions of bioluminescence in marine species. Those that are not signals are left in gray. (From Haddock et al., 2010)
Figure 13.7 Fireflies at night. Note that the dark and simple background makes the signal easy to discern. (Courtesy of Judd Patterson)
with which most bioluminescence can be activated and deactivated not only helps ameliorate the conflict between sending a conspicuous signal and remaining safe from visual predation but also allows the creation of signals patterned in both time and space. These patterns seem to be used primarily for courtship displays. The flashing courtship displays of fireflies are well studied and even include an example of aggressive mimicry, in which Photuris females imitate the flash patterns of Photinus females in order to attract and eat Photinus males (Lloyd). In non-firefly species, less is known. The best-studied bioluminescent courtship displays in marine organisms are found in the ostracods (Morin; Morin and Cohen; Rivers and Morin). The males of these small crustaceans essentially vomit small packets of the components of the bioluminescent reaction into the water as they swim, creating species-specific displays in both time and space (figure 13.8). There is even documentation of nonbioluminescent sneaker males waiting near the displays to mate with the females that are attracted to the light (Rivers and Morin). Complex temporal patterns of bioluminescence are also observed in species that do not appear to use bioluminescence for courtship. For example, in many ctenophores and pennatulaceans (sea pens), the light is emitted in slow waves down the length of the body, and certain bioluminescent corals and pelagic octopods appear to twinkle as large numbers of photophores turn on and off asynchronously. The most startling display, however, is found in coronate medusae of the genus Atolla. Repeated mechanical stimulation of these jellyfish leads to a rotating "pinwheel" display in which photophores are activated and deactivated in turn (figure 13.9). It is thought that these highly visible displays have evolved to attract higher-order predators via the process
Figure 13.8 The bioluminescent courtship displays of various species of ostracod. The arrows denote the direction in which the individual puffs of light were produced. (Modified from Cohen and Morin, 2003, by Todd Oakley)
Figure 13.9 (Left) The coronate medusa Atolla wyvillei. (Right) One moment in the animal’s bioluminescent “pinwheel” display. (Courtesy of Edith Widder)
described earlier, but this has not yet been demonstrated (although, as mentioned in chapter , an Atolla mimic built from blue LEDs was used to attract a giant squid). The primary disadvantage of bioluminescence relative to other forms of signals is of course that it requires energy and reactants. Pigmentary and structural colors also require energy (and possibly specific dietary requirements) to construct and maintain, but bioluminescence is the only form of signal (and camouflage) that consumes energy and reactants as it is being used. The amount of required energy is far lower than might be expected—as mentioned above, only about – kcal/mole of photons (to provide perspective, it takes about minutes for a square meter of sidewalk to intercept a mole of visible solar photons on a sunny day at noon). Even the brightest and largest bioluminescent displays only emit at most - moles of photons per second (Widder, ) and thus consume very little energy (though even this small amount may be significant for small deep-sea animals with low metabolic rates). The real limiting factors are the reactants, in particular the substrate known as the
luciferin, which is dietarily derived (see Haddock et al., ). Because each photon produced requires the oxidation of one molecule of luciferin, a bright signal uses up stores of this substrate quickly. The other disadvantage of bioluminescence as a signal is connected with its central advantage—it is highly visible. In dark featureless habitats, any source of light may be investigated. Thus, signaling, particularly for extended periods of time, is hazardous. Indeed, surveys of spontaneous bioluminescence in the ocean find that it is nearly nonexistent. Interestingly, the converse is found in terrestrial habitats, where a summer field can be filled with the flashing of fireflies (figure .).
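The modest energetic cost noted above can be checked from first principles: the energy of one mole (an "einstein") of photons is N_A·hc/λ. A minimal sketch; the 480-nm wavelength is our assumption for a typical blue bioluminescence peak, not a value from the text:

```python
# Energy of one mole of photons: E = N_A * h * c / wavelength.
AVOGADRO = 6.022e23    # photons per mole
PLANCK = 6.626e-34     # J s
LIGHT_SPEED = 2.998e8  # m/s

def kcal_per_mole_photons(wavelength_nm):
    """Energy, in kcal, of one mole of photons at the given wavelength."""
    energy_joules = AVOGADRO * PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9)
    return energy_joules / 4184.0  # 1 kcal = 4184 J

# ~480 nm assumed as a typical blue bioluminescence peak
print(round(kcal_per_mole_photons(480), 1))  # -> 59.6
```

Roughly 60 kcal per mole of blue photons, so a display emitting only a tiny fraction of a mole of photons per second does indeed consume very little energy, consistent with the point made above.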
Camouflage via Transmitted Light—Transparency

Transparency is perhaps the most conceptually straightforward camouflage mechanism—the animal matches the background because the background is seen through the animal. It is also a surprisingly common tactic, with significant amounts of transparent tissue found in representatives from nearly all phyla (figure .). As with most camouflage mechanisms, transparency appears to have evolved multiple times, in this case almost entirely within marine pelagic species. The majority of transparent species are found in the following groups, all of which are almost exclusively marine and pelagic: cubozoans, hydromedusae, ctenophores, hyperiid amphipods, tomopterid polychaetes, pterotracheid and carinarid heteropods, pseudothecosomatous pteropods, cranchiid squid, thaliaceans, and chaetognaths (figure .). In contrast, transparency is found in only a few freshwater pelagic and marine benthic species, and on land is limited to the wings of insects (reviewed by Johnsen, ). The main hypothesized reasons for the lack of transparency in terrestrial species are (1) the larger refractive index difference between tissue and air, which leads to surface reflections, (2) the greater need for support structures on land, which are often mineralized and thus opaque, and (3) the greater need for UV-protective pigments on land, which may also absorb in the visible range. The lack of transparency in benthic species is less well understood but may be related to the delicate nature of many transparent animals and thus their greater likelihood of being damaged by contact with the substrate. Another possibility is that benthic transparent animals may be revealed by the shadows that result from the lensing of light by their tissues. Although it is in theory a perfect camouflage solution, transparency suffers from a number of limitations. The foremost of these is that it is difficult to achieve.
Unlike other camouflage mechanisms that only require a change of the animal’s surface, transparency involves the entire volume of the animal, throughout which the animal must minimize both the absorption and scattering of light. Minimizing absorption is relatively simple, since few biological molecules absorb visible light unless they have evolved to do so (hemoglobin and myoglobin being major exceptions). Minimizing light scattering is far more difficult, because it requires that the animal have a nearly constant density on all size scales greater than half a wavelength of light (e.g., ~ nm for visible light: see Johnsen and Widder, ). Given that many intracellular components are larger than this and that most animals of any significant size have nerves, internal vasculature, connective tissue, and digestive systems, constant density is a challenging requirement. Not all the ways in which transparent animals solve this problem are yet known, but potential methods include extreme flattening (either of the whole animal or of the cellular tissue alone, which then covers a larger
Figure 13.10 Assemblage of transparent animals: (A) Amphogona apicata (hydromedusa), (B) Corolla calceola (pteropod), (C) Iasis zonaria (salp), (D) Vogtia sp. (siphonophore), (E) Naiades cantrainii (polychaete), (F) Japetella diaphana (pelagic octopus), (G) Beroë forskalii (ctenophore), (H) Greta oto (nymphalid butterfly), (I) Bathochordaeus charon (larvacean), (J) Periclimenes holthuisi (anemone shrimp), (K) Leptocephalus eel larva, (L) Phylliroë bucephala (pelagic nudibranch). (Images courtesy of Laurence Madin, Steve Haddock, Jeff Jeffords, and David Tiller)
Figure 13.11 Transparency and pelagic existence mapped onto a phylogeny of the major phyla in the Animalia. Open square indicates pelagic existence is rare within adults of the group; filled square indicates pelagic existence is common. Open circle indicates transparency is rare within adults of the group; filled circle indicates transparency is common. (See Johnsen, 2001, for details)
transparent tissue such as mesoglea), using clearing agents to increase the density of the cytoplasm and interstitial fluids to match that of protein, adjusting the size and spacing of organelles and other ultrastructural components, and having highly hydrated tissue. Most of these solutions limit the physiology and complexity of the affected tissues, which is at least one reason why transparent species tend to be slower and less reactive than related opaque taxa (reviewed by Johnsen, ).
Other limitations of transparency as a form of camouflage are visual in nature. First, even transparent substances reflect and refract light if they have a different refractive index than their surroundings. Because it is impossible for animals to have the same refractive index as water (or air), they can potentially reflect and refract light toward a viewer and thus be detected. This effect would be most pronounced in air, where the index difference is greatest, and—fascinatingly enough—some of the large transparent wings of certain moths have antireflection coatings, presumably to reduce the chances of being detected via reflected light (Yoshida et al., ). In water, the problem is less severe but still a concern in two habitats. Near the surface the underwater light is still direct enough to create significant reflections from transparent animals. Many tissues, such as the comb rows of ctenophores, become obvious and even iridescent when viewed near the surface on a sunny day. The other problematic habitat begins at a depth of about m (or at any depth at night), where many fish and crustaceans begin to use bioluminescent searchlights mounted under their eyes. Because the background water reflects essentially none of this light, even the smallest reflection from a transparent animal can be enough to reveal it. In response, many transparent species develop a layer of pigment that strongly absorbs the blue bioluminescence of these searchlights (reviewed by Johnsen, ). This change can be seen between closely related species that inhabit different depths or within an individual over its life history or during diel vertical migration. Certain cephalopods, ever the masters of dynamic camouflage, can rapidly alternate between transparency and heavy pigmentation as the light conditions alternate between bright directed light, characteristic of searchlights, and dim diffuse light, characteristic of the ambient illumination at depth (Zylinski and Johnsen, ; figure .). 
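The surface reflections discussed above, stronger in air than in water, follow directly from the Fresnel equation for normal incidence, R = ((n₁ − n₂)/(n₁ + n₂))². A rough sketch; the tissue index (1.40) and seawater index (1.34) are generic assumed values, not measurements from the text:

```python
def fresnel_reflectance(n_medium, n_tissue):
    """Fraction of light reflected at normal incidence from an index step."""
    return ((n_medium - n_tissue) / (n_medium + n_tissue)) ** 2

N_AIR, N_SEAWATER, N_TISSUE = 1.00, 1.34, 1.40  # assumed, generic indices

r_air = fresnel_reflectance(N_AIR, N_TISSUE)
r_sea = fresnel_reflectance(N_SEAWATER, N_TISSUE)
print(f"air: {r_air:.3%}  seawater: {r_sea:.3%}  ratio: {r_air / r_sea:.0f}x")
```

With these assumed indices, a tissue surface reflects roughly 3% of incident light in air but only about 0.05% in seawater, a difference of more than fifty-fold, which is consistent with antireflection coatings being worthwhile on transparent moth wings but transparency being far easier to achieve underwater.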
The other limitation of transparency camouflage is that it can—in theory at least— be broken by viewers with UV or polarization sensitivity. Certain transparent animals are vulnerable to detection at UV wavelengths because they absorb light in this portion of the spectrum. These visibly transparent but UV-opaque species are found in oceanic waters at depths where significant UV radiation still penetrates, so it is hypothesized that the lower transparency is due to the presence of UV-protective compounds (Johnsen and Widder, ; figure .). Because many zooplanktivorous fish in these same near-surface habitats have UV-sensitive visual pigments, this UV opacity potentially increases the vulnerability of the plankton to visually mediated predation. Johnsen and Widder () analyzed this conflict between UV damage and UV predation in multiple species of transparent zooplankton and found that the animals minimized their risk of predation by primarily restricting their opacity to the UVB portion of the spectrum (– nm), where UV-visual pigments are less sensitive and there is less light overall. Transparency camouflage may also be broken by species with polarization sensitivity because certain transparent tissues are birefringent (see chapter ), and the light in clear oceanic waters is polarized, especially in the horizontal viewing direction. Because birefringent materials change the polarization of light transmitted through them, they can become visible against a polarized background. Thus, it has been hypothesized that one function for the polarization sensitivity found in crustaceans, cephalopods, and certain fish is to detect transparent prey. This hypothesis is especially attractive because birefringent organic tissues (e.g., muscle, connective tissue) tend to be protein-rich and thus nutritious. 
It has also been shown that certain squid preferentially strike at birefringent versus nonbirefringent transparent beads when viewed against a polarizing background (Shashar et al., ).
Figure 13.12 The bolitaenid octopus Japetella heathi in its transparent and pigmented forms. It rapidly switches from the former form of camouflage to the latter if illuminated by a directed beam of light. (Courtesy of Sarah Zylinski)
Johnsen et al. () used lab-based and in situ polarization imaging to test whether tissue birefringence increased visibility. Many groups, particularly the cephalopods, pelagic snails, salps, and ctenophores, were found to have ciliary, muscular, or connective tissues with striking birefringence when viewed between crossed polarizers (figure .). In situ polarization imagery of the same species showed that although the degree of underwater polarization was fairly high (~% in horizontal lines of sight), tissue birefringence played little to no role in increasing visibility. This is most likely due to the low radiance of the horizontal background light compared to the downwelling irradiance. In fact, the dominant radiance and polarization contrasts of the
(Graph axes: percent transparency, 0–50%; wavelength, 280–480 nm.)
Figure 13.13 (A) The pelagic tunicate Salpa cylindrica. (Courtesy of Edith Widder) (B) Its transmittance of visible and ultraviolet radiation. Note that, despite being highly transparent, it is nearly opaque in the UV portion of the spectrum, especially the UVB portion (280–320 nm).
animals turned out to be due to unpolarized downwelling light scattered from high-index tissues (e.g., comb rows on ctenophores) viewed against the darker and polarized horizontal background light. This is a classic example of how lab-based imagery can be misleading because the illumination does not match what is found in nature.
Signaling via Transmitted Light—Transparency as a Signal Component

Because transparent tissue is by definition inconspicuous, it cannot directly be used for signaling. However, as with the cookie-cutter shark described at the start of this
Figure 13.14 Selected images of transparent plankton viewed between parallel polarizer filters (top panels) and crossed polarizers (bottom panels). (Left) A chain of the salp Cyclosalpa floridana. (Middle) The lobate ctenophore Bolinopsis sp. (Right) The salp Salpa cylindrica (solitary form). Note that these three species all contain highly birefringent tissues, as evidenced by their high visibility when viewed between crossed polarizers. (Courtesy of Edith Widder)
chapter, a partial camouflage strategy can also be used as a signal. In the case of transparency, an animal can use this strategy to hide the bulk of its form while leaving a few portions visible. These remaining visible tissues are then often brightly colored, creating a signal that is isolated from the animal producing it. Possible examples of this combination of transparency camouflage and color signals can be found among the anemone shrimp (genus Periclimenes). These animals, despite being benthic, are extraordinarily transparent. However, they also have distinct and conspicuous color patterns. The function of the markings is not known, but both conspecific and interspecific signaling are possible because anemone shrimp are “cleaners” (species that enjoy a symbiotic relationship with large fish that involves eating their ectoparasites). Most cleaner species are highly colorful and some perform dances that are thought to be part of their communication with client fish (e.g., Chapuis and Bshary, ). Thus, conspicuous color patterns on a transparent body may be a way for anemone shrimp to signal to client fish without being recognizable as a prey item. Transparency has also been shown to be a component of a luring system in siphonophores (e.g., Purcell, ; Haddock et al., ). Certain siphonophores (e.g., Agalma okeni) have highly transparent bodies with pigmentation only on the tips of the stinging tentacles, which appear to have evolved to mimic small prey items such
Figure 13.15 The anemone cleaner shrimp Periclimenes holthuisi. (Courtesy of Jeff Jeffords)
Figure 13.16 Strongly colored lures on the tips of the feeding tentacles of the siphonophore Resomia ornicephala, an example of aggressive mimicry. (Courtesy of Steve Haddock)
as copepods and larval fish (figure .). We have seen these animals while scuba diving, and they appear as innocuous schools of small animals. The main bulk of the animal is invisible except at close range, and then only by animals with acute vision.
Camouflage via Reflected Light—Mirroring

Another way for an animal to resemble its surroundings is for it to mirror them. This works best in relatively featureless environments that also exhibit some degree of symmetry. So, as with transparency, this camouflage tactic is primarily found
in pelagic species but has been hypothesized for the metallic chrysalises of certain butterflies (see figure .). As discussed in chapter , the underwater light field becomes increasingly symmetrical around the vertical axis as the sun (or moon) nears the zenith and also with increasing depth regardless of the distribution of light in the sky. In this situation, a vertical mirror can be highly cryptic because it reflects light from a region of the underwater light field that has roughly the same spectral radiance as the region that is directly behind the animal, thus significantly lowering the contrast of the animal (figure .). This camouflage strategy is common in both freshwater and marine habitats but has an odd phylogenetic distribution. It is common in multiple families of bony fish (Carangidae, Characidae, Clupeidae, Megalopidae, Myctophidae, Sternoptychidae, and many others) but nearly absent in all other species—with the exception of the vertically oriented guts of certain transparent heteropods, squid, and pelagic octopuses. This does not appear to be due to an inability of other species to make the required reflective structures because multiple taxa use mirrors of similar optical design to camouflage individual organs (e.g., eyes and guts in cephalopods, ventral surfaces of certain pontellid copepods, guts of heteropods). Also, many taxa use mirrors in their eyes, either as tapeta or focusing elements, and even more use colored mirrors as signals (described below). One possible explanation is that bony fish are one of the few aquatic taxa that both are laterally flattened and maintain a vertical posture in the water. Other species are either more cylindrical (e.g., sharks, cephalopods) or, if
Figure 13.17 (Top) Reflection from vertical mirrors underwater (black lines) can look just like the view behind the animal because the light field is symmetric around the vertical axis. (Bottom) A vertical mirror photographed at 10-m depth shows that mirroring also works as crypsis in shallow waters if the sun is high in the sky.
Figure 13.18 (Left) The snub-nosed dart (Trachinotus blochii) well camouflaged by reflection from its silvery sides. (Right) The same fish a moment later, after it has tilted slightly, reflecting downwelling light to the viewer and becoming more visible.
flattened, do not have a fixed orientation (e.g., cestid ctenophores, certain crustacean larvae). The importance of maintaining a vertical orientation can be seen in figure .. Even a small tilt from the vertical can make the animal more apparent. However, fish do not have perfectly flat lateral surfaces either, so orientation of the reflecting structures requires some care. As was first shown by Eric Denton in the s (reviewed by Denton, ), the mirrors in fish (which are structural reflectors composed of thin layers of high-index guanine and low-index cytoplasm) do not have a fixed orientation relative to the scales but instead are oriented within each scale so that they are roughly vertical (figure .). This allows the fish to act as a flat vertical mirror even with a curved lateral surface. In addition, the tissue between the reflectors on the dorsal surface of the fish is heavily pigmented, allowing the fish to match both the dim upwelling light when viewed from above and the brighter horizontal light when viewed from the side. A further refinement found in some fish is that the
Figure 13.19 (A) The common bleak fish Alburnus alburnus. (Courtesy of Piet Spaans) (B) Cross section of the fish showing the vertical orientation of most of the reflecting structures (red) in the scales. (From Denton, 1970)
reflectors are not perfectly vertical but angled slightly upward. This allows them to reflect light from a slightly higher angle of elevation. This incident light is slightly brighter than the background behind the fish, but the reflected light ends up matching because the reflectors do not reflect % of the light. In other words, the slight tilt of the reflectors compensates for the less-than-perfect efficiency of the reflectors. A final adjustment found in certain deep-sea fish is that the reflectors are optimized to reflect blue light from above to viewers that are below the fish. This is optimal because deep-sea light is primarily blue (see chapter ). Because the color of the light reflected from structural reflectors of this sort depends on the angle of the incident light, these fish appear bronze when looked at in a dish, which can be misleading. In general, though, the structural reflectors on mirrored fish reflect nearly all wavelengths equally and thus appear silvery rather than colored. In essence, these animals have made a biological analogue of polished metal. The details of this are reviewed in Johnsen (), but briefly, most seem to use one of two mechanisms to achieve this. Certain fish, such as herring, have three banks of reflectors that each reflect a different portion of the spectrum and together act to reflect the whole visible portion (Denton and Nicol, a,b). Other species appear to have reflectors composed of many layers with differing thicknesses (McKenzie et al., ). These, termed chaotic reflectors, again strongly reflect the entire visible spectrum. Regardless of the mechanism, mirroring is an excellent form of camouflage and has been shown to be more robust to changes in the optical conditions (e.g., water color, time of day, depth, viewing direction) than camouflage based on pigmentary colors (Johnsen and Sosik, ). 
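The guanine–cytoplasm stacks described above act as quarter-wave interference mirrors, and their peak reflectance can be estimated with the standard thin-film admittance recursion (each quarter-wave layer transforms the optical admittance Y into n²/Y). This sketch uses assumed values: guanine n ≈ 1.83, cytoplasm n ≈ 1.33, with water outside and generic tissue (n ≈ 1.40) beneath:

```python
def stack_peak_reflectance(n_hi, n_lo, n_pairs, n_medium, n_substrate):
    """Normal-incidence peak reflectance of an ideal quarter-wave stack."""
    y = n_substrate
    for _ in range(n_pairs):  # build the stack outward from the substrate
        y = n_lo ** 2 / y     # cytoplasm (low-index) layer
        y = n_hi ** 2 / y     # guanine (high-index) layer
    return ((n_medium - y) / (n_medium + y)) ** 2

GUANINE, CYTOPLASM = 1.83, 1.33  # assumed refractive indices
WATER, TISSUE = 1.33, 1.40       # assumed surround and substrate

for pairs in (5, 10):
    print(pairs, round(stack_peak_reflectance(GUANINE, CYTOPLASM, pairs, WATER, TISSUE), 3))
```

Under these assumptions, five layer pairs already reflect about 85% of light at the tuned wavelength and ten pairs over 99%; broadband silveriness then comes from combining stacks tuned to different parts of the spectrum (the three banks of reflectors) or from chaotic layer thicknesses, as described above. The quarter-wave thicknesses themselves follow d = λ/(4n): roughly 68 nm of guanine and 94 nm of cytoplasm for a 500-nm peak.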
The primary disadvantages—aside from the need to maintain a vertical posture and the reliance on a symmetric light field—are that the camouflage can potentially be broken by visual systems sensitive to UV wavelengths and can definitely be broken by visual systems that can detect the polarization of light. The former occurs when the mirrors do not reflect UV radiation (in habitats where UV radiation is present), and the latter because mirrors change the polarization of the incident light as it is reflected. The potential for UV vision to break mirror camouflage has not yet been explored in detail. This is because most measurements of the reflectance of mirrored species have not extended into the ultraviolet and also because no behavioral tests have been performed. However, Shashar et al. () have examined the potential for mirror camouflage in fish to be broken by animals with polarization vision, and recent in situ polarization imagery and modeling by three of the authors of this book (Johnsen, Cronin, and Marshall) have shown that it would be difficult for a mirrored fish to be cryptic to polarization-sensitive viewers and that many species of pelagic silvery fish from the Great Barrier Reef are highly visible when viewed with polarization-sensitive cameras (e.g., figure .).
Signaling via Reflected Light—Structural Colors

Reflective structures are also used by a myriad of taxa for signals. These are referred to as structural colors and are particularly diverse and well developed in insects (especially beetles and butterflies), birds (e.g., figure .), mollusks (especially cephalopods, but also a few gastropods), fish, and certain primates (reviewed by Kinoshita, ). They are also found in most ctenophores and many annelids, but these structures are not usually thought to serve a visual function. Interestingly, structural colors
Figure 13.20 (Left) Intensity image of the bluefin trevally (Caranx melampygus), showing that it matches the background light well. (Right) Image of the same fish showing the degree of polarization of the fish and the background. The color bar to the right shows the degree of linear polarization, with blue being 0% and white being 100%. Note that the polarization of the light reflected from the fish is far lower than the polarization of the background.
Figure 13.21 Structural colors in the plumage of the male of the Indian peafowl (Pavo cristatus). (© Jebulon/Wikimedia Commons)
appear to always be colored. In other words, they reflect only a portion of the visual spectrum, usually a small portion. To our knowledge, broadband mirror-like reflectance has not been proven to serve a signaling function in any animal. In fact, the narrow spectrum of nearly all structural colors, when compared to those based on absorption by pigments, is one of their primary advantages. As we discussed in chapter , the color of an object depends both on its reflectance and on
the color of the light illuminating it. More precisely, the radiance is proportional to the product of the illuminating spectrum and spectral reflectance. Color constancy (see chapter ), which appears to be found in many taxa, helps moderate, but does not completely eliminate, the effects of changing illumination on the perceived color of an object. And, as we also discussed in chapter , certain habitats (e.g., aquatic and forest) have particularly variable illumination, and the illumination in all terrestrial habitats changes dramatically during crepuscular periods. Thus, colors based on pigments, with their broad reflectance spectra, can look quite different under different illuminants (e.g., figure .). In contrast, the narrow spectra of light reflected from most structural colors do not change shape as the illuminating spectra change shape. The reflected radiance may change in intensity as the illumination changes, but hue will remain fairly constant. This relative stability of structural colors under varying illumination may explain why they are often found in optically dynamic habitats such as forests. Offsetting this ability to create a highly saturated signal that is constant under varying illumination is the fact that both the intensity and hue of many structural colors depend on the relative orientations of the illuminant, the viewer, and the reflective structures. This is because most structural colors are mediated by the stacks of alternating refractive indices described above, and thus, the intensity and hue of the reflected light are orientation dependent. In certain cases this orientation dependence can be advantageous because motion of the reflective structure can create a flashing display, as has been hypothesized for various species of butterflies (see Pirih et al., ). In many other cases, though, it may be a distinct limitation. 
To address this, some species have evolved two- and three-dimensional structures that strongly reflect light of the same hue from all directions. In cases where these structures are highly periodic, they are known as photonic crystals (e.g., the colorful spines of the polychaete Aphrodite; reviewed by Vukusic and Sambles, ). However, even less periodic structures can reflect the same color over a wide range of viewing angles, although the color tends to be less pure (e.g., the wings of the Morpho butterfly; reviewed by Kinoshita and Yoshioka, ).
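The illumination stability argued for above can be shown with a toy calculation: reflected radiance is the wavelength-by-wavelength product of illuminant and reflectance, so a narrow reflectance band pins the radiance peak while a broad pigment-like reflectance lets it wander. All spectra below are invented purely for illustration:

```python
# Toy illustration: radiance(w) = illuminant(w) * reflectance(w).
wavelengths = list(range(400, 701, 10))  # nm

def peak_wavelength(reflectance, illuminant):
    """Wavelength at which the reflected radiance peaks."""
    radiance = [r * i for r, i in zip(reflectance, illuminant)]
    return wavelengths[radiance.index(max(radiance))]

# Narrow "structural" band at 470-490 nm; broad "pigmentary" reflectance
# rising gently toward long wavelengths. Both are made up.
narrow = [1.0 if 470 <= w <= 490 else 0.05 for w in wavelengths]
broad = [0.2 + 0.6 * (w - 400) / 300 for w in wavelengths]

bluish_light = [1.5 - (w - 400) / 300 for w in wavelengths]   # short-wave rich
reddish_light = [0.5 + (w - 400) / 300 for w in wavelengths]  # long-wave rich

for light in (bluish_light, reddish_light):
    print(peak_wavelength(narrow, light), peak_wavelength(broad, light))
```

Under both invented illuminants the narrow reflector’s radiance peak stays within its 470–490 nm band, whereas the broad reflector’s peak slides by well over 100 nm, which is the essence of why structural colors hold their hue under changing illumination.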
Camouflage via Absorbed Light—Color Patterns

By far the best-studied camouflage mechanism is the use of pigments—molecules that selectively absorb light of certain wavelengths. Camouflage involving pigments ranges from the simple red or black pigmentation of deep-sea organisms that has evolved to foil bioluminescent searchlights, to the stark black-and-white countershading of penguins and killer whales, to the highly complex and, in some cases, dynamic patterns of certain vertebrates and cephalopods. Even an optical environment as simple as the pelagic ocean contains a remarkable variety of pigmentation strategies (figure .). In more complex habitats, such as forests and coral reefs, the variety of colors and patterns is dazzling. Entire books can and have been written about camouflage via pigmentation (e.g., Thayer and Thayer, ; Cott, ; Fox and Vevers, ; Ruxton et al., ; Stevens and Merilaita, ) in addition to an extensive record in scientific journals. We cannot possibly hope to review this entire subject within this chapter, and so, as before, we confine ourselves to a discussion of the basic principles and the relative advantages and disadvantages of pigmented camouflage versus other forms.
Figure 13.22 The coloration of oceanic species as a function of depth. (From Johnsen, 2007)
First and foremost, it is important to remember that pigments can only absorb light. It is only our human preference for things we can see over those that we cannot that leads us to name pigments by the colors they do not absorb rather than by those they do. This has three major implications. First, it can cause us to misinterpret certain situations. For example, many deep-sea benthic species display a pale-orange coloration that is much less saturated than the deep-red coloration found in the animals swimming just above them (figure .). The function of the pigment responsible (usually a carotenoid), however, is not to give the animal an orange appearance but to absorb enough blue light so that it matches the reflectance of the substrate for the wavelengths of bioluminescence and the remaining solar radiation. For the animals swimming above in the water column, the reflectance of the surrounding water
Figure 13.23 (Left) The red-eye gaper (Chaunax stigmaeus) photographed at depth under broad-spectrum lighting. The blue channel of the same image (Right) shows how the animal would appear under the natural blue illumination found at that depth. Note that the reflectance of the fish at blue wavelengths (lowered by the presence of the red pigment) matches that of the surrounding substrate. Note also how apparent the unpigmented ventral fins are under blue light.
is very low for all wavelengths, so a higher concentration of blue-absorbing “red” pigment is required. Thus, deep-sea pelagic animals are often deep red, and deep-sea benthic animals are often pale orange because they are regulating light absorption at blue wavelengths to match the background. The second major implication of the fact that pigments can only absorb light is that any pigment must have a reflective layer underneath it or its color will not be visible. The only exceptions to this are transparent animals viewed under transmitted light, for example, certain colored jellyfish viewed against the downwelling light in shallow waters. All remaining animals require some sort of underlying reflective layer. In certain cases the reflection is due to a dedicated structure that efficiently reflects much of the incident light. Examples of these are the leucophores of cephalopods (Hanlon and Messenger, ) and the highly scattering nanospheres on the white wings of certain Pierid butterflies (Stavenga et al., ). However, for most animals, the light is reflected by scattering within the subsurface tissues (e.g., dermal connective tissue in vertebrates). Thus, the default color of many organisms, particularly large and complex ones that scatter a great deal of light, is white. This in turn has two implications for the visual ecologist. First, in countershaded animals, it is not the ventral surface that is made white but the dorsal surface that is made dark. This difference can be seen clearly in surface and cave populations of the Mexican blind cavefish (figure .). Second, and more importantly, the increased reflectance one sees in the UV portion of the spectrum for many animals may not have an adaptive function in most if not nearly all cases. Instead, it is often a simple consequence of the lack of UV absorption by the overlying pigment, allowing the structures underneath to scatter and reflect the light. 
Many carotenoid pigments, for example, do not absorb UV radiation, and thus many yellow, orange, and red animals have a UV peak in their reflectance spectrum that may have no biological relevance. A final major implication for those visual ecologists interested in the chemistry and genetics of coloration is that a reflectance spectrum with two peaks may be due to only one pigment that absorbs the light of the intermediate wavelengths, and, conversely, that a reflectance spectrum with one peak may be due to two pigments that
Chapter 13
Figure 13.24 The Mexican blind cavefish Astyanax mexicanus. (Top) Individual from surface population. (Bottom) Individual from cave population. Note that although the dorsal coloration differs between the two, the ventral coloration remains the same. (From Johnsen, 2003)
absorb light of wavelengths above and below the peak. This may appear obvious in retrospect, but it is surprisingly easy to confuse the peaks of a reflectance spectrum with the action of the pigments when it is in fact the valleys that are truly relevant.

As mentioned above, pigmentation in camouflaged animals ranges from completely unpatterned to exceedingly complex patterns. Although there are, of course, exceptions, in general the complexity of body coloration in camouflaged animals mirrors the complexity of the background. When the background is featureless (snow, ocean, sky), the camouflaged animal is usually one color (or possibly countershaded) in an attempt to match the spectral radiance of the background. When the background is complex (forest, coral reef), most animals use complex patterns that obscure distinctive body features (e.g., eyes) and the outline of the body, and in some cases provide false depth and shading cues that disrupt the general three-dimensional impression of the body (e.g., the cuttlefish in figure . and the stonefish in figure .). Interestingly, although structural colors could in theory provide the same level of pattern complexity, nearly all complex patterns in organisms are due to pigments. In addition, although counterillumination is a dynamic form of camouflage, and certain structural colors in fish and cephalopods are also changeable, most cases of dynamic camouflage are due to pigments. In fish and a few other vertebrates, this ability to change color is usually due to the migration of pigments. In cephalopods, it is due to the muscle-mediated expansion and contraction of small sacs containing pigments (reviewed by Hanlon and Messenger, ). This last example is the only case in which pigmented camouflage can change as rapidly as counterillumination, and in general, pigmented camouflaged animals are more vulnerable to rapidly changing optical conditions than are counterilluminating, transparent, and mirrored animals.
Signaling via Absorbed Light—More Color Patterns

As impressive as the diversity of pigment-based camouflage can be, the diversity of pigment-based color signals is truly astonishing. In addition to incredible variation among species, especially among tropical birds, coral reef fish, beetles, moths, and butterflies, there is also impressive diversity within different populations of certain species, even those from the same geographic region (e.g., figure .). In addition,
Signals and Camouflage
Figure 13.25 Twelve color morphs of the strawberry poison dart frog (Oophaga pumilio), all from the Bocas del Toro region of Panama: Almirante (mainland), Aguacate Peninsula (mainland), Bastimentos Island, Bocas Island, Cayo Agua, Uyama River (mainland), Guabo River (mainland), Popas Island, Rambala (mainland), Shepherd Island, San Cristobal Island, and Solarte Island. (From Siddiqi et al., 2004)
many species, especially among coral reef fish, have completely different color patterns as juveniles and adults, and of course many species have sexually dimorphic color patterns. The diversity of these colors, the pigments underlying them, and their various aposematic and reproductive functions have been extensively reviewed (Cott, ; Fox and Vevers, ; Stevens, ). Thus, as before, we confine ourselves to a few central optical and visual points. In general, a color signal is expected to be conspicuous, at least to the intended viewer. However, conspicuousness is difficult to define. It is (relatively) easy to estimate whether a color signal is detectable or not, using either behavioral assays or visual models (e.g., the noise-limited color model of Vorobyev and Osorio, ). The more complicated issue is whether one can say that one color signal is more conspicuous than another when both are well above the detection threshold for a given visual system and situation. For example, consider a -nm blue laser pointer and -nm green laser pointer, both shining on a sheet of white paper and being viewed by color-normal humans. According to perceptually uniform color models such as CIE , the chromatic distance between the blue laser and the paper is far larger than the chromatic distance between the green laser and the paper. However, people in general would likely be divided as to whether the green or the blue laser pointer was more conspicuous, especially if their perceived brightnesses were set to be equal. In fact, it is difficult to clearly define what is meant by a more conspicuous signal. One possibility is that a more conspicuous signal may be detected more rapidly or
selectively attended to in a field of less conspicuous signals. To the best of our knowledge, the first has not been tested and the second has not been correlated with color distances. Of course it is well documented that certain species prefer signals to have a greater chromatic distance from the background, but this is generally thought to occur because this increased distance is possibly an honest indicator of the quality of the individual (Zahavi et al., ). This makes it difficult to determine whether preferential responses to a more conspicuous signal are due to increased detectability or simply to underlying motivation. All this is relevant because it is tempting to use chromatic distance (especially the just-noticeable-differences calculated from the noise-limited color model of Vorobyev and Osorio, ) to compare the conspicuousness of different signals and then use this information to make ecological and evolutionary inferences. Until more is known about how animals compare color signals that are well above the detection threshold, this approach is suspect. There are, however, two cases in which larger chromatic distances between the signal and the background do imply increased detectability. The first is when the background is noisy (e.g., a coral head encrusted with algae and many differently colored invertebrates). In this case, the detection threshold is higher than would be the case for a simple background, and a signal with a greater chromatic distance from the background has a greater chance of being above this threshold. The second is when signals are viewed through an attenuating medium such as water or fog (see chapter ). Because scattering and absorption by the medium makes any viewed object look more and more like the background with increasing viewing distance, everything eventually becomes indistinguishable (figure .). However, colors that are separated by a larger chromatic distance will remain distinct at a greater distance. 
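The receptor-noise-limited chromatic distance just mentioned can be sketched in a few lines for the simplest (dichromatic) case. The quantum catches and Weber fractions below are hypothetical placeholders, not measured values:

```python
import math

def delta_s_dichromat(q_stim, q_bg, noise):
    """Chromatic distance (in just-noticeable-difference units) between a
    stimulus and its background for a dichromat, following the
    receptor-noise-limited model of Vorobyev and Osorio.
    q_stim, q_bg: quantum catches of the two receptor classes for the
    stimulus and the background; noise: Weber fraction of each class.
    Values above ~1 JND are conventionally taken as discriminable."""
    # Receptor contrasts (log quantum-catch ratios)
    df = [math.log(s / b) for s, b in zip(q_stim, q_bg)]
    # Opponent difference scaled by the combined receptor noise
    return abs(df[0] - df[1]) / math.sqrt(noise[0] ** 2 + noise[1] ** 2)

# Hypothetical example: a signal that excites the two receptor classes
# unequally, seen against a spectrally flat background
print(delta_s_dichromat([0.8, 0.3], [0.5, 0.5], [0.05, 0.05]))
```

Note that, as the text cautions, a large distance guarantees detectability only near threshold; it does not by itself rank the conspicuousness of signals that are all well above threshold.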
Aside from these two cases, however, one must be careful about comparing the conspicuousness of highly visible color signals. An important implication of this is that a wide diversity of colors can be chosen for color signals as long as they are sufficiently distinguishable from the background. Nevertheless, we do see that some colors are chosen more than others for signaling in certain habitats. In the case of dense forests, it has been noted that red and blue coloration is more common than in other terrestrial habitats. For aquatic species, it is known that fish on coral reefs are often violet and yellow, and fish in greenish coastal and fresh waters are often orange and red (Lythgoe, ; Marshall and Johnsen, ; figure .). Although there are, of course, many nonoptical reasons why certain colors are chosen, the three patterns just mentioned may at least partially be explained by optical and visual arguments.

In the case of forests it is important to remember that the light environment within them is highly variable due to spectral filtering of light by leaves and the relationship of gaps to the location of the sun, clouds, and blue sky (see chapter ). Thus, as we discussed in the section on structural colors, signals with narrow reflectance spectra provide a more constant signal. Natural pigments usually have broad reflectance spectra, but they can appear narrower to a viewer if they primarily reflect light at the upper and lower edges of the viewer’s spectral sensitivity range. Blue/violet and red signals do this and also contrast with green foliage, which may partially explain why these colors are chosen by certain forest insects and birds. The situation is different in aquatic habitats. Here the illumination is relatively constant, but with a far narrower spectral range (see chapter ).
Although initial intuition might suggest that the best color for a signal viewed under relatively monochromatic illumination is one that reflects strongly in that limited spectral region (i.e., choose a blue signal for blue illumination), this only creates a bright signal, not one that
Figure 13.26 Maxwell triangle for a honeybee (Apis mellifera) viewing a violet flower and a blue flower through fog. At close viewing distances, the two flowers are readily distinguishable, but as viewing distance increases (indicated by arrows), the apparent colors of both flowers approach that of the fog background. At a certain distance, the chromatic contrast between the two flowers will be too low to detect, and they will be indistinguishable. In general, the greater the original chromatic contrast between the two flowers, the greater the distance at which they can be distinguished from each other. This is one of the cases where increased chromatic distance is definitely known to affect detectability.
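The convergence toward the background color described in this caption follows from simple beam-attenuation mixing; below is a minimal sketch, with hypothetical radiances per receptor channel and an assumed attenuation coefficient:

```python
import math

def apparent_radiance(L_obj, L_bg, c, d):
    """Radiance of an object seen through an attenuating medium: the
    object's light decays as exp(-c*d) while background (veiling) light
    fills in, so every object converges on the background with distance.
    c: beam attenuation coefficient (1/m); d: viewing distance (m)."""
    t = math.exp(-c * d)
    return L_obj * t + L_bg * (1.0 - t)

# Two hypothetical flowers and a fog background, each described by the
# responses of two receptor channels
violet, blue, fog = (0.9, 0.2), (0.3, 0.8), (0.5, 0.5)
for d in (0.0, 10.0, 50.0):
    v = [apparent_radiance(o, b, 0.05, d) for o, b in zip(violet, fog)]
    bl = [apparent_radiance(o, b, 0.05, d) for o, b in zip(blue, fog)]
    print(f"d = {d:>5} m  violet -> {v}  blue -> {bl}")
```

The larger the initial separation between the two objects’ channel responses, the longer they remain distinguishable, which is the caption’s point.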
Figure 13.27 (Left) The dottyback (Pseudochromis paccagnellae) photographed at depth on a coral reef. (Right) Head of male three-spined stickleback (Gasterosteus aculeatus) photographed in the greenish water that it usually inhabits. (Right photo courtesy of Piet Spaans)
contrasts chromatically with the background. Instead, to generate a distinguishable color signal under illumination with a narrow spectral range, one must use a pigment whose reflectance changes sharply over that region. In the case of blue coral reef waters, red and orange pigments look black and blue, and many green pigments look light blue. Only violet, yellow, and some green pigments, which have reflectance
Figure 13.28 (Top) The GretagMacbeth ColorChecker™ as it appears in daylight. (Bottom left) The same ColorChecker as it would appear if it were photographed at 40-m depth in clear ocean water. Note that the only squares that appear to have a color distinct from the blue water are those that were originally violet or yellow. (Bottom right) The same ColorChecker as it would appear if it were photographed at 20-m depth in green coastal water. Note that the only squares that appear to have a color distinct from the green water are those that were originally red or orange. In both cases the squares that still have a color separate from the background are those whose reflectance spectrum changes over the wavelength range of the illumination.
spectra that change rapidly in the spectral region of the blue illumination, appear to have a color separate from blue (figure .). Analogously, in green coastal and fresh waters, it is primarily the orange and red pigments that have reflectance spectra that change over the green portion of the spectrum and thus appear to have a color distinct from the background. Fluorescent pigments, however, are not subject to this restriction because their reflected light does not have to be a subset of the spectrum of the illumination. Therefore, one can have red coloration in blue waters at depth, as is seen in certain fluorescent corals (e.g., Read et al., ).

A final issue to discuss in this all-too-brief introduction to camouflage and signaling is the potential for private channels. As we have seen, animal visual systems vary in their abilities, including differences in absolute sensitivity, color perception, spectral range, spatial and temporal resolution, and their ability to detect various aspects of polarization. Some of these parameters tend to be roughly equal among animals in a given environment—for example, high absolute sensitivity in the deep sea—but most vary substantially within a group of interacting species. Thus, it is possible for an animal to send a signal that is readily apparent to a certain set of animals (usually, but not always, conspecifics) but inconspicuous to others. Indeed, any analysis of a colored visual scene will show that certain organismal features are more apparent in certain wavebands, and with certain opponencies, than in others (e.g., figure .).
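The argument that a distinguishable signal under narrowband illumination needs a reflectance edge within the illumination band can be illustrated with toy spectra; the step-edge reflectances and the triangular “blue-water” illuminant below are entirely hypothetical:

```python
# Apparent signal = reflectance(λ) × illumination(λ), wavelength by
# wavelength. A red pigment (edge at 600 nm) reflects almost none of a
# narrowband blue illuminant and so looks black, whereas a yellow
# pigment (edge at 500 nm) has its reflectance edge inside the
# illumination band and so retains a distinctive spectral shape.
wavelengths = range(400, 701, 10)

def red(l):
    return 0.9 if l >= 600 else 0.05     # reflectance edge at 600 nm

def yellow(l):
    return 0.9 if l >= 500 else 0.05     # reflectance edge at 500 nm

def blue_light(l):
    # Triangular illuminant centered near 480 nm (hypothetical)
    return max(0.0, 1.0 - abs(l - 480) / 60.0)

red_signal = sum(red(l) * blue_light(l) for l in wavelengths)
yellow_signal = sum(yellow(l) * blue_light(l) for l in wavelengths)
print(red_signal, yellow_signal)  # the red pigment reflects far less
```

Under a green illuminant the situation reverses: the red and orange edges then fall inside the band, matching the coastal-water pattern described above.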
Figure 13.29 (Left) Full-color image of two surf parrotfish (Scarus rivulatus: top right), a lined butterflyfish (Chaetodon lineolatus: middle), a minifin parrotfish (Scarus altipinnis: bottom), and the head of a Hussar (Lutjanus adetii: upper left). (Top middle) The same image viewed by an animal with only a medium-wavelength (i.e., green) visual pigment. Note that the patterns on the three parrotfish are no longer visible. (Top right) The image now viewed by an animal with only a long-wavelength (i.e., red) visual pigment, which increases the achromatic contrast of the details on the parrotfish. (Bottom two panels) A dichromatic viewer can distinguish more than a monochromat. The ability to perceive hue differences in this case is better for a dichromat with a short- and a long-wavelength pigment than for a dichromat with a short- and a medium-wavelength pigment. However, neither dichromat distinguishes all the hues as well as a trichromat. In particular, note that the parrotfish in the upper right has many hue shifts that have no achromatic or dichromatic contrast. Also note that the butterflyfish in the center is conspicuous to all the modeled visual systems, suggesting that its signal is not private.
Hypothesized cases of what have been termed private channels include, but are not limited to, the red patches of the female crab-spider Misumena vatia (Hinton, ); the red bioluminescence produced by three genera of deep-sea dragonfish (Partridge and Douglas, ); the ultraviolet patterning of certain Heliconius butterflies (Bybee et al., ), fish (Cummings et al., ; Siebeck et al., , ), and passerine birds (reviewed in Stevens and Cuthill, ); the polarized reflectance of scarab beetles (Brady and Cummings, ), butterflies (Sweeney et al., ), cephalopods (Mäthger et al., ), and stomatopods (Chiou et al., ); and the contrasting stripes of the coral reef fish Pygoplites diacanthus and Thalassoma lunare (Marshall, a). In general, these studies are based on optical measurements and subsequent calculations that demonstrate that a signal is more detectable for one species than for another, although a few have demonstrated this difference via behavioral assays (e.g., Cummings et al., ). However, as is noted by Brandley et al. (), to date no study of private channels shows that the signaler benefits from the signal being cryptic to potential eavesdroppers or that the potential eavesdropper would benefit from detecting the signal. In other words, it remains to be shown that the privacy of certain signals is actually adaptive. Demonstrating this conclusively will likely be challenging because it ideally involves measures of fitness and at a minimum involves carefully
controlled behavioral tests, but is a necessary piece of this fascinating and rapidly developing subfield of visual ecology.
In Conclusion

And so we come to the end of this brief introduction to visual ecology. As we hope we have made clear, it is a highly multidisciplinary field that demands many different approaches. In fact, it may seem that to succeed, one requires the mind of a physicist, the hands of a surgeon, and the intuition of a first-class dog trainer! Seemingly simple behavioral experiments are fraught with issues of interpretation, and even the hairiest mathematical models of vision and optics just barely scratch the surface of the true underlying complexity. Few other subjects ask us to understand so many different concepts and to be familiar with so many different organisms. Perhaps this is why visual ecologists are so happy to collaborate and why the field operates as a relatively collegial superorganism. We need each other. However, we feel that visual ecology returns to us far more than it asks. Perhaps more than most, the subject truly lives up to the promise of science as art. It combines rigor and intellectual surprise with both beauty and humor. Who is not awed by the bioluminescence of the deep sea or not tickled at the thought of African beetles rolling dung under the polarized moonlight? It is perhaps not surprising that many visual ecologists are also photographers and painters and that nearly all simply must tell you just one more funny story about an odd animal and its visual pursuits. We, the authors, could not have asked for a more rewarding way to spend our days and for better colleagues to spend them with. We hope the same for you and that this book serves you well.
Glossary

α-Band: the main absorbance peak of a visual pigment, responsible for most of its wavelength specificity and the single wavelength label (λmax) ascribed to a visual pigment.

Absorbance: the (usually base-) logarithm of the quotient of I (incident light intensity) divided by It (transmitted light intensity). Absorbance and absorptance have similar values when both values are small but are not equivalent.

Absorptance: the fraction of light absorbed. In a nonscattering substance it is equal to 1 − It/I, where It is transmitted light intensity and I is incident light intensity.

Absorption: the process by which a photon is converted into other forms of energy. Sometimes also used as a synonym for absorptance.

Acceptance angle (Δρ): the angular half-width of a photoreceptor’s (typically Gaussian) receptive field profile.

Accommodation: the ocular processes that allow the eye to focus on objects at different distances from the eye (or to focus sequentially on objects at the same distance but seen in different optical media). In vertebrates, accommodation typically alters the position or shape of the lens via the actions of the ciliary body.

Active vision: analysis of visual space that involves eye movements, head movements, and movements of the body. Often involves flow fields and motion parallax.

Acute zone: a region of an invertebrate eye where spatial resolution is maximized. In apposition compound eyes this region is characterized by large corneal facet lenses and narrow interommatidial angles.

Aggressive mimicry: a form of mimicry in which an ambush predator hides its true form so that it appears to be a prey item or collection of prey items.

Airglow: weak source of light in the upper atmosphere caused by the recombination of ions.

Airy disk: see Diffraction.

Amacrine cell: a cell type in the vertebrate retina that regulates the signals of bipolar cells before they are transferred to the retinal ganglion cells. Some are sensitive to visual motion (e.g., starburst amacrine cells).

Angle of polarization: the angle of the e-vector of linearly polarized light, by convention from ° to °, with ° being horizontal. Also called plane of polarization.

Aphakic gap: an elongation of the pupil in fish that is beyond the margin of the lens. This exposes the full diameter of the lens in certain directions and thus allows the eye to increase light capture.

Area centralis: see Fovea.

Attenuation: reduction of light intensity by absorption and scattering.

Aurora: source of green and red light caused by the collision of charged particles from the sun with molecules in the upper atmosphere.

β-Band: a secondary absorbance peak to the short-wavelength side of a visual pigment’s α-band that may also contribute to the overall sensitivity of the visual pigment.

Banked or tiered retina: a retina in which the receptive segments of the photoreceptors are not arranged in a single layer but in two or more layers, one behind the other.
Bioluminescence: a form of chemiluminescence created by organisms.

Bipolar cell: a cell type in the vertebrate retina that collects signals from either rod or cone photoreceptors and relays these signals to the retinal ganglion cells either directly or indirectly (via the horizontal cells and amacrine cells).

Birefringence: an optical property in which the refractive index of a material depends on the polarization of the light passing through it.

Blackbody radiation: electromagnetic radiation emitted by an object due to the thermally mediated motion of its atoms.

Brewster’s angle: the angle of incidence at a surface where reflection of unpolarized light creates full polarization perpendicular to the incident plane. Brewster’s angle depends on the ratio of the refractive indices of the two media.

Bright zone: a region of an apposition compound eye where sensitivity is maximized, characterized by large corneal facet lenses and interommatidial angles that are similar to, or larger than, those in other parts of the eye.

Bump: the photoreceptor’s response to a single photon.

Cartridge: in the first optic neuropil of insects (the lamina), a narrow cylinder of neural tissue that resides below each ommatidium. Each cartridge is more or less identical in terms of the numbers and types of cells it contains.

Caustic flicker: fluctuations in the intensity of light at a point or object caused by water ripples that refract and focus light.

Chappuis band: absorption band of ozone in the red portion of the visible spectrum.

Chemiluminescence: light emitted via certain chemical reactions.

Chromatic aberration: the inability of a lens to focus light of different wavelengths on the same image plane (shorter wavelengths are focused closer to the lens).

Chromaticity: a measure of the spectral distribution of light based on a given color visual system.
Chromaticity diagram: a graphical representation of color sensitivity, defined by the relative excitations of spectral sensitivities viewing a particular spectrum or color. In trichromatic vision, a Maxwell triangle is an example.

Chromophore: a small molecule attached to a larger one (generally to a protein) that gives the complex color. In visual pigments, the chromophores are vitamin A derivatives.

CIE: International Commission on Illumination, an organization devoted to creating standards for describing the human-perceived brightness and color of light.

Ciliary photoreceptor: a photoreceptor derived from a cell that contains a ciliary structure. Most photoreceptors in vertebrates are ciliary types, whereas invertebrates tend to use them less commonly.

Circularly polarized light (CPL): a polarization state of light in which the e-vector rotates one full circle for each wavelength traveled by the light, thus describing a circle as seen from the wave front or a helix as seen from the side. Can be either right-handed or left-handed.

Clearing agent: a hypothetical high-index soluble substance that would raise the refractive indices of the interstitial fluids and possibly the cytoplasm of animals to reduce light scattering and increase tissue transparency. Used by embryologists to make the internal structures of embryos more visible but has not yet been found in any transparent animal.

Coefficient, absorption: measure of the absorption characteristics of a medium; its inverse is equal to the average distance a photon travels before being absorbed. Usually denoted by a lowercase “a.”
Coefficient, beam attenuation: measure of the attenuation characteristics of a medium; its inverse is equal to the average distance a photon travels before being absorbed or scattered. Equals the sum of the absorption coefficient and scattering coefficient. Usually denoted by a lowercase “c.”

Coefficient, diffuse attenuation: measure of the attenuation of either vector or scalar irradiance. Usually denoted by “K.”

Coefficient, scattering: measure of the scattering characteristics of a medium; its inverse is equal to the average distance a photon travels before being scattered. Usually denoted by a lowercase “b.”

Color temperature: for a given spectrum of light, the temperature of a blackbody radiator that emits the spectrum that most closely approximates it.

Color vision: the ability to distinguish stimuli based on differences in their spectral content independently of their perceived brightnesses.

Cone photoreceptor: a vertebrate ciliary photoreceptor with a tapering outer segment and with the visual pigment located in folded lamellae. Cones are usually involved in photopic vision.

Contrast, chromatic: the distance between two perceived hues in a color space.

Contrast, Michelson: the difference between the radiance of an object and the radiance of the background, divided by the sum of the two radiances.

Contrast, Weber: the difference between the radiance of an object and the radiance of the background, divided by the radiance of the background.

Convexiclivate fovea: see Fovea.

Cosine corrector: a device that is placed over a light detector to ensure that it correctly measures vector irradiance. Usually in the form of a small white disk that diffuses incident light.

Counterillumination: camouflage strategy in which an animal obscures its silhouette by using bioluminescence to illuminate its ventral surface.

Crystalloluminescence: light emitted during crystal formation.
Dark noise: the visual noise arising from the thermal activation of rhodopsin molecules in the absence of light. Sometimes referred to as thermal noise.

Degree of polarization: also percentage polarization, the amount to which an electromagnetic wave is polarized from to %.

Dehydroretinaldehyde: the aldehyde form of vitamin A (dehydroretinol), used as a visual pigment chromophore by some animals. Also called “dehydroretinal” or “retinal.”

Dichroism: the property of a substance to absorb light with one e-vector orientation more strongly than others.

Dichromatic color vision: a color-vision system based on two classes of color receptors.

Diffraction: the constructive and destructive interference induced in incident wave fronts of monochromatic light as they pass through an aperture whose diameter is comparable to or smaller than the light’s wavelength. For a point light source this results in a blurred image consisting of a bright central circular spot (known as the Airy disk) surrounded by concentric bright and dark rings. The angular half-width of the Airy disk is equal to the wavelength divided by the aperture diameter.

Diopter: the unit of optical power (symbolized by D) of a focusing system, equal to the reciprocal of the focal length measured in meters.

Direction-selective ganglion cell (DSGC): a retinal ganglion cell in vertebrates that shows excitation for motion in its receptive field along a particular axis and inhibition when the motion is in the opposite direction.
Dorsal rim: a region of ommatidia in compound eyes, usually arranged in a narrow strip, pointing upward and responsible for detection of the celestial e-vector pattern.

Double, twin cones (DT): pairs of cones closely apposed where members may be electrically coupled. As used here, twin cones have members that are anatomically identical and contain the same visual pigment; double cones have members that are anatomically different and contain different visual pigments.

Eccentricity: a measure of retinal distance (in millimeters) from the center of the fovea in a vertebrate eye. Alternatively, a measure of the angular distance (in degrees).

Electroretinogram (ERG): the extracellular electrical response waveform of various cells in the retina, including photoreceptors and their associated interneurons, to flashes (or other modulations) of light.

Elliptically polarized: a situation in which the electric field vector describes an ellipse at a plane normal to the propagation direction of light. Linearly and circularly polarized light are the two extreme forms of elliptically polarized light.

e-Vector: the electrical vector of an electromagnetic wave (in this case visible light), perpendicular to the direction of propagation.

Flow field: the pattern of motion vectors of the stimuli produced by particular locations in space as they move in unison due to translation and/or rotation, generally of the eye of an animal or of the animal’s self-motion but also by objects moving jointly in the external world, as in looming stimuli. In vision, also called optic flow.

Fluorescence: optical process by which a portion of light that is absorbed by a pigment is reemitted at a longer wavelength.

F-number: a measure of the light-gathering capacity (or image brightness) of an optical system, equal to the system’s focal length divided by its aperture (pupil) diameter. Lower F-numbers indicate brighter images.
Fovea: a small region of the vertebrate retina where the density of visual cells (and thus the spatial resolution) is maximized. More correctly, such a region is referred to as an area centralis. If in addition the retinal surface within this region is dimpled (or pitted) inward it is referred to as a fovea. If this pit is particularly deep and steep-sided, the fovea is referred to as a convexiclivate fovea.

Full-width-half-max: defined for spectral peaks. The wavelength range over which the intensity is at least one half the peak intensity.

G-protein-coupled receptor (GPCR): one of a large family of membrane-bound proteins characterized by having seven transmembrane helices and the ability to bind a ligand and subsequently activate a G-protein (a signaling protein that has GTPase activity). Opsin is an example.

Grating: a pattern of black and white stripes of particular spatial frequency whose intensity modulation can be described by a square wave or a sine wave.

Head bob: a type of head motion during which the head is alternately projected forward and held steady (to stabilize vision intermittently). Characteristic of birds and a few other animals.

Hue: a descriptor of color described by the different wavelengths or dominant wavelengths in the spectrum and resulting in terms such as blue, green, yellow, or red.

Illuminance: vector irradiance, weighted by the human photopic sensitivity curve.

Illuminant: light falling on an object or area.

Infrared: electromagnetic radiation with a wavelength (in a vacuum) between . μm and μm.
Interreceptor angle (Δϕ): the angular separation of photoreceptors in the retina. In compound eyes this is equivalent to the interommatidial angle, the angular separation of ommatidia.

Intrarhabdomal filter: a section of photostable, colored material located in a photoreceptive rhabdom, involved in spectral tuning of the underlying photoreceptor.

Irradiance, scalar: the energy or photon flux per unit area striking a surface, weighted equally from all directions.

Irradiance, vector: the energy or photon flux per unit area striking a surface, weighted by the cosine of the angle between the incident light and the perpendicular to the surface.

Kinesis: a response to a stimulus that leads to a change in locomotion, either in speed or rate of turning. A kinetic response to light is called photokinesis.

Lambda-max (λmax): the wavelength of the peak in a spectrum. For visual pigments, it is the peak of their spectral absorbance.

Landmark: in vision, any relatively prominent and permanent object that can serve as a reference point for orientation or navigation.

Light guide: a cylindrical or fiber-shaped structure that captures light by total internal reflection. Photoreceptors are usually light guides. Such a device can also conduct a light stimulus from the optics to the retina.

Linear polarization: refers to light in which the e-vectors of constituent photons are all oriented on the same axis or in the same plane. Sometimes termed plane polarization.

Lobula plate: ganglionic region in the optic tracts of insects that is primarily involved with processing motion.

Looming stimulus: an approaching object that produces a radial flow field. Often indicates imminent collision.

Lux: SI measure of illuminance.
Matched filter: in spatial vision, a sensory filter created when the spatial layout of a matrix of receptors is matched to the spatial demands of a specific task (thereby allowing the sensory periphery to detect and analyze a major aspect of the stimulus, freeing the brain to analyze other aspects).
Matthiessen lens: a spherical lens, typical of aquatic animals, containing a parabolic gradient of refractive index from the lens center (where the refractive index is around 1.52) in all radial directions to its edge (where the refractive index is around 1.33). Characterized by having a focal length that is 2.5 times the lens radius. The refractive index gradient of Matthiessen lenses minimizes spherical aberration.
Maximum detectable spatial frequency (νmax): the finest spatial frequency that an eye can discriminate at any given light level. Occurs when the visual signal becomes equal to the visual noise.
Mechanoluminescence: light emitted by deforming, fracturing, or crystallizing certain substances.
Microspectrophotometry: any technique that permits measuring the absorption spectra of very small objects, such as single photoreceptor cells.
Microvillus: a membranous protrusion from a cell surface shaped like a tiny tube, typically only a few cell-membrane thicknesses in radius.
Midget ganglion cell: a type of retinal ganglion cell with smaller dendritic fields that project their axons to the parvocellular layers of the lateral geniculate nucleus in the brain. In the center of the fovea they receive input from single cones. They constitute around 70% of retinal ganglion cells and are responsible for high-resolution color vision.
Minimum contrast threshold: for a given visual system, the lowest contrast that can be detected.
Modulation transfer function (MTF): the ratio of image contrast to object contrast as a function of spatial frequency (used for describing the optical quality of a lens).
Monochromatic: used to describe electromagnetic radiation with a very narrow spectral distribution.
Monte Carlo algorithm: numerical method to study light propagation in which the paths of large numbers of virtual photons are calculated.
Moon compass: see Sun compass.
Motion parallax: the visual sensation given when the eye, head, or body is moved in space, causing nearby objects to show larger and more rapid motions than more distant objects.
Multifocal lens: see Chromatic aberration.
Multiple scattering: situation in which a photon is scattered more than once between leaving an object and arriving at a viewer's eye. Tends to blur detail.
Noise: uncertainty arising in the visual signal due to the random and unpredictable nature of photon arrival (photon shot noise), the inability of photoreceptors to respond identically to each photon of incident light (transducer noise), and the thermal activation of rhodopsin molecules (dark noise).
Nystagmus: a type of eye-movement profile characterized by slow movements in one direction and quick flick-backs of eye position. Generally involved in optokinesis.
Offset sensitivity: a spectral sensitivity that differs from the spectral distribution of the illuminant or of a viewed background.
Oil droplet: in vision, a small spherical lipid compartment, usually containing a strongly colored material. Often associated with cone photoreceptors and involved in spectral tuning.
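The photon shot noise described under "Noise" follows Poisson statistics, so the signal-to-noise ratio of a photon count grows as the square root of the mean catch. A small simulation (the sampler, seed, and sample sizes are illustrative assumptions):

```python
import random
import statistics

random.seed(1)

def poisson_sample(mean):
    """Draw one Poisson-distributed photon count (Knuth's multiplication method)."""
    limit, k, p = math_e ** -mean if False else pow(2.718281828459045, -mean), 0, 1.0
    while p > limit:
        p *= random.random()
        k += 1
    return k - 1

snrs = {}
for mean in (4, 16, 64):
    counts = [poisson_sample(mean) for _ in range(4000)]
    snrs[mean] = statistics.mean(counts) / statistics.stdev(counts)
    print(mean, round(snrs[mean], 1))  # SNR grows roughly as sqrt(mean): ~2, ~4, ~8
```

Quadrupling the photon catch thus only doubles the reliability of the count, which is the root of the square-root law entry below.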
Ommatidium: single optical unit of a compound eye, consisting of two lenses (the corneal facet lens and underlying crystalline cone) overlying a bundle of photoreceptors (typically eight) that together build a light-sensitive structure called the rhabdom.
Opponency: the combination of neuronal outputs to create a difference signal, often used in color or polarization processing.
Opsin: one of a large monophyletic family of GPCRs that bind a retinoid chromophore and become photosensitive. Used in all true visual systems of animals.
Optic flow: see Flow field.
Optical cross talk: the absorption in a photoreceptor of light that was intended for a neighboring photoreceptor (but that escaped the neighbor due to its inability to contain it).
Optical cutoff frequency (νco): the finest spatial frequency passed by an optical system.
Optical density: the base-10 logarithm of the quotient of I0 (incident light intensity) divided by It (transmitted light intensity). See Absorbance.
Optical sensitivity: a measure of a photoreceptor's ability to capture photons when an eye of given optical construction views an extended light source (given in units of μm² steradian).
Optokinesis: the visual behavior that stabilizes the eye during whole-field motion of the visual field. Generally involves nystagmus.
Optomotor response: the movement response of an eye to a motion stimulus, generally to whole-field motion. One example is optokinesis.
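Because optical density is defined logarithmically, densities of stacked filters add while their transmissions multiply; a minimal sketch (the function names are mine, not from the text):

```python
import math

def optical_density(incident, transmitted):
    """OD = log10(I0 / It); an OD of 1.0 corresponds to 10% transmission."""
    return math.log10(incident / transmitted)

def transmission(od):
    """Fraction of incident light transmitted through a filter of given OD."""
    return 10.0 ** (-od)

print(optical_density(100.0, 10.0))  # 1.0
print(optical_density(100.0, 1.0))   # 2.0
print(round(transmission(0.3), 3))   # ≈ 0.501
```

Two OD 1.0 filters in series therefore behave as a single OD 2.0 filter, passing 1% of the incident light.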
Outer segment: the cellular compartment of a rod or cone photoreceptor that contains the photoreceptive membranes that house the visual pigment.
Parasol ganglion cell: a type of retinal ganglion cell with large dendritic fields that project their axons to the magnocellular layers of the lateral geniculate nucleus in the brain. They constitute around 10% of retinal ganglion cells and are responsible for luminance vision. They are not particularly sensitive to color.
Path radiance: light that is scattered into the path between an object and a viewer and that reduces the contrast of the viewed object.
Phase velocity: in any light wave, the speed of the peaks of the waveform. Only one of several "speeds of light," and it can be greater than the speed used in special relativity.
Photocyte: biological cell that emits light.
Photokinesis: see Kinesis.
Photon noise: variance in the number of photons that strike a given surface over time. Due to the random nature of photon arrivals (which is governed by Poisson statistics); its standard deviation equals the square root of the average photon arrival rate.
Photon: indivisible unit of electromagnetic energy.
Photonic crystal: a structure that is highly ordered and periodic on the size scale of a wavelength of light. Often highly colored and iridescent.
Photopic: pertaining to vision under bright conditions, such as daylight.
Photopigment: any chemical that exhibits a chemical change when exposed to light. In vision, these are primarily the visual pigments or other opsin-based molecules.
Phototaxis: see Taxis.
Piezoluminescence: light emitted by the deformation of certain substances.
Pigment: a substance, generally an organic molecule, that selectively absorbs a portion of the visible spectrum.
Planckian locus: line on a chromaticity diagram that denotes the chromaticities of blackbody radiators of various temperatures.
POL interneuron: an identified interneuron in the insect optic neuropil (in the medulla) that integrates input from dorsal rim photoreceptors.
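The quantal-versus-energy distinction drawn in these entries comes down to dividing by the photon energy E = hc/λ: a given energy flux at long wavelengths corresponds to more photons than the same flux at short wavelengths. A small sketch (the constants are standard physical values, not from the text):

```python
H = 6.626e-34   # Planck's constant, J s
C = 2.998e8     # speed of light in vacuum, m/s

def photon_energy(wavelength_nm):
    """Energy of one photon (J) at the given vacuum wavelength, E = h*c/lambda."""
    return H * C / (wavelength_nm * 1e-9)

def photons_per_watt(wavelength_nm):
    """Photon flux (photons/s) carried by 1 W of monochromatic light."""
    return 1.0 / photon_energy(wavelength_nm)

print(f"{photon_energy(500):.3e}")     # ≈ 3.973e-19 J per photon at 500 nm
print(f"{photons_per_watt(500):.3e}")  # ≈ 2.517e+18 photons/s per watt
```

This conversion is what turns an energy irradiance spectrum into the quantal irradiance spectrum that photoreceptors actually count.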
Polarization vision (PV): used here to refer to visual sensitivity to linearly or circularly polarized light.
Polarization: in light, the axis of oscillation of the electric field.
Polarized light (PL): light in which the e-vector lies in a plane (for linearly polarized light), or in which the e-vector rotates clockwise or anticlockwise along the beam axis (for circularly polarized light).
Porphyropsin: a visual pigment containing a dehydroretinaldehyde chromophore.
Private signal: a signal that is conspicuous to a viewer with one set of visual abilities but relatively cryptic to a viewer with a different set. To be truly private, the privacy of the signal should be adaptive.
Quantal: pertaining to photons rather than energy. Often used with irradiance or radiance.
Quarter-wave retarder: a structure that alters the polarization of light traveling through it (through birefringence), such that the vectorial components of the e-vector are slowed by a difference of 90°, resulting in a conversion of linearly polarized light to circularly polarized light (and vice versa).
R1–R7 photoreceptors: in crustaceans, these are proximally placed receptor cells whose microvilli usually fuse into a single (sometimes tiered) rhabdom, generally containing middle-wavelength-absorbing visual pigments.
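The quarter-wave retarder's conversion of linear to circular polarization can be verified with Jones calculus, representing the e-vector as two complex amplitude components (this framework is standard optics, not taken from the text):

```python
import cmath
import math

def quarter_wave(ex, ey):
    """Retard the y-component by 90 degrees relative to x (fast axis along x)."""
    return ex, ey * cmath.exp(-1j * math.pi / 2)

# Linearly polarized light at 45 degrees: equal, in-phase x and y components.
ex, ey = 1 / math.sqrt(2), 1 / math.sqrt(2)
ex2, ey2 = quarter_wave(ex, ey)

# Circular polarization: equal amplitudes with a 90-degree phase difference.
phase_diff = cmath.phase(ey2) - cmath.phase(ex2)
print(round(abs(ex2), 3), round(abs(ey2), 3), round(math.degrees(phase_diff)))
```

Running the retarder in reverse (a +90° retardation on circular input) recovers linearly polarized light, which is how some stomatopod photoreceptor arrangements are thought to detect circular polarization.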
R8 photoreceptor: a distally placed (crustacean) or separated (insect) photoreceptor that usually contains an ultraviolet-sensitive or violet-sensitive visual pigment.
Radiance: the energy or photon flux per unit area striking a surface divided by the solid angle of the source of the radiation.
Radiometry: the technical field devoted to the measurement of irradiance and radiance.
Raman scattering: a process by which a photon loses energy (and thus increases its wavelength) on being scattered.
Rayleigh scattering: light scattered by particles much smaller than the wavelength of light. Rayleigh scattering produces the celestial polarization pattern as well as the blue color of the daytime and moonlit (but not the twilight) sky.
Receptive field: the region of space within which a light stimulus can be placed that affects the response of a visual cell.
Reflectance: the fraction of incident light reflected from a surface.
Refractive index: an optical property of a material that determines the phase velocity of light that passes through it.
Reichardt detector: a hypothetical neural circuit for detecting motion, involving two photoreceptor units with delay filters and summing of signals. Also known as an "elementary motion detector" or EMD.
Retina: the light-sensitive layer of cells and tissues that lines the inner surface of the eye.
Retinal ganglion cell: a visual cell in the inner layers of the vertebrate retina that, depending on eccentricity, receives inputs from one to several hundred photoreceptors indirectly via bipolar cells and amacrine cells. The axons of ganglion cells carry highly processed information from the eye to the brain via the optic nerve. The two most common types—parasol ganglion cells and midget ganglion cells—form two parallel matrices within the retina. Their dendritic field sizes increase with eccentricity, with a concomitant decrease in ganglion cell density and visual spatial resolution.
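The Reichardt detector described above can be sketched as a pair of receptors whose signals are delayed, cross-multiplied, and subtracted; the sign of the summed output then encodes direction. A toy discrete-time version (the stimulus values and one-sample delay are made-up illustrations):

```python
def reichardt(left, right, delay=1):
    """Correlation-type elementary motion detector (EMD).
    left, right: time series sampled at two neighboring receptors.
    Returns the summed opponent output; its sign encodes direction."""
    out = 0.0
    for t in range(delay, len(left)):
        out += left[t - delay] * right[t] - right[t - delay] * left[t]
    return out

# A bright bar drifting rightward reaches the left receptor one time step
# before the right one; reversing the stimulus order flips the output sign.
bar = [0, 0, 1, 2, 1, 0, 0]
rightward = reichardt(bar, [0] + bar[:-1])  # right receptor lags -> positive
leftward = reichardt([0] + bar[:-1], bar)   # left receptor lags -> negative
print(rightward > 0, leftward < 0)
```

The opponent subtraction is what makes the detector direction-selective rather than merely motion-sensitive.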
Retinaldehyde: the aldehyde of vitamin A (retinol), used as the most common chromophore of visual pigments. Also called "retinal."
Retinoid: any chemical derivative of vitamin A or of closely similar chemicals.
Rhabdom: the photoreceptive element of invertebrate eyes, formed by a group of long, thin microvillar rhabdomeres (either fused together or separated), each contributed by a single rhabdomeric photoreceptor.
Rhabdomeric photoreceptor: a type of photoreceptor in which the visual pigments reside in closely packed microvilli, lacking any evidence of a cilium.
Rhodopsin: a visual pigment with a retinal chromophore.
Rod photoreceptor: a vertebrate ciliary photoreceptor with a cylindrical outer segment and with the visual pigment located in stacked membrane disks. Rods are usually involved in scotopic vision.
Saccade: a rapid eye movement, during which vision is temporarily blinded.
Scattering: the process by which a photon is absorbed by an atom or molecule and then rapidly reemitted, generally in a new direction.
Scotopic: refers to dark or dim-light conditions.
Self-screening: the result of absorbance along a long photoreceptor, which tends to broaden the spectral sensitivity and decrease polarization sensitivity.
Sensitivity hypothesis: the concept that photoreceptors should be spectrally matched to the illuminant or to a viewed background.
Sensitizing pigment: a pigment that is closely associated with a photopigment and that transfers energy to it, usually by Förster resonance.
Sighting distance: for a given visual system and set of optical parameters, the maximum distance at which an object can be seen.
Snel's window: when looking directly upward underwater, the 97°-wide region within which the entire above-water scene is compressed by refraction at the air-water interface.
Solar elevation: the number of degrees that the sun is above the horizon.
Solid angle: three-dimensional analogue of angle.
Spatial frequency: a measure of spatial detail equal to the number of grating cycles (i.e., black-and-white stripe pairs) that occupy a single degree of visual space (in units of cycles per degree). Finer spatial details are described by higher spatial frequencies.
Spectral sensitivity: the overall wavelength sensitivity of a photoreceptor, resulting from absorption by visual pigment(s) within the photoreceptor, filtering at any optical level, and the photoreceptor's length and self-screening properties.
Specular reflection: reflection as from a mirror, where the reflected ray leaves the surface at the same angle at which the incident ray arrived. Specular reflection is typical of shiny surfaces.
Spherical aberration: the inability of a lens with a spherical surface to focus all accepted light rays within the same image plane.
Square root law of visual detection: also called the "Rose–de Vries" law. At low light levels the visual signal-to-noise ratio, and thus contrast discrimination, is proportional to the square root of photon catch.
sRGB: the general industry standard for converting spectral radiance and irradiance into the red, green, and blue channels of monitors.
Starburst amacrine cell (SAC): a cell in vertebrate retinas that is sensitive to motion and that in turn stimulates or inhibits direction-selective ganglion cells.
Steradian: three-dimensional analogue of the radian. Defined as the area of the relevant region divided by the square of the distance to it.
Strip retina: a type of retinal array in which the photoreceptors are organized into one or a few parallel rows.
Structural color: a color based not on the absorption of light by pigments but on the wavelength-dependent reflection of light from periodic structures of alternating refractive indices.
Sun compass: an orienting mechanism in which an animal uses the sun's position as a reference to move in a set direction, often with time compensation. If the response is to the moon's position, it is called a moon compass.
Superposition aperture: the aperture of corneal facet lenses through which a superposition compound eye collects and focuses light onto a single rhabdom in the retina. In a larger superposition eye this aperture may consist of several hundred facet lenses.
Tapetum lucidum: a reflective layer behind the retina, typical of nocturnal and deep-sea animals, that allows incident light a second chance for absorption by the photoreceptors. Made of interference reflectors in vertebrates, arachnids, and crustaceans and of air-filled tracheoles in insects.
Target-selective descending neuron (TSDN): a motion-sensitive cell described in dragonflies that responds only to the images of small, moving visual targets.
Taxis: a directional response to a stimulus. In the case of a light stimulus, the response is called phototaxis.
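The steradian definition above (area of the viewed region divided by the square of the distance to it) can be applied, for instance, to the sun's disk; a quick sketch (the solar radius and Earth-sun distance are standard astronomical values, not from the text):

```python
import math

def solid_angle(area, distance):
    """Solid angle (sr) subtended by a small region of given area at a given
    distance, valid when the region is small compared with the distance."""
    return area / distance ** 2

sun_radius = 6.96e8      # m
sun_distance = 1.496e11  # m
omega = solid_angle(math.pi * sun_radius ** 2, sun_distance)
print(f"{omega:.2e}")    # ≈ 6.8e-05 sr, a tiny fraction of the 4*pi-sr sphere
```

Radiance measurements divide flux by exactly this kind of source solid angle, which is why radiance, unlike irradiance, is independent of viewing distance for extended sources.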
Tetrachromatic color vision: a color-vision system based on four classes of color receptors.
Thermal noise: see Dark noise.
Thermal radiation: see Blackbody radiation.
Transducer noise: the visual noise arising from the inability of the photoreceptors to produce an identical electrical response to each absorbed photon.
Triboluminescence: light emitted by the fracturing of certain substances.
Trichromatic color vision: a color-vision system based on three classes of color receptors.
Ultraviolet: the region of the electromagnetic spectrum that has wavelengths (in vacuum) between 0.1 μm and 0.4 μm. At the Earth's surface, there is negligible solar radiation with wavelengths shorter than 0.3 μm.
Veiling light: see Path radiance.
Vestibulo-ocular reflex (VOR): a compensatory eye movement that is controlled by cells that sense mechanical rotation rather than directly by visual input.
Visual fixation: the act of placing the image of an object on a particular part of the retina, usually the acute zone or the fovea. Sometimes called "foveation."
Visual pigment: the molecule produced by the binding of a retinoid to an opsin.
Visual scan: an eye movement that is slower than other movements, made on one axis, and generally (but not always) slow in angular velocity. During visual scans, photoreceptors continually monitor visual input.
Visual streak: a horizontally extended fovea or acute zone that creates a band of high visual-cell density (and high spatial resolution) aligned with the horizon. Common in animals living in flat and relatively featureless habitats, where the horizon is a major remaining feature.
Volume scattering function: a function that describes how much light is scattered into different directions by a scatterer.
Waveguide: a light guide that has a diameter approaching the wavelength range of visible light and that thus propagates light as waveguide "modes," stable patterns of light within the waveguide.
Wavelength-specific behavior: a stereotyped behavior in response to the presence of a specific waveband of light (such as blue).
Weber-Fechner law: a general rule about sensory systems that states that the relative difference between signal and background, rather than the absolute difference, determines its detectability. In vision, this law does not hold at dimmer light levels (see Square root law of visual detection).
Zodiacal light: solar radiation reflected by dust particles in the plane of the solar system. A significant source of light in the moonless night sky.
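The contrast between the Weber-Fechner law and the square-root law can be made concrete: under Weber's law the just-detectable increment ΔI grows in proportion to the background I (so the contrast threshold ΔI/I is constant), whereas in the Rose–de Vries regime it grows only as √I. A small illustrative sketch (the threshold constants are arbitrary assumptions):

```python
import math

def weber_threshold(background, k=0.02):
    """Weber-Fechner: detectable increment proportional to the background."""
    return k * background

def rose_devries_threshold(background, k=0.5):
    """Square-root (Rose-de Vries) law: increment proportional to sqrt(background)."""
    return k * math.sqrt(background)

for i in (100, 10000):
    w, r = weber_threshold(i), rose_devries_threshold(i)
    # Weber contrast threshold (delta I / I) stays constant with background,
    # while the Rose-de Vries contrast threshold falls as light levels rise.
    print(i, round(w / i, 3), round(r / i, 3))
```

This is why contrast discrimination improves with light level in dim conditions but saturates at a fixed Weber fraction once photon noise no longer dominates.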
References Adrian, W. (). Visibility of targets: Model for calculation. Light. Res. Tech. , –. Aho, A.-C., Donner, K., Helenius, S., Larsen, L. O., and T. Reuter (a). Visual performance of the toad (Bufo bufo) at low light levels, retinal ganglion cell responses and prey-catching accuracy. J. Comp. Physiol. A , –. Aho, A.-C., Donner, K., Hydén, C., Larsen, L., O., and T. Reuter (). Low retinal noise in animals with low body temperature allows high visual sensitivity. Nature , –. Aho, A.-C., Donner, K., and T. Reuter (b). Retinal origins of the temperature effect on absolute visual sensitivity in frogs. J. Physiol. , –. Allen, J. J., Mäthger, L. M., Buresch, K. C., Fetchk, T., Gardner, M., and R. T. Hanlon (). Night vision by cuttlefish enables changeable camouflage. J. Exp. Biol. , –. Andersson, S., and T. Amundsen (). Ultraviolet colour vision and ornamentation in bluethroats. Proc. R. Soc. Lond. B , –. Austin, A. D., and A. D. Blest (). The biology of two Australian species of dinopid spider. J. Zool. Lond. , –. Baird, E., Byrne, M. J., Scholtz, C. H., Warrant, E. J., and M. Dacke (). Bearing selection in ball-rolling dung beetles: Is it constant? J. Comp. Physiol. A , –. Baird, E., Kreiss, E., Wcislo, W. T., Warrant, E. J., and M. Dacke (). Nocturnal insects use optic flow for flight control. Biol. Lett. , –. Baldwin, J., and S. Johnsen (). The importance of color in mate choice of the blue crab Callinectes sapidus. J. Exp. Biol. , –. Baldwin, J. L., and S. Johnsen (). The male blue crab, Callinectes sapidus, uses both chromatic and achromatic cues during mate choice. J. Exp. Biol. , –. Bang, B. G., and B. M. Wenzel (). Nasal cavity and olfactory system. In A. S. King and J. McLelland (Eds.), Form and Function in Birds, Vol. , pp. –. Oxford: Academic Press. Barber, V. C., Evans, E. M., and M. F. Land (). The fine structure of the eye of the mollusk Pecten maximus. Z. Zellforsch. , –. Barlow, H. B. (). Retinal noise and absolute threshold. J. Opt. Soc. Am. , –. 
Barlow, H. B. (). What causes trichromacy? A theoretical analysis using comb-filtered spectra. Vision Res. , –. Barlow, H. B., and R. M. Hill (). Selective sensitivity to direction of movement in ganglion cells of the rabbit retina. Science , –. Baylor, D. A., Matthews, G., and K. W. Yau (). Two components of electrical dark noise in toad retinal rod outer segments. J. Physiol. , –. Baylor, E. R. (). Air and water vision of the Atlantic flying fish, Cypselurus heterurus. Nature , –. Bearder, S. K., Nekaris, K.A.I., and D. J. Curtis (). A re-evaluation of the role of vision in the activity and communication of nocturnal primates. Folia Primatol. , –. Bennett, A.T.D., Cuthill, I. C., and K. J. Norris (). Sexual selection and the mismeasure of color. Am. Nat. , –. Bhagavatula, P. S., Claudianos, C., Ibbotson, M. R., and M. V. Srinivasan (). Optic flow cues guide flight in birds. Curr. Biol. , -. Birch, D., and G. H. Jacobs (). Spatial contrast sensitivity in albino and pigmented rats. Vision Res. , –. Blackwell, H. R. (). Contrast thresholds of the human eye. J. Opt. Soc. Am. , –. Blamires, S. J., Lai, C. H., Cheng, R. C., Liao, C. P., Shen, P. S., and I. M. Tso (). Body spot coloration of a nocturnal sit-and-wait predator visually lures prey. Behav. Ecol. , –.
356
References
Blest, A. D., Hardie, R. C., McIntyre, P., and D. S. Williams (). The spectral sensitivities of identified receptors and the function of retinal tiering in the principal eyes of a jumping spider. J. Comp. Physiol. A , –. Blest, A. D., and M. F. Land (). The physiological optics of Dinopis subrufus, a fish lens in a spider. Proc. R. Soc. Lond. B , –. Blest, A. D., McIntyre, P., and M. Carter (). A re-examination of the principal retinae of Phidippus johnsoni and Plexippus validus (Araneae: Salticidae): Implications for optical modelling. J. Comp. Physiol. A , –. Bohren, C. F. (). Clouds in a Glass of Beer: Simple Experiments in Atmospheric Physics. New York: John Wiley & Sons. Bohren, C. F. (). What Light through Yonder Window Breaks? More Experiments in Atmospheric Physics. New York: John Wiley & Sons. Bohren, C. F., and E. E. Clothiaux (). Fundamentals of Atmospheric Radiation. Weinheim: Wiley-VCH. Bohren, C. F., and A. B. Fraser (). Colors of the sky. Phys. Teach. , –. Bohren, C. F., and D. R. Huffman (). Absorption and Scattering of Light by Small Particles. New York: John Wiley & Sons. Bond, D. S., and F. P. Henderson (). The Conquest of Darkness. Alexandria, VA: Defense Documentation Center, . Borst, A., Haag, J., and D. F. Reiff (). Fly motion vision. Annu. Rev. Neurosci. , –. Bowmaker, J. K. (). Evolution of vertebrate visual pigments. Vision Res. , –. Bowmaker, J. K., and E. R. Loew (). Vision in fish. In A. Kaneko and R. H. Masland (Eds.), The Senses: A Comprehensive Reference, pp. –. Oxford: Elsevier. Bowmaker, J. K., and G. R. Martin (). Visual pigments and colour vision in a nocturnal bird, Strix aluco (tawny owl). Vision Res. , –. Bradbury, J. W., and S. L. Vehrencamp (). Principles of Animal Communication (nd ed.). Sunderland, MA: Sinauer Associates. Brady, P., and M. E. Cummings (). Natural history note: Differential response to circularly polarized light by the jewel scarab beetle Chrysina gloriosa. Am. Nat. , –. Brandley, N. C., Speiser, D. 
I., and S. Johnsen (). Review: Eavesdropping on visual secrets. Evol. Ecol. , –. Bridgeman, C. S., and K. U. Smith (). The absolute threshold of vision in cat and man with observations on its relation to the optic cortex. Am. J. Physiol. , –. Briscoe, A. D., and L. Chittka (). The evolution of colour vision in insects. Annu. Rev. Entomol. , –. Britt, L. L., Loew, E. R., and W. N. McFarland (). Visual pigments in the early life stages of Pacific Northwest fishes. J. Exp. Biol. , –. Brooke, M. de L., Hanley, S., and S. B. Laughlin (). The scaling of eye size with body mass in birds. Proc. R. Soc. Lond. B. , –. Buchanan, B. W. (). Low-illumination prey detection by squirrel treefrogs. J. Herpetol. , –. Buddenbrock, W. von, and I. Moller-Racke (). Uber den lichtsinn von Pecten. Pubbl. Staz. Zool. Napoli , –. Burton, B. G., and S. B. Laughlin (). Neural images of pursuit targets in the photoreceptor arrays of male and female houseflies Musca domestica. J. Exp. Biol. , –. Burton, B. G., Tatler, B. W., and S. B. Laughlin (). Variations in photoreceptor response dynamics across the fly retina. J. Neurophysiol. , –. Buschbeck, E. K., Sbita, S. J., and R. C. Morgan (). Scanning behavior by larvae of the predacious diving beetle, Thermonectus marmoratus (Coleoptera: Dysticidae) enlarges visual field prior to prey capture. J. Comp. Physiol. A , –. Busserolles, F. de, Fitzpatrick, J. L., Paxton, J. R., Marshall, N. J., and S. P. Collin (). Eye-size variability in deep-sea lanternfishes (Myctophidae): An ecological and phylogenetic study. PLoS ONE (), e.
References 357
Bybee, S. M., Yuan, F. R., Ramstetter, M. D., Llorente-Bousquets, J., Reed, R. D., Osorio, D., et al. (). UV photoreceptors and UV-yellow wing pigments in Heliconius butterflies allow a color signal to serve both mimicry and intraspecific communication. Am. Nat. , –. Carleton, K. C. (). Visual communication in East African cichlid fishes: Diversity in a phylogenetic context. In F. Ladich, S. P. Collin, P. Moller, and G. B. Kapoor (Eds.), Communication in Fishes, Vol. . Enfield, NH: Science Publishers. Cartwright, B. A., and T. S. Collett (). How honey bees use landmarks to guide their return to a food source. Nature , –. Castenholz, A. (). The eye of Tarsius. In C. Niemitz (Ed.), Biology of Tarsiers, pp. –. Stuttgart: Gustav Fischer Verlag. Caveney, S. (). Cuticle reflectivity and optical activity in scarab beetles: The role of uric acid. Proc. R. Soc. Lond. B , –. Caveney, S., and P. D. McIntyre (). Design of graded-index lenses in the superposition eyes of scarab beetles. Phil. Trans. R. Soc. Lond. B , –. Chapuis, L., and R. Bshary (). Signaling by the cleaner shrimp Periclimenes longicarpus. Anim. Behav. , –. Charles-Dominique, P. (). Ecology and Behaviour of Nocturnal Primates (trans. R. D. Martin). New York: Columbia University Press. Charman, W. N. (). The vertebrate dioptric apparatus. In J. R. Cronly-Dillon and R. L. Gregory (Eds.), Vision and Visual Dysfunction, Vol. , pp. –. Basingstoke: Macmillan. Cheng, K., Collett, T. S., and R. Wehner (). Honeybees learn the colours of landmarks. J. Comp. Physiol. A , –. Chiao, C.-C., Vorobyev, M., Cronin, T. W., and D. Osorio (). Spectral tuning of dichromats to natural scenes. Vision Res. , –. Chiou, T. H., Kleinlogel, S., Cronin, T. W., Caldwell, R., Loeffler, B., Siddiqi, A., and J. Marshall (). Circular polarization vision in a stomatopod crustacean. Curr. Biol. , –. Chiou, T-H., Marshall, N. J., Caldwell, R. L., and T. W. Cronin (a). 
Changes in light reflecting properties of signalling appendages alter mate choice behaviour in a stomatopod crustacean Haptosquilla trispinosa. Mar. Fresh. Behav. Physiol. , –. Chiou, T.-H., Mäthger, L. M., Hanlon, R. T., and T. W. Cronin, (). Spectral and spatial properties of polarized light reflections from the arms of squid (Loligo pealeii) and cuttlefish (Sepia officinalis L.). J. Exp. Biol. , –. Chiou, T-H., Place, A. R., Caldwell, R. L. Marshall, N. J., and T. W. Cronin (b). A novel function for a carotenoid: Astaxanthin used as a polarizer for visual signalling in a mantis shrimp. J. Exp. Biol. , –. Chittka, L. (). Does bee color vision predate the evolution of flower color? Naturwissenschaften, , –. Chittka, L., and R. Menzel (). The evolutionary adaptation of flower colors and the insect pollinators’ color vision. J. Comp. Physiol. A , –. Chuang, C. Y., Yang, E. C., and I. M. Tso (). Deceptive color signaling in the night: A nocturnal predator attracts prey with visual lures. Behav. Ecol. , –. Clarke, R. J., and H. Ikeda (). Luminance and darkness detectors in the olivary and posterior pretectal nuclei and their relationship to the pupillary light reflex in the rat. I. Studies with steady luminance levels. Exp. Brain. Res. , –. Cleary, P., Deichsel, G., and P. Kunze (). The superposition image in the eye of Ephestia kühniella. J. Comp. Physiol. , –. Cohen, A. C., and J. G. Morin (). Sexual morphology, reproduction and the evolution of bioluminescence in Ostracoda. Paleo. Soc. Pap. , –. Collett, T. S., Dillman, E., Giger, A., and R. Wehner (). Visual landmarks and route following in desert ants. J. Comp. Physiol. A , –. Collett, T. S., and M. F. Land (). Visual control of flight behaviour in the hoverfly, Syritta pipiens L. J. Comp. Physiol. , –.
358
References
Collewijn, H. (). Oculomotor reactions in the cuttlefish, Sepia officinalis. J. Exp. Biol. , –. Collin, S. P. (). Behavioural ecology and retinal cell topography. In S. N. Archer, M.B.A. Djamgoz, E. Loew, J. C. Partridge, and S. Vallerga (Eds.), Adaptive Mechanisms in the Ecology of Vision, pp. –. Dordrecht: Kluwer Academic Publishers. Collin, S. P., Davies, W. L., Hart, N. S., and D. M. Hunt (). The evolution of early vertebrate photoreceptors. Phil. Trans. R. Soc. B , –. Collin, S. P., Hoskins, R. V., and J. C. Partridge (). Tubular eyes of deep-sea fishes: A comparative study of retinal topography. Brain Behav. Evol. , –. Collin, S. P., and J. D. Pettigrew (). Retinal topography in reef teleosts. I and II. Brain Behav. Evol. , –. Collins, C. E., Hendrickson, A., and J. H. Kaas (). Overview of the visual system of Tarsius. Anat. Rec. A A, –. Coppens, J. E., Franssen, L., and T.J.T.P. Van den Berg (). Wavelength dependence of intraocular stray light. Exp. Eye Res. , –. Cott, H. B. (). Adaptive Coloration in Animals. London: Methuen & Co. Cronin, T. W., Caldwell, R. L., and N. J. Marshall (). Tunable colour vision in a mantis shrimp. Nature, , –. Cronin, T. W., and T. M. Frank (). A short-wavelength photoreceptor class in a deep-sea shrimp. Proc. R. Soc. Lond. B , –. Cronin, T. W., and J. Marshall (). Patterns and properties of polarized light in air and water. Phil. Trans. R. Soc. B , –. Cronin, T. W., and N. J. Marshall (). A retina with at least ten spectral types of photoreceptors in a mantis shrimp. Nature, , –. Cronin, T. W., Marshall, N. J., and R. L. Caldwell (). Spectral tuning and the visual ecology of mantis shrimps. Phil. Trans. R. Soc. Lond. B , –. Cronin, T. W., Marshall, N. J., Caldwell, R. L., and N. Shashar (). Specialization of retinal function in the compound eyes of mantis shrimps. Vision Res. , –. Cronin, T. W., Marshall, N. J., and M. F. Land (). Optokinesis in gonodactyloid mantis shrimps (Crustacea; Stomatopoda; Gonodactylidae). J. Comp. 
Physiol. A , –. Cronin, T. W., Nair, J. N., and R. D. Doyle (). Ocular tracking of rapidly moving visual targets by stomatopod crustaceans. J. Exp. Biol. , –. Cronin, T. W., and N. Shashar (). The linearly polarized light field in clear, tropical marine waters: Spatial and temporal variation of light intensity, degree of polarization and e-vector angle. J. Exp. Biol. , –. Cronin, T. W., Shashar, N., Caldwell, R. L., Marshall, J., Cheroske, A. G., and T. H. Chiou (). Polarization vision and its role in biological signaling. Integr. Comp. Biol. , –. Cummings, M. E., Bernal, X. E., Reynaga, R., Rand, A. S., and M. J. Ryan (). Visual sensitivity to a conspicuous male cue varies by reproductive state in Physalaemus pustulosus females. J. Exp. Biol. , –. Cummings, M. E., Rosenthal, G. G., and M. J. Ryan (). A private ultraviolet channel in visual communication. Proc. R. Soc. Lond. B , –. Curcio, C. A., Sloan, K. R., Kalina, R. E., and A. E. Hendrickson (). Human photoreceptor topography. J. Comp. Neurol. , –. Cuthill, I. C., Partridge, J. C., Bennett, A.T.D., Church, S. C., Hart, N. S., and S. Hunt (). Ultraviolet vision in birds. Behavior , –. Dacey, D. M. (). The mosaic of midget ganglion cells in the human retina. J. Neurosci. , –. Dacke, M., Baird, E., Byrne, M., Scholtz, C., and E. J. Warrant (a). Dung beetles use the Milky Way for orientation. Curr. Biol. , –. Dacke, M., Byrne, M., Smolka, J., Warrant, E., and E. Baird (b). Dung beetles ignore landmarks for straight-line orientation. J. Comp. Physiol. A , –. Dacke, M., Nilsson, D. E., Scholtz, C. H., Byrne, M., and E. J. Warrant (a). Insect orientation to polarized moonlight. Nature , .
References 359
Dacke, M., Nilsson, D. E., Warrant, E. J., Blest, A. D., Land, M. F., and D. C. O’Carroll (). Built-in polarizers form part of a compass organ in spiders. Nature , –. Dacke, M., Nordström, P., and C. H. Scholtz (b). Twilight orientation to polarised light in the crepuscular dung beetle Scarabaeus zambesianus. J. Exp. Biol. , –. Demb, J. B. (). Cellular mechanisms for direction selectivity in the retina. Neuron , –. Denton, E. J. (). The responses of the pupil of Gekko gekko to external light stimulus. J. Gen. Physiol. , –. Denton, E. J. (). On the organization of reflecting surfaces in some marine animals. Phil. Trans. R. Soc. Lond. B , –. Denton, E. J. (). Reflectors in fishes. Sci. Am. , –. Denton, E. J., Gilpin-Brown, J. B., and P. G. Wright (). The angular distrbution of the light produced by some mesopelagic fish in relation to their camouflage. Proc. R. Soc. Lond. B , –. Denton, E. J., and J.A.C. Nicol (a). Reflexion of light by external surfaces of the herring Clupea harengus. J. Mar. Biol. Assoc. UK , –. Denton, E. J., and J.A.C. Nicol (b). Studies on reflexion of light from silvery surfaces of fishes, with special reference to the bleak Alburnus alburnus. J. Mar. Biol. Assoc. UK , –. Denton , E. J., and F. J. Warren (). The photosensitive pigments in the retinae of deep-sea fish. J. Mar. Biol. Assoc. UK , –. Detto, T. (). The fiddler crab Uca mjoebergi uses colour vision in mate choice. Proc. R. Soc. Lond. B , –. De Vries, H. (). The quantum character of light and its bearing upon threshold of vision, the differential sensitivity and visual acuity of the eye. Physica , –. De Valois, S., Morgan, H., and D. M. Snodderly (). Psychophysical studies of monkey Vision—III. Spatial luminance contrast sensitivity tests of macaque and human observers. Vision Res. , –. Dogiel, A. S. (). Ueber die nervösen Elemente in der Retina des Menschen. Erste Mittheilung. Arch. Mikrosk. Anat. , –. Douglas, J. M., Cronin, T. W., Chiou, T.-H., and N. J. Dominy (). 
Light habitats and the role of polarized iridescence in the sensory ecology of neotropical nymphalid butterflies (Lepidoptera: Nymphalidae). J. Exp. Biol. , –. Douglas, R. H. (). The ecology of teleost fish visual pigments: A good example of sensory adaptation to the environment? In F. G. Barth and A. Schmid (Eds.), Ecology of Sensing, pp. –. Berlin: Springer-Verlag. Douglas, R. H., and C. W. Hawryshyn (). Behavioral studies of fish vision: An analysis of visual capabilities. In R. H. Douglas and M. B. A. Djamgoz (Eds.), The Visual System of Fish, pp. –. New York: Chapman & Hall. Douglas, R. H., Hunt, D. M., and J. K. Bowmaker (). Spectral sensitivity tuning in the deep-sea. In S. P. Collin and N. J. Marshall (Eds.), Sensory Processing in Aquatic Environments, pp. –. New York: Springer. Douglas, R. H., and N. J. Marshall (). A review of vertebrate and invertebrate ocular filters. In S. N. Archer, M.B.A. Djamgoz, E. R. Lowe, J. C. Partridge, and S. Vallerga (Eds.), Adaptive Mechanisms in the Ecology of Vision, pp. –. London: Kluwer Academic Publishers. Douglas, R. H., Mullineaux, C. W., and J. C. Partridge (). Long-wave sensitivity in deep-sea stomiid dragonfish with far-red bioluminescence: Evidence for a dietary origin of the chlorophyll-derived retinal photosensitizer of Malacosteus niger. Phil. Trans. R. Soc. Lond. B , –. Douglas, R. H., and J. C. Partridge (). On the visual pigments of deep-sea fish. J. Fish. Biol. , –. Douglas, R. H., Partridge, J. C., Dulai, K., Hunt, D., Mullineaux, C. W., Tauber, A. Y., and P. H. Hynninen (). Dragon fish see using chlorophyll. Nature , –. Doujak, F. E. (). Can a shore crab see a star? J. Exp. Biol. , –.
Dubs, A., Laughlin, S. B., and M. V. Srinivasan (). Single photon signals in fly photoreceptors and first order interneurons at behavioural threshold. J. Physiol. , –. Duntley, S. Q. (). The Visibility of Submerged Objects. Final Report to Office of Naval Research. Duntley, S. Q. (). Light in the sea. J. Opt. Soc. Am. , –. Dyer, A. G., Boyd-Gerny, S., McLoughlin, S., Rosa, M.G.P., Simonov, V., and B.B.M. Wong (). Parallel evolution of angiosperm colour signals: Common evolutionary pressures linked to hymenopteran vision. Proc. R. Soc. Lond. B , –. Easter, S. S., Johns, P. R., and D. Heckenlively (). Horizontal compensatory eye movements in goldfish (Carassius auratus). I. The normal animal. J. Comp. Physiol. , –. Eckert, M. P., and J. Zeil (). Towards an ecology of motion vision. In J. M. Zanker and J. Zeil (Eds.), Motion Vision: Computational, Neural, and Ecological Constraints, pp. –. Berlin: Springer. Eckles, M. A., Roubik, D. W., and J. C. Nieh (). A stingless bee can use visual odometry to estimate both height and distance. J. Exp. Biol. , –. Egelhaaf, M. (). On the neuronal basis of figure-ground discrimination by relative motion in the visual system of the fly. Biol. Cybern. , –. Egelhaaf, M., Hausen, K., Reichardt, W., and C. Wehrhahn (). Visual course control in flies relies on neuronal computation of object and background motion. Trends Neurosci. , –. Emlen, S. T. (). Celestial rotation: Its importance in the development of migratory orientation. Science , –. Enoch, J. M., and F. L. Tobey (). Waveguide properties of retinal receptors: Techniques and observations. In J. M. Enoch and F. L. Tobey (Eds.), Vertebrate Photoreceptor Optics, pp. –. Berlin, Heidelberg, New York: Springer. Ewert, J.-P. (). Motion perception shapes the visual world of amphibians. In F. R. Prete (Ed.), Complex Worlds from Simpler Nervous Systems, pp. –. Cambridge, MA: MIT Press. Exner, S. (). Die Physiologie der facettirten Augen von Krebsen und Insecten. 
Leipzig, Vienna: Franz Deuticke. English translation: Hardie, R. C. (). The Physiology of the Compound Eyes of Insects and Crustaceans. Berlin, Heidelberg, New York: Springer. Fasick, J. I., and P. R. Robinson (). Spectral-tuning mechanisms of marine mammal rhodopsins and correlations with foraging depth. Visual Neurosci. , –. Fenk, L. M., and A. Schmid (). The orientation-dependent visual spatial cut-off frequency in a spider. J. Exp. Biol. , –. Fenk, L. M., and A. Schmid (). Flicker-induced eye movements and the behavioural temporal cut-off frequency in a nocturnal spider. J. Exp. Biol. , –. Fernald, R. D. (). Eye movements in the African cichlid fish, Haplochromis burtoni. J. Comp. Physiol. A , –. Fineran, B. A., and J.A.C. Nicol (). Studies on the photoreceptors of Anchoa mitchilli and A. hepsetus with particular reference to the cones. Phil Trans. R. Soc. B , –. Fite, K. V. (). Anatomical and behavioural correlates of visual acuity in the great horned owl. Vision Res. , –. Fleishman, L. J., Loew, E. R., and M. J. Whiting (). High sensitivity to short wavelengths in a lizard and implications for understanding the evolution of visual systems in lizards. Proc. R. Soc. Lond. B , –. Forward, R. B. Jr. (). Larval biology of the crab Rhithropanopeus harrisii (Gould): A synthesis. Biol. Bull. , –. Forward, R. B. Jr., Bouria, M. H., Lessios, N. N., and J. H. Cohen (). Orientation to shorelines by the supratidal amphipod Talorchestia longicornis: Wavelength specific behavior during sun compass orientation. J. Exp. Mar. Biol. Ecol. , –. Forward, R. B. Jr., Cronin, T. W., and D. E. Stearns (). Control of diel vertical migration: Photoresponses of a larval crustacean. Limnol. Oceanogr. , –.
Fox, H. M., and G. Vevers (). The Nature of Animal Colours. London: Sidgwick & Jackson. Franceschini, N., Hardie, R., Ribi, W., and K. Kirschfeld (). Sexual dimorphism in a photoreceptor. Nature , –. Frank, T. M., Johnsen, S., and T. W. Cronin (). Light and vision in the deep-sea benthos. II. Vision in deep-sea crustaceans. J. Exp. Biol. , –. Frank, T. M., and E. A. Widder (). UV light in the deep-sea: In situ measurements of downwelling irradiance in relation to the visual threshold sensitivity of UV-sensitive crustaceans. Mar. Fresh. Behav. Physiol. , –. Frederiksen, R., Wcislo, W. T., and E. J. Warrant (). Visual reliability and information rate in the retina of a nocturnal bee. Curr. Biol., –. Fried, S. I., and R. H. Masland (). Image processing: How the retina detects the direction of image motion. Curr. Biol. , R–R. Fritsches, K., Brill, R., and E. J. Warrant (). Warm eyes provide superior vision in swordfishes. Curr. Biol. , –. Fritsches, K. A., and N. J. Marshall (). Independent and conjugate eye movements during optokinesis in teleost fish. J. Exp. Biol. , –. Fuortes, M.G.F., and S. Yeandle (). Probability of occurrence of discrete potential waves in the eye of Limulus. J. Gen. Physiol. –. Gagnon, Y. L., Söderberg, B., and R.H.H. Kröger (). Optical advantages and function of multifocal spherical fish lenses. J. Opt. Soc. Am. A , –. Garamszegi, L. Z., Møller, A. P., and J. Erritzøe (). Coevolving avian eye size and brain size in relation to prey capture and nocturnality. Proc. R. Soc. Lond. B , –. Garm, A., Oskarsson, M., and D.-E. Nilsson (). Box jellyfish use terrestrial visual cues for navigation. Curr. Biol. , –. Garstang, R. H. (). Mount Wilson Observatory: The sad story of light pollution. Observatory , –. Gibson, J. J. (). The Perception of the Visual World. Boston: Houghton Mifflin. Gilbert, C., and N. J. Strausfeld (). The functional organization of male-specific visual neurons in flies. J. Comp. Physiol. A , –. 
Gislén, A., Dacke, M., Kröger, R.H.H., Abrahamson, M., Nilsson, D. E., and E. J. Warrant (). Superior vision under water in a human population of sea-gypsies. Curr. Biol. , –. Gislén, A., Warrant, E. J., Dacke, M., and R.H.H. Kröger (). Visual training improves underwater vision in children. Vision Res. , –. Glantz, R. M. (). Polarization sensitivity in crayfish lamina monopolar neurons. J. Comp. Physiol. A , –. Goldstein, D. H. (). Polarization properties of Scarabaeidae. Appl. Opt. , –. Gomez, D., Richardson, C., Lengagne, T., Plenet, S., Joly, P., Léna, J.-P., and M. Théry (). The role of nocturnal vision in mate choice: Females prefer conspicuous males in the European tree frog (Hyla arborea). Proc. R. Soc. Lond. B , –. Gonzalez-Bellido, P. T., Peng, H., Yanga, J., Georgopoulosc, A. P., and R. M. Olberg (). Eight pairs of descending visual neurons in the dragonfly give wing motor centers accurate population vector of prey direction. Proc. Natl. Acad. Sci. USA , –. Gonzalez-Bellido, P. T., Wardill, T. J., and M. Juusola (). Compound eyes and retinal information processing in miniature dipteran species match their specific ecological demands. Proc. Natl. Acad. Sci. USA , –. Govardovskii, V. I. (). On the role of oil drops in colour vision. Vision Res. , –. Govardovskii, V. I., Fyhrquist, N., Reuter, T., Kuzmin, D. G., and K. Donner (). In search of the visual pigment template. Vis. Neurosci. , –. Graham, J. B., and R. H. Rosenblatt (). Unique adaptation in an intertidal fish. Science , –. Graham, P., Fouria, K., and T. S. Collett (). The influence of beacon-aiming on the routes of wood ants. J. Exp. Biol. , –.
Greiner, B. (). Visual adaptations in the night-active wasp Apoica pallens. J. Comp. Neurol. , –. Greiner, B., Ribi, W. A., and E. J. Warrant (a). Retinal and optical adaptations for nocturnal vision in the halictid bee Megalopta genalis. Cell Tissue Res. , –. Greiner, B., Ribi, W. A., and E. J. Warrant (b). Neuronal organisation in the first optic ganglion of the nocturnal bee Megalopta genalis. Cell Tissue Res. , –. Greiner, B., Ribi, W. A., and E. J. Warrant (). A neural network to improve dim-light vision? Dendritic fields of first-order interneurons in the nocturnal bee Megalopta genalis. Cell Tissue Res. , –. Gronenberg, W., and N. J. Strausfeld (). Descending pathways connecting the male-specific visual system of flies to the neck and flight motor. J. Comp. Physiol. A , –. Gunter, R. (). The absolute threshold for vision in the cat. J. Physiol. , –. Haddock, S.H.D., and J. F. Case (). Bioluminescence spectra of shallow and deep-sea gelatinous zooplankton: Ctenophores, medusae and siphonophores. Mar. Biol. , –. Haddock, S.H.D., Dunn, C. W., Pugh, P. R., and C. E. Schnitzler (). Bioluminescent and red-fluorescent lures in a deep-sea siphonophore. Science , . Haddock, S.H.D., Moline, M. A., and J. F. Case (). Bioluminescence in the sea. Annu. Rev. Mar. Sci. , –. Hall, M. I., and C. F. Ross (). Eye shape and activity in birds. J. Zool. , –. Hamdorf, K., Hochstrate, P., Höglund, G., Moser, M., Sperber, S., and P. Schlecht (). Ultraviolet sensitizing pigment in blowfly photoreceptors R–; probable nature and binding sites. J. Comp. Physiol. A , –. Hanke, F. D., and G. Dehnhardt (). Aerial visual acuity in harbor seals (Phoca vitulina) as a function of luminance. J. Comp. Physiol. A , –. Hanke, F. D., Kröger, R.H.H., Siebert, U., and G. Dehnhardt (). Multifocal lenses in a monochromat: The harbour seal. J. Exp. Biol. , –. Hanlon, R. T., and J. B. Messenger (). Cephalopod Behaviour. Cambridge: Cambridge University Press. Hanlon, R. T., Naud, M. J., Forsythe, J. 
W., Hall, K., Watson, A. C., and J. McKechnie (). Adaptable night camouflage by cuttlefish. Am. Nat. , –. Hardy, A. C. (). The Open Sea. London: Collins. Harper, R. D., and J. F. Case (). Disruptive counterillumination and its anti-predatory value in the plainfish midshipman Porichthys notatus. Mar. Biol. , –. Hart, N. S. (). The visual ecology of avian photoreceptors. Prog. Ret. Eye. Res. , –. Hart, N., Bailes, H., Vorobyev, M., Marshall, N. J., and S. Collin (). Visual ecology of the Australian lungfish (Neoceratodus forsteri). BMC Ecol. , . Hart, N. S., and M. Vorobyev (). Modelling oil droplet absorption spectra and spectral sensitivities of bird cone photoreceptors. J. Comp. Physiol. A , –. Hartline, H. K. (). The discharge of impulses in the optic nerve of Pecten in response to illumination of the eye. J. Cell. Comp. Physiol. , –. Hastings, J. W., and J. G. Morin (). Bioluminescence. In C. L. Prosser (Ed.), Natural and Integrative Animal Physiology, pp. -. New York: Wiley-Liss. Hateren, J. H. van (). Photoreceptor optics, theory and practice. In D. G. Stavenga and R. C. Hardie (Eds.), Facets of Vision, pp. –. Berlin, Heidelberg, New York: Springer. Hateren, J. H. van (). Spatiotemporal contrast sensitivity of early vision. Vision Res. , –. Hateren, J. H. van, Hardie, R. C., Rudolph, A., Laughlin, S. B., and D. G. Stavenga (). The bright zone, a specialized dorsal eye region in the male blowfly Chrysomyia megacephala. J. Comp. Physiol. A , –. Hateren, J. H. van, and D. E. Nilsson (). Butterfly optics exceed the theoretical limits of conventional apposition eyes. Biol. Cybern. , –. Hattar, S., Liao, H. W., Takao, M., Berson, D. M., and K. W. Yau (). Melanopsin-containing retinal ganglion cells: Architecture, projections, and intrinsic photosensitivity. Science : –.
Hausen, K. (). Die Brechungsindices im Kristallkegel der Mehlmotte Ephestia kühniella. J. Comp. Physiol. , –. Hausen, K. (). The lobula-complex of the fly: Structure, function and significance in visual behavior. In M. A. Ali (Ed.), Photoreception and Vision in Invertebrates, pp. –. New York: Plenum Press. Hausmann, F., Arnold, K. E., Marshall, N. J., and I.P.F. Owens (). Ultraviolet signals in birds are special. Proc. R. Soc. Lond. B , –. Hawryshyn, C. W. (). Polarization vision in fish. Am. Sci. , –. Hawryshyn, C. W., Moyer, H. D., Allison, W. T., Haimberger, T. J., and W. N. McFarland (). Multidimensional polarization sensitivity in damselfishes. J. Comp. Physiol. A , –. Hecht, S., and M. H. Pirenne (). The sensibility of the nocturnal long-eared owl in the spectrum. J. Gen. Physiol. , –. Hecht, S., Shlaer, S., and M. H. Pirenne (). Energy, quanta, and vision. J. Gen. Physiol. , –. Heimonen, K., Immonen, E. V., Frolov, R., Salmela, I., Juusola, M., Vähäsöyrinki, M., and M. Weckström (). Signal coding in cockroach photoreceptors is tuned to dim environments. J. Neurophysiol. , –. Heimonen, K., Salmela, I., Kontiokari, P., and M. Weckström (). Large functional variability in cockroach photoreceptors: Optimization to low light levels. J. Neurosci. , –. Heine, L. (). Über die Akkommodation des Schildkrötenauges (Emys europaea). Zentralblatt Physiol. , –. Hemmi, J. M., Marshall, J., Pix, W., Vorobyev, M., and J. Zeil (). The variable colours of the fiddler crab Uca vomeris and their relation to background and predation. J. Exp. Biol. , –. Henderson, S. R., Reuss, H., and R. C. Hardie (). Single photon responses in Drosophila photoreceptors and their regulation by Ca2+. J. Physiol. , –. Herreros de Tejada, P., Munoz Tedo, C., and C. Carmen (). Behavioral estimates of absolute visual threshold in mice. Vis. Neurosci. , –. Herring, P. J. (). Bioluminescence in decapod crustacea. J. Mar. Biol. Assoc. UK , –. Herring, P. J. (). Bioluminescence in the crustacea. J. 
Crustac. Biol. , –. Herring, P. J. (). Systematic distribution of bioluminescence in living organisms. J. Biolum. Chemilum. , –. Herring, P. J., and J. G. Morin (). Bioluminescence in fishes. In P. J. Herring (Ed.), Bioluminescence in Action, pp. –. New York: Academic Press. Heyers, D., Manns, M., Luksch, H., Güntürkün, O., and H. Mouritsen (). A visual pathway links brain structures active during magnetic compass orientation in migratory birds. PLoS ONE (), e. Hinton, H. E. (). Possible significance of red patches of female crab-spider, Misumena vatia. J. Zool. , –. Hoeppe, G. (). Why the Sky Is Blue: Discovering the Color of Life. Princeton, NJ: Princeton University Press. Hölldobler, B. (). Canopy orientation: A new kind of orientation in ants. Science , –. Homberg, U., Heinze, S., Pfeiffer, K., Kinoshita, M., and B. el Jundi (). Central neural coding of sky polarization in insects. Phil. Trans. R. Soc. B , –. Hornstein, E. P., O’Carroll, D. C., Anderson, J. C., and S. B. Laughlin (). Sexual dimorphism matches photoreceptor performance to behavioural requirements. Proc. R. Soc. Lond. B , –. Horridge, G. A. (). The separation of visual axes in apposition compound eyes. Phil. Trans. R. Soc. Lond. B , –. Horváth, G., Kriska, G., Malik, P., and B. Robertson (). Polarized light pollution: A new kind of ecological photopollution. Front. Ecol. Environ. , –. Horváth, G., and D. Varjú (). Polarized Light in Animal Vision. Berlin, Heidelberg: Springer-Verlag.
How, M. J., Pignatelli, V., Temple, S. E., Marshall, N. J., and J. M. Hemmi (). High e-vector acuity in the polarisation vision system of the fiddler crab Uca vomeris. J. Exp. Biol. , –. Howland, H. C., Merola, S., and J. R. Basarab (). The allometry and scaling of the size of vertebrate eyes. Vision Res. , –. Hubel, D. H. (). Eye, Brain and Vision. Scientific American Library, No. . New York: W. H. Freeman. Hughes, A. (). The topography of vision in mammals of contrasting life style: Comparative optics and retinal organisation. In F. Crescitelli (Ed.), Handbook of Sensory Physiology, Vol. VII/, pp. –. Berlin, Heidelberg, New York: Springer. Hughes, A., and H. Wässle (). An estimate of image quality in the rat eye. Invest. Ophthalmol. Vis. Sci. , –. Hulburt, E. O. (). Explanation of the brightness and color of the sky, particularly the twilight sky. J. Opt. Soc. Am. A , –. Hurley, A. C., Lange, G. D., and P. H. Hartline (). The adjustable “pin-hole camera” eye of Nautilus. J. Exp. Zool. , –. Huxley, T. H. (/). Collected Essays, Vol. (Cambridge Library Collection—Philosophy). Cambridge: Cambridge University Press. Ivanoff, A., and T. H. Waterman (). Elliptical polarization of submarine illumination. J. Mar. Res. , –. Jacobs, G. H. (). Visual capacities of the owl monkey (Aotus trivirgatus). II. Spatial contrast sensitivity. Vision Res. , –. Jacobs, G. H. (). Comparative Color Vision. New York: Academic Press. Jacobs, G. H., Deegan, J. F., Neitz, J., Crognale, M. A., and M. Neitz (). Photopigments and color vision in the nocturnal monkey, Aotus. Vision Res. , –. Jacobs, G. H., Neitz, M., and J. Neitz (). Mutations in S-cone pigment genes and the absence of colour vision in two species of nocturnal primate. Proc. R. Soc. Lond. B , –. Jerlov, N. G. (). Marine Optics. Amsterdam: Elsevier. Jinks, R. N., Markley, T. L., Taylor, E. E., Perovich, G., Dittel, A. I., Epifanio, C. E., and T. W. Cronin (). Adaptive visual metamorphosis in a deep-sea hydrothermal vent crab. 
Nature , –. Johnsen, S. (). Hidden in plain sight: The ecology and physiology of organismal transparency. Biol. Bull. , –. Johnsen, S. (). Cryptic and conspicuous coloration in the pelagic environment. Proc. R. Soc. Lond. B , –. Johnsen, S. (). Lifting the cloak of invisibility: The effects of changing optical conditions on pelagic crypsis. Integr. Comp. Biol. , –. Johnsen, S. (). The red and the black: Bioluminescence and the color of animals in the deep sea. Integr. Comp. Biol. , –. Johnsen, S. (). Does new technology inspire new directions? Examples drawn from pelagic visual ecology. Integr. Comp. Biol. , –. Johnsen, S. (). The Optics of Life. Princeton, NJ: Princeton University Press. Johnsen, S., Frank, T. M., Haddock, S.H.D., Widder, E. A., and C. G. Messing (). Light and vision in the deep-sea benthos. I. Bioluminescence at – m depth in the Bahamian Islands. J. Exp. Biol. , –. Johnsen, S., Kelber, A., Warrant, E. J., Sweeney, A. M., Lee, R. H. Jr., and J. Hernández-Andrés (). Crepuscular and nocturnal illumination and its effects on color perception by the nocturnal hawkmoth Deilephila elpenor. J. Exp. Biol. , –. Johnsen, S., Marshall, N. J., and E. A. Widder (). Polarization sensitivity as a contrast enhancer in pelagic predators: Lessons from in situ polarization imaging of transparent zooplankton. Phil. Trans. R. Soc. Lond. B , –. Johnsen, S., and H. M. Sosik (). Cryptic coloration and mirrored sides as camouflage strategies in near-surface pelagic habitats: Implications for foraging and predator avoidance. Limnol. Oceanogr. , –.
Johnsen, S., and E. A. Widder (). The physical basis of transparency in biological tissue: Ultrastructure and the minimization of light scattering. J. Theor. Biol. , –. Johnsen, S., and E. A. Widder (). Ultraviolet absorption in transparent zooplankton and its implications for depth distribution and visual predation. Mar. Biol. , –. Johnsen, S., Widder, E. A., and C. D. Mobley (). Propagation and perception of bioluminescence: Factors affecting the success of counterillumination as a cryptic strategy. Biol. Bull. , –. Jones, B. W., and M. K. Nishiguchi (). Counterillumination in the Hawaiian bobtail squid, Euprymna scolopes Berry (Mollusca: Cephalopoda). Mar. Biol. , –. Kamermans, M., and C. Hawryshyn (). Teleost polarization vision: How might it work and what might it be good for. Phil Trans. R. Soc. B , –. Katz, B., and B. Minke (). Phospholipase C-mediated suppression of dark noise enables single-photon detection in Drosophila photoreceptors. J. Neurosci. : –. Katzir, G., Schectman, E., Carmi, N., and D. Weihs (). Head stabilization in herons. J. Comp. Physiol. A , –. Kelber, A. (). Why “false” colours are seen by butterflies. Nature , . Kelber, A. (). Invertebrate colour vision. In E. Warrant and D.-E. Nilsson (Eds.), Invertebrate Vision, pp. –. Cambridge: Cambridge University Press. Kelber, A., Balkenius, A., and E. J. Warrant (). Scotopic colour vision in nocturnal hawkmoths. Nature , –. Kelber, A., and O. Lind (). Limits of colour vision in dim light. Ophthalmol. Physiol. Opt. , –. Kelber, A., and D. Osorio (). From spectral information to animal colour vision: Experiments and concepts. Proc. R. Soc. Lond. B , –. Kelber, A., and L.S.V. Roth (). Nocturnal colour vision—not as rare as we might think. J. Exp. Biol. , –. Kelber, A., Thunell, C., and K. Arikawa (). Polarisation-dependent colour vision in Papilio butterflies. J. Exp. Biol. , –. Kelber, A., Vorobyev, M., and D. Osorio (). Animal colour vision—behavioural tests and physiological concepts. Biol. Rev. 
, –. Kidd, R., Ardini, J., and A. Anton (). Evolution of the modern photon. Am. J. Phys. , –. Killinger, D. K., Churnside, J. H., and L. S. Rothman (). Atmospheric optics. In M. Bass, E. W. Van Strylan, D. R. Williams, and W. L. Wolfe (Eds.), Handbook of Optics II, pp. .–. New York: McGraw-Hill. Kinoshita, M., Shimada, N., and K. Arikawa (). Colour vision of the foraging swallowtail butterfly Papilio xuthus. J. Exp. Biol. , –. Kinoshita, S. (). Structural Colors in the Realm of Nature. Hackensack, NJ: World Scientific Publishing. Kinoshita, S., and S. Yoshioka (). Structural colors in nature: The role of regularity and irregularity in the structure. Chem. Phys. Chem. , –. Kirchner, W. H., and M. V. Srinivasan (). Freely flying honeybees use image motion to estimate object distance. Naturwissenschaften , –. Kirk, E. C. (). Comparative morphology of the eye in primates. Anat. Rec. A A, –. Kirk, E. C., and R. F. Kay (). The evolution of high visual acuity in the Anthropoidea. In C. F. Ross and R. F. Kay (Eds.), Anthropoid Origins: New Visions, pp. –. New York: Kluwer Academic/Plenum Publishers. Kirschfeld, K. (). Die projektion der optischen umwelt auf das raster der rhabdomere im komplexauge von Musca. Exp. Brain Res. , –. Kirschfeld, K. (). The visual system of Musca: Studies on optics, structure and function. In R. Wehner (Ed.), Information Processing in the Visual Systems of Arthropods, pp. –. Berlin, Heidelberg, New York: Springer-Verlag. Kirschfeld, K. (). The absolute sensitivity of lens and compound eyes. Z. Naturforsch. C, –.
Kirschfeld, K., Franceschini, N., and B. Minke (). Evidence for a sensitizing pigment in fly photoreceptors. Nature , –. Kirschfeld, K., and A. W. Snyder (). Waveguide mode effects, birefringence and dichroism in fly photoreceptors. In A. W. Snyder and R. Menzel (Eds.), Photoreceptor Optics, pp. –. Heidelberg, New York: Springer-Verlag. Kleinlogel, S., and N. J. Marshall (). Ultraviolet polarisation sensitivity in the stomatopod crustacean Odontodactylus scyllarus. J. Comp. Physiol. A , –. Kondrashev, S. L. (). Long-wave sensitivity in the masked greenling (Hexagrammos octogrammus), a shallow-water marine fish. Vision Res. , –. Können, G. P. (). Polarized Light in Nature. Cambridge: Cambridge University Press. Korringa, P. (). Relations between the moon and periodicity in the breeding of marine animals. Ecol. Monogr. , . Koshitaka, H., Kinoshita, M., Vorobyev, M., and K. Arikawa (). Tetrachromacy in a butterfly that has eight varieties of spectral receptors. Proc. R. Soc. Lond. B , –. Kral, K. (). Binocular vision and distance estimation. In F. R. Prete, H. Wells, P. H. Wells, and L. E. Hurd (Eds.), The Praying Mantids: Research Perspectives, pp. –. Baltimore: Johns Hopkins University Press. Kreysing, M., Pusch, R., Haverkate, D., Landsberger, M., Engelmann, J., Ruiter, J., et al. (). Photonic crystal light collectors in fish retina improve vision in turbid water. Science , –. Kröger, R. H. H., Campbell, M. C. W., Fernald, R. D., and H. J. Wagner (). Multifocal lenses compensate for chromatic defocus in vertebrate eyes. J. Comp. Physiol. A , –. Kunze, P. (). Eye glow in the moth and superposition theory. Nature , –. Kunze, P. (). Comparative studies of arthropod superposition eyes. Z. Vergl. Physiol. , –. Kunze, P., and K. Hausen (). Inhomogeneous refractive index in the crystalline cone of a moth eye. Nature , –. Labhart, T., and E. P. Meyer (). 
Detectors for polarized skylight in insects: A survey of ommatidial specializations in the dorsal rim area of the compound eye. Micro. Res. Tech. , –. Labhart, T., and E. P. Meyer (). Neural mechanisms in insect navigation: Polarization compass and odometer. Curr. Opin. Neurobiol. , –. Labhart, T., and D. E. Nilsson (). The dorsal eye of the dragonfly Sympetrum: Specializations for prey detection against the blue sky. J. Comp. Physiol. A. , –. Land, M. F. (). Image formation by a concave reflector in the eye of the scallop, Pecten maximus. J. Physiol. , –. Land, M. F. (a). A multilayer interference reflector in the eye of the scallop Pecten maximus. J. Exp. Biol. , –. Land, M. F. (b). Activity in the optic nerve of Pecten maximus in response to changes in light intensity and to pattern and movement in the optical environment. J. Exp. Biol. , –. Land, M. F. (). Functional aspects of the optical and retinal organization of the mollusc eye. Symp. Zool. Soc. Lond. , –. Land, M. F. (). Superposition images are formed by reflection in the eyes of some oceanic decapod crustacea. Nature , –. Land, M. F. (). The optical mechanism of the eye of Limulus. Nature , –. Land, M. F. (). Optics and vision in invertebrates. In H. Autrum (Ed.), Handbook of Sensory Physiology, Vol. VII/B, pp. –. Berlin, Heidelberg, New York: Springer. Land, M. F. (). The resolving power of diurnal superposition eyes measured with an ophthalmoscope. J. Comp. Physiol. A. , –. Land, M. F. (). The morphology and optics of spider eyes. In F. G. Barth (Ed.), Neurobiology of Arachnids, pp. –. Berlin, Heidelberg: Springer-Verlag. Land, M. F. (a). Variations in the structure and design of compound eyes. In D. G. Stavenga and R. C. Hardie (Eds.), Facets of Vision, pp. –. Berlin, Heidelberg, New York, London, Paris, Tokyo: Springer.
Land, M. F. (b). The eyes of hyperiid amphipods: Relations of optical structure to depth. J. Comp. Physiol. A. , –. Land, M.F. () Motion and vision: Why animals move their eyes. J. Comp. Physiol. A , –. Land, M. F. (). On the functions of double eyes in midwater animals. Phil. Trans. R. Soc. Lond. B , –. Land, M. F. (). The spatial resolution of the pinhole eyes of giant clams (Tridacna maxima). Proc. R. Soc. Lond. B , –. Land, M. F., and F. G. Barth (). The quality of vision in the Ctenid spider Cupiennius salei. J. Exp. Biol. , –. Land, M. F., Burton, F. A., and V. B. Meyer-Rochow (). The optical geometry of euphausiid eyes. J. Comp. Physiol. , –. Land, M. F., and T. S. Collett (). Chasing behaviour of houseflies (Fannia canicularis). J. Comp. Physiol. , –. Land, M. F., and H. Eckert (). Maps of the acute zones of fly eyes. J. Comp. Physiol. A , –. Land, M. F., Gibson, G., and J. Horwood (). Mosquito eye design, conical rhabdoms are matched to wide aperture lenses. Proc. R. Soc. Lond. B , –. Land, M. F., Gibson, G., Horwood, J., and J. Zeil (). Fundamental differences in the optical structure of the eyes of nocturnal and diurnal mosquitoes. J. Comp. Physiol. A , –. Land, M. F., Marshall, N. J., Brownless, D., and T. W. Cronin (). The eye-movements of the mantis shrimp Odontodactylus scyllarus (Crustacea: Stomatopoda). J. Comp. Physiol. A , –. Land, M. F., and D. E. Nilsson (). General-purpose and special-purpose visual systems. In E. J. Warrant and D. E. Nilsson (Eds.), Invertebrate Vision, pp. –. Cambridge: Cambridge University Press. Land, M. F., and D.-E. Nilsson (). Animal Eyes (nd ed.). Oxford: Oxford University Press. Land, M. F., and D. C. Osorio (). Waveguide modes and pupil action in the eyes of butterflies. Proc. R. Soc. Lond. B , –. Lane, A. P., and W. M. Irvine (). Monochromatic phase curves and albedos for the lunar disk. Astron. J. , –. Larsen, L. O., and J. N. Pedersen (). 
The snapping response of the toad, Bufo bufo, towards prey dummies at very low light intensities. Amphibia-Reptilia , –. Laughlin, S. B. (). Neural principles in the peripheral visual systems of invertebrates. In H. Autrum (Ed.), Handbook of Sensory Physiology, Vol VII/B, pp. –. Berlin, Heidelberg, New York: Springer. Laughlin, S. B. (). Invertebrate vision at low luminances. In R. F. Hess, L. T. Sharpe, and K. Nordby (Eds.), Night Vision, pp. –. Cambridge: Cambridge University Press. Laughlin, S. B. (). Retinal information capacity and the function of the pupil. Ophthal. Physiol. Opt. , –. Laughlin, S. B. (). Matched filtering by a photoreceptor membrane. Vision Res. , –. Laughlin, S. B. (). The metabolic cost of information—a fundamental factor in visual ecology. In F. G. Barth and A. Schmid (Eds.), Ecology of Sensing, pp. –. Berlin, Heidelberg, New York: Springer. Laughlin, S. B., Blest, A. D., and S. Stowe (). The sensitivity of receptors in the posterior median eye of the nocturnal spider Dinopis. J. Comp. Physiol. , -. Laughlin, S. B., de Ruyter van Steveninck, R. R., and J. C. Anderson (). The metabolic cost of neural information. Nature Neurosci. , -. Laughlin, S. B., and P. G. Lillywhite (). Intrinsic noise in locust photoreceptors. J. Physiol. , –. Laughlin, S. B., and M. Weckström (). Fast and slow photoreceptors—a comparative study of the functional diversity of coding and conductances in the Diptera. J. Comp. Physiol. A , –.
Lawrence, S. J., Lau, E., Steutel, D., Stopar, J. D., Wilcox, B. B., and P. G. Lucey (). A new measurement of the absolute spectral reflectances of the moon. Lunar Planet. Sci. , –.
Leech, D., and S. Johnsen (). Avoidance and UV vision. In W. Helbling and H. Zagarese (Eds.), UV Effects in Aquatic Organisms and Ecosystems, pp. –. London: Royal Society of Chemistry.
Leech, D. M., and C. E. Williamson (). In situ exposure to UV radiation alters the depth distribution of Daphnia. Limnol. Oceanogr. , –.
Lehrer, M. (). Why do bees turn back and look? J. Comp. Physiol. A , –.
Lehrer, M., and T. S. Collett (). Approaching and departing bees learn different cues to the distance of a landmark. J. Comp. Physiol. A , –.
Lehrer, M., Srinivasan, M. V., Zhang, S. W., and G. A. Horridge (). Motion cues provide the bee's visual world with a third dimension. Nature , –.
Leibowitz, H. W., and D. A. Owens (). Can normal outdoor activities be carried out during civil twilight? Appl. Opt. , –.
Lewis, S. M., and C. K. Cratsley (). Flash signal evolution, mate choice, and predation in fireflies. Annu. Rev. Entomol. , –.
Liebman, P. A., and G. Entine (). Visual pigments of frog and tadpole (Rana pipiens). Vision Res. , –.
Lillywhite, P. G. (). Single photon signals and transduction in an insect eye. J. Comp. Physiol. , –.
Lillywhite, P. G. (). Multiplicative intrinsic noise and the limits to visual performance. Vision Res. , –.
Lillywhite, P. G., and S. B. Laughlin (). Transducer noise in a photoreceptor. Nature , –.
Lima, S.M.A., Silveira, L.C.L., and V. H. Perry (). Distribution of M ganglion cells in diurnal and nocturnal New-World monkeys. J. Comp. Neurol. , –.
Lind, O., and A. Kelber (). The spatial tuning of achromatic and chromatic vision in budgerigars. J. Vision , –.
Lind, O., Sunesson, T., Mitkus, M., and A. Kelber (). Luminance-dependence of spatial vision in budgerigars (Melopsittacus undulatus) and Bourke's parrots (Neopsephotus bourkii). J. Comp. Physiol. A , –.
Lisney, T. J., and S. P. Collin (). Retinal ganglion cell distribution and spatial resolving power in elasmobranchs. Brain Behav. Evol. , –.
Lloyd, J. E. (). Aggressive mimicry in Photuris: Firefly femmes fatales. Science , –.
Lloyd, J. E. (). Aggressive mimicry in Photuris fireflies: Signal repertoires by femmes fatales. Science , –.
Locket, N. A. (). Adaptations to the deep-sea environment. In F. Crescitelli (Ed.), Handbook of Sensory Physiology, Vol. VII/, pp. –. Berlin, Heidelberg, New York: Springer.
Locket, N. A. (). The multiple bank rod foveae of Bajacalifornia drakei, an alepocephalid deep-sea teleost. Proc. R. Soc. Lond. B , –.
Locket, N. A. (). On the lens pad of Benthalbella infans, a scopelarchid deep-sea teleost. Phil. Trans. R. Soc. Lond. B , –.
Loew, E. R., and J. N. Lythgoe (). The ecology of cone pigments in teleost fishes. Vision Res. , –.
Loew, E. R., and J. N. Lythgoe (). The ecology of colour vision. Endeavour , –.
Loew, E. R., and W. N. McFarland (). The underwater visual environment. In R. H. Douglas and M. B. A. Djamgoz (Eds.), The Visual System of Fishes, pp. –. London: Chapman & Hall.
Lohmann, K. J., and A.O.D. Willows (). Lunar-modulated geomagnetic orientation by a marine mollusk. Science , –.
Longcore, T., and C. Rich (). Ecological light pollution. Front. Ecol. Environ. , –.
Losey, G. S., Cronin, T. W., Goldsmith, T. H., Hyde, D., Marshall, N. J., and W. N. McFarland (). The UV visual world of fishes: A review. J. Fish. Biol. , –.
Losey, G. S., McFarland, W. N., Loew, E. R., Zamzow, J. P., Nelson, P. A., and N. J. Marshall (). Visual biology of Hawaiian coral reef fishes. I. Ocular transmission and visual pigments. Copeia , –.
Lynch, D. K., and W. Livingston (). Color and Light in Nature. Cambridge: Cambridge University Press.
Lythgoe, J. N. (). The Ecology of Vision. Oxford: Clarendon Press.
Lythgoe, J. N. (). Light and vision in the aquatic environment. In J. Atema (Ed.), Sensory Biology of Aquatic Animals, pp. –. New York: Springer Verlag.
Lythgoe, J. N., Muntz, W.R.A., Partridge, J. C., Shand, J., and D. M. Williams (). The ecology of the visual pigments of snappers (Lutjanidae) on the Great Barrier Reef. J. Comp. Physiol. A , –.
Lythgoe, J. N., and J. C. Partridge (). Visual pigments and the acquisition of visual information. J. Exp. Biol. , –.
Lythgoe, J. N., and J. C. Partridge (). The modeling of optimal visual pigments of dichromatic teleosts in green coastal waters. Vision Res. , –.
Malmström, T., and R.H.H. Kröger (). Pupil shapes and lens optics in the eyes of terrestrial vertebrates. J. Exp. Biol. , –.
Maloney, L. T. (). Evaluation of linear-models of surface spectral reflectance with small numbers of parameters. J. Opt. Soc. Am. A , –.
Marshall, B. R., and R. C. Smith (). Raman scattering and in-water ocean optical properties. Appl. Opt. , –.
Marshall, J., Collin, S. P., Hart, N. S., and H. J. Bailes (). Vision in lungfish. In J. M. Jorgensen and J. Joss (Eds.), The Biology of Lungfishes, pp. –. Enfield, NH: Science Publishers.
Marshall, J., Cronin, T. W., and S. Kleinlogel (). Stomatopod eye structure and function: A review. Arthropod Struct. Dev. , –.
Marshall, J., Cronin, T. W., Shashar, N., and M. Land (). Behavioural evidence for polarisation vision in stomatopods reveals a potential channel for communication. Curr. Biol. , –.
Marshall, J., and J. Oberwinkler (). The colourful world of the mantis shrimp. Nature , –.
Marshall, N. J. (). A unique color and polarization vision system in mantis shrimps. Nature , –.
Marshall, N. J. (a). Communication and camouflage with the same "bright" colours in reef fishes. Phil. Trans. R. Soc. Lond. B , –.
Marshall, N. J. (b). The visual ecology of reef fish colours. In Y. Espmark, T. Amundsen, and G. Rosenquist (Eds.), Animal Signals: Signalling and Signal Design in Animal Communication, pp. –. Trondheim: Tapir.
Marshall, N. J., Cronin, T. W., and T. M. Frank (a). Visual adaptations in crustaceans: Chromatic, developmental and temporal aspects. In S. P. Collin and N. J. Marshall (Eds.), Sensory Processing in Aquatic Environments, pp. –. New York: Springer.
Marshall, N. J., Jennings, K., McFarland, W. N., Loew, E. R., and G. S. Losey (b). Visual biology of Hawaiian coral reef fishes. II. Colors of Hawaiian coral reef fish. Copeia , –.
Marshall, N. J., Jennings, K., McFarland, W. N., Loew, E. R., and G. S. Losey (c). Visual biology of Hawaiian coral reef fishes. III. Environmental light and an integrated approach to the ecology of reef fish vision. Copeia , –.
Marshall, N. J., and S. Johnsen (). Camouflage in marine fish. In M. Stevens and S. Merilaita (Eds.), Animal Camouflage: Current Issues and New Perspectives, pp. –. New York: Cambridge University Press.
Marshall, N. J., Kent, J., and T. W. Cronin (). Visual adaptations in crustaceans. In S. N. Archer, M.B.A. Djamgoz, E. R. Loew, J. C. Partridge, and S. Vallerga (Eds.), Adaptive Mechanisms in the Ecology of Vision, pp. –. London: Kluwer Academic Publishers.
Marshall, N. J., Land, M. F., King, C. A., and T. W. Cronin (). The compound eyes of mantis shrimps (Crustacea, Hoplocarida, Stomatopoda). I. Compound eye structure: The detection of polarised light. Phil. Trans. R. Soc. Lond. B , –.
Marshall, N. J., and J. B. Messenger (). Colour-blind camouflage. Nature , –.
Marshall, N. J., and M. Vorobyev (). The design of color signals and color vision in fishes. In S. P. Collin and J. N. Marshall (Eds.), Sensory Processing in Aquatic Environments, pp. –. New York: Springer.
Marshall, N. J., Vorobyev, M., and U. E. Siebeck (). What does a reef fish see when it sees a reef fish? Eating "Nemo." In F. Ladich, S. P. Collin, P. Moller, and B. G. Kapoor (Eds.), Fish Communication, Vol. , pp. –. Enfield, NH: Science Publishers.
Martin, C., Gegear, R. J., and S. M. Reppert (). Antennal circadian clocks coordinate sun compass orientation in migratory monarch butterflies. Science , –.
Martin, G. R. (). Colour vision in the tawny owl (Strix aluco). J. Comp. Physiol. Psych. , –.
Martin, G. R. (). Absolute visual threshold and scotopic spectral sensitivity in the tawny owl Strix aluco. Nature , –.
Martin, G. R. (). An owl's eye: Schematic optics and visual performance in Strix aluco L. J. Comp. Physiol. , –.
Martin, G. R. (). Birds by Night. London: T. and A. D. Poyser.
Martin, G. R. (). Form and function in the optical structure of bird eyes. In M.N.O. Davies and P. R. Green (Eds.), Perception and Motor Control in Birds: An Ecological Approach, pp. –. Berlin, Heidelberg, New York: Springer.
Martin, G. R., Rojas, L. M., Ramirez, Y., and R. McNeil (). The eyes of oilbirds (Steatornis caripensis): Pushing at the limits of sensitivity. Naturwissenschaften , –.
Martin, G. R., Wilson, K. J., Wild, J. M., Parsons, S., Kubke, M. F., and J. Corfield (). Kiwi forgo vision in the guidance of their nocturnal activities. PLoS ONE , e.
Mäthger, L. M., and R. T. Hanlon (). Anatomical basis for camouflaged polarized light communication in squid. Biol. Lett. , –.
Mäthger, L. M., Shashar, N., and R. T. Hanlon (). Do cephalopods communicate using polarized light reflections from their skin? J. Exp. Biol. , –.
Matthiessen, L. (). Über die Beziehungen, welche zwischen dem Brechungsindex des Kernzentrums der Krystalllinse und den Dimensionen des Auges bestehen. Pflügers Arch. , –.
Mattila, K. (). Synthetic spectrum of the integrated starlight between , and , Å. Part . Method of calculation and results. Astron. Astrophys. Ser. , S–S.
Maximov, V. V. (). Environmental factors which may have led to the appearance of colour vision. Phil. Trans. R. Soc. Lond. B , –.
McFall-Ngai, M., and J. G. Morin (). Camouflage by disruptive illumination in leiognathids, a family of shallow-water, bioluminescent fishes. J. Exp. Biol. , –.
McFarland, W. N. (). The visual world of coral reef fishes. In P. F. Sale (Ed.), The Ecology of Fishes on Coral Reefs, pp. –. San Diego: Academic Press.
McFarland, W. N., and F. W. Munz (). Part II: The photic environment of clear tropical seas during the day. Vision Res. , –.
McIntyre, P. D., and S. Caveney (). Graded-index optics are matched to optical geometry in the superposition eyes of scarab beetles. Phil. Trans. R. Soc. Lond. B , –.
McIntyre, P. D., and S. Caveney (). Superposition optics and the time of flight in onitine dung beetles. J. Comp. Physiol. A , –.
McKenzie, D. R., Yin, Y., and W. D. McFall (). Silvery fish skin as an example of a chaotic reflector. Proc. R. Soc. Lond. A , –.
McReynolds, J. S., and A.L.F. Gorman (). Membrane conductances and spectral sensitivities of Pecten photoreceptors. J. Gen. Physiol. , –.
Meinel, A., and M. Meinel (). Sunsets, Twilights, and Evening Skies. New York: Cambridge University Press.
Mensinger, A. F., and J. F. Case (). Dinoflagellate luminescence increases susceptibility of zooplankton to teleost predation. Mar. Biol. , –.
Mertens, L. E. (). In-Water Photography: Theory and Practice. New York: John Wiley & Sons.
Meyer-Rochow, V. B., and H. L. Nilsson (). Compound eyes in polar regions, caves, and the deep-sea. In E. Eguchi and Y. Tominaga (Eds.), Atlas of Arthropod Sensory Receptors: Dynamic Morphology in Relation to Function, pp. –. Berlin, Heidelberg, New York: Springer.
Mizunami, M. (). Information processing in the insect ocellar system: Comparative approaches to the evolution of visual processing and neural circuits. Adv. Insect Physiol. , –.
Mobley, C. D. (). Light and Water: Radiative Transfer in Natural Waters. San Diego: Academic Press.
Möller, R. (). Do insects use templates or parameters for landmark navigation? J. Theor. Biol. , –.
Moody, M. F., and J. R. Parriss (). The discrimination of polarized light by Octopus—a behavioural and morphological study. Z. Vergl. Physiol. , –.
Moore, M. V., Pierce, S. M., Walks, H. M., Kvalvik, S. K., and J. D. Lim (). Urban light pollution alters the diel vertical migration of Daphnia. Verh. Internat. Verein. Limnol. , –.
Morin, J. G. (). Coastal bioluminescence: Patterns and functions. Bull. Mar. Sci. , –.
Morin, J. G. (). "Firefleas" of the sea: Luminescent signaling in marine ostracode crustaceans. Fla. Entomol. , –.
Morin, J. G., and A. C. Cohen (). Bioluminescent displays, courtship, and reproduction in Ostracodes. In R. Bauer and J. Martin (Eds.), Crustacean Sexual Biology, pp. –. New York: Columbia University Press.
Morin, J. G., Harrington, A., Nealson, K., Kreiger, N., Baldwin, T. O., and J. W. Hastings (). Light for all reasons: Versatility in the behavioral repertoire of the flashlight fish. Science , –.
Mouritsen, H., Janssen-Bienhold, U., Liedvogel, M., Feenders, G., Stalleicken, J., Dirks, P., and R. Weiler (). Cryptochromes and neuronal-activity markers colocalize in the retina of migratory birds during magnetic orientation. Proc. Natl. Acad. Sci. USA , –.
Muheim, R. (). Behavioural and physiological mechanisms of polarized light sensitivity in birds. Phil. Trans. R. Soc. B , –.
Muheim, R., Phillips, J. B., and S. Åkesson (). Polarized light cues underlie compass calibration in migratory songbirds. Science , –.
Muheim, R., Phillips, J. B., and M. E. Deutschlander (). White-throated sparrows calibrate their magnetic compass by polarized light cues during both autumn and spring migration. J. Exp. Biol. , –.
Müller, J. P. (). Zur vergleichenden Physiologie des Gesichtssinnes des Menschen und der Tiere nebst einem Versuch über die Bewegungen der Augen und über den menschlichen Blick. Leipzig: Cnobloch.
Muller, R. U., and J. L. Kubie (). The effects of changes in the environment on the spatial firing of hippocampal complex-spike cells. J. Neurosci. , –.
Munk, O. (). On the occurrence and significance of horizontal band-shaped retinal areae in teleosts. Vidensk. Medd. Dan. Naturhist. Foren. , –.
Munk, O. (). Hvirveldyrøjet: Bygning, Funktion og Tilpasning. Copenhagen: Berlingske Forlag.
Munk, O. (). Conceptions of dynamic accommodation in vertebrate eyes. Acta Hist. Sci. Nat. Med. , –.
Munk, O., and R. D. Frederiksen (). On the function of aphakic apertures in teleosts. Videnskabelige meddelelser fra Dansk naturhistorisk forening i København , –.
Munoz Tedo, C., Herreros De Tejada, P., and D. G. Green (). Behavioral estimates of absolute threshold in rat. Visual Neurosci. , –.
Muntz, W.R.A. (). On yellow lenses in mesopelagic animals. J. Mar. Biol. Assoc. UK , –.
Muntz, W.R.A., and J. Gwyther (). Visual acuity in Octopus pallidus and Octopus australis. J. Exp. Biol. , –.
Muntz, W.R.A., and U. Raj (). On the visual system of Nautilus pompilius. J. Exp. Biol. , –.
Munz, F. W., and W. N. McFarland (). The significance of spectral position in the rhodopsins of tropical marine fishes. Vision Res. , –.
Murakami, M., and T. Kouyama (). Crystal structure of squid rhodopsin. Nature , –.
Murphy, C. J., Evans, H. E., and H. C. Howland (). Towards a schematic eye for the great horned owl. Fortschr. Zool. , –.
Murphy, C. J., and H. C. Howland (). On the gekko pupil and Scheiner's disc. Vision Res. , –.
Nagata, T., Koyanagi, M., Tsukamoto, H., Saeki, S., Isono, K., Shichida, Y., et al. (). Depth perception from image defocus in a jumping spider. Science , –.
Nagle, M. G., and D. Osorio (). The tuning of human photopigments may minimize red-green chromatic signals in natural conditions. Proc. R. Soc. Lond. B , –.
Nalbach, H.-O., Zeil, J., and L. Forzin (). Multisensory control of eye-stalk orientation in space: Crabs from different habitats rely on different senses. J. Comp. Physiol. A , –.
Neumeyer, C. (). Evolution of colour vision. In J. R. Cronly-Dillon and R. L. Gregory (Eds.), Vision and Visual Dysfunction: Evolution of the Eye and Visual System, Vol. , pp. –. London: Macmillan Press.
Neumeyer, C. (). Tetrachromatic color-vision in goldfish—evidence from color mixture experiments. J. Comp. Physiol. A , –.
Neumeyer, C. (). Color vision in lower vertebrates. In W.G.K. Backhaus, R. Kliegl, and J. S. Werner (Eds.), Color Vision—Perspectives from Different Disciplines, pp. –. Berlin: Walter de Gruyter and Co.
Nicol, J.A.C. (). The Eyes of Fishes. Oxford: Oxford University Press.
Nicol, J.A.C., and H. J. Arnott (). Tapeta lucida in the eyes of goatsuckers (Caprimulgidae). Proc. R. Soc. Lond. B , –.
Nilsson, D. E. (). A new type of imaging optics in compound eyes. Nature , –.
Nilsson, D. E. (). Optics and evolution of the compound eye. In D. G. Stavenga and R. C. Hardie (Eds.), Facets of Vision, pp. –. Berlin, Heidelberg, New York: Springer.
Nilsson, D. E. (). Eyes as optical alarm systems in fan worms and ark clams. Phil. Trans. R. Soc. Lond. B , –.
Nilsson, D. E. (). The evolution of eyes and visually guided behaviour. Phil. Trans. R. Soc. B , –.
Nilsson, D. E., Gislén, L., Coates, M. M., Skogh, L., and A. Garm (). Advanced optics in a jellyfish eye. Nature , –.
Nilsson, D. E., Land, M. F., and J. Howard (). Afocal apposition optics in butterfly eyes. Nature , –.
Nilsson, D. E., Land, M. F., and J. Howard (). Optics of the butterfly eye. J. Comp. Physiol. A , –.
Nilsson, D. E., and H. L. Nilsson (). A crustacean compound eye adapted for low light intensities (Isopoda). J. Comp. Physiol. , –.
Nilsson, D. E., and S. Pelger (). A pessimistic estimate of the time required for an eye to evolve. Proc. R. Soc. Lond. B , –.
Nilsson, D. E., Warrant, E. J., Johnsen, S., Hanlon, R., and N. Shashar (). A unique advantage for giant eyes in giant squid. Curr. Biol. , –.
Nordström, K., Barnett, P. D., and D. C. O'Carroll (). Insect detection of small targets moving in visual clutter. PLoS Biol. , –.
Nordström, K., and D. C. O'Carroll (). Feature detection and the hypercomplex property in insects. TINS , –.
Nørgaard, T., Henschel, J. R., and R. Wehner (). Use of local cues in the night-time navigation of the wandering desert spider Leucorchestris arenicola (Araneae, Sparassidae). J. Comp. Physiol. A , –.
Novales Flamarique, I. (). Unique photoreceptor arrangements in a fish with polarized light discrimination. J. Comp. Neurol. , –.
Nyholm, S. V., and M. J. McFall-Ngai (). The winnowing: Establishing the squid-Vibrio symbiosis. Nat. Rev. Microbiol. , –.
O'Carroll, D. C. (). Feature-detecting neurons in dragonflies. Nature , –.
Ogden, T. E. (). The receptor mosaic of Aotes trivirgatus: Distribution of rods and cones. J. Comp. Neurol. , –.
Ohly, K. P. (). The neurons of the first synaptic regions of the optic neuropil of the firefly, Phausius splendidula L. (Coleoptera). Cell Tissue Res. , –.
Okawa, H., Miyagishima, K. J., Arman, A. C., Hurley, J. B., Field, G. D., and A. P. Sampath (). Optimal processing of photoreceptor signals is required to maximize behavioural sensitivity. J. Physiol. , –.
Okawa, H., and A. P. Sampath (). Optimization of single-photon response transmission at the rod-to-rod bipolar synapse. Physiology , –.
Olberg, R. M. (). Object- and self-movement detectors in the ventral cord of the dragonfly. J. Comp. Physiol. A , –.
Olberg, R. M. (). Identified target-selective visual interneurons descending from the dragonfly brain. J. Comp. Physiol. A , –.
Olberg, R. M. (). Visual control of prey-capture flight in dragonflies. Curr. Opin. Neurobiol. , –.
Olberg, R. M., Worthington, A. H., and K. R. Venator (). Prey pursuit and interception in dragonflies. J. Comp. Physiol. A , –.
Orlowski, J., Harmening, W., and H. Wagner (). Night vision in barn owls: Visual acuity and contrast sensitivity under dark adaptation. J. Vision , –.
O'Rourke, C. T., Hall, M. I., Pitlik, T., and E. Fernández-Juricic (). Hawk eyes I: Diurnal raptors differ in visual fields and degree of eye movement. PLoS ONE (), –.
Ortega-Escobar, J., and A. Munoz-Cuevas (). Anterior median eyes of Lycosa tarentula (Araneae, Lycosidae) detect polarized light: Behavioral experiments and electroretinographic analysis. J. Arachnol. , –.
Osorio, D., Marshall, N. J., and T. W. Cronin (). Stomatopod photoreceptor spectral tuning as an adaptation for colour constancy in water. Vision Res. , –.
Østerberg, G. A. (). Topography of the layer of rods and cones in the human retina. Acta Ophthalmol. (Suppl.), –.
Ott, M., and F. Schaeffel (). A negatively powered lens in the chameleon. Nature , –.
Overington, I. (). Vision and Acquisition: Fundamentals of Human Visual Performance, Environmental Influences, and Applications in Instrumental Optics. New York: Pentech Press.
Owens, G. L., Rennison, D. J., Allison, T., and J. S. Taylor (). In the four-eyed fish (Anableps anableps), the regions of the retina exposed to aquatic and aerial light do not express the same set of opsin genes. Biol. Lett. , –.
Pahlberg, J., and A. P. Sampath (). Visual threshold is set by linear and nonlinear mechanisms in the retina that mitigate noise. Bioessays , –.
Palczewski, K., Kumasaka, T., Hori, T., Behnke, C. A., Motoshima, H., Fox, B. A., et al. (). Crystal structure of rhodopsin: A G protein-coupled receptor. Science , –.
Pardi, L., and F. Scapini (). Inheritance of solar direction finding in sandhoppers: Mass-crossing experiments. J. Comp. Physiol. , –.
Parker, A. G. (). Colour in Burgess Shale animals and the effect of light on evolution in the Cambrian. Proc. R. Soc. Lond. B , –.
Parry, J.W.L., Carleton, K. L., Spady, T., Carboo, A., Hunt, D. M., and J. K. Bowmaker (). Mix and match color vision: Tuning spectral sensitivity by differential opsin gene expression in Lake Malawi cichlids. Curr. Biol. , –.
Partridge, J. C., and R. H. Douglas (). Far-red sensitivity of dragon fish. Nature , –.
Pasternak, T., and W. H. Merigan (). The luminance dependence of spatial vision in the cat. Vision Res. , –.
Paul, H., Nalbach, H.-O., and D. Varjú (). Eye movements in the rock crab Pachygrapsus marmoratus walking along straight and curved paths. J. Exp. Biol. , –.
Paulus, H. F. (). The compound eyes of apterygote insects. In G. A. Horridge (Ed.), The Compound Eye and Vision of Insects, pp. –. Oxford: Oxford University Press.
Pearcy, W. G., Meyer, S. L., and O. Munk (). A "four eyed" fish from the deep sea. Nature , –.
Peichl, L. (). Diversity of mammalian photoreceptor properties: Adaptations to habitat and lifestyle? Anat. Rec. A , –.
Peitsch, D., Fietz, A., Hertel, H., Desouza, J., Ventura, D. F., and R. Menzel (). The spectral input systems of hymenopteran insects and their receptor-based color-vision. J. Comp. Physiol. A , –.
Penteriani, V., and M. del Mar Delgado (). Owls may use faeces and prey feathers to signal current reproduction. PLoS ONE (), e.
Penteriani, V., del Mar Delgado, M., Alonso-Alvarez, C., and F. Sergio (). The importance of visual cues for nocturnal species: Eagle owls signal by badge brightness. Behav. Ecol. , –.
Peters, R., Hemmi, J., and J. Zeil (). Image motion environments: Background noise for movement-based animal signals. J. Comp. Physiol. A , –.
Pierscionek, B. K., and D.Y.C. Chan (). Refractive index gradient of human lenses. Optometry Vis. Sci. , –.
Pignatelli, V., Champ, C., Marshall, J., and M. Vorobyev (). Double cones are used for colour discrimination in the reef fish, Rhinecanthus aculeatus. Biol. Lett. , –.
Pirenne, M. H. (). Vision and the Eye. London: The Pilot Press.
Pirhofer-Waltz, K., Warrant, E. J., and F. G. Barth (). Adaptations for vision in dim light: Impulse responses and bumps in nocturnal spider photoreceptor cells (Cupiennius salei Keys). J. Comp. Physiol. A , –.
Pirih, P., Wilts, B. D., and D. G. Stavenga (). Spatial reflection patterns of iridescent wings of male pierid butterflies: Curved scales reflect at a wider angle than flat scales. J. Comp. Physiol. A , –.
Planck, M. (). On the law of distribution of energy in the normal spectrum. Annal. Phys. , –.
Polyak, S. (). The Vertebrate Visual System. Chicago: University of Chicago Press.
Prete, F. R., Hurd, L. E., Branstrator, D., and A. Johnson (). Responses to computer-generated visual stimuli by the male praying mantis, Sphodromantis lineola (Burmeister). Anim. Behav. , –.
Purcell, J. E. (). Influence of siphonophore behavior upon their natural diets: Evidence for aggressive mimicry. Science , –.
Read, K.R.H., Davidson, J. M., and B. M. Twarog (). Fluorescence of sponges and coelenterates in blue light. Comp. Biochem. Physiol. , –.
Reichardt, W. (). Autocorrelation, a principle for the evaluation of sensory information by the central nervous system. In W. A. Rosenblith (Ed.), Sensory Communication, pp. –. New York: John Wiley & Sons.
Reid, S. F., Narendra, A., Hemmi, J. M., and J. Zeil (). Polarised skylight and the landmark panorama provide night-active bull ants with compass information during route following. J. Exp. Biol. , –.
Reyes-Alcubilla, C., Ruiz, M. A., and J. Ortega-Escobar (). Homing in the wolf spider Lycosa tarantula (Araneae, Lycosidae): The role of active locomotion and visual landmarks. Naturwissenschaften , –.
Reymond, L. (). Spatial visual acuity of the eagle Aquila audax: A behavioural, optical and anatomical investigation. Vision Res. , –.
Ribi, W. A. (). The first optic ganglion of the bee. I. Correlation between visual cell types and their terminals in the lamina and medulla. Cell Tissue Res. , –.
Ribi, W. A. (). Fine structure of the first optic ganglion (lamina) of the cockroach Periplaneta americana. Tissue Cell , –.
Rickel, S., and A. Genin (). Twilight transitions in coral reef fish: The input of light induced changes in foraging behavior. Anim. Behav. , –.
Rieke, F. (). Seeing in the dark: Retinal processing and absolute visual threshold. In T. Albright and R. H. Masland (Vol. Eds.), A. I. Basbaum, A. Kaneko, G. M. Shepherd, and G. Westheimer (Ser. Eds.), The Senses: A Comprehensive Reference, Vol. : Vision I, pp. –. Oxford: Academic Press.
Rieke, F., and D. A. Baylor (). Molecular origin of continuous dark noise in rod photoreceptors. Biophys. J. , –.
Ritz, T., Adam, S., and K. Schulten (). A model for photoreceptor-based magnetoreception in birds. Biophys. J. , –.
Ritz, T., Dommer, D. H., and J. B. Phillips (). Shedding light on vertebrate magnetoreception. Neuron , –.
Ritz, T., Thalau, P., Phillips, J. B., Wiltschko, R., and W. Wiltschko (). Resonance effects indicate a radical-pair mechanism for avian magnetic compass. Nature , –.
Rivers, T. J., and J. G. Morin (). Complex sexual courtship displays by luminescent male marine ostracods. J. Exp. Biol. , –.
Roberts, N. W., Chiou, T.-H., Marshall, N. J., and T. W. Cronin (). A biological quarter-wave retarder with excellent achromaticity in the visible wavelength region. Nature Photonics , –.
Roberts, N. W., and M. G. Needham (). A mechanism of polarized light sensitivity in cone photoreceptors of the goldfish Carassius auratus. Biophys. J. , –.
Roberts, N. W., Porter, M. L., and T. W. Cronin (). The molecular basis of mechanisms underlying polarization vision. Phil. Trans. R. Soc. B , –.
Robison, B. H., and K. R. Reisenbichler (). Macropinna microstoma and the paradox of its tubular eyes. Copeia , –.
Rodieck, R. W. (). The First Steps in Seeing. Sunderland, MA: Sinauer Associates.
Rojas, L. M., McNeil, R., Cabana, T., and P. Lachapelle (). Behavioural, morphological and physiological correlates of diurnal and nocturnal vision in selected wading bird species. Brain Behav. Evol. , –.
Rojas, L. M., Ramirez, Y., McNeil, R., Mitchell, M., and G. Marin (). Retinal morphology and electrophysiology of two Caprimulgiformes birds: The cave-living and nocturnal oilbird (Steatornis caripensis) and nocturnally foraging common pauraque (Nyctidromus albicollis). Brain Behav. Evol. , –.
Rose, A. (). The relative sensitivities of television pickup tubes, photographic film and the human eye. Proc. Inst. Radio Eng. NY , –.
Rossel, S. (). Foveal fixation and tracking in the praying mantis. J. Comp. Physiol. , –.
Roth, L.S.V., and A. Kelber (). Nocturnal colour vision in geckos. Proc. R. Soc. Lond. B (Suppl.), S–S.
Roth, L.S.V., Lundström, L., Kelber, A., Kröger, R.H.H., and P. Unsbo (). The pupils and optical systems of gecko eyes. J. Vision , –.
Rowe, N. (). The Pictorial Guide to the Living Primates. Charlestown, RI: Pogonias Press.
Rozenberg, G. V. (). Twilight: A Study in Atmospheric Optics. New York: Plenum Press.
Rutowski, R. L., and E. J. Warrant (). Visual field structure in a butterfly Asterocampa leilia (Lepidoptera, Nymphalidae): Dimensions and regional variation in acuity. J. Comp. Physiol. A , –.
Ruxton, G. D., Sherratt, T. N., and M. P. Speed (). Avoiding Attack: The Evolutionary Ecology of Crypsis, Warning Signals and Mimicry. New York: Oxford University Press.
Sabra, R., and R. M. Glantz (). Polarisation sensitivity of crayfish photoreceptors is correlated with their termination sites in the lamina ganglionaris. J. Comp. Physiol. , –.
Saidel, W. M., Lettvin, J. Y., and E.F.J. MacNichol (). Processing of polarized light by squid photoreceptors. Nature , –.
Saidel, W. M., Shashar, N., Schmolesky, M. T., and R. T. Hanlon (). Discriminative responses of squid (Loligo pealeii) photoreceptors to polarized light. Comp. Biochem. Physiol. A , –.
Salmela, I., Immonen, E. V., Frolov, R., Krause, S., Krause, Y., Vähäsöyrinki, M., and M. Weckström (). Cellular elements for seeing in the dark: Voltage-dependent conductances in cockroach photoreceptors. BMC Neurosci. , .
Salmon, M., Wyneken, J., Fritz, E., and M. Lucas (). Seafinding by hatchling sea turtles: Role of brightness, silhouette and beach slope as orientation cues. Behaviour , –.
Sauer, E. G. F., and E. M. Sauer (). Star navigation of nocturnal migrating birds. Cold Spring Harbor Symp. Quant. Biol. , –.
Sawicka, E., Stramski, D., Darecki, M., and J. Dubranna (). Power spectral analysis of wave-induced fluctuations in downward irradiance within the near-surface ocean under sunny conditions. In Proceedings of Ocean Optics XXI, – October, Glasgow, Scotland.
Scapini, F. (). Keynote papers on sandhopper orientation and navigation. Mar. Freshw. Behav. Physiol. , –.
Schechner, Y. Y., and N. Karpel (). Recovery of underwater visibility and structure by polarization analysis. IEEE J. Ocean Eng. , –.
Schechner, Y. Y., Narasimhan, S. G., and S. K. Nayar (). Polarization-based vision through haze. Appl. Opt. , –.
Schnell, B., Joesch, M., Forstner, F., Raghu, S. V., Otsuna, H., Ito, K., et al. (). Processing of horizontal optic flow in three visual interneurons of the Drosophila brain. J. Neurophysiol. , –.
Schwab, I. R. (). Evolution's Witness: How Eyes Evolved. Oxford: Oxford University Press.
Schwassmann, H. O., and L. Kruger (). Experimental analysis of the visual system of the four-eyed fish Anableps microlepis. Vision Res. , –.
Schwind, R. (). Visual system of Notonecta glauca: A neuron sensitive to movement in the binocular visual field. J. Comp. Physiol. , –.
Schwind, R. (). Geometrical optics of the Notonecta eye: Adaptations to optical environment and way of life. J. Comp. Physiol. , –.
Schwind, R. (). Zonation of the optical environment and zonation in the rhabdom structure within the eye of the backswimmer, Notonecta glauca. Cell Tissue Res. , –.
Schwind, R. (). The plunge reaction of the backswimmer Notonecta glauca. J. Comp. Physiol. A , –.
Shashar, N., Hagan, R., Boal, J. G., and R. T. Hanlon (). Cuttlefish use polarization sensitivity in predation on silvery fish. Vision Res. , –.
Shashar, N., Hanlon, R. T., and A. deM. Petz (). Polarization vision helps detect transparent prey. Nature , –.
Shashar, N., Rutledge, P. S., and T. W. Cronin (). Polarization vision in cuttlefish—A concealed communication channel? J. Exp. Biol. , –.
Sherk, T. E. (). Development of the compound eyes of dragonflies (Odonata). III. Adult compound eyes. J. Exp. Zool. , –.
Siddiqi, A., Cronin, T. W., Loew, E. R., Vorobyev, M., and K. Summers (). Interspecific and intraspecific views of color signals in the strawberry poison frog Dendrobates pumilio. J. Exp. Biol. , –.
Siebeck, U. E., Losey, G. S., and N. J. Marshall (). UV communication in fish. In F. Ladich, S. P. Collin, P. Moller, and B. G. Kapoor (Eds.), Communication in Fishes, Vol. , pp. –. Plymouth, UK: Science Publications Inc.
Siebeck, U. E., Parker, A. N., Sprenger, D., Mäthger, L. M., and G. Wallis (). A species of reef fish that uses ultraviolet patterns for covert face recognition. Curr. Biol. , –.
Silveira, L.C.L., Saito, C. A., Lee, B. B., Kremers, J., Filho, M. da S., Kilavik, B. E., et al. (). Morphology and physiology of primate M- and P-cells. Prog. Brain Res. , –.
Silveira, L.C.L., Yamada, E. S., Franco, E.C.S., and B. L. Finlay (). The specialization of the owl monkey retina for night vision. Colour Res. Appl. , S–S.
Silveira, L.C.L., Yamada, E. S., Perry, V. H., and C. W. Picanco-Diniz (). M and P retinal ganglion cells of diurnal and nocturnal New-World monkeys. NeuroReport , –.
Sinclair, S. (). How Animals See. New York, Oxford: Facts on File Publications.
Sivak, J. G., Howland, H. C., West, J., and J. Weerheim (). The eye of the hooded seal, Cystophora cristata, in air and water. J. Comp. Physiol. A , –.
Smith, F. E., and E. R. Baylor (). Color responses in the Cladocera and their ecological significance. Am. Nat. , –.
Smith, K. C., and E. R. Macagno (). UV photoreceptors in the compound eye of Daphnia magna (Crustacea, Branchiopoda)—a th spectral class in single ommatidia. J. Comp. Physiol. A , –.
Smith, R. C., and K. S. Baker (). Optical properties of the clearest natural waters (– nm). Appl. Opt. , –.
Smolka, J., and J. M. Hemmi (). Topography of vision and behavior. J. Exp. Biol. , –.
Snow, D. W. (). The natural history of the oilbird, Steatornis caripensis, in Trinidad. . General behaviour and breeding habits. Zoologica , –.
Snyder, A. W. (). Photoreceptor optics. In A. W. Snyder and R. Menzel (Eds.), Photoreceptor Optics, pp. –. Heidelberg, New York: Springer-Verlag.
Snyder, A. W. (). Acuity of compound eyes: Physical limitations and design. J. Comp. Physiol. , –.
Snyder, A. W. (). Physics of vision in compound eyes. In H. Autrum (Ed.), Handbook of Sensory Physiology, Vol. VII/A, pp. –. Berlin, Heidelberg, New York: Springer.
Snyder, A. W., and J. D. Love (). Optical Waveguide Theory. Boston, Dordrecht: Kluwer Academic Publishers.
Snyder, A. W., Menzel, R., and S. B. Laughlin (). Structure and function of the fused rhabdom. J. Comp. Physiol. , –.
Snyder, A. W., and W. H. Miller (). Telephoto lens system of falconiform eyes. Nature , –.
Soffer, B. H., and D. K. Lynch (). Some paradoxes, errors, and resolutions concerning the spectral optimization of human vision. Am. J. Phys. , –.
Solovei, I., Kreysing, M., Lanctôt, C., Kösem, S., Peichl, L., Cremer, T., et al. (). Nuclear architecture of rod photoreceptor cells adapts to vision in mammalian evolution. Cell , –.
Somanathan, H., Borges, R. M., Warrant, E. J., and A. Kelber (a). Nocturnal bees learn landmark colours in starlight. Curr. Biol. , R–R.
Somanathan, H., Borges, R. M., Warrant, E. J., and A. Kelber (b). Visual ecology of Indian carpenter bees I: Light intensities and flight activity. J. Comp. Physiol. A , –.
Speiser, D. I., Eernisse, D. J., and S. Johnsen (b). A chiton uses aragonite lenses to form images. Curr. Biol. , –.
Speiser, D. I., and S. Johnsen (). Comparative morphology of the concave mirror eyes of scallops (Pectinoidea). Am. Malac. Bull. , –.
Speiser, D. I., Loew, E. R., and S. Johnsen (a). Spectral sensitivity of the concave mirror eyes of scallops: Potential influences of habitat, self-screening and longitudinal chromatic aberration. J. Exp. Biol. , –.
Srinivasan, M. V. (). Honeybees as a model for the study of visually guided flight, navigation, and biologically inspired robotics. Physiol. Rev. , –.
Srinivasan, M. V., and G. D. Bernard (). The effect of motion on visual acuity of the compound eye: A theoretical analysis. Vision Res. , –.
Srinivasan, M. V., Zhang, S., Altwein, M., and J. Tautz (). Honeybee navigation: Nature and calibration of the "odometer." Science , –.
Srinivasan, M. V., Zhang, S. W., and N. J. Bidwell (). Visually measured odometry in honeybees. J. Exp. Biol. , –.
Srinivasan, M. V., Zhang, S. W., Lehrer, M., and T. S. Collett (). Honeybee navigation en route to the goal: Visual flight control and odometry. J. Exp. Biol. , –.
Stange, G. (). The ocellar component of flight equilibrium control in dragonflies. J. Comp. Physiol. , –.
Stavenga, D. G. (). Reflections on colourful ommatidia of butterfly eyes. J. Exp. Biol. , –.
Stavenga, D. G., Kinoshita, M., Yang, E. C., and K. Arikawa (). Retinal regionalization and heterogeneity of butterfly eyes. Naturwissenschaften , –. Stavenga, D. G., and J. W. Kuiper (). Insect pupil mechanisms. I. On the pigment migration in the retinula cells of Hymenoptera (suborder Apocrita). J. Comp. Physiol. A , –. Stavenga, D. G., Stowe, S., Siebke, K., Zeil, J., and K. Arikawa (). Butterfly wing colours: Scale beads make white pierid wings brighter. Proc. R. Soc. Lond. B , –. Stevens, M. (). Sensory Ecology, Behaviour, and Evolution. Oxford: Oxford University Press. Stevens, M., and I. C. Cuthill (). Hidden messages: Are ultraviolet signals a special channel in avian communication? Bioscience , –. Stevens, M., and S. Merilaita (). Animal camouflage: Function and mechanisms. In M. Stevens and S. Merilaita (Eds.), Animal Camouflage: Mechanisms and Function, pp. –. New York: Cambridge University Press. Stowasser, A., and E. K. Buschbeck (). Electrophysiological evidence for polarization sensitivity in the camera-type eyes of the aquatic predacious insect larva Thermonectus marmoratus. J. Exp. Biol. , –. Stowe, S. (). Rapid synthesis of photoreceptor membrane and assembly of new microvilli in a crab at dusk. Cell Tissue Res. , –. Stowe, S. (). A theoretical explanation of intensity-independent variation of polarisation sensitivity in crustacean retinular cells. J. Comp. Physiol. A , –. Strausfeld, N. J. (). Structural organization of male-specific visual neurons in calliphorid optic lobe. J. Comp. Physiol. A , –. Strausfeld, N. J., and A. D. Blest (). Golgi studies on insects. I. The optic lobes of Lepidoptera. Phil. Trans. R. Soc. Lond. B , –. Straw, A. D., Warrant, E. J., and D. C. O’Carroll (). A “bright zone” in male hoverfly (Eristalis tenax) eyes and associated faster motion detection and increased contrast sensitivity. J. Exp. Biol. , –. Sweeney, A. M., Des Marais, D. L., Ban, Y. A., and S. Johnsen (). Evolution of graded refractive index in squid lenses.
J. R. Soc. Interface , –. Sweeney, A. M., Jiggins, C., and S. Johnsen (). Polarized light as a butterfly mating signal. Nature , –. Sweeting, L. M. (). Light your candy. ChemMatters (), –. Talbot, C. M., and N. J. Marshall (). The retinal topography of three species of coleoid cephalopod: Significance for perception of polarised light. Phil. Trans. R. Soc. Lond. B , –. Taylor, W. R., and D. I. Vaney (). New directions in retinal research. Trends Neurosci. , –. Temple, S., Hart, N. S., Marshall, N. J., and S. P. Collin (). A spitting image: Specializations in archerfish eyes for vision at the interface between air and water. Proc. R. Soc. Lond. B , –. Temple, S. E., Pignatelli, V., Cook, T., How, M. J., Chiou, T. H., Roberts, N. W., and N. J. Marshall (). High-resolution polarisation vision in a cuttlefish. Curr. Biol. , R–R. Thayer, G. H., and A. H. Thayer (). Concealing-Coloration in the Animal Kingdom: An Exposition of the Laws of Disguise through Color and Pattern: Being a Summary of Abbott H. Thayer’s Discoveries. New York: Macmillan. Theobald, J. C., Greiner, B., Wcislo, W. T., and E. J. Warrant (). Visual summation in night-flying sweat bees: A theoretical study. Vision Res. , –. Thomas, R. J., Székely, T., Powell, R. F., and I. C. Cuthill (). Eye size, foraging methods and the timing of foraging in shorebirds. Funct. Ecol. , –. Trujillo-Cenóz, O. (). The structural organization of the compound eye in insects. In M.G.F. Fuortes (Ed.), Handbook of Sensory Physiology, Vol. VII/, pp. –. Berlin: Springer. Tucker, V. A., Tucker, A. E., Akers, K., and J. H. Enderson (). Curved flight paths and sideways vision in peregrine falcons (Falco peregrinus). J. Exp. Biol. , –.
Tuthill, J. C., and S. Johnsen (). Polarization sensitivity of the red swamp crayfish Procambarus clarkii enhances the detection of moving transparent objects. J. Exp. Biol. , –. Ugolini, A., Melis, C., Innocenti, R., Tiribilli, B., and C. Castellini (). Moon and sun compasses in sandhoppers rely on two separate chronometric mechanisms. Proc. R. Soc. Lond. B , –. Ulrich, D. J., Essock, E. A., and S. Lehmkuhle (). Cross-species correspondence of spatial contrast sensitivity functions. Behav. Brain Res. , –. Vallet, A. M., and J. A. Coles (). The perception of small objects by the drone honeybee. J. Comp. Physiol. A , –. Van Dover, C. L., Reynolds, G. T., Chave, A. D., and J. A. Tyson (). Light at deep-sea hydrothermal vents. Geophys. Res. Lett. , –. Van Dover, C. L., Szuts, E. Z., Chamberlain, S. C., and J. R. Cann (). A novel eye in “eyeless” shrimp from hydrothermal vents of the mid-Atlantic ridge. Nature , –. Vignolini, S., Rudall, P. J., Rowland, A. V., Reed, A., Moyroud, E., Faden, R. B., et al. (). Pointillist structural color in Pollia fruit. Proc. Natl. Acad. Sci. USA , –. Vogt, K. (). Optische Untersuchungen an der Cornea der Mehlmotte Ephestia kühniella. J. Comp. Physiol. , –. Vogt, K. (). Zur Optik des Flußkrebsauges. Z. Naturforsch. , . Vorobyev, M. (a). Costs and benefits of increasing the dimensionality of colour vision system. In C. Taddei-Ferretti (Ed.), Biophysics of Photoreception, Molecular and Phototransductive Events, Vol. , pp. –. Singapore: World Scientific. Vorobyev, M. (b). Discrimination of natural colours and receptor spectral sensitivity functions. In C. Taddei-Ferretti (Ed.), Biophysics of Photoreception, Molecular and Phototransductive Events, Vol. , pp. –. Singapore: World Scientific. Vorobyev, M. (). Coloured oil droplets enhance colour discrimination. Proc. R. Soc. Lond. B , –. Vorobyev, M., and R. Menzel (). Flower advertisement for insects: Bees, a case study. In S. N. Archer, M.B.A. Djamgoz, E. R. Loew, J. C. Partridge, and S.
Vallerga (Eds.), Adaptive Mechanisms in the Ecology of Vision, pp. –. London: Kluwer Academic Publishers. Vorobyev, M., and D. Osorio (). Receptor noise as a determinant of colour thresholds. Proc. R. Soc. Lond. B , –. Vukusic, P., and J. R. Sambles (). Photonic structures in biology. Nature , –. Wagner, H. J., Douglas, R. H., Frank, T. M., Roberts, N. W., and J. C. Partridge (). A novel vertebrate eye using both refractive and reflective optics. Curr. Biol. , –. Wagner, H.-J., Fröhlich, E., Negishi, K., and S. P. Collin (). The eyes of deep sea fish II. Functional morphology of the retina. Prog. Ret. Eye Res. , –. Walls, G. L. (). The Vertebrate Eye and Its Adaptive Radiation. Bloomfield Hills, MI: The Cranbrook Press. Walton, A. J. (). Triboluminescence. Adv. Phys. , –. Wardill, T. J., List, O., Xiaofeng, L., Dongre, S., McCulloch, M., Ting, C.-Y., et al. (). Multiple spectral inputs improve motion discrimination in the Drosophila visual system. Science , –. Warrant, E. J. (). Seeing better at night: Life style, eye design and the optimum strategy of spatial and temporal summation. Vision Res. , –. Warrant, E. J. (). The eyes of deep-sea fishes and the changing nature of visual scenes with depth. Phil. Trans. R. Soc. Lond. B , –. Warrant, E. J. (). The design of compound eyes and the illumination of natural habitats. In F. G. Barth and A. Schmid (Eds.), Ecology of Sensing, pp. –. Berlin, Heidelberg: Springer-Verlag. Warrant, E. J. (). Vision in the dimmest habitats on earth. J. Comp. Physiol. A , –. Warrant, E. J. (). The sensitivity of invertebrate eyes to light. In E. J. Warrant and D. E. Nilsson (Eds.), Invertebrate Vision, pp. –. Cambridge: Cambridge University Press.
Warrant, E. J. (a). Nocturnal vision. In T. Albright and R. H. Masland (Vol. Eds.), A. I. Basbaum, A. Kaneko, G. M. Shepherd, and G. Westheimer (Ser. Eds.), The Senses: A Comprehensive Reference, Vol. : Vision II, pp. –. Oxford: Academic Press. Warrant, E. J. (b). Seeing in the dark: Vision and visual behaviour in nocturnal bees and wasps. J. Exp. Biol. , –. Warrant, E. J., Bartsch, K., and C. Günther (). Physiological optics in the hummingbird hawkmoth: A compound eye without ommatidia. J. Exp. Biol. , –. Warrant, E. J., and M. Dacke (). Visual orientation and navigation in nocturnal arthropods. Brain Behav. Evol. , –. Warrant, E. J., and M. Dacke (). Vision and visual navigation in nocturnal insects. Annu. Rev. Entomol. , –. Warrant, E. J., Kelber, A., Gislén, A., Greiner, B., Ribi, W., and W. T. Wcislo (). Nocturnal vision and landmark orientation in a tropical halictid bee. Curr. Biol. , –. Warrant, E. J., Kelber, A., and N. P. Kristensen (). Eyes and vision. In N. P. Kristensen (Ed.), Handbook of Zoology, Vol. IV, Part : Lepidoptera, Moths and Butterflies, Vol. , pp. –. Berlin: Walter de Gruyter. Warrant, E. J., Kelber, A., Wallén, R., and W. Wcislo (). Ocellar optics in nocturnal and diurnal bees and wasps. Arthrop. Struct. Devel. , –. Warrant, E. J., and N. A. Locket (). Vision in the deep sea. Biol. Rev. , –. Warrant, E. J., and P. D. McIntyre (). Limitations to resolution in superposition eyes. J. Comp. Physiol. A , –. Warrant, E. J., and P. D. McIntyre (). Strategies for retinal design in arthropod eyes of low F-number. J. Comp. Physiol. A , –. Warrant, E. J., and P. D. McIntyre (). Arthropod eye design and the physical limits to spatial resolving power. Prog. Neurobiol. , –. Warrant, E. J., and D. E. Nilsson (). Absorption of white light in photoreceptors. Vision Res. , –. Warrant, E. J., Porombka, T., and W. H. Kirchner (). Neural image enhancement allows honeybees to see at night. Proc. R. Soc. Lond. B , –. Wässle, H., and B. Boycott ().
Functional architecture of the mammalian retina. Vision Res. , –. Waterman, T. H. (). Polarization sensitivity. In H. Autrum (Ed.), Handbook of Sensory Physiology, Vol. VII/B, pp. –. Berlin, Heidelberg, New York: Springer. Wcislo, W. T., and S. M. Tierney (). Behavioural environments and niche construction: The evolution of dim-light foraging in bees. Biol. Rev. , –. Weckström, M., Järvilehto, M., and K. Heimonen (). Spike-like potentials in the axons of nonspiking photoreceptors. J. Neurophysiol. , –. Weckström, M., and S. B. Laughlin (). Visual ecology and voltage-gated ion channels in insect photoreceptors. Trends Neurosci. , –. Wehner, R. (). Spatial vision in arthropods. In H. Autrum (Ed.), Handbook of Sensory Physiology, Vol. VII/C, pp. –. Berlin, Heidelberg, New York: Springer. Wehner, R. (). The perception of polarized light. Symp. Soc. Exp. Biol. , –. Wehner, R. (). “Matched filters”—neural models of the external world. J. Comp. Physiol. A , –. Wehner, R. (). Polarization vision—a uniform sensory capacity? J. Exp. Biol. , –. Wehner, R. (). Desert ant navigation: How miniature brains solve complex tasks. J. Comp. Physiol. A , –. Wehner, R., and G. D. Bernard (). Photoreceptor twist—a solution to the false-color problem. Proc. Natl. Acad. Sci. USA , –. Wehner, R., and B. Lanfranconi (). What do ants know about the rotation of the sky? Nature , –. Wehner, R., and F. Räber (). Visual spatial memory in desert ants, Cataglyphis bicolor (Hymenoptera: Formicidae). Experientia , –.
Werringloer, A. (). Die Sehorgane und Sehzentren der Dorylinen nebst Untersuchungen über die Facettenaugen der Formiciden. Z. Wiss. Zool. , –. White, C. R., Day, N., Butler, P. J., and G. R. Martin (). Vision and foraging in cormorants: More like herons than hawks? PLoS ONE (): e. White, S. N., Chave, A. D., and G. T. Reynolds (). Investigation of ambient light emission at hydrothermal vents. J. Geophys. Res. , –. Widder, E. A. (). A predatory use of counterillumination by the squaloid shark, Isistius brasiliensis. Env. Biol. Fish , –. Widder, E. A. (). Bioluminescence. In S. N. Archer, M.B.A. Djamgoz, E. Loew, J. C. Partridge, and S. Vallerga (Eds.), Adaptive Mechanisms in the Ecology of Vision, pp. –. Dordrecht: Kluwer Academic Publishers. Widder, E. A. (). Bioluminescence in the ocean: Origins of biological, chemical, and ecological diversity. Science , –. Wiederman, S. D., and D. C. O’Carroll (). Selective attention in an insect visual neuron. Curr. Biol. , –. Wikler, K. C., and P. Rakic (). Distribution of photoreceptor subtypes in the retina of diurnal and nocturnal primates. J. Neurosci. , –. Wilcox, J. G., and H. B. Barlow (). The size and shape of the pupil in lightly anaesthetized cats as a function of luminance. Vision Res. , –. Wilkens, L. A. (). Primary inhibition by light: A unique property of bivalve photoreceptors. Am. Malac. Bull. , –. Williams, D., and P. D. McIntyre (). The principal eyes of a jumping spider have a telephoto component. Nature , –. Wilson, M. (). The functional organisation of locust ocelli. J. Comp. Physiol. , –. Wright, P. C. (). Home range, activity pattern and agonistic encounters of a group of night monkeys (Aotus trivirgatus) in Peru. Folia Primatol. , –. Wright, P. C. (). The nocturnal primate niche in the New World. J. Hum. Evol. , –. Wystrach, A., Schwarz, S., Schultheiss, P., Beugnon, G., and K. Cheng (). Views, landmarks, and routes: How do desert ants negotiate an obstacle course? J. Comp. Physiol. A , –.
Yahel, R., Yahel, G., Berman, T., Jaffe, J., and A. Genin (). Diel pattern with abrupt crepuscular changes of zooplankton over a coral reef. Limnol. Oceanogr. , –. Yamada, E. S., Silveira, L.C.L., Perry, V. H., and E.C.S. Franco (). M and P retinal ganglion cells of the owl monkey: Morphology, size and photoreceptor convergence. Vision Res. , –. Yamamoto, T., Tasaki, K., Sugawara, Y., and A. Tonosaki (). The fine structure of the octopus retina. J. Cell. Biol. , –. Yeandle, S. (). Evidence of quantized slow potentials in the eye of Limulus. Am. J. Ophthalmol. , –. Yoshida, A., Motoyama, M., Kosaku, A., and K. Miyamoto (). Antireflective nanoprotuberance array in the transparent wing of a hawkmoth, Cephonodes hylas. Zool. Sci. , –. Young, R. E., and F. M. Mencher (). Bioluminescence in mesopelagic squid: Diel color change during counterillumination. Science , –. Young, R. E., and C.F.E. Roper (). Bioluminescent countershading in mid-water animals: Evidence from living squid. Science , –. Young, R. E., and C.F.E. Roper (). Intensity regulation of bioluminescence during countershading in living midwater animals. Fish. Bull. , –. Young, R. W. (). Visual cells. Sci. Am. , –. Young, S., and V. A. Taylor (). Visually guided chases in Polyphemus pediculus. J. Exp. Biol. , –. Zahavi, A., Zahavi, A., Balaban, A., Ely, N., and M. P. Ely (). The Handicap Principle: A Missing Piece of Darwin’s Puzzle. Oxford: Oxford University Press. Zaneveld, J.R.V., and W. S. Pegau (). Robust underwater visibility parameter. Opt. Express , –.
Zeiger, J., and T. H. Goldsmith (). Spectral properties of porphyropsin from an invertebrate. Vision Res. , –. Zeil, J. (a). Sexual dimorphism in the visual system of flies: The compound eyes and neural superposition in Bibionidae (Diptera). J. Comp. Physiol. , –. Zeil, J. (b). Sexual dimorphism in the visual system of flies: The free flight behaviour of male Bibionidae (Diptera). J. Comp. Physiol. , –. Zeil, J. (). Orientation flights of solitary wasps (Cerceris; Sphecidae; Hymenoptera). I. Description of flight. J. Comp. Physiol. A , –. Zeil, J., Boeddeker, N., and J. M. Hemmi (). Vision and the organization of behaviour. Curr. Biol. , R–R. Zeil, J., Boeddeker, N., and W. Stürzl (). Visual homing in insects and robots. In D. Floreano, J.-C. Zufferey, M. V. Srinivasan, and C. Ellington (Eds.), Flying Insects and Robots, pp. –. Berlin: Springer-Verlag. Zeil, J., and J. M. Hemmi (). The visual ecology of fiddler crabs. J. Comp. Physiol. A , –. Zhou, Z. J., and S. Lee (). Synaptic physiology of direction selectivity in the retina. J. Physiol. , –. Zylinski, S., and S. Johnsen (). Mesopelagic cephalopods switch between transparency and pigmentation to optimize camouflage in the deep. Curr. Biol. , –.
General Index Italicized pages indicate figures –dehydroretinal, , –hydroxyretinal, , –hydroxyretinal, , absorbance: absorptance and, –; color vision and, ; per unit length, –; visual pigments and, –, –, , , , – absorption, ; absorbance and, –; attenuating media and, –, , , –, , , –, ; camouflage and, –, , –; coefficient of, , ; color patterns and, –; color vision and, , , , ; photoreceptors and, –; polarization vision and, –, ; retinal and, (see also retinal); signals and, –, , , –; stochastic, ; visual pigments and, –, , , , –, acceptance angle: eye designs of animals and, –, ; polarization vision and, ; spatial vision and, , accommodation, , , achromatic-chromatic compromise, – achromatic vision: attenuation and, , ; color vision and, , –; signals and, active vision, – acute zone: motion vision and, , , , ; spatial vision and, , , –, , , aggressive mimicry, , , airglow, , Airy, George Biddell, Airy disk, –, amacrine cell: eye designs and, , ; motion vision and, , ; spatial vision and, , amphibians: camera eyes and, –; dim light and, –, ; visual pigments and, , – angle of polarization, , , , Animal Eyes (Land and Nilsson), anterolateral eyes, , anteromedial eyes, , ants: dim light and, , , ; eye designs and, ; orientation and, , –; spatial vision and, ,
aperture: dim light and, , , ; eye designs and, –; spatial vision and, , ; superposition, –, , – aphakic gap, area centralis, , , arthropods, ; dim light and, –, –, , , ; eye designs and, , , –; polarization vision and, , , ; signals and, ; spatial vision and, ; visual pigments and, , attenuation, ; absorption and, –, , , –, , , –, ; background light and, , , , –; beam coefficient of, –, , ; behavior and, ; bioluminescence and, , , ; birds and, –, ; chlorophyll and, ; coefficients of, –, ; color vision and, ; contrast and, , , , –; degree of polarization and, –; diffuse coefficient and, , ; e-vector and, ; filters and, , , , –, ; hue and, ; irradiance and, ; lenses and, , –; light and, –; microvilli and, ; monochromatic light and, , ; multiple scattering and, –, , –; path radiance and, –, , –; photons and, –, , –; photoreceptors and, , , ; point sources and, ; polarization vision and, –, ; radiance and, –, –, , –; refractive indices and, ; resolution and, , –; retinas and, , ; scattering and, , –, , –; sensitivity and, , –, , –, –, ; sighting distance and, –; signals and, , , ; signal-to-noise ratio (SNR) and, ; spatial frequency and, –, ; spectral sensitivity and, , –, ; terrestrial environments and, –; ultraviolet light and, ; water and, –, –; wavelength and, , –, – auditory cues, , aurora, –, Autrum, Hansjochem,
birefringence: camouflage and, –, ; eye designs and, , ; polarization vision and, –, –; signals and, –, blackbody radiation, , , blindness, ; cavefish and, , ; color, ; dim light and, , , blind spots, , , blowflies, , , body temperature, , , Brewster’s angle, , brightness: color vision and, , ; eye designs and, ; intensity and, –, –, , , , (see also intensity); light and, –, , , , , , , , , , , , , , ; motion vision and, ; orientation and, ; polarization vision and, , , ; pupils and, ; signals and, , ; visual pigments and, , bright zones: motion vision and, –; polarization vision and, , , , Buschbeck, Elke, butterflies: color vision and, , , , , , , ; eye designs and, –, ; orientation and, –; polarization and, , –, ; signals and, , , , , , –, ; spatial vision and, –; visual pigments and, , , , , butterflyfish, , camera eyes: amphibious vision and, –; deep-sea, –; dim light and, , , –, ; eye designs and, –, ––; invertebrates and, –; Matthiessen’s ratio and, ; negative lenses and, –; neural adaptations and, –; nocturnal, –; optical adaptations and, –; optics of, –; pupils and, –; retinas and, –; sampling stations of, –; spatial vision and, –, ; telephoto components and, –; vertebrates and, – camouflage: absorption and, –, , –; background light and, , –, ; bioluminescence and, –, , –, –; birefringence and, –, ; color patterns and, , , –; counterillumination and, , , , –, , ; crustaceans and, , , , , ; detection vs. recognition and, –; evolution and, –, ; fish and, –, , –, , –; insects and, , , , ; invertebrates and, ; mirroring and, –, –, ; predators and, , , , ; prey and, –, , , , ; reflection and, , , , ,
–, ; refractive indices and, , ; spectral sensitivity and, , ; transmitted light and, –; transparency and, –; ultraviolet light and, , , ; visual pigments and, –, , –, –; water and, , –, , , –, – canopy orientation, – carbon rings, Carleton, Karen, cartridges, – cats: dim light and, ; eye designs and, ; spatial vision and, –, ; visual pigments and, caustic flicker, cephalopods: dim light and, , , ; eye designs and, , , , ; motion vision and, , ; optical environments and, –; polarization vision and, , , , –; signals and, , –, –, ; spatial vision and, , , chameleons, , , , channel number, –, Chappuis band, chemiluminescence: attenuation and, , , ; bioluminescence and, , –, , , –, , , , –, –, , , , , , , , , , –, –, , –, –; color vision and, –, , , ; dim light and, , , , ; eye designs and, , ; optical environments and, , –; signals and, –, , –, –; spatial vision and, , –; visual pigments and, Cheng, Ken, chlorophyll: attenuation and, ; light and, , –; visual pigments and, chromatic aberration: color vision and, ; eye designs and, , ; spatial vision and, chromatic contrast, –, , chromaticity, , , chromophores: –dehydroretinal and, , ; –hydroxyretinal and, , ; color vision and, ; G-protein-coupled receptors (GPCRs) and, –, ; opsins and, – (see also opsins); polarization vision and, –; retinal, –, , – (see also retinal); visual pigments and, –, –, –, , – CIE (International Commission on Illumination), , circularly polarized light (CPL), clams, , , , , clearing agent, cognitive psychology,
color-blindness, color constancy, color patterns: absorption and, –; camouflage and, , , –; polarization vision and, –; signals and, , – color temperature, ; blackbody radiation and, ; daylight and, ; mechanoluminescence and, ; Planckian locus and, ; starlight and, color vision: absorbance and, ; absorptance and, , , , ; achromatic-chromatic compromise and, –; achromatic vision and, , –; aquatic animals and, –; attenuation and, ; basics of, –; bees and, –, , , ; behavior and, –, , , , , –; bioluminescence and, –, ; bipolar cells and, ; birds and, –, , , –, , ; brightness and, , ; caustic flicker and, ; channel number and, –, , –; channel overlap and, –; chromatic aberration and, ; chromaticity and, , ; chromophores and, ; color spaces and, –; compound eyes and, , , , , ; cones and, , , , –, , –, –, ; contrast and, , , , , –, , , , ; cornea and, ; crustaceans and, , , , –; daylight and, ; dichromatic, , –, , , –, , , , , ; dim light and, , ; evolution and, –, –, , , –; far-red channels and, –; filters and, , –, , , , , , –, , –; fish and, , , –, , –; flies and, , , –, –; focusing and, ; frequency and, , ; grades of, ; hue and, , , , , , , ; humans and, , , , , , , , , , ; insects and, , –, , –; intensity and, , , , ; invertebrates and, , , , , ; irradiance and, , ; lambda-max wavelength and, , , , ; lenses and, ; long-wavelength studies and, –; mammals and, –, , , ; Maxwell triangle and, –, , ; monkeys and, ; monochromatic light and, –, , , , –, , , , , , , , , , , , , , , ; moonlight and, –; noise and, –, , –; oil droplets and, , , , , –, ; ommatidium and, –, , ; opponency and, , , , , ; opsins and, , , ; parasol ganglion cells and, , ; photons and, –, , ; photoreceptors and, –, –, –; phototaxis and, ;
color vision (continued) plumage and, , , , , ; polarization vision and, , , ; predators and, , , , –, ; prey and, , , , , ; primates and, –, –; radiance and, , , , ; reflection and, , ; reptiles and, –, , ; retinal and, , , , –, ; retinas and, , , , –, , –; rhabdoms and, , , ; rods and, , , , , ; saturation and, ; sensitivity and, –, ; sensor fusion and, ; sex and, , , , ; signal-to-noise ratio (SNR) and, –; skylight and, –, ; specific tasks and, –; spectra and, –; spectral sensitivity and, , , , , –, , , –, , ; stomatopods and, –; sunlight and, , ; taxis and, ; terrestrial color cycle and, –; tetrachromatic, –, , , , , –, , ; trichromatic, , , , –, , –, , , , ; twilight and, –, , ; ultraviolet light and, , , , , –, –, ; vertebrates and, , , , , , , –; visual field regionalization and, –; visual pigments and, , , –, , , , –, , ; water and, , , , , –, –; wavelength and, – compound eyes: acute zones and, , , –, , , , , , , ; apposition, –; color vision and, , , , , ; deep-sea, –; dim light and, –, , , –; evolution and, , ; eye designs and, , , –; motion vision and, , , , –; nocturnal, –; orientation and, , ; polarization vision and, , , ; popularity of, ; sampling stations of, –; spatial vision and, –, –, , ; superposition, – concave mirror eyes, – cones: color vision and, , , , –, , –, –, ; dim light and, , –; double or twin (D/T), –, , , , , , , ; eye designs and, , , –, ; light and, , , , –; long-wavelength sensitive, , , , –, –; medium-wavelength sensitive, , , , , ; offset classes and, –; oil droplets and, , , , –, , , , , –, , ; parasol ganglion cells and, , ; polarization vision and, –, , ; short-wavelength sensitive, , , –, –; spatial vision and, –; visual pigments and, –, , –
contrast, ; attenuation and, , , , –; color vision and, , , , , –, , , , ; dim light and, –, , , , ; eye designs and, –, , , ; filters and, –; motion vision and, , ; orientation and, –, , , –, , , –; polarization vision and, , , , , –, –, –; resolution and, –; sensitivity and, –; sighting distance and, –; spatial vision and, , , , , ; thresholds of, –; visual pigments and, . See also eye movements; intensity cornea: acute zone and, (see also acute zone); attenuation and, , ; bright zone and, (see also bright zone); color vision and, ; dim light and, , , , , ; eye designs and, –, –; ommatidium and, (see also ommatidium); polarization vision and, ; spatial vision and, , , , , , , ; superposition aperture and, –, , – cosine corrector, Cott, Hugh, counterillumination: camouflage and, , , , –, , ; color vision and, ; light and, , crabs: color vision and, , ; dim light and, ; eye design and, , –, ; motion vision and, –; orientation and, –; polarization vision and, , ; spatial vision and, –; visual pigments and, , , , crayfish, –, , , , crustaceans: attenuation and, ; camouflage and, , , , , ; color vision and, , , , –; dim light and, , ; eye designs and, , , , , ; motion vision and, ; optical environment and, –, ; orientation and, ; polarization vision and, , –, , ; signals and, , , , , ; spatial vision and, , ; stomatopods, , , , , , , –, , , , , , , , , , ; visual pigments and, , , , , – CSTMD cell, cuttlefish, , , , –, , , , , crystalloluminescence, dark noise, , , , Dartnall, H. J. A., daylight: color temperature of, ; color vision and, ; dim light and, , , , –, ; eye designs and, , , ; optical environments and, –, –, ; signals and, ;
spatial vision and, –; visual pigments and, , degree of polarization: attenuation and, –; polarization vision and, –, , , –, ; signals and, dehydroretinal, –, , Denton, Eric, dichroism, , , diffraction: Airy disk and, –, ; optical building blocks and, ; spatial vision and, ; visual pigments and, dim light, ; absorption and, , –; amphibians and, –, ; ants and, , , ; aperture and, , , ; aphakic gap and, ; arthropods and, –, –, , , ; bees and, –, , , –; behavior and, , , –, , ; bioluminescence and, , , ; bipolar cells and, ; birds and, , –, , –; blindness and, , , ; camera eyes and, , , –, ; cartridges and, –; cats and, ; cephalopods and, , , ; color vision and, , ; compound eyes and, –, , , –; cones and, , –; contrast and, –, , , , ; cornea and, , , , , ; crustaceans and, , ; daylight and, , , , –, ; evolution and, , , , ; extended scenes and, ; eye designs and, , , ; fish and, , –; flies and, , ; F-number and, , , , ; focal length and, –, ; focusing and, –, ; fovea and, –; frequency and, , ; gratings and, ; humans and, –, –; insects and, –, , , , , ; intensity and, , , , ; landmarks and, –, , , ; lenses and, , , –, –, ; mammals and, –, –, ; microvilli and, ; midget ganglion cells and, ; mirrors and, ; monkeys and, , –, –; monochromatic light and, , , , ; moonlight and, –, , ; neural adaptations and, –; noise and, , , ; oil droplets and, ; ommatidium and, –, ; opsins and, ; optical adaptations for, –; photons and, , , , –; photoreceptors and, , , –, , , –, –; pixels and, ; polarization vision and, , , ; predators and, , , , , ; prey and, –, , , ; primates and, , , , –, ; pupils and, –, , ; reptiles and, ; resolution and, , –, , , –,
–, ; retinal and, , , –; retinal ganglion cell and, , ; retinas and, –; rhabdoms and, , , ; rods and, , –; scattering and, ; scotopic sensitivity and, , , –; sex and, ; signal-to-noise ratio (SNR) and, –, , ; spatial frequency and, ; spatial summation and, –; spectral sensitivity and, ; square root law of visual detection and, ; starlight and, , , , ; sunlight and, , , ; sunset and, ; tapeta structures and, –; temporal summation and, –; twilight and, , ; ultraviolet light and, ; vertebrates and, , –, , , , –, ; visual fields and, , , , ; visual pigments and, , , –, ; wasps and, , , ; water and, , –, – dinosaurs, diopter (D) measurement, direction-selective ganglion cells (DSGCs), , DIT cells, – dolphins, , dorsal rim area (DRA), , , , , , , , double twin (DT) cones, , , , , , , , dragonflies, ; color vision and, ; eye designs and, –; motion vision and, , , –, , ; polarization vision and, ; spatial vision and, , –, , , droneflies, eccentricity, –, , , electroretinogram (ERG), , elementary motion detection (EMD), Entine, Gerald, equations: attenuation, –, –, , ; camera eyes, ; compound eyes, ; light in water/air interface, ; monochromatic point, ; polarization vision, ; resolution, –, ; sensitivity, –; sighting distance, e-vector: attenuation and, ; degree of, –, , , , , , ; polarization vision and, –, , , , , , , , , , –, , evolution: camouflage and, –, ; color vision and, –, –, , , –; dim light and, , , , ; eye designs and, –, , , ; signals and, –, ; spatial vision and, –, , , –, ; visual pigments and, , , , , Exner, Sigmund, , , ,
extended scenes: dim light and, ; optical building blocks and, , –; spatial vision and, , – eye designs: acceptance angle and, –, ; accommodation and, , ; amacrine cell and, , ; ants and, ; aperture and, –; arthropods and, , , –; behavior and, , , , ; bioluminescence and, , ; bipolar cells and, ; birds and, –; birefringence and, , ; brightness and, ; camera eyes and, –; cats and, ; cephalopods and, , , , ; chromatic aberration and, , ; compound eyes and, , , –; concave mirror eyes and, –; cones and, , , –, ; contrast and, –, , , ; cornea and, –, –; crustaceans and, , , , , ; daylight and, , , ; dim light and, , , ; evolution and, –, , , ; fish and, –, –, , ; flies and, –, ; F-number and, , , ; focal length and, , –, ; focusing and, –, , –; fovea and, ; humans and, –; insects and, , –, , , ; intensity and, ; inter-receptor angle and, , ; invertebrates and, –, –, –; lenses and, –; mammals and, –; microvilli and, , , ; monochromatic light and, ; ommatidium and, –; opsins and, , –; photoreceptors and, –, –, , , –, , , –; phototaxis and, ; pigment-pit eyes and, –, , ; pinhole eye and, –; pixels and, ; polarization vision and, , , , ; predators and, , ; prey and, , , , ; pupils and, –, –, ; reflection and, –, , ; refractive indices and, , –, , , –, , ; reptiles and, ; resolution and, –, , , , –, –; retinal and, –, , , , ; retinas and, –, , –, , , –, ; rhabdoms and, , , , –, , , ; rods and, –; sensitivity and, –, , –, –; shielding and, –, ; spectral sensitivity and, ; tapeta structures and, ; turtles and, –, ; ultraviolet light and, ; vertebrates and, –, –, –; visual fields and, , , ; water and, –, ; wavelength and, , , , , eye glow, , , , , eye movements: aerial interceptors and, –; image stabilization and, –; mantis shrimps and, –; motion stimuli and, – (see also motion vision); predators and, , , –, ; rotation and, , , , ,
–, , –; tracking behavior and, –; vestibulo-ocular reflex (VOR) and, eyes: anterolateral, , ; anteromedial, , ; camera, (see also camera eyes); compound, –, (see also compound); concave mirror, –; evolution and, –; negative lenses and, –; nocturnal, – (see also dim light); pinhole, –; posteromedian, , , ; shielded, –, ; telephoto components and, –; underfocused, , , , . See also specific component far-red light, –, – figure-detection (FD) cells, filters: attenuation and, , , , –, ; color vision and, , –, , , , , , –, , –; contrast and, –; exclusion of information and, ; lateral filtering and, ; matched, , , ; motion vision and, ; oil-droplet, , , , –, , , , , –, , ; optical environments and, ; photoreceptors and, , , –, , , , , –, , ; polarization vision and, , , ; sighting distance and, –; signals and, , ; spatial vision and, , ; visual pigments and, – fireflies, fish, xix, –, –; attenuation and, , –, –, , ; camouflage and, –, , –, , –; color vision and, , , –, , –; dim light and, , –; eye designs and, –, –, , ; motion vision and, –, , ; optical environments and, –; orientation and, –, ; polarization vision and, , , , , , –, –; signals and, , ; spatial vision and, , , , –; spectral sensitivity and, –, , , , ; vision variability in, –; visual pigments and, , –, , , –, , – flies, , –; color vision and, , , –, –; dim light and, , ; eye designs and, –, ; motion vision and, , , , , –, , ; orientation and, , ; polarization vision and, , –, –; signals and, , , , , , , –, –, ; spatial vision and, , , , –, , ; visual pigments and, , , flow fields: active vision and, –; emergence of, –; motion vision and, –, –; optic flow and, – fluorescence, –, , flying fish, –
F-number: dim light and, , , , ; eye designs and, , , ; optical building blocks and, , , focal length: dim light and, –, ; diopter (D) measurement and, ; eye designs and, , –, ; F-number and, , , , , , , , , ; lenses and, , –, , , –, , –, , –, –, ; spatial vision and, –, , – focusing, ; accommodation and, , , ; caustic flicker and, ; chromatic aberration and, –, , , , ; color vision and, ; dim light and, –, ; diopter (D) measurement and, ; eye designs and, –, , –; orientation and, , ; photoreceptors and, –; spatial vision and, , , ; spherical aberration and, , , ; superposition aperture and, –, , –; underfocused eyes and, , , , ; unfocused light and, , , forests, – Fuortes, Michelangelo, fovea: area centralis and, , , ; dim light and, –; eye designs and, ; motion vision and, , , , ; spatial vision and, , –, –, , , , , frequency: color vision and, , ; dim light and, , ; motion vision and, ; optical environments and, –; orientation and, ; photoreceptors and, ; polarization vision and, , ; spatial vision and, , , , , ; spectra and, (see also spectra) frogs, , , , –, fruit flies, , , – full-width half-max (FWHM) parameter, , galactic light, G-protein-coupled receptor (GPCR), –, gratings: dim light and, ; optical building blocks and, , –; spatial frequency and, ; spatial vision and, , gravity, gray levels, , , Hayakawa, Yoshitaro, head bob, Hecht, Selig, , helmet gecko, hoverflies, hue: attenuation and, ; color vision and, , , , , , , ; orientation and, ; polarization vision and, ; signals and, , Hughes, Austin,
humans, , ; color vision and, , , , , , , , , , ; dim light and, –, –; eye designs and, –; light and, , , , , , , ; motion vision and, –, ; orientation and, ; polarization vision and, ; signals and, , , ; spatial vision and, , –; visual pigments and, Huxley, Thomas Henry, illuminance: optical environment and, –; vector irradiance and, –, illuminants, , , , infrared light, –, , –, insects, xix; camouflage and, , , , ; color vision and, , –, , –; dim light and, –, , , , , ; eye designs and, , –, , , ; light and, ; motion vision and, , , , –, –, –; orientation and, , , , ; polarization vision and, , , , , , , , , –; signals and, , , , ; spatial vision and, , , –, , , ; visual pigments and, , , , , . See also specific insect intensity: color vision and, , , , ; dim light and, , , , ; eye designs and, ; motion vision and, ; optical building blocks and, , –, , , , , ; optical environments and, –, –, , , , ; orientation and, –; polarization vision and, , , , , , ; signals and, , , ; spatial vision and, internal clocks, inter-receptor angle: eye designs and, , ; optical building blocks and, , , ; spatial vision and, –, , , , intrarhabdomal filter, , , invertebrates, ; camera eyes and, –; camouflage and, ; color vision and, , , , , ; eye designs and, –, –, –; motion vision and, , , ; orientation and, , ; polarization vision and, , , ; signals and, ; spatial vision and, –, –, , ; visual pigments and, , , , , irradiance: attenuation and, ; color vision and, , ; moonlight and, ; optical environments and, –, ; orientation and, ; photoreceptors and, , ; scalar, , , ; signals and, ; vector, –, jellyfish, , , , , , , , , , kinesis, , , –, –
lambda-max wavelength: color vision and, , , , ; visual pigments and, –, –, , , –, , Land, Michael, , , landmarks: dim light and, –, , , ; navigation and, –, , , , , –; orientation and, –, , , , , – lenses: accommodation and, , , ; acute zone and, (see also acute zone); aperture and, , , –, –, (see also aperture); aphakic gap and, ; attenuation and, , –; bright zone and, (see also bright zone); chromatic aberration and, –, , , , ; color vision and, ; cornea and, (see also cornea); dim light and, , , –, –, ; eye designs and, –; focal length and, , –, , , –, , –, , –, –, ; Matthiessen, ; modulation transfer function (MTF) and, , –; motion vision and, , ; multifocal, , ; negative, , , –; ommatidium and, (see also ommatidium); optical environments and, –; orientation and, ; positive, ; refractive, ; signals and, , ; spatial frequency and, (see also spatial frequency); spatial vision and, , –, , , –, ; superposition and, –, , –; telephoto components and, –; visual pigments and, , Liebman, Paul, light, –; absorbance and, –, –, , , , –; absorption and, , , , , –, –, , –; airglow and, , ; attenuation and, –; aurora and, –, ; background, –, , , , , , , –, , –, ; behavior and, –, , ; bioluminescence and, , –, , , ; blackbody radiation and, , , ; brightness and, –, , , , , , , , , , , , , , ; chemiluminescence and, ; chromaticity and, ; color vision and, –, – (see also color vision); cones and, , , , –; contrast and, (see also contrast); counterillumination and, , , , –, , ; daylight and, – (see also daylight); dim, – (see also dim light); electromagnetism and, ; elliptically polarized, , ; far-red, –, –; flashes of, , , , , , , , , , , , , , ; fluorescence and, –, , ; forests and, –; full-width half-max (FWHM) parameter and, , ; galactic, ; humans
and, , , , , , , ; illuminance and, –; intensity and, –, –, , , , (see also intensity); infrared, –, , –, ; irradiance and, (see also irradiance); lenses and, (see also lenses); lumen measurement and, , , ; measurement of, –; monochromatic, –, , , , –, , , , , , , , , , , , , , , , , ; moonlight and, , –, –, –, , , ; photokinesis and, –; photons and, – (see also photons); polarized, , –, –, –, –, , , , , ; quantal terms for, –, , , , , –, –, –; radiance and, , , , , , – (see also radiance); scattering and, , , –; sensitivity to, , , , –; spectrometers and, , , ; spectra and, , , , , , , –, , (see also spectra); starlight and, xix, –, , , , , , ; twilight and, , –, , (see also twilight); ultraviolet, , , (see also ultraviolet light); understanding nature of, –; visual pigments and, – (see also visual pigments); water and, –; wavelength and, –, – (see also wavelength); white, , , , , , ; zodiacal, –, light detectors, , light guide, light pollution, , , , , linear polarization, , , , , –, , , , lobula plate, , , –, , , locomotion, – looming stimulus, , , , , love spot, –, , lumens, , , lux measurement, , Lythgoe, John, xix magnetic fields, , mammals: color vision and, –, , , ; dim light and, –, –, ; eye designs and, –; motion vision and, ; optical environments and, ; orientation and, , , ; spatial vision and, , , ; visual pigments and, , –, , , manatees, matched filters, ; polarization vision and, , ; spatial vision and, , –, – Matthiessen, Ludwig, Matthiessen lens, Matthiessen’s ratio, , Maxwell triangle, –, ,
McFall-Ngai, Margaret, McFarland, Bill “Mac”, xix, mechanoluminescence, , – mice, , , , , Michelson, Michelson contrast, – microspectrophotometry (MSP), , , microvilli: attenuation and, ; dim light and, ; eye designs and, , , ; illustrated structure of, ; photoreceptors and, –, , , , , , , –, , ; polarization vision and, , – midget ganglion cell, –, , Milky Way, –, , , mirroring: camouflage and, –, –, ; concave mirror eyes and, –; dim light and, ; optical environments and, , –; polarization vision and, modulation transfer function (MTF), , – Moken people, monkeys, ; color vision and, ; dim light and, , –, –; spatial vision and, monochromatic light: attenuation and, , ; color vision and, , , , ; dim light and, , , , ; eye designs and, ; optical building blocks and, –, , ; optical environments and, –, ; signals and, , , ; visual pigments and, , monocular vision, , , moon compass, , moonlight: dim light and, –, , ; optical environments and, , –, –; signals and, moths, , , –, , –, , , , , , , , motion parallax, , , –, motion vision: active vision and, –; acute zone and, , , , ; aerial interception and, –; amacrine cell and, , ; behavior and, –, –, –, –; bipolar cells and, –; birds and, , –, ; brightness and, ; bright zones and, –; cephalopods and, , ; compound eyes and, , , , –; contrast and, , ; crustaceans and, ; direction-selective ganglion cells (DSGCs) and, , ; DIT cells and, –; figure-detection (FD) cells and, ; filters and, ; fish and, –, , ; flies and, , , , , –, , ; flow fields and, –, –; fovea and, , , , ; frequency and, ; fruit flies and, , –; humans and, –, ; image stabilization and, –; insects and, , ,
, –, –, –; intensity and, ; invertebrates and, , , ; kinesis and, , , –, –; lenses and, , ; love spots and, , ; mammals and, ; noise and, ; null direction and, ; ommatidium and, , , –; optomotor response and, , ; photoreceptors and, –, , , –, , ; polarization vision and, , , ; predators and, , , –, ; prey and, , , –, ; Reichardt detector and, –, , ; resolution and, , ; retinas and, –, –, , –, , ; scanning eye control and, –; sensing motion and, –; sensitivity and, –, –, –; sex and, –; small-field small-target movement detectors (SF-STMDs) and, –, ; stimuli for, –; sunlight and, , , ; synapses and, , ; twilight and, ; vertebrates and, , , , , , , , ; vestibulo-ocular reflex (VOR) and, ; visual ecology and, , , , , , –; visual fields and, , , –, ; water and, , , , , –; wavelength and, multifocal lens, , multiple scattering, –, , – navigation: bats and, ; behavior and, –, –, , , ; birds and, , , , ; celestial cues and, –; landmarks and, –, , , , , –; multimodality and, –; odometry and, –; sensitivity and, , , , ; starlight and, ; sun compass and, , –. See also orientation Nilsson, Dan-Eric, noise: channel number effects and, –; chemical, ; color vision and, –, , –; dark, , , , ; dim light and, , , ; extrinsic, ; intrinsic, –; motion vision and, ; optical building blocks and, –, –; photon, , , , ; polarization vision and, ; signal-to-noise ratio (SNR) and, –, , , –, , , ; spatial vision and, , ; thermal, , , , ; transducer, –; visual pigments and, , , nomograms, north star, nystagmus, , octopuses, , , –, , , , , , ,
odometry, – oil droplets: color vision and, , , , , –, ; dim light and, ; photoreceptors and, , , , –, , , , , –, , ommatidium: color vision and, –, , ; dim light and, –, ; eye designs and, –; motion vision and, , , –; polarization vision and, , , –, ; spatial vision and, , , , , , –, , , , opponency: attenuation and, ; color vision and, , , , , ; polarization vision and, , opsins, –; chromophores and, –; color vision and, , , ; dim light and, ; eye designs and, , –; G-protein-coupled receptors (GPCRs) and, –, ; optical building blocks and, ; orientation and, ; rhodopsin, , , , , , ; spatial vision and, ; visual pigments and, –, , –, , , optical cross talk, , , , , optical cutoff frequency, , , , optical environments: daylight and, –, –, ; forests and, –; intensity and, –, –, , , , ; irradiance and, –, ; lenses and, –, ; mirrors and, , –; monochromatic light and, –, ; moonlight and, , –; photons and, , ; pigment-pit eyes and, –, , ; predators and, , ; prey and, ; skylight and, , , –; spectra and, , , , , , , –, , ; starlight and, xix, –, , , , , , ; sunlight and, –, –; water and, –, , , –, –, (see also water) optical sensitivity, –, –, – optic flow, – optokinesis, , – optomotor response, , orientation: ants and, , –; auditory cues and, , ; behavior and, –, –, , , ; bioluminescence and, , ; birds and, , , , ; brightness and, ; canopy, –; celestial cues and, –; chemical stimuli and, , , , , , ; compound eyes and, , ; contrast and, –, , , –, , , –; crustaceans and, ; filters and, ; fish and, –, ; flies and, , ; focusing and, , ; frequency and, –, ; gravity and, ; hue and, ; humans and, ; insects and, , , , ; intensity and, –;
internal stimuli and, , , , ; invertebrates and, , ; irradiance and, ; landmarks and, –, , , , , –; magnetic fields and, , ; mammals and, , , ; moon compass and, , ; multimodality and, –; north star and, ; opsins and, ; panorama, –; photokinesis and, –; photoreceptors and, ; phototaxis and, –; place cells and, –; polarization vision and, –, , , , , –, –, , ; predators and, –, , , ; prey and, ; primates and, ; radiance and, , , , , , –; reflection and, ; retinal ganglion cell and, ; retinas and, , ; rodents and, –; sensitivity and, , , , ; shadow response and, ; skylight and, ; snapshot model and, –; star compass and, ; starlight and, ; sun compass and, , –; sunlight and, , , –, , , –; sunset and, , , , –, , , , ; survival and, –; taxis and, –; turtles and, –; twilight and, , –, , ; ultraviolet light and, ; vertebrates and, , ; visual landmarks and, –; visual odometry and, –; wasps and, –, ; water and, , –, , ; wavelength and, –, –, outer segment, , , , , –, , , , , , , – overlap: absorption and, ; binocular, , ; color vision and, –; dim light and, ; motion vision and, ; spatial vision and, , , panorama orientation, – parasol ganglion cell, –, , path radiance, –, , , – phase velocity, , – phototaxis, , , – photocyte, photokinesis, – photon bumps, , photon capture, , –, , photonic crystal, photon noise, , photons: attenuation and, –, , –; bumps and, , ; color vision and, –, , ; dim light and, , , , –; motion vision and, , ; optical environments and, –, , ; polarization vision and, , ; spatial vision and, , , , ; square root law of visual detection and, ; visual pigments and, , , –, , , –
photopic sensitivity, , , , , , photoreceptors: –dehydroretinal and, , ; –hydroxyretinal and, , ; –hydroxyretinal and, , ; absorbance and, –, –, , , , –; absorption and, –, , , –, ; attenuation and, , , ; bioluminescence and, , ; birds and, ; bumps and, , ; cells of, –; ciliary, , , , , , ; color vision and, –, –, –; cones and, , , – (see also cones); dark noise and, , , ; dim light and, , , –, , , –, –; eye designs and, –, –, , , –, , , –; filters and, , , –, , , , , , –, , ; focusing and, –; frequency and, ; G-protein-coupled receptors (GPCRs) and, –, ; illustrated structure of, ; irradiance and, , ; microvillar, –, , , , , , , –, , ; motion vision and, –, , , –, , ; oil droplets and, , , , –, –, , –, , ; optical cross talk and, , , –, ; optical sensitivity and, –, –, –; optical specializations of, –; orientation and, ; outer segment and, , , , , –, , , , , , , –; photochemical process of, –; pigment-pit eyes and, –, , ; polarization vision and, –; psychophysical experiments and, ; R–, , ; R, , , ; retinal and, –, , – (see also retinal); rhabdoms and, , –, , –, , , , –, , , , –, , –, , , , , –, , , ; rods and, – (see also rods); sensitivity and, (see also sensitivity); spatial vision and, –, , ; ultraviolet light and, (see also ultraviolet light); vertebrates and, ; wavelength and, –, , , – phototaxis: color vision and, ; eye designs and, ; orientation and, – phytoplankton, –, piezoluminescence, pigment-pit eyes, –, , pinhole eyes, – pipefish, Pirenne, Maurice, , , , pixels: dim light and, ; extended scenes and, –; eye designs and, ; lateral filtering and, ; optical building blocks and, , –, –, , –; point sources and, –; spatial vision and, place cells, –
Planckian locus, plankton, , –, , , , , , –, , Plotinus, xix plumage, , , , , point sources: attenuation and, ; optical building blocks and, –; spatial vision and, polarization vision, –; absorption and, –, ; acceptance angle and, ; angle of polarization and, , , , ; arthropods and, , , ; attenuation and, –, ; background light and, , ; behavior and, –; bioluminescence and, , ; birds and, , ; birefringence and, –, –; Brewster’s angle and, , ; brightness and, , , ; bright zones and, , , , ; cephalopods and, , , , –; chromophores and, –; circularly polarized light (CPL) and, –; color patterns and, –; color vision and, , , ; compound eyes and, , , ; cones and, –, , ; contrast and, , , , , –, –, –; cornea and, ; crustaceans and, , –, , ; degree of polarization and, –, , , –, ; dim light and, , , ; elliptically polarized light and, , ; e-vector and, –, , , , , , , , , , –, , ; eye designs and, , , , ; filters and, , , ; fish and, , , , , , –, , –; flies and, , –, –; frequency and, , ; hue and, ; humans and, ; insects and, , , , , , , , , –; intensity and, , , , , , ; invertebrates and, , , ; linear polarization and, , , , , –, , , , ; matched filters and, , ; microvilli and, , –; mirrors and, ; motion vision and, , , ; neural processing and, –; noise and, ; ommatidium and, , , –, ; opponency and, , ; orientation and, –, , , , , –, –, , ; photons and, , ; photoreceptors and, –; physics of polarized light and, –; pixels and, ; POL neurons and, –; predators and, , , ; prey and, , ; reflection and, , , , , , , –, , , ; refractive indices and, , ; retinal and, , , ; retinas and, –, , , , , ; rhabdoms and, –; rods and, –; saturation and, ; scattering and, , , , , , , , ; sensitivity and, –, –, –; sensor fusion and, ;
polarization vision (continued) sighting distance and, –; signals and, , –, , , –; skylight and, ; spatial vision and, , ; spectra and, , , , ; sun compasses and, , –, ; sunlight and, , , , , , , , ; sunset and, , , ; tapeta structures and, , ; ultraviolet light and, , –, , , ; vertebrates and, , –, ; visual pigments and, –, –; water and, –, , , –, –; wavelength and, , , , , polarized light, , ; attenuation and, ; circular, , –, –, , ; degree of, –, , , , , , ; dim light and, , , ; discrete sources of, –; extended sources of, –; eye design and, –; linear, –, , –, , –; in nature, –; physics of, – POL neurons, – polychromatic light, porphyropsin, posteromedian eyes, , , predators, –, , ; aerial interception and, –; aquatic habitat layouts and, –; attenuation and, , , ; camouflage and, , , , ; color vision and, , , , –, ; CSTMD cell and, ; dim light and, , , , , ; eye designs and, , ; motion vision and, , , –, ; optical environments and, , ; orientation and, –, , , ; polarization vision and, , , ; scanning eye control and, –; signals and, , , , ; spatial vision and, –, , –; terrestrial habitat layouts and, –; visual pigments and, prey, –, ; aerial interception and, –; aquatic habitat layouts and, –; attenuation and, ; camouflage and, –, –, , ; color vision and, , , , , ; dim light and, –, , , ; eye designs and, , , , ; motion vision and, , , –, ; optical environments and, ; orientation and, ; polarization vision and, , ; shadow response and, ; spatial vision and, –, , –, –; terrestrial habitat layouts and, –; visual pigments and, primates, ; color vision and, –, –; dim light and, , , , –, ; orientation and, ; signals and, ; spatial vision and, , ; visual pigments and, , private signals, , psychophysical experiments, ,
pupils: brightness and, ; camera eyes and, –; dim light and, –, , ; eye designs and, –, –, ; spatial vision and, –, , quantal terms: optical environments and, –, , , , , –, –, –; quantum catch and, –, –, –; radiance and, – quarter-wave retarder, , , , R– photoreceptors, , R photoreceptors, , , , , –, rabbits, , – radiance: attenuation and, –, , –; color vision and, , , , ; optical environments and, , , , , , –; path, –, , , –; signals and, –, , , , radiometry, xix, Raman scattering, –, rats, , , , , , Rayleigh scattering, , , – receptive field: light and, , –, , , , , , , , , , , , , , , –, –, , , , , –, , ; motion vision and, –, , ; spatial vision and, , , , , , , red zone, – reflectance, , , reflection: camouflage and, , , , , – , ; color vision and, , ; dim light and, ; eye designs and, –, , ; mirroring and, (see also mirroring); orientation and, ; polarization vision and, , , , , , , –, , , ; signals and, , , , , , , –, ; spatial vision and, ; structural colors and, –; tapetum lucidum and, , refractive indices: attenuation and, ; camouflage and, , , ; eye designs and, , –, , , –, , ; polarization vision and, , ; signals and, , , ; spatial vision and, , , Reichardt, Werner, Reichardt detector, –, , reptiles: color vision and, –, , ; dim light and, ; eye designs and, ; optical environments and, resolution, , ; acute zone and, (see also acute zone); attenuation and, , –; contrast discrimination and, –; dim light and, , –, , , –, –, ; eye
designs and, –, , , , –, –; fovea and, (see also fovea); image quality limitations and, –; motion vision and, , ; optical cross talk between photoreceptors and, –; pixels and, , –, –, , –, , , , ; retinal ganglion cells and, (see also retinal ganglion cells); scattering and, –; sensitivity tradeoff and, –; signals and, ; spatial, , –, , , –, –, , , –, –, –, –, , , , , , , , , , –; spatial vision and, –, –, –, , –; visual pigments and, retinal: –dehydroretinal and, , ; –hydroxyretinal and, ; –hydroxyretinal, , ; actions of, –, , –, , , , –, , –, , , , ; color vision and, , , , –, ; dim light and, , , –; eye designs and, –, , , , ; motion vision and, , , , , ; orientation and, ; polarization vision and, , , ; spatial vision and, –, , , , , ; visual field regionalization and, – retinaldehyde, retinal ganglion cell: dim light and, , ; eye designs and, ; midget, –, , ; motion vision and, ; orientation and, ; parasol, –, ; spatial vision and, –, retinal temperature, retinas, –; accessory, ; air, ; banked, ; camera eyes and, –; color vision and, , , , –, , –; dim light and, –; distal, , ; eye designs and, –, , –, , , –, ; focusing and, , –, –, –, , –, ; motion vision and, –, –, , –, , ; orientation and, , ; polarization vision and, –, , , , , ; spatial vision and, –, , , , , , , , ; tapetum lucidum and, , ; tiered, ; visual field regionalization and, –; visual pigments and, –, –, , , , –; retinol, rhabdoms: color vision and, , , ; dim light and, , , ; eye designs and, , , , –, , , ; illustrated structure of, ; microvillar, –, , , , , , , –, , ; photoreceptors and, , –, , –, , , , –, , , , –, , –, , , , , –, , , ; polarization vision and, –; spatial vision and, , , , ; structure of, –; superposition aperture and, –,
, –; visual pigments and, , –, , –, . See also rods rhodopsin, , , , , , rods: color vision and, , , , , ; dim light and, , –; eye designs and, –; macroreceptor, –; polarization vision and, –; spatial vision and, –, ; visual pigments and, –, – Rose, Albert, saccades, –, –, , sandhoppers, , , –, , saturation, , scalar irradiance, , , scallops, , – scattering: absorption and, (see also absorption); attenuation and, , –, , –; dim light and, ; multiple, , –, , –; optical environments and, , , –; polarization vision and, , , , , , , , ; Raman, –, ; Rayleigh, , , –; resolution and, –; signals and, , , , , ; spatial vision and, , ; visual pigments and, –; volume scattering function and, ; water and, – Schlaer, Simon, , Schwind, Rudolf, scorpions, , , , scotopic conditions, , , seals, , , –, self-screening, , , , sensitivity, ; background light and, –; color vision and, –; cones and, , , –, –; contrast discrimination and, –; detection of single photons and, ; dim light and, , , –; extended scenes and, –; eye designs and, –, , –, –; higher neural enhancement of, –; light and, , , , –; motion vision and, –, –, –; orientation and, , , , ; point sources and, –; polarization vision and, –, –, –; psychophysical experiments and, ; resolution and, –, (see also resolution); signals and, –, , , , ; spatial summation and, –; spatial vision and, –, , –, , , , , ; square root law of visual detection and, ; temporal summation and, –; visual pigments and, , , , , , , , –; wavelength and, , , – sensitizing pigments, – sensor fusion,
sex, ; color vision and, , , , ; dim light and, ; love spots and, –, , ; motion vision and, –; signals and, , ; spatial vision and, –, shadow response, shielded eyes, –, shrimp: color vision and, , , –; dim light and, ; eye movements and, –, –, ; optical environments and, , , ; polarization vision and, , –, , , ; signals and, , , , ; visual pigments and, sighting distance: attenuation and, –; contrast and, –; polarization vision and, –; terrestrial, –; underwater, – signals: absorption and, –, , , , –; achromatic vision and, ; arthropods and, ; attenuation and, , , ; background light and, , –, ; behavior and, , , , –; bioluminescence and, –, , –, –; birds and, , , , –, ; birefringence and, –, ; brightness and, , ; cephalopods and, , –, –, ; color patterns and, –; crustaceans and, , , , , ; daylight and, ; degree of polarization and, ; detection vs. recognition and, –; emitted light and, –; evolution and, –, ; filters and, , ; fish and, , ; flies and, , , , , , , –, –, ; G-protein-coupled receptors (GPCRs) and, –, ; hue and, , ; humans and, , , ; insects and, , , , ; intensity and, , , ; invertebrates and, ; irradiance and, ; lenses and, , ; monochromatic light and, , , ; moonlight and, ; opponency and, (see also opponency); photons and, –, –, ; polarization vision and, , –, , , –; predators and, , , , ; primates and, ; private, , ; radiance and, –, , , , ; reflection and, , , , , , , –, ; refractive indices and, , , ; resolution and, ; scattering and, , , , , ; sensitivity and, –, , , , ; sexual, , ; spectra and, , , , , –; structural colors and, –; sunlight and, , , , , ; tapeta structures and, ; transmitted light and, –; transparency and, –; ultraviolet light and, , , , , , , ; vertebrates and, , –; visual pigments and, , ; water and, –, , , –,
–; wavelength and, , , , , –, , signal-to-noise ratio (SNR): attenuation and, ; color vision and, –; dim light and, –, , ; spatial vision and, , skylight: attenuation and, ; color vision and, ; moonless nights and, –; optical environments and, , , –; orientation and, ; polarization vision and, small-field small-target movement detectors (SF-STMDs), –, snapshot model, – Snel’s window, , , , , , , solar elevation, , solar wind, solid angle, , –, , , spatial frequency: attenuation and, –, ; dim light and, ; gratings and, ; maximum detectable, , ; modulation transfer function (MTF) and, , –; spatial vision and, , –, , spatial summation, – spatial vision: acceptance angle and, , ; accommodation and, ; acute zone and, , , –, , , ; amacrine cell and, , ; ants and, , ; aperture and, , ; aquatic habitat layouts and, –; arthropods and, ; behavior and, , , , , ; bioluminescence and, , –; bipolar cells and, , ; birds and, –, , , , , ; camera eyes and, –, ; cats and, –, ; cephalopods and, , , ; chromatic aberration and, ; compound eyes and, –, –, , ; cones and, –; contrast and, , , , , ; cornea and, , , , , , , ; crustaceans and, , ; daylight and, –; diffraction and, ; ecology of, –; evolution and, –, , , –, ; extended scenes and, , –; filters and, , ; fish and, , , , –; flies and, , , , –, , ; focal length and, –, , –; focusing and, , , ; fovea and, , –, –, , , , , ; frequency and, , , , , ; gratings and, , ; humans and, , –; insects and, , , –, , , ; intensity and, ; interreceptor angle and, –, , , , ; invertebrates and, –, –, , ; lenses and, , –, , , –, ; locomotion and, –; love spots and, –; mammals and, , , ; matched filters and, , –, –; monkeys and, ; noise and, , ;
ommatidium and, , , , , , –, , , , ; opsins and, ; optic flow and, –; overlap and, , , ; photons and, , , , ; photoreceptors and, –, , ; pixels and, ; point sources and, , –; polarization vision and, , ; predators and, –, , –; prey and, –, , –, –, , ; primates and, , ; pupils and, –, , ; reflection and, ; refractive indices and, , , ; resolution and, –, –, –, , –; retinal and, –, , , , , ; retinal ganglion cells and, –; retinas and, –, , , , , , , , ; rhabdoms and, , , , ; rods and, –, ; scattering and, , ; sensitivity and, –, , –, , , , , ; sex and, –, ; signal-to-noise ratio (SNR) and, , ; spatial frequency and, , –, ; telephoto components and, –; terrain theory and, ; terrestrial habitat layouts and, –; ultraviolet light and, , ; vertebrates and, –, , , , , ; visual fields and, , , –, , ; water and, , , ; wavelength and, spectra: color vision and, –; dim light and, ; eye designs and, ; optical building blocks and, ; orientation and, , ; polarization vision and, , , , ; signals and, , , , , –; tuning and, –; visual pigments and, –. See also absorbance; absorption spectral sensitivity: attenuation and, , –, ; camouflage and, , ; color vision and, , , , –, , , –, , ; dim light and, ; eye designs and, ; fish and, –, , , , ; light and, ; orientation and, , ; polarization vision and, ; signals and, , ; visual pigments and, , , spectral tuning, – spectrometers, , , spherical aberration, , , spiders: color vision and, ; dim light and, , , , , , , , ; eye designs and, , ; motion vision and, , ; polarization vision and, , , , ; signals and, ; spatial vision and, –, , –, spook fish, , , square root law of visual detection, squid, ; dim light and, , , ; polarization vision and, , , ; signals and, ,
–, , ; spatial vision and, ; visual pigments and, , , , , sRGB, starburst amacrine cell (SBAC), , star compass, starlight: color temperature of, ; dim light and, , , , ; optical environments and, xix, –, , , , , , ; orientation and, ; visual pigments and, steradian measurement, , stomatopods, ; color vision and, , , , –, ; motion vision and, , ; polarization vision and, , , , , , ; signals and, ; visual pigments and, , strip retina, , – structural colors, , , , –, , sun compass, , – sunlight, , ; attenuation and, , ; color vision and, , ; dim light and, , , ; motion vision and, , , ; optical environments and, –, –; orientation and, , , –, , , –; polarization vision and, , , , , , , , ; signals and, , , , , ; spatial vision and, –; visual pigments and, , – sunset: attenuation and, ; color vision and, ; dim light and, ; optical environments and, , , , –, , ; orientation and, , ; polarization vision and, , , sunspots, superposition aperture, –, , – synapses, , , , , , tapetum lucidum: dim light and, –; eye designs and, ; polarization vision and, , ; signals and, target-selective descending neurons (TSDNs), , taxis: color vision and, ; eye designs and, ; orientation and, – telephoto components, – telescopes, , , , , temporal summation, – terrain theory, Thayer, Abbott, toads, , , , –, , Tokay gecko, transducer noise, – transparency: camouflage and, –; signals and, – triboluminescence, turtles: attenuation and, ; eye designs and, –, ; orientation and, –
twilight: color vision and, , ; dim light and, , ; motion vision and, ; optical environments and, , –, , ; visual pigments and, , ultraviolet light, , –; attenuation and, ; color vision and, , , , , –, – , ; dim light and, ; eye designs and, ; orientation and, ; polarization vision and, , –, , , ; signals and, , , , , , , ; spatial vision and, , ; visual pigments and, , –, , , underfocused eyes, , , , vector irradiance, –, veiling light, Vertebrate Eye and Its Adaptive Radiation, The (Walls), vertebrates: attenuation and, , ; camera eyes and, –; color vision and, , , , , , , –; dim light and, , –, , , , –, ; eye designs and, –, –, –; motion vision and, , , , , , , , ; orientation and, , ; photoreceptors and, ; polarization vision and, , –, ; signals and, , –; spatial vision and, –, , , , , ; visual pigments and, , , , –, –, , , – vestibulo-ocular reflex (VOR), visual fields: color vision and, ; dim light and, , , , ; eye designs and, , , ; motion vision and, , , –, ; regionalization and, –; spatial vision and, , , , –, , visual fixation, visual pigments: –dehydroretinal and, , ; –hydroxyretinal and, , ; –hydroxyretinal and, , ; absorbance and, –, –, , , , –; absorption and, –, , , – , ; alpha-band and, ; amino acids and, , –; amphibians and, , –; arthropods and, , ; behavior and, , , ; beta-band and, , ; birds and, ; brightness and, , ; camouflage and, –, , –, –; carbon rings and, ; cats and, ; chlorophyll and, ; chromophores and, –, –, –, , –; color vision and, , , –, , , , –, , ; cones and, –, , –; contrast and, ; crustaceans and, , , , , –, ; dark noise and, , , ; Dartnall nomogram and, ; daylight
and, , , ; developmental shifts and, –; dichroism and, ; dim light and, , , –, ; evolution and, , , , , ; eye designs and, , ; filters and, –; fish and, , –, , , –, , –; flies and, , , ; fluorescence and, ; G-protein-coupled receptors (GPCRs) and, –, ; humans and, ; insects and, , , , , ; invertebrates and, , , , , ; lambda-max wavelength and, –, –, , , –, , ; mammals and, , –, , , ; measure of useful, –; monochromatic light and, , ; noise and, , , ; oil droplets and, , , –, , , , , –, , ; opsins and, –, , –, , , ; photochemical process of, –; photons and, , , –, , , –; polarization vision and, –, –; predators and, ; prey and, ; primates and, , ; resolution and, ; retinal and, –, , –, , , , –, , –, , , , ; retinas and, –, –, , , , –; rhabdoms and, , –, , –, , , , –, , , , –, , –, , , , , –, , , ; rods and, –, –, –; scattering and, –; sensitivity and, , , , , , , , –; sensitizing pigments and, –; signals and, , ; special properties of, –; spectra and, –; starlight and, ; sunlight and, , –; tuning of, –; twilight and, , ; ultraviolet light and, , –, , , ; vertebrates and, , , , –, –, , , –; water and, –, , – visual scan, visual streak, –, Vogt, Klaus, volume scattering function, Vries, Hugo de, wasps: dim light and, , , ; orientation and, –, water, –, ; aquatic habitat layouts and, –; attenuation and, –, –; camouflage and, , –, , , –, –; Case I, ; Case II, ; color filters and, –; color vision and, , , , , –, –; dim light and, , –, –; eye designs and, –, ; motion vision and, , , , , –; optical environments and, –, , , –, ; orientation and, , –, , ; polarization vision and, –, , , –, –; scattering and,
–; sighting distance and, –; signals and, –, , , –, –; spatial vision and, , , ; temperature and, , ; visual pigments and, –, , – water fleas, , , – waveguide, , , wavelength: color vision and, –; eye designs and, , , , , ; motion vision and, ; optical environments and, –, –; orientation and, ; photoreceptors and, –, , , –; polarization vision and, , , , , ; sensitivity and, , , –, –; signals and, , , , ,
–, , ; spatial vision and, ; spectra and, (see also spectra) wavelength-specific behavior, , , Weber contrast, – Weber-Fechner law, whales, , , , , white light, , , , , , worms, , , , , , –, Yeandle, Stephen, zodiacal light, –, zooplankton, , , , ,
Index of Names Italicized pages indicate figures Adrian, W., Aho, A.-C., , , , , Allen, J. J., , Amundsen, T., Andersson, S., Arnott, H. J., Austin, A. D., Baird, E., , 295, Baldwin, J., Bang, B. G., Barber, V. C., – Barlow, H. B., , , –, , Barth, F. G., Baylor, D. A., 72, , , Bennett, A. T. D., Bernard, G. D., , Bhagavatula, P. S., Birch, D., , , 272 Blamires, S. J., Blest, A. D., , , , , , 124, Bohren, C., , , , , , , Borst, A., –, 238 Bowmaker, J. K., , 165, Boycott, B., Brady, P., Brandley, N. C., Bridgeman, C. S., Briscoe, A. D., , Britt, L. L., Brooke, M. de L., , Buddenbrock, W. von, Buschbeck, E. K., 194, , 254, Busseroles, F. de, 174, Carleton, K. C., , Cartwright, B. A., , 301 Case, J. F., , , Castenholz, A., Caveney, S., –, 115, , Chan, D. Y. C., Charles-Dominique, P., , Charman, W. N.,
Cheng, K., , Chiao, C.-C., 26, , 62 Chiou, T. H., 181, –, , Chittka, L., , , 158 Chuang, C. Y., Clarke, R. J., Cleary, P., – Clothiaux, E. E., , , , Cohen, A. C., , 322 Coles, J. A., Collett, T. S., , , 249, , 301, Collin, S. P., , 141, , , 277 Collins, C. E., , Coppens, J. E., Cott, H., , , Cratsley, C. K., Cronin, T. W., 148, , –, , , , , Cummings, M. E., , Cuthill, I. C., , Dacke, M., , 188, 199, 257, , , 267, Dartnall, H. J. A., del Mar Delgado, M., Demb, J. B., Dehnhardt, G., Denton, E., , , , – Detto, T., De Valois, S., Douglas, J. M., 181, Douglas, R. H., , 57, 154, , 181, , –, Doujak, F. E., Dubs, A., Duntley, S. Q., , , , , Easter, S. S., Eckert, M. P., , Egelhaaf, M., , Emlen, S. T., Entine, G., Ewert, J.-P., Exner, S., , –, –
Fasick, J. I., Fenk, L. M., Fineran, B. A., Fite, K. V., Forward, R. B., Jr., –, 293, Fuortes, M., Fox, H. M., , , Frank, T. M., , , Fraser, A. B., , Frederiksen, R., 72, , t, , Fried, S. I., , 239 Fritsches, K., 98, Garamszegi, L. Z., Garm, A., Garstang, R. H., Genin, A., Gilbert, C., Gislén, A., , 101 Glantz, R. M., Goldstein, D. H., Gomez, D., Gonzalez-Bellido, P. T., , , Gorman, A. L. F., Govardovskii, V. I., , 40, Graham, P., , , 306 Greiner, B., , – Gronenberg, W., Gunter, R., Gwyther, J., Haddock, T. M., –, , 324, , 330 Hall, M. I., Hall, R., 272 Hanke, F. D., Hanlon, R. T., , , – Hardy, A. C., Harper, R. D., Hart, N. S., 52, , 154, , 176 Hartline, H. K., Hastings, J. W., Hateren, J. H. van, , , , Hattar, S., Hausen, K., , Hausmann, F., Hawryshyn, C. W., 182, , 194, 199, – Hecht, S., , , Heimonen, K., Heine, L., Hemmi, J. M., –, Henderson, S. R., 19, Herring, P. J., , Herreros de Tejada, P.,
Hess, R. F., Heyers, D., Hill, R. M., Hinton, H. E., Hölldobler, B., – Homberg, U., , 198 Hornstein, E. P., , 136 Horridge, G. A., Horvath, G., , – Howland, H. C., , Huffman, D. R., Hughes, A., , –, , , , , Huxley, T. H., Ikeda, H., Ivanoff, A., , 184 Jacobs, G. H., , , , 272, Jerlov, N. G., – Johnsen, S., 11, , , , 19, 22, 24, 25, 33, –, , , –, –, , , –, –, , –, , 336, 338, Jones, B. W., Kamermans, M., Karpel, N., Katz, B., Katzir, G., Kay, R. F., , Kelber, A., , 148, , , , , , 203, , , 265, – Kidd, R., Killinger, D. K., , Kinoshita, S., , , Kirchner, W. H., Kirk, E. C., , 274, , Kirschfeld, K., , , , , 190 Kondrachev, S. L., Können, G. P., Korringa, P., Kral, K., Kreysing, M., Kröger, R. H. H., , Kruger, L., Kunze, P., Labhart, T., , 189, 195, Land, M. F., , , , 76, , , , , , , , , 104, , , , , 124, , 126, –, 138, , , , , 242, , , 249, , , , , Lanfranconi, B., Larsen, L. O.,
Laughlin, S. B., , , , , , 136, , , , Lee, S., Leech, D. M., , Lehrer, M., , 258, – Leibowitz, H. W., Lewis, S. M., Liebman, P., Lillywhite, P. G., , Lima, S. M. A., Lind, O., , , 272 Lisney, T. J., Livingston, W., Lloyd, J. E., , Locket, N. A., –, , , , 278, –, 285, Loew, E. R., 154, , 166, Lohmann, K. J., Longcore, T., Lynch, D. K., , Lythgoe, J. N., xix, , , , , 166, , , , , Macagno, E. R., Maloney, L. T., Marshall, N. J., , , 148, 155, –, , 163, –, 172, , , 190, , 192, 194, 195, , , 245, , , , Martin, C., , 297 Martin, G. R., –, 272, , , Masland, R. H., , 239 Mäthger, L. M., , , Matilla, K., Matthiessen, L., t, , , Maximov, V. V., , 176 McFall-Ngai, M., , McFarland, W. N., xix, , 169 McIntyre, P., , 85, –, 115, 126, , , McKenzie, D. R., McReynolds, J. S., Meinel, A., Meinel, M., Mencher, R. M., Mensinger, A. F., Menzel, R., – Merigan, W. H., , 272 Merilaita, S., , Mertens, L. E., , , Meyer-Rochow, V. B., Michelson, , – Minke, B., Mobley, C. D., , , , Moller-Racke, I., , 303,
Moore, M. V., Morin, J. G., , , 292, , , 322 Mouritsen, H., Muheim, R., Müller, J. P., , 304 Munk, O., , 101, 143, – Munoz-Cuevas, A., Munoz Tedo, C., Muntz, W. R. A., –, , , Munz, F. W., , 169 Murphy, C. J., , Nagata, T., Nagle, M. G., Nalbach, H. O., Needham, M. G., Neumeyer, C., Nishiguchi, M. K., Nicol, J. A. C., , , , Nilsson, D.-E., –, , 67, , t, –, , –, 98, 104, 108, –, , 124, 126, , 138, , , , , 285, Nilsson, H. L., Nordström, K., 136, , Nørgaard, T., Novales Flamarique, I., Nyholm, S. V., Oberwinkler, J., O’Carroll, D. C., , 136, , 138 Ogden, T. E., Ohly, K. P., Okawa, H., , Olberg, R. M., , , 260 Orlowski, J., , 272 O’Rourke, C. T., Ortega-Escobar, J., Osorio, D., , –, , , – Ott, M., Owens, G. L., , Pahlberg, J., Pardi, L., Parker, A. G., Parry, J. W. L., Partridge, J. C., , , Pasternak, T., , 272 Paul, H., Pearcy, W. G., Pedersen, J. N., Pegau, W. S., , , Peichl, L., 63, , 174, Pelger, S.,
Penteriani, V., Peters, R., – Pierscionek, B. K., Pignatelli, V., Pirenne, M., –, Pirhofer-Waltz, K., Planck, M., Plotinus, xix Polyak, S., Porter, M. L., 7, 38 Prete, F. R., Purcell, J. E., Räber, F., Rakic, P., – Read, K. R. H., Reichardt, W., Reid, S. F., , 268 Reisenbichler, K. R., Reymond, L., , Ribi, W. A., , 287 Rich, C., Rickel, S., Rieke, F., Ritz, T., Rivers, T. J., Roberts, N. E., –, Robinson, P. R., Robison, B. H., , 277 Rodieck, R. W., 120, , 122 Rojas, L. M., Rose, Albert, Rosenblatt, R. H., Ross, C. F., Rossel, S., Roth, L. S. V., , , 265, Rozenberg, G. V., Ruxton, G. D., ,
Saidel, W. M., 194, Salmela, I., Sambles, J. R., Sampath, A. P., Sauer, E. G. F., Sauer, E. M., Sawicka, E., Scapini, F., , Schaeffel, F., Schechner, Y. Y., , Shlaer, Simon, , Schmid, A., Schwassmann, H. O., Schwind, R., , 143, –
Shashar, N., , , , , Sherk, T. E., , 138 Siebeck, U. E., , 160, Silveira, L. C. L., , Sivak, J. G., Smith, F. E., Smith, K. C., Smith, K. U., Smith, R. C., 27, Smolka, J., – Snow, D. W., Snyder, A. W., , , –, , , , , Soffer, B. H., Solovei, I., 51, , Somanathan, H., , Sosik, H. M., , Speiser, D. I., xx, –, Srinivasan, M. V., , –, , 311 Stange, G., Stevens, M., , , , Stowasser, A., 194, Strausfeld, N. J., 134, , Sweeney, A. M., , , 204, Sweeting, L. M., Talbot, C. M., 190, Taylor, V. A., , 249 Taylor, W. R., , 239 Temple, S., , 175, , 202 Thayer, A. H., , Thayer, G. H., Theobald, J. C., Thomas, R. J., Tierney, S. M., Tobey, F. L., Tucker, V. A., , 253 Tuthill, J. C., Ugolini, A.,
Vallet, A. M., Van Dover, C. L., – Vaney, D. I., xx, , 239 Varjú, D., , Vevers, G., , , Vignolini, S., Vogt, K., – Vorobyev, M., –, 154, –, , , – Vries, Hugo de, Vukusic, P., Wagner, H.-J., , 144, , , , , , ,
Walls, G. L., , , 101, , , , , –, Walton, A. J., Wardill, T. J., Warrant, E., 69, , t, , 83, , 85, , 98, , , 130, , 134, –, , , , , , 277, , 280, 281, 283, –, 302, Warren, F. J., Wässle, H., , Waterman, T. H., , 184, , , 285 Wcislo, W. T., Weckström, M., , Wehner, R., , , , 143, 181, 182, , , 196, 199, , – Wenzel, B. M., Werringloer, A., White, C. R., White, S. N., Widder, E. A., , , 292, , 316, , 318, –, Wiederman, S. D., Wikler, K. C., – Wilcox, J. G.,
Wilkens, L. A., Williams, D., 126, Williamson, C. E., Willows, A. O. D., Wilson, M., Wilson, R., 133 Wright, P. C., Wystrach, A., –, 307 Yahel, R., Yamada, E. S., , 280, Yeandle, S., Yoshida, A., Yoshioka, S., Young, R. E., , Young, R. W., 49 Young, S., , 249 Zahavi, A., Zaneveld, J. R. V., , , Zeil, J., , , , , –, 300 Zhou, Z. J., Zylinski, S., , 327