BERKSHIRE PUBLISHING GROUP

Berkshire Encyclopedia of Human-Computer Interaction
When science fiction becomes science fact
Edited by William Sims Bainbridge, National Science Foundation

Berkshire Encyclopedia of Human-Computer Interaction
VOLUME 1
William Sims Bainbridge, Editor
Great Barrington, Massachusetts U.S.A.
www.berkshirepublishing.com
Copyright © 2004 by Berkshire Publishing Group LLC

All rights reserved. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher.

Cover photo: Thad Starner sporting a wearable computer. Photo courtesy of Georgia Institute of Technology.
Cover background image: Courtesy of Getty Images.

For information:
Berkshire Publishing Group LLC
314 Main Street
Great Barrington, Massachusetts 01230
www.berkshirepublishing.com

Printed in the United States of America

Library of Congress Cataloging-in-Publication Data
Berkshire encyclopedia of human-computer interaction / William Sims Bainbridge, editor.
p. cm.
“A Berkshire reference work.”
Includes bibliographical references and index.
ISBN 0-9743091-2-5 (hardcover : alk. paper)
1. Human-computer interaction--Encyclopedias. I. Bainbridge, William Sims. II. Title.
QA76.9.H85B46 2004
004'.01'9--dc22
2004017920
BERKSHIRE PUBLISHING STAFF
Project Director: Karen Christensen
Project Coordinators: Courtney Linehan and George Woodward
Associate Editor: Marcy Ross
Copyeditors: Francesca Forrest, Mike Nichols, Carol Parikh, and Daniel Spinella
Information Management and Programming: Deborah Dillon and Trevor Young
Editorial Assistance: Emily Colangelo
Designer: Monica Cleveland
Production Coordinator: Janet Lowry
Composition Artists: Steve Tiano, Brad Walrod, and Linda Weidemann
Composition Assistance: Pam Glaven
Proofreaders: Mary Bagg, Sheila Bodell, Eileen Clawson, and Cassie Lynch
Production Consultant: Jeff Potter
Indexer: Peggy Holloway
CONTENTS
List of Entries, ix
Reader’s Guide, xv
List of Sidebars, xix
Contributors, xxiii
Introduction, xxxiii
Publisher’s Note, xli
About the Editor, xliii

Entries
Volume I: A–L, 1–440
Volume II: M–W, 441–826
Appendix 1: Glossary, 827
Appendix 2: Master Bibliography of Human-Computer Interaction, 831
HCI in Popular Culture, 893
Index, 931
Index repeated in this volume, I-1
LIST OF ENTRIES
Adaptive Help Systems Peter Brusilovsky Adaptive Interfaces Alfred Kobsa
Animation Abdennour El Rhalibi Yuanyuan Shen Anthropology and HCI Allen W. Batteau
Affective Computing Ira Cohen Thomas S. Huang Lawrence S. Chen
Anthropometry Victor L. Paquet David Feathers
Altair William Sims Bainbridge
Application Use Strategies Suresh K. Bhavnani
Alto William Sims Bainbridge
Arpanet Amy Kruse Dylan Schmorrow Allen J. Sears
Artificial Intelligence Robert A. St. Amant Asian Script Input William Sims Bainbridge Erika Bainbridge Atanasoff-Berry Computer John Gustafson Attentive User Interface Ted Selker Augmented Cognition Amy Kruse Dylan Schmorrow
Augmented Reality Rajeev Sharma Kuntal Sengupta
Compilers Woojin Paik
Digital Divide Linda A. Jackson
Avatars Jeremy Bailenson James J. Blascovich
Computer-Supported Cooperative Work John M. Carroll Mary Beth Rosson
Digital Government Jane E. Fountain Robin A. McKinnon
Beta Testing Gina Neff
Constraint Satisfaction Berthe Y. Choueiry
Braille Oleg Tretiakoff
Converging Technologies William Sims Bainbridge
Brain-Computer Interfaces Melody M. Moore Adriane D. Davis Brendan Z. Allison
Cybercommunities Lori Kendall
Browsers Andy Cockburn Cathode Ray Tubes Gregory P. Crawford CAVE Thomas DeFanti Dan Sandin Chatrooms Amanda B. Lenhart Children and the Web Dania Bilal Classrooms Chris Quintana Client-Server Architecture Mark Laff Cognitive Walkthrough Marilyn Hughes Blackmon Collaboratories Gary M. Olson
Cybersex David L. Delmonico Elizabeth Griffin Cyborgs William Sims Bainbridge Data Mining Mohammad Zaki Data Visualization Kwan-Liu Ma Deep Blue Murray Campbell
Digital Libraries Jose-Marie Griffiths Drawing and Design Mark D. Gross E-business Norhayati Zakaria Education in HCI Jan Stage Electronic Journals Carol Tenopir Electronic Paper Technology Gregory P. Crawford Eliza William H. Sterner E-mail Nathan Bos Embedded Systems Ronald D. Williams
Denial-of-Service Attack Adrian Perrig Abraham Yaar
ENIAC William Sims Bainbridge
Desktop Metaphor Jee-In Kim
Ergonomics Ann M. Bisantz
Dialog Systems Susan W. McRoy
Errors in Interactive Behavior Wayne D. Gray
Digital Cash J. D. Tygar
Ethics Helen Nissenbaum
Ethnography David Hakken Evolutionary Engineering William Sims Bainbridge Expert Systems Jay E. Aronson
Handwriting Recognition and Retrieval R. Manmatha V. Govindaraju Haptics Ralph L. Hollis
Eye Tracking Andrew T. Duchowski
History of Human-Computer Interaction Jonathan Grudin
Facial Expressions Irfan Essa
Hollerith Card William Sims Bainbridge
Fly-by-Wire C. M. Krishna
Human-Robot Interaction Erika Rogers
Fonts Thomas Detrie Arnold Holland
Hypertext and Hypermedia David K. Farkas
Games Abdennour El Rhalibi
Icons Stephanie Ludi
Information Theory Ronald R. Kline Instruction Manuals David K. Farkas Internet—Worldwide Diffusion Barry Wellman Phuoc Tran Wenhong Chen Internet in Everyday Life Barry Wellman Bernie Hogan Iterative Design Richard Baskerville Jan Stage Keyboard Alan Hedge Language Generation Regina Barzilay
Gender and Computing Linda A. Jackson
Identity Authentication Ashutosh P. Deshpande Parag Sewalkar
Laser Printer Gary Starkweather
Geographic Information Systems Michael F. Goodchild
Impacts Chuck Huff
Law and HCI Sonia E. Miller
Gesture Recognition Francis Quek
Information Filtering Luz M. Quiroga Martha E. Crosby
Law Enforcement Roslin V. Hauck
Graphical User Interface David England Grid Computing Cavinda T. Caldera
Information Organization Dagobert Soergel Information Overload Ruth Guthrie
Groupware Timothy J. Hickey Alexander C. Feinman
Information Retrieval Dagobert Soergel
Hackers Douglas Thomas
Information Spaces Fionn Murtagh
Lexicon Building Charles J. Fillmore Liquid Crystal Displays Gregory P. Crawford Literary Representations William Sims Bainbridge Machine Translation Katrin Kirchhoff
Markup Languages Hong-Gee Kim Mobile Computing Dharma P. Agrawal Mosaic William Sims Bainbridge
Online Education Robert S. Stephenson Glenn Collyer Online Questionnaires James Witte Roy Pargas
Programming Languages David MacQueen Prototyping Richard Baskerville Jan Stage Psychology and HCI Judith S. Olson
Motion Capture and Recognition Jezekiel Ben-Arie
Online Voting R. Michael Alvarez Thad E. Hall
Mouse Shumin Zhai
Ontology Christopher A. Welty
Recommender and Reputation Systems Cliff Lampe Paul Resnick
Movies William Sims Bainbridge
Open Source Software Gregory R. Madey
Repetitive Strain Injury Jack Tigh Dennerlein
MUDs Richard Allan Bartle
Optical Character Recognition V. Govindaraju Swapnil Khedekar
Scenario-Based Design John M. Carroll
Multiagent systems Gal A. Kaminka Multimodal Interfaces Rajeev Sharma Sanshzar Kettebekov Guoray Cai Multiuser Interfaces Prasun Dewan Musical Interaction Christopher S. Raphael Judy A. Franklin Natural-Language Processing James H. Martin Navigation John J. Rieser
Peer-to-Peer Architecture Julita Vassileva Pen and Stylus Input Alan Hedge Personality Capture William Sims Bainbridge Physiology Jennifer Allanson Planning Sven Koenig Michail G. Lagoudakis Pocket Computer William Sims Bainbridge
N-grams James H. Martin
Political Science and HCI James N. Danziger Michael J. Jensen
Olfactory Interaction Ricardo Gutierrez-Osuna
Privacy Jeffrey M. Stanton
Search and Rescue Howie Choset Search Engines Shannon Bradshaw Security Bhavani Thuraisingham Semantic Web Bhavani Thuraisingham Smart Homes Diane J. Cook Michael Youngblood Sociable Media Judith Donath Social Informatics Howard Rosenbaum Social Proxies Thomas Erickson Wendy A. Kellogg
Social Psychology and HCI Susan R. Fussell
Task Analysis Erik Hollnagel
Sociology and HCI William Sims Bainbridge
Telecommuting Ralph David Westfall
Value Sensitive Design Batya Friedman
Socio-Technical System Design Walt Scacchi
Telepresence John V. Draper
Video Immanuel Freedman
Software Cultures Vaclav Rajlich
Text Summarization Judith L. Klavans
Video Summarization A. Murat Tekalp
Software Engineering Richard Kazman
Theory Jon May
Virtual Reality Larry F. Hodges Benjamin C. Lok
Sonification David M. Lane Aniko Sandor S. Camille Peres
Three-Dimensional Graphics Benjamin C. Lok
Spamming J. D. Tygar Speech Recognition Mary P. Harper V. Paul Harper Speech Synthesis Jan P.H. van Santen Speechreading Marcus Hennecke Spell Checker Woojin Paik Sphinx Rita Singh Statistical Analysis Support Robert A. St. Amant Supercomputers Jack Dongarra Tablet Computer William Sims Bainbridge
Three-Dimensional Printing William Sims Bainbridge Touchscreen Andrew L. Sears Rich Goldman Ubiquitous Computing Olufisayo Omojokun Prasun Dewan Unicode Unicode Editorial Committee Universal Access Gregg Vanderheiden
Viruses J. D. Tygar Visual Programming Margaret M. Burnett Joseph R. Ruthruff Wearable Computer Thad Starner Bradley Rhodes Website Design Barbara S. Chaparro Michael L. Bernard Work Christine A. Halverson
Usability Evaluation Jean Scholtz
Workforce Brandon DuPont Joshua L. Rosenbloom
User Modeling Richard C. Simpson
World Wide Web Michael Wilson
User Support Indira R. Guzman
WYSIWYG David M. Lane
User-Centered Design Chadia Abras Diane Maloney-Krichmar Jenny Preece
READER’S GUIDE
This list is provided to assist readers in locating entries on related topics. It classifies articles into ten general categories: Applications; Approaches; Breakthroughs; Challenges; Components; Disciplines; Historical Development; Interfaces; Methods; and Social Implications. Some entries appear in more than one category.

Applications
Classrooms, Digital Government, Digital Libraries, E-business, Games, Geographic Information Systems, Grid Computing, Law Enforcement, Mobile Computing, Navigation, Online Education, Online Voting, Planning, Recommender and Reputation Systems, Search and Rescue, Statistical Analysis Support, Supercomputers, Telecommuting, Ubiquitous Computing, Video

Approaches
Application Use Strategies, Beta Testing, Cognitive Walkthrough, Constraint Satisfaction, Ethics, Ethnography, Evolutionary Engineering, Information Theory, Iterative Design, Ontology, Open Source Software, Prototyping, Scenario-Based Design, Social Informatics, Socio-Technical System Design, Task Analysis, Theory, Universal Access, Usability Evaluation, User Modeling, User-Centered Design, Value Sensitive Design, Website Design

Breakthroughs
Altair, Alto, Arpanet, Atanasoff-Berry Computer, CAVE, Converging Technologies, Deep Blue, Eliza, ENIAC, Hollerith Card, Mosaic, Sphinx

Challenges
Denial-of-Service Attack, Digital Divide, Errors in Interactive Behavior, Hackers, Identity Authentication, Information Filtering, Information Overload, Privacy, Repetitive Strain Injury, Security, Spamming, Viruses

Components
Adaptive Help Systems, Animation, Braille, Cathode Ray Tubes, Client-Server Architecture, Desktop Metaphor, Electronic Paper Technology, Fonts, Keyboard, Laser Printer, Liquid Crystal Displays, Mouse, N-grams, Peer-to-Peer Architecture, Social Proxies, Spell Checker, Touchscreen, Unicode, WYSIWYG

Disciplines
Anthropology and HCI, Artificial Intelligence, Ergonomics, Law and HCI, Political Science and HCI, Psychology and HCI, Social Psychology and HCI, Sociology and HCI

Historical Development
Altair, Alto, ENIAC, History of HCI

Interfaces
Adaptive Interfaces, Affective Computing, Anthropometry, Asian Script Input, Attentive User Interface, Augmented Cognition, Augmented Reality, Brain-Computer Interfaces, Compilers, Data Visualization, Dialog Systems, Drawing and Design, Eye Tracking, Facial Expressions, Fly-by-Wire, Graphical User Interface, Haptics, Multimodal Interfaces, Multiuser Interfaces, Musical Interaction, Olfactory Interaction, Online Questionnaires, Pen and Stylus Input, Physiology, Pocket Computer, Smart Homes, Tablet Computer, Telepresence, Three-Dimensional Graphics, Three-Dimensional Printing, Virtual Reality, Wearable Computer

Methods
Avatars, Browsers, Data Mining, Digital Cash, Embedded Systems, Expert Systems, Gesture Recognition, Handwriting Recognition and Retrieval, Hypertext and Hypermedia, Icons, Information Organization, Information Retrieval, Information Spaces, Instruction Manuals, Language Generation, Lexicon Building, Machine Translation, Markup Languages, Motion Capture and Recognition, Natural-Language Processing, Optical Character Recognition, Personality Capture, Programming Languages, Search Engines, Semantic Web, Software Engineering, Sonification, Speech Recognition, Speech Synthesis, Speechreading, Text Summarization, User Support, Video Summarization, Visual Programming, World Wide Web

Social Implications
Chatrooms, Children and the Web, Collaboratories, Computer-Supported Cooperative Work, Cybercommunities, Cybersex, Cyborgs, Education in HCI, Electronic Journals, E-mail, Gender and Computing, Groupware, Human-Robot Interaction, Impacts, Internet—Worldwide Diffusion, Internet in Everyday Life, Literary Representations, Movies, MUDs, Multiagent systems, Sociable Media, Software Cultures, Work, Workforce
LIST OF SIDEBARS
Adaptive Help Systems Farewell “Clippy”
Chatrooms Life Online
Adaptive Interfaces Keeping Disabled People in the Technology Loop
Classrooms History Comes Alive in Cyberspace Learning through Multimedia
Anthropology and HCI Digital Technology Helps Preserve Tribal Language Eastern vs. Western Cultural Values
Computer-Supported Cooperative Work Internet Singing Lessons Social Context in Computer-Supported Cooperative Work
Augmented Cognition Putting Humans First in Systems Design
Cybercommunities Welcome to LambdaMOO
Braille Enhancing Access to Braille Instructional Materials
Cybersex Cybersex Addiction
Digital Divide HomeNetToo Tries to Bridge Digital Divide Digital Libraries Vannevar Bush on the Memex Education in HCI Bringing HCI Into the Real World Eliza Talking with ELIZA E-mail The Generation Gap Errors in Interactive Behavior To Err Is Technological Fonts Our Most Memorable Nightmare Gender and Computing “Computer Girl” Site Offers Support for Young Women Narrowing the Gap Geographic Information Systems Geographic Information Systems Aid Land Conservation Groupware Away Messages The Wide World of Wikis History of HCI Highlights from My Forty Years of HCI History Human-Robot Interaction Carbo-Powered Robots Hypertext and Hypermedia Ted Nelson on Hypertext and the Web Impacts Therac-25 Safety Is a System Property
Internet in Everyday Life Finding Work Online Information Technology and Competitive Academic Debate Law Enforcement Fighting Computer Crime Literary Representations Excerpt from Isaac Asimov’s I, Robot Excerpt from “The Sand-Man” (1817) by E. T. A. Hoffman Machine Translation Warren Weaver on Machine Translation Movies HAL’s Birthday Celebration MUDs The Wide World of a MUD Online Education An Online Dig for Archeology Students Virtual Classes Help Rural Nurses Political Science and HCI Washington Tales of the Internet Psychology and HCI Human Factors Come into the Forefront Virtual Flight for White-Knuckled Travelers Repetitive Strain Injury The Complexities of Repetitive Strain Scenario-Based Design The Value of a Devil’s Advocate Social Psychology and HCI Love and HCI Sociology and HCI “Who’s on First” for the Twenty-First Century
Spell Checker Check the Spell Checker Task Analysis Excerpt from Cheaper by the Dozen Unicode History and Development of Unicode Relationship of the Unicode Standard to ISO_IEC 10646 Usability Evaluation Global Usability Is Usability Still a Problem?
Work Software Prescribes Break Time for Enhanced Productivity Workforce Cultural Differences Employee Resistance to Technology World Wide Web “Inventing” the World Wide Web Tim Berners-Lee on the Web as Metaphor WYSIWYG The Future of HCI
CONTRIBUTORS
Abras, Chadia Goucher College User-Centered Design
Alvarez, R. Michael Caltech-MIT Voting Technology Project Online Voting
Agrawal, Dharma P. University of Cincinnati Mobile Computing
Aronson, Jay E. University of Georgia Expert Systems
Allanson, Jennifer Lancaster University Physiology
Bailenson, Jeremy Stanford University Avatars
Allison, Brendan Z. Georgia State University Brain-Computer Interfaces
Bainbridge, Erika Harvard University, Center for Hellenic Studies Asian Script Input
Bainbridge, William Sims National Science Foundation Altair Alto Asian Script Input Converging Technologies Cyborgs ENIAC Evolutionary Engineering Hollerith Card Literary Representations Mosaic Movies Personality Capture Pocket Computer Sociology and HCI Tablet Computer Three-Dimensional Printing Bartle, Richard Allan Multi-User Entertainment Limited MUDs Barzilay, Regina Massachusetts Institute of Technology Language Generation
Bilal, Dania University of Tennessee Children and the Web Bisantz, Ann M. State University of New York, Buffalo Ergonomics Blackmon, Marilyn Hughes University of Colorado, Boulder Cognitive Walkthrough Blascovich, James J. University of California, Santa Barbara Avatars Bos, Nathan University of Michigan E-mail Bradshaw, Shannon University of Iowa Search Engines Brusilovsky, Peter University of Pittsburgh Adaptive Help Systems
Baskerville, Richard Georgia State University Iterative Design Prototyping
Burnett, Margaret M. Oregon State University Visual Programming
Batteau, Allen W. Wayne State University Anthropology and HCI
Cai, Guoray Pennsylvania State University Multimodal Interfaces
Ben-Arie, Jezekiel University of Illinois, Chicago Motion Capture and Recognition
Caldera, Cavinda T. Syracuse University Grid Computing
Bernard, Michael L. Wichita State University Website Design
Campbell, Murray IBM T.J. Watson Research Center Deep Blue
Bhavnani, Suresh K. University of Michigan Application Use Strategies
Carroll, John M. Pennsylvania State University Computer-Supported Cooperative Work Scenario-Based Design Chaparro, Barbara S. Wichita State University Website Design Chen, Lawrence Eastman Kodak Research Labs Affective Computing Chen, Wenhong University of Toronto Internet – Worldwide Diffusion Choset, Howie Carnegie Mellon University Search and Rescue Choueiry, Berthe Y. University of Nebraska, Lincoln Constraint Satisfaction Cockburn, Andy University of Canterbury Browsers Cohen, Ira Hewlett-Packard Research Labs, University of Illinois, Urbana-Champaign Affective Computing Collyer, Glenn iDacta, Inc. Online Education Cook, Diane J. University of Texas, Arlington Smart Homes Crawford, Gregory P. Brown University Cathode Ray Tubes Electronic Paper Technology Liquid Crystal Displays
Crosby, Martha E. University of Hawaii Information Filtering Danziger, James N. University of California, Irvine Political Science and HCI Davis, Adriane D. Georgia State University Brain-Computer Interfaces DeFanti, Thomas University of Illinois, Chicago Cave Delmonico, David L. Duquesne University Cybersex Dennerlien, Jack Tigh Harvard School of Public Health Repetitive Strain Injury Deshpande, Ashutosh P. Syracuse University Identity Authentication Detrie, Thomas Arizona State University Fonts Dewan, Prasun Microsoft Corporation Multiuser Interfaces Ubiquitous Computing Donath, Judith Massachusetts Institute of Technology Sociable Media Dongarra, Jack University of Tennessee Supercomputers
Draper, John V. Raven Research Telepresence
Fountain, Jane E. Harvard University Digital Government
Duchowski, Andrew T. Clemson University Eye Tracking
Franklin, Judy A. Smith College Musical Interaction
DuPont, Brandon Policy Research Institute Workforce
Freedman, Immanuel Dr. Immanuel Freedman, Inc. Video
El Rhalibi, Abdennour Liverpool John Moores University Animation Games
Friedman, Batya University of Washington Value Sensitive Design
England, David Liverpool John Moores University Graphical User Interface Erickson, Thomas IBM T. J. Watson Research Center Social Proxies Essa, Irfan Georgia Institute of Technology Facial Expressions Farkas, David K. University of Washington Hypertext and Hypermedia Instruction Manuals Feathers, David State University of New York, Buffalo Anthropometry Feinman, Alexander C. Brandeis University Groupware Fillmore, Charles J. International Computer Science Institute Lexicon Building
Fussell, Susan R. Carnegie Mellon University Social Psychology and HCI Goldman, Rich University of Maryland, Baltimore Touchscreen Goodchild, Michael F. University of California, Santa Barbara Geographic Information Systems Govindaraju, V. University at Buffalo Handwriting Recognition and Retrieval Optical Character Recognition Gray, Wayne D. Rensselaer Polytechnic Institute Errors in Interactive Behavior Griffin, Elizabeth J. Internet Behavior Consulting Cybersex Griffiths, Jose-Marie University of Pittsburgh Digital Libraries
Gross, Mark D. University of Washington Drawing and Design
Hauck, Roslin V. Illinois State University Law Enforcement
Grudin, Jonathan Microsoft Research Computer Science History of HCI
Hedge, Alan Cornell University Keyboard Pen and Stylus Input
Gustafson, John Sun Microsystems Atanasoff-Berry Computer
Hennecke, Marcus TEMIC Telefunken Microelectronic GmbH Speechreading
Guthrie, Ruth California Polytechnic University of Pomona Information Overload
Hickey, Timothy J. Brandeis University Groupware
Gutierrez-Osuna, Ricardo Texas A&M University Olfactory Interaction
Hodges, Larry F. University of North Carolina, Charlotte Virtual Reality
Guzman, Indira R. Syracuse University User Support
Hogan, Bernie University of Toronto Internet in Everyday Life
Hakken, David State University of New York Institute of Technology Ethnography
Holland, Arnold California State University, Fullerton Fonts
Hall, Thad E. Century Foundation Online Voting Halverson, Christine IBM T. J. Watson Research Center Work Harper, Mary P. Purdue University Speech Recognition Harper, V. Paul United States Patent and Trademark Office Speech Recognition
Hollis, Ralph L. Carnegie Mellon University Haptics Hollnagel, Erik University of Linköping Task Analysis Huang, Thomas S. University of Illinois, Urbana-Champaign Affective Computing Huff, Chuck Saint Olaf College Impacts
Jackson, Linda A. Michigan State University Digital Divide Gender and Computing Jensen, Michael J. University of California, Irvine Political Science and HCI Kaminka, Gal Bar Ilan University Multiagent systems Kazman, Richard Carnegie Mellon University Software Engineering Kellogg, Wendy A. IBM T. J. Watson Research Center Social Proxies
Klavans, Judith L. Columbia University Text Summarization Kline, Ronald R. Cornell University Information Theory Kobsa, Alfred University of California, Irvine Adaptive Interfaces Koenig, Sven Georgia Institute of Technology Planning Krishna, C. M. University of Massachusetts, Amherst Fly-by-Wire
Kendall, Lori State University of New York, Purchase College Cybercommunities
Kruse, Amy Strategic Analysis, Inc. Arpanet Augmented Cognition
Kettebekov, Sanshzar Oregon Health and Science University Multimodal Interfaces
Laff, Mark IBM T.J. Watson Research Center Client-Server Architecture
Khedekar, Swapnil University at Buffalo Optical Character Recognition
Lagoudakis, Michail G. Georgia Institute of Technology Planning
Kim, Hong-Gee Dankook University Markup Languages
Lampe, Cliff University of Michigan Recommender and Reputation Systems
Kim, Jee-In Konkuk University Desktop Metaphor
Lane, David M. Rice University Sonification WYSIWYG
Kirchhoff, Katrin University of Washington Machine Translation
Lenhart, Amanda B. Pew Internet & American Life Project Chatrooms
Lok, Benjamin C. University of Florida Three-Dimensional Graphics Virtual Reality Ludi, Stephanie Rochester Institute of Technology Icons Ma, Kwan-Liu University of California, Davis Data Visualization MacQueen, David University of Chicago Programming Languages Madey, Gregory R. University of Notre Dame Open Source Software Maloney-Krichmar, Diane Bowie State University User-Centered Design Manmatha, R. University of Massachusetts, Amherst Handwriting Recognition and Retrieval Martin, James H. University of Colorado, Boulder Natural-Language Processing N-grams May, Jon University of Sheffield Theory McKinnon, Robin A. Harvard University Digital Government McRoy, Susan W. University of Wisconsin, Milwaukee Dialog Systems
Miller, Sonia E. S. E. Miller Law Firm Law and HCI Moore, Melody M. Georgia State University Brain-Computer Interfaces Murtagh, Fionn Queen’s University, Belfast Information Spaces Neff, Gina University of California, Los Angeles Beta Testing Nissenbaum, Helen New York University Ethics Olson, Gary M. University of Michigan Collaboratories Olson, Judith S. University of Michigan Psychology and HCI Omojokun, Olufisayo University of North Carolina, Chapel Hill Ubiquitous Computing Paik, Woojin University of Massachusetts, Boston Compilers Spell Checker Paquet, Victor L. State University of New York, Buffalo Anthropometry Pargas, Roy Clemson University Online Questionnaires
Peres, S. Camille Rice University Sonification
Rogers, Erika California Polytechnic State University Human-Robot Interaction
Perrig, Adrian Carnegie Mellon University Denial-of-Service Attack
Rosenbaum, Howard Indiana University Social Informatics
Preece, Jenny University of Maryland, Baltimore County User-Centered Design
Rosenbloom, Joshua L. University of Kansas Workforce
Quek, Francis Wright State University Gesture Recognition
Rosson, Mary Beth Pennsylvania State University Computer-Supported Cooperative Work
Quintana, Chris University of Michigan Classrooms
Ruthruff, Joseph R. Oregon State University Visual Programming
Quiroga, Luz M. University of Hawaii Information Filtering
Sandin, Dan University of Illinois, Chicago CAVE
Rajlich, Vaclav Wayne State University Software Cultures
Sandor, Aniko Rice University Sonification
Raphael, Christopher S. University of Massachusetts, Amherst Musical Interaction
Scacchi, Walt University of California, Irvine Socio-Technical System Design
Resnick, Paul University of Michigan Recommender and Reputation Systems
Schmorrow, Dylan Defense Advanced Projects Agency Arpanet Augmented Cognition
Rhodes, Bradley Ricoh Innovations Wearable Computer Rieser, John J. Vanderbilt University Navigation
Scholtz, Jean National Institute of Standards and Technology Usability Evaluation Sears, Andrew L. University of Maryland, Baltimore County Touchscreen
Sears, J. Allen Corporation for National Research Initiatives Arpanet Selker, Ted Massachusetts Institute of Technology Attentive User Interface Sewalkar, Parag Syracuse University Identity Authentication Sengupta, Kuntal Advanced Interfaces Augmented Reality Sharma, Rajeev Advanced Interfaces Augmented Reality Multimodal Interfaces Shen, Yuan Yuan Liverpool John Moores University Animation Simpson, Richard C. University of Pittsburgh User Modeling Singh, Rita Carnegie Mellon University Sphinx
Stage, Jan Aalborg University Education in HCI Iterative Design Prototyping Stanton, Jeffrey M. Syracuse University Privacy Starkweather, Gary Microsoft Corporation Laser Printer Starner, Thad Georgia Institute of Technology Wearable Computers Stephenson, Robert S. Wayne State University Online Education Sterner, William H. University of Chicago Eliza Tekalp, A. Murat University of Rochester Video Summarization Tenopir, Carol University of Tennessee Electronic Journals
Soergel, Dagobert University of Maryland Information Organization Information Retrieval
Thomas, Douglas University of Southern California Hackers
St. Amant, Robert A. North Carolina State University Artificial Intelligence Statistical Analysis Support
Thuraisingham, Bhavani National Science Foundation Security Semantic Web Tran, Phuoc University of Toronto Internet — Worldwide Diffusion
Tretiakoff, Oleg C.A. Technology, Inc. Braille
Westfall, Ralph David California State Polytechnic University, Pomona Telecommuting
Tygar, J. D. University of California, Berkeley Digital Cash Spamming Viruses
Williams, Ronald D. University of Virginia Embedded Systems
Unicode Editorial Committee Unicode van Santen, Jan P.H. Oregon Health and Science University Speech Synthesis Vanderheiden, Gregg University of Wisconsin, Madison Universal Access Vassileva, Julita University of Saskatchewan Peer-to-Peer Architecture Wellman, Barry University of Toronto Internet - Worldwide Diffusion Internet in Everyday Life Welty, Christopher A. IBM T.J. Watson Research Center Ontology
Wilson, Michael CCLRC Rutherford Appleton Laboratory World Wide Web Witte, James Clemson University Online Questionnaires Yaar, Abraham Carnegie Mellon University Denial of Service Attack Youngblood, Michael University of Texas, Arlington Smart Homes Zakaria, Norhayati Syracuse University E-business Zaki, Mohammad Rensselaer Polytechnic Institute Data Mining Zhai, Shumin IBM Almaden Research Center Mouse
INTRODUCTION By William Sims Bainbridge
In hardly more than half a century, computers have become integral parts of everyday life, at home, work, and play. Today, computers affect almost every aspect of modern life, in areas as diverse as car design, filmmaking, disability services, and sex education. Human-computer interaction (HCI) is a vital new field that examines the ways in which people communicate with computers, robots, information systems, and the Internet. It draws upon several branches of social, behavioral, and information science, as well as on computer science and electrical engineering. The traditional heart of HCI has been user interface design, but in recent years the field has expanded to include any science and technology related to the ways that humans use or are affected by computing technology. HCI brings to the fore social and ethical issues that
hitherto existed only in the pages of science fiction. For a sense of the wide reach of HCI, consider the following vignettes:
■ Gloria, who owns a small fitness training business, is currently trying out a new system in which she and a client dance on sensor pads on the floor, while the computer plays rhythms and scores how quickly they are placing their feet on the designated squares.
■ Elizabeth has made friends through chatrooms connected to French and British music groups that are not well known in the United States. She occasionally shares music files with these friends before buying CDs from foreign online distributors, and she has helped one of the French bands translate its website into English.
■ Carl’s work team develops drivers for new color printers far more quickly and effectively than before, because the team comprises expert designers and programmers who live in different time zones around the world, from India to California, collectively working 24 hours a day, 7 days a week, by means of an Internet-based collaboration system.
■ Bella is blind, but her wearable computer uses the Internet and the Global Positioning System not only to find her way through the city safely but also to find any product or service she needs at the best price and to be constantly aware of her surroundings.
■ Anderson, whose Internet moniker is Neo, discovers that his entire life is an illusion, maintained by a vast computer plugged directly into his nervous system.
The first three stories are real, although the names are pseudonyms, and the scenarios are duplicated millions of times in the modern world of personal computers, office automation, and the World Wide Web. The fourth example could be realized with today’s technology, simply given a sufficient investment in infrastructure. Not only would it revolutionize the lives of blind people like Bella, it would benefit the sighted public too, so we can predict that it will in fact become true over the next decade or two. The story about Mr. Anderson is pure fiction, no doubt recognizable to many as the premise of the 1999 film The Matrix. It is doubtful that HCI ever could (or should) become indistinguishable from real life.
Background on HCI
In a brief history of HCI technology published in 1996, the computer scientist Brad Myers noted that most computer interface technology began as government-supported research projects in universities and only years later was developed by corporations and transformed into commercial products. He then listed six up-and-coming research areas: natural language and speech, computer-supported cooperative work, virtual and augmented reality, three-dimensional graphics, multimedia, and computer recognition of pen or stylus movements on tablet or pocket computers. All of these have been very active areas of research or development since he wrote, and several are fundamental to commercial products that have already appeared. For example, many companies now use speech recognition to automate their telephone information services, and hundreds of thousands of people use stylus-controlled pocket computers every day. Many articles in the encyclopedia describe new approaches that may be of tremendous importance in the future.
Our entire perspective on HCI has been evolving rapidly in recent years. In 1997, the National Research Council—a private, nonprofit institution that provides science, technology, and health policy advice under a congressional charter—issued a major report, More Than Screen Deep, “to evaluate and suggest fruitful directions for progress in user interfaces to computing and communications systems.” This high-level study, sponsored by the National Science Foundation (NSF), concluded with three recommendations to the federal government and university researchers:
1. Break away from 1960s technologies and paradigms. Major attempts should be made to find new paradigms for human-machine interaction that employ new modes and media for input and output and that involve new conceptualizations of application interfaces. (192)
2. Invest in the research required to provide the component subsystems needed for every-citizen interfaces. Research is needed that is aimed at both making technological advances and gaining understanding of the human and organizational capabilities these advances would support. (195)
3. Encourage research on systems-level design and development of human-machine interfaces that support multiperson, multimachine groups as well as individuals. (196)
In 2002, John M. Carroll looked back on the history of HCI and noted how difficult it was at first to get computer science and engineering to pay attention to issues of hardware and software usability. He
argued that HCI was born as the fusion of four fields (software engineering, software human factors, computer graphics, and cognitive science) and that it continues to be an emerging area in computer science. The field is expanding in both scope and importance. For example, HCI incorporates more and more from the social sciences as computing becomes increasingly deeply rooted in cooperative work and human communication. Many universities now have research groups and training programs in HCI. In addition to the designers and engineers who create computer interfaces and the researchers in industry and academia who are developing the fundamental principles for success in such work, a very large number of workers in many industries contribute indirectly to progress in HCI. The nature of computing is constantly changing. The first digital electronic computers, such as ENIAC (completed in 1946), were built to solve military problems, such as calculating ballistic trajectories. The 1950s and 1960s saw a great expansion in military uses and extensive application of digital computers in commerce and industry. In the late 1970s, personal computers entered the home, and in the 1980s they developed more user-friendly interfaces. The 1990s saw the transformation of Internet into a major medium of communications, culminating in the expansion of the World Wide Web to reach a billion people. In the first decade of the twenty-first century, two trends are rushing rapidly forward. One is the extension of networking to mobile computers and embedded devices literally everywhere. The other is the convergence of all mass media with computing, such that people listen to music, watch movies, take pictures, make videos, carry on telephone conversations, and conduct many kinds of business on computers or on networks of which computers are central components. To people who are uncomfortable with these trends, it may seem that cyberspace is swallowing real life. To enthusiasts of the technology, it seems that human consciousness is expanding to encompass everything. The computer revolution is almost certainly going to continue for decades, and specialists in human-computer interaction will face many new challenges in the years to come. At least one other
technological revolution is likely to give computer technology an additional powerful boost: nanotechnology. The word comes from a unit for measuring tiny distances, the nanometer, which is one billionth of a meter (one millionth of a millimeter, or one millionth the thickness of a U.S. dime). The very largest single atoms are just under a nanometer in size, and much of the action in chemistry (including fundamental biological processes) occurs in the range between 1 nanometer and 100–200 nanometers. The smallest transistors in experimental computer chips are about 50 nanometers across. Experts working at the interface between nanotechnology and computing believe that nanoelectronics can support continued rapid improvements in computer speed, memory, and cost for twenty to thirty years, with the possibility of further progress after then by means of integrated design approaches and investment in information infrastructure. Two decades of improvement in computer chips would mean that a desktop personal computer bought in 2024 might have eight thousand times the power of one bought in 2004 for the same price—or could have the same power but cost only twenty cents and fit inside a shirt button. Already, nanotechnology is being used to create networks of sensors that can detect and identify chemical pollutants or biological agents almost instantly. While this technology will first be applied to military defense, it can be adapted to medical or personal uses in just a few years. The average person’s wristwatch in 2024 could be their mobile computer, telling them everything they might want to know about their environment— where the nearest Thai restaurant can be found, when the next bus will arrive at the corner up the road, whether there is anything in the air the person happens to be allergic to, and, of course, providing any information from the world’s entire database that the person might want to know. If advances in natural-language processing continue at the rate they are progressing today, then the wristwatch could also be a universal translator that allows the person to speak with anyone in any language spoken on the face of the planet. Of course, predictions are always perilous, and it may be that progress will slow down. Progress does not simply happen of its own
accord, and the field of human-computer interaction must continue to grow and flourish if computers are to bring the marvelous benefits to human life that they have the potential to bring.
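The eight-thousand-fold figure a few paragraphs above is the text's own projection, but it is easy to sanity-check. As a hedged, back-of-the-envelope illustration (not part of the encyclopedia itself), the short Python sketch below shows that an 8,000-fold gain over twenty years corresponds to performance doubling roughly every eighteen months, the cadence usually associated with Moore's law:

import math

# Back-of-the-envelope check of the projection above: if a 2024 desktop
# is ~8,000 times more powerful than a 2004 one, how often must
# performance have doubled? (Illustrative only; the 8,000 figure is the
# text's projection, not a measured value.)
improvement = 8000          # projected performance ratio, 2004 -> 2024
years = 20                  # elapsed time in years

doublings = math.log2(improvement)               # about 12.97 doublings
months_per_doubling = years * 12 / doublings     # about 18.5 months

print(f"{doublings:.1f} doublings over {years} years")
print(f"one doubling every {months_per_doubling:.1f} months")

Run as written, this prints roughly thirteen doublings at one doubling every 18.5 months, which is why the two-decade projection in the text lines up with the historical rate of improvement in chips.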
My Own Experience with Computers
Computer and information technologies have progressed amazingly over the past fifty years, and they may continue to do so for the next half century. My first computer, if it deserves that word, was a Geniac I received for my sixteenth birthday in 1956. Costing only $20, it consisted of masonite disks, wires, light bulbs, and a vast collection of nuts, bolts, and clips. From these parts I could assemble six rotary switches that could be programmed (by hardwiring them) to solve simple logic problems such as playing tick-tack-toe. I developed a great affection for the Geniac, as I did for the foot-long slide rule I lugged to my high school classes, but each was a very far cry from the pocket computer or even the programmable calculator my sixteen-year-old daughter carries in her backpack today. Geniac was not really an electronic computer because it lacked active components—which in 1956 meant relays or vacuum tubes, because transistors were still very new and integrated circuits had not yet been invented. The first real computer I saw, in the early 1960s, was the massive machine used by my father’s company, Equitable Life Insurance, to keep its records. Only decades later did I learn that my uncle, Angus McIntosh, had been part of a team in World War II that seized the German computer that was cracking Soviet codes, and that the secret Colossus computer at Bletchley Park where he worked had been cracking German codes. In the middle of the twentieth century, computers were huge, rare, and isolated from the general public, whereas at the beginning of the twenty-first century they are essential parts of everyday life. My first experience programming computers came in 1974, when I was a graduate student in the sociology department at Harvard University, and I began using the machines for statistical analysis of data. Starting the next year at the University of Washington, where I was a beginning assistant professor, I would sit for hours at a noisy keypunch machine, making the punch cards to enter programs
and data. After a while I realized I was going deaf from the noise and took to wearing earplugs. Later, back at Harvard in a faculty position, I began writing my own statistical analysis programs for my first personal computer, an Apple II. I remember that one kind of analysis would take 36 hours to run, with the computer humming away in a corner as I went about my daily life. For a decade beginning in 1983, I programmed educational software packages in sociology and psychology, and after a series of computer-related projects found myself running the sociology program at the National Science Foundation and representing the social and behavioral sciences on the major computing initiatives of NSF and the federal government more generally. After eight years of that experience, I moved to the NSF Directorate for Computer and Information Science and Engineering to run the NSF’s programs in human-computer interaction, universal access, and artificial intelligence and cognitive science before becoming deputy director of the Division of Information and Intelligent Systems, which contains these programs. My daughters, aged sixteen and thirteen, have used their considerable computer expertise to create the Center for Glitch Studies, a research project to discover and analyze programming errors in commercial video games. So far they have documented on their website more than 230 programming errors in popular video games. The hundreds of people who visit the website are not a passive audience, but send e-mail messages describing errors they themselves discovered, and they link their own websites into a growing network of knowledge and virtual social relationships.
A Personal Story—NSF’s FastLane
Computers have become vastly more important at work over recent decades, and they have come to play increasingly complex roles. For example, NSF has created an entire online system for reviewing grant proposals, called FastLane, and thousands of scientists and educators have become familiar with it through serving as reviewers or principal investigators.
A researcher prepares a description of the project he or she hopes to do and assembles ancillary information such as a bibliography and brief biographies of the team members. The researcher submits this material, along with data such as the dollar requests on the different lines of the formal budget. The only software required is a word processor and a web browser. As soon as the head of the institution’s grants office clicks the submit button, the full proposal appears at NSF, with the data already arranged in the appropriate data fields, so nobody has to key it in. Peer review is the heart of the evaluation process. As director of the HCI program, I categorize proposals into review panels, then recruit panelists who are experts in the field with specializations that match the scope of the proposals. Each panelist reviews certain proposals and submits a written review electronically. Once the individual reviews have been submitted, the panel meets face-to-face to discuss the proposals and recommend funding for the best ones. The panelists all have computers with Electronic Panel System (EPS) groupware that provides easy access to all the proposals and reviews associated with the particular panel. During the discussion of a particular proposal, one panelist acts as “scribe,” keeping a summary of what was said in the EPS. Other panelists can read the summary, send written comments to the scribe, and may be asked to approve the final draft online. Next the NSF program officer combines all the evaluations and writes a recommendation in the electronic system, for approval by the director of the division in which the program is located. More often than not, unfortunately, the decision is to decline to fund the proposal. In that case, the program officer and division director process the action quickly on their networked computers, and an electronic notification goes immediately to the principal investigator, who can access FastLane to read the reviews and summary of the panel discussion. In those rarer and happier situations when a grant is awarded, the principal investigator and program officer negotiate the last details and craft an abstract, describing the research. The instant the award is made, the money goes electronically to
the institution, and the abstract is posted on the web for anyone to see. Each year, the researcher submits a report, electronically of course, and the full record of the grant accumulates in the NSF computer system until the work has been completed. Electronic systems connect the people— researcher, program director, and reviewers—into a system of information flow that is also a social system in which each person plays a specific role. Because the system was designed over a number of years to do a particular set of jobs, it works quite well, and improvements are constantly being incorporated. This is a prime example of Computer-Supported Cooperative Work, one of the many HCI topics covered in this encyclopedia.
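The workflow just described can be thought of as a small state machine: a proposal moves from submission through panel review to an award or a declination, accumulating reviews and a panel summary along the way. The sketch below is a purely illustrative toy model of that flow in Python; it is not the actual FastLane software, and every class, method, and name in it is invented for the example.

from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List

# Toy model of the review workflow described above. This is NOT the real
# FastLane system; the structure here only illustrates the sequence of
# states a proposal passes through.

class Status(Enum):
    SUBMITTED = auto()
    UNDER_REVIEW = auto()
    DECLINED = auto()
    AWARDED = auto()

@dataclass
class Proposal:
    title: str
    principal_investigator: str
    reviews: List[str] = field(default_factory=list)
    panel_summary: str = ""
    status: Status = Status.SUBMITTED

    def add_review(self, text: str) -> None:
        # Each panelist submits a written review electronically.
        self.reviews.append(text)
        self.status = Status.UNDER_REVIEW

    def decide(self, summary: str, fund: bool) -> None:
        # The program officer records the panel summary and the outcome;
        # the principal investigator can then read both online.
        self.panel_summary = summary
        self.status = Status.AWARDED if fund else Status.DECLINED

# One hypothetical proposal moving through the toy workflow.
proposal = Proposal("Adaptive Haptic Interfaces", "G. Researcher")
proposal.add_review("Strong methodology; modest broader impacts.")
proposal.add_review("Novel interface concept and a feasible plan.")
proposal.decide("Competitive but not fundable this round.", fund=False)
print(proposal.status.name)  # DECLINED -- as the text notes, the most common outcome

Even in this stripped-down form, the model captures the point of the passage: the electronic system links researcher, reviewers, and program officer into one flow of information in which each person plays a clearly defined role.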
The Role of the Berkshire Encyclopedia of Human-Computer Interaction
Because the field of HCI is new, the Berkshire Encyclopedia of Human-Computer Interaction breaks new ground. It offers readers up-to-date information about several key aspects of the technology and its human dimensions, including
■ applications—major tools that serve human needs in particular ways, with distinctive usability issues.
■ approaches—techniques through which scientists and engineers design and evaluate HCI.
■ breakthroughs—particular projects that marked a turning point in the history of HCI.
■ challenges—problems and solutions, both technical and human, especially in controversial areas.
■ components—key parts of a software or hardware system that are central to how people use it.
■ disciplines—the contributions that various sciences and academic fields make to HCI.
■ interfaces—hardware or software systems that mediate between people and machines.
■ methods—general computer and information science solutions to wide classes of technical problems.
■ social implications—technological impacts on society and policy issues, and the potential of multiuser HCI systems to bring about social change.
These categories are not mutually exclusive; many articles fit in two or more of them. For example, the short article on laser printers concerns an output interface and explains how a laser printer puts words and pictures on paper. But this article also concerns a breakthrough, the actual invention of the laser printer, and it was written by the inventor himself, Gary Starkweather. Contributors The 175 contributors to the encyclopedia possess the full range and depth of expertise covered by HCI, and more. They include not only computer scientists and electrical engineers, but also social and behavioral scientists, plus practicing engineers, scientists, scholars, and other experts in a wide range of other fields. The oldest authors were born around the time that the very first experimental digital electronic computer was built, and the entire history of computing has taken place during their lives. Among the influential and widely respected contributors is Jose-Marie Griffiths, who contributed the article on digital libraries. As a member of the U.S. President’s Information Technology Advisory Committee, Griffiths understands the full scope and social value of this new kind of public resource. Contributors Judith S. Olson, Gary M. Olson, and John M. Carroll are among the very few leaders who have been elected to the Academy of the Special Interest Group on Computer-Human Interaction of the Association for Computing Machinery (SIGCHI). In 2003 Carroll received the organization’s Lifetime Achievement Award for his extensive accomplishments, including his contributions to the Blacksburg Electronic Village, the most significant experiment on community participation in computer-mediated communication. Jack Dongarra, who wrote the contribution on supercomputers, developed the LINPACK Benchmark, which is used to test the speed of these upper-end machines and which is the basis of the annual list of the five hundred fastest computers in the world. Building the Encyclopedia: Computer-Supported Cooperative Work The creation of this encyclopedia is an example of computer-supported cooperative work, a main area
of HCI. I have written occasional encyclopedia articles since the early 1990s, when I was one of several subject matter editors of The Encyclopedia of Language and Linguistics. Often, an editor working on a specialized encyclopedia for one publisher or another would send me an e-mail message asking if I would write a particular essay, and I would send it in, also by e-mail. I had a very good experience contributing to the Encyclopedia of Community, edited by Karen Christensen and David Levinson of Berkshire Publishing. I suggested to Karen that Berkshire might want to do an encyclopedia of human-computer interaction and that I could recruit excellent authors for such a project. Berkshire has extensive experience developing high-quality reference works, both in partnership with other publishing houses and on its own. Almost all the communication to create the encyclopedia was carried out online. Although I know many people in the field personally, it was a great help to have access to the public databases placed on the Web by NSF, including abstracts of all grants made in the past fifteen years, and to the online publications of organizations such as the Association for Computing Machinery and to the websites of all of the authors, which often provide copies of their publications. Berkshire created a special password-protected website with information for authors and a section where I could review all the essays as they were submitted.
For the Reader
There are many challenges ahead for HCI, and many are described in this encyclopedia. Difficult problems tend to have both technical and human aspects. For the benefit of the reader, the articles identify standard solutions and their ramifications, both positive and negative, and may also cover social or political controversies surrounding the problem and its possible solutions. Many of the articles describe how a particular scientific discipline or branch of engineering approaches HCI, and what it contributes to the multidisciplinary understanding of and improvement in how computers, robots, and information systems can serve human needs. Other articles focus on a particular interface, modality, or medium in which people receive information and control the
computer or system of which it is a part. These articles explain the technical features of the hardware or software; they also explain the way humans perceive, learn, and behave in the particular context. Still other articles concern how computer and information science has developed to solve a wide class of problems, using vivid examples to explain the philosophy of the method, paying some attention as well to the human side of the equation. Many articles—sometimes as their central focus and sometimes incidentally—examine the social implications of HCI, such as the impact of a particular kind of technology, the way that the technology fits into societal institutions, or a social issue involving computing. The technology can strengthen either cooperation or conflict between human beings, and the mutual relations between technological change and social change are often quite complex. For information technology workers, this encyclopedia provides insight into specialties other than the one they work in and offers useful perspectives on the broad field. For policy makers, it provides a basis for thinking about the decisions we face in exploiting technological possibilities for maximum human benefit. For students, this encyclopedia lays out how to use the technology to make a better world and offers a glimpse of the rapidly changing computer-assisted human world in which they are living their lives. To illuminate and expand on the articles themselves, the encyclopedia includes the following special features:
■ Approximately eighty sidebars with key primary text, glossary terms, quotes, and personal stories about how HCI has had an impact on the work and lives of professionals in the field.
■ Some seventy-five diverse illustrations, which range from “antique” photos of the ENIAC computer (c. 1940s) to cutting-edge computerized images.
■ A bibliography of HCI books and journal articles.
■ A popular culture appendix that includes more than 300 annotated entries on books, plays, movies, television shows, and songs that have connections to HCI.
William Sims Bainbridge
The views expressed are those of the author and do not necessarily reflect the position of the National Science Foundation.
FURTHER READING
Asher, R. E., & Simpson, J. M. Y. (Eds.). (1994). The encyclopedia of language and linguistics. Oxford, UK: Pergamon.
Bainbridge, W. S. (1989). Survey research: A computer-assisted introduction. Belmont, CA: Wadsworth.
Bainbridge, W. S. (1992). Social research methods and statistics: A computer-assisted introduction. Belmont, CA: Wadsworth.
Carroll, J. M. (Ed.). (2002). Human-computer interaction in the new millennium. Boston: Addison-Wesley.
Christensen, K., & Levinson, D. (2003). Encyclopedia of community: From the village to the virtual world. Thousand Oaks, CA: Sage.
Myers, B. A. (1996). A brief history of human computer interaction technology. ACM Interactions, 5(2), 44–54.
National Research Council. (1997). More than screen deep. Washington, DC: National Academy Press.
Roco, M. C., & Bainbridge, W. S. (2001). Societal implications of nanoscience and nanotechnology. Dordrecht, Netherlands: Kluwer.
Roco, M. C., & Bainbridge, W. S. (2003). Converging technologies for improving human performance. Dordrecht, Netherlands: Kluwer.
PUBLISHER’S NOTE By Karen Christensen
The Berkshire Encyclopedia of Human-Computer Interaction (HCI) is our first independent title. We’ve done many other award-winning encyclopedias but HCI will always have a unique place in our hearts and in our history. Even though most of our work has been in the social sciences, when William Bainbridge at the National Science Foundation wrote to suggest the topic of HCI, I knew instantly that it was the right topic for our “knowledge and technology” company. I grew up with the computer industry. My father, a computer engineer in the Silicon Valley, tried very hard to explain the fundamentals of computing, and even built a machine out of plywood and blinking lights to show my sixth-grade class that information can be captured and communicated with nothing more than a combination of on-off switches. I was a reader, much more interested in human stories and
relationships than in binary code; but it was books—and a career in publishing—that at last brought home to me that computers can support and expand human connections and improve our lives in myriad ways. Berkshire Publishing Group, based in a tiny New England town, depends on human-computer interaction to maintain working relationships, and friendships too, with many thousands of experts around the world. We are convinced, in fact, that this topic is central to our development as a twenty-first-century publishing company. The Berkshire Encyclopedia of Human-Computer Interaction takes computing into new realms, introducing us to topics that are intriguing both in their technical complexity and because they present us—human beings—with a set of challenging questions about our relationship with “thinking” machines. There are opportunities and risks in any new technology, and
HCI has intrigued writers for many decades because it leads us to a central philosophical, religious, and even historical question: What does it mean to be human? We’ll be exploring this topic and related ones in further works about technology and society. Bill Bainbridge was an exceptional editor: organized, focused, and responsive. Working with him has been deeply rewarding, and it’s no surprise that the hundreds of computer scientists and engineers he helped us recruit to contribute to the encyclopedia were similarly enthusiastic and gracious. All these experts—computer scientists and engineers as well as people working in other aspects of HCI— truly wanted to work with us to ensure that their work would be accessible and understandable. To add even greater interest and richness to the work, we’ve added dozens of photographs, personal stories, glossary terms, and other sidebars. In addition to article bibliographies, there is a master bibliography at the end, containing all 2,590 entries in the entire encyclopedia listed together for easy reference. And we’ve added a characteristic Berkshire touch, an appendix designed to appeal to even the most resolute Luddite: “HCI in Popular Culture,” a database compilation listing with 300 sci-fi novels, nonfiction titles, television programs and films from The Six-Million Dollar Man to The Matrix (perhaps the quintessential HCI story), and even a handful of plays and songs about computers and technology. The encyclopedia has enabled us to develop a network of experts as well as a cutting-edge resource that will help us to meet the needs of students, professionals, and scholars in many disciplines. Many articles will be of considerable interest and value to librarians—Digital Libraries, Information Filtering, Information Retrieval, Lexicon Building, and much more—and even to publishers. For example, we have an article on “Text Summarization” written by Judith Klavans, Director of Research at the Center for Advanced Study of Language, University of Maryland. “Summarization is a technique for identifying the key points of a document or set of related documents, and presenting these selected points as a brief, integrated independent representation” and is essential to electronic publishing, a key aspect of publishing today and in the future.
The Berkshire Encyclopedia of Human-Computer Interaction provides us with an essential grounding in the most relevant and intimate form of technology, making scientific and technological research available to a wide audience. This topic and other aspects of what Bill Bainbridge likes to refer to as “converging technologies” will continue to be a core part of our print and online publishing program. And, as befits a project so closely tied to electronic technology, an online version of the Berkshire Encyclopedia of Human-Computer Interaction will be available through xreferplus. For more information, visit www.xreferplus.com.
Karen Christensen
CEO, Berkshire Publishing Group
[email protected]
Editor’s Acknowledgements Karen Christensen, cofounder of the Berkshire Publishing Group, deserves both thanks and praise for recognizing that the time had come when a comprehensive reference work about human relations with computing systems was both possible and sorely needed. Courtney Linehan at Berkshire was both skilled and tireless in working with the authors, editor, and copyeditors to complete a marvelous collection of articles that are technically accurate while communicating clearly to a broad public. At various stages in the process of developing the encyclopedia, Marcy Ross and George Woodward at Berkshire made their own indispensable contributions. Among the authors, Mary Harper, Bhavani Thuraisingham, and Barry Wellman were unstinting in their insightful advice. I would particularly like to thank Michael Lesk who, as director of the Division of Information and Intelligent Systems of the National Science Foundation, gave me the opportunity to gain invaluable experience managing the grant programs in Universal Access and Human-Computer Interaction. William Sims Bainbridge Deputy Director, Division of Information and Intelligent Systems National Science Foundation
ABOUT THE EDITOR
William Sims Bainbridge is deputy director of the Division of Information and Intelligent Systems of the National Science Foundation, after having directed the division’s Human-Computer Interaction, Universal Access, and Knowledge and Cognitive Systems programs. He coedited Converging Technologies to Improve Human Performance, which explores the combination of nanotechnology, biotechnology, information technology, and cognitive science (National Science Foundation, 2002; www.wtec.org/ConvergingTechnologies). He has represented the social and behavioral sciences on five advanced technology initiatives: High Performance Computing and Communications, Knowledge and Distributed Intelligence, Digital Libraries, Information Technology Research, and Nanotechnology. Bill Bainbridge is also the author of ten books, four textbook-software packages, and some 150 shorter publications in information science, social science of technology, and the sociology of culture. He earned his doctorate from Harvard University.
A
ADAPTIVE HELP SYSTEMS
ADAPTIVE INTERFACES
AFFECTIVE COMPUTING
ALTAIR
ALTO
ANIMATION
ANTHROPOLOGY AND HCI
ANTHROPOMETRY
APPLICATION USE STRATEGIES
ARPANET
ARTIFICIAL INTELLIGENCE
ASIAN SCRIPT INPUT
THE ATANASOFF-BERRY COMPUTER
ATTENTIVE USER INTERFACE
AUGMENTED COGNITION
AUGMENTED REALITY
AVATARS
ADAPTIVE HELP SYSTEMS Adaptive help systems (AHSs; also called intelligent help systems) are a specific kind of help system and a recognized area of research in the fields of artificial intelligence and human-computer interaction. The goal of an adaptive help system is to provide personalized help to users working with complex interfaces, from operating systems (such as UNIX) to popular applications (such as Microsoft Excel). Unlike traditional static help systems that serve by request the same information to different users, AHSs attempt to adapt to the knowledge and goals of individual users, offering the most relevant information in the most relevant way.
The first wave of research on adaptive help emerged in the early 1980s, when the UNIX system—due to its low cost and efficiency—reached many universities whose users lacked the advanced technical training (such as knowledge of complicated commands) needed to operate UNIX. Early work on adaptive and intelligent help systems focused almost exclusively on UNIX and its utilities, such as text editors and e-mail. From 1980 to 1995 this research direction involved more than a hundred researchers working on at least two dozen projects. The most representative projects of this generation were UNIX Consultant and EUROHELP. The widespread use of graphical user interfaces (GUIs) in the early 1990s caused a pause in AHS research, because GUIs resolved a number of the problems that the early generation of AHSs sought to address. In just a few years, however, GUIs reached the level of complexity where adaptive help again became important, giving
INTERFACE Interconnections between a device, program, or person that facilitate interaction.
rise to a second wave of research on AHSs. Lumière, the most well known project of this wave, introduced the idea of intelligent help to millions of users of Microsoft applications.
Active and Passive AHSs
Adaptive help systems are traditionally divided into two classes: active and passive. In a passive AHS, the user initiates the help session by asking for help. An active help system initiates the help session itself. Both kinds of AHSs have to solve three challenging problems: They must build a model of user goals and knowledge, they must decide what to present in the next help message, and they must decide how to present it. In addition, active AHSs also need to decide when to intervene with adaptive help.
User Modeling
To be useful, a help message has to present information that is new to the user and relevant to the user’s current goal. To determine what is new and relevant, AHSs track the user’s goals and the user’s knowledge about the interface and maintain a user model. Two major approaches to user modeling in AHSs are “ask the user” and “observe the user.” Most passive AHSs have exploited the first of these approaches. UNIX Consultant demonstrates that a passive AHS can be fairly advanced: It involves users in a natural-language dialogue to discover their goals and degree of knowledge and then provides the most relevant information. In contrast, active AHSs, introduced by the computer scientist Gerhard Fischer in 1985, strive to deduce a user’s goals by observing the user at work; they then strive to identify the lack of knowledge by detecting errors and suboptimal behavior. EUROHELP provides a good example of an active help system capable of identifying a knowledge gap and filling it proactively. In practical AHSs the two approaches often coexist: The user model is initiated through a short interview with the user and then kept updated through observation.
Many AHSs use two classic approaches to model the user. First, they track the user’s actions to understand which commands and concepts the user knows and which are not known, and second, they use task models to deduce the user’s current goal and missing knowledge. The first technology is reasonably simple: The system just records all used commands and parameters, assuming that if a command is used, it must be known. The second is based on plan recognition and advanced domain knowledge representation in such forms as a goal-plan-action tree. To identify the current goal and missing pieces of knowledge, the system first infers the user’s goal from an observed sequence of commands.
Farewell Clippy
Many PC users through the years quickly learned how to turn off “Clippy,” the Microsoft Office helper who appeared out of nowhere eagerly hoping to offer advice to the baffled. The Microsoft press release below was Clippy’s swan song. REDMOND, Wash., April 11, 2001—Whether you love him or you hate him, say farewell to Clippy automatically popping up on your screen. Clippy is the little paperclip with the soulful eyes and the Groucho eyebrows. The electronic ham who politely offers hints for using Microsoft Office software. But, after four years on-screen, Clippy will lose his starring role when Microsoft Office XP debuts on May 31. Clippy, the Office Assistant introduced in Office 97, has been demoted in Office XP. The wiry little assistant is turned off by default in Office XP, but diehard supporters can turn Clippy back on if they miss him. “Office XP is so easy to use that Clippy is no longer necessary, or useful,” explained Lisa Gurry, a Microsoft product manager. “With new features like smart tags and Task Panes, Office XP enables people to get more out of the product than ever before. These new simplicity and ease-of-use improvements really make Clippy obsolete,” she said. “He’s quite down in the dumps,” Gurry joked. “He has even started his own campaign to try to get his old job back, or find a new one.” Source: Microsoft. Retrieved March 10, 2004, from http://www.microsoft.com/presspass/features/2001/apr01/04-11clippy.asp
It then tries to find a more efficient (or simply correct) sequence of commands to achieve this goal. Next, it identifies the aspects of the interface that the user needs to know to build this sequence. These aspects are suspected to be unknown and become the candidates to be presented in help messages.
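The command-tracking side of this approach can be illustrated with a minimal sketch (the class, the command names, and the proposed plan below are invented for illustration and do not come from any particular AHS): commands the user has executed are assumed known, and commands required by a better plan but never observed become candidates for the next help message.

```python
# Minimal sketch of an "observe the user" overlay model; the application,
# command names, and proposed plan are hypothetical.

class OverlayUserModel:
    def __init__(self):
        self.known_commands = set()

    def observe(self, command):
        """Record an executed command; if a command is used, assume it is known."""
        self.known_commands.add(command)

    def help_candidates(self, better_plan):
        """Commands of a more efficient plan that the user has never used."""
        return [cmd for cmd in better_plan if cmd not in self.known_commands]


model = OverlayUserModel()
for cmd in ["ls", "cd", "cp", "cd", "cp", "cd", "cp"]:  # observed command sequence
    model.observe(cmd)

# Suppose a plan recognizer inferred the goal "copy several files" and found a
# shorter way to achieve it; the unfamiliar step becomes the next help topic.
better_plan = ["cp", "wildcard expansion"]
print(model.help_candidates(better_plan))  # -> ['wildcard expansion']
```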
Providing Adaptive Help: Deciding What to Present and How
Deciding what should be the focus of the next help message is the most challenging job of an adaptive help system. A number of passive AHSs simply avoid this problem, allowing the users to determine what they need and focusing on adaptive presentation only. Classic AHSs, which use plan recognition, can determine quite precisely what the user needs, but this functionality requires elaborate knowledge representation. To bypass the knowledge representation barrier, modern practical AHSs use a range of alternative (though less precise) technologies that are either statistically or socially based. For example, Lumière used a complex probabilistic network to connect observed user actions with available help interventions, while the system developed by MITRE researchers Linton and Schaefer compared the skills of individual users with a typical set of interface skills assembled by observing multiple users. As soon as the focus of the next help message is determined, the AHS has to decide how to present the target content. While some AHSs ignore this part and focus solely on the selection part, it has been shown that adaptive presentation of help information can increase the user’s comprehension speed and decrease errors. Most often the content presentation is adapted to the user’s knowledge, with, for example, expert users receiving more specific details and novice users receiving more explanations. To present the adaptive content, classic AHSs that operated in a line-based UNIX interface relied mostly on a natural language generation approach. Modern AHSs operating in the context of graphical user interfaces exploit adaptive hypermedia techniques to present the content and links to further information that is most suitable for the given user.
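The socially based selection strategy mentioned above can be sketched in a few lines (the skill names, peer logs, and threshold are invented and do not reproduce the Linton and Schaefer system): skills that are typical among comparable users but absent from an individual's own usage log become recommendations.

```python
# Sketch: recommend interface skills that peers use often but this user never has.
# All data are hypothetical.
from collections import Counter

peer_logs = [
    {"copy", "paste", "styles", "track-changes"},
    {"copy", "paste", "styles"},
    {"copy", "paste", "track-changes", "tables"},
]
user_log = {"copy", "paste"}

usage = Counter()
for log in peer_logs:
    usage.update(log)

# A skill counts as "typical" if at least half of the peers use it.
typical = {skill for skill, n in usage.items() if n >= len(peer_logs) / 2}

recommendations = sorted(typical - user_log)
print(recommendations)  # -> ['styles', 'track-changes']
```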
Research into adaptive help systems has contributed to progress in a number of subfields within artificial intelligence and HCI and has helped to establish research on intelligent interfaces and user modeling. A treasury of knowledge accumulated by various AHS projects over the last thirty years is being used now to develop practical adaptive help and adaptive performance support systems.
Peter Brusilovsky
See also Artificial Intelligence; Task Analysis; User Modeling FURTHER READING Brusilovsky, P., Kobsa, A., & Vassileva, J. (Eds.). (1998). Adaptive hypertext and hypermedia. Dordrecht, Netherlands: Kluwer. Encarnação, L. M., & Stoev, S. L. (1999). An application-independent intelligent user support system exploiting action-sequence based user modeling. In J. Kay (Ed.), Proceedings of 7th International Conference on User Modeling, UM99, June 20–24, 1999 (pp. 245–254). Vienna: Springer. Fischer, G. (2001). User modeling in human-computer interaction. User Modeling and User-Adapted Interaction, 11(1–2), 65–86. Goodman, B. A., & Litman, D. J. (1992). On the interaction between plan recognition and intelligent interfaces. User Modeling and UserAdapted Interaction, 2(1), 83–115. Hegner, S. J., Mc Kevitt, P., Norvig, P., & Wilensky, R. L. (Eds.). (2001). Intelligent help systems for UNIX. Dordrecht, Netherlands: Kluwer. Horvitz, E., Breese, J., Heckerman, D., Hovel, D., & Rommelse, K. (1998). The Lumière project: Bayesian user modeling for inferring the goals and needs of software users. In Proceedings of Fourteenth Conference on Uncertainty in Artificial Intelligence (pp. 256–265). San Francisco: Morgan Kaufmann. Linton, F., & Schaefer, H.-P. (2000). Recommender systems for learning: Building user and expert models through long-term observation of application use. User Modeling and User-Adapted Interaction, 10(2–3), 181–208. Oppermann, R. (Ed.). (1994). Adaptive user support: Ergonomic design of manually and automatically adaptable software. Hillsdale, NJ: Lawrence Erlbaum Associates. Wilensky, R., Chin, D., Luria, M., Martin, J., Mayfield, J., & Wu, D. (1988). The Berkeley UNIX Consultant project. Computational Linguistics, 14(4), 35–84. Winkels, R. (1992). Explorations in intelligent tutoring systems and help. Amsterdam: IOS Press.
ADAPTIVE INTERFACES Computer interfaces are becoming ever richer in functionality, software systems are becoming more
complex, and online information spaces are becoming larger in size. On the other hand, the number and diversity of people who use computer systems are increasing as well. The vast majority of new users are thereby not computer experts, but rather laypersons such as professionals in nontechnical areas, elderly people, and children. These users vary with respect not only to their computer skills, but also to their fields of expertise, their tasks and goals, their mood and motivation, and their intellectual and physical capabilities. The traditional strategy for enabling heterogeneous user groups to master the complexity and richness of computers was to render computer interaction as simple as possible and thereby to cater to the lowest common denominator of all users. Increasingly, though, developers are creating computer applications that can be “manually” customized to users’ needs by the users themselves or by an available expert. Other applications go beyond this capability. They are able within certain limits to recognize user needs and to cater to them automatically. Following the terminology of Reinhard Oppermann, we will use the term adaptable for the manual type of application and adaptive for the automatic type.
Adaptable and Adaptive Systems
Adaptable systems are abundant. Most commercial software allows users to modify system parameters and to indicate individual preferences. Web portals permit users to specify the information they want to see (such as stock quotes or news types) and the form in which it should be displayed by their web browsers. Web shops can store basic information about their customers, such as payment and shipping data, past purchases, wish lists for future purchases, and birthdates of friends and family to facilitate transactions online. In contrast, adaptive systems are still quite rare. Some shopping websites give purchase recommendations to customers that take into account what these customers bought in the past. Commercial learning software for high school mathematics adapts its teaching strategies to the presumed level of expertise of each student. Advertisements on mobile devices are already being targeted to users in certain geographical locations only or to users who
perform certain indicative actions (such as entering certain keywords in search engines). User adaptability and adaptivity recently gained strong popularity on the World Wide Web under the notion of “personalization.” This popularity is due to the fact that the audiences of websites are often even less homogeneous than the user populations of commercial software. Moreover, personalization has been recognized as an important instrument for online customer relationship management.
Acquiring Information about Users
To acquire the information about users that is needed to cater to them, people can use several methods. A simple way is to ask users directly, usually through an initial questionnaire. However, this questionnaire must be kept extremely short (usually to less than five questions) because users are generally reluctant to spend effort on work that is not directly related to their current tasks, even if this work would save them time in the long run. In certain kinds of systems, specifically tutoring systems, user interviews can be clad in the form of quizzes or games. In the future, basic information about users may be available on smartcards, that is, machine-readable plastic cards that users swipe through a reading device before the beginning of a computer session or that can even be read from a distance as users approach a computer terminal. Various methods draw assumptions about users based on their interaction behavior. These methods include simple rules that predict user characteristics or assign users to predetermined user groups with known characteristics when certain user actions are being observed (the latter method is generally known as the “stereotype approach” to user modeling). Probabilistic reasoning methods take uncertainty and evidence from different sources into account. Plan recognition methods aim at linking individual actions of users to presumable underlying plans and goals. Machine-learning methods try to detect regularities in users’ actions (and to use the learned patterns as a basis for predicting future actions). Clique-based (collaborative) filtering methods determine those users who are closest to the current user in an n-dimensional attribute space and
Keeping Disabled People in the Technology Loop
AUSTIN, Texas (ANS)—If communications technology is fueling the economy and social culture of the 21st century, why should 18 percent of the population be left behind? Stephen Berger, a specialist in retrofitting the latest computer and phone technology for the disabled, is trying to make sure they’re not. From an office in Austin, Berger works to make sure that those with hearing and vision impairments or other disabilities can benefit from the latest in Internet, cell phone and other technologies. As a project manager at Siemens Information and Communication Mobile, where he’s responsible for standards and regulatory management, Berger works to unravel such problems as why those who use hearing aids couldn’t use many brands of cell phones. “Some new cell phones make a buzz in hearing aids,” Berger explained. “The Federal Communications Commission took note and said it needed to be resolved.” But what was needed was either better technology or protocols that both the hearing impaired and the cell phone companies could agree on. Berger helped determine what types of hearing aids work with certain types of phones. The intelligence was passed around the industry, and the problem is now minimal. Berger is one of the many technology specialists in huge communications companies whose niche has been defined in recent years. While the proliferation of computers, home gadgets and gizmos is on the rise, it’s workers like Berger who make sure the disabled aren’t left out of the loop. Other workers in the field, according to Berger, are coming from educational institutions. For example, Neil Scott and Charlie Robinson, from Stanford University and Louisiana Tech University respectively, are working on the things the Hollywood movies are made of. […] “Guys like this are breaking the barrier between the blind and computers,” he said. “(The blind) will soon have an interface with no visual, just audio computer controls with no touch, just head position and voice controls.” Other devices, like the Home RF systems—that’s home radio frequency—link all the major appliances and electronics of the home together. That means telephone, Dolby sound, Internet, entertainment electronics and other devices are all connected into one wireless network with voice control for those who aren’t mobile. “It’s microphones implanted in wallpaper, security systems by voice, household appliances that work on a vocal command,” Berger said. “It’s what the movies are made of and it’s here today.” Source: Innovations keep disabled in the technology loop. American News Services, October 12, 2000.
use them as predictors for unknown attributes of the current user. Clustering methods allow one to generalize groups of users with similar behaviors or characteristics and to generate user stereotypes.
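The clique-based (collaborative) filtering idea can be illustrated with a minimal sketch over invented data (the attribute names, ratings, and neighborhood size are all hypothetical): users are treated as points in an attribute space, the neighbors closest to the current user are located, and their values are averaged to predict an attribute the current user has not yet supplied.

```python
# Sketch of nearest-neighbor ("clique-based") prediction over invented data.
import math

profiles = {
    "u1": {"news": 5, "sports": 1, "finance": 4},
    "u2": {"news": 4, "sports": 2, "finance": 5},
    "u3": {"news": 1, "sports": 5, "finance": 2},
}
current = {"news": 5, "sports": 1}  # interest in "finance" is unknown

def distance(a, b, dims):
    """Euclidean distance over the attributes both users have supplied."""
    return math.sqrt(sum((a[d] - b[d]) ** 2 for d in dims))

dims = current.keys()
neighbors = sorted(profiles, key=lambda u: distance(profiles[u], current, dims))[:2]

predicted_finance = sum(profiles[u]["finance"] for u in neighbors) / len(neighbors)
print(neighbors, predicted_finance)  # -> ['u1', 'u2'] 4.5
```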
Types of Information about the User
Researchers have considered numerous kinds of user-related data for personalization purposes, including the following (illustrated in the sketch after the list):
■ Data about the user, such as demographic data, and information or assumptions about the user’s knowledge, skills, capabilities, interests, preferences, goals, and plans
■ Usage data, such as selections (e.g., of webpages or help texts with certain content), temporal viewing behavior (particularly “skipping” of webpages or streaming media), user ratings (e.g., regarding the usefulness of products or the relevance of information), purchases and related actions (e.g., in shopping carts, wish lists), and usage regularities (such as usage frequencies, high correlations between situations and specific actions, and frequently occurring sequences of actions)
■ Environmental data, such as data about the user’s software and hardware environments and information about the user’s current location
(where the granularity ranges from country level to the precise coordinates) and personalization-relevant data of this location.
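Read as a schema, these three categories might be collected in a single profile record, as in the illustrative sketch below (the field names are invented and do not follow any standard):

```python
# Illustrative container for the three kinds of user-related data listed above.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # Data about the user
    demographics: dict = field(default_factory=dict)
    knowledge: set = field(default_factory=set)
    interests: set = field(default_factory=set)
    # Usage data
    selections: list = field(default_factory=list)
    ratings: dict = field(default_factory=dict)
    purchases: list = field(default_factory=list)
    # Environmental data
    device: str = ""
    location: str = ""

profile = UserProfile(demographics={"age_group": "25-34"},
                      interests={"stock quotes"},
                      location="country:US")
print(profile.interests)
```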
Privacy
Storing information about users for personalization is highly privacy-relevant. Numerous consumer surveys show consistently that users are concerned about their privacy online, which also affects personalized systems on the Web. Some popular personalization methods also seem in conflict with privacy laws that protect the data of identified or identifiable individuals in more than thirty countries. Such laws usually call for parsimony, purpose specificity, and user awareness or even user consent in the collecting and processing of personal data. The privacy laws of many countries also restrict the transborder flow of personal data or even extend their coverage beyond the national boundaries. Such laws then also affect personalized websites abroad that serve users in these regulated countries, even if there is no privacy law in place in the country where the websites are located. Well-designed user interaction will be needed in personalized systems to communicate to users at any point the prospective benefits of personalization and the resulting privacy consequences to enable users to make educated choices. A flexible architecture, moreover, will be needed to allow for optimal personalization within the constraints set by users’ privacy preferences and the legal environment. Alternatively, anonymous yet personalized interaction can be offered.
Empirical Evaluation
A number of empirical studies demonstrate in several application areas that well-designed adaptive user interfaces may give users considerable benefits. Boyle and Encarnacion showed that the automatic adjustment of the wording of a hypertext document to users’ presumed familiarity with technical vocabulary improved text comprehension and search times significantly in comparison with static hypertext. Conati and colleagues presented evidence that “adaptive prompts based on the student model effectively elicited self-explanations that improved students’ learning” (Conati et al. 2000, 404). Corbett and Trask showed that a certain tutoring strategy (namely subgoal scaffolding based on a continuous knowledge trace of the user) decreases the average number of problems required to reach cognitive mastery of Lisp concepts. In studies reviewed by Specht and Kobsa, students’ learning time and retention of learning material improved significantly if learners with low prior knowledge received “strict” recommendations on what to study next (which amounted to the blocking of all other learning material), while students with high prior knowledge received noncompulsory recommendations only. Strachan and colleagues found significantly higher user ratings for the personalized version of a help system in a commercial tax advisor system than for its nonpersonalized version. Personalization for e-commerce on the Web has also been positively evaluated to some extent, both from a business and a user point of view. Jupiter Communications reports that personalization at twenty-five consumer e-commerce sites boosted the number of new customers by 47 percent and revenues by 52 percent in the first year. Nielsen NetRatings reports that registered visitors to portal sites (who obtain the privilege of adapting the displayed information to their interests) spend more than three times longer at their home portal than other users and view three to four times more pages. Nielsen NetRatings also reports that e-commerce sites offering personalized services convert approximately twice as many visitors into buyers as do e-commerce sites that do not offer personalized services. In design studies on beneficial personalized elements in a Web-based procurement system, participants, however, “expressed their strong desire to have full and explicit control of data and interaction” and “to readily be able to make sense of site behavior, that is, to understand a site’s rationale for displaying particular content” (Alpert et al. 2003, 373). User-adaptable and user-adaptive interfaces have shown their promise in several application areas. The increase in the number and variety of computer users is likely to increase their promise in the future. The observation of Browne still holds true, however: “Worthwhile adaptation is system specific. It is dependent on the users of that system and requirements
to be met by that system” (Browne 1993, 69). Careful user studies with a focus on expected user benefits through personalization are, therefore, indispensable for all practical deployments.
Alfred Kobsa
See also Artificial Intelligence and HCI; Privacy; User Modeling
FURTHER READING Alpert, S., Karat, J., Karat, C.-M., Brodie, C., & Vergo, J. G. (2003). User attitudes regarding a user-adaptive e-commerce web site. User Modeling and User-Adapted Interaction, 13(4), 373–396. Boyle, C., & Encarnacion, A. O. (1994). MetaDoc: An adaptive hypertext reading system. User Modeling and User-Adapted Interaction, 4(1), 1–19. Browne, D. (1993). Experiences from the AID Project. In M. SchneiderHufschmidt, T. Kühme, & U. Malinowski (Eds.), Adaptive user interfaces: Principles and practice (pp. 69–78). Amsterdam: Elsevier. Carroll, J., & Rosson, M. B. (1989). The paradox of the active user. In J. Carroll (Ed.), Interfacing thought: Cognitive aspects of humancomputer interaction (pp. 80–111). Cambridge, MA: MIT Press. Conati, C., Gertner, A., & VanLehn, K. (2002). Using Bayesian networks to manage uncertainty in student modeling. User Modeling and User-Adapted Interaction, 12(4), 371–417. Corbett, A. T., & Trask, H. (2000). Instructional interventions in computer-based tutoring: Differential impact on learning time and accuracy. Proceedings of ACM CHI’ 2000 Conference on Human Factors in Computing Systems (pp. 97–104). Hof, R., Green, H., & Himmelstein, L. (1998, October 5). Now it’s YOUR WEB. Business Week (pp. 68–75). ICONOCAST. (1999). More concentrated than the leading brand. Retrieved August 29, 2003, from http://www.iconocast.com/issue/1999102102.html Kobsa, A. (2002). Personalized hypermedia and international privacy. Communications of the ACM, 45(5), 64–67. Retrieved August 29, 2003, from http://www.ics.uci.edu/~kobsa/papers/2002-CACMkobsa.pdf Kobsa, A., Koenemann, J., & Pohl, W. (2001). Personalized hypermedia presentation techniques for improving customer relationships. The Knowledge Engineering Review, 16(2), 111–155. Retrieved August 29, 2003, from http://www.ics.uci.edu/~kobsa/papers/2001-KERkobsa.pdf Kobsa, A., & Schreck, J. (2003). Privacy through pseudonymity in useradaptive systems. ACM Transactions on Internet Technology, 3(2), 149–183. Retrieved August 29, 2003, from http://www.ics.uci.edu/ ~kobsa/papers/2003-TOIT-kobsa.pdf Oppermann, R. (Ed.). (1994). Adaptive user support: Ergonomic design of manually and automatically adaptable software. Hillsdale, NJ: Lawrence Erlbaum. Rich, E. (1979). User modeling via stereotypes. Cognitive Science, 3, 329–354. Rich, E. (1983). Users are individuals: Individualizing user models. International Journal of Man-Machine Studies, 18, 199–214.
Specht, M., & Kobsa, A. (1999). Interaction of domain expertise and interface design in adaptive educational hypermedia. Retrieved March 24, 2004, from http://wwwis.win.tue.nl/asum99/specht/specht.html Strachan, L., Anderson, J., Sneesby, M., & Evans, M. (2000). Minimalist user modeling in a complex commercial software system. User Modeling and User-Adapted Interaction, 10(2–3), 109–146. Teltzrow, M., & Kobsa, A. (2004). Impacts of user privacy preferences on personalized systems—A comparative study. In C.-M. Karat, J. Blom, & J. Karat (Eds.), Designing personalized user experiences for e-commerce (pp. 315–332). Dordrecht, Netherlands: Kluwer Academic Publishers.
AFFECTIVE COMPUTING Computations that machines make that relate to human emotions are called affective computations. Such computations include but are not limited to the recognition of human emotion, the expression of emotions by machines, and direct manipulation of the human user’s emotions. The motivation for the development of affective computing is derived from evidence showing that the ability of humans to feel and display emotions is an integral part of human intelligence. Emotions help humans in areas such as decisionmaking and human-to-human communications. Therefore, it is argued that in order to create intelligent machines that can interact effectively with humans, one must give the machines affective capabilities. Although humans interact mainly through speech, we also use body gestures to emphasize certain parts of the speech and as one way to display emotions. Scientific evidence shows that emotional skills are part of what is called intelligence. A simple example is the ability to know when something a person says to another is annoying or pleasing to the other, and be able to adapt accordingly. Emotional skills also help in learning to distinguish between important and unimportant things, an integral part of intelligent decision-making. For computers to be able to interact intelligently with humans, they will need to have such emotional skills as the ability to display emotions (for example, through animated agents) and the ability to recognize the user’s emotions. The ability to recognize emotions would be useful in day-to-day interaction, for example, when the user is Web browsing or
FUNCTIONALITY The capabilities of a given program or parts of a program.
searching: If the computer can recognize emotions, it will know if the user is bored or dissatisfied with the search results. Affective skills might also be used in education: A computer acting as a virtual tutor would be more effective if it could tell by students’ emotional responses that they were having difficulties or were bored—or pleased. It would not, however, be necessary for computers to recognize emotions in every application. Airplane control and banking systems, for example, do not require any affective skills. However, in applications in which computers take on a social role (as a tutor, assistant, or even companion), it may enhance their functionality if they can recognize users’ emotions. Computer agents could learn users’ preferences through the users’ emotions. Computers with affective capabilities could also help human users monitor their stress levels. In clinical settings, recognizing a person’s inability to interpret certain facial expressions may help diagnose early psychological disorders. In addition to recognizing emotions, the affective computer would also have the ability to display emotions. For example, synthetic speech with emotions in the voice would sound more pleasing than a monotonous voice and would enhance communication between the user and the computer. For computers to be affective, they must recognize emotions, be capable of measuring signals that represent emotions, and be able to synthesize emotions.
General Description of Emotions
Human beings possess and express emotions in everyday interactions with others. Emotions are often reflected on the face, in hand and body gestures, and in the voice. The fact that humans understand emotions and know how to react to other people’s expressions greatly enriches human interaction. There is no clear definition of emotions. One way to handle emotions is to give them discrete labels, such as joy, fear, love, surprise, sadness, and so on. One problem with this approach is that humans often feel blended emotions. In addition, the choice of words may be too restrictive or culturally dependent. Another way to describe emotions is to have multiple dimensions or scales. Instead of choosing discrete labels, emotions are described on several continuous scales, for example from pleasant to unpleasant or from simple to complicated. Two common scales are valence and arousal. Valence describes the pleasantness of the stimuli, with positive (or pleasant) on one end and negative (or unpleasant) on the other. For example, happiness has a positive valence, while disgust has a negative valence. The other dimension is arousal, or activation, which describes the degree to which the emotion stimulates the person experiencing it. For example, sadness has low arousal, whereas surprise has high arousal. The different emotional labels could be plotted at various positions on a two-dimensional plane spanned by these two axes to construct a two-dimensional emotion model. In 1954 the psychologist Harold Schlosberg suggested a three-dimensional model in which he added an axis for attention-rejection to the above two. This was reflected by facial expressions as the degree of attention given to a person or object. For example, attention is expressed by wide open eyes and an open mouth. Rejection shows contraction of eyes, lips, and nostrils. Although psychologists and others argue about what exactly emotions are and how to describe them, everyone agrees that a lack of emotions or the presence of emotional disorders can be so disabling that people affected are no longer able to lead normal lives or make rational decisions.
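The two-dimensional valence-arousal description lends itself to a small sketch (the coordinates below are rough illustrative placements, not measured values): each emotion label occupies a point in the plane, and an observed valence-arousal pair can be mapped to the nearest label.

```python
# Sketch: emotion labels as points in a valence-arousal plane (coordinates invented).
import math

emotion_space = {
    "happiness": ( 0.8,  0.5),   # pleasant, moderately arousing
    "surprise":  ( 0.2,  0.9),   # high arousal
    "sadness":   (-0.6, -0.4),   # unpleasant, low arousal
    "disgust":   (-0.7,  0.2),
}

def nearest_label(valence, arousal):
    """Return the label whose point is closest to the observed pair."""
    return min(emotion_space,
               key=lambda e: math.dist(emotion_space[e], (valence, arousal)))

print(nearest_label(0.7, 0.4))  # -> happiness
```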
Technology for Recognizing Emotions
Technologies for recognizing human emotions began to develop in the early 1990s. Three main modalities have been targeted as being relevant for this task: visual, auditory, and physiological signals. The visual modality includes both static images and videos containing information such as facial expressions and body motion. The audio modality uses primarily the human voice signal as input, while the physiological signals measure changes in the human body, such as changes in temperature, blood pressure, heart rate, and skin conductivity.
Facial Expressions
One of the most common ways for humans to display emotions is through facial expressions. The best-known study of facial expressions was done by the psychologist Paul Ekman and his colleagues. Since the 1970s, Ekman has argued that emotions are manifested directly in facial expressions, and that there are six basic universal facial expressions corresponding to happiness, surprise, sadness, fear, anger, and disgust. Ekman and his collaborator, the researcher Wallace Friesen, designed a model linking facial motions to expression of emotions; this model is known as the Facial Action Coding System (FACS). The Facial Action Coding System codes facial expressions as a combination of facial movements known as action units. The action units have some relation to facial muscular motion and were defined based on anatomical knowledge and by studying videotapes of how the face changes its appearance. Ekman’s work inspired many other researchers to analyze facial expressions by means of image and video processing. Although the FACS is designed to be performed by human observers viewing a video frame by frame, there have been attempts to automate it in some fashion, using the notion that a change in facial appearance can be described in terms of a set of facial expressions that are linked to certain emotions. Work on automatic facial-expression recognition started in the early 1990s. In all the research, some method to extract measurements of the facial features from facial images or videos was used and a classifier was constructed to categorize the facial expressions. Comparison of facial expression recognition methods shows that recognition rates can, on limited data sets and applications, be very high. The generality of these results has yet to be determined.
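The measure-then-classify pipeline described above can be illustrated with a toy sketch (the feature dimensions, numbers, and the nearest-centroid rule are invented stand-ins for the feature extractors and statistical classifiers used in the research literature):

```python
# Toy sketch of the measure-then-classify pipeline for facial expressions.
# Feature vectors might encode things like eyebrow raise, mouth openness,
# and lip-corner pull; all numbers here are invented.
import math

centroids = {                       # mean feature vector per expression (made up)
    "happiness": (0.1, 0.3, 0.9),
    "surprise":  (0.9, 0.8, 0.2),
    "sadness":   (0.2, 0.1, 0.1),
}

def classify(features):
    """Assign the expression whose centroid is closest to the measurements."""
    return min(centroids, key=lambda c: math.dist(centroids[c], features))

measured = (0.2, 0.35, 0.8)         # pretend output of a feature extractor
print(classify(measured))           # -> happiness
```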
Voice
Quantitative studies of emotions expressed in the voice have had a longer history than quantitative studies of facial expressions, starting in the 1930s. Studies of the emotional content of speech have examined the pitch, duration, and intensity of the utterance. Automatic systems for recognizing emotions from the voice have so far not achieved high accuracy. In addition, there is no agreed-upon theory
of the universality of how emotions are expressed vocally, unlike the case for facial expressions. Research that began in the late 1990s concentrated on combining voice and video to enhance the recognition capabilities of voice-only systems.
Multimodal Input
Many researchers believe that combining different modalities enables more accurate recognition of a user’s emotion than relying on any single modality alone. Combining different modalities presents both technological and conceptual challenges, however. On the technological side, the different signals have different sampling rates (that is, it may take longer to register signals in one modality than in another), and the existence of one signal can reduce the reliability of another (for example, when a person is speaking, facial expression recognition is not as reliable). On the conceptual side, emotions are not always aligned in time for different signals. For example, happiness might be evident visually before it becomes evident physiologically.
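One simple fusion scheme, sketched below with invented numbers, is to let each modality produce its own probability estimate per emotion and then average the estimates with weights that can be lowered when a modality is momentarily unreliable (for example, the facial channel while the user is speaking):

```python
# Sketch of weighted late fusion of per-modality emotion estimates (numbers invented).

face   = {"happiness": 0.6, "sadness": 0.1, "surprise": 0.3}
voice  = {"happiness": 0.3, "sadness": 0.2, "surprise": 0.5}
physio = {"happiness": 0.4, "sadness": 0.4, "surprise": 0.2}

# Down-weight the facial channel, e.g., because the user is currently speaking.
weights = {"face": 0.2, "voice": 0.5, "physio": 0.3}
estimates = {"face": face, "voice": voice, "physio": physio}

fused = {}
for emotion in face:
    fused[emotion] = sum(weights[m] * estimates[m][emotion] for m in estimates)

print(max(fused, key=fused.get), fused)  # -> happiness, with the fused scores
```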
Computers Displaying Emotions
For affective computing, it is as important that computers display emotions as it is that they recognize them. There are a number of potential ways in which computers could evince an emotion. A computer might depend on facial expressions of animated agents or on synthesized speech, or emotion could be conveyed to a user through wearable devices and text messages. The method would be determined by the application domain and preset goals. For example, interaction in an office environment requires emotion to be expressed differently from the way it would be expressed during pursuit of leisure activities, such as video games; similarly, in computer-assisted tutoring, the computer’s goal is to teach the human user a concept, and the display of emotions facilitates this goal, while in a game of poker, the computer’s goal is to hide its intention and deceive its human adversary. A computer could also synthesize emotions in order to make intelligent decisions with regard to problems whose attributes cannot all be quantified exactly, or when the search space for the best solution is
large. By assigning valence to different choices based on different emotional criteria, many choices in a large space can be eliminated quickly, resulting in a quick and good decision.
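This valence-based pruning of a large decision space can be sketched briefly (the options, criteria, scores, and threshold are all invented for illustration):

```python
# Sketch: use a coarse "valence" score to prune options before detailed evaluation.
# All options, criteria, and scores are invented.

options = {
    "plan_a": {"risk": -0.8, "novelty": 0.2},
    "plan_b": {"risk": -0.1, "novelty": 0.6},
    "plan_c": {"risk": -0.3, "novelty": 0.9},
}

def valence(appraisal):
    """Crude affective appraisal: sum of positive and negative reactions."""
    return sum(appraisal.values())

# Discard clearly unpleasant options; only the survivors get a full evaluation.
survivors = {name: a for name, a in options.items() if valence(a) > 0.0}
print(sorted(survivors))  # -> ['plan_b', 'plan_c']
```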
Research Directions
Affective computing is still in its infancy. Some computer systems can perform limited recognition of human emotion with limited responses in limited application domains. As the demand for intelligent computing systems increases, however, so does the need for affective computing. Various moral issues have been brought up as relevant in the design of affective computers. Among them are privacy issues: If a computer can recognize human emotions, a user may want assurances that information on his or her emotional state will not be abused. There are also issues related to computers’ manipulation of people’s emotions: Users should have assurance that computers will not physically or emotionally harm them. There are also questions regarding who will have responsibility for computer actions. As affective technology advances, these issues will have increasing relevance.
Ira Cohen, Thomas S. Huang, Lawrence S. Chen
FURTHER READING Darwin, C. (1890). The expression of the emotions in man and animals (2nd ed.). London: John Murray. Ekman, P. (Ed.). (1982). Emotion in the human face (2nd ed.). New York: Cambridge University Press. Ekman, P., & Friesen,W.V. (1978). Facial action coding system: Investigator’s guide. Palo Alto, CA: Consulting Psychologists Press. James, W. (1890). The principles of psychology. New York: Henry Holt. Jenkins, J. M., Oatley, K., and Stein, N. L. (Eds.). (1998). Human emotions: A reader. Malden, MA: Blackwell. Lang, P. (1995). The emotion probe: Studies of motivation and attention. American Psychologist, 50(5), 372–385. Pantic, M., & Rothkrantz, L. J. M. (2000). Automatic analysis of facial expressions: The state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12), 1424–1445. Picard, R. W. (1997). Affective computing. Cambridge, MA: MIT Press. Picard, R. W., Vyzas, E., & Healey, J. (2001). Toward machine emotional intelligence: Analysis of affective physiological state. IEEE Transactions Pattern Analysis and Machine Intelligence, 23(10), 1175–1191. Schlosberg, H. (1954). Three dimensions of emotion. Psychological Review, 61(2), 81–88.
ALTAIR People have called the legendary Altair the first true personal computer. However, although it played an important role in the development of personal computers, we would be more correct to say that Altair was the last hobby computer that set many of the ambitions for the social movement that produced real personal computers during the 1970s. Thus, from the standpoint of human-computer interaction, Altair is worth remembering because it marked a crucial transition between two eras of amateur computing: an experimental era lasting from about 1950 until the mid-1970s, when home computers were among the more esoteric projects attempted by electronics hobbyists, and the true personal computer era, beginning with such computers as the Apple II in 1977. Altair was announced to the world late in 1974 in the issue of Popular Electronics magazine dated January 1975. Some controversy exists about how active a role the magazine played in launching Altair, but clearly Altair was actually designed and manufactured by the small company Micro Instrumentation and Telemetry Systems (MITS) in Albuquerque, New Mexico, headed by H. Edward “Ed” Roberts. Altair was a kit, costing $397, that required much skill from the builder and did not include sufficient memory or input-output devices to perform any real tasks. The central processing unit was the new Intel 8080 microprocessor chip. Altair came with only 256 bytes of memory, and a notoriously unreliable 4-kilobyte memory expansion board kit cost an additional $264. After a while MITS offered data input and output by means of an audio cassette recorder, but initially only a dedicated amateur was in a practical position to add a keyboard (perhaps a used teletype machine) or punched paper tape reader. Input-output for the original computer was accomplished by switches and lights on the front of the cabinet. Popular Electronics hyped the Altair as if it were a fully developed “minicomputer” and suggested some excessively demanding applications: an autopilot for airplanes or boats, a high-speed input-output device for a mainframe computer, a
brain for a robot, an automatic controller for an air-conditioning system, and a text-to-Braille converter to allow blind people to read ordinary printed matter. Altair rescued MITS from the verge of bankruptcy, but the company could never fully deliver on the promise of the computer and was absorbed by another company in 1977. Hundreds of amateurs built Altairs on the way to careers in the future personal computer industry, its subcomponent interface bus became the widely used S100 standard, and the computer contributed greatly to the revolution in human-computer interaction that occurred during its decade. Notably, the mighty Microsoft corporation began life as a tiny partner of MITS, producing a BASIC interpreter for programming the Altair. Altair was far from being the first hobby computer, however. That honor probably belongs to Edmund C. Berkeley’s Simon relay-based computer produced in 1950 and publicized among hobbyists in the pages of Radio-Electronics magazine. The most widely owned hobby digital computer before Altair was probably Berkeley’s GENIAC (Genius Almost-Automatic Computer), which cost less than twenty dollars in 1955. Lacking vacuum tubes, relays, or transistors, this assembly of Masonite board, rotary switches, lights, and wires instructed students in the rudiments of logic programming (programming the steps of logical deductions). Immediately prior to Altair, two less influential hobby computers were also based on Intel chips: the Scelbi 8H and the Titus Mark-8. The difference is that Altair was expandable and intended to evolve into a full-featured personal computer. The 1970s marked a turning point in the history of hobby electronics, and innovative projects such as Altair could be seen as desperate measures in the attempt to keep the field alive. Today some enthusiasts build electronic equipment from kits or from scratch, just as others build their own harpsichords, but they no longer have the same relationship to the electronics industry that they enjoyed during the middle decades of the twentieth century. Prior to the development of integrated circuits, factories constructed radios, televisions, and audio amplifiers largely by hand, laboriously
“Personal computers are notorious for having a half-life of about two years. In scientific terms, this means that two years after you buy the computer, half of your friends will sneer at you for having an outdated machine.” —Peter H. Lewis
soldering each wire, capacitor, and resistor into place manually. Any of these parts might burn out in use, so repair shops flourished, and companies such as Allied Radio and Lafayette Electronics sold individual parts to hobbyists and to anyone else who was willing to buy. For the novice, these distributors sold kits that provided all the parts needed to build a project, and more advanced amateurs followed instructions in a number of magazines to build projects from parts they bought separately from distributors. In purely financial terms building a stereo system from a kit during the 1960s, as tens of thousands of people did, made little sense, but the result was fully as good as the best ready-made system that could be bought in stores, and in some cases the designs were identical. The introduction of integrated circuits gradually reduced the role of repairpersons, and by the dawn of the twenty-first century much electronic equipment really could not be repaired effectively and was simply replaced when it broke down. Already by the late 1970s the electronics hobby was in decline, and the homebuilt computer craze during that decade was practically a last fling. For a decade after the introduction of Altair, a vibrant software hobbyist subculture prevailed as people manually copied programs from a host of amateur computer magazines, and many people “brewed” their own personally designed word processors and fantasy games. This subculture declined after the introduction of complicated graphical user interface operating systems by Apple and Microsoft, but it revived during the mid-1990s as vast numbers of people created their own websites in the initially simple HTML (hypertext markup language). During its heyday this subculture was a great training ground of personnel
for the electronics and computer industries because amateurs worked with the same technology that professionals worked with. Altair was a watershed personal computer in the sense that amateurs assembled it personally and that it transformed them personally into computer professionals.
William Sims Bainbridge
See also Alto
FURTHER READING
Freiberger, P., & Swaine, M. (1999). Fire in the valley: The making of the personal computer (2nd ed.). New York: McGraw-Hill. Roberts, H. E., & Yates, W. (1975). Altair minicomputer. Popular Electronics, 7(1), 33–38. Roberts, H. E., & Yates, W. (1975). Altair minicomputer. Popular Electronics, 7(2), 56–58. Mims, F. M. (1985, January). The tenth anniversary of the Altair 8800. Computers & Electronics, 23(1), 58–60, 81–82.
ALTO The Alto computer, developed at the Xerox Corporation’s Palo Alto Research Center (Xerox PARC) in the 1970s, was the prototype of the late twentieth-century personal computer. Input was by means of both keyboard and mouse; the display screen integrated text and graphics in a system of windows, and each computer could communicate with others over a local area network (LAN). The Alto was significant for human-computer interaction (HCI) in at least three ways. First, it established a new dominant framework for how humans would interact with computers. Second, it underscored the importance of theory and research in HCI. Third, the failure of Xerox to exploit Alto technology by gaining a dominant position in the personal computer industry is a classic case study of the relationship between innovators and the technology they create. During the late 1960s the Xerox Corporation was aware that it might gradually lose its dominant position in the office copier business, so it sought ways of expanding into computers. In 1969 it paid $920 million to buy a computer company named “Scientific
Data Systems” (SDS), and it established Xerox PARC near Stanford University in the area that would soon be nicknamed “Silicon Valley.” Xerox proclaimed the grand goal of developing the general architecture of information rather than merely producing a number of unconnected, small-scale inventions. The Alto was part of a larger system of software and hardware incorporating such innovations as object-oriented programming, which assembles programs from many separately created, reusable “objects,” the Ethernet LAN, and laser printers. At the time computers were large and expensive, and a common framework for human-computer interaction was time sharing: Several users would log onto a mainframe or minicomputer simultaneously from dumb terminals, and it would juggle the work from all of the users simultaneously. Time sharing was an innovation because it allowed users to interact with the computer in real time; however, because the computer was handling many users it could not devote resources to the HCI experience of each user. In contrast, Alto emphasized the interface between the user and the machine, giving each user his or her own computer. In April 1973 the first test demonstration of an Alto showed how different using it would be from using the text-only computer terminals that people were used to: the demonstration began by painting on the screen a picture of the Cookie Monster from the television program Sesame Street. The Alto’s display employed bitmapping (controlling each pixel on the screen separately) to draw any kind of diagram, picture, or text font, including animation and pulldown menus. This capability was a great leap forward for displaying information to human beings, but it required substantial hardware resources, both in terms of memory size and processing speed, as well as radically new software approaches. During the 1970s the typical computer display consisted of letters, numbers, and common punctuation marks in a single crude font displayed on a black background in one color: white or green or amber. In contrast, the default Alto display was black on white, like printed paper. As originally designed, the screen was 606 pixels wide by 808 pixels high, and each of those 489,648 pixels could be separately controlled. The Xerox PARC researchers developed
systems for managing many font sizes and styles simultaneously and for ensuring that the display screen and a paper document printed from it could look the same. All this performance placed a heavy burden on the computer’s electronics, so an Alto often ran painfully slow and, had it been commercialized, would have cost on the order of $15,000 each. People have described the Alto as a “time machine,” a computer that transported the user into the office of the future, but it might have been too costly or too slow to be a viable personal computer for the average office or home user of the period in which it was developed. Human-computer interaction research of the early twenty-first century sometimes studies users who are living in the future. This means going to great effort to create an innovation, such as a computer system or an environment such as a smart home (a computer-controlled living environment) or a multimedia classroom, that would not be practical outside the laboratory. The innovation then becomes a test bed for developing future systems that will be practical, either because the research itself will overcome some of the technical hurdles or because the inexorable progress in microelectronics will bring the costs down substantially in just a few years. Alto was a remarkable case study in HCI with respect to not only its potential users but also its creators. For example, the object-oriented programming pioneered at Xerox PARC on the Alto and other projects changed significantly the work of programmers. Such programming facilitated the separation between two professions: software engineering (which designs the large-scale structure and functioning of software) and programming (which writes the detailed code), and it increased the feasibility of dividing the work of creating complex software among many individuals and teams. People often have presented Alto as a case study of how short-sighted management of a major corporation can fail to develop valuable new technology. On the other hand, Alto may have been both too premature and too ambitious. When Xerox finally marketed the Alto-based Star in 1981, it was a system of many small but expensive computers, connected to each other and to shared resources such as laser printers—a model of distributed personal
computing. In contrast, the model that flourished during the 1980s was autonomous personal computing based on stand-alone computers such as the Apple II and original IBM PC, with networking developing fully only later. The slow speed and limited capacity of the Alto-like Lisa and original 128-kilobyte Macintosh computers introduced by Apple in 1983 and 1984 suggest that Alto would really not have been commercially viable until 1985, a dozen years after it was first built. One lesson that we can draw from Alto’s history is that corporate-funded research can play a decisive role in technological progress but that it cannot effectively look very far into the future. That role may better be played by university-based laboratories that get their primary funding from government agencies free from the need to show immediate profits. On the other hand, Xerox PARC was so spectacularly innovative that we can draw the opposite lesson—that revolutions in human-computer interaction can indeed occur inside the research laboratories of huge corporations, given the right personnel and historical circumstances. William Sims Bainbridge See also Altair; Graphical User Interface FURTHER READING Hiltzik, M. (1999). Dealers of lightning: Xerox PARC and the dawn of the computer age. New York: HarperBusiness. Lavendel, G. (1980). A decade of research: Xerox Palo Alto Research Center. New York: Bowker. Smith, D. C., & Alexander, R. C. (1988). Fumbling the future: How Xerox invented, then ignored the first personal computer. New York: William Morrow. Waldrop, M. M. (2001). The dream machine: J. C. R. Licklider and the revolution that made computing personal. New York: Viking.
ANIMATION Animation, the creation of simulated images in motion, is commonly linked with the creation of cartoons, where drawn characters are brought into play
to entertain. More recently, it has also become a significant addition to the rich multimedia material that is found in modern software applications such as the Web, computer games, and electronic encyclopedias.
Brief History Animations are formed by showing a series of still pictures rapidly (at least twelve images per second) so that the eye is tricked into viewing them as a continuous motion. The sequence of still images is perceived as motion because of two phenomena, one optical (persistence of vision) and one psychological (phi principle). Persistence of vision can be explained as the predisposition of the brain and eye to keep on seeing a picture even after it has moved out of the field of vision. In 1824 British scientist, physician, and lexicographer Peter Mark Roget (1779–1869) explained this phenomenon as the ability of the retina to retain the image of an object for 1/20 to 1/5 second after its removal; it was demonstrated two years later using a thaumatrope, which is a disk with images drawn on both sides that, when twirled rapidly, gives the illusion that the two images are combined together to form one image. The other principle is the phi phenomenon or stroboscopic effect. It was first studied by German psychologist Max Wertheimer (1880–1943) and German-American psycho-physiologist Hugo Munsterberg (1863–1916) during the period from 1912 to 1916. They demonstrated that film or animation watchers form a mental connection that completes the action frame-to-frame, allowing them to perceive a sequence of motionless images as an uninterrupted movement. This mental bridging means that even if there are small discontinuities in the series of frames, the brain is able to interpolate the missing details and thus allow a viewer to see a steady movement. In the nineteenth century, many animation devices, such as the zoetrope invented by William George Horner (1786–1837), the phenakistiscope (1832), the praxinoscope (1877), the flipbook, and the thaumatrope were direct applications of the persistence of vision. For example, the zoetrope is a cylindrical device through which one can see an image in action. The rotating barrel has evenly spaced peepholes on the out-
side and a cycle of still images on the inside that show an image in graduating stages of motion. Whenever the barrel spins rapidly, the dark frames of the still pictures disappear and the picture appears to move. Another, even simpler example is the flipbook, a tablet of paper with a single drawing on each page. When the book is flicked through rapidly, the drawings appear to move. Once the basic principles of animation were discovered, a large number of applications and techniques emerged. The invention of these simple animation devices had a significant influence on the development of films, cartoons, computer-generated motion graphics and pictures, and more recently, of multimedia.
Walt Disney and Traditional Animation Techniques During the early to mid-1930s, animators at Walt Disney Studios created the “twelve animation principles” that became the basics of hand-drawn cartoon character animation. While some of these principles are limited to the hand-drawn cartoon animation genre, many can be adapted for computer animation production techniques. Here are the twelve principles:
1. Squash and stretch—Use shape distortion to emphasize movement.
2. Anticipation—Apply reverse movement to prepare for and bring out a forward movement.
3. Staging—Use the camera viewpoint that best shows an action.
4. Straight-ahead vs. pose-to-pose action—Apply the right procedure.
5. Follow-through and overlapping action—Avoid stopping movement abruptly.
6. Slow-in and slow-out—Allow smooth starts and stops by spacing frames appropriately.
7. Arcs—Allow curved motion in paths of action.
8. Secondary actions—Animate secondary actions to bring out even more life.
9. Timing—Apply time relations within actions to create the illusion of movement.
10. Exaggeration—Apply caricature to actions and timing.
11. Solid drawing—Learn and use good drawing techniques.
12. Appeal—Create and animate appealing characters.
Traditional animation techniques use cel animation in which images are painted on clear acetate sheets called cels. Animation cels commonly use a layering technique to produce a particular animation frame. The frame background layer is drawn in a separate cel, and there is a cel for each character or object that moves separately over the background. Layering enables the animator to isolate and redraw only the parts of the image that change between consecutive frames. There is usually a chief animator who draws the key-frames, the pivotal moments in the series of images, while in-between frames are drawn by others, the in-betweeners. Many of the processes and lingo of traditional cel-based animation, such as layering, key-frames, and tweening (generating intermediate frames between two images to give the appearance that the first image evolves smoothly into the next), have carried over into two-dimensional and three-dimensional computer animation.
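The tweening idea carries over directly to computer animation, where a program can generate the in-between frames by interpolating a property (such as a position or an opacity) between two key-frames. The following minimal sketch in Python is illustrative only; the key-frame format and the example values are assumptions made for this article, not the interface of any particular animation package.

```python
# Minimal key-frame tweening sketch: linearly interpolate a property
# between key-frames to generate the in-between frames.

def lerp(a, b, t):
    """Linear interpolation between values a and b for t in [0, 1]."""
    return a + (b - a) * t

def tween(keyframes, frame):
    """Return the interpolated value at an arbitrary frame number.

    keyframes: list of (frame_number, value) pairs sorted by frame_number.
    """
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return lerp(v0, v1, t)

# A ball's x-position keyed at frames 0, 12, and 24 by the chief animator;
# the computer plays the in-betweener for every other frame.
x_keys = [(0, 0.0), (12, 100.0), (24, 60.0)]
for frame in range(25):
    print(frame, round(tween(x_keys, frame), 1))
```

Production systems interpolate many properties at once and usually ease in and out of key-frames rather than interpolating linearly, in keeping with the slow-in and slow-out principle above.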
Two-Dimensional Computer Animation In recent years, computer programs have been developed to automate the drawing of individual frames, the process of tweening frames between keyframes, and also the animation of a series of frames. Two animation techniques commonly used in two-dimensional (2D) computer animation are frame-based and sprite-based animation. Frame-based animation is the simplest type of animation. It is based on the same principle as the flipbook: a collection of graphic files, each containing a single image, is displayed in sequence. Here again, to produce the illusion of motion, graphic images, with each image slightly different from the one before it in the sequence, are displayed at a high frame-rate (the number of frames of an animation displayed every second). Sprite-based animation uses a technique that is similar to the traditional animation technique in which an object is animated on top of a static graphic background. A sprite is any element of an animation that moves independently, such as a bouncing ball
or a running character. In sprite-based animation, a single image or sequence of images can be attached to a sprite. The sprite can animate in one place or move along a path. Many techniques—for example, tiling, scrolling, and parallax—have been developed to process the background layer more efficiently and to animate it as well. Sometimes sprite-based animation is called path-based animation. In path-based animation, a sprite is affixed to a curve drawn through the positions of the sprite in consecutive frames, called a motion path. The sprite follows this curve during the course of the animation. The sprite can be a single rigid bitmap (an array of pixels, in a data file or structure, that corresponds bit for bit with an image) that does not change or a series of bitmaps that form an animation loop. The animation techniques used by computers can be frame-by-frame, where each frame is individually created, or real-time, where the animator produces the key-frames and the computer generates the frames in between when the animation is displayed at run time. Two-dimensional computer animation techniques are widely used in modern software and can be seen in arcade games, on the Web, and even in word processors. The software used to design two-dimensional animations consists of animation studios that allow animators to draw and paint cels, provide key-frames with moving backgrounds, use multiple layers for layering, support linking to fast video disk recorders for storage and playback, and allow scans to be imported directly. Examples of this software include Adobe Photoshop (to create animated GIFs), Macromedia Director (a multimedia authoring tool that includes sophisticated functions for animation), and Macromedia Flash (a vector-based authoring tool to produce real-time animation for the Web). Even some programming languages such as Java are used to produce good-quality animation (frame-by-frame and real-time) for the Web.
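To make the sprite and motion-path ideas concrete, the sketch below animates a hypothetical sprite along a path at a fixed frame rate. It is a minimal Python illustration written for this article, not the interface of any of the packages named above; the Sprite class, the bitmap names, and the bouncing-ball path are all assumptions made for the example.

```python
import math

# Sketch of path-based sprite animation: a sprite is affixed to a motion
# path (here a parametric curve) and advanced once per displayed frame.

class Sprite:
    def __init__(self, frames):
        self.frames = frames      # animation loop: a list of bitmap names
        self.index = 0
        self.x, self.y = 0.0, 0.0

    def advance(self, path, t):
        self.x, self.y = path(t)                          # follow the motion path
        self.index = (self.index + 1) % len(self.frames)  # cycle the animation loop

def bounce_path(t):
    """Motion path for a bouncing ball: constant horizontal speed,
    absolute value of a sine wave for the vertical bounce."""
    return 200.0 * t, 50.0 * abs(math.sin(4.0 * math.pi * t))

FRAME_RATE = 12                      # frames per second (the minimum for apparent motion)
ball = Sprite(["ball_a.png", "ball_b.png"])
for frame in range(FRAME_RATE * 2):  # two seconds of animation
    t = frame / (FRAME_RATE * 2)
    ball.advance(bounce_path, t)
    print(f"frame {frame:2d}: draw {ball.frames[ball.index]} "
          f"at ({ball.x:5.1f}, {ball.y:5.1f}) over the background")
```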
Three-Dimensional Computer Animation Three-dimensional computer animations are based on a three-dimensional (3D) coordinate system, which is a mathematical system for describing
three-dimensional space. Space is measured along three coordinates, the X direction, Y direction, and Z direction. These coordinates correspond to the width, length, and depth of objects or space. The X, Y, Z coordinates of points in space are used to define polygons, and collections of polygons make up the definition of three-dimensional objects. The process of 3D animation involves at least the following stages: modeling, rendering, and animation. Modeling is the process of creating 3D objects from simple 2D objects by lofting (the process of transforming a two-dimensional cross-section object into a complete three-dimensional object) or from other simple 3D objects called “primitives” (spheres, cubes, cylinders, and so on). Primitives can be combined using a variety of Boolean operations (union, subtraction, intersection, and so on). They can also be distorted in different ways. The resulting model is called a mesh, which is a collection of faces that represent an object. Rendering is used to create an image from data that represents objects as meshes, and to apply colors, shading, textures, and lights to them. In its simplest form, the process of three-dimensional computer animation is very similar to the two-dimensional process of key-frames and tweening. The main differences are that three-dimensional animations are always vector-based and real-time. Spline-Based Animation Motion paths are more believable if they are curved, so animation programs enable designers to create spline-based motion paths. (Splines are algebraic representations of a family of curves.) To define spline-based curves, a series of control points is defined and then the spline is passed through the control points. The control points define the beginning and end points of different parts of the curve. Each point has control handles that enable designers to change the shape of the curve between two control points. The curves and the control points are defined in 3D space. Most computer animation systems enable users to change the rate of motion along a path. Some systems also provide very sophisticated control of the velocity of an object along paths.
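One widely used family of splines for motion paths is the Catmull-Rom spline, which passes through its control points. The sketch below evaluates one segment of such a curve in Python; it is a simplified two-dimensional illustration with invented control points, and actual animation packages may use other spline families and work in three dimensions.

```python
# Catmull-Rom spline sketch: the curve passes through p1 and p2, with p0 and
# p3 shaping the tangents, so a chain of control points yields a smooth path.

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one spline segment at parameter t in [0, 1]; points are (x, y)."""
    def coord(a, b, c, d):
        return 0.5 * (2 * b
                      + (-a + c) * t
                      + (2 * a - 5 * b + 4 * c - d) * t ** 2
                      + (-a + 3 * b - 3 * c + d) * t ** 3)
    return (coord(p0[0], p1[0], p2[0], p3[0]),
            coord(p0[1], p1[1], p2[1], p3[1]))

control_points = [(0, 0), (10, 20), (30, 25), (50, 5)]  # an invented motion path
for step in range(6):                                   # sample the middle segment
    t = step / 5
    x, y = catmull_rom(*control_points, t)
    print(f"t={t:.1f}: ({x:5.1f}, {y:5.1f})")
```

Varying how t advances from frame to frame is what gives the animator control over the rate of motion along the path.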
Skeletal Structure Animation Skeletal structures are bone-based. Widely used to control three-dimensional creatures, they appear in practically all modern three-dimensional modeling software studios. They enable the artist to preset and control the rotation points of a three-dimensional creature, facilitating its animation. The animator can then model a geometric skin (representing how the creature would appear) and link it to the bone structure. Skeletal-structure software with powerful graphical interfaces provides rich environments in which artists can control the complex algorithms involved in creating animated three-dimensional creatures (human, animal, or imaginary). The availability of a skeletal animation environment characteristically brings another advantage—the exploitation of inverse kinematics (IK) to bring a character to life. Inverse Kinematics IK is a common technique for positioning multilinked objects, such as virtual creatures. When using an animation system capable of IK, a designer can position a hand in space by grabbing the hand and leading it to a position in that space. The connected joints rotate while remaining attached so that, for example, the body parts all stay together. IK provides a goal-directed method for animating a 3D creature. It allows the animator to control a three-dimensional creature’s limbs by treating them as kinematic chains. The points of control are attached to the ends of these chains and provide a single handle that can be used to control a complete chain. IK enables the animator to design a skeleton system that can also be controlled from data sets generated by a motion capture application. Motion Capture Motion capture is the digital recording of a creature’s movement for immediate or postponed analysis and playback. Motion capture for computer character animation involves the mapping of human action onto the motion of a computer character. The digital data recorded can be as simple as the position and orientation of the body in space, or as intricate as the deformations of facial expressions.
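For the simple but common case of a two-bone chain (an upper and a lower arm, say), the IK problem described above has a closed-form solution based on the law of cosines. The sketch below is a minimal two-dimensional Python illustration of that standard textbook solution rather than the solver of any particular animation package; the bone lengths and the target point are invented for the example.

```python
import math

# Two-bone inverse kinematics sketch: given a target for the end of the
# chain (the "hand"), find the shoulder and elbow rotations that reach it.

def two_bone_ik(target_x, target_y, upper_len, lower_len):
    dist = math.hypot(target_x, target_y)
    # Clamp to the reachable range so an out-of-reach target simply stretches
    # (or folds) the chain instead of producing a math error.
    dist = max(abs(upper_len - lower_len), min(upper_len + lower_len, dist))
    cos_elbow = (dist**2 - upper_len**2 - lower_len**2) / (2 * upper_len * lower_len)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder = (math.atan2(target_y, target_x)
                - math.atan2(lower_len * math.sin(elbow),
                             upper_len + lower_len * math.cos(elbow)))
    return shoulder, elbow

# Lead the "hand" to a point in space; the joint angles follow.
shoulder, elbow = two_bone_ik(1.2, 0.8, upper_len=1.0, lower_len=1.0)
print(f"shoulder = {math.degrees(shoulder):.1f} deg, "
      f"elbow = {math.degrees(elbow):.1f} deg")
```

Longer chains generally require iterative solvers, which is where the goal-directed control handles mentioned above come in.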
Advances in Three-Dimensional Animation With the support of powerful computers, three-dimensional animation allows the production and rendering of a photo-realistic animated virtual world. Three-dimensional scenes are complex virtual environments composed of many elements and effects, such as cameras, lights, textures, shading, and environment effects, and all these elements can be animated. Although cel animation is traditionally two-dimensional, advances in three-dimensional rendering techniques and in camera animation have made it possible to apply three-dimensional techniques to make two-dimensional painted images appear visually three-dimensional. The 3D animation techniques described in this section are supported by modern 3D animation studios, software programs such as Maya (alias|wavefront), Softimage (Softimage), 3D Studio Max (Discreet), or Rhino3D (Robert McNeel & Associates). Examples of environment effects include rain, fire, fog, and dying stars. A technique widely used in real-time applications involving an environmental effect is called a particle system. A particle system is a method of graphically producing the appearance of amorphous substances, such as clouds, smoke, fire, or sparkles. The substance is described as a collection of particles that can be manipulated dynamically for animation effects. Some even more recent techniques include physics-based behavior such as realistic animation of cloth, hair, or grass affected by the wind.
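A particle system of the kind just described can be sketched in a few lines: each particle carries a position, a velocity, and a remaining lifetime, and the emitter spawns, updates, and retires particles every frame. The following Python sketch is illustrative only, not the design of any specific engine; the emission rate, gravity value, and other constants are assumptions made for the example.

```python
import random

# Particle-system sketch: an amorphous effect (sparks, smoke) represented as
# many short-lived particles that are spawned, moved, and retired each frame.

GRAVITY = -9.8      # pulls spark particles back down
EMIT_PER_FRAME = 5  # new particles spawned each frame
DT = 1.0 / 30.0     # simulation step for a 30-frames-per-second animation

class Particle:
    def __init__(self):
        self.x, self.y = 0.0, 0.0             # emitter at the origin
        self.vx = random.uniform(-1.0, 1.0)   # random horizontal spread
        self.vy = random.uniform(2.0, 5.0)    # initial upward burst
        self.life = random.uniform(0.5, 1.5)  # seconds until the particle is retired

def step(particles):
    for p in particles:
        p.vy += GRAVITY * DT                  # apply gravity
        p.x += p.vx * DT
        p.y += p.vy * DT
        p.life -= DT
    particles[:] = [p for p in particles if p.life > 0]      # retire dead particles
    particles.extend(Particle() for _ in range(EMIT_PER_FRAME))

particles = []
for frame in range(90):                       # three seconds of the effect
    step(particles)
print(f"{len(particles)} particles alive after 3 seconds")
```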
Endless Possibilities Animation has become a ubiquitous component of human-computer interfaces. It has evolved from prehistoric paintings in the Altamira caves to realistic virtual worlds in sophisticated multimedia computers. The technologies supporting animation are still emerging and will soon support even more complex worlds, more realistic character animation, considerably easier 3D animation development, better-quality animations on the Web, and better interactions with virtual reality interfaces. The current
work in animation lies in physics-based modeling, in which objects or natural phenomena are animated according to their real physical properties, in real-time motion capture, and in goal-oriented animation. Considering the numerous applications of animation, from multimedia to archeology and chemistry, the future possibilities seem endless. Abdennour El Rhalibi and Yuanyuan Shen See also Data Visualization; Graphic Display; Graphical User Interface FURTHER READINGS CoCo, D. (1995). Real-time 3D games take off. Computer Graphics World, 8(12), 22–33. Corra, W. T., Jensen, R. J., Thayer, C. E., & Finkelstein, A. (1998). Texture mapping for cel animation. In Proceedings of SIGGRAPH ’98, Computer Graphics Proceedings, Annual Conference Series (pp. 435–446). Kerlow, I. V. (2000). The art of 3-D computer animation and imaging (2nd ed.). New York: Wiley. Lasseter, J. (1987). Principles of traditional animation applied to 3D computer animation. SIGGRAPH 87 (pp. 35–44). Maltin, L. (1987). Of mice and magic—A history of American animated cartoons. New York: Penguin Books. O’Rourke, M. (1995). Principles of three-dimensional computer animation. New York: W. W. Norton. Parent, R. (2001). Computer animation: Algorithms and techniques. San Francisco: Morgan-Kaufmann. Potter, C. D. (1995). Anatomy of an animation. Computer Graphics World, 18(3), 36–43. Solomon, C. (1994). The history of animation: Enchanted drawings. New York: Wings Books. Thomas, F., & Johnston, O. (1981). The illusion of life. New York: Abbeville Press. Watt, A., & Policarpo, F. (2001). 3D games—real-time rendering and software technology. New York: Addison-Wesley. Watt, A. H., & Watt, M. (1992). Advanced animation and rendering. New York: Addison-Wesley. Williams, R. (2001). The animator’s survival kit. New York: Faber & Faber.
ANTHROPOLOGY AND HCI As a social science that brings together social anthropology, linguistics, archaeology, and human biology, anthropology clearly has a major contribution to make to the study of human-computer interaction
(HCI). However, bringing that contribution into focus is at times a challenge, not only because of the extreme interdisciplinarity but also because of collaborations between anthropologists and computer scientists and the sometimes-blurred boundaries between anthropology and related disciplines, including sociology and psychology. Despite these challenges, anthropology has created distinctive methods and a distinctive epistemology, and has offered new insights for understanding human-computer interaction. Anthropology also poses three profound questions.
Methods Anthropology’s development of ethnographic methods is a notable contribution to research in HCI. More than simple naturalistic observation, ethnography is a structured process informed by theoretical models through which researchers attempt to elucidate the coherence of a context. For example, anthropologist Bonnie Nardi, in her study of end-user computing used concepts of formalisms and communication to interpret how users developed their own programs; anthropologist Lucy Suchman used a mechanistic concept of cognition as a foil to understand how users interacted with an expert-system-based help facility embedded in a copying machine. In both these cases researchers combined intensive naturalistic observation with conceptual insights to develop new HCI models. A frequently employed variation on ethnographic methods is called ethnomethodology. As originally developed by sociologist Harold Garfinkel, ethnomethodology stipulates that individuals make sense out of a context in an ad hoc, almost indeterminate manner. In place of social order, the actors in a given context are synthesizing what appears to be order, accepting or rejecting information as it fits with their synthesis. The mutual intelligibility of an interaction is thus an ongoing achievement between the actors, a result rather than a starting point. Thus, two users can construct two quite different meanings out of similar interactions with computers, depending on the experiences they bring to the interaction. This suggests some obvious limitations on the abilities of computers to constrain or reproduce human actions.
A third concept, actually a method, employed by anthropologists in the study of HCI is actor-network theory. This theory views artifacts and social roles as coevolving nodes in a common network. Insofar as each node encodes information about the entire network (for example, in any country, electrical appliances are tailored to the specific power system of the country and the expectations of the users) and is capable of state changes based on network inputs, both artifacts and social roles can be considered to have agency within the network. This concept, originally developed by the sociologist Michel Callon, in his study of the French government’s involvement in technological projects, and elaborated by the sociologist John Law in a study of Portuguese sailing vessels in the sixteenth century, is very pertinent to rapidly changing technologies such as computers. Indeed, observing the shifting topology of the Internet and Internet computing makes it clear that user roles are anticipated and complemented by machine behavior (for instance, collaborative filtering), and machine states enable or constrain users’ agency within the network (for example, the structures of search engines). Although silicon and carbon units are distinct, for now, the image of the cyborg (cybernetic organism), and the emergence of integrated biological/computational systems, suggests other possibilities. This hints at the final, and perhaps most important anthropological contribution to HCI, the evolutionary perspective. All branches of anthropology have been concerned with the evolution of human societies, languages, and even genotypes. Although there is room for debate over the telos or chaos of evolutionary processes, understanding humans and their artifacts as goal-seeking objects who learn is fundamental to any anthropological viewpoint. Using the archaeological record and anthropological knowledge of societies with simpler toolkits, the anthropologist David Hakken has questioned the extent to which the widespread use of computers in society justifies being called a “revolution”; he concludes that due to their failure to transform the character of labor, computers are “just one more technology” in the implementation of an automated, massified Fordist model of production—a model inspired by Henry Ford in which large quantities of products are produced through the repetitive motions of unskilled workers.
Epistemology What distinguishes anthropology from other disciplines such as psychology and sociology that use similar methods is in many ways a matter of epistemology— that is, the stance it takes toward the subject matter. Central to this stance is the orthogonal view, that is, the ability to analyze a situation from a fresh and original yet plausible perspective. It is the orthogonal view that enables anthropologist Constance Perin to see office automation as a panopticon, that suggests to linguistic anthropologist Charlotte Linde that failed communication can improve performance, or that led Edwin Hutchins, a cognitive anthropologist, to understand a cockpit as a cognitive device. Orthogonal viewpoints originate from the experience of fieldwork, or rather, field immersion, preferably in a remote setting, which is the rite
of passage for most anthropologists. When researchers have lived for an extended period of time in an unfamiliar village, cut off from their normal social moorings, when cultural disorientation becomes embedded in their daily routine, they acquire a profound conviction that all social forms are conventional, that otherness is not alien, and that belonging and familiarity are rare and fragile flowers. It is this experience and this conviction more than any methodological or conceptual apparatus that defines anthropology and that enables the orthogonal view. It is this implicitly critical stance that has constrained anthropology’s contribution to the study of automation human factors. “Human factors” is an engineering discipline using engineering methods of analytic decomposition to solve engineering
A Personal Story—Eastern vs. Western Cultural Values My understanding of human communication using mediated technologies is primarily based on cultural assumptions. Cultural values could influence the way a human chooses its medium of communication. On the other hand, with the advancement of computer-mediated communication (CMC) technologies (e.g., e-mail, e-commerce sites, weblogs, bulletin boards, newsgroups) people could also change their communication patterns to suit the different forms of a medium. Whichever way, apparently, people will not adopt CMC unless and until it fits with their cultural values. Based on my interviews with a number of informants from different cultural backgrounds, I have observed some disparate yet interesting views on communication patterns and preferences, i.e., why and when people use CMC. Let me briefly illustrate one case of contrasting communication preferences and patterns. When I asked the informants from Eastern cultures why they would use CMC, one of the key responses was that they can express themselves better over mediated technologies than to voice their opinions in face-to-face. Public self-expression is avoided due to the value of “saving face.” Also, using asynchronous medium such as e-mail, does not require spontaneous response. People could first think, reflect, and then express. On the contrary, the informants from Western cultures felt that using e-mail is best for complex and detailed information, as they require very explicit forms of instructions. Additionally, people send messages via CMC in order to get quick response so that tasks can get completed. Also, based on a written format, the text becomes an evidence or “proof of say” for a job accomplished. Getting a job or assignment done is perceived as a priority and building a relationship is thus secondary. Cultural values could present a new lens to understand why and how certain a communication medium offers different functions or purposes. What is more important is the uniqueness of human beings with a set of cultural assumptions and values, and not the technological features. Anthropologist Edward T. Hall postulates that “communication is culture and culture is communication.” Hence, organizations need to understand fully the myriad cultural preferences before making a substantial investment in CMC technology. Without such understanding, technology will simply be another gadget that gets rusty and dusty! Norhayati Zakaria
Digital Technology Helps Preserve Tribal Language
(ANS)—The American Indian language of Comanche was once taught through conversation—a vocabulary passed on and polished as it moved from one generation to the next. But as fluency among Comanches declines, the tribe has turned to cutting-edge technology to preserve this indigenous language. By next winter, members hope to produce an interactive CD-ROM that will create a digital record of the language and help tribe members learn it. “You can’t say you’re Comanche without knowing your own language. That’s the way I feel,” said Billie Kreger of Cache, Okla., vice president of the Comanche Language and Cultural Preservation Committee. Kreger, 47, didn’t learn much Comanche as a child but has begun studying it in the past few years. Of the 10,000 Comanches that still remain in the United States, roughly 200 are fluent, according to Karen Buller, president and chief executive officer of the Santa Fe, N.M.-based organization that is paying for the CDROM project, the first of its kind in the United States. Tribe members are anxious to record the language while the fluent speakers, who are in their 70s and 80s, are still living, she said. Buller’s group, the National Indian Telecommunications Institute, is paying for the project with $15,000 in grant money from the Fund for the Four Directions. The CD-ROM will teach about 1,500 vocabulary words. Students will see Comanche elders pronouncing the words and hear the words used in conversations. Buller’s group is recording conversations on videotape. Other indigenous language revitalization efforts are under way around the country, too, including language immersion programs in Alaskan and Hawaiian schools. The institute provided teacher training for those projects. “All the tribes are saying, ‘We’ve got to save the language,’” said Leonard Bruguier, who heads the Institute
of American Indian Studies at the University of South Dakota in Vermillion. Students at that university, located in the midst of a large Sioux community, are increasingly interested in learning indigenous languages, he said. Under a federal policy of discouraging use of American Indian languages by allowing only English to be spoken by American Indian children at schools run by the Bureau of Indian Affairs, Comanche began faltering about 50 years ago. Without preservation efforts, researchers predict that 90 percent of the world’s languages, including those of the 554 American Indian tribes, will disappear in the next century, said Peg Thomas, executive director of The Grotto Foundation, a nonprofit organization in St. Paul, Minn., that provides funding to American Indian organizations. Each year about five languages fall into “extinction,” meaning that they have no youthful speakers, she said. According to some estimates, between 300 and 400 American Indian languages have become extinct since European settlers first arrived in North America. The point of preserving the languages is partly to maintain a connection to the past and learn the history of a culture, said Buller. Students of the Comanche language discover, for instance, that the words for food preparation are based on the root word for “meat”—because meat was a key part of the Comanche diet. She and others say that American Indian children who learn indigenous languages in addition to English appear to perform better in school. But language programs are targeting adults, too. Kreger, of the Comanche Language and Cultural Preservation Committee, says she is looking forward to using the CD-ROM for her own language studies. “I can hardly wait,” she said. Nicole Cusano Source: Digital technology helps preserve tribal language. American News Service, June 15, 2000.
problems—in other words, the improved performance of artifacts according to some preestablished set of specifications. Anthropology, by contrast, would begin by questioning the specifications, adopting a holistic point of view toward the entire project. Holism is the intellectual strategy of grasping the entire configuration rather than breaking it down into separate elements. From an anthropological viewpoint, specifications are not a given, but open to interrogation. A holistic viewpoint requires that the researcher adopt multiple disciplinary tools, including (but certainly not limited to) direct observation, interviewing, conversation analysis, engineering description, survey research, documentary study, and focus groups. For many, anthropology is highly interdisciplinary, assembling research tools as the contextualized problem requires. How far the anthropologist is permitted to go with this approach is one of the dilemmas of anthropologists working in software design. The emerging fields of “design ethnography” and “user-centered design” have employed ethnographers to better understand users’ requirements, and to elicit expert knowledge in the construction of expert systems. However, these efforts are at times compromised by a substantial disconnect between the anthropologists’ understanding of requirements and knowledge, and the engineers’ understanding of them. Anthropologists see human needs (that is, requirements) as emergent rather than given, and knowledge (even expert knowledge) as embedded in a culturally contingent body of assumptions called “common sense.” Many systems designers, as the late medical anthropologist Diana Forsythe put it, view common sense as unproblematic and universal. This assumption and others will be discussed below.
Insights The most important anthropological insight to HCI is the emphasis on context for understanding human behavior, including human interaction with cybernetic devices. The human organism is unique in its ability to integrate information from a variety of sensory inputs and to formulate an infinite array of potential behavioral responses to these inputs. These arrays of inputs and responses constitute—
that is, construct—the context of information, a structure of irreducible complexity. The context is far more than simply a compilation of information. Computers and other information technologies, by contrast, focus on the processing of information, stripping information of its contextual properties and thus of the attributes that humans use to turn information into (warranted, usable, and meaningful) knowledge. John Seely Brown, the former director of Xerox Palo Alto Research Center, and researcher Paul Duguid, for example, describe the importance of context for using information. “The news,” for instance, is not simply unfiltered information from a distant place; it is information that has been selected, aggregated, evaluated, interpreted, and warranted by human journalists, trained in face-to-face classrooms or mentored by over-the-shoulder coaches. Physicality is an important component of these relationships: Although people can learn technical skills online, they learn integrity and morality only interpersonally. Making a convincing case for the criticality of context for human users, Brown and Duguid describe six of the context-stripping mechanisms that are supposedly inherent in information technologies: demassification, decentralization, denationalization, despacialization, disintermediation, and disaggregation. “These are said to represent forces that, unleashed by information technology, will break society down into its fundamental constituents, primarily individuals and information” (Brown and Duguid 2000, 22). The sum of their argument is that such “6D thinking” is both unrealized and unrealizable. Information technology does not so much eliminate the social context of information, for this is either pointless or impossible, as it displaces and decomposes that context, thus posing new difficulties for users who need to turn information into knowledge. Contexts can be high (rich, detailed, and full of social cues) or low (impoverished and monochromatic), they can be familiar or unfamiliar, and they can include information channels that are broadband (a face-to-face conversation) or narrowband (reading tea leaves, for example). From a human perspective, all computer interaction, even the most multimedia-rich, is narrowband: Sitting where I am,
my computer screen and keyboard occupy no more than 25 percent of my field of vision, until I turn my head. Looking around, the percentage shrinks to under 5 percent. The other 95 percent is filled with other work and information storage devices (bookshelves and filing cabinets), task aids (charts on the wall), and reminders of relationships: a social context. As a low-context device, the computer must be supplemented by these other more social artifacts if it is to have human usefulness—that is, if it is to be used for knowledge work rather than mere information processing. Applying the concept of context to a specific technological problem, the design of intelligent systems, Suchman developed a concept of situated action as an alternative explanation for the rationality of human action. In place of seeing activity as the execution of a plan (or program), or inversely, seeing a plan as a retrospective rationalization of activity, Suchman’s concept of situated action sees plans as only one of several resources for making sense out of the ongoing flow of activity. Human action, or more accurately interaction (for all action is by definition social, even if only one actor is physically present), is an ongoing flow of message input and output. Traditionally social studies have assumed that actors have a scheme or mental program which they are enacting: a plan. In contrast to this, Suchman demonstrates that the rationality of an action is an ongoing construction among those involved in the action. The default state of this rationality is a transparent spontaneity in which the participants act rather than think. Only when the ongoing flow breaks down does it become necessary to construct a representation (that is, a plan or image) of what is happening. (Breakdowns, while frequent, are usually easily repaired.) Language, due to its ability to classify, is a powerful resource for constructing such representations, although it is only one of several channels that humans use for communication. Using language, the participants in an action understand what they are doing. Rationality (“understanding what they are doing”) is the achievement rather than the configuration state of interaction. The implications of this for constructing intelligent devices (such as expert systems) are profound. In order for an intelligent device to reproduce intelligi-
ble human action, according to Suchman, it must not attempt to anticipate every user state and response (for it cannot). Alternatively, a strategy of “real-time user modeling” that incorporates (a) continually updated models of user behavior, (b) detection (and adaptation to) diagnostic inconsistencies, (c) sensitivity to local conditions, and (d) learning from fault states (such as false alarms and misleading instructions) suggests a better approximation of situated action than preconceived “user models.” Suchman’s findings are based on the concept of “distributed cognition” originally developed by Edwin Hutchins. Instead of understanding cognition as information processing (searching, aggregating, parsing, and so on), Hutchins saw mental activity as contextually emergent, using contextual resources (including language and artifacts) as part of an interactive process. These insights are derived from efforts to use anthropological methods in the development of expert systems and other artificial intelligence devices. Expert systems hold out the hope that in classroom instruction, in routine bureaucratic problem solving, in medical diagnosis, and in other fields, certain low-level mental tasks could be accomplished by computers, in much the same manner as repetitive manual tasks have been automated. Building these systems requires a process of “knowledge acquisition” that is viewed as linear and unproblematic. An alternative view, suggested by anthropologist Jean Lave and computer scientist Etienne Wenger, is that learning is embedded in (and a byproduct of) social relationships and identity formation, and that people learn by becoming a member of a “community of practice.” The concept of “community of practice” is further developed by Wenger to describe how experts acquire, share, and use their expertise. Communities of practice are groups that share relationships, meaning, and identity around the performance of some set of tasks, whether processing insurance claims or delivering emergency medicine. The knowledge that they share is embedded in these relationships and identities, not something that can be abstracted and stored in a database (or “knowledge base”). Anthropologist Marietta Baba has applied these concepts along with the concept of “sociotechnical
systems” developed by the Tavistock Institute to examine the response of work groups to the introduction of office automation and engineering systems. At major corporations she found that efforts to introduce new automated systems frequently failed because they were disruptive of the work processes, social relationships, identities, and values of the work group, considered as a community of practice. Understanding cognitive activity as distributed among multiple agents is closely related to the issue of man/machine boundaries, an issue clearly of interest to anthropologists. “Cyborg anthropology” has been an ongoing professional interest at least since the 1991 publication of anthropologist Donna Haraway’s Simians, Cyborgs, and Women. Although most cyborg anthropology has focused on medical technology (such as imaging systems and artificial organs) rather than on computational technology, the basic concept—of human bodies and lives becoming increasingly embedded within automated information (control) circuits—will have increasing relevance for understanding the adaptation of humans to advanced information technology: As more and more human faculties, such as memory, skilled manipulation, and interpersonal sensitivity, are minimalized, disaggregated, and shifted away from the individual organism to automated devices, the dependence of carbon-based humans on their artifactual prostheses will increase. Communities also form around technologies. Technology writer Howard Rheingold has described participation in a San Francisco-based Usenet community as a form of community building. Hakken describes the influence of class on the experiences of users with computing in Sheffield, England. Sociologist Sherry Turkle describes the identity experimentation conducted by users of multiuser domains. Anthropologist Jon Anderson has examined how Middle Eastern countries have used and adapted the Internet with unique methods for unique social goals. These include the maintenance of diaspora relationships with countrymen scattered around the globe. Online communities quickly evolve (actually adapt from surrounding norms) distinctive norms, including styles of communication and categories of identity. Although such collections of norms and values fall short of full-fledged human cultures, they
are indicative of a propensity to create normative closure within any ongoing collectivity. Both these concepts, of work group cultures and online communities, point up the importance of culture for computing. As anthropology’s signature concept, culture has an important (if sometimes unstated) place in anthropological thinking about human-computer interaction.
Culture For anthropologists, culture is more profound than simply the attitudes and values shared by a population. As a system of shared understandings, culture represents the accumulated learning of a people (or a group), rooted in their history, their identity, and their relationship with other groups. Cultures evolve as shared projects with other groups. Although they are invented and imagined, cultures cannot be conjured up at will, as much of the recent management literature on corporate culture seems to suggest. This is significant, because much of computing use is in a corporate or organizational context (even if the organization is virtual). From an anthropological perspective, it is highly important to note that much of human-computer interaction is influenced either directly, by the regimes of instrumental rationality in which it takes place, or indirectly, by the fact that it follows protocols established by influential corporations. Several ethnographies of high-tech companies suggest that computerization and the high-tech expectations associated with it are creating new corporate cultures: sociologist Gideon Kunda and anthropologist Kathleen Gregory-Huddleston have described the working atmosphere of two high-tech corporations, noting that despite a technological aura and emancipatory rhetoric, their corporate cultures are still mechanisms of control. It should be noted that high tech is less an engineering concept for explaining functionality or performance than it is an aesthetic conceit for creating auras of power and authority. Others have taken note of the fact that computers create new forms of culture and identity and have described numerous microcultures that have sprung up around such systems as textual databanks, engineering design, and online instruction.
The culture of systems developers, as described by Diana Forsythe is particularly notable. Insofar as developers and users have separate and distinctive cultural outlooks, there will be a mismatch between their tacit understandings of system functionality and system performance. The frequent experience of systems not living up to expectations when deployed in the field is less a consequence of poor engineering than of the fundamental cultural relationships (or disconnects) between developers and users. Finally, anthropology’s original interest in the remote and exotic has often taken its attention away from the laboratories and highly engineered environments in which the most advanced information technologies are found. In 2001 Allen Batteau, an industrial anthropologist, observed that many factories and field installations usually lack the reliable infrastructure of universities or development laboratories. As a consequence, computationally intensive applications that work so well in the laboratory (or in the movies) crash and burn in the field. This lack, however, is not simply a matter of these production environments needing to catch up to the laboratories: Moore’s Law for nearly forty years has accurately predicted a doubling of computational capability every eighteen months, a geometric growth that outstrips the arithmetic pace of technological diffusion. The dark side of Moore’s Law is that the gap between the technological capabilities of the most advanced regions and those of the remote corners of the human community will continue to grow. In 1995 Conrad Kottak, an anthropologist, observed that “High technology has the capacity to tear all of us apart, as it brings some of us closer together” (NSF 1996, 29). Many of these observations grew out of a workshop organized by the American Anthropological Association and the Computing Research Association called “Culture, Society, and Advanced Information Technology.” Held (serendipitously) at the time of the first deployment of graphical Web browsers (an event that as much as any could mark the beginning of the popular information revolution), this workshop identified seven areas of interest for social research in advanced information technology: (1) the nature of privacy, identity, and social roles in the new infor-
mation society; (2) family, work groups, and personal relationships; (3) public institutions and private corporations; (4) communities, both virtual and real; (5) public policy and decision-making; (6) the changing shapes of knowledge and culture; and (7) the globalization of the information infrastructure (NSF 1996). In many ways this workshop both captured and projected forward the anthropological research agenda for understanding the changing social face of advanced information technology.
Questions Anthropology’s orthogonal viewpoint proposes several unique questions. Perhaps the first of these is the question of control versus freedom. On the one hand, cybernetic devices exist to create and integrate hierarchies of control, and the fifty-year history of the development of automation has demonstrated the effectiveness of this strategy. On the other hand, this poses the question of the proper role of a unique node in the control loop, the human user: How many degrees of freedom should the user be allowed? The designer’s answer, “No more than necessary,” can be unsatisfying: Systems that constrain the behavior of all their elements limit the users’ learning potential. The related concepts of system learning and evolution raise the second outstanding question, which has to do with the nature of life. Should systems that can evolve, learn from, and reproduce themselves within changing environments be considered “living systems”? Studies of artificial life suggest that they should. The possibility of a self-organizing system that can replicate itself within a changing environment has been demonstrated by anthropologist Chris Langston, enlarging our perspective beyond the carbon-based naïveté that saw only biological organisms as living. The final question that this raises, which is the ultimate anthropological question, is about the nature or meaning of humanity. Etymologically, anthropology is the “science of man,” a collective term that embraces both genders, and possibly more. Anthropologists always anchor their inquiries on the question of “What does it mean to be human?” Otherwise, their endeavors are difficult to distinguish from com-
parative psychology, or comparative linguistics, or comparative sociology. However, the rise of information technology has fundamentally challenged some received answers to the question of what it means to be human. What are the human capabilities that computers will never mimic? As Pulitzer-prize-winning writer Tracy Kidder asked, Do computers have souls? Will there ever be a computer that meets the Turing test—that is, a computer that is indistinguishable from a fully social human individual? More specifically, how many generations are required to evolve a cluster of computers that will (unaided by human tenders) form alliances, reproduce, worship a deity, create great works of art, fall into petty bickering, and threaten to destroy the planet? As the abilities of silicon-based artifacts to think, feel, learn, adapt, and reproduce themselves continue to develop, the question of the meaning of humanity will probably become the most challenging scientific and philosophical question of the information age. Allen W. Batteau See also Ethnography; Sociology and HCI; Social Psychology and HCI
FURTHER READING Anderson, J. (1998). Arabizing the Internet. Emirates Occasional Papers #30. Abu Dhabi, United Arab Emirates: Emirates Center for Strategic Studies and Research. Anderson, J., & Eikelman, D. (Eds.). (2003). New media in the Muslim world: The emerging public sphere (Indiana Series in Middle East Studies). Bloomington: Indiana University Press. Baba, M. L. (1995). The cultural ecology of the corporation: Explaining diversity in work group responses to organizational transformation. Journal of Applied Behavioral Science, 31(2), 202–233. Baba, M. L. (1999). Dangerous liaisons: Trust, distrust, and information technology in American work organizations. Human Organization, 58(3), 331–346. Batteau, A. (2000). Negations and ambiguities in the cultures of organization. American Anthropologist, 102(4), 726–740. Batteau, A. (2001). A report from the Internet2 ‘Sociotechnical Summit.’ Social Science Computing Review, 19(1), 100–105. Borofsky, R. (1994). Introduction. In R. Borofsky (Ed.), Assessing cultural anthropology. New York: McGraw-Hill. Brown, J. S., & Duguid, P. (2000). The social life of information. Boston: Harvard Business School Press. Callon, M. (1980). The state and technical innovation: A case study of the electrical vehicle in France. Research Policy, 9, 358–376.
Deal, T., & Kennedy, A. (1999). The new corporate cultures. Reading, MA: Perseus Books. Emery, F., & Trist, E. (1965). The causal texture of organizational environments. Human Relations, 18, 21–31. Forsythe, D. (2001). Studying those who study us: An anthropologist in the world of artificial intelligence. Stanford, CA: Stanford University Press. Garfinkel, H. (1967). Studies in ethnomethodology. Englewood Cliffs, NJ: Prentice-Hall. Gregory-Huddleston, K. (1994). Culture conflict with growth: Cases from Silicon Valley. In T. Hamada & W. Sibley (Eds.), Anthropological Perspectives on Organizational Culture. Washington, DC: University Press of America. Hakken, D. (1999). Cyborgs@Cyberspace: An ethnographer looks to the future. New York: Routledge. Haraway, D. (1991). Simians, cyborgs, and women—The reinvention of nature. London: Free Association Books. Hutchins, E. (1994). How a cockpit remembers its speeds. Cognitive Science, 19, 265–288. Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press. Kidder, T. (1981). The soul of a new machine. Boston: Little, Brown. Kunda, G. (1992). Engineering culture: Control and commitment in a high-tech corporation. Philadelphia: Temple University Press. Langston, C. G. (Ed.). (1989). Artificial life (Santa Fe Institute Studies in the Sciences of Complexity, Proceedings, Volume 6). Redwood City, CA: Addison-Wesley. Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, UK: Cambridge University Press. Law, J. (1987). Technology and heterogeneous engineering: The case of Portuguese expansion. In W. E. Bijker, T. P. Hughes, & T. Pinch (Eds.), The social construction of technological systems. (pp. 111–134). Cambridge, MA: MIT Press. Linde, C. (1988). The quantitative study of communicative success: Politeness and accidents in aviation discourse. Language in Society, 17, 375–399. Moore, G. (1965, April 19). Cramming more components onto integrated circuits. Electronics. Nardi, B. A. (1993). A small matter of programming: Perspectives on end user computing. Cambridge, MA: MIT Press. National Science Foundation. (1996). Culture, society, and advanced information technology (Report of a workshop held on June 1–2, 1995). Washington, DC: U. S. Government Printing Office. Perin, C. (1991). Electronic social fields in bureaucracies. Communications of the ACM, 34(12), 74–82. Rheingold, H. (1993). The virtual community: Homesteading on the electronic frontier. Reading, MA: Addison-Wesley. Star, S. L. (Ed.). (1995). The cultures of computing. Oxford, UK: Blackwell Publishers. Stone, A. R. (1995). The war of desire and technology at the close of the mechanical age. Cambridge, MA: MIT Press. Suchman, L. (1987). Plans and situated actions: The problem of humanmachine communication. Cambridge, UK: Cambridge University Press. Turkle, S. (1995). Life on the screen: Identity in the age of the Internet. New York: Simon & Schuster. Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge, UK: Cambridge University Press.
ANTHROPOMETRY The word anthropometry, which means “the measurement of the physical characteristics and physical abilities of people,” is derived from the Greek words anthropo meaning “human being” and metry meaning “measure.” Physical characteristics, also called “structural dimensions,” include such aspects as heights, widths, depths, and body segment circumferences. Physical abilities, also called “functional dimensions,” include such aspects as grip, push and pull strength, reaching capabilities, fields of vision, and functional task performance. Anthropologists, clinicians, and engineers use anthropometric information in a variety of ways. For engineers, in particular, anthropometry provides information that can be used for the design of occupational, public, and residential environments. The information can also be used for the design of tools, protective headgear, clothing, and workstation equipment. Doorway widths, tool handle lengths and circumferences, ranges of clothing sizes, and the location of displays and controls on workstations are some of the design applications. Anthropometry also provides information about body segment center of mass and joint center of rotation characteristics that is used for biomechanical modeling (the study of joint forces and torques on the body).
A Brief History Although anthropometry was applied when Greek and Egyptian artists created standards (canons) for the human form centuries ago, not until the nineteenth century were thought and dialogue on anthropometry organized. Early work in anthropometry focused on the human anatomy, racial characteristics, skeletal remains, and human growth. Among the noteworthy work documented by physical anthropologist Ales Hrdlicka was that of French anthropologist Paul Pierre Broca and the Belgian scientist Adolphe Quetelet. During the mid-nineteenth century Quetelet used statistics to describe anthropometric information. Shortly after the FrancoPrussian War of 1870, a growing emphasis on individualism was evident in the proposals of the
German scholar Rodolpho von Ihering. These proposals called upon German anatomists and anthropologists to reinvestigate craniometric (relating to measurement of the skull) and anthropometric measurement methods. The German Anthropological Society convened in Munich and Berlin during the 1870s and early 1880s to establish what anthropometrist J. G. Garson and others have called the “Frankfort Agreement” of 1882. This agreement introduced craniometric methods distinct from the predominant French methods and established a new nomenclature and measurement methods. The existence of the French and German schools only further cemented the belief that international consensus on methods, nomenclature, and measurements was needed. During the early twentieth century people attempted to develop an international consensus on the nomenclature of body dimensions and measurement methods. In 1906, at the Thirteenth International Congress of Prehistoric Anthropology and Archaeology in Monaco, an international agreement on anthropometry took form. This congress and the Fourteenth International Congress in Geneva, Switzerland, in 1912 began to formalize the body of anthropometric work. The foundations of a normative framework and a standardization of anthropometric measurement had been laid and translated into French, German, and English by 1912. This framework standardized anthropometric measurements on both skeletal and living human subjects. Since 1912 several works by Hrdlicka, Rudolf Martin, and James Gavan have increased the awareness of anthropometry and its uses and added to its scientific rigor. After the initial congresses, people attempted to establish consensus throughout the twentieth century. Congresses meeting under the name of Hrdlicka convened on the topic of anthropometry and measurement methods. Other congresses aimed to create standards and databases for general use. During the late twentieth century authors such as Bruce Bradtmiller and K. H. E. Kroemer chronicled these congresses and offered unique ways to manage anthropometric data. During recent years the International Organization for Standardization (ISO) technical committee on ergonomics published ISO
7250: Basic Human Body Measurements for Technical Design (1996) to standardize the language and measurement methods used in anthropometry and ISO 15535: General Requirements for Establishing an Anthropometric Database (2003) to standardize the variables and reporting methods of anthropometric studies.
Structural Anthropometric Measurement Methods Structural anthropometric measurement methods require a person to be measured while standing or sitting. Anatomical landmarks—observable body features such as the tip of the finger, the corner of the eye, or the bony protrusion of the shoulder known as the “acromion process”—standardize the locations on the body from which measurements are made. The desire to achieve consistent measurements has led to the use of standardized measurement postures held by people who are being measured. The anthropometric standing posture requires the person to hold the ankles close together, standing erect, arms relaxed and palms facing medially (lying or extending toward the median axis of the body) or anteriorly (situated before or toward the front), the head erect and the corners of the eyes aligned horizontally with the ears. The anthropometric seated posture requires the person to be seated erect on a standard seating surface. The elbows and knees are flexed 90 degrees. The palms face medially with the thumb superior (situated above or anterior or dorsal to another and especially a corresponding part) to the other digits. Structural dimensions include the distances between anatomical landmarks, the vertical distance from a body landmark to the floor, and the circumferences of body segments and are measured with a variety of instruments. Among the most common instruments is the anthropometer, which is a rod and sliding perpendicular arm used to measure heights, widths, and depths. A spreading caliper having two curved arms that are hinged together is sometimes used to measure segment widths and depths defined by the distance between the tips of the arms. Graduated cones are used to measure grip
circumferences, and tape measures are used to measure other circumferences such as the distance around the waist. Scales are used to measure body weight. Photographs and video are used to measure body dimensions in two dimensions. One method uses grids that are attached behind and to the side of the person measured. Photographs are then taken perpendicular to the grids, and the space covered by the person in front of the grids can be used to estimate body segment heights, widths, and depths. A variant of this method uses digital photography for which an anthropometric measurement is obtained by comparing the number of pixels (small discrete elements that together constitute an image, as in a television or computer screen) for a dimension to the number of pixels of a reference object also located in the digital photograph. Attempts to develop three-dimensional computer human models with conventional anthropometric data reveal that limitations exist, such as the uncertainty about three-dimensional definition of key points on the body surface, locations of circumferences, and posture. These limitations have resulted in the development of more sophisticated three-dimensional anthropometric measurement methods. Digital anthropometry is the use of digital and computerized technology in the collection of information about body size and physical ability. In this use, computers are responsible for the actual collection of anthropometric data and are not relegated solely to data analysis or storage. Digital anthropometry varies greatly from conventional anthropometry. This variation has changed the nature of anthropometry itself for both the anthropometrist and the experimental context in which measurements are taken. Human factors engineer Matthew Reed and colleagues have identified some of the potential benefits of digital anthropometry:
■ The capacity to assemble more accurate models of human form, dimensions, and postures
■ The capacity to evaluate multiple body dimensions simultaneously
■ The capacity to measure the human and the environment together
■ The improved description of joint centers of rotation and movement in three dimensions
■ The capacity to make corrections to dimensions or create new dimensions after measurements have been recorded
Laser scanning is often used in digital anthropometry because it allows excellent resolution of the morphological (relating to the form and structure of an organism or any of its parts) features of the human body and can be completed rapidly. Laser scanning produces accurate three-dimensional representations of the complex body surfaces, and most protocols (detailed plans of a scientific or medical experiment, treatment or procedure) require the placement of surface markers on the body to ensure the proper location of bony protrusions that are used as measurement landmarks beneath the surface of the skin. Other protocols using laser scans have morphological extraction algorithms (procedures for solving a mathematical problem in a finite number of steps that frequently involve repetition of an operation) to estimate landmark locations based on morphological features. Potentiometry can also be used to collect digital anthropometric measurements. Electromechanical potentiometric systems allow the measurer to manually digitize points in three-dimensional space. The measurer guides a probe tip manually to render discrete points or body surface contours.
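For illustration, the reference-object scaling used in the digital-photography method described earlier in this section amounts to a simple proportion. The following sketch uses invented pixel counts and a hypothetical reference length; it is not drawn from any particular measurement protocol.

```python
# Estimate a body dimension from a digital photograph by comparing its pixel
# length to that of a reference object of known physical size.
# All numeric values below are hypothetical and used only for illustration.

REFERENCE_LENGTH_MM = 300.0   # known length of the reference object in the photograph
reference_pixels = 412        # measured pixel length of the reference object
dimension_pixels = 893        # measured pixel length of the body dimension of interest

mm_per_pixel = REFERENCE_LENGTH_MM / reference_pixels
dimension_mm = dimension_pixels * mm_per_pixel

print(f"Estimated dimension: {dimension_mm:.1f} mm")
```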
Functional Performance Measurements Conventional functional performance measurements include grip, push, and pull strength, and reaching abilities. For grip strength measurement, an individual is required to squeeze for several seconds at maximum effort a hand dynamometer (a force measurement device) set at one or more grip circumferences. For the measurement of push and pull strength, an individual usually holds a static (unchanging) posture while either pushing on or pulling against a force gauge at a maximum effort over several seconds. An individual’s reaching abilities can be evaluated with a number of methods, including
those that employ photography and potentiometry as described above, or methods that require an individual to mark with a hand-held pen or pencil the maximum or comfortable reach locations on a vertical or horizontal grid. Electromagnetic and video-based motion analysis systems provide new measures of physical abilities related to the way people move (kinematics) and can be used with other types of instrumentation, such as force plates (hardware that measure the force applied to it), to provide biomechanical (the mechanics of biological and especially muscular activity) information or measures of balance. These systems allow positions of body landmarks to be tracked over time during a physical activity. The data can be evaluated statistically or can serve as an example of a human task simulation. Such methods of data collection allow more lifelike dynamic digital human models that can be used to evaluate human performance in virtual environments. However, use of these methods is expensive and time consuming.
Measurement Consistency and Variation Anthropometric measurements are recordings of body dimensions and physical abilities that are subject to variability. No “correct” measurement exists because a measurement is simply an observation or recording of an attribute that is the cumulative contribution of many factors. Anthropometric studies have investigated the topic of measurement consistency in relation to intrinsic qualities of variability within a given measurement. J. A. Gavan (1950) graded anthropometry dimensions in terms of consistencies seen through expert anthropometrists and concluded that “consistency increased as: the number of technicians decreased, the amount of subcutaneous [under the skin] tissue decreased, the experience of the technician increased, and as the landmarks were more clearly defined” (Gavan 1950, 425). Claire C. Gordon and Bruce Bradtmiller (1992), Charles Clauser and associates (1998), Gordon and associates (1989), and others have also studied intra- and interobserver error contributions in anthropometric measurements,
including the contributions of different measurement instruments and the effects of breathing cycles. Other researchers, such as Katherine Brooke-Wavell and colleagues (1994), have evaluated the reliability of digital anthropometric measurement systems. These evaluations have brought about an awareness of anthropometric reliability and error as well as acceptable levels of reliability. Anthropometric data typically are collected for large samples of populations to capture distributional characteristics of a dimension so that it is representative of a target population. Many sources of anthropometric variability exist within populations. Men and women differ greatly in terms of structural and functional anthropometric dimensions. Additionally, the anthropometric dimensions of people have changed systematically through time. Today’s people are generally taller and heavier than those of previous generations, perhaps because of improved availability and nutrition of food in developed countries. Of course, a person’s body size also changes through time, even throughout the course of adulthood. As a person ages, for example, his or her height decreases. Other sources of anthropometric variability include ethnicity, geography, and occupational status. The distribution characteristics of an anthropometric dimension are often reported for different categories of age and gender, and sometimes for different ethnicities or countries. Because the variability of anthropometric dimensional values within such subgroups often takes the shape of a Gaussian (bell-shaped) distribution, the mean and standard deviation of the sample data are often used to describe the distributional characteristics of a dimension. The percentile value—the value of a dimension that is greater than or equal to a certain percentage of a distribution—also provides useful information. For example, the fifth and ninety-fifth percentiles of a dimensional value define the outer boundaries of the 90 percent midrange of a population distribution that might enable a designer to develop an adjustable consumer product or environment feature that can accommodate 90 percent or more of the target population. Multivariate data analysis includes the use of correlation and regression analyses, as well as human
modeling methods. The correlation between two dimensions provides a measure of how strongly two dimensions covary linearly. When two measurements are highly correlated the values of one measurement can be used to predict the values of another in a regression analysis, therefore reducing the total number of measurements needed to construct a comprehensive set of anthropometric tables and human models based on partially extrapolated data. When combinations of anthropometric dimensions are considered simultaneously in the evaluation of a product or environment, mockups and task trialing involving people or simulation approaches using digital human modeling of people are required.
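The following sketch illustrates these univariate and bivariate analyses with invented numbers; a real application would take its statistics from a survey such as those described in the next section. Under an approximately Gaussian assumption, the fifth and ninety-fifth percentiles can be estimated from the mean and standard deviation, and a regression fitted to paired measurements can predict one dimension from another.

```python
import numpy as np

# --- Univariate: estimate percentiles from a mean and standard deviation ---
# Hypothetical stature statistics (mm) for a target population.
mean_stature, sd_stature = 1755.0, 67.0
z_05, z_95 = -1.645, 1.645        # standard-normal quantiles for the 5th/95th percentiles

p05 = mean_stature + z_05 * sd_stature
p95 = mean_stature + z_95 * sd_stature
print(f"5th percentile ~ {p05:.0f} mm, 95th percentile ~ {p95:.0f} mm")

# --- Bivariate: predict one dimension from another by linear regression ---
# Hypothetical paired measurements: stature (mm) and standing eye height (mm).
stature    = np.array([1620, 1680, 1710, 1755, 1790, 1830, 1880], dtype=float)
eye_height = np.array([1510, 1565, 1598, 1640, 1672, 1710, 1760], dtype=float)

slope, intercept = np.polyfit(stature, eye_height, 1)   # least-squares line
r = np.corrcoef(stature, eye_height)[0, 1]              # correlation coefficient

predicted = slope * 1700 + intercept
print(f"r = {r:.3f}; predicted eye height for a 1700 mm stature ~ {predicted:.0f} mm")
```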
Important Data Sources The most comprehensive anthropometric studies have focused on military personnel, at least in part due to the need for the military to have information to provide well-designed uniforms, equipment, land vehicles, and aircraft. Perhaps one of the most comprehensive studies was the 1988 U.S. Army Anthropometric Survey (ANSUR), which summarized 132 dimensions of approximately nine thousand army personnel. One of the most inclusive sources of civilian anthropometric data is a U.S. National Aeronautics and Space Administration (NASA) technical report produced by the staff of the Anthropology Research Project in 1978. This report contains anthropometric data across a variety of civilian and military populations for a large number of anthropometric variables, including information about the mass distribution of body segments. More recently, the Civilian American and European Surface Anthropometry Resource (CAESAR) project used laser scanning to collect the body surface contours and sizes of approximately twenty-four hundred North American and two thousand European civilians from 1998 to 2000. Measurements were recorded with people in standing, standardized seated, and relaxed seated postures. Thousands of points that define the location of the body’s surface were collected with each scan, providing extremely accurate three-dimensional representations of the body surface contours for individual human
models that can be used to evaluate the fit of a product or of the person in an environment. Because markers are also placed over key body landmarks, conventional descriptive analysis of dimensions has also been performed. CAESAR is the largest and most valuable anthropometric data source of its kind.
Using Anthropometric Data in Design Conventional use of anthropometric data in design requires determining (1) the population for which a design is intended, known as the “target population,” (2) the critical dimension or dimensions of the design, (3) an appropriate anthropometric data source, (4) the percentage of the population to be accommodated by the design, (5) the portion of the distribution that will be excluded, usually the largest and/or smallest values of the distribution, and (6) the appropriate design values through the use of univariate or bivariate statistical methods. Conventional application of anthropometric data, however, is not able to address the design problems that require the evaluation of many design characteristics simultaneously. Multivariate analysis using mockups and task trialing requires recruiting people with the desired range of body size and ability and assessing human performance during the simulation, such as judging whether people can reach a control or easily see a display for a particular design. Static and dynamic digital human modeling approaches require manipulating models of various sizes in virtual environments to assess the person-design fit. Analysis methods for dynamic digital human modeling approaches are still in their infancy due to the limited number of studies recording the needed information and the complicated nature of the data. A variety of fields uses anthropometric data, including anthropology, comparative morphology, human factors engineering and ergonomics, medicine, and architectural design. Additionally, digital anthropometry has been used outside of scientific and research endeavors, as seen in the application of a new suit-making technology for Brooks Brothers (known as “digital tailoring”). The International Organization for Standardization has published numerous publications
that apply anthropometric data to the development of design guidelines. These publications include ISO 14738 Safety of Machinery—Anthropometric Requirements for the Design of Workstations at Machinery (2002), ISO 15534 Ergonomic Design for the Safety of Machinery (2000), and ISO 9241 Documents on the Ergonomic Requirements for Office Work with Visual Display Terminals (1992–2001). The latter publications were developed to improve the fit between people and their computers at work.
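A minimal sketch of steps (4) through (6) of the univariate procedure outlined above, using an invented sample of a single critical dimension; the sample values and the 90 percent accommodation target are assumptions for illustration only.

```python
import numpy as np

def design_limits(sample, accommodate=0.90):
    """Return lower and upper design values that accommodate the central
    `accommodate` fraction of the sample, excluding the extremes equally."""
    tail = (1.0 - accommodate) / 2.0
    lower = np.percentile(sample, 100 * tail)
    upper = np.percentile(sample, 100 * (1 - tail))
    return lower, upper

# Hypothetical sitting eye heights (mm) sampled from the target population.
sample = np.array([735, 748, 760, 772, 781, 790, 802, 815, 828, 845], dtype=float)

low, high = design_limits(sample, accommodate=0.90)
print(f"Adjustable range to accommodate about 90 percent: {low:.0f}-{high:.0f} mm")
```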
Future Research A major challenge of future research is how to summarize and interpret the information-rich but complex three-dimensional data that accompany the new methods of measurement described here. New methods of three-dimensional measurement of body dimensions such as whole-body scanning provide new opportunities to move conventional univariate anthropometric applications to complete three-dimensional static human models that can be used to evaluate design in new ways. Motion analysis methods in dynamic human modeling also provide a powerful tool to improve our understanding of the functional abilities of people. The reliability, accuracy, and applications of many of these anthropometric measurement methods, however, have yet to be fully explored. Perhaps what is most needed is simply more information about the physical dimensions and abilities in more diverse user groups. Lack of anthropometric information severely limits the use of anthropometry in the design of living and working spaces that can be used by diverse populations. U.S. government agencies, particularly the U.S. Architectural and Transportation Barriers Compliance Board (Access Board) and the U.S. Department of Education’s National Institute on Disability and Rehabilitation Research (NIDRR), recently have started to address the information gap by studying the physical abilities of people with disabilities, such as people who use wheelchairs. However, much work remains to be done. In particular, the need for anthropometric data to inform the design of occupational, public, and residential environments of the elderly is expected
to increase substantially as the proportion of the elderly in the population continues to increase dramatically during the years to come. Victor Paquet and David Feathers See also Motion Capture
FURTHER READING Annis, J. F. (1989). An automated device used to develop a new 3-D database for head and face anthropometry. In A. Mital (Ed.), Advances in industrial ergonomics and safety (pp. 181–188). London: Taylor & Francis. Annis, J. F., Case, H. W., Clauser, C. E., & Bradtmiller, B. (1991). Anthropometry of an aging work force. Experimental Aging Research, 17, 157–176. Brooke-Wavell, K., Jones, P. R. M., & West, G. M. (1994). Reliability and repeatability of 3-D body scanner (LASS) measurements compared to anthropometry. Annals of Human Biology, 21, 571–577. Clauser, C., Tebbetts, I., Bradtmiller, B., McConville, J., & Gordon, C. (1998). Measurer’s handbook: U.S. Army anthropometric survey (Technical Report No. TR-88/043). Natick, MA: U.S. Army Natick Research, Development and Engineering Center. Damon, A., & Stout, H. (1963). The functional anthropometry of old men. Human Factors, 5, 485–491. Dempster, W. T., Gabel, W. C., & Felts, W. J. L. (1959). The anthropometry of the manual work space for the seated subject. American Journal of Physical Anthropometry, 17, 289–317. Eastman Kodak Company. (2003). Ergonomic design for people at work (2nd ed.). New York: Wiley. Garson, J. (1885). The Frankfort Craniometric Agreement, with critical remarks thereon. Journal of the Anthropological Institute of Great Britain and Ireland, 14, 64–83. Gavan, J. (1950). The consistency of anthropometric measurements. American Journal of Physical Anthropometry, 8, 417–426. Gordon, C., & Bradtmiller, B. (1992). Interobserver error in a large scale anthropometric survey. American Journal of Human Biology, 4, 253–263. Gordon, C., Bradtmiller, B., Clauser, C., Churchill, T., McConville, J., Tebbetts, I., & Walker, R. (1989). 1987–1988 anthropometric survey of U.S. Army personnel: Methods and summary statistics (Technical Report No. TR-89/027). Natick, MA: U.S. Army Natick Research, Development and Engineering Center. Haddon, A. (1934). The history of anthropology. London: Watts & Co. Hobson, D., & Molenbroek, J. (1990). Anthropometry and design for the disabled: Experiences with seating design for cerebral palsy population. Applied Ergonomics, 21(1), 43–54. Hoekstra, P. (1997). On postures, percentiles and 3D surface anthropometry. Contemporary Ergonomics (pp. 130–135). London: Taylor & Francis. Hrdlicka, A. (1918). Physical anthropology; its scope and aims, etc. American Journal of Physical Anthropometry, 1, 3–23. International Organization for Standardization. (Ed.). (1992–2003). Ergonomics requirements for office work with visual
display terminals (VDTs), (ISO Standard 9241). Geneva, Switzerland: International Organization for Standardization. International Organization for Standardization. (Ed.). (1996). Basic human body measurements for technical design (ISO Standard 7250). Geneva, Switzerland: International Organization for Standardization. International Organization for Standardization. (Ed.). (2000). Ergonomic design for the safety of machinery (ISO Standard 15534). Geneva, Switzerland: International Organization for Standardization. International Organization for Standardization. (Ed.). (2002). Safety of machinery—Anthropometric requirements for the design of workstations at machinery (ISO Standard 14738). Geneva, Switzerland: International Organization for Standardization. International Organization for Standardization. (Ed.). (2003). General requirements for establishing an anthropometric database (ISO Standard 15535). Geneva, Switzerland: International Organization for Standardization. Kroemer, K. H. E., Kroemer, H. J., & Kroemer-Elbert, K. E. (1997). Engineering anthropometry. In K. H. E. Kroemer (Ed.), Engineering physiology (pp. 1–60). New York: Van Nostrand Reinhold. Marras, W., & Kim, J. (1993). Anthropometry of industrial populations. Ergonomics, 36(4), 371–377. Molenbroek, J. (1987). Anthropometry of elderly people in the Netherlands: Research and applications. Applied Ergonomics, 18, 187–194. Paquet, V. (Ed.). (2004). Anthropometry and disability [Special issue]. International Journal of Industrial Ergonomics, 33(3). Paquette, S., Case, H., Annis, J., Mayfield, T., Kristensen, S., & Mountjoy, D. N. (1999). The effects of multilayered military clothing ensembles on body size: A pilot study. Natick, MA: U.S. Soldier and Biological Chemical Command Soldier Systems Center. Reed, M., Manary, M., Flannagan, C., & Schneider, L. (2000). Effects of vehicle interior geometry and anthropometric variables on automobile driving posture. Human Factors, 42, 541–552. Reed, M., Manary, M., & Schneider, L. (1999). Methods for measuring and representing automobile occupant posture (SAE Technical Paper No. 1999-01-0959). Warrendale, PA: Society of Automotive Engineers. Robinette, K. (1998). Multivariate methods in engineering anthropometry. In Proceedings of the Human Factors and Ergonomics Society 42nd annual meeting (pp. 719–721). Santa Monica, CA: Human Factors and Ergonomics Society. Robinette, K. (2000). CAESAR measures up. Ergonomics in Design, 8(3), 17–23. Roebuck, J., Kroemer, K. H. E., & Thomson, W. (1975). Engineering anthropometry methods. New York: Wiley. Steenbekkers, L., & Molenbroek, J. (1990). Anthropometric data of children for non-specialized users. Ergonomics, 33(4), 421–429. Ulijaszek, S., & Mascie-Taylor, C. G. N. (Eds.). (1994). Anthropometry: The individual and the population. Cambridge, UK: Cambridge University Press.
APPLICATION USE STRATEGIES Strategies for using complex computer applications such as word-processing programs and computer-aided drafting (CAD) systems are general and goal-directed methods for performing tasks. These strategies are important to identify and learn because they can make users more efficient and effective in completing their tasks, and they are often difficult to acquire just by knowing commands on an interface. To understand strategies for using computer applications, consider the task of drawing three identical arched windows in a CAD system. As shown in Figure 1A, one way to perform the task is to draw all the arcs across the windows, followed by drawing all the vertical lines, followed by drawing all the horizontal lines. Another way to perform the same task (Figure 1B) is to draw all the elements of the first window, group the elements and then make two copies of the grouped elements. The first method is called sequence-by-operation because it organizes the drawing task by performing one set of identical operations (in this case draw arc), followed by performing the next set of similar operations (in this case draw line). The second method is called detail-aggregate-manipulate because it organizes the task by first detailing all the elements of the first object (in this case drawing the parts of the first window), aggregating the elements of the first object (in this case grouping all the parts of the first window), and then manipulating that aggregate (in this case making two copies of the grouped elements of the first window). Both methods are strategies because they are general and goal-directed. For example, the detail-aggregate-manipulate strategy is general because it can be used to create multiple copies of sets of objects in a wide range of applications. The above example was for a CAD application, but the same strategy could be used to create many identical paragraphs for address labels in a word-processing application, such as Microsoft Word. The detail-aggregate-
manipulate strategy is also goal-directed because it can be used to complete the task of drawing three arched windows. The definition of a strategy given above subsumes more limited strategy definitions used in fields as diverse as business management and cognitive psychology. These definitions may be stated in terms of time (they may define strategy as a long-term plan for achieving a goal), the existence of alternate methods (they may consider a strateg y to be any method that is nonobligatory), or performance outcomes (they may define a strategy as a method that results in a competitive advantage). However, excluding these particulars (time, existence of alternate methods, and performance outcomes) from the definition of strategy enables us to describe strategies in a more encompassing way, irrespective of whether they are short term or long term, unique or one of many, or efficient or inefficient.
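The contrast between the two strategies can be sketched in code. The drawing commands below are hypothetical stand-ins (a simplified canvas that only records the commands it receives), not the API of any particular CAD system.

```python
# Illustrative contrast of the two strategies for drawing three identical
# arched windows. The RecordingCanvas and its methods are invented for this
# sketch; windows are simplified to an arc plus a set of lines.

class RecordingCanvas:
    def __init__(self):
        self.actions = []                       # log of user-issued commands

    def draw_arc(self, x):
        self.actions.append(("arc", x))
        return ("arc", x)

    def draw_lines(self, x):
        self.actions.append(("lines", x))
        return ("lines", x)

    def group(self, elements):
        self.actions.append(("group", len(elements)))
        return tuple(elements)

    def copy(self, obj, offset):
        self.actions.append(("copy", offset))


def sequence_by_operation(canvas, positions):
    # One kind of operation at a time across all windows: all arcs, then all lines.
    for x in positions:
        canvas.draw_arc(x)
    for x in positions:
        canvas.draw_lines(x)


def detail_aggregate_manipulate(canvas, positions):
    # Detail one window, aggregate its elements, then let the computer repeat it.
    first = positions[0]
    elements = [canvas.draw_arc(first), canvas.draw_lines(first)]
    window = canvas.group(elements)
    for x in positions[1:]:
        canvas.copy(window, offset=x - first)


a, b = RecordingCanvas(), RecordingCanvas()
sequence_by_operation(a, [0, 10, 20])
detail_aggregate_manipulate(b, [0, 10, 20])
print(len(a.actions), "commands vs.", len(b.actions))   # 6 vs. 5 in this simplified case
```

With more elements per window and more copies, the gap between the two command counts widens, which is the point of the second strategy.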
The Costs and Benefits of Using Strategies Although the two strategies shown in Figure 1 achieve the same goal, different costs and benefits are associated with each one’s use. By drawing all the arcs before the lines, the sequence-by-operation strategy reduces the cost of switching between the draw arc, and the draw line commands. Furthermore, the strategy uses simple commands that are useful for performing a large set of tasks. Therefore, the short-term learning cost of using this strategy is small. However, because the user is constructing every element in the drawing, the performance cost (measured in terms of time and effort) can become large when drawing repeated elements across many tasks, especially in the long term. In contrast, the detail-aggregatemanipulate strategy requires the user to draw the elements of only one window, and makes the computer construct the rest of the windows using the group, and copy commands. For a novice CAD user, the short-term learning cost for the detail-aggregatemanipulate strategy involves learning the group and copy commands and how to sequence them. However, as is common in the use of any new tool,
this short-term learning cost is amortized over the long term because of the efficiency gained over many invocations of the strategy. This amortization therefore lowers the overall performance cost. Research has shown that strategies like detail-aggregate-manipulate can save users between 40 percent and 70 percent of the time to perform typical drawing tasks, in addition to reducing errors. Furthermore, with properly designed strategy-based training, such strategies can be taught to novice computer users in a short amount of time. For users who care about saving time and producing accurate drawings, learning such strategies can therefore make them more efficient (save time) and more effective (reduce errors) with relatively short training.
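The trade-off between short-term learning cost and long-term savings can be made concrete with a simple break-even calculation; all of the times below are invented for illustration, not measured values.

```python
# Break-even point for investing in the detail-aggregate-manipulate strategy.
# All durations are hypothetical and expressed in minutes.

learning_cost = 30.0           # one-time cost of learning the group and copy commands
time_per_task_simple = 12.0    # typical task time with sequence-by-operation
time_per_task_strategy = 5.0   # typical task time once the strategy is mastered

saving_per_task = time_per_task_simple - time_per_task_strategy
break_even_tasks = learning_cost / saving_per_task
print(f"Learning pays for itself after about {break_even_tasks:.1f} tasks")
```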
FIGURE 1. Two strategies to perform the 3-window drawing task. A. Sequence-by-operation strategy: (1) draw arcs; (2) draw vertical lines; (3) draw horizontal lines. B. Detail-aggregate-manipulate strategy: (1) draw arc; (2) draw lines; (3) group lines; (4) copy group. Source: Bhavnani, S. K., & John, B. E. (1996). Exploring the unrealized potential of computer-aided drafting. Proceedings of CHI’96, 337. Copyright 1996 ACM, Inc. Reprinted by permission.
A Framework That Organizes Strategies for Using Complex Computer Applications Given the important role that strategies can play in improving overall productivity, researchers have attempted to identify and organize strategies for computer application use, such as authoring and information retrieval applications. Frameworks to organize strategies have suggested the design of: (1) training that teaches the strategies in a systematic way, (2) new systems that provide effective and efficient strategies to users with little experience, and (3) evaluation methods to ensure that designers consistently offer the commands for using efficient and effective strategies. One useful way to organize strategies is based on the general capabilities of computer applications that the strategies exploit. For example, the detail-aggregate-manipulate strategy described in Figure 1 exploits the iterative power of computers; it makes the computer (instead of the user) perform the repetitious task of copying the elements multiple times. Strategies have also been identified that exploit other powers of computers, such as the powers of propagation, organization, and visualization. Another way to organize strategies is by the scope of their use. For example, some strategies are
broad in scope because the powers they exploit are offered by a large range of computer applications such as authoring and information retrieval applications. Other strategies are narrower in scope and applicable to a smaller range of computer applications such as only to word processors. Large-Scope Strategies Given the ubiquity of graphical user interfaces (GUIs) across computer applications, most useful computer applications require some interaction with a visual interface. Such computer applications offer the power of visualization, that is, the power to selectively view information on the screen. For example, a common word-processing task is to compare information from one part of a document with information in another part of the document. When these two parts of the document cannot fit simultaneously on the screen, the user can perform the comparison task in several ways. One way is to scroll back and forth between the relevant parts of the document. This method is time-consuming and error-prone because it requires the user to remember the information that is not visible. Another way to perform the same comparison task is to first bring together on the computer screen the two relevant parts of the document, before
FIGURE 2. Strategies have been identified to exploit different powers of computers at different scope levels. Large-scope strategies (e.g., visualization strategies) are useful to many classes of computer applications, such as authoring and information retrieval applications. Medium-scope strategies (e.g., iteration and propagation strategies) apply to a single class of computer applications, such as authoring applications. Small-scope strategies (e.g., text transformation, formula decomposition, and graphic precision strategies) apply to a single subclass of applications, such as only to word processors, spreadsheets, or drawing systems. The dotted lines represent how future strategies can be included in the framework.
comparing them. The information can be brought together on the screen by different commands, such as by opening two windows of the same document scrolled to the relevant parts of the document, or by using the split window command in Microsoft Word to view two parts of the document simultaneously. In addition to being useful for word-processing tasks, this visualization strategy is also useful when one is drawing a complex building in a CAD system, or when one is comparing information from two different webpages when retrieving information on the Web. Hence strategies that exploit the power of visualization have wide scope, spanning many different classes of computer applications. Medium-Scope Strategies While visualization strategies have the widest use across classes of computer applications, there are three sets of strategies that are limited in scope to only one class of computer applications: First, there are strategies that exploit the iterative power of computers, such as the detail-aggregate-manipulate strategy discussed earlier. These are useful mainly for authoring applications such as drawing systems and word processors.
Second, there are strategies that exploit the power of propagation provided by authoring applications. The power of propagation enables users to set up dependencies between objects, such that modifications automatically ripple through to the dependent objects. For example, often users have to change the font and size of headings in a document to conform to different publication requirements. One way to perform this task is to make the changes manually. This is time-consuming, especially when the document is long, and error-prone, because certain headings may be missed or incorrectly modified. A more efficient and effective method of performing the same task is to first make the headings in a document dependent on a style definition in Microsoft Word. When this style definition is modified, all dependent headings are automatically changed. This strategy is useful across such applications as spreadsheets (where different results can be generated by altering a variable such as an interest rate), and CAD systems (where it can be used to generate variations on a repeated window design in a building façade). Third, there are strategies that exploit the power of organization provided by authoring applications. The power of organization enables users to explic-
itly structure information in representations (such as in a table). These explicit representations enable users to make rapid changes to the content without having to manually update the structure of the representation. For example, one way to represent tabular information in a word-processing application is by using tabs between the words or numbers. However, because tabs do not convey to the computer an explicit tabular representation consisting of rows and columns, the tabular structure may not be maintained when changes are made to the content. A more efficient and effective way to perform this task is to first make the table explicit to the computer by using the command insert table, and then to add content to the table. Because the computer has an internal data structure for representing a table, the tabular representation will be maintained during modifications (such as adding more content to a cell in the table). Organization strategies are also useful in other authoring applications. For example, information can be stored using a set-subset representation in a spreadsheet (as when different sheets are used to organize sets of numbers) and in a CAD system (as when different layers are used to organize different types of graphic information). As discussed above, strategies that exploit the powers of iteration, propagation, and organization are useful mainly for authoring applications. However, it is important to note that the powers of iteration, propagation, and organization can also be offered by other classes of computer applications, such as information retrieval applications. For example, many Web browsers offer users ways to organize the addresses of different retrieved webpages. (The organizing features provided by the favorites command in Internet Explorer is one example.) However, while powers provided by authoring applications can be provided in other classes of computer applications, the strategies that they exploit will tend to be the same. Small-Scope Strategies Small-scope strategies exploit powers provided by particular subclasses of applications. For example, the power of graphic precision is offered mainly by drawing systems, such as CAD systems. Strategies that exploit graphic precision enable users to cre-
ate and manipulate precise graphic objects. For example, a common precision drawing task is to create a line that is precisely tangent and touching the end of an arc (as shown in the arched windows in Figure 1). One way to perform this task is to visually locate and then click the end of the arc when drawing the line. This is error-prone because the user relies on visual feedback to detect the precise location of the end of the arc. Another way is to use the snap-to-object command, which enables the user to click a point that is only approximately at the end of the arc. The computer responds by automatically locating the precise end of the arc, and therefore enables the user to draw a line that is precisely tangent to the end of the arc. Similar small-scope strategies have been identified for word-processing applications (such as those that assist in transforming text to generate summaries or translations) and for spreadsheets (such as those that decompose formulas into subformulas to enable quick debugging). Future Extensions of the Strategy Framework The strategy framework described above focuses on authoring applications. However, the framework can also be extended to organize the large number of search strategies that have been identified for use with information retrieval applications such as general-purpose search engines like Google. In contrast to computer powers that are useful in organizing strategies for use with authoring applications, strategies for use with information retrieval systems appear to be driven by attributes of how information sources are structured. For example, a large portion of the Web comprises densely connected webpages referred to as the core of the Web. The densely connected structure of information sources in the core suggests the importance of using a variety of browsing strategies (that rely on using hyperlinks to move from one page to another) to locate relevant sources. There is also a large portion of the Web that consists of new pages that are not linked to many other pages. Strategies to find these pages therefore require the use of different query-based search engines, given that no single search engine indexes all webpages.
While there has been much research on strategies for finding relevant sources of information, one set of strategies works by selecting and ordering relevant sources of information based on the way information is distributed across sources. For example, health care information is typically scattered across different health care portals. In this situation a useful strategy is to visit specific kinds of portals in a particular order to enable comprehensive accumulation of the relevant information. Such strategies become critical when incomplete information can have dangerous consequences (as is the case with incomplete information on health issues). An important difference between strategies for using authoring applications and strategies for using information retrieval systems is that search strategies are fundamentally heuristic—that is, they are rules of thumb that do not guarantee successful task completion. This is in part because users’ evaluation of what is relevant changes based on what is being learned during the search process.
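A minimal sketch of this ordered-source heuristic follows. The source categories, their ordering, and the fetch function are placeholders for illustration and are not drawn from any particular system such as the Strategy Hub.

```python
# Heuristic search strategy: visit categories of sources in a fixed order and
# accumulate results, because no single source is assumed to be complete.
# `fetch` is a placeholder for whatever retrieval mechanism is actually available.

SOURCE_ORDER = ["general portal", "specialist portal", "support community"]

def fetch(category, topic):
    # Placeholder: a real system would query a portal of the given category here.
    return [f"{topic} page from a {category}"]

def ordered_search(topic):
    accumulated = []
    for category in SOURCE_ORDER:                   # order matters: broad context first,
        accumulated.extend(fetch(category, topic))  # then increasingly specific detail
    return accumulated

print(ordered_search("melanoma treatment"))
```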
How the Identification of Strategies Can Improve Human-Computer Interaction The identification and analysis of application use strategies suggests three practical developments: strategy-based instruction, new search systems, and an analysis method to ensure consistency in capabilities across applications. Strategy-Based Instruction Strategies for using authoring applications have led to the design of strategy-based instruction. Strategy-based instruction teaches commands in combination with the authoring strategies that make use of authoring applications’ powers of iteration, propagation, and organization. Research has shown that students who took the strategy-based training acquired more efficient and effective strategies and demonstrated a greater ability to transfer that knowl-
edge across applications than did students who were taught only commands. New Search Systems The identification of search strategies to deal with the scatter of information across the Web has led to the design of a new kind of domain portal called a Strategy Hub. This type of domain portal implements the heuristic search strategy of visiting sources of information in a particular order. Recent studies show that such a system enables users to find more comprehensive information on specific topics when compared to the information retrieved by users of other search systems. An Analysis Method To Ensure Consistency in Capabilities across Applications To enable the widest use of strategies across computer applications, designers must provide a consistent set of commands. Therefore, a method called “designs conducive to the use of efficient strategies” (Design-CUES) has been developed that enables designers to systematically check if their designs provide the commands necessary for users to implement efficient and effective strategies.
Looking Forward Many years of research has shown that merely learning commands does not make for the best use of complex computer applications. The effective and efficient use of computer applications often requires the use of strategies in addition to commands. An important research goal has therefore been to identify strategies for using a wide range of computer applications. The strategies that have been identified to date have benefited users through strategybased instruction, new forms of search systems, and new design methods. As research on strategy identification continues, we can expect more developments along those lines, all with the ultimate goal of making users more effective and efficient in the use of complex computer applications. Suresh K. Bhavnani
FURTHER READING Bates, M. (1979). Information search tactics. Journal of the American Society for Information Science 30(4), 205–214. Bates, M. J. (1998). Indexing and access for digital libraries and the Internet: Human, database, and domain factors. Journal of the American Society for Information Science, 49(13), 1185–1205. Belkin, N., Cool, C., Stein, A., & Thiel, U. (1995). Cases, scripts, and information-seeking strategies: On the design of interactive information retrieval systems. Expert Systems with Applications, 9(3), 379–395. Bhavnani, S. K. (2002). Domain-specific search strategies for the effective retrieval of healthcare and shopping information. In Proceedings of CHI’02 (pp. 610–611). New York: ACM Press. Bhavnani, S. K. (in press). The distribution of online healthcare information: A case study on melanoma. Proceedings of AMIA ’03. Bhavnani, S. K., Bichakjian, C. K., Johnson, T. M., Little, R. J., Peck, F. A., Schwartz, J. L., et al. (2003). Strategy hubs: Next-generation domain portals with search procedures. In Proceedings of CHI ’03, (pp. 393–400). New York: ACM Press. Bhavnani, S. K., & John, B. E. (2000). The strategic use of complex computer systems. Human-Computer Interaction, 15(2–3), 107–137. Bhavnani, S. K., Reif, F., & John, B. E. (2001). Beyond command knowledge: Identifying and teaching strategic knowledge for using complex computer applications. In Proceedings of CHI ’01 (pp. 229–236). New York: ACM Press. Drabenstott, K. (2000). Web search strategies. In W. J. Wheeler (Ed.), Saving the user’s time through subject access innovation: Papers in honor of Pauline Atherton Cochrane (pp. 114–161). Champaign: University of Illinois Press. Mayer, R. E. (1988). From novice to expert. In M. Helander (Ed.), Handbook of human-computer interaction (pp. 781–796). Amsterdam: Elsevier Science. O’Day, V., & Jeffries, R. (1993). Orienteering in an information landscape: How information seekers get from here to there. In Proceedings of CHI 93 (pp. 438–445). New York: ACM Press. Shute, S., & Smith, P. (1993). Knowledge-based search tactics. Information Processing & Management, 29(1), 29–45. Siegler, R. S., & Jenkins, E. (1989). How children discover new strategies. Hillsdale, NJ: Lawrence Erlbaum Associates. Singley, M., & Anderson, J. (1989). The transfer of cognitive skill. Cambridge, MA: Harvard University Press.
ARPANET The Arpanet, the forerunner of the Internet, was developed by the U.S. Department of Defense’s Advanced Research Projects Agency (ARPA) in the late 1960s. ARPA was created in 1958 by President Dwight D. Eisenhower to serve as a quick-response research and development agency for the Department of Defense, specifically in response to the launch of the Soviet satellite Sputnik. The agency, now the Defense
Advanced Research Projects Agency (DARPA), funded some of the most important research of the twentieth century.
The Arpanet Concept The Arpanet long-distance computer network was a collection of ideas, breakthroughs, and people. The roots of the Arpanet can be traced to one of ARPA’s most famous managers, J. C. R. Licklider. In 1962 Licklider was recruited to work at ARPA, then housed in the Pentagon, to start a behavioral sciences program. Although a psychologist by training, Licklider had a passion for the emergent field of computers and was adamant that the future of computing resided in the interactions between humans and computers. In his seminal work, a paper entitled “Man-Computer Symbiosis” written in 1960, Licklider predicted that computers would not be merely tools for people to use but also extensions of people, forming a symbiotic relationship that would revolutionize the way people interact with the world. Through ARPA Licklider began to interact with the brightest minds in computing—scientists at Stanford, Berkeley, UCLA, MIT, and a handful of companies that made up what Licklider considered to be his “intergalactic computer network.” Of course, this network existed only in theory because people had no way to bring these resources together other than telephone or face-to-face meetings. However, Licklider had the vision of gathering these people and resources, making the intergalactic network a physical network through an integrated network of computers. Although originally brought on board to work on behavioral science issues in command-and-control systems, Licklider was directly responsible for transforming his command-and-control research office into the Information Processing Techniques Office (IPTO), which would be responsible for critical advanced computing achievements for decades to come. Although Licklider left ARPA in 1964, he had a lasting effect on the field of computing and the development of the Arpanet. In 1966 another computer visionary, Bob Taylor, became director of IPTO and immediately began to address the computer networking problem. The
computing field at that time suffered from duplication of research efforts, no electronic links between computers, little opportunity for advanced graphics development, and a lack of sharing of valuable computing resources. Taylor asked the director of ARPA, Charles Herzfeld, to fund a program to create a test network of computers to solve these problems. Herzfeld granted Taylor’s request, and Taylor’s office received more than one million dollars to address the problems. Thus, the Arpanet project was born. Taylor needed a program manager for the Arpanet project. He recruited Larry Roberts from MIT’s Lincoln Labs. Roberts, twenty-nine years old, arrived at the Pentagon in 1966 and was ready to address head on the problem of communications between computers.
Fundamental Issues in Networking Several fundamental issues existed in the networking of computers. Networking had been conceived of to solve the problem of resource sharing between computers. During the 1960s computers were extremely large, expensive, and time consuming to operate. ARPA had already invested in computing resources at several computing centers across the country, but these centers had no way to communicate among one another or to share resources. At the same time, Cold War concerns were causing U.S. scientists to take a hard look at military communications networks across the country and to evaluate the networks’ survivability in case of a nuclear strike. In the United Kingdom scientists were looking at networks for purely communications use and were evaluating digital communication methods to work around the inefficiency of the analogue telephone system. Both U.S. and United Kingdom scientists were researching distributed networks (digital data communication networks that extend across multi-
ple locations), packet switching, dynamic routing algorithms (computational means of directing data flows), and network survivability/redundancy. Packet switching would be a critical element of network design because it would allow information to be broken down into pieces or “packets” that would be sent over the network and reassembled at their final destination. This was a much more efficient messaging design, particularly when contrasted to analogue phone lines. Additionally, the distributed network design would be more efficient and robust. Without central nodes (locations that contain all the resources and then distribute them to the rest of the system), the system could survive a loss of one or more nodes and still route data traffic. This design also would allow more efficient data trafficking when coupled with an adaptive networking algorithm capable of determining the most efficient path for any packet to travel. Researchers addressed these issues prior to the Arpanet project. RAND’s Paul Baran recommended a distributed switching network to the U.S. Air Force in 1965 for the communications network of the Strategic Air Command, but the network was not developed. In the United Kingdom Don Davies was working on packet switching and adaptive networking at the National Physical Laboratory. The two men independently came up with many of the same answers that would eventually be incorporated into the Arpanet.
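The core idea of packet switching (breaking a message into independently routed pieces that are reassembled in order at the destination) can be illustrated with a toy sketch; it bears no relation to the actual IMP software or Arpanet protocols.

```python
import random

def packetize(message, size=8):
    """Split a message into numbered packets so it can be reassembled later."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Reorder packets by sequence number and reconstruct the original message."""
    return "".join(chunk for _, chunk in sorted(packets))

original = "Packets may arrive out of order over different routes."
packets = packetize(original)
random.shuffle(packets)                  # simulate packets taking different paths
assert reassemble(packets) == original
```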
The Arpanet Experiment Larry Roberts arrived at ARPA in 1966 with the charge to solve the computer networking problem. At an ARPA investigators meeting in Ann Arbor, Michigan, Roberts proposed a networking experiment that would become the Arpanet. He proposed that all of the ARPA time-sharing computers at various sites across the country be connected over dialup telephone lines. The time-sharing (or host) computers would serve double duty—both as resources and routers. Meeting participants met Roberts’s proposal with a great deal of skepticism. Why would people want to spend valuable computing resources to communicate between computers when people already had all the computing they needed at their site? At the time, sharing between
computing centers was a goal of ARPA and not necessarily of the scientific community itself. In addition, researchers would be reluctant to give up valuable computing power just so they could “share” with other researchers. However, a researcher at the meeting, Wes Clark, struck upon a solution that would allow the experiment to be carried out. Clark recommended keeping the host computers out of the networking duties. Instead, he suggested using a subnetwork of intermediary computers to handle packet switching and data trafficking. This subnetwork would reduce the computing demand on the host computers, and the use of a subnetwork of specialized computers would provide uniformity and control. This suggestion solved many problems, both technical and administrative, and would allow ARPA to control the subnetwork. The computers used at the subnetwork level were called “interface message processors” (IMPs). In addition to designing IMPs, researchers would have to develop protocols for how the IMPs would communicate with host computers and create the network. ARPA issued a request for proposals (RFP) in 1968, because the specifications for the network had become so detailed. These specifications included:
■ Transfer of digital bits from source to specified location should be reliable.
■ Transit time through the subnetwork should be one-half second or less.
■ The subnetwork had to operate autonomously.
■ The subnetwork had to function even when IMP nodes went down.
The ARPA RFP was issued to determine which company could build the Arpanet to these specifications. After much debate, the contract was awarded in 1969 to the Bolt Beranek and Newman company (BBN), which had assembled an amazing team of scientists to transform this vision into reality. The choice of BBN was a surprise to many people because BBN was considered to be a consulting firm, not a computing heavy hitter. However, its proposal was so detailed and exacting that it could begin work immediately upon awarding of the contract. BBN had only twelve months to do the work.
The team, led by Frank Heart, was dedicated to building the Arpanet on time and to specifications and had only nine months to deliver the first IMP. Despite hardware setbacks, the team delivered the first IMP to UCLA early. UCLA was also the site of the network management center, the “test track” for the Arpanet. The team was charged with testing the network’s limits and exposing bugs, flaws, and oddities. The initial Arpanet experiment consisted of four nodes, with an IMP at UCLA, Stanford Research Institute (SRI), University of Utah, and University of California at Santa Barbara. BBN also was responsible for two critical elements: the IMPs themselves (including IMP-to-IMP communications) and the specifications for the IMP-to-host communications. The specifications for the IMP-to-host communications were drafted by Bob Kahn, who became the intermediary between the Arpanet research community and BBN. Graduate students of the host institutions digested those specifications and developed the code that would serve as the interface between host and IMP. They formed the Network Working Group to hammer out the details of protocols, shared resources, and data transfer. They created file transfer protocols (which lay out the rules for how all computers handle the transfer of files) that became the backbone of the Arpanet and made it functional. This experiment was so successful that the Arpanet was expanded to include other research sites across the country until it grew to twenty-nine nodes. In 1972 the Arpanet made its public debut at the International Conference on Computer Communication. It was an unequivocal hit, and the computer networking concept was validated in the public arena.
The Arpanet Evolves As members of a user community, the researchers involved in the Arpanet were always adding, creating, experimenting. The Arpanet became a bargaining tool in the recruiting of computer science faculty and an impromptu communication tool for “network mail” or electronic mail (e-mail). In 1973 an
ARPA study showed that 75 percent of all traffic on the Arpanet was e-mail. Researchers eventually wrote dedicated software to handle this “side use” of the Arpanet. In 1972 Bob Kahn left BBN and went to work at ARPA with Larry Roberts. Kahn was now in charge of the network that he had helped create. He formed a fruitful collaboration with Vint Cerf of Stanford (who was a graduate student on the UCLA Arpanet project) that led to the next evolution of networking. Together they tackled the problem of packet switching in internetworking, which would eventually become the Internet. In 1975 Vint Cerf went to DARPA to take charge of all of the ARPA Internet programs, and the Arpanet itself was transferred to the Defense Communications Agency, a transfer that upset some people in the non-bureaucratic computing research community. The Internet began with the merging of the Arpanet, SATNET (Atlantic Packet Satellite Network), and a packet radio network—all based on the transmission-control protocol/Internet protocol (TCP/IP) standard that Cerf and Kahn created—and grew as more and more networks were connected. The Arpanet eventually burgeoned to 113 nodes before it adopted the new TCP/IP standard and was split into MILNET and Arpanet in 1983. In 1989 the Arpanet was officially “powered down,” and all of the original nodes were transferred to the Internet.
The Internet and Beyond The creation of the Arpanet—and then the Internet—was the work of many researchers. Only with difficulty can we imagine our modern society without the interconnectedness that we now share. The Arpanet was a testament to the ingenuity of the human mind and people's perhaps evolutionary desire to be connected to one another. The Arpanet not only brought us closer together but also brought us one step closer to J. C. R. Licklider's vision of human-computer interaction, first articulated more than four decades ago. Amy Kruse, Dylan Schmorrow, and J. Allen Sears See also Internet—Worldwide Diffusion
FURTHER READING Adam, J. (1996, November). Geek gods: How cybergeniuses Bob Kahn and Vint Cerf turned a Pentagon project into the Internet and connected the world. Washingtonian Magazine, 66. Bolt, Beranek, and Newman. (1981, April). A history of the ARPANET: The first decade (NTIS No. AD A115440). Retrieved March 23, 2004, from http://www.ntis.gov Evenson, L. (1997, March 16). Present at the creation of the Internet: Now that we're all linked up and sitting quietly, Vint Cerf, one of its architects, describes how the Internet came into being. San Francisco Chronicle (p. 3ff). Hafner, K., & Lyon, M. (1996). Where wizards stay up late: The origins of the Internet. New York: Simon & Schuster. Hughes, T. P. (1998). Rescuing Prometheus. New York: Pantheon Books. Norberg, A., & O'Neill, J. (1997). Transforming computer technology. Ann Arbor: Scholarly Publishing Office, University of Michigan Library. Salus, P. (1995). Casting the Net. Reading, MA: Addison-Wesley.
ARTIFICIAL INTELLIGENCE Most research in mainstream artificial intelligence (AI) is directed toward understanding how people (or even animals or societies) can solve problems effectively. These problems are much more general than mathematical or logical puzzles; AI researchers are interested in how artificial systems can perceive and reason about the world, plan and act to meet goals, communicate, learn, and apply knowledge such that they can behave intelligently. In the context of human-computer interaction (HCI), research in AI has focused on three general questions:
■ How can the process of designing and implementing interactive systems be improved?
■ How can an interactive system decide which problems need to be solved and how they should be solved?
■ How can an interactive system communicate most effectively with the user about the problems that need to be solved?
The first question deals with the development process in HCI, the others with user interaction,
specifically the issues of control and communication. These questions have been a central concern in HCI for the past thirty years and remain critical today. AI has been able to provide useful insights into how these questions can be answered. In sum, what AI brings to HCI development is the possibility of a more systematic exploration and evaluation of interface designs, based on automated reasoning about a given application domain, the characteristics of human problem solving, and general interaction principles. The AI approach can benefit end users because it encourages tailoring the behavior of an interactive system more closely to users’ needs.
The Concept of Search Almost all techniques for problem solving in AI are based on the fundamental concept of search. One way to understand search is by analogy to navigation on the World Wide Web. Imagine that my goal is to reach a specific webpage starting from my homepage, and that I have no access to automated facilities such as search engines. I proceed by clicking on the navigation links on my current page. For each new page that comes up, I decide whether I have reached my goal. If not, then I evaluate the new page, comparing it with the other pages that I have encountered, to see whether I am moving closer to my goal or farther away. Based on my evaluation, I may continue forward or go back to an earlier, more promising point to take a different path. An automated search process works in the same way. Pages correspond to states in a search space, or relevant information about the environment; navigation actions are operators, which transform one state into another; an evaluation function assesses information about the state to guide the selection of operators for further transformations. A large number of AI techniques have been developed to address specific classes of search problems, representing the problems in different ways. For example, planning systems search for sequences of interdependent operators to reach a set of goals; these systems can deal with complex tasks ranging from planning space missions to helping robots navigate over unfamiliar terrain. Expert systems, whether
acting as automated tax advisors, automobile repair advisors, or medical consultants, search opportunistically for combinations of if-then rules that derive plausible conclusions from input data and existing knowledge. Machine learning systems, including neural networks, incrementally refine an internal representation of their environment, in a search for improved performance on given tasks. Natural language understanding systems search for correct interpretations through a space of ambiguous word meanings, grammatical constructs, and pragmatic goals. These brief descriptions are only approximate, but they help us understand how a system can represent and deal with some of the problems that arise in interacting with users or interface developers in an intelligent way.
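To make the state/operator/evaluation-function vocabulary concrete, the following sketch implements a simple best-first search over an invented web-like link graph: pages are states, following a link is an operator, and a crude letter-overlap score stands in for the evaluation function. It is only an illustration of the general idea, not code from any system discussed in this article.

```python
import heapq

# Invented link structure: each page (state) lists the pages its links reach (operators).
LINKS = {
    "home": ["products", "about", "news"],
    "products": ["widgets", "gadgets"],
    "about": ["contact"],
    "news": ["archive"],
    "widgets": ["widget-manual"],
    "gadgets": [], "contact": [], "archive": [], "widget-manual": [],
}

def best_first_search(start, is_goal, evaluate):
    """Expand the most promising state first, guided by the evaluation function."""
    frontier = [(-evaluate(start), start, [start])]   # max-heap via negated scores
    visited = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path                               # the sequence of states visited
        for neighbor in LINKS.get(state, []):
            if neighbor not in visited:
                visited.add(neighbor)
                heapq.heappush(frontier, (-evaluate(neighbor), neighbor, path + [neighbor]))
    return None

# Toy evaluation: pages whose names share letters with the goal look more promising.
goal = "widget-manual"
score = lambda page: len(set(page) & set(goal))
print(best_first_search("home", lambda p: p == goal, score))
# ['home', 'products', 'widgets', 'widget-manual']
```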
ARTIFICIAL INTELLIGENCE (AI) The subfield of computer science that is concerned with symbolic reasoning and problem solving.
AI and the Development of User Interfaces Considerable attention in AI has focused on the process of developing user interfaces. Experienced developers generally have a working knowledge of software engineering practice, interface architectures, graphic design, and related areas, plus information about the purpose for which the interface is to be used. If this knowledge can be captured in computational form, an intelligent development environment can aid developers by testing and validating design specifications, by producing alternative designs for a given specification, by generating potential improvements to a design, and by automating some of the more common implementation tasks. The motivation for a search-based approach can be seen most clearly in the problem of layout design. If an experienced designer were asked to organize ten loosely related items of information (represented in text, pictures, and buttons) on a company's top-level webpage, the final product might be the result of comparing several alternatives, perhaps a few
dozen at most. The number of all possible layouts of ten items, however, runs into the millions and higher (the orderings alone number 10! = 3,628,800, before position and size are even considered); this is far more than a designer can humanly consider. Most of these layouts will be unacceptable (for example, all possible orderings of items diagonally across the page), but there may be many effective designs that are missed simply because the number of possibilities is so enormous. A system that can search through different spatial relationships and evaluate the results, even without perfect accuracy, can give designers a more comprehensive view of the problem and its solutions. Automated layout design is just one aspect of interface design. Research in the general area of model-based interface design aims to support developers in all stages of the design process. In MOBI-D and Mastermind, which are user interface generation tools, developers build and evaluate abstract models of computer applications (such as word processing applications, spreadsheet applications, or photographic design applications), interaction tasks and actions, presentations, even users and workplaces. The goal is to give developers decision-making tools that allow them to apply their design skills but do not overly restrict their choices. These tools test constraints, evaluate design implications, present suggestions, track changes, and so forth, facilitating the eventual construction of the actual interface. For example, if a developer specifies that the user must enter a number at some point, MOBI-D can present different interface alternatives, such as a slider (the software equivalent of a linear volume control) or a text box that the user can type into directly, for the developer to choose from. In Mastermind, the developer can switch between a number of visual formats, avoiding ones that are cumbersome. Current research in this area is helping to improve webpage design and build interfaces that meet the constraints of the next generation of interactive devices, including cell phones and handheld computers. AI research is also helping software companies with product evaluation. Partially automated testing of noninteractive software is now commonplace, but conventional techniques are not well suited to testing user interfaces. Software companies usually rely on limited user studies in the laboratory, plus a large population of alpha and beta testers (people who test
the product in real-world situations). Thanks to AI research, however, it is becoming possible to build artificial software agents that can stand in for real users. It is common to think about user interaction with the software in problem-solving terms, as goal-oriented behavior. For example, if my goal is to send an e-mail message, I divide this into subgoals: entering the recipient information and subject line information, writing a short paragraph of text, and attaching a picture. My paragraph subgoal breaks down further into writing individual sentences, with the decomposition continuing to the point of mouse movements and key presses. In AI terms, these decompositions can be represented by plans to be constructed and executed automatically. The PATHS system, a system designed to help automate the testing of graphical user interfaces, lets developers specify a beginning state, an end state, and a set of goals to be accomplished using the interface. PATHS then creates a comprehensive set of plans to achieve the goals. For example, given the goal of modifying a document, the planner will generate sequences of actions for opening the document, adding and deleting text, and saving the results, accounting for all the different ways that each action can be carried out. If a given sequence is found not to be supported when it should be, PATHS will record this as an error in the application. Similar work is carried out in the related field of cognitive modeling, which shares many concepts with AI. Cognitive modelers build computational models of human cognitive processing—perception, attention, memory, motor action, and so forth—in order to gain insight into human behavior. To make valid comparisons between a model’s performance and human performance, a common experimental ground is needed. User interfaces provide that common ground. Cognitive models comparable to planning systems have been developed for evaluating user interfaces, and they have the added benefit of giving developers information about the human side of interaction as well as the application side.
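The planning idea behind tools such as PATHS can be suggested with a small sketch. The action model, facts, and goal below are invented for illustration (they are not PATHS's actual representation); the point is that a breadth-first planner over preconditions and effects enumerates action sequences, each of which can serve as a test case to run against the interface.

```python
from collections import deque

# Invented action model for a toy document editor: each action lists the facts it
# requires (preconditions) and the facts it adds or removes (effects).
ACTIONS = {
    "open_document":  {"pre": set(),                "add": {"open"},   "remove": set()},
    "type_text":      {"pre": {"open"},             "add": {"edited"}, "remove": set()},
    "delete_text":    {"pre": {"open"},             "add": {"edited"}, "remove": set()},
    "save_document":  {"pre": {"open", "edited"},   "add": {"saved"},  "remove": {"edited"}},
    "close_document": {"pre": {"open"},             "add": set(),      "remove": {"open"}},
}

def plans(start, goal, max_len=3):
    """Breadth-first enumeration of action sequences that reach the goal facts."""
    found = []
    queue = deque([(frozenset(start), [])])
    while queue:
        state, plan = queue.popleft()
        if goal <= state:
            found.append(plan)
            continue
        if len(plan) == max_len:
            continue
        for name, action in ACTIONS.items():
            if action["pre"] <= state:
                new_state = frozenset((state - action["remove"]) | action["add"])
                queue.append((new_state, plan + [name]))
    return found

# Every generated sequence is a candidate GUI test: drive the interface through it
# and flag an error if a step that should be supported turns out not to be.
for p in plans(start=set(), goal={"saved"}):
    print(" -> ".join(p))
```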
Interaction The metaphor of tool use has come to dominate the way we understand human interaction with computers, especially with regard to graphical user interfaces. Just as a carpenter keeps specialized sets of tools
for framing a house or building fine furniture, an experienced computer user has a variety of software tools for word processing, analyzing data with spreadsheets, or creating graphics and illustrations. User interfaces are often thought of as tool-using environments, which has important implications for the involvement of AI in user interaction. Let us extend the carpenter analogy. If I am intent on hammering a nail, I am not constantly reconsidering and recalibrating the relationship between the hammer and my hand, or the head of the hammer and the nail. Instead, after an initial adjustment, the hammer effectively becomes an extension of my arm, so that I can use it without thinking about it. Similarly, for a tool-based software environment, selecting individual tools should be intuitive, and applying a tool should quickly become second nature.

A Personal Story—Putting Humans First in Systems Design
The field of augmented cognition is pushing the integration of human systems and information technology to the forefront, while also attempting to maximize human potential. My current (and anticipated future) experience with using an ever-increasing number of technologies during my everyday life compels me (propels me!) to help design a new class of systems for the user to interact with. Practitioners of traditional human-systems integration research and design have steadfastly urged that the human must be considered when designing systems for human use. An emerging concept is that not only are human beings the weak link in current human-systems relationships, but also that the number of systems that a single human interacts with is growing so rapidly that the human is no longer capable of using these technologies in truly meaningful ways. This specifically motivates me to develop augmented cognition technologies at the Defense Advanced Research Projects Agency (where I am a program manager). I want to decrease the number of system interfaces that we need to interact with, and increase the number of advanced systems that individuals are capable of using simultaneously. On any given day, I typically wear (carry) five computers: my wristwatch, cell phone, two-way pager with e-mailing capability, a personal digital assistant, and a laptop. I find these systems intrusive and the associated demands on my time to be unacceptable. My home is inundated with appliances that are evolving into computer devices—these systems have advanced “features” that require significant attention in order to use them optimally. Even with the world's greatest human factors interface, I would never have time to interact with all of these systems that I use on a daily basis. Having said all of this, I need the systems that support me to exhibit some intelligence; I need them to be able to perceive and understand what is going on around and inside of me. I do not have time to overtly direct them. Ideally they will support me by “sensing” my limitations (and my capabilities) and determining how best to communicate with me if absolutely necessary. Augmented cognition technology will imbue into these systems the ability to interact with me. Indeed, augmented cognition is about maximizing human potential. If we humans are the “weak link,” it is because our current advanced computer systems are actually limiting our performance. In the future, we must have transparent technologies addressing our needs, or we will be overwhelmed by meaningless interactions. Dylan Schmorrow
The principles of direct manipulation provide a foundation for tool-based environments. Direct-manipulation interfaces, as defined by Ben Shneiderman, the founding director of the Human-Computer Interaction Laboratory at the University of Maryland, provide a visual representation of objects, allow rapid operations with visible feedback, and rely mainly on physical actions (such as selecting and dragging or pressing buttons) to initiate actions. Modern graphical user interfaces can trace much of their power to direct-manipulation principles. Nevertheless, as powerful as direct-manipulation interfaces can be, they are not appropriate in all situations. For example, sometimes in using a piece of software I know what needs to be done—I can even describe in words what I would like to do—but I do not know exactly how to accomplish my task given the tools at hand.
These potential limitations, among others, have led AI researchers to consider alternatives to a strict tool-based approach. First, it is possible to build intelligent environments that take a more active role in assisting the user—for example, by automatically adapting their behavior to the user’s goals. Second, intelligent behavior can be encapsulated within a software agent that can take responsibility for different tasks in the environment, reducing the burden on the user. Third, these agents and environments can communicate with the user, rather than passively being acted upon by the user, as tools are. Intelligent Environments Some intelligent environments work by integrating AI search into an otherwise conventional interface. One recently developed technique, human-guided simple search, is intended to solve computationally intensive problems such as the traveling salesman problem. This problem involves a salesman who must visit a number of cities while keeping the distance traveled as small as possible. Finding the optimal route for even a small number of locations is beyond what can be done with pencil and paper; for ten locations there are over three million possible routes. Large problems are challenging even for the most sophisticated computer programs. The user works with the human-guided search (HUGSS) tool kit through a graphical display of routes that the system has found. By pressing a button, the user activates a search process that computes the best route it can find within a fixed period of time. The user examines the solution and modifies it by selecting parts of the route that need further refinement or identifying those parts that already have a reasonable solution. The user brings human perception and reasoning to bear on the problem by constraining the space that the search process considers (for example, by temporarily focusing the search on routes between five specific locations, rather than the entire set). Problem-solving responsibility is explicitly shared between the user and the system, with the amount and timing of the system’s effort always under the user’s control. HUGSS works faster than the best fully automated systems currently in use, and it produces results of equal quality.
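The division of labor in human-guided search can be sketched as follows. This is not the HUGSS code; the city coordinates, the time budget, and the simple swap move are all invented for illustration. The machine repeatedly tries route improvements, but only among the locations the user has placed in a focus set, leaving the rest of the tour fixed.

```python
import math, random, time

CITIES = {  # invented coordinates
    "A": (0, 0), "B": (2, 6), "C": (5, 1), "D": (6, 5),
    "E": (8, 0), "F": (9, 7), "G": (3, 9), "H": (7, 9),
}

def length(tour):
    """Total length of the closed tour."""
    return sum(math.dist(CITIES[a], CITIES[b])
               for a, b in zip(tour, tour[1:] + tour[:1]))

def guided_improve(tour, focus, budget_seconds=0.5):
    """Swap pairs of focus cities for a fixed time budget; keep any improvement."""
    tour = list(tour)
    positions = [i for i, city in enumerate(tour) if city in focus]
    deadline = time.time() + budget_seconds
    while time.time() < deadline:
        i, j = random.sample(positions, 2)
        candidate = list(tour)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        if length(candidate) < length(tour):
            tour = candidate
    return tour

random.seed(0)
tour = list(CITIES)                        # arbitrary starting tour
print(round(length(tour), 1))
# The "user" focuses the search on four cities; everything else stays fixed.
tour = guided_improve(tour, focus={"B", "C", "G", "H"})
print(round(length(tour), 1))              # never worse, usually shorter
```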
Other approaches to building intelligent environments, such as programming by example (PBE), involve more significant changes to user interaction. PBE systems watch the user perform a procedure a number of times and then automatically generalize from these examples to create a fully functional program that can execute the repetitive actions so the user does not have to. The SMARTedit system is an example of a machine-learning approach to PBE, in the context of a text-editing application. Suppose that the user moves the cursor to the beginning of the word apple, erases the lowercase a, and types an uppercase A. There are several ways that those actions could be interpreted. Perhaps, for example, the user wanted to move the cursor forward n characters and replace the arbitrary character at that location with A, or perhaps the user wanted to move to the next occurrence of the letter a and capitalize it, or to correct the capitalization of the first word in a sentence, or some other possibility. Each of these interpretations is a different hypothesis maintained by SMARTedit about the user's intentions. As the user takes further actions, repeating similar sequences on different text, ambiguity is reduced. Some hypotheses become more plausible while others are pruned away because they predict actions inconsistent with the user's behavior. At any point, the user can direct SMARTedit to take over the editing process and watch the system apply its most highly ranked hypothesis. If SMARTedit carries out a sequence incorrectly, the user can interrupt and correct the mistake, with the system learning from the feedback. Adaptive user interfaces are another type of intelligent environment. Their development is motivated by the observation that while the ideal software system is tailored to an individual user, for economic reasons a single system must be designed and released to thousands or even millions of users, who differ widely from one another in expertise, interests, needs, and so forth. The solution is a system that can adapt to its users when in use. A simple example is adaptive menus. A system can record how often the user selects different menu options, and modify the menu structure so that more frequently chosen options can be reached more efficiently. This basic idea
also works in more sophisticated adaptive systems, many of which compile detailed models of users and their particular tasks and adapt accordingly. Adaptive systems have become especially relevant in efforts to personalize the World Wide Web as well as in research on intelligent tutoring systems and other applications of AI to education.
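A minimal sketch of the adaptive-menu idea described above (the menu items and the policy of floating the three most frequent options to the top are assumptions made here for illustration):

```python
from collections import Counter

class AdaptiveMenu:
    """Reorder menu options so frequently chosen ones surface first (a toy sketch)."""
    def __init__(self, options, promote_top=3):
        self.options = list(options)
        self.counts = Counter()
        self.promote_top = promote_top    # how many frequent items to float upward

    def select(self, option):
        self.counts[option] += 1

    def display_order(self):
        frequent = [o for o, _ in self.counts.most_common(self.promote_top)]
        rest = [o for o in self.options if o not in frequent]
        return frequent + rest            # frequent items first, original order after

menu = AdaptiveMenu(["New", "Open", "Save", "Print", "Export", "Close"])
for choice in ["Export", "Export", "Print", "Export", "Print", "Save"]:
    menu.select(choice)
print(menu.display_order())
# ['Export', 'Print', 'Save', 'New', 'Open', 'Close']
```

One tradeoff this sketch glosses over is predictability: options that move around can cost users more than the saved pointing time, which is why more sophisticated adaptive systems model the user and the task rather than raw frequency alone.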
Intelligent Agents The engineer Michael Huhns and the computer scientist Munindar Singh define intelligent agents as “active, persistent (software) components that perceive, reason, act, and communicate” (Huhns and Singh 1997, 1). For our purposes, the most important characteristic of an agent is its autonomy—its ability to carry out activities without the constant, direct supervision of a human being. Agents in use at present include animated characters or “believable” agents, autonomous agents such as softbots (software agents that perform tasks on the Internet) and physical robots, and mobile agents whose processing is not limited to a single computer platform. Agents are also used in multi-agent systems, which may involve mixed teams of humans and agents. Most relevant to HCI are interface agents, which act as intelligent assistants within a user interface, sometimes carrying out tasks on their own but also able to take instructions and guidance from the user. Letizia is an interface agent that assists users in browsing the World Wide Web. Letizia operates in conjunction with a standard Web browser, maintaining two open windows for its own use. As the user navigates through the Web, Letizia records the information on each page that the user visits and performs an independent search of nearby pages that the user may not have seen. Letizia’s evaluation function compares the information on the pages that it visits with the information that the user has seen up to the current point. In this way Letizia can make suggestions about what the user might be interested in seeing next. As Letizia visits pages, it displays the most promising ones for a short time in one window and the overall winner it has encountered in the other window. The user can watch what Letizia is doing and take control at will.
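Letizia's actual evaluation function is not reproduced here, but the flavor of profile-based recommendation can be sketched with a simple word-overlap score between pages the user has visited and candidate pages found by look-ahead browsing (the page texts below are invented):

```python
from collections import Counter

def words(text):
    return [w.lower().strip(".,") for w in text.split()]

def build_profile(visited_pages):
    """Accumulate a term-frequency profile of everything the user has looked at."""
    profile = Counter()
    for text in visited_pages:
        profile.update(words(text))
    return profile

def score(page_text, profile):
    """Sum the profile weights of the candidate page's distinct words."""
    return sum(profile[w] for w in set(words(page_text)))

visited = [
    "review of digital cameras and lenses",
    "tips for low light camera settings",
]
candidates = [
    "digital camera lens buying guide",
    "celebrity gossip roundup",
    "sports scores and schedules",
]
profile = build_profile(visited)
ranked = sorted(candidates, key=lambda p: score(p, profile), reverse=True)
print(ranked[0])   # the page most similar to what the user has been reading
```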
Information retrieval is just one area in which agents have become popular. Agents have also appeared in help systems, planning and scheduling aids, scripting systems, intelligent tutoring systems, collaborative filtering applications, matchmaking applications, and electronic auctions. Work on agents is one of the fastest-growing areas of AI. An important topic within research on agents is how to make agents interact most effectively with users. Who should take the initiative—the user or the agent? And when? Should one ever interrupt the other? These are questions of mixed-initiative interaction. Some work on these questions is carried out in the area of rational decision making, wherein rationality is interpreted in an economic sense. If an agent has knowledge of the user’s preferences and can reason about the user’s goals, then it can, for example, determine that the value of the information it can contribute at some point will offset the cost of the user having to deal with an interruption. A different direction is taken by projects that are influenced by the ways that people interact with one another, especially in dialogue. TRIPS (The Rochester Interactive Planning System) is a mixed-initiative planning and scheduling assistant that collaborates with a human user to solve problems in crisis situations, such as planning and managing an evacuation. The COLLAGEN (from COLLaborative AGENt) system is a collaboration system that can be incorporated into agents to give them sophisticated collaboration capabilities across a range of application domains. TRIPS and COLLAGEN agents can interact with users via everyday natural language as well as through multimedia presentations, which leads to the topic of communication. Communication Some agents communicate by conventional means in a graphical user interface, for example by raising dialog windows and accepting typed input and button presses for responses. A common and reasonable expectation, however, is that if a system is intelligent, we should be able to talk with it as we would with other people, using natural language. (Natural language refers to the languages that people commonly use, such as English or French, in contrast to programming languages.) Unfortunately,
even a brief treatment of natural-language understanding and generation, not to mention voice recognition and speech output, is beyond the scope of this article. An example, however, may give some idea of the issues involved. Consider three remarks from the user's side of a dialogue with a natural-language system (the bracketed text is not spoken by the user):

User (1): Show me document.txt.
User (2): What's the last modification date [on the file document.txt]?
User (3): Okay, print it [i.e., document.txt].

To respond correctly, the system must be able to reason that modification dates are associated with files and that files rather than dates are usually printed (“it” could grammatically refer to either). Reading this dialogue, English-speaking humans make these inferences automatically, without effort or even awareness. It is only recently that computer systems have been able to match even a fraction of our abilities. The QuickSet communication system combines natural language and other methods of interaction for use in military scenarios. Shown a map on a tablet PC, the user can say, “Jeep 23, follow this evacuation route,” while drawing a path on the display. The system responds with the requested action. This interaction is striking for its efficiency: the user has two simultaneous modes of input, voice and pen-aided gesture, and the ambiguities in one channel (in this example, the interpretation of the phrase “this route”) are compensated for by information in the other channel (the drawn path). In general, voice and natural language can support a more engaging, natural style of interaction with the interface than approaches that use a single vector of communication. Embodied conversational agents take work in natural language a step further. When people speak with one another, communication is not limited to the words that are spoken. Gestures, expressions, and other factors can modify or even contradict the literal meaning of spoken words. Embodied conversational agents attempt to recognize and produce these broader cues in communication. REA, a simulated real estate agent research prototype developed at the Massachusetts Institute of Technology, is represented by a full body figure on a large-scale display. REA shows users around a house, making appropriate use of eye gaze, body posture, hand gestures, and facial expressions to enhance its spoken conversation. Users can communicate via speech or gesture, even by simply looking at particular objects, nonverbal behavior that is sensed by cameras. Systems like REA aim to make the computer side of face-to-face human-computer communication as rich and nuanced as the human side.
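A toy rendering of the reasoning required by the document.txt dialogue above: the entities, the type labels, and the rule that "print" prefers files are all invented stand-ins for the kind of knowledge a real natural-language system would encode.

```python
# Toy resolution of "it" in "Okay, print it": candidate referents are the entities
# mentioned so far; a simple selectional constraint says printing prefers files.
ENTITIES = [
    {"name": "document.txt", "type": "file"},
    {"name": "last modification date", "type": "date"},
]

PRINTABLE_TYPES = {"file"}   # invented world knowledge: files are usually printed

def resolve_pronoun(verb, entities):
    """Pick the most recent entity whose type fits the verb's constraint."""
    allowed = PRINTABLE_TYPES if verb == "print" else {e["type"] for e in entities}
    for entity in reversed(entities):          # prefer the most recently mentioned
        if entity["type"] in allowed:
            return entity["name"]
    return None

print(resolve_pronoun("print", ENTITIES))      # document.txt, not the date
```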
Future Directions
This article has introduced the reader to AI approaches to HCI rather than giving a taxonomy of AI systems; many of the systems touched upon are much broader in scope than can be conveyed through a category assignment and a few sentences. Developments that do not fit neatly within the categories discussed are listed below.

Smart Rooms and Intelligent Classrooms
Much of what makes a software environment intelligent can be generalized to the physical domain. Smart rooms and intelligent classrooms rely on the same kind of technology as an embodied conversational agent; they register users' gestures and spoken commands and adjust thermostats, change lighting, run presentations and the like, accordingly.

Games and Virtual Environments
Intelligent agents have begun to enrich games and virtual environments, acting as teammates or opponents. Extending this line of research, the Mimesis system imposes a nonscripted, dynamic narrative structure onto a virtual gaming environment, so that external goals (for example, education on a historical period) can be met without compromising the user's direct control over the environment.

Human-Robot Interaction
Robots are appearing outside the laboratory, in our workplaces and homes. Human-robot interaction examines issues of interaction with physical agents in real-world environments, even in social situations. Robots can be used to explore otherwise inaccessible environments and in search-and-rescue missions.

It should be clear from this discussion that the most interesting problems in HCI are no longer found in software technology, at the level of the visible components of the interface. Effective AI
approaches to HCI focus on issues at deeper levels, probing the structure of problems that need to be solved, the capabilities and requirements of users, and new ways of integrating human reasoning with automated processing. Robert St. Amant FURTHER READING Anderson, D., Anderson, E., Lesh, N., Marks, J., Mirtich, B., Ratajczak, D., et al. (2000). Human-guided simple search. In Proceedings of the National Conference on Artificial Intelligence (AAAI) (pp. 209–216). Cambridge, MA: MIT Press. Cassell, J. (Ed.). (2000). Embodied conversational agents. Cambridge, MA: MIT Press. Cassell, J., Bickmore, T., Billinghurst, M., Campbell, L., Chang, K., Vilhjálmsson, H., et al. (1999). Embodiment in conversational interfaces: REA. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI) (pp. 520–527). New York: ACM Press. Cypher, A., (Ed.). (1993). Watch what I do: Programming by demonstration. Cambridge, MA: MIT Press. Huhns, M. N., & Singh, M. P. (Eds.). (1997). Readings in agents. San Francisco: Morgan Kaufmann. Kobsa, A. (Ed.). (2001). Ten year anniversary issue. User Modeling and User-Adapted Interaction, 11(1–2). Lester, J. (Ed.). (1999). Special issue on intelligent user interfaces. AI Magazine, 22(4). Lieberman, H. (1995). Letizia: An agent that assists Web browsing. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (pp. 924–929). San Francisco: Morgan Kaufmann. Lieberman, H. (Ed.). (2001). Your wish is my command. San Francisco: Morgan Kaufmann. Lok, S., & Feiner, S. (2001). A survey of automated layout techniques for information presentations. In Proceedings of the First International Symposium on Smart Graphics (pp. 61–68). New York: ACM Press. Maybury, M. T., & Wahlster, W. (Eds.). (1998). Readings in intelligent user interfaces. San Francisco: Morgan Kaufmann. Memon, A. M., Pollack, M. E., Soffa, M. L. (2001). Hierarchical GUI test case generation using automated planning. IEEE Transactions on Software Engineering, 27(2), 144–155. Newell, A., & Simon, H. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall. Oviatt, S. L., Cohen, P. R., Wu, L., Vergo, J., Duncan, L., Suhm, B., et al. (2002). Designing the user interface for multimodal speech and gesture applications: State-of-the-art systems and research directions. In J. Carroll (Ed.), Human-computer interaction in the new millennium (pp. 419–456). Reading, MA: Addison-Wesley. Puerta, A.R. (1997). A model-based interface development environment. IEEE Software, 14(4), 41–47. Ritter, F. E., & Young, R. M. (Eds.). (2001). Special issue on cognitive modeling for human-computer interaction. International Journal of Human-Computer Studies, 55(1). Russell, S., & Norvig, P. (1995). Artificial intelligence: A modern approach. Englewood Cliffs, NJ: Prentice-Hall.
Shneiderman, B. (1998). Designing the user interface: Strategies for effective human-computer interaction. Boston: Addison-Wesley. Shneiderman, B., & Maes, P. (1997). Debate: Direct manipulation vs. interface agents. Interactions, 4(6), 42–61. Sullivan, J. W., & Tyler, S. W. (Eds.). (1991). Intelligent user interfaces. New York: ACM Press. St. Amant, R., & Healey, C. G. (2001). Usability guidelines for interactive search in direct manipulation systems. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (pp. 1179–1184). San Francisco: Morgan Kaufman. Szekely, P., Sukaviriya, P., Castells, P., Muthukumarasamy, J., & Salcher, E. (1996). Declarative interface models for user interface construction tools: The Mastermind approach. In L. Bass & C. Unger (Eds.), Engineering for human-computer interaction (pp. 120–150). London and New York: Chapman & Hall. Wolfman, S. A., Lau, T. A., Domingos, P., & Weld, D. S. (2001). Mixed initiative interfaces for learning tasks: SMARTedit talks back. In Proceedings of the International Conference on Intelligent User Interfaces (pp. 67–174). New York: ACM Press.
ASIAN SCRIPT INPUT The Asian languages that employ Chinese characters in their writing systems present difficult challenges for entering text into computers and word processors. Many Asian languages, such as Korean and Thai, have their own alphabets, and the Devanagari alphabet is used to write Sanskrit, Hindi, and some other languages of India. Designing keyboards and fonts for alphabets of languages—such as Hebrew, Greek, Russian, and Arabic—that do not employ the Roman alphabet used by English and other western European languages is relatively simple. The challenge with Chinese, simply put, is that a standard national database contains 6,763 symbols (called “characters” rather than “letters”), and a keyboard with so many keys would be completely unwieldy. As was the case with ancient Egyptian hieroglyphics and Mesopotamian cuneiform, Chinese writing began as pictographs that represented particular things. Evolving through time and modified for graceful drawing with an ink brush, these pictographs became the current system of characters representing concepts and sounds in a complex interplay of functions. A person fully literate in Chinese today uses 3,000 to 4,000 characters; newspapers have 6,000 to 7,000 available, but some dictionaries list as many as 50,000. In 1958 a standardized phonetic system based
on the Roman alphabet and called “pinyin” was introduced, but it has not replaced the traditional system of writing. Japanese employs two phonetic alphabets called kana, as well as Chinese characters called kanji. In 1983 the Japan Industrial Standard listed 2,963 commonly used characters plus another 3,384 that appear only rarely. Korean also makes some use of Chinese characters, but the chief form of writing is with an alphabet historically based on Chinese but phonetically representing the sounds of spoken Korean. Because Japan has been a leader in developing computer technology for decades, its language is the best example. Around 1915 Japan began experimenting with typewriters, but they were cumbersome and rare. Typewriters could be made simply for the kana, a centuries-old phonetic system for writing Japanese syllables, either in the traditional hiragana form or in the equivalent katakana form used for writing foreign words or telegrams. Occasionally reformers have suggested that Chinese characters should be abandoned in favor of the kana or the Roman alphabet, but this reform has not happened. Thus, newspapers employed vast collections of Chinese type, and careful handwriting was used in business, schools, and forms of printing such as photocopying that could duplicate handwriting. During the 1980s word processors were introduced that were capable of producing the traditional mixture of kanji, hiragana, and katakana, along with occasional words in Roman script and other Western symbols. The Macintosh, which was the first commercially successful computer with bitmapped (relating to a digital image for which an array of binary data specifies the value of each pixel) screen and printing, became popular in Japan because it could handle the language, but all Windows-based computers can now as well, as, of course, can indigenous Japanese word processors. Kana computer keyboards exist in Japan, but the most common input method for Chinese characters in both China and Japan requires the user to enter text into a Western keyboard, romanizing the words. Suppose that someone is using Microsoft Word in Japanese and wants to type the word meaning “comment.” The writer would press the Western keys that phonetically spell the Japanese word kannsou. If the word processor is set to do so, it will automatically dis-
play the equivalent hiragana characters instead of Western letters on the screen.
Many Meanings The writer probably does not want the hiragana but rather the kanji, but many Japanese words can be romanized kannsou. Asian languages have many homonyms (words that sound similar but have different meanings), and Chinese characters must represent the one intended meaning. The standard way in which word processors handle this awkward fact, in Chinese as well as Japanese, is to open a selection window containing the alternatives. For example, let’s say the user typed “kannsou,” then hit the spacebar (which is not otherwise used in ordinary Japanese) to open the selection window with the first choice highlighted. The user can select the second choice, which is the correct Chinese characters for the Japanese word meaning “a comment” (one’s thoughts and impressions about something). If the user wanted kannsou to mean not “comment,” but rather “dry,” he or she would select the third choice. The fourth through ninth choices mean “welcome” and “farewell,”“a musical interlude,” “completion, as of a race,”“meditate,”“hay”(dry grass), and “telling people’s fortunes by examining their faces.” Good Asian-language word processing software presents the choices in descending order of likelihood, and if a person selects a particular choice repeatedly it will appear on the top of the list. The word processor can be set so that the first kanji choice, instead of the hiragana, appears in the text being written. Pressing the spacebar once would transform it to the second choice, and pressing again could select the next choice and open the selection window. The choices may include a katakana choice as well. Many choices exist, and some Chinese word processors often fill the selection window four times over. Thus, research on the frequency of usage of various Chinese words is important in establishing their most efficient ordering in the selection window. Human-computer interaction (HCI) research has explored other ways of making the word selection, including eye tracking to select the alternative that the user’s eyes focus upon. The chief substitutes for keyboard text input are speech recognition and handwriting recognition. Speech recognition systems developed for English are
unsuitable for Asian languages. Notably, spoken Chinese is a tonal language in which each syllable has a characteristic pitch pattern, an important feature absent from English. Experts have done a good deal of research on computer recognition of Japanese and Chinese, but speech input introduces errors while requiring the same selection among choices, as does keyboard input. Handwriting recognition avoids the problem of alternative ways of writing homonyms, but despite much research it remains excessively error prone. Three approaches are being tried with Chinese: recognizing (1) the whole word, (2) the individual characters, or (3) parts of characters, called “radicals,” that may appear in many characters. All three approaches have high error rates because many characters are graphically complex, and people vary considerably in how they draw them. Thus, keyboard input remains by far the most popular method.
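The selection-window mechanism described above can be summarized in a small sketch. The dictionary entries and usage counts here are illustrative only; a real input method draws on far larger dictionaries and far more sophisticated ranking.

```python
# Toy candidate selection for phonetic input: one keyed reading maps to many written
# forms, presented in descending order of how often this user has chosen them.
DICTIONARY = {
    "kannsou": ["感想", "乾燥", "間奏", "完走", "乾草", "歓送"],  # homophone candidates
}

usage_counts = {"感想": 9, "乾燥": 4}   # e.g., learned from this user's past choices

def candidates(reading):
    """Return the selection-window list, most likely choice first."""
    forms = DICTIONARY.get(reading, [])
    return sorted(forms, key=lambda f: usage_counts.get(f, 0), reverse=True)

window = candidates("kannsou")
print(window[0])        # the default offered when the user presses the spacebar
usage_counts[window[1]] = usage_counts.get(window[1], 0) + 1   # user picks choice 2 instead
```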
Modern word processors may change the balance of forces working for or against change in the traditional Asian scripts. They may degrade people’s Chinese character handwriting skills, but they may simultaneously help people employ more obscure characters. In the psychology of memory people have the ability to recognize things they would not have spontaneously produced. Chinese-language and Japanese-language word processors often include character palettes (comparable ranges, qualities, or uses of available elements), allowing users to select even obscure characters with a single click of the mouse, thereby perhaps encouraging them to do so. Computer and information scientists and engineers are rapidly producing search engines and a whole host of other tools that are giving the ancient Asian scripts a new life on the Internet and the World Wide Web. William Sims Bainbridge and Erika Bainbridge
East and West China, Japan, and Korea have from time to time considered abandoning the traditional Chinese characters, with Korea coming the closest to actually doing so. A phonetic writing system is easier to learn, thus giving students more time to study other things. The traditional Chinese system supported an entrenched intellectual elite, who feared that a simple alphabet might democratize writing. On the other hand, one advantage of the traditional system is that a vast region of the world speaking many dialects and languages could be united by a single writing system, and even today a Chinese person can communicate to some extent with a Japanese person—even though neither knows the other’s spoken language—by drawing the characters. Fluent bilingual readers of an Asian language and a Western language sometimes say they can read Chinese characters more quickly because the characters directly represent concepts, whereas Western letters represent sounds and thus only indirectly relate to concepts. Some writers have conjectured that dyslexia should be rare in Chinese, if difficulties in learning to read are an inability to connect letters with sounds. However, dyslexia seems to exist in every language, although its causes and characteristics might be somewhat different in Asian languages than in English.
See also Handwriting Recognition and Retrieval; Keyboard FURTHER READING Apple Computer Company. (1993). Macintosh Japanese input method guide. Cupertino, CA: Apple. Asher, R. E., & Simpson, J. M. Y. (Eds.). (1994). The encyclopedia of language and linguistics. Oxford, UK: Pergamon. Fujii, H., & Croft, W. B. (1993). A comparison of indexing techniques for Japanese text retrieval. In Proceedings of the 16th annual ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 237–246). New York: ACM Press. Ho, F.-C. (2002). An analysis of reading errors in Chinese language. In L. Jeffrey (Comp.), AARE 2002 conference papers (n.p.). Melbourne, Australia: Australian Association for Research in Education. Li,Y., Ding, X., & Tan, C. L. (2002). Combining character-based bigrams with word-based bigrams in contextual postprocessing for Chinese script recognition, ACM Transactions on Asian Language Information Processing, 1(4), 297–309. Shi, D., Damper, R. I., & Gunn, S. R. (2003). Offline handwritten Chinese character recognition by radical decomposition. ACM Transactions on Asian Language Information Processing, 2(1), 27–48. Wang, J. (2003). Human-computer interaction research and practice in China. ACM Interactions, 10(2), 88–96. Wang, J., Zhai, S., & Su, H. (2001). Chinese input with keyboard and eyetracking. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 349–356). New York: ACM Press.
THE ATANASOFF-BERRY COMPUTER The Atanasoff-Berry Computer (ABC) was the first electronic digital computer and the inspiration for the better-publicized 1946 ENIAC. It was conceived in late 1938, prototyped in 1939 at Iowa State College (now Iowa State University) in Ames, Iowa, and made usable for production computing by 1941. John Atanasoff, a professor of mathematics and physics, collaborated with Clifford Berry, a graduate student, to develop the system.
Physical Description In contrast to the computers that followed in the 1940s, the ABC was compact, movable, and easily operated by a single user. The original system no longer exists except for a logic module and a memory drum, but a functioning replica was constructed in the late 1990s. The ABC weighed about 750 pounds. It had the weight and maneuverability of an upright piano and could roll on four heavy-duty casters. The total power it drew was less than a kilowatt, and the heat generated by its vacuum tubes was low enough to dissipate without requiring fan-forced air. The ABC used ordinary 117-volt line power. An electric motor synchronized to standard 60-hertz line voltage served as the system clock. The electromechanical parts of the ABC, like those of a modern computer, were for purposes other than calculation; the computing itself was completely electronic. The arithmetic modules were identical and could easily be interchanged, removed, and repaired.
Intended Applications and Production Use The ABC was intended to solve dense systems of up to thirty simultaneous linear equations with 15-decimal precision. Atanasoff targeted a workload
like that of current scientific computers: curve-fitting, circuit analysis, structural analysis, quantum physics, and problems in mechanics and astronomy. The desktop calculators of the era were not up to the equation-solving task, and Atanasoff identified their limits as a common bottleneck in scientific research. His conception of a high-speed solution made several unprecedented leaps: binary internal arithmetic (with automatic binary-decimal conversion), all-electronic operation using logic gates, dynamically refreshed memory separated from the arithmetic units, parallel operation of up to thirty simultaneous arithmetic units, and a synchronous system clock. The ABC achieved practical success at the curve-fitting application. Atanasoff collaborated with a statistician colleague at Iowa State, George Snedecor, who supplied a steady stream of small linear-system problems to the ABC. Snedecor's secretary was given the task of checking the results by desk calculation, which was simpler than solving the equations and could be performed manually.
Human Interface Compared to modern interfaces, the ABC interface resembled that of an industrial manufacturing machine. The user controlled the system with throw switches and card readers (decimal for input and binary for intermediate results). The user was also responsible for moving a jumper from one pair of contacts to another to indicate a particular variable in the system of equations. The ABC communicated to the user through a few incandescent lamp indicators, an ohmmeter to indicate correct working voltages, a binary punch card output, and a cylindrical readout for decimal numbers that resembled a car odometer. The inventors clearly designed the machine for operation by themselves, not general users. None of the switches or lamps was labeled; it was up to the user to remember what each switch did and what each lamp meant. One switch instructed the ABC to read a base-10 punch card, convert it to binary, and store it in the dynamic memory, for example. Furthermore, the open design of the ABC provided far less protection from electric shock than a modern appliance does. Exposed surfaces only a few
centimeters apart could deliver a 120-volt shock to the unwary. A user entered the coefficients of the equations on standard punch cards, using an IBM card punch. Each coefficient required up to fifteen decimals and a sign, so five numbers fit onto a single eighty-column card. It was in the user’s best interest to scale up the values to use all fifteen decimals, since the arithmetic was fixed-point and accumulated rounding error. Because the ABC could hold only two rows of coefficients in its memory at once, it relied on a mass storage medium to record scratch results for later use. (The solution of two equations in two unknowns did not require scratch memory.) Since magnetic storage was still in its infancy, Atanasoff and Berry developed a method of writing binary numbers using high-voltage arcs through a paper card. The presence of a hole, representing a 1, was then readable with lower voltage electrodes. Both reading and writing took place at 1,500 bits per second, which was a remarkable speed for input/output in 1940. However, the reliability of this system was such that a 1-bit error would occur every 10,000 to 100,000 bits, and this hindered the ability to use the ABC for production computing beyond five equations in five unknowns. To obtain human-readable results, the ABC converted the 50-bit binary values stored in the memory to decimals on the odometer readout. The total process of converting a single 15-decimal number and moving the output dials could take anywhere from 1 second to 150 seconds depending on the value of the number. Atanasoff envisioned automating the manual steps needed for operation, but enhancement of the ABC was interrupted by World War II and never resumed. The ABC was a landmark in human-computer interaction by virtue of being the first electronic computer. Its use of punch cards for the input of high-accuracy decimal data, binary internal representation, operator console, and the management of mass storage and volatile storage were major advancements for the late 1930s when Atanasoff and Berry conceived and developed it. John Gustafson See also ENIAC
FURTHER READING Atanasoff, J. V. (1984). Advent of electronic digital computing. Annals of the History of Computing, 6(3), 229–282. Burks, A. R. (2003). Who invented the computer? The legal battle that changed computing history. Amherst, NY: Prometheus Books. Burks, A. R., & Burks, A. W. (1989). The first electronic computer: The Atanasoff story. Ann Arbor: University of Michigan Press. Gustafson, J. (2000). Reconstruction of the Atanasoff-Berry computer. In R. Rojas & U. Hashagen (Eds.), The first computers: History and architectures (pp. 91–106). Cambridge, MA: MIT Press. Mackintosh, A. R. (1988, August). Dr. Atanasoff's computer. Scientific American (pp. 72–78). Mollenhoff, C. R. (1988). Atanasoff: Forgotten father of the computer. Ames: Iowa State University Press. Randell, B. (Ed.). (1982). The origins of digital computers (pp. 305–325). New York: Springer-Verlag. Reconstruction of the Atanasoff-Berry Computer. (n.d.). Retrieved on January 27, 2004, from http://www.scl.ameslab.gov/ABC Sendov, B. (2003). John Atanasoff: The electronic Prometheus. Sofia, Bulgaria: St. Kliment Ohridski University Press. Silag, W. (1984). The invention of the electronic digital computer at Iowa State College, 1930–1942. The Palimpsest, 65(5), 150–177.
ATTENTIVE USER INTERFACE An attentive user interface is a context-aware human-computer interface that uses a person’s attention as its primary input to determine and act upon a person’s intent. Although we can read a person’s attention in her every word and action (even the way a person moves a cursor on a computer interface shows what she is attending to), we usually read attention in what and how people look at things. Visual attentive user interfaces concentrate on the autonomic (involuntary) and social responses that eyes communicate and read such eye movements as a lingering stare, a roving gaze, and a nervous blink in a language of ocular attention. Such interfaces also monitor the order in which people visually scan objects.
Eye Tracking Eye tracking is a technique that monitors a person’s eye movements to determine where she is looking. Eye tracking has long held promise as the ultimate human-computer interface, although eye tracking products have not been a commercial success. Original eye tracking approaches used mechanical/optical instruments that tracked mirrored contact lens reflections or even instruments that measured eye muscle tension. Newer approaches illuminate the eye with infrared light and watch reflections with a camera. Researchers can indirectly determine where a person’s eye is focusing by noting that an electroencephalogram (EEG) signal is dominated by an ocular stimulus. Four or five video strobe rates on different parts of a display can be distinguished in an EEG. When a person attends to one of them, his EEG pulses at the video strobe rate. Codings of attention on a screen can be identified with an EEG frequency counter.
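The frequency-tagging idea can be sketched as follows: if each screen region strobes at a known rate, the attended region corresponds to the candidate frequency that correlates most strongly with the recorded signal. The sampling rate, the candidate frequencies, and the simulated "EEG" below are invented for illustration.

```python
import math

def dominant_strobe(signal, rate_hz, candidate_freqs):
    """Return the candidate frequency whose sine/cosine correlation with the
    signal is strongest (a crude stand-in for an EEG frequency counter)."""
    def power(f):
        s = sum(v * math.sin(2 * math.pi * f * i / rate_hz) for i, v in enumerate(signal))
        c = sum(v * math.cos(2 * math.pi * f * i / rate_hz) for i, v in enumerate(signal))
        return s * s + c * c
    return max(candidate_freqs, key=power)

# Simulated signal: the user attends to the region strobing at 12 Hz,
# so the recorded signal oscillates at 12 Hz plus a little clutter.
rate = 250
signal = [math.sin(2 * math.pi * 12 * i / rate) + 0.3 * math.sin(2 * math.pi * 50 * i / rate)
          for i in range(rate)]            # one second of samples
print(dominant_strobe(signal, rate, candidate_freqs=[8, 10, 12, 15]))   # 12
```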
Attention Can Be Detected Whereas the advertising and psychology fields have long used eye movement to understand what a person is looking at, the human-computer interface field has struggled to use the eye as a controller. However, the breakthrough in visual attentive user interfaces is in observing what the eye does, not in giving it a tracking task. Interest Tracker is a system that monitors the time that a person spends gazing over a title area instead of the time that the person spends gazing at a specific character to determine selection. For example, the title of an article is presented at the bottom of a computer screen. A user might glance down to read the title; if his glance plays over the title for more than .3 seconds a window opens on the computer screen with the full article. That .3 seconds of dwell time is less than the typical 1 second that is required for a computer user to select something on a screen by using a pointing device. Interest Tracker registers whether a person is paying attention to, for example, news feeds, stock prices, or help information and learns what titles to audition at the bottom of the screen. MAGIC (Manual and Gaze Input Cascaded) pointing is a technique that lets a computer mouse manipulate what a user’s eyes look at on a screen. An
Researcher Mike Li demonstrates the technology used in the Invision eye-tracking experiment. The balls on the screen have names of companies that move around as he looks at them. The object under the screen is the eye tracker. Photo courtesy of Ted Selker.
eye tracker enables the user’s gaze to roughly position the cursor, which the mouse can then manipulate. If the user wants to change the application window he is working with, he stares at the application window that he wants to work in; this stare “warps” the cursor to that application window. MAGIC pointing speeds up context changes on the screen.
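A simplified sketch of dwell-based selection in the spirit of Interest Tracker: the 0.3-second threshold comes from the description above, while the screen region, coordinate system, and sample format are assumptions made for illustration.

```python
DWELL_THRESHOLD = 0.3   # seconds of continuous gaze needed to trigger, as described above

def detect_dwell(samples, region, threshold=DWELL_THRESHOLD):
    """samples: (timestamp, x, y) gaze points; True if gaze stays in region long enough."""
    x0, y0, x1, y1 = region
    run_start = None
    for t, x, y in samples:
        inside = x0 <= x <= x1 and y0 <= y <= y1
        if inside:
            run_start = t if run_start is None else run_start
            if t - run_start >= threshold:
                return True          # e.g., open the full article or warp the cursor here
        else:
            run_start = None
    return False

title_bar = (0, 0, 800, 40)          # invented screen region holding a headline
gaze = [(0.00, 120, 300), (0.10, 400, 20), (0.20, 420, 25),
        (0.30, 450, 30), (0.45, 460, 22), (0.60, 470, 28)]
print(detect_dwell(gaze, title_bar))  # True: the gaze lingered on the title for >= 0.3 s
```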
The Path of Attention Can Demonstrate Intention
During the late 1960s it was shown that the way a person's eyes move while scanning a picture reveals aspects of what she is thinking. When researchers asked viewers seven questions about a painting entitled The Unexpected Visitor, seven identifiable eye-scan patterns were recognizable. The order in which a person looks at things is also a key to what that person is thinking. An experimental system called Invision uses this fact in a user interface to prioritize activities. Invision groups items of interest by the way a person looks at them, and this grouping in turn improves the eye tracking itself. Knowing that the eye moves between staring fixations can help find those fixations. By analyzing the eye-travel vectors between fixation vertices, Invision gains a much more accurate idea of what a person is trying to look at than it could by analyzing that person's dwell time on a particular item. Attending to the order in which people look at things provides a powerful interface tool.

Invision demonstrates that an attentive user interface can be driven from insights about where people look. Scenarios are created in which the attentive pattern of eye gaze can be "understood" by a computer. By watching the vertices of a person's eye movements through a visual field of company names, the system notices which ones interest the person. The company names aggregate themselves into clusters on the screen based on the person's scanning patterns. A similar approach uses an ecological interface that presents an image of a kitchen with several problems: On the counter is a dish with some food on it; the oven door is slightly ajar, as are the dishwasher and refrigerator doors. The manner in which a person's eyes move around the kitchen image allows the interface to infer whether the person is hungry, thinking of taking care of the problems, or thinking about something else in the kitchen. The interface uses the order in which the person views things in the image to bring up an appropriate menu and so forth. This approach aggregates eye motions into a story of what the person wants to do; the attention model drives the interface. The vertices where eye movements change direction readily give the focus locations that have eluded most eye tracking research.
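The published description above does not spell out Invision's own algorithms, so the sketch below uses a standard dispersion-threshold method to recover fixation vertices from raw gaze samples; the travel vectors between successive centroids are then the quantities an Invision-style interface could analyze. The dispersion and duration thresholds are illustrative assumptions.

def find_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """Group gaze samples (t, x, y) into fixations using a dispersion threshold.

    A run of samples counts as a fixation when its spatial spread stays under
    max_dispersion (pixels) for at least min_duration (seconds). Returns a list
    of (t_start, t_end, cx, cy) centroids; the vectors between successive
    centroids are the "eye-travel vectors" between fixation vertices.
    """
    fixations = []
    i = 0
    while i < len(samples):
        j = i
        while j + 1 < len(samples):
            window = samples[i:j + 2]
            xs = [x for _, x, _ in window]
            ys = [y for _, _, y in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break                      # adding the next sample would spread too far
            j += 1
        t_start, t_end = samples[i][0], samples[j][0]
        if t_end - t_start >= min_duration:
            window = samples[i:j + 1]
            cx = sum(x for _, x, _ in window) / len(window)
            cy = sum(y for _, _, y in window) / len(window)
            fixations.append((t_start, t_end, cx, cy))
            i = j + 1                      # continue after this fixation
        else:
            i += 1                         # too short to be a fixation; slide forward
    return fixations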
Ocular Attention without Eye Tracking
EyeaRe is an ocular attention system based on the fact that many of the social cues made by an eye do not depend on where the eye is looking. In fact, EyeaRe has no eye tracking system. It simply measures infrared (IR) light reflected from the sclera (the opaque white outer coat enclosing the eyeball except the part covered by the cornea) and the pupil to a photo diode. The system uses this reflected infrared to determine whether the eye is open, closed, blinking, winking, or staring. Without a camera, such a sensor can recognize many aspects of attention. EyeaRe consists of a Microchip PIC microprocessor that records and runs the system, an LED and a photo diode looking at the eye, and another LED/photo diode pair that detects whether it is in front of other EyeaRe devices and communicates information. An IR channel communicates with a video base station or a pair of glasses.

If an EyeaRe user is staring, the IR reflection off his eye does not change. Staring at a video base station starts a video; glancing away stops it. The video image can detect whether a user is paying attention to it; if the user doesn't like it and blinks her eyes in frustration, the system puts up a more pleasing image. When two people stare at each other, EyeaRe uses the IR communication channel to exchange information: when one person stares at another, the person being stared at receives the contact information of the person who is staring. People tend to move their eyes until they have to look 15 degrees to the side; EyeaRe has an 18-degree horizontal field of view. Thus, gaze and blink detection occurs when a person looks at the EyeaRe base station or glasses. EyeaRe demonstrates that a system that doesn't even track the eye can understand the intentions of attention.
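A minimal sketch of the kind of classification a single reflectance sensor allows is given below. The threshold values, the window-based framing, and the simple steadiness test are assumptions made for illustration; EyeaRe's actual calibration and firmware logic are not published here.

# Illustrative thresholds, not EyeaRe's real calibration values.
CLOSED_LEVEL = 0.4      # normalized IR reflectance below this means the lid is closed
BLINK_MAX = 0.4         # closures shorter than this (seconds) count as blinks
STARE_MIN = 1.0         # this long with the eye open and steady counts as a stare

def classify(samples, dt):
    """Classify a window of normalized IR reflectance readings from the photo diode.

    samples : equally spaced reflectance readings
    dt      : seconds between readings
    Returns one of 'closed', 'blink', 'stare', or 'glancing'.
    """
    closed = [s < CLOSED_LEVEL for s in samples]
    if all(closed):
        return 'closed'
    if any(closed):
        longest = run = 0
        for c in closed:                       # find the longest closure within the window
            run = run + 1 if c else 0
            longest = max(longest, run)
        return 'blink' if longest * dt <= BLINK_MAX else 'closed'
    # Eye open throughout: a steady reflectance means the gaze is not moving.
    steady = max(samples) - min(samples) < 0.05
    if steady and len(samples) * dt >= STARE_MIN:
        return 'stare'
    return 'glancing'

print(classify([0.8] * 30 + [0.2] * 5 + [0.8] * 30, dt=0.02))   # -> blink (a 0.1 s closure)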
A Simple Attentive Eye-Gesture Language
To take eye communication one step further, the Eye Bed interface uses an eye-gesture language to perform tasks that are helpful to a person lying in bed. The Eye Bed demonstrates that computers can be attentive to people's need to be horizontal eight hours a day. The interface uses an eye tracker housed in a converted lamp hanging over the head of the person in bed. This interface easily distinguishes between staring at an object on the ceiling and glancing around indifferently. A language of attentional eye gestures drives the scenario. Glancing around shows lack of attention, whereas staring demonstrates attention. A long, wink-like blink means selection. Blinking rapidly means dislike. Closing the eyes could mean that the user is going to sleep; thus, a sunset and a nighttime scenario begin. Opening the eyes makes a morning and wake-up scenario begin. Intelligent systems analyze a person's reactions to media on music and video jukeboxes. The media offerings are auditioned to detect the attention shown them. Blinking when one doesn't like the media lets the system know that it should choose other music or video to show the person. Winking or closing the eyes turns off the system. The reading of eye gestures becomes an attentive user interface.

Understanding attention requires a model of what eye movement means. Researchers can build a variety of interfaces from a few simple observations of eye behavior. As an output device the eye is a simpler user interface tool than is normally described. The eye can easily be used with a language of closing, opening, blinking, winking, making nervous movements, glancing around, and staring. This language can be sensed with eye-tracking cameras or with a simple reflected LED, as the EyeaRe system demonstrates.
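The eye-gesture language lends itself to a small dispatch table. The sketch below maps gesture labels, assumed to come from a classifier such as the reflectance sketch earlier, onto placeholder interface actions; the label names and the EyeBedUI methods are invented for the example and are not part of the actual Eye Bed system.

class EyeBedUI:
    def idle(self):          print("dim the display")            # lack of attention
    def focus_item(self):    print("highlight the stared-at item")
    def select_item(self):   print("open the selected item")
    def skip_media(self):    print("audition different music or video")
    def begin_night(self):   print("start the sunset scenario")
    def begin_morning(self): print("start the morning scenario")

ACTIONS = {
    "glancing":    EyeBedUI.idle,
    "stare":       EyeBedUI.focus_item,
    "long_blink":  EyeBedUI.select_item,     # selection
    "rapid_blink": EyeBedUI.skip_media,      # dislike
    "eyes_closed": EyeBedUI.begin_night,     # going to sleep
    "eyes_opened": EyeBedUI.begin_morning,   # waking up
}

def handle(gesture, ui):
    action = ACTIONS.get(gesture)
    if action:
        action(ui)

handle("long_blink", EyeBedUI())             # -> open the selected item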
Promises of the Future
Attentive user interfaces hold great promise. People are now in a position to implement and extend such interfaces, and the hardware to create and test them is easily accessible. With the use of the eye as a secondary indicator of intention, researchers can make robust and computationally simple visual interfaces. Models of human intention and attention are becoming part of all human-computer interfaces. The context of where we are and what we are doing can accomplish more than automatically opening the grocery store door. Many interfaces can be driven completely by noticing a person's attention. Sensors in a given context can detect many things about human attention. For example, a sensor pad in front of an office door can detect whether a person has arrived to visit. Many biometrics (relating to the statistical analysis of biological observations and phenomena) such as EEG changes, sweat responses, and heart rate variability are candidates for attentive user interfaces. People want to focus on what they are doing and on the people they are with. Attentive user interfaces can detect people's intentions without taking their attention—even encouraging their ocular focus to be on what they want to do. Attentive user interfaces allow people's attention to make things happen.

Ted Selker

See also Eye Tracking
FURTHER READING Bolt, R. A. (1985). Conversing with computers. Technology Review, 88(2), 34–43. Gregory, R. L. (1997). Eye and brain: The psychology of seeing. Oxford, UK: Oxford University Press. Guo, X. (1999). Eye contact—Talking about non-verbal communication: A corpus study. Retrieved April 29, 2004, from http://www.languagemagazine.com/internetedition/ma99/sprpt35.html Maglio, P. P., Barrett, R., Campbell, C. S., & Selker, T. (2000). SUITOR: An attentive information system. New York: ACM Press. Morimoto, D., & Flickner, M. (2000). Pupil detection using multiple light sources. Image and Vision Computing, 18, 331–335. Nervous TV newscasters blink more. (1999). Retrieved April 29, 2004, from http://www.doctorbob.com/news/7_24nervous.html Rice, R., & Love, G. (1987). Electronic emotion: Socioemotional content in a computer-mediated communication. Communication Research, 14(1), 85–108. Russell, S., & Norvig, P. (1995). Artificial intelligence: A modern approach. Upper Saddle River, NJ: Prentice Hall. Selker, T., & Burleson, W. (2000). Context-aware design and interaction in computer systems. IBM Systems Journal, 39(3–4), 880–891. Shepard, R. N. (1967). Recognition memory for words, sentences and pictures. Journal of Verbal Learning and Verbal Behavior, 6, 156–163.
AUGMENTED COGNITION
Augmented cognition is a field of research that seeks to extend a computer user's abilities via technologies that address information-processing bottlenecks inherent in human-computer interaction (HCI). These bottlenecks include limitations in attention, memory, learning, comprehension, visualization abilities, and decision-making. Limitations in human cognition (the act or process of knowing) are due to intrinsic restrictions in the number of mental tasks that a person can execute at one time, and these restrictions may fluctuate from moment to moment depending on a host of factors, including mental fatigue, novelty, boredom, and stress. As computational interfaces have become more prevalent in society and increasingly complex with regard to the volume and type of information presented, researchers have investigated novel ways to detect these bottlenecks and have devised strategies to aid users and improve their performance via technologies that assess users' cognitive status in real time. In an augmented cognition system, the computational interaction monitors the state of the user through behavioral, psychophysiological, and/or neurophysiological data and adapts or augments the computational interface to significantly improve the user's performance on the task at hand.
Emergence of Augmented Cognition
The cognitive science and HCI communities have researched augmented cognition for several decades. Scientific papers in this field increased markedly during the late 1990s and addressed efforts to build and use models of attention in information display and notification systems. However, the phrase "augmented cognition" did not find widespread use until the year 2000, when a study by an Information Science and Technology (ISAT) group of the U.S. Defense Department's Defense Advanced Research Projects Agency (DARPA) and a workshop on the field at the National Academy of Sciences were held. During 2002 the number of papers about augmented cognition increased again. This increase was due, in part, to the start of a DARPA research program in augmented cognition in 2001 that focused on the challenges and opportunities of real-time monitoring of cognitive states with physiological sensors. This substantial investment in these developing technologies helped bring together a research community and stimulated a set of thematically related projects on addressing cognitive bottlenecks via the monitoring of cognitive states. By 2003 the augmented cognition field extended well beyond the boundaries of those specific Defense Department research projects, but that initial investment provided impetus for the infant field to begin to mature.
Early Investments in Related Work
Augmented cognition does not draw from just one scientific field—it draws from fields such as neuroscience, biopsychology, cognitive psychology, human factors, information technology, and computer science. Each of these fields has itself undergone a substantial revolution during the past forty years that has allowed the challenges raised by researchers to begin to be investigated. Although many individual research projects contributed to the general development and direction of augmented cognition, several multimillion-dollar projects helped shape the foundation on which the field is built.

Since the invention of the electronic computer, scientists and engineers have speculated about the unique relationship between humans and computers. Unlike mechanized tools, which are primarily devices for extending human force and action, the computer became an entity with which humans forged an interactive relationship, particularly as computers came to permeate everyday life. In 1960 one of the great visionaries of intelligent computing, J. C. R. Licklider, wrote a paper entitled "Man-Computer Symbiosis." Licklider was director of the Information Processing Techniques Office (IPTO) at the Defense Department's Advanced Research Projects Agency (ARPA) during the 1960s. In his paper he stated, "The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today" (Licklider 1960, 4). Almost prophetic, this description of the symbiotic relationship between humans and computers is one of the first descriptions of what could be considered an augmented cognition computational system. Although research on this topic was not conducted during his tenure at ARPA during the 1960s, Licklider championed the research that developed into the now-burgeoning field of computer science, including creation of the Arpanet computer network (forerunner of the Internet). His research, vision, and direction had a significant impact on both computer science and information technology and set the stage for the field of augmented cognition.
During the early 1960s researchers speculated that electrical signals emanating from a human brain in the form of electroencephalographic (EEG) recordings could be used as indicators of specific events in human cognitive processing. Several Department of Defense investigations into detecting these signals and other measurements occurred through the biocybernetics and learning strategies programs sponsored by ARPA during the 1970s and 1980s. The earliest program was biocybernetics, which tested the hypothesis that EEG activity might be able to control military devices and serve as indicators of user performance. In this program biocybernetics was defined as a real-time connection between the operator and computational system via physiological signals recorded during specific tasks. Both the biocybernetics and learning strategies programs centered around the creation of closed-loop feedback systems (the relationship between user and computational system, where changes in the computational interface are driven by detected changes in the user's physiological status, which in turn change as a result of the new format of the interface) between operator and computer for the selection and training of personnel, display/control design, and online monitoring of operator status (although with slightly different military application domains between the two programs). In both programs researchers saw the real-time identification of cognitive events as critical to understanding the best methods for aiding military users in a rapid and contextually appropriate way. However, when this research was begun, both computational systems and neuroscience were in their infancy, and the results of this research were not incorporated into production military systems. Augmented cognition can be viewed as a descendant of these early programs.

Another investigation in this field was the Pilot's Associate (PA) program sponsored by DARPA during the 1980s and early 1990s. Pilot's Associate was an integrated system of five components that incorporated AI (artificial intelligence) techniques and cognitive modeling to aid pilots in carrying out their missions with increased situational awareness and enhanced decision-making. Unlike biocybernetics, PA utilized cognitive modeling alone and did not incorporate any physiological monitoring. Cognitive modeling was the cornerstone of the pilot-vehicle interface (PVI), which had the critical task of managing all pilot interactions with the system by inferring the pilot's intentions and communicating these intentions to the other components of the PA system. The PVI was also responsible for modeling pilot workload to adapt and configure the information displays in the cockpit, conveying workload information to the other subsystems, and compensating for pilot behavior that might result in an error. An example of this work was a PA program at NASA-Ames Research Center that explored the use of probabilistic models of a pilot's goals and workload over time, based on multiple inputs and the use of models to control the content and complexity of displays. Such models did not employ physiological measures of a pilot's cognitive status.

Other research occurred in the academic and private sectors, including the attentional user interface (AUI) project at Microsoft Research during the late 1990s, which provided conceptual support to efforts in augmented cognition. Researchers developed methods for building statistical models of attention and workload from data. Researchers built architectures to demonstrate how cognitive models could be integrated with real-time information from multiple sensors (including acoustical sensing, gaze and head tracking, and events representing interaction with computing systems) to control the timing and communication medium of incoming notifications. AUI work that included psychological studies complemented the systems and architectures work.
Foundations of Augmented Cognition In light of these earlier research efforts, the logical question arises: What sets augmented cognition apart from what has already been done? As mentioned, augmented cognition relies on many fields whose maturity is critical for its success. Although programs such as biocybernetics during the 1970s had similar goals, they did not have access to the advanced computational power necessary to process brain signals in real time, nor did researchers know enough about those signals to use them to control displays or machines. Likewise, the Pilot’s Associate program
during the 1980s shared many aspirations of today's augmented cognition, namely to develop adaptive interfaces to reduce pilot workload. However, PA could assess the status of a pilot from inferences and models based only on the pilot's overt behavior and the status of the aircraft. What distinguishes augmented cognition is its capitalization on advances in two fields: behavioral/neural science and computer science.

At the start of the twenty-first century researchers have an unparalleled understanding of human brain functioning. The depth of this understanding is due to the development of neuroscientific techniques funded by the U.S. National Institutes of Health (NIH) and other agencies during the 1990s, a period now referred to as the "Decade of the Brain." The billion-dollar funding of the fields of neuroscience, cognitive science, and biopsychology resulted in some of the greatest advances in our understanding of the human biological system in the twentieth century. For example, using techniques such as functional magnetic resonance imaging (fMRI), scientists were able to identify discrete three-dimensional regions of the human brain active during specific mental tasks. This identification opened up the field of cognitive psychology substantially (into the new field of cognitive neuroscience) and enabled researchers to test their theories of the human mind and associate previously observed human thoughts and behaviors with neural activity in specific brain regions.

Additional investment from the Department of Defense and other agencies during the twenty-first century has allowed researchers to develop even more advanced sensors that will eventually be used in augmented cognition systems. Novel types of neurophysiological signals that are measurable noninvasively include electrical signals—using electroencephalography and event-related potentials (identifiable patterns of activity within the EEG that occur either before specific behaviors are carried out or after specific stimuli are encountered)—and local cortical changes in blood oxygenation (BOLD), blood volume, and changes in the scattering of light directly due to neuronal firing (using near infrared [NIR] light). Most of these signals, unlike fMRI, can be collected from portable measurement systems in real time, making them potentially available for everyday use. Not all augmented cognition systems contain advanced neurophysiological sensors, but the field of augmented cognition is broadened even further by their inclusion.

INTELLIGENT AGENT: Software program that actively locates information for you based on parameters you set. Unlike a search engine or information filter, it actively seeks specific information while you are doing other things.

As a result of the "Decade of the Brain," researchers have an increased knowledge of the cognitive limitations that humans face. The HCI field focuses on the design, implementation, and evaluation of interactive systems in the context of a user's work. However, researchers in this field can work only with the data and observations easily accessible to them, that is, how people overtly behave while using interfaces. Through efforts in neuroscience, biopsychology, and cognitive neuroscience we can locate and measure activity from the brain regions that are actively involved in day-to-day information-processing tasks. Researchers will have a greater understanding of the cognitive resources that humans possess and how many of these resources are available during a computationally based task, whether or not the computational system includes advanced sensors. After these cognitive resources are identified and their activity (or load) measured, designers of computational interfaces can begin to account for these limitations (and perhaps adapt to their status) in the design of new HCI systems.

Finally, without advances in computer science and engineering, none of the neuroscientific developments listed here would be possible, and the field of augmented cognition would certainly not be feasible. During the past forty years society has experienced leaps in computational prowess and the sophistication of mathematical algorithms. These leaps have been due in part to the miniaturization of transistors and other silicon-based components so that more computational power is available per square inch of hardware. This miniaturization has allowed computers to shrink in size until they have permeated the very fabrics that people wear and even their environments. Computer code itself has become smaller and more flexible, with the emergence of
agent-based computing (the instantiation of active, persistent software components that perceive, reason, act, and communicate in software code), JAVA, and Internet services. Thus, augmented cognition has benefited from two computing advances—improvements in raw computational resources (CPUs, physical memory) and improvements in the languages and algorithms that make adaptive interfaces possible. Many other fields have benefited from these advances as well and in turn have fed into the augmented cognition community. These fields include user modeling, speech recognition, computer vision, graphical user interfaces, multimodal interfaces, and computer learning/artificial intelligence.
Components of an Augmented Cognition System
At the most general level, augmented cognition harnesses computation and knowledge about human limitations to open bottlenecks and address the biases and deficits in human cognition. It seeks to accomplish these goals through continual background sensing, learning, and inference to understand trends, patterns, and situations relevant to a user's context and goals. Such a system should contain at least four components—sensors for determining user state, an inference engine or classifier to evaluate incoming sensor information, an adaptive user interface, and an underlying computational architecture to integrate the other three components. In reality a fully functioning system would have many more components, but these are the most critical. Independently, each of these components is fairly straightforward. Much augmented cognition research focuses on integrating these components to "close the loop" and create computational systems that adapt to their users. Thus, the primary challenge with augmented cognition systems is not the sensors component (although researchers are using increasingly complex sensors). The primary challenge is accurately predicting or assessing, from incoming sensor information, the correct state of the user and having the computer select an appropriate strategy to assist the user at that time.

As discussed, humans have limitations in attention, memory, learning, comprehension, sensory processing, visualization abilities, qualitative judgments, serial processing, and decision-making. For an augmented cognition system to be successful it must identify at least one of these bottlenecks in real time and alleviate it through a performance-enhancing mitigation strategy. Such mitigation strategies are conveyed to the user through the adaptive interface and might involve modality switching (between visual, auditory, and haptic [touch] channels), intelligent interruption, task negotiation and scheduling, and assisted context retrieval via bookmarking. When a user state is correctly sensed, an appropriate strategy is chosen to alleviate the bottleneck, the interface is adapted to carry out the strategy, and the resulting sensor information indicates that the aiding has worked—only then has a system "closed the loop" and successfully augmented the user's cognition.
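One way to make the four components concrete is the sketch below, which runs a single pass around the loop: a sensor reading is reduced to a crude workload index, a mitigation strategy is chosen, and the adaptive interface is told what to do. The theta-to-alpha band-power ratio is only a simplified stand-in for a real cognitive-state classifier, and the threshold, sampling rate, and synthetic sensor are assumptions made for the example.

import numpy as np

def band_power(eeg, fs, lo, hi):
    """Power of the EEG signal between lo and hi Hz (simple FFT estimate)."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return spectrum[(freqs >= lo) & (freqs <= hi)].sum()

def workload_index(eeg, fs):
    """Crude stand-in for an inference engine: theta (4-7 Hz) power relative to
    alpha (8-12 Hz) power, a ratio that tends to rise with mental workload.
    Real systems fuse many sensors and far richer models."""
    return band_power(eeg, fs, 4, 7) / (band_power(eeg, fs, 8, 12) + 1e-9)

def choose_mitigation(index, high=1.5):
    """Pick a mitigation strategy for the adaptive interface."""
    if index > high:
        return ["defer noncritical notifications",
                "switch detail from visual text to an audio summary"]
    return ["present full visual detail"]

def closed_loop_step(read_sensor, adapt_ui, fs=256):
    """One pass around the loop: sense -> infer -> adapt."""
    eeg = read_sensor()                    # sensors component
    index = workload_index(eeg, fs)        # inference/classifier component
    adapt_ui(choose_mitigation(index))     # adaptive user interface component
    return index

# Toy demonstration with synthetic data standing in for a real sensor stream.
rng = np.random.default_rng(0)
fake_sensor = lambda: rng.standard_normal(4 * 256)
closed_loop_step(fake_sensor, lambda actions: print("; ".join(actions)))

A full system would also verify from subsequent sensor readings that the mitigation actually helped, which is what finally closes the loop.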
Applications of Augmented Cognition
The applications of augmented cognition are numerous, and although initial investments in systems that monitor cognitive state have been sponsored by military and defense agencies, the commercial sector has shown interest in developing augmented cognition systems for nonmilitary applications. As mentioned, closely related work on methods and architectures for detecting and reasoning about a user's workload (based on such information as activity with computing systems and gaze) has been studied for nonmilitary applications such as commercial notification systems and communication. Agencies such as NASA also have shown interest in the use of methods to limit workload and manage information overload. Hardware and software manufacturers are always eager to include technologies that make their systems easier to use, and augmented cognition systems would likely result in an increase in worker productivity, with a savings of both time and money to companies that purchased these systems. In more specific cases, stressful jobs that involve constant information overload from computational sources, such as air traffic control, would also benefit from such technology. Finally, the fields of education and training are the next likely targets for augmented cognition technology after it reaches commercial viability. Education and training are moving toward an increasingly computational medium. With distance learning in high demand, educational systems will need to adapt to this new nonhuman teaching interaction while ensuring quality of education. Augmented cognition technologies could be applied to educational settings and guarantee students a teaching strategy that is adapted to their style of learning. This application of augmented cognition could have the biggest impact on society at large.

Dylan Schmorrow and Amy Kruse

See also Augmented Reality; Brain-Computer Interfaces; Information Overload

FURTHER READING
Cabeza, R., & Nyberg, L. (2000). Imaging cognition II: An empirical review of 275 PET and fMRI studies. Journal of Cognitive Neuroscience, 12(1), 1–47. Dix, A., Finlay, J., Abowd, G., & Beale, R. (1998). Human computer interaction (2nd ed.). London, New York: Prentice Hall. Donchin, E. (1989). The learning strategies project. Acta Psychologica, 71(1–3), 1–15. Freeman, F. G., Mikulka, P. J., Prinzel, L. J., & Scerbo, M. W. (1999). Evaluation of an adaptive automation system using three EEG indices with a visual tracking task. Biological Psychology, 50(1), 61–76. Gevins, A., Leong, H., Du, R., Smith, M. E., Le, J., DuRousseau, D., Zhang, J., & Libove, J. (1995). Towards measurement of brain function in operational environments. Biological Psychology, 40, 169–186. Gomer, F. (1980). Biocybernetic applications for military systems. Chicago: McDonnell Douglas. Gray, W. D., & Altmann, E. M. (2001). Cognitive modeling and human-computer interaction. In W. Karwowski (Ed.), International encyclopedia of ergonomics and human factors (pp. 387–391). New York: Taylor & Francis. Horvitz, E., Pavel, M., & Schmorrow, D. D. (2001). Foundations of augmented cognition. Washington, DC: National Academy of Sciences. Humphrey, D. G., & Kramer, A. F. (1994). Toward a psychophysiological assessment of dynamic changes in mental workload. Human Factors, 36(1), 3–26. Licklider, J. C. R. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, HFE-1, 4–11. Lizza, C., & Banks, S. (1991). Pilot's Associate: A cooperative, knowledge-based system application. IEEE Intelligent Systems, 6(3), 18–29. Mikulka, P. J., Scerbo, M. W., & Freeman, F. G. (2002). Effects of a biocybernetic system on vigilance performance. Human Factors, 44, 654–664. Prinzel, L. J., Freeman, F. G., Scerbo, M. W., Mikulka, P. J., & Pope, A. T. (2000). A closed-loop system for examining psychophysiological measures for adaptive task allocation. International Journal of Aviation Psychology, 10, 393–410.
Wilson, G. F. (2001). Real-time adaptive aiding using psychophysiological operator state assessment. In D. Harris (Ed.), Engineering psychology and cognitive ergonomics (pp. 175–182). Aldershot, UK: Ashgate. Wilson, R. A., & Keil, F. C. (Eds.). (2001). The MIT encyclopedia of the cognitive sciences (MITECS). Cambridge, MA: MIT Press.
AUGMENTED REALITY
Augmented reality is a new field of research that concentrates on integrating virtual objects into the real world. These virtual objects are computer graphics displayed so that they merge with the real world. Although in its infancy, augmented reality holds out the promise of enhancing people's ability to perform certain tasks. As sensing and computing technologies advance, augmented reality is likely to come to play a significant role in people's daily lives.
Augmented Reality and Virtual Reality An augmented-reality system merges the real scene viewed by the user with computer-generated virtual objects to generate a composite view for the user. The virtual objects supplement the real scene with additional and useful information. Sounds may be added through the use of special headphones that allow the user to hear both real sounds and synthesized sounds. There are also special gloves that a user can wear that provide tactile sensation such as hardness or smoothness. A user wearing such gloves could “feel” virtual furniture in a real room. In an augmented-reality system, users can walk around a real room, hear the echo of their footsteps, and feel the breeze from an air conditioning unit, while at the same time they can see computer-generated images of furniture or paintings. One of the requirements of an augmented-reality system is that it needs to be interactive in real time. Animation, sound, and textures are added in real time so that what the user sees, hears, and feels reflects the true status of the real world. The most important characteristic of augmented reality is the ability to render objects in three-dimensional space,
which makes them much more realistic in the eyes of the user. Virtual objects are drawn in relationship to the real objects around them, both in terms of position and size. If a virtual object is situated partially behind a real object (or vice versa) then the user should not see part of the obscured object. Occlusion of objects is the largest contributor to human depth perception. The major difference between augmented reality and virtual reality is that in virtual reality everything that is sensed by the user is computer generated. Therefore the virtual objects must be rendered as photorealistically as possible in order to achieve the feeling of immersion. Augmented reality uses both real and synthetic sights, sounds, and touches to convey the desired scene, so virtual objects do not bear the entire burden of persuading the user that the scene is real, and therefore they do not need to be so photorealistic. Augmented reality lies in the middle of the continuum between absolute reality (in which everything sensed is real) and virtual reality (in which everything that is sensed is created).
Different Types of Displays for Augmented Reality
Most people depend on vision as their primary sensory input, so here we will discuss several types of visual displays that can be used with augmented reality, each with its own advantages and disadvantages. Visual displays include head-mounted displays (HMDs), monitor-based displays, projected images, and heads-up displays (HUDs).

Head-Mounted Displays
HMDs are headsets that a user wears. HMDs can either be see-through or closed view. The see-through HMD works as its name implies: The user looks through lenses to see the real world, but the lenses are actually display screens that can have graphics projected onto them. The biggest advantage of the see-through HMD mechanism is that it is simple to implement because the real world does not have to be processed and manipulated; the mechanism's only task is to integrate the visual augmentations.
This reduces the safety risk, since the user can see the real world in real time. If there is a power failure, the user will still be able to see as well as he or she would when wearing dark sunglasses. If there is some kind of hazard moving through the area—a forklift, for example—the wearer does not have to wait for the system to process the image of the forklift and display it; the wearer simply sees the forklift as he or she would when not wearing the HMD. One disadvantage is that the virtual objects may appear to lag behind the real objects; this happens because the virtual objects must be processed, whereas real objects do not need to be. In addition, some users are reluctant to wear the equipment for fear of harming their vision, although there is no actual risk, and other users dislike the equipment's cumbersome nature. A new version of the see-through HMD is being developed to resemble a pair of eyeglasses, which would make it less cumbersome.

Closed-view HMDs cannot be seen through. They typically comprise an opaque screen in front of the wearer's eyes that totally blocks all sight of the real world. This mechanism is also used for traditional virtual reality. A camera takes an image of the real world, merges it with virtual objects, and presents a composite image to the user. The advantage the closed-view HMD has over the see-through version is that there is no lag time for the virtual objects; they are merged with the real scene before being presented to the user. The disadvantage is that there is a lag in the view of the real world because the composite image must be processed before being displayed. There are two safety hazards associated with closed-view HMDs. First, if the power supply is interrupted, the user is essentially blind to the world around him. Second, the user does not have a current view of the real world. Users have the same concerns and inhibitions regarding closed-view HMDs as they do regarding see-through HMDs.

Monitor-Based Displays
Monitor-based displays present the composite view of the real world and the virtual objects on an ordinary monitor. There are several advantages to configuring an augmented-reality system this way. First, because a monitor is a separate display device, more information can be presented to the user. Second, the user does not have to wear (or carry around) heavy equipment. Third, graphical lag time can be eliminated because the real world and virtual objects are merged in the same way they are for closed-view HMDs. The safety risk is avoided because the user can see the real world in true real time. There are also some drawbacks to using monitor-based displays instead of HMDs. First, the user must frequently look away from the workspace in order to look at the display. This can cause a slowdown in productivity. Another problem is that the user can see both the real world and—on the monitor—the lagging images of the real world. In a worst-case situation in which things in the scene are moving rapidly, the user could potentially see a virtual object attached to a real object that is no longer in the scene.

Projected-Image Displays
Projected-image displays project the graphics and annotations of the augmented-reality system onto the workspace. This method eliminates the need for extra equipment and also prevents the user from having to look away from the work area to check a monitor-based display. The biggest disadvantage is that the user can easily occlude the graphics and annotations by moving between the projector and the workspace. Users also can put their hands and arms through the projected display, reducing their sense of the reality of the display.

Heads-Up Displays
Heads-up displays are very similar to see-through HMDs. They do not require the user to wear special headgear, but instead display the data on a see-through screen in front of the user. As in see-through HMDs, these systems are easy to implement; however, there may be a lag time in rendering the virtual objects.
Challenges in Augmented Reality
A majority of the challenges facing augmented reality concern the virtual objects that are added to the real world. These challenges can be divided into two areas: registration and appearance. Registration involves placing the virtual objects in the proper locations in the real world. This is an important element of augmented reality and includes sensing, calibration, and tracking. Appearance concerns what the virtual objects look like. In order to achieve seamless merging of real and virtual objects, the virtual objects must be created with realistic color and texture.

In virtual-reality systems, tracking the relative position and motion of the user is an important research topic. Active sensors are widely used to track the position and orientation of points in space. The tracking information thus obtained is fed into the computer graphics system for appropriate rendering. In virtual reality, small errors in tracking can be tolerated, as the user can easily overlook those errors in the entirely computer-generated scene. In augmented-reality systems, by contrast, the registration is performed in the visual field of the user. The type of display used in the system usually determines the accuracy needed for registration.

One popular registration technique is vision-based tracking. Many times, there are fiducials (reference marks) marked out in the scene in which the virtual objects need to be placed. The system recognizes these fiducials automatically and determines the pose of the virtual object with respect to the scene before it is merged. There are also techniques that use more sophisticated vision algorithms to determine the pose without the use of fiducials. The motion of the user and the structure of the scene are computed using a projective-geometry formulation. (Projective geometry is the branch of geometry that deals with projecting a geometric figure from one plane onto another plane; the ability to project points from one plane to another is essentially what is needed to track motion through space.)

For a seamless augmented-reality system, it is important to determine the geometry of the virtual object with respect to the real scene, so that occlusion can be rendered appropriately. Stereo-based depth estimation and the z-buffer algorithm (an algorithm that makes possible the representation of objects that occlude each other) can be used for blending real and virtual objects. Also, using research results in radiosity (a technique for realistically simulating how light reflects off objects), it is possible to "shade" the virtual object appropriately so that it blends properly with the background scene.
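The z-buffer idea can be illustrated with a few lines of array code: given a depth map for the real scene (for example, from stereo estimation) and a depth map for the rendered virtual layer, the nearer surface wins at each pixel, which is exactly what produces correct mutual occlusion. This is a minimal sketch; the array shapes and the toy values are assumptions made for the example.

import numpy as np

def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Blend a rendered virtual layer into a real camera image using depth.

    real_rgb, virt_rgb     : H x W x 3 color images
    real_depth, virt_depth : H x W depth maps in the same units; use np.inf
                             where the virtual layer is empty.
    At each pixel the nearer surface wins, so a real object in front of a
    virtual one correctly occludes it, and vice versa.
    """
    virtual_wins = virt_depth < real_depth            # per-pixel z-test
    return np.where(virtual_wins[..., None], virt_rgb, real_rgb)

# Tiny example: a 2 x 2 image where the virtual object is nearer in one pixel.
real_rgb = np.zeros((2, 2, 3))
real_depth = np.full((2, 2), 2.0)
virt_rgb = np.ones((2, 2, 3))
virt_depth = np.full((2, 2), np.inf)
virt_depth[0, 0] = 1.0                                # virtual surface in front here
print(composite(real_rgb, real_depth, virt_rgb, virt_depth)[:, :, 0])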
Applications Augmented reality has applications in many fields. In medicine, augmented reality is being researched as a tool that can project the output of magnetic resonance imaging (MRI), computed tomography (CT) scans, and ultrasound imaging onto a patient to aid in diagnosis and planning of surgical operations. Augmented reality can be used to predict more accurately where to perform a biopsy for a tiny tumor: All the information gathered from traditional methods such as MRIs can be projected onto the patient to reveal the exact location of the tumor. This enables a surgeon to make precise incisions, reducing the stress of the surgery and decreasing the trauma to the patient. In architecture and urban planning, annotation and visualization techniques can be used to show how the addition of a building will affect the surrounding landscape. Actually seeing the future building life sized, in the location it will occupy, gives a more accurate sense of the project than can be conveyed from a model. Augmented-reality simulations also make it easier to recognize potential problems, such as insufficient natural lighting for a building. Augmented reality also has the potential to let developers, utility companies, and home owners “see” where water pipes, gas lines, and electrical wires are run through walls, which is an aid when it comes to maintenance or construction work. In order for this technique to be implemented, the data must be stored in a format the augmented-reality system can use. Simply having a system that can project the images of electrical wiring on a wall would not be sufficient; the system first must know where all the wires are located. Augmented reality has the potential to make a big impact on the entertainment industry. A simple example is the glowing puck that is now used in many televised hockey games. In this application, the hockey puck is tracked and a brightly colored dot
is placed on top of it on the television video to make it easier for those watching the game on television to follow the rapid motion of the puck. Augmented reality could also make possible a type of virtual set, very similar to the blue-screen sets that are used today to film special effects. Augmented-reality sets would be interactive, would take up less space, and would potentially be simpler to build than traditional sets. This would decrease the overall cost of production. Another example, already developed is the game AR2 Hockey, in which the paddles and field (a table, as in air hockey) are real but the puck is virtual. The computer provides visual tracking of the virtual puck and generates appropriate sound effects when the paddles connect with the puck or when the puck hits the table bumpers. One military application is to use the technology to aim weapons based on the movement of the pilot’s head. Graphics of targets can be superimposed on a heads-up display to improve weapons’ accuracy by rendering a clearer picture of the target, which will be hard to miss. Many examples of assembly augmented-reality systems have been developed since the 1990s. One of the best known is the Boeing wire-bundling project, which was started in 1990. Although well known, this project has not yet been implemented in a factory as part of everyday use. The goal is relatively straightforward: Use augmented reality to aid in the assembly of wire bundles used in Boeing’s 747 aircraft. For this project, the designers decided to use a see-through HMD with a wearable PC to allow workers the freedom of movement needed to assemble the bundles, which were up to 19 meters long. The subjects in the pilot study were both computer science graduate students who volunteered and Boeing employees who were asked to participate. The developers ran into both permanent and temporary problems. One temporary problem, for example, was that the workers who participated in the pilot program were typically tired because the factory was running the pilot study at one of the busier times in its production cycle. Workers first completed their normal shift before working on the pilot project. Another temporary problem
was the curiosity factor: Employees who were not involved with the project often came over to chat and check out what was going on and how the equipment worked. More permanent problems were the employees' difficulties in tracing the wires across complex subassemblies and their hesitance to wear the headsets because of fear of the lasers located close to their eyes and dislike of the "helmet head" effect that came from wearing the equipment. One of the strongest success points for this pilot study was that the bundles created using the augmented-reality system met Boeing's quality assurance standards. Another positive result was that the general background noise level of the factory did not interfere with the acoustic tracker. In the pilot study, augmented reality offered no improvement in productivity, and the only cost savings came from no longer needing to store the various assembly boards. (This can be, however, a significant savings.) The developers concluded that the reason there was no significant improvement in assembly time was that they still had some difficulty using the system's interface to find specific wires. The developers are working on a new interface that should help to solve this problem.

Augmented reality has also been used in BMW automobile manufacture. The application was designed to demonstrate the assembly of a door lock for a car door, and the system was used as a feasibility study. The annotations and graphics were taken from a CAD (computer-aided design) system that was used to construct the actual physical parts for the lock and the door. In this case, the augmented-reality system uses a see-through HMD and a voice-activated computer—in part because the assembly process requires that the user have both hands free. Because this augmented-reality system mimicked an existing virtual-reality version of assembly planning for the door lock assembly, much of the required data was already available in an easily retrievable format, which simplified the development of the augmented-reality system. The developers had to overcome certain problems with the system in order to make the pilot work. The first was the issue of calibration. There is an initial calibration that must be performed as part of the start-up process. The calibration is then performed periodically when the system becomes confused or the error rate increases past a certain threshold. Users seemed to have difficulty keeping their heads still enough for the sensitive calibration process, so a headrest had to be built. Another problem was that the magnetic tracking devices did not work well because there were so many metal parts in the assembly. In addition, the speech recognition part of the system turned out to be too sensitive to background noise, so it was turned off. The pilot study for this project was used as a demonstration at a trade show in Germany in 1998. The program ran for one week without difficulty. Due to time considerations, the system was not calibrated for each user, so some people were not as impressed as the developers had hoped. Also, even with the headrest, some users never stayed still long enough for a proper calibration to be performed. Their reactions showed researchers that average users require some degree of training if they are to use this sort of equipment successfully. Despite setbacks, the developers considered the pilot a success because it brought the technology to a new group of potential users and it generated several possible follow-up ideas relating to the door lock assembly.
The Future
Augmented reality promises to help humans in many of their tasks by displaying the right information at the right time and place. There are many technical challenges to be overcome before such interfaces are widely deployed, but driven by compelling potential applications in surgery, the military, manufacturing, and entertainment, progress continues to be made in this promising form of human-computer interaction.

Rajeev Sharma and Kuntal Sengupta

See also Augmented Cognition; Virtual Reality
FURTHER READING Aliaga, D. G. (1997). Virtual objects in the real world. Communications of the ACM, 40(3), 49–54. Azuma, R. (1997). A survey of augmented reality. Presence: Teleoperators and Virtual Environments, 6(3), 355–385. Bajura, M., Fuchs, H., & Ohbuchi, R. (1992). Merging virtual objects with the real world: Seeing Ultrasound imagery within the patient. Computer Graphics (Proceedings of SIGGRAPH’92), 26(2), 203–210. Das, H. (Ed.). (1994). Proceedings of the SPIE—The International Society for Optical Engineering. Bellingham, WA: International Society for Optical Engineering. Elvins, T. T. (1998, February). Augmented reality: “The future’s so bright I gotta wear (see-through) shades.” Computer Graphics, 32(1), 11–13. Ikeuchi, K., Sato, T., Nishino, K., & Sato, I. (1999). Appearance modeling for mixed reality: photometric aspects. In Proceedings of the 1999 IEEE International Conference on Systems, Man, and Cybernetics (SMC’99) (pp. 36–41). Piscataway, NJ: IEEE. Milgram, P., & Kishino, F. (1994, December). A taxonomy of mixed reality visual displays. IEICE Transactions on Information Systems, E77-D(12), 1321–1329. Neumann, U., & Majoros, A. (1998). Cognitive, performance, and systems issues for augmented reality applications in manufacturing and maintenance. In Proceedings of the IEEE 1998 Virtual Reality Annual International Symposium (pp. 4–11). Los Alamitos, CA: IEEE. Ohshima, T., Sato, K., Yamamoto, H., & Tamura, H. (1998). AR2 hockey: A case study of collaborative augmented reality. In Proceedings of the IEEE 1998 Virtual Reality Annual International Symposium (pp. 268–275) Los Alamitos, CA: IEEE. Ong, K. C., Teh, H. C., & Tan, T. S. (1998). Resolving occlusion in image sequence made easy. Visual Computer, 14(4), 153–165. Raghavan, V., Molineros, J., & Sharma, R. (1999). Interactive evaluation of assembly sequences using augmented reality. IEEE Transactions on Robotics and Automation, 15(3), 435–449. Rosenblum, L. (2000, January–February). Virtual and augmented reality 2020. IEEE Computer Graphics and Applications, 20(1), 38–39. State, A., Chen, D. T., Tector, C., Brandt, A., Chen, H., Ohbuchi, R., et al. (1994). Observing a volume rendered fetus within a pregnant patient. In Proceedings of IEEE Visualization 94 (pp. 364–368). Los Alamitos, CA: IEEE. Stauder, J. (1999, June). Augmented reality with automatic illumination control incorporating ellipsoidal models. IEEE Transactions on Multimedia, 1(2), 136–143. Tatham, E. W. (1999). Getting the best of both real and virtual worlds. Communications of the ACM, 42(9), 96–98. Tatham, E. W., Banissi, E., Khosrowshahi, F., Sarfraz, M., Tatham, E., & Ursyn, A. (1999). Optical occlusion and shadows in a “seethrough” augmented reality display. In Proceedings of the 1999 IEEE International Conference on Information Visualization (pp. 128–131). Los Alamitos, CA: IEEE. Yamamoto, H. (1999). Case studies of producing mixed reality worlds. In Proceedings of the 1999 IEEE International Conference on Systems, Man, and Cybernetics (SMC’99) (pp. 42–47). Piscataway, NJ: IEEE.
AVATARS

Avatar derives from the Sanskrit word avatarah, meaning "descent," and refers to the incarnation—the descent into this world—of a Hindu god. A Hindu deity embodied its spiritual being when interacting with humans by appearing in either human or animal form. In the late twentieth century, the term avatar was adopted as a label for digital representations of humans in online or virtual environments. Although many credit Neal Stephenson with being the first to use avatar in this new sense in his seminal science fiction novel Snow Crash (1992), the term and concept actually appeared as early as 1984 in online multiuser dungeons, or MUDs (role-playing environments), and the concept, though not the term, appeared in works of fiction dating back to the mid-1970s. This entry explores concepts, research, and ethical issues related to avatars as digital human representations. (We restrict our discussion to digital avatars, excluding physical avatars such as puppets and robots. Currently, the majority of digital avatars consist of visual or auditory information, though there is no reason to restrict the definition as such.)

FIGURE 1. A representational schematic of avatars and embodied agents. When a given digital representation is controlled by a human, it is an avatar, and when it is controlled by a computational algorithm it is an embodied agent. Central to the current definition is the ability for real-time behavior, in that the digital representation exhibits behaviors by the agent or human as they are performed.
Agents and Avatars Within the context of human-computer interaction, an avatar is a perceptible digital representation whose behaviors reflect those executed, typically in real time, by a specific human being. An embodied agent, by contrast, is a perceptible digital representation whose behaviors reflect a computational algorithm designed to accomplish a specific goal or set of goals. Hence, humans control avatar behavior, while algorithms control embodied agent behavior. Both agents and avatars exhibit behavior in real time in accordance with the controlling algorithm or human actions. Figure 1 illustrates the fact that the actual digital form the digital representation takes has no bearing on whether it is classified as an agent or avatar: An algorithm or person can drive the same representation. Hence, an avatar can look nonhuman despite being controlled by a human, and an agent can look human despite being controlled by an algorithm. Not surprisingly, the fuzzy distinction between agents and avatars blurs for various reasons. Complete rendering of all aspects of a human’s actions (down to every muscle movement, sound, and scent) is currently technologically unrealistic. Only actions that can be tracked practically can be rendered analogously via an avatar; the remainder are rendered algorithmically (for example, bleeding) or not at all (minute facial expressions, for instance). In some cases avatar behaviors are under nonanalog human control; for example, pressing a button and not the act of smiling may be the way one produces an avatar smile. In such a case, the behaviors are at least slightly nonanalogous; the smile rendered by the button-triggered computer algorithm may be noticeably different from the actual human’s smile. Technically, then, a human representation can be and often is a hybrid of an avatar and an embodied agent, wherein the human controls the consciously generated verbal and nonverbal
gestures and an agent controls more mundane automatic behaviors. One should also distinguish avatars from online identities. Online identities are the distributed digital representations of a person. Humans are known to each other via e-mail, chat rooms, homepages, and other information on the World Wide Web. Consequently, many people have an online identity, constituted by the distributed representation of all relevant information, though they may not have an avatar.
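The hybrid control described above can be pictured as a small routing layer in which tracked human data overrides agent-generated defaults channel by channel. The sketch below is purely illustrative; the channel names, the time-based blink routine, and the update interface are invented for the example and do not describe any particular avatar system.

import time

class HybridRepresentation:
    def __init__(self, agent_channels):
        self.agent_channels = agent_channels   # channel name -> function of time
        self.tracked = {}                      # latest tracked human data per channel

    def update_tracking(self, channel, value):
        """Called by the tracking system whenever new human data arrives."""
        self.tracked[channel] = value

    def pose(self, t):
        """Current state of every channel: agent-generated behavior by default,
        overridden by human data wherever tracking is available."""
        state = {name: fn(t) for name, fn in self.agent_channels.items()}
        state.update(self.tracked)             # human input overrides the agent
        return state

# The agent blinks the avatar automatically; the human drives head orientation.
avatar = HybridRepresentation(
    {"eyelids": lambda t: "closed" if int(t * 2) % 10 == 0 else "open"})
avatar.update_tracking("head_yaw", 15.0)       # value from a head tracker
print(avatar.pose(time.time()))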
Realism Avatars can resemble their human counterparts along a number of dimensions, but the two that have received the most attention in the literature are behavioral realism (reflected in the number of a given human’s behaviors the avatar exhibits) and photographic realism (reflected in how many of a given human’s static visual features the avatar possesses). Behavioral realism is governed by the capability of the implementation system to track and render behavior in real time. Currently, real-time behavioral tracking technology, while improving steadily, does not meet expectations driven by popular culture; for example, online representations of the character Neo in The Matrix (1999), Hiro from Snow Crash (1992), or Case from Neuromancer (1984). In those fictional accounts, the movements and gestures of avatars and the represented humans are generally perceptually indistinguishable. However, in actual practice, complete real-time behavior tracking is extremely difficult. Although gesture tracking through various mechanical, optical, and other devices has improved, the gap between actual movements and avatar movements remains large, reducing behavioral realism at least in situations requiring real-time tracking and rendering, such as online social interaction (for example, collaborative virtual work groups). Fewer barriers exist for photographic realism. Three-dimensional scanners and photogrammetric software allow for the photographically realistic recreation of static, digital human heads and faces that cannot be easily distinguished from photographs and videos of the underlying faces. Nonetheless, the key challenge to avatar designers is creating faces and
bodies in sufficient detail to allow for the realistic rendering of behavior, which brings us back to behavioral realism. In summary, static avatars currently can look quite a bit like their human controllers but can only perform a small subset of a dynamic human's actions in real time.
Three views of a digital avatar modeled after a human head and face. This avatar is built by creating a three-dimensional mesh and wrapping a photographic texture around it. Photo courtesy of James J. Blascovich.
Current Use of Avatars Depending on how loosely one defines digital representation, the argument can be made that avatars are quite pervasive in society. For example, sound is transformed into digital information as it travels over fiber-optic cables and cellular networks. Consequently, the audio representation we perceive over phone lines is actually an avatar of the speaker. This example may seem trivial at first, but becomes less trivial when preset algorithms are applied to the audio stream to cause subtle changes in the avatar, for example, to clean and amplify the signal. This can only be done effectively because the voice is translated into digital information. More often, however, when people refer to avatars, they are referring to visual representations. Currently, millions of people employ avatars in online role-playing games as well as in chat rooms used for virtual conferencing. In these environments,
users interact with one another using either a keyboard or a joystick, typing messages back and forth and viewing one another’s avatars as they move around the digital world. Typically, these are avatars in the minimal sense of the word; behavioral and photographic realism is usually quite low. In the case of online role-playing games, users typically navigate the online world using “stock” avatars with limited behavioral capabilities.
Avatar Research Computer scientists and others have directed much effort towards developing systems capable of producing functional and effective avatars. They have striven to develop graphics, logic, and the tracking capabilities to render actual movements by humans on digital avatars with accuracy, and to augment those movements by employing control algorithms that supply missing tracking data or information about static visual features. Furthermore, behavioral scientists are examining how humans interact with one another via avatars. These researchers strive to understand social presence, or copresence, a term referring to the degree to which individuals respond socially towards others during interaction among their avatars, com-
pared with the degree to which they respond to actual humans. The behavioral scientist Jim Blascovich and his colleagues have created a theoretical model for social influence within immersive virtual environments that provides specific predictions for how the interplay of avatars' photographic and behavioral realism will affect people's sense of the relevance of the avatar-mediated encounter. They suggest that the inclusion of certain visual features is necessary if the avatar is to perform important, socially relevant behavioral actions. For example, an avatar needs to have recognizable eyebrows in order to lower them in a frown. Other data emphasize the importance of behavioral realism. In 2001 Jeremy Bailenson and his colleagues demonstrated that making a digital representation more photographically realistic does not increase its social presence in comparison with an agent that is more cartoon-like as long as both types of agents demonstrate realistic gaze behaviors. In findings presented in 2003, Maia Garau and her colleagues failed to demonstrate an overall advantage for more photographically realistic avatars; moreover, these researchers demonstrated that increasing the photographic realism of an avatar can actually cause a decrease in social presence if behavioral realism is not also increased. In sum, though research on avatars is still largely in its infancy, investigators are furthering our understanding of computer-mediated human interaction. As avatars become more commonplace, research geared towards understanding these applications should increase.
Ethical Issues Interacting via avatars allows for deceptive interactions. In 2003 Bailenson and his colleagues introduced the notion of transformed social interactions (TSIs). Using an avatar to interact with another person is qualitatively different from other forms of communication, including face-to-face interaction, standard telephone conversations, and videoconferencing. An avatar that is constantly rerendered in real time makes it possible for interactants to systematically filter their appearance and behaviors (or
to have systems operators do this for them) within virtual environments by amplifying or suppressing communication signals. TSI algorithms can impact interactants’ abilities to influence interaction partners. For example, system operators can tailor the nonverbal behaviors of online teachers lecturing to more than one student simultaneously within an immersive virtual classroom in ways specific to each student independently and simultaneously. Student A might respond well to a teacher who smiles, and Student B might respond well to a teacher with a neutral expression. Via an avatar that is rendered separately for each student, the teacher can be represented simultaneously by different avatars to different students, thereby communicating with each student in the way that is optimal for that student. The psychologist Andrew Beall and his colleagues have used avatars to employ such a strategy using eye contact; they demonstrated that students paid greater attention to the teacher using TSI. However, there are ethical problems associated with TSIs. One can imagine a dismal picture of the future of online interaction, one in which nobody is who they seem to be and avatars are distorted so much from the humans they represent that the basis for judging the honesty of the communication underlying social interactions is lost. Early research has demonstrated that TSIs involving avatars are often difficult to detect. It is the challenge to researchers to determine the best way to manage this issue as the use of avatars becomes more prevalent.
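To make the mechanism concrete, the following is a minimal, hypothetical Python sketch of a TSI-style filter; all function, field, and parameter names are invented for illustration and are not taken from the systems described above. The same tracked avatar state is re-rendered separately for each viewer, with selected nonverbal signals amplified or suppressed.

def transform_for_viewer(avatar_state, viewer_profile):
    # Copy the tracked state so each viewer receives an independent rendering.
    state = dict(avatar_state)
    # Amplify or suppress the tracked smile according to this viewer's profile.
    state["smile"] = min(1.0, avatar_state["smile"] * viewer_profile["smile_gain"])
    # Optionally redirect gaze so that this viewer receives apparent eye contact.
    if viewer_profile.get("force_eye_contact"):
        state["gaze_target"] = viewer_profile["viewer_id"]
    return state

teacher_state = {"smile": 0.4, "gaze_target": "student_a"}
students = [
    {"viewer_id": "student_a", "smile_gain": 1.5, "force_eye_contact": True},
    {"viewer_id": "student_b", "smile_gain": 0.2, "force_eye_contact": True},
]
renderings = [transform_for_viewer(teacher_state, s) for s in students]

Because each viewer's client draws its own copy of the avatar, such transformations are difficult for interaction partners to detect, which is precisely the ethical concern raised above.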
State of the Art Currently, there are many examples of humans interacting with one another via avatars. For the most part, these avatars are simplistic and behaviorally and photographically unrealistic. The exception occurs in research laboratories, in which scientists are beginning to develop and test avatars that are similar in appearance and behavior to their human counterparts. As avatars become more ubiquitous, we may see qualitative changes in social interaction due to the decoupling and transformation of behavior from human to avatar. While
there are ethical dangers in transforming behaviors as they pass from physical actions to digital representations, there are also positive opportunities both for users of online systems and for researchers in human-computer interaction. Jeremy N. Bailenson and James J. Blascovich See also Animation; Telepresence; Virtual Reality
FURTHER READING Badler, N., Phillips, C., & Webber, B. (1993). Simulating humans: Computer graphics, animation, and control. Oxford, UK: Oxford University Press. Bailenson, J. N., Beall, A. C., Blascovich, J., & Rex, C. (in press). Examining virtual busts: Are photogrammetrically generated head models effective for person identification? PRESENCE: Teleoperators and Virtual Environments. Bailenson, J. N., Beall, A. C., Loomis, J., Blascovich, J., & Turk, M. (in press). Transformed social interaction: Decoupling representation from behavior and form in collaborative virtual environments. PRESENCE: Teleoperators and Virtual Environments. Bailenson, J. N., Blascovich, J., Beall, A. C., & Loomis, J. M. (2001). Equilibrium revisited: Mutual gaze and personal space in virtual environments. PRESENCE: Teleoperators and Virtual Environments, 10, 583–598. Beall, A. C., Bailenson, J. N., Loomis, J., Blascovich, J., & Rex, C. (2003). Non-zero-sum mutual gaze in immersive virtual environments. In Proceedings of HCI International 2003 (pp. 1108–1112). New York: ACM Press.
Blascovich, J. (2001). Social influences within immersive virtual environments. In R. Schroeder (Ed.), The social life of avatars. Berlin, Germany: Springer-Verlag. Blascovich, J., Loomis, J., Beall, A. C., Swinth, K. R., Hoyt, C. L., & Bailenson, J. N. (2001). Immersive virtual environment technology as a methodological tool for social psychology. Psychological Inquiry, 13, 146–149. Brunner, J. (1975). The shockwave rider. New York: Ballantine Books. Cassell, J., & Vilhjálmsson, H. (1999). Fully embodied conversational avatars: Making communicative behaviors autonomous. Autonomous Agents and Multi-Agent Systems, 2(1), 45–64. Garau, M., Slater, M., Vinayagamoorthy, V., Brogni, A., Steed, A., & Sasse, M. A. (2003). The impact of avatar realism and eye gaze control on perceived quality of communication in a shared immersive virtual environment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 529–536). New York: ACM Press. Gibson, W. (1984). Neuromancer. New York: Ace Books. Morningstar, C., & Farmer, F. R. (1991). The lessons of Lucasfilm's Habitat. In M. Benedikt (Ed.), Cyberspace: First steps. Cambridge, MA: MIT Press. Slater, M., Howell, J., Steed, A., Pertaub, D., Garau, M., & Springel, S. (2000). Acting in virtual reality. ACM Collaborative Virtual Environments, CVE'2000, 103–110. Slater, M., Sadagic, A., Usoh, M., & Schroeder, R. (2000). Small group behaviour in a virtual and real environment: A comparative study. PRESENCE: Teleoperators and Virtual Environments, 9, 37–51. Stephenson, N. (1993). Snow crash. New York: Bantam Books. Thalmann, M. N., & Thalmann, D. (Eds.). (1999). Computer animation and simulation '99. Vienna, Austria: Springer-Verlag. Turk, M., & Kolsch, M. (in press). Perceptual interfaces. In G. Medioni & S. B. Kang (Eds.), Emerging topics in computer vision. Upper Saddle River, NJ: Prentice-Hall. Turkle, S. (1995). Life on the screen: Identity in the age of the Internet. New York: Simon & Schuster. Yee, N. (2002). Befriending ogres and wood elves: Understanding relationship formation in MMORPGs. Retrieved January 16, 2004, from http://www.nickyee.com/hub/relationships/home.html
BETA TESTING Beta testing, a stage in the design and development process of computer software and hardware, uses people outside a company, called "beta testers," to be sure that products function properly for typical end users outside the firm. Does a piece of software work under normal operating conditions? Can users navigate important features? Are there any critical programming flaws? These are the questions beta tests answer. The widespread use of beta tests warrants the examination of the process. Because the trade literature in computer programming focuses on the mechanics of designing, conducting, and interpreting beta tests, less has been written on the social implications of the growing use of beta testing. For example, as
will be discussed below, beta tests make it possible for end users to contribute to the design and development of a product and may represent a shift in the organization of the production process.
Definitions of Beta Testing A beta test is an early (preshipping or prelaunch), unofficial release of hardware or software that has already been tested within the company for major flaws. In theory, beta versions are very close to the final product, but in practice beta testing is often simply one way for a firm to get users to try new software under real conditions. Beta tests expose software and hardware to real-world configurations of computing platforms, operating systems, hardware, and users. For example, a beta test of a website is "the time period just before a site's official launch when a fully operational
product is used under normal operating conditions to identify any programming bugs or interface issues” (Grossnickle and Raskin 2001, 351). David Hilbert describes beta testing as a popular technique for evaluating the fit between application design and use. The term beta testing emerged from the practice of testing the unit, module, or components of a system first. This test was called alpha, whereas beta referred to the initial test of the complete system. Alpha and beta, derived from earlier nomenclature of hardware testing, were reportedly first used in the 1960s at IBM. Now alpha typically refers to tests conducted within the firm and beta refers to tests conducted externally. There is ample evidence that beta testing has increased in various forms over the last decade. James Daly, a technology business reporter and founder of the magazine Business 2.0, reports that by 1994, 50 percent of Fortune 1000 companies in the United States had participated in beta testing and 20 percent of those companies had used beta testing widely. However, the implementation—and the purposes—of beta testing vary by company. An online market-research handbook suggests that “for most ventures, standard beta-testing technique involves e-mailing friends, family, and colleagues with the URL of a new site” (Grossnickle and Raskin 2001, 351), which clearly would not produce a statistically representative sample of end users. A meta study of beta-test evaluations done more than a decade ago found that most beta testing was actually “driven by convenience or tradition rather than recognition of the costs and benefits involved” (Dolan and Matthews 1993, 318). In addition to determining whether or not a product works, a beta test can be used to increase a firm’s knowledge about the user base for its products, to support its marketing and sales goals, and to improve product support. More importantly, beta testers’ suggestions may be incorporated into the design of the product or used to develop subsequent generations of the product.
User Participation in Product Development Beta testing allows users to become involved in the product-development process. According to sociol-
ogists Gina Neff and David Stark, establishing a cycle of testing, feedback, and innovation that facilitates negotiations about what is made can make it possible to incorporate broader participation into the design of products and organizations. However, in practice, beta tests may be poorly designed to incorporate user feedback. Advice in the trade literature suggests that beta tests may not be constructed to provide more than “bug squashing and usability testing” (Grossnickle and Raskin n.d., 1). Beta tests also present firms with a chance to conduct research on their users and on how their products are used. Ideally, beta testers are statistically representative of typical product users. However, empirical research suggests that beta testers may not accurately reflect end-users because testers tend to have more technical training and hold more technical jobs than typical office workers.
Critical Views of Beta Testing The shift from total quality management to a testing-driven model of development means that "the generation and detection of error plays a renewed and desired role" in the production cycle (Cole 2002, 1052). With the rise of the acceptance of beta versions, companies and users alike may be more willing to tolerate flaws in widely circulated products, and end-users (including beta testers) may bear an increased burden for the number of errors that companies allow in these products. Some criticism has emerged that companies are "releasing products for beta testing that are clearly not ready for the market" and are exploiting free labor by "using beta testers as unpaid consultants to find the bugs in their products" (Garman 1996, 6). Users may also be frustrated by the continually updated products that beta testing can enable. The distribution of software in non-shrink-wrapped versions means that products are not clean end-versions but destabilized and constantly changing. Technological advances in distribution, such as online distribution of software products, "makes it possible to distribute products that are continually updateable and almost infinitely customizable—products that, in effect, never leave a type of beta phase" (Neff and Stark 2003, 177).
Benefits to Beta Testers Because they are willing to risk bugs that could potentially crash their computers, beta testers accrue benefits such as getting a chance to look at new features and products before other users and contributing to a product by detecting software bugs or minor flaws in programming. More than 2 million people volunteered to be one of the twenty thousand beta testers for a new version of Napster. There is also an increase of beta retail products—early and often cheaper versions of software that are more advanced than a traditional beta version but not yet a fully viable commercial release. Although Apple’s public beta release of OS X, its first completely new operating system since 1984, cost $29.95, thousands downloaded it despite reports that it still had many bugs and little compatible software was available. These beta users saw the long-awaited new operating system six months before its first commercial release, and Apple fans and the press provided invaluable buzz about OS X as they tested it. Many scholars suggest that the Internet has compressed the product-development cycles, especially in software, often to the extent that one generation of product software is hard to distinguish from the next. Netscape, for example, released thirty-nine distinct versions between the beta stage of Navigator 1.0 and the release of Communicator 4.0.
Future Developments Production is an "increasingly dense and differentiated layering of people, activities and things, each operating within a limited sphere of knowing and acting that includes variously crude or sophisticated conceptualizations of the other" (Suchman 2003, 62). Given this complexity, beta testing has been welcomed as a way in which people who create products can interact with those who use them. The Internet facilitates this communication, making the distribution of products in earlier stages of the product cycle both easier and cheaper; it also facilitates the incorporation of user feedback into the design process. While it is true that "most design-change ideas surfaced by a beta test are passed onto product development for incorporation into the next
generation of the product" (Dolan and Matthews 1993, 20), beta tests present crucial opportunities to incorporate user suggestions into the design of a product. Gina Neff See also Prototyping; User-Centered Design FURTHER READING Cole, R. E. (2002). From continuous improvement to continuous innovation. Total Quality Management, 13(8), 1051–1056. Daly, J. (1994, December). For beta or worse. Forbes ASAP, 36–40. Dolan, R. J., & Matthews, J. M. (1993). Maximizing the utility of consumer product testing: Beta test design and management. Journal of Product Innovation Management, 10, 318–330. Garman, N. (1996). Caught in the middle: Online professionals and beta testing. Online, 20(1), 6. Garud, R., Sanjay, J., & Phelps, C. (n.d.). Unpacking Internet time innovation. Unpublished manuscript, New York University, New York. Grossnickle, J., & Raskin, O. (2001). Handbook of online marketing research. New York: McGraw-Hill. Grossnickle, J., & Raskin, O. (n.d.). Supercharged beta test. Webmonkey: Design. Retrieved January 8, 2004, from http://hotwired.lycos.com/webmonkey Hilbert, D. M. (1999). Large-scale collection of application usage data and user feedback to inform interactive software development. Unpublished doctoral dissertation, University of California, Irvine. Hove, D. (Ed.). The free online dictionary of computing. Retrieved March 10, 2004, from http://www.foldoc.org Kogut, B., & Metiu, A. (2001). Open source software development and distributed innovation. Oxford Review of Economic Policy, 17(2), 248–264. Krull, R. (2000). Is more beta better? Proceedings of the IEEE Professional Communication Society, 301–308. Metiu, A., & Kogut, B. (2001). Distributed knowledge and the global organization of software development. Unpublished manuscript, Wharton School of Business, University of Pennsylvania, Philadelphia. Neff, G., & Stark, D. (2003). Permanently beta: Responsive organization in the Internet era. In P. Howard & S. Jones (Eds.), Society online. Thousand Oaks, CA: Sage. O'Mahony, S. (2002). The emergence of a new commercial actor: Community managed software projects. Unpublished doctoral dissertation, Stanford University, Palo Alto, CA. Retrieved on January 8, 2004, from http://opensource.mit.edu/ Raymond, E. (1999). The cathedral and the bazaar: Musings on Linux and open source from an accidental revolutionary. Sebastopol, CA: O'Reilly and Associates. Ross, R. (2002). Born-again Napster takes baby steps. Toronto Star, E04. Suchman, L. (2002). Located accountabilities in technology production. Retrieved on January 8, 2004, from http://www.comp.lancs.ac.uk/sociology/soc039ls.html. Centre for Science Studies, Lancaster University.
Techweb (n.d.). Beta testing. Retrieved on January 8, 2004, from http://www.techweb.com/encyclopedia Terranova, T. (2000). Free labor: Producing culture for the digital economy. Social Text 18(2), 33–58.
BRAILLE Access to printed information was denied to blind people until the late 1700s, when Valentin Haüy, having founded an institution for blind children in Paris, embossed letters in relief on paper so that his pupils could read them. Thus, more than three hundred years after the invention of the printing press by the German inventor Johannes Gutenberg, blind people were able to read but not to write.
Historical Background In 1819 a French army officer, Charles Barbier, invented a tactile reading system, using twelve-dot codes embossed on paper, intended for nighttime military communications. Louis Braille, who had just entered the school for the blind in Paris, learned of the invention and five years later, at age fifteen, developed a much easier-to-read six-dot code, providing sixty-three dot patterns. Thanks to his invention, blind people could not only read much faster, but also write by using the slate, a simple hand tool made of two metal plates hinged together between which a sheet of paper could be inserted and embossed through cell-size windows cut in the front plate. Six pits were cut in the bottom plate to guide a hand-held embossing stylus inside each window. In spite of its immediate acceptance by his fellow students, Braille’s idea was officially accepted only thirty years later, two years after his death in 1852. Eighty more years passed before English-speaking countries adapted the Braille system in 1932, and more than thirty years passed before development of the Nemeth code, a Braille system of scientific notation, in 1965. Braille notation was also adopted by an increasing number of countries. In spite of its immense benefits for blind people, the Braille system embossed on paper was too bulky
and too expensive to give its users unlimited and quick access to an increasing amount of printed material: books, newspapers, leaflets, and so forth. The invention of the transistor in 1947 by three U.S. physicists and of integrated circuits in the late 1960s provided the solution: electromechanical tactile displays. After many attempts, documented by numerous patents, electronic Braille was developed simultaneously during the early 1970s by Klaus-Peter Schönherr in Germany and Oleg Tretiakoff in France.
First Electronic Braille Devices In electronic Braille, Braille codes—and therefore Braille books—are stored in numerical binary format on standard mass storage media: magnetic tapes, magnetic disks, and so forth. In this format the bulk and cost of Braille books are reduced by several orders of magnitude. To be accessible to blind users, electronically stored Braille codes must be converted into raised-dot patterns by a device called an “electromechanical Braille display.” An electromechanical Braille display is a flat reading surface that has holes arranged in a Braille cell pattern. The hemispherical tip of a cylindrical pin can either be raised above the reading surface to show a Braille dot or lowered under the reading surface to hide the corresponding Braille dot. The Braille dot vertical motion must be controlled by some kind of electromechanical actuator. Two such displays were almost simultaneously put onto the market during the mid-1970s. The Schönherr Braille calculator had eight Braille cells of six dots each, driven by electromagnetic actuators and a typical calculator keyboard. The dot spacing had to be increased to about 3 millimeters instead of the standard 2.5 millimeters to provide enough space for the actuators. The Tretiakoff Braille notebook carried twelve Braille standard cells of six dots each, driven by piezoelectric (relating to electricity or electric polarity due to pressure, especially in a crystalline substance) reeds, a keyboard especially designed for blind users, a cassette tape digital recorder for Braille codes storage, and a communication port to transfer data between the Braille notebook and other electronic devices. Both devices were portable and operated on replaceable or
Enhancing Access to Braille Instructional Materials (ANS)—Most blind and visually impaired children attend regular school classes these days, but they are often left waiting for Braille and large-print versions of class texts to arrive while the other children already have the books. There are 93,000 students in kindergarten through 12th grade who are blind or have limited vision. Because this group represents a small minority of all schoolchildren, little attention has been paid to updating the cumbersome process of translating books into Braille, advocates said. Traditionally, publishers have given electronic copies of their books to transcribers, who often need to completely reformat them for Braille. Lack of a single technological standard and little communication between publishing houses and transcribers led to delays in blind students receiving instructional materials, experts said. The solution, said Mary Ann Siller, a national program associate for the American Foundation for the Blind who heads its Textbook and Instructional Materials Solutions Forum, is to create a single electronic file format and a national repository for textbooks that would simplify and shorten the production process. And that's exactly what is happening. In October, the American Printing House for the Blind in Louisville, Ky., took the first step in creating a repository by listing 140,000 of its own titles on the Internet. The group is now working to get publishers to deposit their text files, which transcribers could readily access. “Everyone is excited about it,” said Christine Anderson, director of resource services for the Kentucky organization. By having a central database with information about the files for all books available in Braille, large print, sound recording or computer files, costly duplications can be eliminated, she said. Pearce McNulty, director of publishing technology at Houghton Mifflin Co. in Boston, which is a partner in the campaign, said he is hopeful the repository will help solve the problem. Publishers and Braille producers historically
have misunderstood each other's business, he said, which led to frustration on both sides. Most blind children are mainstreamed into public school classrooms and receive additional help from a cadre of special teachers of the blind. Technology is also giving blind students more options. Scanning devices now download texts into Braille and read text aloud. Closed circuit television systems can enlarge materials for lowvision students. “These kids have very individual problems,” noted Kris Kiley, the mother of a 15-year-old who has limited vision.“It's not one size fits all. But if you don't teach them to read you've lost part of their potential.” New tools also bring with them new problems. For example, the new multimedia texts, which are available to students on CD-ROM, are completely inaccessible to blind students. And because graphics now dominate many books, lots of information, especially in math, does not reach those with limited vision. Simply recognizing the challenges faced by the blind would go a long way toward solving the problem, said Cara Yates. Yates, who recently graduated from law school, lost her sight at age 5 to eye cancer. She recalls one of her college professors who organized a series of tutors to help her “see” star charts when she took astrophysics. “A lot of it isn't that hard,” she said.“It just takes some thought and prior planning. The biggest problem for the blind is they can't get enough information. There's no excuse for it. It's all available.” Siller said the foundation also hoped to raise awareness about educational assessment; the importance of parental participation; better preparation for teachers; a core curriculum for blind students in addition to the sighted curriculum; and better Braille skills and a reduced caseload for teachers who often travel long distances to assist their students. Mieke H. Bomann Source: Campaign seeks to end blind students' wait for Braille textbooks. American News Service, December 16, 1999.
rechargeable batteries. The Tretiakoff Braille notebook, called “Digicassette,” measured about 20 by 25 by 5 centimeters. A read-only version of the Digicassette was manufactured for the U.S. National Library Services for the Blind of the Library of Congress.
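Because each cell has six dots that are either raised or lowered, a cell maps naturally onto six bits, which is why electronic Braille can be stored so compactly; the non-empty patterns number 2^6 - 1 = 63, matching the count given above. The short Python sketch below illustrates one such packing; the dot numbering follows the standard convention (dots 1-3 down the left column, 4-6 down the right), and the handful of letter patterns shown are for illustration only.

def cell_code(dots):
    # Pack a set of raised dot numbers (1-6) into a six-bit integer,
    # one bit per dot, so an entire cell fits in a single byte.
    code = 0
    for dot in dots:
        code |= 1 << (dot - 1)
    return code

# Standard patterns for the first few letters of the alphabet.
letters = {"a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5}}
codes = {ch: cell_code(dots) for ch, dots in letters.items()}
assert codes["c"] == 0b001001   # dots 1 and 4 raised
# With six dots there are 2**6 - 1 = 63 possible non-empty patterns.

A display driver reads such codes from storage and raises or lowers each pin according to the corresponding bit, which is essentially what the electromechanical Braille displays described above do.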
Personal Braille Printers Braille books consist of strong paper pages embossed with a Braille dot pattern by high-speed machines and then bound together much like ordinary books. A typical Braille page can carry up to twenty-five lines of forty Braille characters each and can be explored rapidly from left to right and from top to bottom by a blind reader. Electronic Braille displays consist generally of a single line comprising usually from eighteen to forty Braille characters to keep the displays portable and affordable for individual users. The shift from a full page to a single line delayed the acceptance of Braille displays in spite of their ability to provide easy and high-speed access to electronically stored information. Personal Braille printers, also made possible by the development of integrated circuits, appeared soon after the first personal computers to fill the gap between industrially produced Braille books and single-line Braille displays. Similar in concept to dot-matrix ink printers, personal Braille printers allowed a blind user to emboss on a sheet of strong paper a few lines of Braille characters per minute from Braille codes received from an external source.
Tactile Graphics Although the first personal Braille printers were designed to print only a regularly spaced Braille pattern—at .6 centimeter spacing between characters—some were outfitted with print heads capable of printing regularly spaced dots, in both the horizontal and the vertical directions, allowing the production of embossed tactile graphics. Although the first electronic Braille displays were built with horizontally stacked piezoelectric reeds, whose length—about 5 centimeters—prevented the juxtaposition of more than two Braille lines, the mid-1980s brought the first "vertical" piezoelectric Braille
cells used in Tretiakoff's extremely portable Braille notebook, the P-Touch. In these "vertical" cells each piezoelectric actuator was located underneath the corresponding tactile dot, allowing tactile dots to be arranged in arrays of regularly spaced rows and columns for the electronic display of graphics. These vertical cells were about twice as high as conventional "horizontal" cells and no less expensive. Multiline or graphic displays were thus made technically feasible but remained practically unaffordable at about $12 per dot for the end user as early as 1985.
Active versus Passive Reading Since Louis Braille, blind people have performed tactile reading by moving the tip of one to three fingers across a Braille page or along a Braille line while applying a small vertical pressure on the dot pattern in a direction and at a speed fully controlled by the reader, hence the name “active reading.” Louis Braille used his judgment to choose tactile dot height and spacing; research performed during the last thirty years has shown that his choices were right on the mark. Objective experiments, in which the electrical response of finger mechanoreceptors (neural end organs that respond to a mechanical stimulus, such as a change in pressure) is measured from an afferent (conveying impulses toward the central nervous system) nerve fiber, have shown that “stroking”—the horizontal motion of the finger—plays an essential role in touch resolution, the ability to recognize closely spaced dots. Conversely, if a blind reader keeps the tip of one or more fingers still on an array of tactile dots that is moved in various patterns up or down under the fingertips, this is called “passive reading.” Passive reading has been suggested as a way to reduce the number of dots, and therefore the cost of tactile displays, by simulating the motion of a finger across a wide array of dots by proper control of vertical dot motion under a still finger. The best-known example of this approach is the Optacon (Optical to Tactile Converter), invented during the mid-1970s by John Linvill to give blind people immediate and direct access to printed material. The Optacon generated a vibrating tactile image of a small area of an object viewed by its camera placed and moved against its surface.
Research has shown that touch resolution and reading speed are significantly impaired by passive reading, both for raised ordinary character shapes and for raised-dot patterns.
Current and Future Electronic Tactile Displays At the beginning of the twenty-first century, several companies make electromechanical tactile cells, which convert electrical energy into mechanical energy and vice versa, but the dominant actuator technology is still the piezoelectric (relating to electricity or electric polarity due to pressure, especially in a crystalline substance) bimorph reed, which keeps the price per tactile dot high and the displays bulky and heavy. The majority of electronic tactile displays are single-line, stand-alone displays carrying up to eighty characters or Braille computers carrying from eighteen to forty characters on a single line. Their costs range from $3,000 to more than $10,000. A small number of graphic tactile modules carrying up to sixteen by sixteen tactile dots are also available from manufacturers such as KGS in Japan. Several research-and-development projects, using new actuator technologies and designs, are under way to develop low-cost graphic tactile displays that could replace or complement visual displays in highly portable electronic communication devices and computers. Oleg Tretiakoff See also Sonification; Universal Access FURTHER READING American Council of the Blind. (2001). Braille: History and Use of Braille. Retrieved May 10, 2004, from http://www.acb.org/resources/braille.html Blindness Resource Center. (2002). Braille on the Internet. Retrieved May 10, 2004, from http://www.nyise.org/braille.html
BRAIN-COMPUTER INTERFACES A brain-computer interface (BCI), also known as a direct brain interface (DBI) or a brain-machine interface (BMI), is a system that provides a means for people to control computers and other devices directly with brain signals. BCIs fall into the category of biometric devices, which are devices that detect and measure biological properties as their basis of operation. Research on brain-computer interfaces spans many disciplines, including computer science, neuroscience, psychology, and engineering. BCIs were originally conceived in the 1960s, and since the late 1970s have been studied as a means of providing a communication channel for people with very severe physical disabilities. While assistive technology is still the major impetus for BCI research, there is considerable interest in mainstream applications as well, to provide a hands-free control channel that does not rely on muscle movement. Despite characterizations in popular fiction, BCI systems are not able to directly interpret thoughts or perform mind reading. Instead, BCI systems monitor and measure specific aspects of a user's brain signals, looking for small but detectable differences that signal the intent of the user. Most existing BCI systems depend on a person learning to control an aspect of brain signals that can be detected and measured. Other BCI systems perform control tasks, such as selecting letters from an alphabet, by detecting brain-signal reactions to external stimuli. Although BCIs can provide a communications channel, the information transmission rate is low compared with other methods of control, such as keyboard or mouse. The best reported user performance with current BCI systems is an information transfer rate of sixty-eight bits per minute, which roughly translates to selecting eight characters per minute from an alphabet. BCI studies to date have been conducted largely in controlled laboratory settings, although the field is beginning to target real-world environments for BCI use. A driving motivation behind BCI research has been the desire to help people with severe physical disabil-
ities such as locked-in syndrome, a condition caused by disease, stroke, or injury in which a person remains cognitively intact but is completely paralyzed and unable to speak. Traditional assistive technologies for computer access depend on small muscle movements, typically using the limbs, eyes, mouth, or tongue to activate switches. People with locked-in syndrome have such severely limited mobility that system input through physical movement is infeasible or unreliable. A BCI system detects tiny electrophysiological changes in brain signals to produce control instructions for a computer, thereby making it unnecessary for a user to have reliable muscle movement. Researchers have created applications for nondisabled users as well, including gaming systems and systems that allow hands-free, heads-up control of devices, including landing an aircraft. Brain signal interfaces have been used in psychotherapy to monitor relaxation responses and to teach meditation, although these are biofeedback rather than control interfaces.
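The information transfer rates quoted above are usually computed with the bit-rate measure common in the BCI literature (used, for example, by Wolpaw and colleagues; see Further Reading). Under the simplifying assumptions that each of N possible targets is equally likely and that errors are spread evenly over the wrong targets, the bits conveyed per selection are

B = \log_2 N + P \log_2 P + (1 - P) \log_2 \frac{1 - P}{N - 1}

where P is the probability of selecting the intended target; multiplying B by the number of selections per minute gives bits per minute. With perfect accuracy (P = 1) the expression reduces to \log_2 N.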
Brain Signal Characteristics Brain signals are recorded using two general approaches. The most ubiquitous approach is the electroencephalogram (EEG), a recording of signals representing activity over the entire surface of the brain or a large region of the brain, often incorporating the activity of millions of neurons. An EEG can be recorded noninvasively (without surgery) from electrodes placed on the scalp, or invasively (requiring surgery) from electrodes implanted inside the skull or on the surface of the brain. Brain signals can also be recorded from tiny electrodes placed directly inside the brain cortex, allowing researchers to obtain signals from individual neurons or small numbers of colocated neurons. Several categories of brain signals have been explored for BCIs, including rhythms from the sensorimotor cortex, slow cortical potentials, evoked potentials, and action potentials of single neurons. A BCI system achieves control by detecting changes in the voltage of a brain signal, the frequency of a signal, and responses to stimuli. The type of brain signal processed has implications for the nature of the user’s interaction with the system.
Sensorimotor Cortex Rhythms Cortical rhythms represent the synchronized activity of large numbers of brain cells in the cortex that create waves of electrical activity over the brain. These rhythms are characterized by their frequency of occurrence; for example, a rhythm occurring between eight and twelve times a second is denoted as mu, and one occurring between eighteen and twenty-six times a second is referred to as beta. When recorded over the motor cortex, these rhythms are affected by movement or intent to move. Studies have shown that people can learn via operant-conditioning methods to increase and decrease the voltage of these cortical rhythms (in tens of microvolts) to control a computer or other device. BCIs based on processing sensorimotor rhythms have been used to operate a binary spelling program and two-dimensional cursor movement.
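As a concrete illustration of the kind of feature such a system extracts, the following minimal Python sketch estimates power in the mu (8-12 Hz) and beta (18-26 Hz) bands of a single EEG channel with a plain periodogram; the 256 Hz sampling rate and the synthetic test signal are assumptions for the example, not details of any particular BCI.

import numpy as np

def band_power(window, fs, low, high):
    # Estimate signal power between low and high Hz from one window of samples.
    window = window - window.mean()                      # remove DC offset
    spectrum = np.abs(np.fft.rfft(window)) ** 2 / len(window)
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return spectrum[(freqs >= low) & (freqs <= high)].sum()

fs = 256                                   # assumed sampling rate in Hz
t = np.arange(fs) / fs                     # one second of samples
eeg = np.random.randn(fs) + 2 * np.sin(2 * np.pi * 10 * t)   # noise plus a 10 Hz rhythm
mu_power = band_power(eeg, fs, 8.0, 12.0)      # mu band
beta_power = band_power(eeg, fs, 18.0, 26.0)   # beta band

A sensorimotor-rhythm BCI computes features of roughly this kind continuously and lets the user learn, through feedback, to drive them up or down.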
Slow Cortical Potentials Slow cortical potentials (SCPs) are low-frequency shifts of cortical voltage that people can learn to control with practice. SCP shifts can last from a hundred milliseconds up to several seconds. SCP signals are centered over the frontal and central cortical areas, and are typically influenced by emotional or mental imagery, as well as imagined movement. SCPs are recorded from electrodes on the scalp; an operant conditioning approach with positive reinforcement is used to train users to alter their SCPs. Both nondisabled and locked-in subjects have been able to learn to affect their SCP amplitude, shifting it in either an electrically positive or negative direction. Locked-in subjects have used SCPs to communicate, operating a spelling program to write letters and even surfing with a simple web browser.
Evoked Potentials The brain's responses to stimuli can also be detected and used for BCI control. The P300 response occurs when a subject is presented with something familiar, such as a photo of a loved one, or of interest, such as a letter selected from an alphabet. The P300 response can be evoked by almost any stimulus, but most BCI systems employ either visual or auditory
stimuli. Screening for the P300 is accomplished through an “oddball paradigm,” where the subject views a series of images or hears a series of tones, attending to the one that is different from the rest. If there is a spike in the signal power over the parietal region of the brain approximately 300 milliseconds after the “oddball” or different stimulus, then the subject has a good P300 response. One practical application that has been demonstrated with P300 control is a spelling device. The device works by flashing rows and columns of an alphabet grid and averaging the P300 responses to determine which letter the subject is focusing on. P300 responses have also been used to enable a subject to interact with a virtual world by concentrating on flashing virtual objects until the desired one is activated.
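The row-and-column averaging step of such a speller can be sketched in a few lines of Python. The 6 x 6 grid, the assumption that a per-flash P300 score (for example, the mean amplitude in a window around 300 milliseconds after the flash) has already been extracted, and all variable names are illustrative rather than taken from any particular system.

import numpy as np

# Hypothetical 6 x 6 letter grid of the kind used by P300 spellers.
GRID = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                 list("STUVWX"), list("YZ1234"), list("56789_")])

def select_letter(flash_indices, flash_is_row, scores):
    # Average the P300 score over repeated flashes of each row and column,
    # then return the letter where the strongest row and column intersect.
    row_sum, row_n = np.zeros(6), np.zeros(6)
    col_sum, col_n = np.zeros(6), np.zeros(6)
    for idx, is_row, score in zip(flash_indices, flash_is_row, scores):
        if is_row:
            row_sum[idx] += score
            row_n[idx] += 1
        else:
            col_sum[idx] += score
            col_n[idx] += 1
    best_row = int(np.argmax(row_sum / np.maximum(row_n, 1)))
    best_col = int(np.argmax(col_sum / np.maximum(col_n, 1)))
    return GRID[best_row, best_col]

Averaging over many flashes is what makes the approach workable: a single P300 response is small relative to background activity, but the response to the attended row and column grows with repetition while unrelated activity averages out.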
Action Potentials of Single Neurons Another approach to BCI control is to record from individual neural cells via an implanted electrode. In one study, a tiny hollow glass electrode was implanted in the motor cortices of three locked-in subjects, enabling neural firings to be captured and recorded. Subjects attempted to control this form of BCI by increasing or decreasing the frequency of neural firings, typically by imagining motions of paralyzed limbs. This BCI was tested for controlling two-dimensional cursor movement in communications programs such as virtual keyboards. Other approaches utilizing electrode arrays or bundles of microwires are being researched in animal studies.
Interaction Styles With BCIs How best to map signals from the brain to the control systems of devices is a relatively new area of study. A BCI transducer is a system component that takes a brain signal as input and outputs a control signal. BCI transducers fall into three general categories: continuous, discrete, and direct spatial positioning. Continuous transducers produce a stream of values within a specified range. These values can be mapped to cursor position on a screen, or they can directly change the size or shape of an object (such as a progress bar). A user activates a continuous transducer
by learning to raise or lower some aspect of his or her brain signals, usually amplitude or frequency. Continuous transducers have enabled users to perform selections by raising or lowering a cursor to hit a target on a screen. A continuous transducer is analogous to a continuous device, such as a mouse or joystick, that always reports its current position. A discrete transducer is analogous to a switch device that sends a signal when activated. Discrete transducers produce a single value upon activation. A user typically activates a discrete transducer by learning to cause an event in the brain that can be detected by a BCI system. Discrete transducers have been used to make decisions, such as whether to turn in navigating a maze. Continuous transducers can emulate discrete transducers by introducing a threshold that the user must cross to “activate” the switch. Direct-spatial-positioning transducers produce a direct selection out of a range of selection choices. These transducers are typically associated with evoked responses, such as P300, that occur naturally and do not have to be learned. Direct transducers have been used to implement spelling, by flashing letters arranged in a grid repeatedly and averaging the brain signal response in order to determine which letter the user was focusing on. A direct spatial positioning transducer is analogous to a touch screen. BCI system architectures have many common functional aspects. Figure 1 shows a simplified model of a general BCI system design as described by Mason and Birch (2003). Brain signals are captured from the user by an acquisition method, such as scalp electrodes or implanted electrodes. The signals are then processed by an acquisition component called a feature extractor that identifies signal changes that could signify intent. A signal translator then maps the extracted signals to device controls, which in turn send signals to a control interface for a device, such as a cursor, a television, or a wheelchair. A display may return feedback information to the user. Feedback is traditionally provided to BCI users through both auditory and visual cues, but some testing methods allow for haptic (touch) feedback and electrical stimulation. Which feedback mechanisms are most effective usually depends on the abilities and disabilities of the user; many severely disabled
users have problems with vision that can be compensated for by adding auditory cues to BCI tasks. Some research teams have embraced usability testing to determine what forms of feedback are most effective; this research is under way.
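As a rough illustration of the transducer types and the Figure 1 pipeline described in this section, the sketch below passes a feature value through a continuous transducer and then through a threshold that makes it emulate a discrete switch; the class names, the mapping range, and the 0.8 threshold are assumptions made for the example, not part of any published BCI design.

class ContinuousTransducer:
    # Map a brain-signal feature (e.g., a band-power estimate) onto [0, 1].
    def __init__(self, low, high):
        self.low, self.high = low, high

    def translate(self, feature):
        value = (feature - self.low) / (self.high - self.low)
        return min(1.0, max(0.0, value))

class SwitchFromContinuous:
    # Emulate a discrete (switch-like) transducer by thresholding the
    # continuous value and firing only on the upward crossing.
    def __init__(self, transducer, threshold=0.8):
        self.transducer = transducer
        self.threshold = threshold
        self.active = False

    def update(self, feature):
        value = self.transducer.translate(feature)
        fired = value >= self.threshold and not self.active
        self.active = value >= self.threshold
        return fired

switch = SwitchFromContinuous(ContinuousTransducer(low=2.0, high=10.0))
for feature in [3.0, 6.0, 9.5, 9.8, 4.0]:   # successive feature estimates
    if switch.update(feature):
        print("switch activated")            # fires once, at the crossing

Firing only on the upward crossing, rather than whenever the value is high, is one simple way to keep a sustained signal from being interpreted as repeated activations.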
Applications for BCIs As the BCI field matures, considerable interest has arisen in applying BCI techniques to real-world problems. The principal goal has been to provide a communication channel for people with severe motor disabilities, but other applications may also be possible. Researchers are focusing on applications for BCI technologies in several critical areas:
Communication Making communication possible for a locked-in person is a critical and very difficult task. Much of the work in BCI technology centers around communication, generally in the form of virtual keyboards or iconic selection systems.
Environmental Control The ability to control the physical environment is also an important quality-of-life issue. Devices that permit environmental control make it possible for locked-in people to turn a TV to a desired channel and to turn lights on and off, as well as to control other physical objects in their world.
Internet Access The Internet has the potential to enhance the lives of locked-in people significantly. Access to the Internet can provide shopping, entertainment, education, and sometimes even employment opportunities to people with severe disabilities. Efforts are under way to develop paradigms for BCI interaction with Web browsers.
FIGURE 1. BCI system architecture
Neural Prosthetics A BCI application with significant implications is neural prostheses, which are orthoses or muscle stimulators controlled by brain signals. In effect, a neural prosthesis could reconnect the brain to paralyzed limbs, essentially creating an artificial nervous system. BCI controls could be used to stimulate muscles in paralyzed arms and legs to enable a subject to learn to move them again. Preliminary work on a neurally controlled virtual hand was reported in 2000 with implanted electrodes; a noninvasive BCI has been demonstrated to control a hand-grasp orthosis for a person whose hand was paralyzed. An SSVEP-based BCI (one driven by steady-state visual evoked potentials) has also been used to control a functional electrical stimulator to activate paralyzed muscles for knee extension.
Mobility Restoring mobility to people with severe disabilities is another area of research. A neurally controlled wheelchair could provide a degree of freedom and greatly improve the quality of life for locked-in people. Researchers are exploring virtual navigation tasks, such as virtual driving and a virtual apartment, as well as maze navigation. A noninvasive BCI was used to direct a remote-control vehicle, with the aim of eventually transferring driving skills to a power wheelchair.
Issues and Challenges for BCI There are many obstacles to overcome before BCIs can be used in real-world scenarios. The minute electrophysiological changes that characterize BCI controls are subject to interference from both electrical and cognitive sources. Brain-signal complexity and variability make detecting and interpreting changes very difficult except under controlled circumstances. Especially with severely disabled users, the effects of
medications, blood sugar levels, and stimulants such as caffeine can all be significant. Cognitive distractions such as ambient environmental noise can affect a person's ability to control a BCI in addition to increasing the cognitive load the person bears. Artifacts such as eye blinks or other muscle movements can mask control signals. BCIs and other biometric devices are also plagued by what is termed the Midas touch problem: How does the user signal intent to control when the brain is active constantly? Hybrid discrete/continuous transducers may be the answer to this problem, but it is still a major issue for BCIs in the real world. Another important issue currently is that BCI systems require expert assistance to operate. As BCI systems mature, the expectation is that more of the precise tuning and calibration of these systems may be performed automatically. Although BCIs have been studied since the mid-1980s, researchers are just beginning to explore their enormous potential. Understanding brain signals and patterns is a difficult task, but only through such an understanding will BCIs become feasible. Currently there is a lively debate on the best approach to acquiring brain signals. Invasive techniques, such as implanted electrodes, could provide better control through clearer, more distinct signal acquisition. Noninvasive techniques, such as scalp electrodes, could be improved by reducing noise and incorporating sophisticated filters. Although research to date has focused mainly on controlling output from the brain, recent efforts are also focusing on input channels. Much work also remains to be done on appropriate mappings to control signals. As work in the field continues, mainstream applications for BCIs may emerge, perhaps for people in situations of imposed disability, such as jet pilots experiencing high G-forces during maneuvers, or for people in situations that require hands-free, heads-up interfaces. Researchers in the BCI field are just beginning to explore the possibilities of real-world applications for brain signal control. Melody M. Moore, Adriane B. Davis, and Brendan Allison See also Physiology
FURTHER READING Bayliss, J. D., & Ballard, D. H. (2000). Recognizing evoked potentials in a virtual environment. Advances in Neural Information Processing Systems, 12, 3–9. Birbaumer, N., Kubler, A., Ghanayim, N., Hinterberger, T., Perelmouter, J., Kaiser, J., et al. (2000). The thought translation device (TTD) for completely paralyzed patients. IEEE Transactions on Rehabilitation Engineering, 8(2), 190–193. Birch, G. E., & Mason, S. G. (2000). Brain-computer interface research at the Neil Squire Foundation. IEEE Transactions on Rehabilitation Engineering, 8(2), 193–195. Chapin, J., & Nicolelis, M. (2002). Closed-loop brain-machine interfaces. In J. R. Wolpaw & T. Vaughan (Eds.), Proceedings of Brain-Computer Interfaces for Communication and Control: Vol. 2. Moving Beyond Demonstration, Program and Abstracts (p. 38). Rensselaerville, NY. Donchin, E., Spencer, K., & Wijesinghe, R. (2000). The mental prosthesis: Assessing the speed of a P300-based brain-computer interface. IEEE Transactions on Rehabilitation Engineering, 8(2), 174–179. Kandel, E., Schwartz, J., & Jessell, T. (2000). Principles of neural science (4th ed.). New York: McGraw-Hill Health Professions Division. Kennedy, P. R., Bakay, R. A. E., Moore, M. M., Adams, K., & Goldwaithe, J. (2000). Direct control of a computer from the human central nervous system. IEEE Transactions on Rehabilitation Engineering, 8(2), 198–202. Lauer, R. T., Peckham, P. H., Kilgore, K. L., & Heetderks, W. J. (2000). Applications of cortical signals to neuroprosthetic control: A critical review. IEEE Transactions on Rehabilitation Engineering, 8(2), 205–207. Levine, S. P., Huggins, J. E., BeMent, S. L., Kushwaha, R. K., Schuh, L. A., Rohde, M. M., et al. (2000). A direct-brain interface based on event-related potentials. IEEE Transactions on Rehabilitation Engineering, 8(2), 180–185. Mankoff, J., Dey, A., Moore, M., & Batra, U. (2002). Web accessibility for low bandwidth input. In Proceedings of ASSETS 2002 (pp. 89–96). Edinburgh, UK: ACM Press. Mason, S. G., & Birch, G. E. (In press). A general framework for brain-computer interface design. IEEE Transactions on Neural Systems and Rehabilitation Technology. Moore, M., Mankoff, J., Mynatt, E., & Kennedy, P. (2002). Nudge and shove: Frequency thresholding for navigation in direct brain-computer interfaces. In Proceedings of SIGCHI 2001 Conference on Human Factors in Computing Systems (pp. 361–362). New York: ACM Press. Perelmouter, J., & Birbaumer, N. (2000). A binary spelling interface with random errors. IEEE Transactions on Rehabilitation Engineering, 8(2), 227–232. Pfurtscheller, G., Neuper, C., Guger, C., Harkam, W., Ramoser, H., Schlögl, A., et al. (2000). Current trends in Graz brain-computer interface (BCI) research. IEEE Transactions on Rehabilitation Engineering, 8(2), 216–218. Tomori, O., & Moore, M. (2003). The neurally controllable Internet browser. In Proceedings of SIGCHI 03 (pp. 796–798). Wolpaw, J. R., Birbaumer, N., McFarland, D., Pfurtscheller, G., & Vaughan, T. (2002). Brain-computer interfaces for communication and control. Clinical Neurophysiology, 113, 767–791.
Wolpaw, J. R., McFarland, D. J., & Vaughan, T. M. (2000). Brain-computer interface research at the Wadsworth Center. IEEE Transactions on Rehabilitation Engineering, 8(2), 222–226.
BROWSERS For millions of computer users worldwide, a browser is the main interface with the World Wide Web, the world's foremost Internet information exchange service. Banking, shopping, keeping in contact with friends and family through e-mail, accessing news, looking words up in the dictionary, finding facts and solving puzzles—all of these activities and many more can be carried out on the Web. After the 1993 release of the first graphical user interface Web browser (NCSA Mosaic), the Web rapidly evolved from a small user base of scientists accessing a small set of interlinked text documents to approximately 600 million users accessing billions of webpages that make use of many different media, including text, graphics, video, audio, and animation. Economies of scale clearly apply to the effectiveness of Web browsers. Although there has been substantial work on the "webification" of sources of information (for example, educational course materials), there has been surprisingly little research into understanding and characterizing Web users' tasks, developing better browsers to support those tasks, and evaluating the browsers' success. But ethnographic and field studies can give us a contextual understanding of Web use, and longitudinal records of users' actions make possible long-term quantitative analyses, which in turn are leading to low-level work on evaluating and improving browsers.
What Do Web Users Do? The best way to understand fully what users do with their browsers, why they do it, and the problems they encounter is to observe and question users directly as they go about their everyday work. Unfortunately, this approach puts inordinate demands on researchers' time, so it is normally used only with small sets of participants. The study that best demonstrates
this ethnographic style of contextually immersed investigation is that of Michael Byrne and his colleagues (1999), who used their observations to create a taxonomy of Web-browsing tasks. Their method involved videotaping eight people whenever they used a browser in their work. The participants were encouraged to continually articulate their objectives and tasks, essentially thinking aloud. A total of five hours of Web use was captured on video and transcribed, and a six-part taxonomy of stereotypical tasks emerged:
1. Use information: activities relating to the use of information gathered on the Web;
2. Locate on page: searching for particular information on a page;
3. Go to: the act of trying to get the browser to display a particular URL (Web address);
4. Provide information: sending information to a website through the browser (for example, providing a billing address or supplying search terms to a search engine);
5. Configure browser: changing the configuration of the browser itself; and
6. React to environment: supplying information required for the browser to continue its operation (for example, responding to a dialog box that asks where a downloaded file should be saved).
Although these results were derived from only a few hours of Web use by a few people, they provide initial insights into the tasks and actions accomplished using a browser. Another approach to studying how people use the Web is to automatically collect logs of users' actions. The logs can then be analyzed to provide a wide variety of quantitative characterizations of Web use. Although this approach cannot provide insights into the context of the users' actions, it has the advantage of being implementable on a large scale. Months or years of logged data from dozens of users can be included in an analysis. Two approaches have been used to log Web-use data. Server-side logs collect data showing which pages were served to which IP address, allowing Web designers to see, for instance, which parts of their sites are particularly popular or unpopular. Unfortunately,
server-side logs only poorly characterize Web usability issues. The second approach uses client-side logs, which are established by equipping the Web browser (or a client-side browser proxy) so that it records the exact history of the user's actions with the browser. The first two client-side log analyses of Web use were both conducted in 1995 using the then-popular XMosaic browser. The participants in both studies were primarily staff, faculty, and students in university computing departments. Lara Catledge and James Pitkow logged three weeks of use by 107 users, while Linda Tauscher and Saul Greenberg analyzed five to six weeks of use by 23 users. The studies made several important contributions to our understanding of what users do with the Web. In particular, they revealed that link selection (clicking on links in the Web browser) accounts for approximately 52 percent of all webpage displays, that webpage revisitation (returning to previously visited webpages) is a dominant navigation behavior, that the Back button is very heavily used, and that other navigation actions, such as typing URLs, clicking on the Forward button, or selecting bookmarked pages, are only lightly used. Tauscher and Greenberg also analyzed the recurrence rate of page visits—"the probability that any URL visited is a repeat of a previous visit, expressed as a percentage" (Tauscher and Greenberg 1997, 112). They found a recurrence rate of approximately 60 percent, meaning that on average users had previously seen approximately three out of five pages visited. In a 2001 study, Andy Cockburn and Bruce McKenzie showed that the average recurrence rate had increased to approximately 80 percent—four out of five pages a user sees are ones he or she has seen previously. Given these high recurrence rates, it is clearly important for browsers to provide effective tools for revisitation. The 1995 log analyses suggested that people rarely used bookmarks, with less than 2 percent of user actions involving bookmark use. However, a survey conducted the following year (Pitkow, 1996) indicated that users at least had the intention of using bookmarks, with 84 percent of respondents having more than eleven bookmarks. Pitkow reported from a survey of 6,619 users that organizing retrieved information is one of the top three problems people report relating to using the Web (reported by 34 percent
of participants). Cockburn and McKenzie’s log analysis suggested that bookmark use had evolved, with users either maintaining large bookmark collections or almost none: The total number of bookmarks in participants’ collections ranged from 0 to 587, with a mean of 184 and a high standard deviation of 166. A final empirical characterization of Web use from Cockburn and McKenzie’s log analysis is that Web browsing is surprisingly rapid, with many or most webpages being visited for only a very brief period (less than a couple of seconds). There are two main types of browsing behavior that can explain the very short page visits. First, many webpages are simply used as routes to other pages, with users following known trails through the series of links that are displayed at known locations on the pages. Second, users can almost simultaneously display a series of candidate “interesting” pages in independent top-level windows by shift-clicking on the link or by using the link’s context menu. For example, the user may rapidly pop up several new windows for each of the top result links shown as a result of a Google search.
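Tauscher and Greenberg's recurrence-rate metric is simple to state as a computation over a client-side log. The following sketch, written in Python with a made-up list of visited URLs standing in for a real log, only illustrates the definition quoted above; it is not the instrumentation those studies actually used.

    def recurrence_rate(visits):
        """Percentage of page visits whose URL was already visited earlier in the log."""
        seen = set()
        revisits = 0
        for url in visits:
            if url in seen:
                revisits += 1
            seen.add(url)
        return 100.0 * revisits / len(visits) if visits else 0.0

    # Hypothetical client-side log: two of the five visits return to a page seen before.
    log = ["news.example", "mail.example", "news.example", "sports.example", "news.example"]
    print(recurrence_rate(log))  # 40.0

Run over months of logged visits, the same computation yields the 60 percent and 80 percent figures reported in the studies above.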
Improving the Web Browser User Interface The studies reported above inform designers about what users do with the current versions of their browsers. Naturally, there is a chicken-and-egg problem in that stereotypical browser use is strongly affected by the support provided by browsers. Browser interfaces can be improved both by designing to better support the stereotypes and by innovative design that enables previously difficult or impossible tasks. The problems of hypertext navigation were well known long before the Web. As users navigate through the richly interconnected information nodes of the Web (or any hypertextual information space), their short-term memory becomes overloaded with the branches made, and they become "lost in hyperspace." In the late 1980s many researchers were experimenting with graphical depictions of hypertext spaces in order to help users orient themselves: For example, Apple's popular HyperCard system provided a thumbnail graphical representation of the
recent cards displayed, and gIBIS provided a network diagram of design argumentation. Soon after the Web emerged in 1991, similar graphical techniques were being constructed to aid Web navigation. Example systems included MosaicG, which provided thumbnail images of the visited pages arranged in a tree hierarchy; WebNet, which drew a hub-and-spoke representation of the pages users visited and the links available from them; and the Navigational View Builder, which could generate a wide variety of two-dimensional and three-dimensional representations of the Web. Despite the abundance of tools that provide graphical representations of the user's history, none have been widely adopted. Similarly, log analyses of Web use show that users seldom use the history tools provided by all of the main Web browsers. Given that Web revisitation is such a common activity, why are these history tools so lightly used? The best explanation seems to be that these tools are not needed most of the time, so they are unlikely to be on permanent display, where they would compete with other applications for screen real estate. Once iconified, the tools are not ready to hand, and users must make a deliberate effort to think of using them, display them, orient themselves within the information shown, and make appropriate selections. While the projects above focus on extending browser functionality, several other research projects have investigated rationalizing and improving browsers' current capabilities. The interface mechanisms for returning to previously visited pages have been a particular focus. Current browsers support a wide range of disparate facilities for revisitation, including the Back and Forward buttons and menus, menus that allow users to type or paste the URLs of websites the user wants to visit, the history list, bookmarks or lists of favorites, and the links toolbar. Of these utilities, log analyses suggest that only the Back button is heavily used. The WebView system and Glabster both demonstrate how history facilities and bookmarks can be enhanced and integrated within the Back menu, providing a powerful and unified interface for all revisitation tasks. Both WebView and Glabster automatically capture thumbnail images of webpages, making it easier for the user to identify previously visited pages from the set displayed within the Back menu.
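The Back button's behavior, discussed in the next paragraph, follows a stack discipline: navigating to a new page from partway back in the history discards the pages that Forward could have reached. A minimal sketch of that conventional stack model, using hypothetical class and method names rather than any particular browser's code, shows why previously visited pages can become unreachable through Back.

    class StackHistory:
        """Toy model of the stack-based Back/Forward behavior of mainstream browsers."""
        def __init__(self, home):
            self.back_stack = []     # pages reachable with Back
            self.forward_stack = []  # pages reachable with Forward
            self.current = home

        def visit(self, url):
            # Following a link (or typing a URL) pushes the current page onto the
            # back stack and discards everything on the forward stack.
            self.back_stack.append(self.current)
            self.forward_stack.clear()
            self.current = url

        def back(self):
            if self.back_stack:
                self.forward_stack.append(self.current)
                self.current = self.back_stack.pop()
            return self.current

        def forward(self):
            if self.forward_stack:
                self.back_stack.append(self.current)
                self.current = self.forward_stack.pop()
            return self.current

    history = StackHistory("home")
    history.visit("a"); history.visit("b")
    history.back()       # back at "a"
    history.visit("c")   # "b" is now reachable through neither Back nor Forward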
Another problem users have with current Web browsers is that they misunderstand the behavior of the Back button. An experiment showed that eight of eleven computer scientists incorrectly predicted the behavior of Back in simple Web navigation tasks. The problem stems from users believing that Back provides access to a complete history of previously visited pages, rather than the stack-based subset that can actually be accessed. Cockburn and his colleagues describe and evaluate a true history-based Back system, but their results indicate that the pros and cons of the technique are closely balanced, such that the advantages do not outweigh the difficulties inherent in making a switch from current behavior. The World Wide Web revolution has been a great success in bringing computer technology to the masses. The widespread adoption and deployment of the Web and the browsers used to access it happened largely without input from researchers in human-computer interaction. Those researchers are now improving their understanding of the usability issues associated with Web browsers and browsing. As the technology and the understanding mature, we can expect browser interfaces to improve, enhancing the efficiency of Web navigation and reducing the sensation of becoming lost in the Web. Andy Cockburn See also Mosaic; Website Design FURTHER READING Abrams, D., Baecker, R., & Chignell, M. (1998). Information archiving with bookmarks: Personal Web space construction and organization. In Proceedings of CHI'98 Conference on Human Factors in Computing Systems (pp. 41–48). New York: ACM Press. Ayers, E., & Stasko, J. (1995). Using graphic history in browsing the World Wide Web. In Proceedings of the Fourth International World Wide Web Conference (pp. 451–459). Retrieved January 19, 2004, from http://www.w3j.com/1/ayers.270/paper/270.html Bainbridge, L. (1991). Verbal protocol analysis. In J. Wilson & E. Corlett (Eds.), Evaluation of human work: A practical ergonomics methodology (pp. 161–179). London: Taylor and Francis. Byrne, M., John, B., Wehrle, N., & Crow, D. (1999). The tangled Web we wove: A taskonomy of WWW use. In Proceedings of CHI'99 Conference on Human Factors in Computing Systems (pp. 544–551). New York: ACM Press.
Catledge, L., & Pitkow, J. (1995). Characterizing browsing strategies in the World Wide Web. In Computer Networks and ISDN Systems: Proceedings of the Third International World Wide Web Conference, 27, 1065–1073. Chi, E., Pirolli, P., & Pitkow, J. (2000). The scent of a site: A system for analyzing and predicting information scent, usage, and usability of a Web site. In Proceedings of CHI'2000 Conference on Human Factors in Computing Systems (pp. 161–168). New York: ACM Press. Cockburn, A., Greenberg, S., McKenzie, B., Jason Smith, M., & Kaasten, S. (1999). WebView: A graphical aid for revisiting Web pages. In Proceedings of the 1999 Computer Human Interaction Specialist Interest Group of the Ergonomics Society of Australia (OzCHI'99) (pp. 15–22). Retrieved January 19, 2004, from http://www.cpsc.ucalgary.ca/Research/grouplab/papers/1999/99-WebView.Ozchi/Html/webview.html Cockburn, A., & Jones, S. (1996). Which way now? Analysing and easing inadequacies in WWW navigation. International Journal of Human-Computer Studies, 45(1), 105–129. Cockburn, A., & McKenzie, B. (2001). What do Web users do? An empirical analysis of Web use. International Journal of Human-Computer Studies, 54(6), 903–922. Cockburn, A., McKenzie, B., & Jason Smith, M. (2002). Pushing Back: Evaluating a new behaviour for the Back and Forward buttons in Web browsers. International Journal of Human-Computer Studies, 57(5), 397–414. Conklin, J. (1988). Hypertext: An introduction and survey. In I. Greif (Ed.), Computer supported cooperative work: A book of readings (pp. 423–475). San Mateo, CA: Morgan Kaufmann.
Conklin, J., & Begeman, M. (1988). gIBIS: A hypertext tool for exploratory discussion. ACM Transactions on Office Information Systems, 6(4), 303–313. Coulouris, G., & Thimbleby, H. (1992). HyperProgramming. Wokingham, UK: Addison-Wesley Longman. Fischer, G. (1998). Making learning a part of life: Beyond the 'gift-wrapping' approach of technology. In P. Alheit & E. Kammler (Eds.), Lifelong learning and its impact on social and regional development (pp. 435–462). Bremen, Germany: Donat Verlag. Kaasten, S., & Greenberg, S. (2001). Integrating Back, History and bookmarks in Web browsers. In Proceedings of CHI'01 (pp. 379–380). New York: ACM Press. Mukherjea, S., & Foley, J. (1995). Visualizing the World Wide Web with the Navigational View Builder. Computer Networks and ISDN Systems, 27(6), 1075–1087. Nielsen, J. (1990). The art of navigating through hypertext: Lost in hyperspace. Communications of the ACM, 33(3), 296–310. Pirolli, P., Pitkow, J., & Rao, R. (1996). Silk from a sow's ear: Extracting usable structures from the Web. In R. Bilger, S. Guest, & M. J. Tauber (Eds.), Proceedings of CHI'96 Conference on Human Factors in Computing Systems (pp. 118–125). New York: ACM Press. Pitkow, J. (n.d.). GVU's WWW User Surveys. Retrieved January 19, 2004, from http://www.gvu.gatech.edu/user_surveys/ Tauscher, L., & Greenberg, S. (1997). How people revisit Web pages: Empirical findings and implications for the design of history systems. International Journal of Human-Computer Studies, 47(1), 97–138.
CATHODE RAY TUBES The cathode ray tube (CRT) has been the dominant display technology for decades. Products that utilize CRTs include television and computer screens in the consumer and entertainment market, and electronic displays for medical and military applications. CRTs are of considerable antiquity, originating in the late nineteenth century when William Crookes (1832–1919) studied the effects of generating an electrical discharge in tubes filled with various gases. (The tubes were known as discharge tubes.) It was over thirty years later in 1929 that the CRT was utilized to construct actual imagery for television applications by Vladimir Zworykin (1889–1982) of Westinghouse Electric Corporation. The further development and optimization of the CRT for televi-
sion and radar over the next fifty years provided the impetus for continual improvements. With the emergence of desktop computing in the 1980s, the CRT market expanded, and its performance continued to evolve. As portability has come to be more and more important in the consumer electronics industry, the CRT has been losing ground. The development of flat panel technologies such as liquid crystal displays and plasma displays for portable products, computer screens, and television makes the CRT very vulnerable. Because of the CRT’s maturity and comparatively low cost, however, its application will be assured for many years to come.
How Cathode Ray Tubes Work A CRT produces images when an electron beam is scanned over a display screen in a pattern that is
determined by a deflection mechanism. The display screen is coated with a thin layer of phosphor that luminesces under the bombardment of electrons. By this means the display screen provides a two-dimensional visual display, corresponding to information contained in the electron beam. There are four major components of a CRT display: the vacuum tube, the electron source (known as the electron "gun"), the deflection mechanism, and the phosphor screen. The tube (sometimes referred to as a bulb) is maintained at a very high vacuum level to facilitate the flow of electrons in the electron beam. The front surface of the tube defines the visual area of the display, and it is this front surface that is covered with phosphor, which is in turn covered by the anode (the electron-collecting electrode). The tube has three main sections: the front surface, the funnel, and the neck. The entire tube is typically made of glass so that very high vacuums can be sustained, but in some cases the funnel and neck can be fabricated from metal or ceramic. For demanding applications that require additional robustness, an implosion-proof faceplate may be secured to the front tube surface for durability. This typically comes at the expense of optical throughput, but antireflection coatings are often used to improve contrast and to compensate for the transmission losses. The electron source, a hot cathode at the far end from the front surface, generates a high-density electron beam whose current can be modulated. The electron beam can be focused or deflected by electrostatic or magnetic methods, and this deflection steers the electron beam to designated positions on the front surface to create visual imagery. The phosphor screen on the inside front surface of the tube converts the electron beam into visible light output. On top of the phosphor particles is the thin layer of conducting material (usually aluminum) that serves as the anode, drawing the electrons toward the screen. The directions on how to manipulate the electron stream are contained in an electronic signal called a composite video signal. This signal contains information on how intense the electron beam
must be and on when the beam should be moved across different portions of the screen.
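As a software illustration only (the real process is analog circuitry driving deflection coils and beam current), the raster pattern traced by the deflection mechanism and the intensity information carried by the video signal can be modeled with two nested loops. The frame size and the intensity function in this Python sketch are arbitrary assumptions, not properties of any actual CRT.

    WIDTH, HEIGHT = 640, 480  # assumed raster dimensions

    def scan_frame(intensity):
        """Visit every screen position in raster order (left to right, top to bottom),
        the pattern the deflection system traces, and record the beam intensity
        prescribed for that position, which determines the phosphor's luminance."""
        screen = [[0.0] * WIDTH for _ in range(HEIGHT)]
        for y in range(HEIGHT):        # vertical deflection selects the scan line
            for x in range(WIDTH):     # horizontal deflection sweeps along the line
                screen[y][x] = intensity(x, y)
        return screen

    frame = scan_frame(lambda x, y: x / WIDTH)  # a simple left-to-right brightness ramp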
Displaying Color One of the most important tasks of the modern display is rendering full-color images. Shadow-masking configurations are by far the most successful way to create full-color images in CRT displays. The shadow mask CRT typically uses three electron beams deflected by one coil (the simplest configuration). The electron beams traverse a perforated metal mask (shadow mask) before impinging on selected phosphor materials (there are three sorts of phosphor that can emit red, green, and blue light). The shadow mask apertures are typically configured as stripes, circles, or slots. The arrangement of the electron optics and the deflection system is such that three electron beams converge onto the screen after passing through the shadow mask, each beam impinging on one phosphor, which, when bombarded with electrons, emits red, green, or blue visible light. The red, green, and blue phosphors are spatially arranged on the viewing screen. The Trinitron design, invented by Sony Corporation, uses vertical stripe arrays rather than circular or slotted apertures. These arrays alternate red, green, and blue when viewed from the faceplate side of the tube. There is a single electron source, rather than three, which eliminates the problem of beam convergence. The Trinitron also has superior resolution in the vertical direction since its apertures are not limited in that direction. The only negative attribute of the Trinitron is that the mask is not self-supporting, which ultimately limits the size of the vacuum tube. The advantages of CRT displays include their maturity, their well-understood manufacturing process, their ability to provide full-color and high-resolution imaging, and the comparatively low cost for high information content. CRTs are vulnerable to competition from liquid crystal displays and plasma displays (both of which make possible flat-panel displays), however, because CRTs are bulky, heavy, and big power consumers. In addition to the utility of flat-panel displays for portable applications for which CRTs could never be considered, flat-panel
displays have made significant inroads into desktop monitors and large-area televisions. As the price of flat-panel displays continues to plummet, they are certain to capture even more of the CRT market in the future. Gregory Philip Crawford See also Liquid Crystal Displays FURTHER READING Castellano, J. (1992). Handbook of display technology. San Diego, CA: Academic Press. Keller, P. A. (1997). Electronic display measurement. New York: Wiley SID. MacDonald, L. W., & Lowe, A. C. (1997). Display systems: Design and applications. New York: Wiley SID.
CAVE The CAVE is a virtual reality (VR) room, typically 3 by 3 by 3 meters in size, whose walls, floor, and sometimes ceiling are made entirely of computer-projected screens. Viewers wear a six-degree-of-freedom location sensor called a tracker so that when they move within the CAVE, correct viewer-centered perspectives and surround-stereo projections are produced fast enough to give a strong sense of 3D visual immersion. Viewers can examine details of a complex 3D object simply by walking up to and into it. The CAVE was invented in 1991 for two reasons: to help scientists and engineers achieve scientific insight without compromising the color and distortion-free resolution available then on workstations and to create a medium worthy of use by fine artists. CAVE viewers see not only projected computer-generated stereo scenes but also their own arms and bodies, and they can interact easily with other people. The CAVE uses active stereo, which produces different perspective views for the left and right eyes of the viewer in synchrony with special electronic shutter glasses that go clear in front of the left eye when the
left eye image should be seen by the left eye and are opaque otherwise. Similarly, the right eye gets the right image. Images need to be generated at 100 to 120 hertz so each eye can get a flicker-free 50- to 60-hertz display. All screens need to be synchronized so that each eye sees the same phase stereo image on every screen, a requirement that until 2003 meant that only the most expensive SGI (Silicon Graphics, Inc.) computer graphics systems could be used. Synchronizing PC graphics cards now reduce the cost of CAVE computing and image generation by 90 percent. The CAVE’s projection onto the screens does not need to keep up with the viewer’s head motion nearly as much as is required in a head-mounted VR display (HMD), which needs to have small screens attached in front of the eyes. Of course, any movement of the viewer’s body within the space requires updating the scene perspective, but in normal investigative use, the CAVE needs to keep up only with body motion, not head rotation; the important result is that the delay of trackers is dramatically less of a problem with CAVEs than with HMDs. In addition, although only one viewer is tracked, other people can share the CAVE visuals at the same time; their view is also in stereo and does not swing with the tracked user’s head rotation, although their perspective is still somewhat skewed. Often the person
in the role of guide or instructor handles the controls (a 3D mouse called "Wanda") and the student wears the tracker to get the best view, a mode of usage that is quite functional for both learning and demonstrations. The CAVE uses a rear-screen projection for the walls so the viewer does not block the light and cast shadows. The floor is typically projected down from the top, which creates a small shadow around the viewer's feet. A CAVE with three walls and a floor minimally requires a 13- by 10-meter space with a ceiling 4.5 meters high. Six-sided CAVEs have rear projections from every direction, which require much higher ceilings, more elaborate support structures, and floor screens that can withstand the weight of several people. Someday, 3-meter-square flat-panel displays suspended as a ceiling, positioned vertically as walls, and tough enough to walk on would allow CAVEs in normal rooms. However, current-technology panel displays refresh too slowly to use shutter glasses, so they must be otherwise modified for stereo display. The Varrier method involves placing a barrier screen so that the computed views to each eye are seen through perfectly placed thin black bars; that is, the correctly segmented image is placed in dynamic perspective behind the barrier in real time. Varrier viewers wear no special glasses since the image separation is performed spatially by the barrier screen.
The CAVE is a multi-person, room-sized, high-resolution, 3D video and audio environment. Photo courtesy of National Center for Supercomputing Applications.
The Personal Augmented Reality Immersive System (PARIS) has a half-silvered mirror at an angle in front of the user. The screen, above the desk facing down, superimposes a stereo image on the user's hands working beyond the mirror. Photo courtesy of the Electronic Visualization Laboratory.
CAVE Variants
Variants of the CAVE include the ImmersaDesk, a drafting-table-size rear-projected display with a screen set at an angle so that the viewer can look down as well as forward into the screen; looking down gives a strong sense of being in the scene. PARIS uses a similarly angled half-silvered screen that is projected from the top; the viewer's hands work under the screen and are superimposed on the 3D graphics (rather than blocking them, as with normal projections). The CAVE originally used three-tube stereo projectors with special phosphors to allow a 100- to 120-hertz display without ghosting from slow green phosphor decay. Tube projectors are now rather dim by modern standards, so the CAVE was rebuilt to use bright digital mirror-based projectors, like those used in digital cinema theaters. Projectors require significant alignment and maintenance; wall-sized flat-panel screens will be welcomed since they need no alignment and have low maintenance and no projection distance. The GeoWall, a passive stereo device, works differently, polarizing the output of two projectors onto a single screen. Viewers wear the throw-away polarized glasses used in 3D movies to see stereo. In addition to visual immersion, the CAVE has synchronized synthetic and sampled surround sound. The PARIS system features a PHANTOM tactile device, which is excellent for manipulating objects the size of a bread box or smaller.
CAVEs for Tele-Immersion The CAVE was originally envisioned as a tele-immersive device to enable distance collaboration between viewers immersed in their computer-generated scenes, a kind of 3D phone booth. Much work has gone into building and optimizing ultrahigh-speed computer networks suitable for sharing gigabits of information across a city, region, nation, or indeed, the world. In fact, scientists, engineers, and artists in universities, museums, and commercial manufacturing routinely use CAVEs and variants in this manner. Tom DeFanti and Dan Sandin
See also Virtual Reality; Telepresence; Three-Dimensional Graphics FURTHER READING Cruz-Neira, C., Sandin, D., & DeFanti, T. A. (1993). Virtual reality: The design and implementation of the CAVE. Proceedings of the SIGGRAPH 93 Computer Graphics Conference, USA, 135–142. Czernuszenko, M., Pape, D., Sandin, D., DeFanti, T., Dawe, G. L., & Brown, M. D. (1997). The ImmersaDesk and Infinity Wall projection-based virtual reality displays [Electronic version]. Computer Graphics, 31(2), 46–49. DeFanti, T. A., Brown, M. D., & Stevens, R. (Eds.). (1996). Virtual reality over high-speed networks. IEEE Computer Graphics & Applications, 16(4), 14–17, 42–84. DeFanti, T., Sandin, D., Brown, M., Pape, D., Anstey, J., Bogucki, M., et al. (2001). Technologies for virtual reality/tele-immersion applications: Issues of research in image display and global networking. In R. Earnshaw, et al. (Eds.), Frontiers of Human-Centered Computing, Online Communities and Virtual Environments (pp. 137–159). London: Springer-Verlag. Johnson, A., Leigh, J., & Costigan, J. (1998). Multiway tele-immersion at Supercomputing '97. IEEE Computer Graphics and Applications, 18(4), 6–9. Johnson, A., Sandin, D., Dawe, G., Qiu, Z., Thongrong, S., & Plepys, D. (2000). Developing the PARIS: Using the CAVE to prototype a new VR display [Electronic version]. Proceedings of IPT 2000, CD-ROM. Korab, H., & Brown, M. D. (Eds.). (1995). Virtual Environments and Distributed Computing at SC'95: GII Testbed and HPC Challenge Applications on the I-WAY. Retrieved November 5, 2003, from http://www.ncsa.uiuc.edu/General/Training/SC95/GII.HPCC.html Lehner, V. D., & DeFanti, T. A. (1997). Distributed virtual reality: Supporting remote collaboration in vehicle design. IEEE Computer Graphics & Applications (pp. 13–17). Leigh, J., DeFanti, T. A., Johnson, A. E., Brown, M. D., & Sandin, D. J. (1997). Global tele-immersion: Better than being there. ICAT '97, 7th Annual International Conference on Artificial Reality and Tele-Existence, pp. 10–17. University of Tokyo, Virtual Reality Society of Japan. Leigh, J., Johnson, A., Brown, M., Sandin, D., & DeFanti, T. (1999). Tele-immersion: Collaborative visualization in immersive environments. IEEE Computer Graphics & Applications (pp. 66–73). Sandin, D. J., Margolis, T., Dawe, G., Leigh, J., & DeFanti, T. A. (2001). The Varrier™ auto-stereographic display. Proceedings of Photonics West 2001: Electronics Imaging, SPIE. Retrieved November 5, 2003, from http://spie.org/web/meetings/programs/pw01/home.html Stevens, R., & DeFanti, T. A. (1999). Tele-immersion and collaborative virtual environments. In I. Foster & C. Kesselman (Eds.), The grid: Blueprint for a new computing infrastructure (pp. 131–158). San Francisco: Morgan Kaufmann.
CHATROOMS Defined most broadly, chatrooms are virtual spaces where conversations occur between two or more users in a synchronous or nearly synchronous fashion. Many different types of chat spaces exist on the Internet. One type is Internet Relay Chat (IRC), a multiuser synchronous chat line often described as the citizens band radio of the Internet. Another type of virtual space where computer-mediated communication (CMC) takes place is Multi-User Domains (MUDs, sometimes called “Multi-User Dungeons,” because of their origin as virtual locations for a Dungeons and Dragons role-playing type of networked gaming). MUDs were initially distinguished from IRC by their persistence, or continued existence over time, and their malleability, where users may take part in the building of a community or even a virtual world, depending on the tools and constraints built into the architecture of their particular MUD. Web-based chatrooms are a third type of chat space where users may converse synchronously in a persistent location hosted by Internet Service Providers (ISPs) or websites, which may be either large Web portals like Yahoo.com or small individual sites.
UNIX was not designed to stop people from doing stupid things, because that would also stop them from doing clever things. —Doug Gwyn
Another type of chat function on the Internet is instant messaging (IM), which allows users to chat with individuals (or invited groups) in “real time,” provided that they know a person’s user name. Instant messaging is distinguished from other chat functions in that it is often used to hold multiple, simultaneous, private one-on-one chats with others. IM is also unusual in that the user can also monitor a list of online friends to see when they are logged in to the instant messaging service. IM chats also differ from other types of chatrooms in that they are not persistent—that is, a user cannot log in to the same chat after the last chatterer has logged off. Instant
message chats are more likely to occur among a group of people with some personal or professional connection than among a group of strangers with only one shared interest who happen to be in the same virtual space at the same time.
History of Internet Chat The first function that allowed for synchronous or nearly synchronous communication over a network was Talk, available on UNIX machines and the networks that connected them. Developed in the early 1980s, Talk allowed for the nearly synchronous exchange of text between two parties; however, unlike its descendants, it displayed text messages as they were written, character by character, rather than as completed messages posted to the discussion all at once. Talk and its sister program Phone fell into disuse after the introduction of the World Wide Web in 1991 and the arrival of graphical and multiuser interfaces.
Home computers are being called upon to perform many new functions, including the consumption of homework formerly eaten by the dog. —Doug Larson
Internet Relay Chat Jarkko Oikarinen, a Finnish researcher, developed IRC in 1988 based on the older Bulletin Board System (BBS). BBSs were central servers that users could dial in to with a modem to leave messages and hold discussions, usually dedicated to a certain topic or interest group. Oikarinen wrote the IRC program to allow users to have "real-time" discussions not available on the BBS. First implemented on a server at the University of Oulu, where Oikarinen worked, IRC quickly spread to other Finnish universities, and then to universities and ISPs throughout Scandinavia and the rest of the world. Each "channel" on IRC (the name was taken from the citizens band radio community) represents a specific topic. Initially each channel was designated
by a hatch mark (#) and a number. Because that proved difficult to use as IRC expanded, each channel was also given a text label, like #hottub or #gayboston. IRC channels were originally not persistent—anyone could create a channel on any conceivable topic, and when the last person logged out of that channel it ceased to exist. Only with the introduction in 1996 of Undernet and later DALnet did it become possible to create persistent channels. IRC runs through client software—the client software is what allows the user to see the text in the chat channel they are using and to see who else is currently in that channel. The most popular client is mIRC, a Windows-compatible client; others include Xircon and Pirch. IRC does not have a central organizing system; organizations like universities and research groups simply run the software on their servers and make it available to their users. In the mid-1990s, IRC's decentralized architecture contributed to a system breakdown. In mid-1996, when one IRC server operator, based in North America, started abusing the IRC system, other North American IRC server operators expelled the abuser; however, when he disconnected his server they discovered that he was also the main link between North American and European IRC networks. Weeks of negotiations between North American and European IRC server operators, who disagreed over the handling of the expulsion, failed to resolve the impasse. While interconnectivity between continents has been restored, the two IRC networks remain separate (IRCnet and EFnet [Eris Free Net]); they have their own separate channels and have developed separately. Other networks, including DALnet and Undernet, have developed since the separation. MUDs Pavel Curtis, a researcher at Xerox PARC who specializes in virtual worlds, gives this definition of a Multi-User Domain: "A MUD is a software program that accepts 'connections' from multiple users across some kind of network and provides to each user access to a shared database of 'rooms', 'exits' and other objects. Each user browses and manipulates this database from inside one of the rooms. A MUD is a kind
A Personal Story—Life Online In the mid 1990s, I went to visit my first online chatroom as part of a larger project on computer-mediated communication. I had no idea what to expect—whether the people would be who they said they were, whether I’d have anything in common with other visitors, or what it would be like to interact in a text-based medium. I found myself enjoying the experience of talking to people from all over the world and came to spend much time in this virtual community. I soon learned that the community was much larger than the chatroom I had visited, connected by telephone, e-mail, letters, and occasional face-to-face visits. Over the past five years, I’ve spoken or emailed with many new acquaintances, and have had the pleasure of meeting my online friends in person when my travels take me to their part of the country. Participation in a virtual community has provided me opportunities to talk in depth with people from around the world, including Australia, New Zealand, South America, Mexico, Europe, and even Thailand. The virtual community also brings together people from a wide range of socioeconomic backgrounds that might ordinarily never have mixed. It’s been fascinating to get to know such a diverse group of individuals. My personal experiences in an online community have helped shape my research into the societal dimensions of computing and computer-mediated communication. One of my current projects investigates the effects of participation in online support communities on people’s mental and physical well-being. In addition, the success with which I’ve been able to meet and become acquainted with others using a text-only medium has had a strong impact on my theories about how technologies can successfully support remote communication and collaboration. Susan R. Fussell
of virtual reality, an electronically represented 'place' that users can visit" (Warschauer 1998, 212). MUDs provide more virtual places to visit, hang out, socialize, play games, teach, and learn than IRC or Web-based chatrooms do. Some MUDs have been used to hold meetings or conferences because they allow participants to convene without travel hassles—virtual conferences may have different rooms for different topics and a schedule of events similar to that of a geographically located conference. Two British graduate students, Richard Bartle and Roy Trubshaw, developed the first MUD in 1979 as a multiuser, text-based networked computer game. Other MUDs followed, and new subtypes grew, including MOOs (Multi-User Domains, Object Oriented), used primarily for educational purposes, and MUSHes (Multi-User Shared Hallucinations). MOOs allow for greater control because the users of the virtual space can build objects and spaces as well as contribute text. Because MUDs are complex virtual environments that require users to master commands and understand protocols, rules, and mores, their use and appeal have been limited to a tech-savvy group of users.
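Curtis's description of a shared database of rooms and exits can be made concrete with a tiny sketch. The room names and commands below are invented for illustration; a real MUD server adds objects, user accounts, persistence, and networking on top of a structure like this.

    # Invented example of the room-and-exit database a MUD presents to its users.
    rooms = {
        "lobby":   {"description": "A quiet entrance hall.", "exits": {"north": "library"}},
        "library": {"description": "Shelves of dusty books.", "exits": {"south": "lobby"}},
    }

    def look(room_name):
        """What a user sees on entering a room."""
        room = rooms[room_name]
        return room["description"] + " Exits: " + ", ".join(room["exits"]) + "."

    def move(room_name, direction):
        """Follow an exit if it exists; otherwise stay where you are."""
        return rooms[room_name]["exits"].get(direction, room_name)

    here = "lobby"
    print(look(here))            # A quiet entrance hall. Exits: north.
    here = move(here, "north")   # the user is now in the library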
Web-Based Chat Web-based chatting became possible after the World Wide Web was first released onto the Internet in December 1990, but it did not become popular until after the release of the Java programming language a few years later. Java allowed developers to create user-friendly graphical interfaces to chat spaces on websites or ISP portals that could function across different computing and Internet browsing platforms. Web-based chatting, like IRC, tends to be based around common themes, issues, or specific discussion topics—it has given rise to rooms like Love@AOL or sports- or hobby-themed rooms like The Runners Room on Yahoo Chats. Other chat spaces may be on an individual website devoted to a common theme (like the chat on the Atlantic County Rowing Association site, hosted by ezteams).
Chatroom Users All the different iterations of chatrooms discussed here have some common elements to help users navigate and quickly understand how to use the software.
After entering a chatroom, channel, or domain, a user is confronted with a screen that is split into two or more parts: One side, usually the left, shows the discussion in progress. In another box on the screen is a list of who is logged in to the room. Generally below these is the box where the user enters text or commands to begin the conversation or move about the space (in the case of MUDs). In some chat spaces, users can create their own private chat with a single individual from the chatroom. In some Web-based tools, the chatroom is designed to use an instant messaging program to conduct one-on-one chats. In others the private chat tool is built in—in MUDs, a user uses the "whisper" command to direct a comment or conversation to a particular individual, and in some Web-based chats a private chat may be opened in another smaller window in the same chatting interface. In a survey in the summer of 2002, the Pew Internet & American Life Project found that only one-quarter of Internet users had ever visited a chatroom or participated in an online discussion, and only 4 percent had visited a chatroom on a typical day. Men are more likely to use chatrooms than women, as are those who are less well off; those earning less than $50,000 a year are much more likely to chat than those earning more. Younger people are also more likely to chat, particularly those between eighteen and twenty-nine, although among teens, particularly adolescent girls, chatting is frequently perceived as unsafe. Nevertheless, in spite of (or because of) chat's reputation, 55 percent of young people between twelve and seventeen have visited a chatroom.
Web-based chat, however, there is the expectation that users are presenting themselves honestly. Nevertheless, all chat spaces give users the opportunity to explore portions of their identity, whether it is by choosing to have the opposite gender, a different race, or a different set of personal experiences, or in the case of some games, by exploring what it is like to be something other than human. Anonymity or pseudonymity on line gives many users a feeling of freedom and safety that allows them to explore identities that they dare not assume in the offline world. Users are separated by geographic distances so it is unlikely that actions taken or phrases uttered will come back to haunt them later. And finally, in chat environments without audio or video, communication is mediated by the technology so there are none of the cues that can make a conversation emotional. All of this leads to lower levels of inhibitions, which can either create greater feelings of friendship and intimacy among chat participants or lead to a greater feeling of tension and lend an argumentative, even combative quality to a chat space.
The Future of Chat In 1991 researchers at Cornell University created CU-SeeMe, the first video chat program to be distributed freely online. Video and audio chat did not truly enter mainstream use until the late 1990s, but with the advent of Apple's iChat and Microsoft's improved chatting programs, video chat utilizing speakers and webcams looks to be the future direction of chatting. Today Yahoo.com and other portal-provided Web-based chatrooms allow audio and video chat in their rooms, though the number of users taking advantage of the technology is still relatively small. A user's bandwidth and hardware capabilities are still limiting factors in the use of bandwidth-intensive video chat, but as broadband Internet connectivity percolates through the population, the quality of video Web-based chatting available to most users will improve, and its adoption will undoubtedly become more widespread. MUDs and MOOs are also moving into HTML-based environments, which will make it much easier for the average Internet user to adopt them,
and will perhaps move Multi-User Domains from the subculture of academics and devotees into everyday use. Amanda Lenhart See also E-mail; MUDs
Taylor, T.L. (1999). Life in virtual worlds: Plural existence, multimodalities and other online research challenges. American Behavioral Scientist, 4(3). Turkle, S. (1995). Life on the screen: Identity in the age of the Internet. New York: Simon & Schuster. Warshauer, S. C. (1998). Multi-user environment studies: Defining a field of study and four approaches to the design of multi-user environments. Literary and Linguistic Computing, 13(4). Young, J. R. (1994). Textuality and cyberspace: MUD’s and written experience. Retrieved July 31, 2003, from http://ftp.game.org/pub/ mud/text/research/textuality.txt
FURTHER READING Bartle, R. (1990). Early MUD History. Retrieved July 31, 2003, from http://www.ludd.luth.se/mud/aber/mud-history.html Bevan, P. (2002). The circadian geography of chat. Paper presented at the conference of the Association of Internet Researchers, Maastricht, Netherlands. Campbell, J. E. (2004). Getting it on online: Cyberspace, gay male sexuality and embodied identity. Binghamton, NY: The Haworth Press. Dibbell, J. (1998). A rape in cyberspace. In My tiny life: Crime and passion in a virtual world. Owl Books, chapter 1. Retrieved July 31, 2003, from http://www.juliandibbell.com/texts/bungle.html IRC.Net. IRC net: Our history. Retrieved July 30, 2003, from http:// www.irc.net/ Hudson, J. M., & Bruckman, A. S. (2002). IRC Francais: The creation of an Internet-based SLA community. Computer Assisted Language Learning, 1(2), 109–134. Kendall, L. (2002). Hanging out in the virtual pub: Masculinities and relationships online. Berkeley: University of California Press. Lenhart, A., et al. (2001). Teenage life online: The rise of the instant message generation and the Internet’s impact on friendships and family relationships. Pew Internet & American Life Project, retrieved August 21, 2003, from http://www.pewinternet.org/ Murphy, K. L., & Collins, M. P. (1997). Communication conventions in instructional electronic chats. First Monday, 11(2). Pew Internet & American Life Project. (2003). Internet activities (Chart). Retrieved July 31, 2003, from http://www.pewinternet.org/reports/ index.asp Pew Internet & American Life Project. (2003). Unpublished data from June-July 2002 on chatrooms. Author. Reid, E. M. (1994). Cultural formation in text-based virtual realities. Unpublished doctoral dissertation, University of Melbourne, Australia. Retrieved July 31, 2003, from http://www.aluluei.com/ cult-form.htm Rheingold, H. (1993). The virtual community: Homesteading on the electronic frontier. Cambridge, MA: MIT Press. Rheingold, H. (1998). Building fun online learning communities. Retrieved July 30, 2003, from http://www.rheingold.com/texts/ education/moose.html Schaap, F. (n.d.). Cyberculture, identity and gender resources (online hyperlinked bibliography). Retrieved July 31, 2003, from http:// fragment.nl/resources/ Surkan, K. (n.d.). The new technology of electronic text: Hypertext and CMC in virtual environments. Retrieved July 31, 2003, from http:// english.cla.umn.edu/GraduateProfiles/Ksurkan/etext/etable.html Talk mode (n.d.). The jargon file. Retrieved November 1, 2002, from http://www.tuxedo.org/~esr/jargon/html/entry/talk-mode.html
CHILDREN AND THE WEB Children are among the millions of people who have been introduced to new ways of accessing information on the World Wide Web, which was launched in 1991 and began to become popular with the adoption of a graphical user interface in 1993. The fact that the Web utilizes hypertext (content with active links to other content) and a graphical user interface has made it more congenial and much easier to use than earlier menu-driven, text-based interfaces (e.g., Gopher, Jughead, Veronica) to the Internet.
Children’s Web Use Children use the Web inside and outside the classroom, and they navigate it to find information for both simple and complex projects. They recognize the Web as a rich source of up-to-date information, hard-to-find information, and compelling images. Research by Dania Bilal (2000) and Jinx Watson (1998) has revealed that children who use the Web have a sense of independence, authority, and control. They are motivated, challenged, and selfconfident. They prefer the Web to print sources due to the vast amount of information available and their ability to search by keyword and browse subject hierarchies quickly. Research conducted for the Pew Internet & American Life Project revealed that both parents and children believe that the Internet helps with learning. While these positive perceptions of the Internet are encouraging, children’s success in finding information on the Web is questioned. Given
Given the Web's increasing complexity and the abundance of information available there, it is worth asking how well children handle the challenges of using the Web. Researchers from library and information science, educational psychology, sociology, cognitive science, and human-computer interaction have studied children's interaction with the Web. In the field of information science, researchers have investigated children's search strategies, their relative preferences for browsing and searching, their successes and failures, the nature of tasks and success, Web design, and children's navigational skills, relevance judgment, and affective states (feelings, perception, motivation). Findings and conclusions from these studies have begun to provide a rich framework for improving system design and developing more effective Web training programs.
Two of the many books available that educate children on the perils and promise of the Web. Safety on the Internet is geared to ages 6–9, while Cyber Space is meant for ages 9–12.
The first study in library and information science appeared in 1997 when Jasmine Kafai and Marcia Bates examined elementary schoolchildren's Web literacy skills. They found that children were enthusiastic about using the Web and were able to scroll webpages and use hyperlinks effectively. However, the researchers perceived that many websites had too much text to read and too much difficult
vocabulary for elementary schoolchildren to understand. Children in that age range preferred sites with high visual content, animation, and short, simple textual content. In 1998 the researchers John Schacter, Gregory Chung, and Aimee Dorr studied the effect of types of tasks on the success of fifth and sixth graders in finding information. They found that children browsed more than they searched by keyword and performed better on open-ended (complex) than factual (simple) tasks. By contrast, in 2000 Terry Sullivan and colleagues found that middle and high school students were more successful on simple tasks than complex ones. Results from Dania Bilal's research in 2000–2002 echoed Sullivan's results and revealed that middle school students were more successful on tasks that they chose themselves than on tasks that were assigned. In 1999 Andrew Large, Jamshid Beheshti, and Haidar Moukdad examined the Web activities of Canadian sixth graders. These researchers found that children browsed more than they searched by keyword, had difficulty finding relevant information, and, although they had been given basic Web training, lacked adequate navigational skills. The children's use of the Netscape "Back" command to
return to the previous page, for example, accounted for 90 percent of their total Web moves; they activated online search help only once. In fact, frequent use of the "Back" command is common among children and young adults. Various studies in the late 1990s and early 2000s found similar results. In a follow-up to a 1999 study, Andrew Large and Jamshid Beheshti (2000) concluded that children valued the Web for finding information on hard topics, speed of access, and the availability of color images, but perceived it as more difficult to use than print sources. Children expressed frustration with information overload and with judging relevance of the retrieved results. Information overload and problems determining relevance seem to be widespread among children and young adults using the Web; a study of elementary, middle, and high school students in England corroborated Large and Beheshti's finding. Most children assume that the Web is an efficient and effective source for all types of information. Consequently, they rarely question the accuracy and authority of what they find. If they retrieve results that are close enough to the topic, they may cease to pursue their initial inquiry and take what they get at face value. Most studies focused on using the Web as a whole and on search engines that are developed for adult users rather than children. Bilal has investigated the information-seeking behavior of children who used Yahooligans!, a search engine and directory specifically designed for children aged seven through twelve. She found that 50 percent of the middle school children were successful on an assigned, fact-based task, 69 percent were partially successful on an assigned, research-based task, and 73 percent were successful on tasks they selected themselves. The flexibility children had in topic selection and modification combined with their satisfaction with the results may have influenced their success rate on the self-selected task. Children were more motivated, stimulated, and engaged in completing their tasks when they selected topics of personal interest. The children used concrete concepts (selected from the search questions) in their searches and, when these concepts failed to generate relevant information, they utilized abstract ones (synonyms or related terms). The children had trouble finding informa-
tion, despite the fact that most of the concepts they employed were appropriate. The failure to find results can be attributed largely to the poor indexing of the Yahooligans! database. Overall, the children took initiative and attempted to counteract their information retrieval problems by browsing subject categories. Indeed, they were more successful when they browsed than when they searched by keyword. Children’s low success rates on the assigned tasks were attributed to their lack of awareness of the difference between simple and complex tasks, especially in regard to the approach to take to fulfill the assignment’s requirements. On the complex assigned task, for example, children tended to seek specific answers rather than to develop an understanding of the information found. On the positive side, children were motivated and persistent in using the Web. When asked about reasons for their motivation and persistence, children cited convenience, challenge, fun, and ease of use. Ease of use was described as the ability to search by keyword. On the negative side, children expressed frustration at both information overload and the zero retrieval that resulted from keyword searching. Indeed, this feature was central to most of the search breakdowns children experienced. Although Yahooligans! is designed for children aged seven through twelve, neither its interface design nor its indexing optimized children’s experience. Children’s inadequate knowledge of how to use Yahooligans! and their insufficient knowledge of the research process hindered their success in finding information.
Optimizing the Web for Children Children’s experiences with the Web can be greatly improved by designing Web interfaces that build on their cognitive developmental level, information-seeking behaviors, and information needs. Since 2002, Bilal (working in the United States) and Large, Beheshti, and Tarjin Rahman (working together in Canada) have begun projects that involve children in the design of such interfaces. Both groups have concluded that children are articulate about their information needs and can be effective design partners. Based on the ten interfaces that children designed for search engines, Bilal was able to identify the types
of information architecture, functionality, and visual design that children needed and sought. In sum, both Bilal and the Canadian-based team concluded that children are creative searchers who are more successful when they browse than when they search by keyword. Typically, children prefer keyword searching but resort to browsing when they experience continued information-retrieval problems. Children do not take advantage of the search features provided in search engines and rarely activate the help file for guidance. The research also revealed that children have both positive and negative feelings when it comes to the Web. They associate the Web with motivation, challenge, convenience, fun, authority, independence, and self-control, but also with frustration, dissatisfaction, and disappointment caused by information overload, lack of success in searches, and inability to make decisions about document relevance. As to information literacy, children possess inadequate information-seeking skills, naïve Web navigational skills, and an insufficient conceptual understanding of the research process. These problems cry out to teachers and information specialists to provide more effective Web training and to design instructional strategies that successfully integrate the Web into effective learning. With regard to system design, it appears that websites, Web directories, and search engines are not easy for children to use. Too much text, difficult vocabulary, long screen displays, deep subject hierarchies, ineffective help files, poor indexing, and misleading hyperlink titles all hinder children’s successful use.
Education, Design, and Future Research Use of the Web in school and its increased use at home do not ensure that children possess effective skills in using it. Information professionals (e.g., school and public librarians) who serve children need to collaborate with teachers to identify how the Web can effectively support meaningful learning. Teachers cannot make the Web an effective learning and research tool unless they first receive effective, structured training
in its use. Children, too, should be taught how to use the Web effectively and efficiently. With critical-thinking skills and an understanding of how to manipulate the Web, children can move from being active explorers of the Web to becoming discerning masters of it. In discussing how usable Web interfaces are for children, Jakob Nielsen notes that “existing Web [interfaces] are based at best by insights gleaned from when designers observe their own children, who are hardly representative of average kids, typical Internet skills, or common knowledge about the Web” (Nielsen 2002, 1). Thus, it is not surprising to find that children experience difficulty in using the Web. System developers need to design interfaces that address children’s cognitive developmental level, information needs, and information-seeking behaviors. Developing effective Web interfaces for children requires a team effort involving information scientists, software engineers, graphic designers, and educational psychologists, as well as the active participation of representative children. We have a growing understanding of the strengths and weaknesses of the Web as a tool for teaching and learning. We also know much about children’s perceptions of and experiences with the Web, as well as their information-seeking behavior on the Web. The rapid progress made in these areas of study is commendable. However, research gaps remain to be filled. We do not have sufficient research on working with children as partners in designing Web interfaces. We have investigated children’s information-seeking behavior in formal settings, such as schools, to meet instructional needs, but with the exception of Debra J. Slone’s 2002 study, we have little information on children’s Web behavior in informal situations, when they are using it to meet social or entertainment needs. We also lack a sophisticated model that typifies children’s information-seeking behavior. We need to develop a model that more fully represents this behavior so that we can predict successful and unsuccessful outcomes, diagnose problems, and develop more effective solutions. Dania Bilal See also Classrooms; Graphical User Interface; Search Engines
FURTHER READING Bilal, D. (1998). Children’s search processes in using World Wide Web search engines: An exploratory study. Proceedings of the 61st ASIS Annual Meeting, 35, 45–53. Bilal, D. (1999). Web search engines for children: A comparative study and performance evaluation of Yahooligans!, Ask Jeeves for Kids, and Super Snooper. Proceedings of the 62nd ASIS Annual Meeting, 36, 70–82. Bilal, D. (2000). Children’s use of the Yahooligans! Web search engine, I: Cognitive, physical, and affective behaviors on fact-based tasks. Journal of the American Society for Information Science, 51(7), 646–665. Bilal, D. (2001). Children’s use of the Yahooligans! Web search engine, II: Cognitive and physical behaviors on research tasks. Journal of the American Society for Information Science and Technology, 52(2), 118–137. Bilal, D. (2002). Children’s use of the Yahooligans! Web search engine, III: Cognitive and physical behaviors on fully self-generated tasks. Journal of the American Society for Information Science and Technology, 53(13), 1170–1183. Bilal, D. (2003). Draw and tell: Children as designers of Web interfaces. Proceedings of the 66th ASIST Annual Meeting, 40, 135–141. Bilal, D. (In press). Research on children’s use of the Web. In C. Cole & M. Chelton (Eds.), Youth Information Seeking: Theories, Models, and Approaches. Lanham, MD: Scarecrow Press. Druin, A., Bederson, B., Hourcade, J. P., Sherman, L., Revelle, G., Platner, M., et al. (2001). Designing a digital library for young children. In Proceedings of the first ACM/IEEE-CS Joint Conference on Digital Libraries (pp. 398–405). New York: ACM Press. Fidel, R., Davies, R. K., Douglass, M. H., Holder, J. K., Hopkins, C. J., Kushner, E. J., et al. (1999). A visit to the information mall: Web searching behavior of high school students. Journal of the American Society for Information Science, 50(1), 24–37. Hirsh, S. G. (1999). Children’s relevance criteria and information seeking on electronic resources. Journal of the American Society for Information Science, 50(14), 1265–1283. Kafai, Y. B., & Bates, M. J. (1997). Internet Web-searching instruction in the elementary classroom: Building a foundation for information literacy. School Library Media Quarterly, 25(2), 103–111. Large, A., & Beheshti, J. (2000). The Web as a classroom resource: Reactions from the users. Journal of the American Society for Information Science and Technology, 51(12), 1069–1080. Large, A., Beheshti, J., & Moukdad, H. (1999). Information seeking on the Web: Navigational skills of grade-six primary school students. Proceedings of the 62nd ASIS Annual Meeting, 36, 84–97. Large, A., Beheshti, J., & Rahman, R. (2002). Design criteria for children’s Web portals: The users speak out. Journal of the American Society for Information Science and Technology, 53(2), 79–94. Lenhart, A., Rainie, L., & Oliver, L. (2003). Teenage life online: The rise of the instant-message generation and the Internet’s impact on friendships and family relationships. Washington, DC: Pew Internet and American Life Project. Retrieved January 4, 2004, from http://www.pewinternet.org/reports/pdfs/PIP_Teens_Report.pdf Lenhart, A., Simon, M., & Graziano, M. (2001). The Internet and education: Findings of the Pew Internet & American Life Project.
Washington, DC: Pew Internet and American Life Project. Retrieved January 4, 2004, from http://www.pewinternet.org/reports/pdfs/PIP_Schools_Report.pdf Nielsen, J. (2002). Kids’ corner: Website usability for children. Retrieved January 4, 2004, from http://www.useit.com/alertbox/20020414.html Schacter, J., Chung, G. K. W. K., & Dorr, A. (1998). Children’s Internet searching on complex problems: Performance and process analyses. Journal of the American Society for Information Science, 49(9), 840–849. Shenton, A. K., & Dixon, P. (2003). A comparison of youngsters’ use of CD-ROM and the Internet as information resources. Journal of the American Society for Information Science and Technology, 54(11), 1049–2003. Slone, D. J. (2002). The influence of mental models and goals on search patterns during Web interaction. Journal of the American Society for Information Science and Technology, 53(13), 1152–1169. Wallace, R. M., Kupperman, J., & Krajcik, J. (2002). Science on the Web: Students on-line in a sixth-grade classroom. The Journal of the Learning Sciences, 9(1), 75–104. Watson, J. S. (1998). If you don’t have it, you can’t find it: A close look at students’ perceptions of using technology. Journal of the American Society for Information Science, 49(11), 1024–1036.
CLASSROOMS People have regarded electronic technology throughout its evolution as an instrument for improving learning in classrooms. Television and video were examples of early electronic technology used in classrooms, and now personal computers have shown how electronic technology can enhance teaching and learning. Some people have lauded the new kinds of learning activities afforded by electronic technology, but others maintain that such technology can be detrimental in classrooms. Despite such criticisms, researchers in different fields—education, computer science, human-computer interaction—continue to explore how such technology, paired with innovative curricula and teacher training, can improve classrooms.
Early Visions of Learning Technologies Early visions of how technology could be applied to learning included so-called behaviorist teaching machines inspired by the U.S. psychologist B. F. Skinner in the 1960s. Skinner believed that classrooms
History Comes Alive in Cyberspace OLD DEERFIELD, Mass. (ANS)—On a blustery spring morning, 18 students from Frontier Regional High School made their way down Main Street here in this colonial village, jotting down notes on the Federal and Italianate architecture and even getting a look at an early 18th-century kitchen. But this was no ordinary field trip. The students were gathering information for an Internet-based project that is integrating state-of-the-art computer technology with the social studies curriculum throughout this rural western Massachusetts school district. The project, titled Turns of the Centuries, focuses on life at the turns of the last three centuries, beginning in 1700 and continuing through 1900. It’s an unusual partnership between three distinct entities—a secondary school, a university and a museum. In the project, the primary sources of the Pocumtuck Valley Memorial Association, a nationally recognized museum of frontier life in this region, will be available to students through a web site that teachers, students and researchers are putting together. Central to the project are the over 30,000 museum artifacts—diaries, letters and other ‘primary sources’—made available to students through the developing web site. The marriage of technology with the museum archives has made possible new opportunities for “inquiry-based” education, which focuses on developing the student as active learner. In essence, the educational project here is a cyberspace version of the museum, enabling students to access archives through the Internet, either from their homes or through
suffered from a lack of individual attention and that individualized instruction would improve learning. The idea was that individual students could use a computer that would teach and test them on different topics. Students would receive positive reinforcement from the computer through some reward mechanism (e.g., praise and advancement to the next level of instruction) if they gave correct responses.
computer labs that are being established throughout district schools. But as the trip to Old Deerfield demonstrated, students will also add to the pool of knowledge and contribute original data to the web site as well. “This is not just an electronic test book,” said Tim Neumann, executive director of Pocumtuck Valley Memorial Association and one of the project’s designers. “Students are not just surfing the web but actively engaging with the text and images on the screen.” Students also address questions posed by teachers and then conduct research via the Internet, as well as other field studies, he said. Building the web sites, from teachers’ notes and classroom lesson plans, are students and technicians at the University of Massachusetts Center for Computer-Based Instructional Technology. The students in Friday morning’s expedition were responding to an assignment to choose a colonial family and track them over time, using the resources at the museum. Those results will eventually be incorporated into the Turns of the Centuries web site, where other students throughout the K-12 district will be able to access them. In addition to helping acquaint students with emerging technologies, the Turns of the Centuries project instructs teachers how to teach social studies with a web-based curriculum, and how to access these resources in their classrooms, as well as exploring the potential partnerships among school and communities linked by the information highway. Robin Antepara Source: Students learn about history with classroom computers of tomorrow. American News Service, June 17, 1999.
Incorrect responses would prevent advancement to the next level of questions, giving students the opportunity to consider how they could correct their responses. Software adopting this approach is frequently called “drill and practice” software, but few examples of such software exist outside of educational games and other kinds of “flash card” programs that teach skills such as spelling and arithmetic.
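The drill-and-practice pattern is simple enough to sketch in a few lines of code. The following is a minimal, hypothetical example assuming a command-line arithmetic quiz; the questions, the praise message, and the retry rule are invented for illustration and are not drawn from any particular teaching machine.

```python
# A toy sketch of the behaviorist drill-and-practice pattern: correct answers
# earn praise and advancement, incorrect answers block advancement until fixed.
QUESTIONS = [("3 + 4", 7), ("6 x 2", 12), ("9 - 5", 4)]  # invented items

def drill(questions):
    for level, (prompt, answer) in enumerate(questions, start=1):
        while True:  # the student cannot advance until the response is correct
            reply = input(f"Level {level}: What is {prompt}? ")
            if reply.strip() == str(answer):
                print("Correct! Advancing to the next level.")  # positive reinforcement
                break
            print("Not quite. Try again.")  # no advancement on an incorrect response

if __name__ == "__main__":
    drill(QUESTIONS)
```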
A different vision is found in the work of Seymour Papert, an MIT professor who has explored technology in education since the 1960s, advocating learning theories proposed by the Swiss psychologist Jean Piaget. In Papert’s vision, computers are tools that children use for exploratory and constructive activities. Through these activities children create and shape their own understanding of concepts. Papert incorporated these ideas in the Logo programming language, which was intended to let children write programs that create computer graphics while exploring deeper concepts, such as the mathematics needed to draw geometric shapes. A related vision came from Alan Kay, a renowned computer science researcher, who proposed the Dynabook concept during the early 1970s. The Dynabook was envisioned as a device similar to today’s laptop computer that children could use in information-rich and constructive activities. The Dynabook would provide a core of basic software functionality (written in the Smalltalk programming language). However, children could extend their Dynabook’s functionality through Smalltalk programming. This would allow children to create new tools for creative expression, information gathering, simulation, and so forth by learning not only programming but also the fundamentals of the underlying content (e.g., to create a music tool, students would need to learn musical concepts).
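The flavor of the programs Papert had in mind is easy to suggest with a short example. The sketch below uses Python’s standard turtle module as a stand-in for Logo’s turtle graphics; the figure drawn and the numbers chosen are arbitrary, but drawing it requires the child to reason about lengths and angles.

```python
# A minimal turtle-graphics sketch in the spirit of Logo: drawing a square
# forces the programmer to discover that the four turns must total 360 degrees.
import turtle

pen = turtle.Turtle()
for _ in range(4):
    pen.forward(100)   # side length in pixels
    pen.right(90)      # exterior angle of a square

turtle.done()          # keep the window open until it is closed
```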
Initial Attempts at Technology-Enhanced Classrooms Each technological vision has brought promises of how technology can improve classrooms and learning. With the advent of personal computers, educators rushed to place computers in classrooms with the hope of implementing different visions of learning technologies. However, many initial attempts at technology-enhanced classrooms fell short of their promise because of technological and contextual issues in classrooms. One issue was that although people had some ideas about what kinds of learning activities and goals computers might support, they had little
concrete design information to guide software developers in developing and assessing effective software for learning. Many software projects had little real grounding in learning theories and the nature of children. Thus, for every successful software project, many others had educational content that was lacking and classroom use that was less than successful. For instance, many software projects involved the development of educational games (sometimes called “edutainment” software) whose educational content was dubious and whose initial appeal to children soon wore off. Other approaches involved tools such as HyperCard, which allowed laypeople to create software, with the hope that educators could create software for their students. However, although the idea of teacher-created software was well intentioned and although teachers have educational knowledge, they lack software design knowledge, again resulting in few major successes. Other issues were contextual. Many early attempts at educational computing were technocentric and lacked a full understanding of the support needed in classrooms. Without adequate training, busy teachers can find electronic technology enough of a burden that they simply bypass it. Furthermore, technology has been introduced into the classroom without a good understanding by teachers (and sometimes by researchers developing the technology) of how the technology interacts with the classroom curriculum and learning goals. Again, technology has little impact if it is not a good fit with the activities that teachers desire. Finally, schools have lacked adequate technological resources to make full use of technology, so disparities in the number of computers in classrooms and in network connectivity have hindered effective use of technology.
Designing Learner-Centered Technology Simply developing stand-alone technology for classrooms is not enough. If effective technology-enhanced classrooms are to become a reality, then designers must design an overall learning system
that integrates three factors: technology, curriculum, and teacher support and development. During the last ten to fifteen years designers have developed many learning systems in the classroom by considering these three factors. Research in many content areas, such as science education, is shedding light on effective technology-enhanced classrooms. In such educational approaches technology acts as a cognitive tool to support learners as they engage in curricular activities. For example, many educational approaches in science education use an inquiry-based technique in which students engage in the same kinds of scientific activity—finding scientific articles, gathering and visualizing scientific data, building scientific models, and so forth—in which experts engage. Learners can, for instance, use software to search digital libraries for information, use handheld computers with probes to gather data in different locations, use software to build graphs and scientific models, and so forth. Such technology should be designed to support learners in mindfully engaging in curricular activities so that learners can meet the learning goals that their teachers have outlined. Given this motivation, the approach for designing learner-centered technologies shifts from simply designing technologies whose hallmark is “ease of use” to designing technologies that learners can use in new, information-rich activities. Developing learner-centered technologies requires designers to understand the kinds of work that learners should engage in (i.e., curricular activities) and the learning goals of those learners. Then designers need to understand the areas where learners may face difficulties in performing those kinds of work (e.g., learners may not know what kinds of activities comprise a science investigation or how to do those activities) so that designers can create support features that address those difficulties. Furthermore, such support features differ from those of traditional, usability-oriented software design. Although ease of use is still important, learner-centered technologies should not necessarily make tasks as easy as possible. Rather, just as a good teacher guides students toward an answer without giving the answer outright, learner-centered technologies must provide enough support to make tasks accessible to novice learners but leave
enough challenge that learners still work in the mindful manner needed for real learning to occur. Teacher support and development are also key for technology-enhanced classrooms. Teacher schedules and the classroom environment are busy, and introducing technology into classrooms can make matters more complex for teachers. Teachers need support and development to show them how technology works, how they can integrate technology into their classroom activities, and how they can use technology effectively in the classroom.
New Visions of Technology-Enhanced Classrooms Current classroom technology includes primarily desktop-based software. Some software implements “scaffolding” features that support learners by addressing the difficulties they encounter in their learning activities. For example, one particular software feature implementing a scaffolding approach would be a visual process map that displays the space of activities that learners should perform (e.g., the activities in a science investigation) in a way that helps them understand the structure of the activities. Other classroom software includes intelligent tutoring systems that oversee students as they engage in new activity. Intelligent tutoring systems can sense when students have encountered problems or are working incorrectly and can provide “just-in-time” advice to help them see their errors and understand their tasks. Aside from traditional desktop-based software, researchers are exploring new technology. For example, handheld computers (e.g., Palm or PocketPC computers) are becoming more pervasive among students. The mobility of handheld computers lets students take them to any learning context, not just the classroom. Thus, researchers are exploring how to develop learning tools for handheld computers. An example of such tools is probes that can be attached to handhelds for scientific data gathering (e.g., probes to measure oxygen levels in a stream). Handhelds with wireless networking capability can be used to gather information (e.g., access digital libraries) from a range of locations outside of a classroom. Additionally, handhelds can be part of
A Personal Story—Learning through Multimedia When I talk about why I became interested in exploring computers in education, I like to tell a story from my early graduate school days in the late 1990s. My research group was working in a local Michigan high school using the MediaText software they had developed earlier. MediaText was a simple text editor that made it possible to incorporate different media objects, such as images, sounds, or video, into the text one was writing. In one class, students had been given an assignment to explain a series of physics terms. One particular student sometimes had difficulty writing, but with MediaText, she could use other media types for her explanations. For example, using video clips from the movie Who Framed Roger Rabbit? she explained potential energy with a clip of a cartoon baby sitting on top of a stack of cups and saucers, swaying precariously without falling over. Then she explained kinetic energy with a clip of the same baby sliding across the floor of a room. What struck me was that it was clear from her choice of video clips that she understood those physics concepts. If she had been confined to textual explanations, she might not have been able to convey as much understanding. But because she had a choice of media types, she was able to successfully convey that she knew those concepts. This episode helped me realize how computers could impact learners by giving them a range of different media types for self-expression. Now sometimes this story gets me in trouble with people who say that if you give students all these alternatives to writing, they’ll never learn to write correctly. I’ll buy that…to a certain extent. But people are diverse— they learn differently and they express themselves differently. My response to the naysayers is that effectively incorporating different media in software tools isn’t for flash, but to give people different “languages” to learn from and use. By offering these alternatives, we open new educational doors, especially for today’s diverse, tech-savvy kids. After all, if one student can explain physics terms using a movie about a cartoon rabbit, then multimedia in the classroom is working. Chris Quintana
new kinds of learning activities called “participatory simulations” in which groups of students can use the “beaming” feature of wireless handhelds to be part of a simulation in which they exchange information. For example, students can explore epidemiological crises in a simulation in which they “meet” other people by exchanging information with their handhelds. During the simulation a student’s handheld might announce that it is “sick,” at which point students would engage in a discussion to understand how disease might spread through a community. Researchers are also exploring the other end of the spectrum, looking at how large displays and virtual reality can be used as learning tools. Such tools can help students explore virtual worlds and engage in activities such as “virtual expeditions.” Students can explore environments that might be difficult or impossible to explore in person (e.g., different ecosystems), thus allowing them to engage in inquiry-based activities throughout a range of locations and gather otherwise inaccessible information.
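A toy version of such a participatory simulation can convey the idea. The sketch below is hypothetical: it stands in for a classroom of handhelds by simulating random “meetings” between students, with one student seeded as “sick” and an invented probability of transmission at each meeting.

```python
# A toy "participatory simulation" of disease spread: each round, two
# students "beam" information to each other; infection passes with some chance.
import random

random.seed(1)
students = {name: False for name in ["Ana", "Ben", "Chloe", "Dev", "Eli", "Fay"]}
students["Ben"] = True          # one handheld is secretly seeded as "sick"
TRANSMISSION_CHANCE = 0.5       # invented for illustration

for round_number in range(1, 4):
    pair = random.sample(list(students), 2)        # two students meet and beam
    if any(students[name] for name in pair) and random.random() < TRANSMISSION_CHANCE:
        for name in pair:
            students[name] = True
    sick = [name for name, ill in students.items() if ill]
    print(f"After round {round_number}, 'sick' handhelds: {sick}")
```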
Meeting the Challenge Technology-enhanced classrooms have had failures as researchers have struggled to understand not only what kinds of learning technologies are effective, but also the role of technology in classrooms and the support needed for effective technology-enhanced classrooms. Critics of technology in classrooms still exist. Education professor Larry Cuban has written extensively on the problems and failures of technology in classrooms. Scientist and author Clifford Stoll has also written about the possible adverse effects of technology and the caution that must be exercised where children are concerned. However, successes and new visions of how technology-enhanced classrooms can support learners also exist. Designers of learning technologies need to understand that designing software for ease of use is not enough. Designers must understand learning theories, the nature of learners, and the classroom context to design cognitive learning technologies that students use to mindfully engage in substantive
learning activities. People implementing technology-enhanced classrooms need to consider other issues, such as a classroom curriculum that complements the technology and teachers who have the support and development that they need in order to understand and make full use of technology. As new technology arises, people will always attempt to see how that technology can be used to enhance learning. By understanding the classroom context and the local design issues involved in developing learner-centered technology, the human-computer interaction community can make significant contributions to realizing the promise of technology-enhanced classrooms. Chris Quintana See also Children and the Internet; Psychology and HCI FURTHER READING Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn: Brain, mind, experience, and school (Exp. ed.). Washington, DC: National Academy Press. Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. New York: Teachers College Press. Kay, A., & Goldberg, A. (1977). Personal dynamic media. IEEE Computer, 10(3), 31–41. Papert, S. (1980). Mindstorms. New York: Basic Books. Quintana, C., Soloway, E., & Krajcik, J. (2003). Issues and approaches for developing learner-centered technology. In M. Zelkowitz (Ed.), Advances in computers: Volume 57. Information Repositories (pp. 272–323). New York: Academic Press. Reiser, B. J. (2002). Why scaffolding should sometimes make tasks more difficult for learners. Proceedings of CSCL 2002, 255–264. Soloway, E., Guzdial, M., & Hay, K. E. (1994). Learner-centered design: The challenge for HCI in the 21st century. Interactions, 1(2), 36–48.
CLIENT-SERVER ARCHITECTURE Client-server architecture is one of the many ways to structure networked computer software. Developed during the 1980s out of the personal com-
puter (PC) explosion, client-server architecture provides a distributed synthesis of the highly interactive personal computer (the client) with a remotely located computer providing data storage and computation (the server). The goal of client-server architecture is to create structures and communication protocols between the client computer and the server computer in order to optimize the access to a set of computational resources.
Motivating Example To understand client-server architecture, one can consider a manufacturing company using computer technology to support day-to-day business operations and long-range strategic planning. Product orders come from the sales department, inventory is maintained by the manufacturing department, and the raw materials orders are generated by the planning department. Furthermore, the accounting department tracks the money, and the chief executive officer (CEO) wants a perspective on all aspects of the company. To be judged successful, the software solution implemented should provide data storage and update capability for all aspects of the company operation. Further, the appropriate database segments should be accessible by all of the employees based on their particular job responsibility, regardless of where they are physically located. Finally, the application views of the database should be highly usable, interactive, and easy to build and update to reflect ongoing business growth and development.
Conflicting Goals One key feature of any software application is the database, the dynamic state of the application. For example, the status of inventory and orders for a factory would be maintained in a database management system (DBMS). Modern database management technology is quite well developed, supporting database lookup and update in a secure, high performance fashion. DBMS computers, therefore, are typically high-performance, focused on the task, and have large permanent storage capacity (disk) and large working memory. The cost of this hardware, the crit-
ical need for consistency, and the complexity of system management dictate that the DBMS be centrally located and administered. This goal was realized in the mainframe architecture of the 1960s and the time-sharing architecture of the 1970s. On the other hand, personal computer applications such as the spreadsheet program VisiCalc, introduced in 1979, demonstrate the power of highly interactive human-computer interfaces. Responding instantly to a user’s every keystroke and displaying results using graphics as well as text, the PC has widened the scope and number of users whose productivity would be enhanced by access to computing. These inexpensive computers bring processing directly to the users but do not provide the same scalable, high-performance data-storage capability of the DBMS. Furthermore, the goal of information management and security is counter to the personal computer architecture, in which each user operates on a local copy of the database. The network—the tie that binds together the DBMS and the human-computer interface—has evolved from proprietary system networks, such as IBM System Network Architecture (SNA), introduced in 1974, to local area networks, such as Ethernet, developed at Xerox’s Palo Alto Research Center (PARC) and introduced in 1976, to the Internet, which began as the U.S. Department of Defense’s Advanced Research Projects Agency network (Arpanet) in 1972 and continues to evolve. A networking infrastructure allows client software, operating on a PC, to make requests of the server for operations on the user’s behalf. In other words, the network provides for the best of both worlds: high-performance, high-reliability components providing centralized data computation and user interface components located on the personal computer providing high interactivity and thereby enhanced usability. Furthermore, by defining standardized message-passing protocols for expressing the requests from client to server, a level of interoperability is achieved. Clients and servers coming from different vendors or implementing different applications may communicate effectively using protocols such as Remote Procedure Call (RPC) or Structured Query Language (SQL), together with binding services such as the
Common Object Request Broker Architecture (CORBA) or the Component Object Model (COM). Returning to our motivating example, the software solution would include a separate interactive PC application designed for each business function: sales, manufacturing, accounting, planning, and the CEO. Each of these individual PC applications would use an RPC call for each query or update operation to the company database server. This partitioning of function is effective both in terms of hardware cost-performance (relatively inexpensive client computers for each user versus a relatively expensive database server computer shared between all users) and end-user application design. As the number of simultaneous users grows, the portion of a server’s computation time spent managing client-server sessions grows as well. To mitigate this processing overhead, it is useful to introduce an intermediary server to help handle the client-server requests. Called a “message queuing server,” this software system accepts operations to be performed on the database and manages the request queues asynchronously. Priority information allows intelligent management and scheduling of the operations. Result queues, returning answers back to the requesting client, provide for asynchronous delivery in the other direction as well. Through a message server the queuing operations are offloaded from the database server, providing enhanced throughput (output). The message server also leads to increased flexibility because the message queuing provides a layer of translation and independence between the client software and the DBMS server.
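The division of labor between client and server can be illustrated with a small self-contained sketch. The example below uses Python’s standard xmlrpc modules as a stand-in for an RPC protocol; the inventory data, function name, and port number are invented, and a real deployment would of course run the server on a separate machine in front of a DBMS.

```python
# A minimal client-server sketch: a centralized "inventory server" answers
# remote procedure calls from an interactive client application.
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# --- Server side: shared data store plus an RPC interface ---
INVENTORY = {"widget": 42, "gear": 7}          # stand-in for the company DBMS

def get_inventory(part_name):
    """Return the stock level recorded for one part."""
    return INVENTORY.get(part_name, 0)

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False, allow_none=True)
server.register_function(get_inventory)
threading.Thread(target=server.serve_forever, daemon=True).start()

# --- Client side: a PC application issuing RPC calls on the user's behalf ---
proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
print("Widgets in stock:", proxy.get_inventory("widget"))
```

Routing the same request through a message queuing server would simply mean placing it on a request queue and reading the answer from a result queue rather than calling the proxy directly.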
Business Processes Although PC-client access to a server-based DBMS was an early client-server scenario and continues to be important, other client-server architectures include other types of network services. For example, an application server hosts computation rather than data storage, as with a DBMS server. The business processes for an enterprise may be implemented using an application server. Like the message queuing server, the application server sits between the client software and the DBMS, encapsulating functions that may be common across many clients, such as policies and procedures.
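A sketch can make the middle tier concrete. In the hypothetical fragment below, a business rule (an invented credit limit) lives in the application server layer, so every client shares the same policy while the data layer remains a simple store.

```python
# A toy application-server tier: shared business rules sit between the client
# applications and the data store. The policy and all names are invented.
CREDIT_LIMIT = 10_000          # business policy enforced in the middle tier
ORDERS = []                    # stand-in for the DBMS behind the tier

def place_order(customer, amount):
    """Apply the shared credit policy, then hand the record to the data layer."""
    if amount > CREDIT_LIMIT:
        return "rejected: exceeds credit limit"
    ORDERS.append({"customer": customer, "amount": amount})
    return "accepted"

print(place_order("Acme Corp", 2_500))    # accepted
print(place_order("Acme Corp", 25_000))   # rejected: exceeds credit limit
```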
The Future of Client-Server Computing Client-server computing will continue to be important long into the future. PCs continue to drop in price, and new networked devices such as personal digital assistants (PDAs) and the World Wide Web are driving network accessibility to a broader audience. The client-server architecture, which lives on the network through standardized messaging protocols, will continue to have wide applicability, especially in business. Mark R. Laff See also Peer-to-Peer Architecture FURTHER READING Berson, A. (1992). Client/server architecture. New York: McGraw-Hill. Berson, A. (1995). Sybase and client/server computing. New York: McGraw-Hill. Comer, D. (1994). Internetworking with TCP/IP: Vol. 3. Client-server programming and applications. Englewood Cliffs, NJ: Prentice Hall. Corbin, J. R. (1991). The art of distributed applications: Programming techniques for remote procedure calls. New York: Springer-Verlag. Edelstein, H. (1994). Unraveling client/server architecture. Redwood City, CA: M & T Publishing. Hall, C. (1994). Technical foundations of client/server systems. New York: Wiley. IBM Corporation. (2002). Websphere MQ application message interface (SC34-6065-00). Armonk, NY: International Business Machines Corporation. Krantz, S. R. (1995). Real world client server: Learn how to successfully migrate to client/server computing from someone who’s actually done it. Gulf Breeze, FL: Maximum Press. Metcalfe, R. M., & Boggs, D. R. (1976). Ethernet: Distributed packet switching for local computer networks. Communications of the ACM, 19(5), 395–404. Sims, O. (1994). Business objects: Delivering cooperative objects for client-server. New York: McGraw-Hill.
COGNITIVE WALKTHROUGH The cognitive walkthrough (CW) is a usability evaluation approach that predicts how easy it will be for people to learn to do particular tasks on a computer-
based system. It is crucial to design systems for ease of learning, because people generally learn to use new computer-based systems by exploration. People resort to reading manuals, using help systems, or taking formal training only when they have been unsuccessful in learning to do their tasks by exploration. CW has been applied to a wide variety of systems, including automatic teller machines (ATMs), telephone message and call forwarding systems, websites, computerized patient-record systems for physicians, programming languages, multimedia authoring tools, and computer-supported cooperative work systems. HCI researcher Andrew J. Ko and his associates innovatively applied CW (in lieu of pilot experiments) to predict problems that experimental participants might have with the instructions, procedures, materials, and interfaces used in experiments for testing the usability of a system (the system was a visual programming language).
Cognitive Walkthrough Methodology The CW approach was invented in 1990 and has evolved into a cluster of similar methods with the following four defining features: 1. The evaluation centers on particular users and their key tasks. Evaluators start a CW by carefully analyzing the distinctive characteristics of a particular user group, especially the relevant kinds of background knowledge these users can call upon when learning to perform tasks on the system. Next, CW evaluators select a set of key tasks that members of the user group will do on the system. Key tasks are tasks users do frequently, tasks that are critical even if done infrequently, and tasks that exhibit the core capabilities of the system. 2. The steps designers prescribe for doing tasks are evaluated. For each key task, CW evaluators record the full sequence of actions necessary to do the task on the current version of the system. Then CW evaluators walk through the steps, simulating users’ action selections and mental processes while doing the task. The simplest CW version asks two questions at each
step: (1) Is it likely that these particular users will take the “right action”—meaning the action designers expect them to take—at this step? and (2) If these particular users do the “right action” and get the feedback the system provides (if any), will they know they made a good choice and realize that their action brought them closer to accomplishing their goal? To answer each question evaluators tell a believable success story or failure story. They record failure stories and have the option of adding suggestions for how to repair the problems and turn failures into successes. Anchoring the evaluation to the steps specified by designers communicates feedback to designers in their own terms, facilitating design modifications that repair the usability problems. 3. Evaluators use theory-based, empirically verified predictions. The foundation for CW is a theory of learning by exploration that is supported by extensive research done from the 1960s to the 1980s on how people attempt to solve novel problems when they lack expert knowledge or specific training. According to this theory, learning to do tasks on a computer-based system requires people to solve novel problems by using general problem-solving methods, general reading knowledge, and accumulated experience with computers. “The key idea is that correct actions are chosen based on their perceived similarity to the user’s current goal” (Wharton et al. 1994, 126). For software applications, the theory predicts that a user scans available menu item labels on the computer screen and picks the menu item label that is most similar in meaning to the user’s current goal. CW evaluators answer the first question with a success story if the “right action” designated by the designer is highly similar in meaning to the user’s goal and if the menu item labels on the screen use words familiar to the user. 4. Software engineers can easily learn how to make CW evaluations. It is crucial to involve software engineers and designers in CW, because they are the individuals responsible for revising the design to repair the problems. There is strong evidence that software engineers and
designers can readily learn CW, but they have a shallower grasp of the underlying theory than usability experts trained in cognitive psychology and consequently find less than half as many usability problems. A group CW, including at least one usability expert trained in cognitive psychology, can find a higher percentage of usability problems than an individual evaluator— up to 50 percent of the problems that appear in usability tests of the system. CW was one of the several evaluation methods pioneered in the early 1990s to meet a practical need, the need to identify and repair usability problems early and repeatedly during the product development cycle. The cost of repairing usability problems rises steeply as software engineers invest more time in building the actual system, so it is important to catch and fix problems as early as possible. For a product nearing completion the best evaluation method is usability testing with end users (the people who will actually use the system), but CW is appropriate whenever it is not possible to do usability testing. Early versions of CW were tedious to perform, but the 1992 cognitive jogthrough and streamlined CW of 2000, which still preserve all the essential CW features, are much quicker to perform.
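The bookkeeping a walkthrough produces is simple to represent. The fragment below is a hypothetical sketch of the record for one task under the two-question version of CW described above; the task, steps, and stories are invented for illustration.

```python
# A sketch of what a cognitive walkthrough records for one task: each step
# answers the two questions and carries a success or failure story.
walkthrough = {
    "task": "Forward a voicemail message to a colleague",
    "steps": [
        {
            "action": "Press 'Menu'",
            "likely_to_choose": True,     # Q1: will these users take the "right action"?
            "feedback_confirms": True,    # Q2: will the feedback confirm progress?
            "story": "Success: 'Menu' clearly matches the goal of finding options.",
        },
        {
            "action": "Press '7' for 'Redirect'",
            "likely_to_choose": False,
            "feedback_confirms": False,
            "story": "Failure: users are unlikely to read 'Redirect' as 'forward'.",
            "suggested_repair": "Relabel the option 'Forward message'.",
        },
    ],
}

# The failure stories (and optional repairs) are what go back to the designers.
for step in walkthrough["steps"]:
    if not step["likely_to_choose"]:
        print(step["action"], "->", step.get("suggested_repair", "no repair proposed"))
```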
Transforming CW to Faster and More Accurately Predict User Actions The cognitive walkthrough for the Web (CWW) has transformed the CW approach by relying on Latent Semantic Analysis (LSA)—instead of on the subjective judgments of usability experts and software engineers—to predict whether users are likely to select the “right action.” LSA is a computer software system that objectively measures semantic similarity— similarity in meaning—between any two passages of text. LSA also assesses how familiar words and phrases are for particular user groups. While analyzing the distinctive characteristics of the particular user group, CWW evaluators choose the LSA semantic space that best represents the background knowledge of the particular user group— the space built from documents that these users
are likely to have read. For example, CWW currently offers a college-level space for French and five spaces that accurately represent general reading knowledge for English at college level and at third-, sixth-, ninth-, and twelfth-grade levels. CWW uses LSA to measure the semantic similarity between a user’s information search goal (described in 100 to 200 words) and the text labels for each and every subregion of the web page and for each and every link appearing on a web page. CWW then ranks all the subregions and link labels in order of decreasing similarity to the user’s goal. CWW predicts success if the “right action” is the highest-ranking link, if that link is nested within the highest-ranking subregion, and if the “right action” link label and subregion avoid using words that are liable to be unfamiliar to members of the user group. Relying on LSA produces the same objective answer every time, and laboratory experiments confirm that actual users almost always encounter serious problems whenever CWW predicts that users will have problems doing a particular task. Furthermore, using CWW to repair the problems produces two-to-one gains in user performance. So far, CWW researchers have tested predictions and repairs only for users with college-level reading knowledge of English, but they expect to prove that CWW gives comparably accurate predictions for other user groups and semantic spaces.
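The core CWW computation can be sketched in a few lines. The example below builds a tiny LSA space with scikit-learn (TF-IDF followed by truncated SVD) and ranks link labels by their cosine similarity to a user goal; the background corpus, goal, and labels are toy stand-ins, whereas real CWW relies on large, grade-level semantic spaces and much longer goal statements.

```python
# A toy sketch of the CWW ranking step: build a small LSA semantic space,
# then rank link labels by decreasing similarity to the user's goal.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

background_corpus = [   # stand-in for documents the user group has read
    "buy cheap airline tickets and book flights online",
    "check flight arrival and departure times at the airport",
    "find hotel rooms and vacation packages",
    "rent a car at the airport for your trip",
    "read restaurant reviews before you travel",
    "track your frequent flyer miles and rewards",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(background_corpus)
lsa = TruncatedSVD(n_components=3).fit(tfidf)        # the "semantic space"

goal = "I want to check whether my flight will arrive on time"
link_labels = ["Flight status", "Hotel deals", "Car rental", "Rewards program"]

goal_vec = lsa.transform(vectorizer.transform([goal]))
label_vecs = lsa.transform(vectorizer.transform(link_labels))
scores = cosine_similarity(goal_vec, label_vecs)[0]

# Print labels in decreasing order of similarity to the goal, as CWW does.
for label, score in sorted(zip(link_labels, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {label}")
```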
APPLICATION A software program that performs a major computing function (such as word processing or Web browsing).
Research by cognitive psychologist Rodolfo Soto suggests that CW evaluations of software applications would be improved by relying on LSA, but to date CW has consistently relied on subjective judgments of human evaluators. Consequently the agreement between any two CW evaluators is typically low, raising concerns about the accuracy of CW predictions. Many studies have tried to assess the accuracy and cost-effectiveness of CW compared to usability testing and other evaluation methods. The results are inconclusive, because there is controversy
about the experimental design and statistics of these studies. Relying on LSA opens the door to fully automating CWW and increasing its cost-effectiveness. If other CW methods start to rely on LSA they, too, could be automated. The streamlined CW is more efficient than earlier CW methods, but it still consumes the time of multiple analysts and relies on subjective judgments of uncertain accuracy.
Objectively Predicting Actions for Diverse Users Relying on LSA makes it possible for CWW to do something that even usability experts trained in cognitive psychology can almost never do: objectively predict action selections for user groups whose background knowledge is very different from the background knowledge of the human evaluators. For example, selecting the sixth-grade semantic space enables LSA to “think” like a sixth grader, because the sixth-grade LSA semantic space contains only documents likely to have been read by people who have a sixth-grade education. In contrast, a college-educated analyst cannot forget the words, skills, and technical terms learned since sixth grade and cannot, therefore, think like a sixth grader. Since CW was invented in 1990, the number and diversity of people using computers and the Internet have multiplied rapidly. Relying on LSA will enable the CW approach to keep pace with these changes. In cases where none of the existing LSA semantic spaces offers a close match with the background knowledge of the target user group, new semantic spaces can be constructed for CWW (and potentially for CW) analyses—in any language at any level of ability in that language. Specialized semantic spaces can also be created for bilingual and ethnic minority user groups and user groups with advanced background knowledge in a specific domain, such as the domain of medicine for evaluating systems used by health professionals. Marilyn Hughes Blackmon See also Errors in Interactive Behavior; User Modeling
FURTHER READING Blackmon, M. H., Kitajima, M., & Polson, P. G. (2003). Repairing usability problems identified by the cognitive walkthrough for the web. In CHI 2003: Proceedings of the Conference on Human Factors in Computing Systems, 497–504. Blackmon, M. H., Polson, P. G., Kitajima, M., & Lewis, C. (2002). Cognitive walkthrough for the Web. In CHI 2002: Proceedings of the Conference on Human Factors in Computing Systems, 463–470. Desurvire, H. W. (1994). Faster, cheaper!! Are usability inspection methods as effective as empirical testing? In J. Nielsen & R. L. Mack (Eds.), Usability inspection methods (pp. 173–202). New York: Wiley. Gray, W. D., & Salzman, M. D. (1998). Damaged merchandise? A review of experiments that compare usability evaluation methods. Human-Computer Interaction, 13(3), 203–261. Hertzum, M., & Jacobsen, N. E. (2003). The evaluator effect: A chilling fact about usability evaluation methods. International Journal of Human Computer Interaction, 15(1), 183–204. John B. E., & Marks, S. J. (1997). Tracking the effectiveness of usability evaluation methods. Behaviour & Information Technology, 16(4/5), 188–202. John, B. E., & Mashyna, M. M. (1997). Evaluating a multimedia authoring tool. Journal of the American Society for Information Science, 48(11), 1004–1022. Ko, A. J., Burnett, M. M., Green, T. R. G., Rothermel, K. J., & Cook, C. R. (2002). Improving the design of visual programming language experiments using cognitive walkthroughs. Journal of Visual Languages and Computing, 13, 517–544. Kushniruk, A. W., Kaufman, D. R., Patel, V. L., Lévesque, Y., & Lottin, P. (1996). Assessment of a computerized patient record system: A cognitive approach to evaluating medical technology. M D Computing, 13(5), 406–415. Lewis, C., Polson, P., Wharton, C., & Rieman, J. (1990). Testing a walkthrough methodology for theory-based design of walk-up-anduse interfaces. In CHI ‘90: Proceedings of the Conference on Human Factors in Computing Systems, 235–242. Lewis, C., & Wharton, C. (1997). Cognitive walkthroughs. In M. Helander, T. K. Landauer, & P. Prabhu (Eds.), Handbook of human-computer interaction (2nd ed., revised, pp. 717–732). Amsterdam: Elsevier. Pinelle, D., & Gutwin, C. (2002). Groupware walkthrough: Adding context to groupware usability evaluation. In CHI 2002: Proceedings of the Conference on Human Factors in Computing Systems, 455–462. Polson, P., Lewis, C., Rieman, J., & Wharton, C. (1992). Cognitive walkthroughs: A method for theory-based evaluation of user interfaces. International Journal of Man-Machine Studies, 36, 741–773. Rowley, D. E., & Rhoades, D. G. (1992). The cognitive jogthrough: A fast-paced user interface evaluation procedure. In CHI ’92: Proceedings of the Conference on Human Factors in Computing Systems, 389–395. Sears, A., & Hess, D. J. (1999). Cognitive walkthroughs: Understanding the effect of task description detail on evaluator performance. International Journal of Human-Computer Interaction, 11(3), 185–200. Soto, R. (1999). Learning and performing by exploration: Label quality measured by Latent Semantic Analysis. In CHI ’99: Proceedings of the Conference on Human Factors and Computing Systems, 418–425.
Spencer, R. (2000). The streamlined cognitive walkthrough method, working around social constraints encountered in a software development company. In CHI 2000: Proceedings of the Conference on Human Factors in Computing Systems, 353–359. Wharton, C., Rieman, J., Lewis, C., & Polson, P. (1994). The cognitive walkthrough method: A practitioner’s guide. In J. Nielsen & R. L. Mack (Eds.), Usability inspection methods (pp. 105–140). New York: Wiley.
COLLABORATIVE INTERFACE See Multiuser Interfaces
COLLABORATORIES A collaboratory is a geographically dispersed organization that brings together scientists, instrumentation, and data to facilitate scientific research. In particular, it supports rich and recurring human interaction oriented to a common research area and provides access to the data sources, artifacts, and tools required to accomplish research tasks. Collaboratories have been made possible by new communication and computational tools that enable more flexible and ambitious collaborations. Such collaborations are increasingly necessary. As science progresses, the unsolved problems become more complex, the need for expensive instrumentation increases, larger data sets are required, and a wider range of expertise is needed. For instance, in high-energy physics, the next generation of accelerators will require vast international collaborations and will have a collaboratory model for remote access. At least 150 collaboratories representing almost all areas of science have appeared since the mid-1980s. Collaboratories offer their participants a number of different capabilities that fall into five broad categories: communication (including tools such as audio or video conferencing, chat, or instant messaging), coordination (including tools relating to access rights, group calendaring, and project management), information access (including tools for
accessing online databases, digital libraries, and document repositories), computational access (including access to supercomputers), and facility access (including tools for remotely accessing specialized facilities or instruments, such as a particle accelerator or a high-powered microscope). Research on collaboratories has focused mostly on solving technical problems. However, substantial gains in the practice of science are likely to be the combined effect of social and technical transformations. The gap between the raw performance capability of collaboratory tools (based on bandwidth, storage capacity, processor speed, and so forth) and the realized performance (usage for scientific purposes, which is limited by factors such as usability and fit to the work and culture) can limit the potential of collaboratories. This point will be discussed in greater detail later.
Types of Collaboratories: Research-Focused Collaboratories There are a number of different kinds of collaboratories. A collaboratory that satisfies all elements of the definition given above is a prototypical collaboratory— a distributed research center. Other kinds of collaboratories are missing one or more of the elements of that definition. The following four types of collaboratories focus on enabling geographically distributed research. Distributed Research Center This type of collaboratory functions like a full-fledged research center or laboratory, but its users are geographically dispersed—that is, they are not located at the research center. It has a specific area of interest and a general mission, with a number of specific projects. A good example of a distributed research center is the Alliance for Cellular Signaling, a large, complex distributed organization of universities whose goal is to understand how cells communicate with one another to make an organism work. Shared Instrument A shared-instrument collaboratory provides access to specialized or geographically remote facilities.
As the frontiers of science are pushed back, the instrumentation required for advances becomes more and more esoteric, and therefore usually more and more expensive. Alternatively, certain scientific investigations require instrumentation in specific geographic settings, such as an isolated or inhospitable area. A typical example is the Keck Observatory, which provides access to an astronomical observatory on the summit of Mauna Kea in Hawaii to a consortium of California universities. Community Data System An especially common collaboratory type is one in which a geographically dispersed community agrees to share their data through a federated or centralized repository. The goal is to create a more powerful data set on which more sophisticated or powerful analyses can be done than would be possible if the parts of the data set were kept separately. A typical example of a community data system is the Zebrafish Information Network (ZFIN), an online aggregation of genetic, anatomical, and methodological information for zebra fish researchers. Open-Community Contribution System Open-community contribution systems are an emerging organizational type known as a voluntary association. Interested members of a community (usually defined quite broadly) are able to make small contributions (the business scholar Lee Sproull calls them microcontributions) to some larger enterprise. These contributions are judged by a central approval organization and placed into a growing repository. The classic example is open-source software development, which involves hundreds or even thousands of contributors offering bug fixes or feature extensions to a software system. In science, such schemes are used to gather data from a large number of contributors. Two examples will help illustrate this. The NASA Ames Clickworkers project invited members of the public to help with the identification of craters on images from a Viking mission to Mars. They received 1.9 million crater markings from over 85,000 contributors, and the averaged results of these community contributions were equivalent in quality to those of expert geologists. A second example is MIT’s Open Mind Common Sense Initiative, which is collecting
examples of commonsense knowledge from members of the public “to help make computers smarter” (Singh, n.d.).
Types of Collaboratories: Practice-Focused Collaboratories The next two collaboratory types support the professional practice of science more broadly, as opposed to supporting the conduct of research itself. Virtual Community of Practice This is a network of individuals who share a research area of interest and seek to share news of professional interest, advice, job opportunities, practical tips on methods, and the like. A good example of this kind of collaboratory is Ocean US, which supports a broad community of researchers interested in ocean observations. A listserv is another mechanism that is used to support a virtual community of practice, but much more common these days are websites and wikis. Virtual Learning Community This type of collaboratory focuses on learning that is relevant to research, but not research itself. A good example is the Ecological Circuitry Collaboratory, whose goal is to train doctoral students in ecology in quantitative-modeling methods.
Evolution and Success of Collaboratories Collaboratories that last more than a year or two tend to evolve. For example, a collaboratory may start as a shared-instrument collaboratory. Those who share the instrument may add a shared database component to it, moving the collaboratory toward a community data system. Then users may add communication and collaboration tools so they can plan experiments or data analyses, making the collaboratory more like a distributed research center. Some collaboratories are quite successful, while others do not seem to work very well. There are a number of factors that influence whether or not a
collaboratory is successful. What follow are some of the most important factors.
Readiness for Collaboration Participants must be ready and willing to collaborate. Science is by its very nature a delicate balance of cooperation and competition. Successful collaborations require cooperation, but collaboration is very difficult and requires extra effort and motivation. Technologies that support collaboration will not be used if the participants are not ready or willing to collaborate. Various fields or user communities have quite different traditions of sharing. For instance, upper-atmospheric physicists have had a long tradition of collaboration; the Upper Atmospheric Research Collaboratory (UARC) began with a collaborative set of users. On the other hand, several efforts to build collaboratories for biomedical research communities (for instance, for researchers studying HIV/AIDS or depression) have had difficulty in part because of the competitive atmosphere. Readiness for collaboration can be an especially important factor when the collaboratory initiative comes from an external source, such as a funding agency.
Technical Readiness The participants, the supporting infrastructure, and the design of the tools must be at a threshold technical level. Some communities are sufficiently collaborative to be good candidates for a successful collaboratory, but their experience with collaborative technologies or the supporting infrastructure is not sufficient. Technical readiness can be of three kinds.
INDIVIDUAL TECHNICAL READINESS People in various organizations or fields have different levels of experience with collaboration tools. A specific new technology such as application sharing may be a leap for some and an easy step for others. It is important to take account of users' specific experience when introducing new tools.
INFRASTRUCTURE READINESS Collaborative technologies require good infrastructure, both technical and social. Poor networks, incompatible workstations, or a lack of control over different versions of software can cause major problems. It is also very important
to have good technical support personnel, especially in the early phases of a collaboratory. The Worm Community System (WCS) was a very early collaboratory project, intended to support a community of researchers who studied the organism C. elegans (a type of nematode). Sophisticated software was developed for the WCS on a UNIX platform that was not commonly used in the laboratories of the scientists. Since the tools were thus not integrated with everyday practice, they were seldom used. Furthermore, the necessary technical support was not generally present in the lab, so when there were problems, they were showstoppers.
SOCIAL ERGONOMICS OF TOOLS The social interactions that take place in teams are affected both by the characteristics of team members and by the tools that are used. The study of the impact of technology characteristics on this process may be called social ergonomics (ergonomics is the application of knowledge about humans to the design of things). For example, video conferencing systems often ignore such details as screen size, display arrangement in relation to participants, camera angle, and sound volume. But it turns out that these details can have social effects. In one study conducted by the researchers Wei Huang, Judith Olson, and Gary Olson, the apparent height of videoconference participants, as conveyed via camera angle, influenced a negotiation task. The apparently taller person was more influential in shaping the final outcome than the apparently shorter person.
Aligned Incentives Aligning individual and organizational incentives is an important element of successful collaborations. Consider the incentives to participate in a community data system: What motivates a researcher to contribute data to a shared database? By contributing, the researcher gives up exclusive access to the data he or she has collected. There are a variety of incentive schemes for encouraging researchers to collaborate.
GOODWILL ZFIN has relied on the goodwill of its members. Most of the members of this community had a connection to one specific senior researcher who both pioneered the use of zebra fish as a model organism and also created for the community a spirit of generosity and collaboration. Although goodwill among the community of researchers has been a sufficient incentive for participation, ZFIN is now expanding its participation beyond its founders, and it will be interesting to see how successful the goodwill incentive is in the context of the expanded community.
GOODWILL PLUS KARMA POINTS Slashdot is a very large and active community of open-source software developers who share and discuss news. Slashdot rewards those who make the most informative contributions by bringing them more into the center of attention and allocating them karma points. Karma points are allocated in accordance with how highly a contributor's postings are rated by others. These karma points give contributors some additional privileges on the site, but their main value is as a tangible measure of community participation and status. Karma points are a formalization of goodwill, valuable primarily because the members of the community value them as an indicator of the quality of the sharing done by specific individuals.
REQUIRING CONTRIBUTION AS A PREREQUISITE FOR OTHER ACTIVITY In order to get the details of gene sequences out of published articles in journals, a consortium of high-prestige journals in biology requires that those who submit articles to the consortium's journals have a GenBank accession number indicating that they have stored their gene sequences in the shared database.
NEW FORMS OF PUBLICATION The Alliance for Cellular Signaling has taken a novel approach to providing researchers with an incentive to contribute molecule pages to the Alliance's database. Because the molecule pages represent a lot of work, the Alliance has worked out an agreement with Nature, one of the high-prestige journals in the field, to count a molecule page as a publication in Nature. Nature coordinates the peer reviews, and although molecule-page reviews do not appear in print, the molecule pages are published online and carry the prestige of the Nature Publishing Group. The Alliance's editorial director has written letters in support of promotion and tenure cases indicating that
molecule page contributions are of journal-publication quality. This agreement is a creative attempt to ensure that quality contributions will be made to the database; it also represents an interesting evolution of the scholarly journal to include new forms of scholarly publication.
Data Issues Data are a central component of all collaborations. There are numerous issues concerning how data are represented and managed; how these issues are resolved affects collaboratory success. For example, good metadata—data about data—are critical as databases increase in size and complexity. Library catalogs and indexes to file systems are examples of metadata. Metadata are key to navigation and search through databases. Information about the provenance or origins of the data is also important. Data have often been highly processed, and researchers will want to know what was done to the original raw data to arrive at the processed data currently in the database. Two related collaboratories in high-energy physics, GriPhyN and iVDGL, are developing schemes for showing investigators the paths of the transformations that led to the data in the database. This will help researchers understand the data and will also help in identifying and correcting any errors in the transformations. For some kinds of collaboratories, the complex jurisdictional issues that arise when data are combined into a large database pose an interesting new issue. The BIRN project is facing just such an issue as it works to build up a database of brain images. The original brain images were collected at different universities or hospitals under different institutional review boards, entities that must approve any human data collection and preservation, and so the stipulations under which the original images were collected may not be the same in every case.
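The transformation-path idea can be pictured as a chain of provenance records attached to each derived data item. The following sketch is only an illustration of that idea, not the actual GriPhyN or iVDGL software; all class names, fields, and sample values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Transformation:
    """One processing step applied to the data (hypothetical record format)."""
    tool: str          # e.g., "calibration-v2"
    parameters: dict   # settings used for this step
    operator: str      # who ran the step

@dataclass
class DataItem:
    """A derived data product together with its provenance chain."""
    name: str
    source: str                                  # original raw data set
    history: List[Transformation] = field(default_factory=list)

    def derive(self, tool, parameters, operator, new_name):
        """Record a transformation and return the derived item."""
        step = Transformation(tool, parameters, operator)
        return DataItem(new_name, self.source, self.history + [step])

# A researcher can walk item.history to see exactly how raw data were
# turned into the processed values stored in the shared database.
raw = DataItem("run-42-raw", source="detector A")
calibrated = raw.derive("calibration", {"gain": 1.02}, "j.smith", "run-42-cal")
print([step.tool for step in calibrated.history])   # ['calibration']
```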
Other Issues Many collaboratory projects involve cooperation between domain scientists, who are the users of the collaboratory, and computer scientists, who are responsible for the development of the tools. In many
projects there are tensions between users, who want reliable tools that do what they need done, and computer scientists, who are interested in technical innovations and creative software ideas. There is little incentive for the computer scientists to go beyond the initial demonstration versions of tools to the reliable and supported long-term operational infrastructure desired by the users. In some fields, such as high-energy physics, this tension has been at least partially resolved. The field has used advanced software for so long that it is understood that the extra costs associated with having production versions of tools must be included in a project. Other fields are only just discovering this. The organization of the George E. Brown, Jr., Network for Earthquake Engineering Simulation (NEES) project represents an innovation in this regard. The National Science Foundation, which funds the project, established it in two phases, an initial four-year system-integration phase in which the tools are developed and tested, and a ten-year operational phase overseen by a NEES consortium of user organizations. Any large organization faces difficult management issues, and practicing scientists may not always have the time or the skills to properly manage a complex enterprise. Management issues get even more complicated when the organization is geographically distributed. Many large collaboratories have faced difficult management issues. For instance, the two physics collaboratories mentioned earlier, GriPhyN and iVDGL, found that it was necessary to hire a full-time project manager for each collaboratory in order to help the science project directors manage the day-to-day activities of the projects. The Alliance for Cellular Signaling has benefited from a charismatic leader with excellent management skills who has set up a rich management structure to oversee the project. The BIRN collaboratory has an explicit governance manual that contains guidelines for a host of tricky management issues; it also has a steering committee that is responsible for implementing these management guidelines.
Collaboratories in the Future Geographically distributed research projects are becoming commonplace in all the sciences. This
proliferation is largely driven by what is required to work at the frontiers of science. In the future, widely shared knowledge about how to put together successful collaboratories will be essential. Of course, scientists are not alone in attempting geographically distributed collaborations. Similar issues are faced in industry, education, government, and the nonprofit sector. Good tools for collaboration and the social and organizational knowledge to make effective use of them will be critical in all domains. Gary M. Olson
See also Computer-Supported Cooperative Work; Groupware

FURTHER READING

Aldhous, P. (1993). Managing the genome data deluge. Science, 262, 502–503.
Birnholtz, J., & Bietz, M. (2003). Data at work: Supporting sharing in science and engineering. In Proceedings of Group 2003. New York: ACM Press.
Cinkosky, M. J., Fickett, J. W., Gilna, P., & Burks, C. (1991). Electronic data publishing and GenBank. Science, 252, 1273–1277.
Finholt, T. A. (2002). Collaboratories. In B. Cronin (Ed.), Annual Review of Information Science and Technology, 36, 74–107. Washington, DC: American Society for Information Science and Technology.
Finholt, T. A., & Olson, G. M. (1997). From laboratories to collaboratories: A new organizational form for scientific collaboration. Psychological Science, 8(1), 28–36.
Huang, W., Olson, J. S., & Olson, G. M. (2002). Camera angle affects dominance in video-mediated communication. In Proceedings of CHI 2002, short papers (pp. 716–717). New York: ACM Press.
National Science Foundation. (2003). Revolutionizing science and engineering through cyberinfrastructure: Report of the National Science Foundation blue-ribbon panel on cyberinfrastructure. Retrieved December 24, 2003, from http://www.communitytechnology.org/nsf_ci_report/
Olson, G. M., Finholt, T. A., & Teasley, S. D. (2000). Behavioral aspects of collaboratories. In S. H. Koslow & M. F. Huerta (Eds.), Electronic collaboration in science (pp. 1–14). Mahwah, NJ: Lawrence Erlbaum Associates.
Olson, G. M., & Olson, J. S. (2000). Distance matters. Human-Computer Interaction, 15(2–3), 139–179.
Raymond, E. S. (1999). The cathedral and the bazaar: Musings on Linux and open source by an accidental revolutionary. Sebastopol, CA: O'Reilly.
Schatz, B. (1991). Building an electronic community system. Journal of Management Information Systems, 8(3), 87–107.
Singh, P. (n.d.). Open Mind Common Sense. Retrieved December 22, 2003, from http://commonsense.media.mit.edu/cgi-bin/search.cgi
Sproull, L., Conley, C., & Moon, J. Y. (in press). Pro-social behavior on the net. In Y. Amichai-Hamburger (Ed.), The social net: The social psychology of the Internet. New York: Oxford University Press.
Sproull, L., & Kiesler, S. (in press). Public volunteer work on the Internet. In B. Kahin & W. Dutton (Eds.), Transforming enterprise. Cambridge, MA: MIT Press.
Star, S. L., & Ruhleder, K. (1994). Steps towards an ecology of infrastructure: Complex problems in design and access for large-scale collaborative systems. In Proceedings of CSCW 94 (pp. 253–264). New York: ACM Press.
Teasley, S., & Wolinsky, S. (2001). Scientific collaborations at a distance. Science, 292, 2254–2255.
Torvalds, L., & Diamond, D. (2001). Just for fun: The story of an accidental revolutionary. New York: Harper Business.
Wulf, W. A. (1993). The collaboratory opportunity. Science, 261, 854–855.

COMPILERS
Compilers are computer programs that translate one programming language into another. The original program is usually written in a high-level language by a programmer and then translated into a machine language by a compiler. Compilers help programmers develop user-friendly systems by allowing them to program in high-level languages, which are more similar to human language than machine languages are.
Background Of course, the first compilers had to be written in machine languages because the compilers needed to operate the computers to enable the translation process. However, most compilers for new computers are now developed in high-level languages, which are written to conform to highly constrained syntax to ensure that there is no ambiguity. Compilers are responsible for many aspects of information system performance, especially run-time performance, and they make it possible for programmers to use the full power of a programming language. Although compilers hide the complexity of the hardware from ordinary programmers, compiler development requires programmers to solve many practical algorithmic and engineering problems. Computer hardware architects constantly create new challenges
for compiler developers by building more complex machines. Compilers are one of several kinds of language translators; the following are the tasks performed by each specific type:
■ Assemblers translate low-level language instructions into machine code and map low-level language statements to one or more machine-level instructions.
■ Compilers translate high-level language instructions into machine code. High-level language statements are translated into more than one machine-level instruction.
■ Preprocessors usually perform text substitutions before the actual translation occurs.
■ High-level translators convert programs written in one high-level language into another high-level language. The purpose of this translation is to avoid having to develop machine-language-based compilers for every high-level language.
■ Decompilers and disassemblers translate the object code in a low-level language into the source code in a high-level language. The goal of this translation is to regenerate the source code.
In the 1950s compilers were often synonymous with assemblers, which translated low-level language instructions into directly executable machine code. The evolution from an assembly language to a high-level language was a gradual one, and the FORTRAN compiler developers who produced the first successful high-level language did not invent the notion of programming in a high-level language and then compiling the source code to the object code. The first FORTRAN compiler was designed and written between 1954 and 1957 by an IBM team led by John W. Backus, but it took about eighteen person-years of effort to develop. The main goal of the team led by Backus was to produce object code that could execute as efficiently as code written by hand by human machine coders.
Translation Steps Programming language translators, including compilers, go through several steps to accomplish their task, and use two major processes—an analytic process and a synthetic process. The analytic process
takes the source code as input and then examines the source program to check its conformity to the syntactic and semantic constraints of the language in which the program was written. During the synthetic process, the object code in the target language is generated. Each major process is further divided. The analytic process, for example, consists of a character handler, a lexical analyzer, a syntax analyzer, and a constraint analyzer. The character handler identifies characters in the source text, and the lexical analyzer groups the recognized characters into tokens such as operators, keywords, strings, and numeric constants. The syntax analyzer combines the tokens into syntactic structures, and the constraint analyzer checks to be sure that the identified syntactic structures meet scope and type rules. The synthetic process consists of an intermediate code generator, a code optimizer, and a code generator. An intermediate code generator produces code that is less specific than the machine code, which will be further processed by another language translator. A code optimizer improves the intermediate code with respect to the speed of execution and the computer memory requirement. A code generator takes the output from the code optimizer and then generates the machine code that will actually be executed on the target computer hardware.
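To make the analytic process more concrete, the sketch below implements a toy lexical analyzer: it handles the characters of a source string and groups them into tokens such as identifiers, operators, and numeric constants. The token categories and the sample expression are invented for illustration; a production compiler's lexer would be far more elaborate.

```python
import re

# Token categories and the character patterns that define them (illustrative only).
TOKEN_SPEC = [
    ("NUMBER",   r"\d+(?:\.\d+)?"),   # numeric constants
    ("IDENT",    r"[A-Za-z_]\w*"),    # identifiers and keywords
    ("OPERATOR", r"[+\-*/=]"),        # operators
    ("LPAREN",   r"\("),
    ("RPAREN",   r"\)"),
    ("SKIP",     r"\s+"),             # whitespace is recognized but discarded
]
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(source: str):
    """Group the characters of the source text into (category, text) tokens."""
    pos = 0
    while pos < len(source):
        match = MASTER.match(source, pos)
        if match is None:
            raise SyntaxError(f"unexpected character {source[pos]!r} at position {pos}")
        if match.lastgroup != "SKIP":
            yield (match.lastgroup, match.group())
        pos = match.end()

print(list(tokenize("rate = base + 42 * bonus")))
# [('IDENT', 'rate'), ('OPERATOR', '='), ('IDENT', 'base'), ('OPERATOR', '+'), ...]
```

The syntax and constraint analyzers described above would then take this token stream and build and check syntactic structures from it.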
Interpreters and Interpretive Compilers In general, compilers produce object code that executes at full machine speed, and they are usually designed to compile the entire source code before the resulting object code is executed. However, it is common for programmers to expect to execute one or more parts of a program before completing the program. In addition, many programmers want to write programs using a trial-and-error or what-if strategy. These cases call for the use of an interpreter in lieu of a traditional compiler because an interpreter, which executes one instruction at a time, can take the source program as input and then execute the instructions without generating any object code. Interpretive compilers generate simple intermediate code, which satisfies the constraints of the
practical interpreters. The intermediate code is then sent as input to an interpreter, which executes the algorithm embedded in the source code by utilizing a virtual machine. Within the virtual machine setting, the intermediate code plays the role of executable machine code.
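The virtual-machine arrangement can be illustrated with a few lines of code. The sketch below interprets a made-up intermediate code for a tiny stack machine, executing one instruction at a time; the opcodes are hypothetical and do not correspond to any real compiler's intermediate language.

```python
def run(intermediate_code):
    """Execute a list of (opcode, argument) pairs on a tiny stack machine."""
    stack = []
    for opcode, arg in intermediate_code:
        if opcode == "PUSH":          # push a constant onto the stack
            stack.append(arg)
        elif opcode == "ADD":         # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif opcode == "MUL":         # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif opcode == "PRINT":       # pop and display the top of the stack
            print(stack.pop())
        else:
            raise ValueError(f"unknown opcode {opcode}")

# Intermediate code for: print(2 + 3 * 4)
run([("PUSH", 2), ("PUSH", 3), ("PUSH", 4),
     ("MUL", None), ("ADD", None), ("PRINT", None)])   # prints 14
```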
Famous Compiler: GNU Compiler Collection (GCC) Many high-level language compilers have been implemented using the C programming language and generating C code as output. Because almost all computers come with a C compiler, source code written in C is very close to being truly hardware-independent and portable. The GNU Compiler Collection (GCC) provides code generation for many programming languages such as C, C++, and Java, and supports more than two hundred different software and hardware platforms. The source code of GCC is free and open, based on the GNU General Public License, which allows people to distribute the compiler's source code as long as the original copyright is not violated and the changes are published under the same license. This license enables users to port GCC to their platform of choice. GCC supports nearly all operating systems for personal computers, and many of them—particularly UNIX-like systems—ship the compiler as an integrated part of the platform. For example, Apple's Mac OS X is compiled using GCC 3.1. Other companies such as Sun and The Santa Cruz Operation also offer GCC as their standard system compiler. These examples show the flexibility and portability of GCC.
Compiler Constructor: Lex and Yacc Roughly speaking, compilers work in two stages. The first stage is reading the source code to discover its structure. The second stage is generating the executable object code based on the identified structure. Lex, a lexical-analyzer generator, and Yacc, a compiler-compiler, are programs used to discover the structure of the source code. Lex splits the source code into tokens by generating a program whose
control flow is manipulated by instances of regular expressions in the input stream. Regular expressions consist of normal characters, which include upper- and lower-case letters and digits, and metacharacters, which have special meanings. For example, a dot is a metacharacter that matches any one character other than the new-line character. The Lex source is a table of regular expressions and their associated program fragments, and the resulting program is a translation of that table. The program reads the input stream and generates the output stream by partitioning the input into strings that match the given regular expressions. Yacc is a general tool for describing the input to a program. After the Yacc user specifies the structures to be recognized and the corresponding code to be invoked, Yacc finds the hierarchical structures and transforms their specifications into subroutines that process the input.
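The Lex source table—regular expressions paired with program fragments—can be imitated in ordinary code, as in the sketch below. This is not Lex's actual syntax or its generated scanner; it only illustrates how control flow is driven by which pattern matches the input, with invented patterns and actions.

```python
import re

# A Lex-like table: each entry pairs a regular expression with the code to run
# when that pattern matches the input (patterns and actions are invented examples).
RULES = [
    (re.compile(r"\d+"),        lambda text: print("numeric constant:", text)),
    (re.compile(r"[A-Za-z]+"),  lambda text: print("word:", text)),
    (re.compile(r"\s+"),        lambda text: None),   # ignore whitespace
    (re.compile(r"."),          lambda text: print("other character:", text)),
]

def scan(stream: str):
    """Partition the input stream into strings matching the rules, running each action."""
    pos = 0
    while pos < len(stream):
        for pattern, action in RULES:
            match = pattern.match(stream, pos)
            if match:
                action(match.group())
                pos = match.end()
                break

scan("width 42 + margin")
```

Real Lex prefers the longest match and compiles the entire table into a single finite automaton; the loop above simply tries the rules in order, which is enough to convey the idea.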
The Future of Compilers Proebsting's Law states that “compiler advances double computing power every 18 years” (Proebsting, n.d., 1). This implies that compiler-optimization work makes a very minor contribution: while the processing power of computer hardware increases by about 60 percent per year, compiler optimization adds only about 4 percent. Furthermore, some people claim that compilers will become obsolete with the increased use of scripting languages, which rely on interpreters or interpretive compilers. Scripting languages, such as Python, are popular among new programmers and people who do not care about minute efficiency differences. However, there are arguments for the continued existence of compilers. One of the arguments is that there has to be a machine code on which the interpreters rely in order for a programmer's intended algorithm to be executed. In addition, there will always be new and better hardware, which will then rely on new compilers. It will also be impossible to extinguish the continuing desire to achieve even minute performance improvements and compile-time error-detection capability. One of the proposed future directions for compilers is to aid in increasing
the productivity of programmers by optimizing the high-level code. Another possible direction is to make compilers smarter by making them self-steering and self-tuning, which would allow them to adapt to their input by incorporating artificial-intelligence techniques.
Woojin Paik
See also Programming Languages
FURTHER READING

Aho, A. V., Sethi, R., & Ullman, J. D. (1986). Compilers: Principles, techniques and tools. Reading, MA: Addison-Wesley.
Aho, A. V., & Ullman, J. D. (1977). Principles of compiler design. Reading, MA: Addison-Wesley.
Bauer, A. (2003). Compilation of functional programming languages using GCC—Tail calls. Retrieved January 20, 2004, from http://home.in.tum.de/~baueran/thesis/baueran_thesis.pdf
A brief history of FORTRAN. (1998). Retrieved January 20, 2004, from http://www.ibiblio.org/pub/languages/FORTRAN/ch1-1.html
Catalog of free compilers and interpreters. (1998). Retrieved January 20, 2004, from http://www.idiom.com/free-compilers/
Clodius, W. (1997). Re: History and evolution of compilers. Retrieved January 20, 2004, from http://compilers.iecc.com/comparch/article/97-10-008
Compiler Connection. (2003). Retrieved January 20, 2004, from http://www.compilerconnection.com/index.html
Compiler Internet resource list. (n.d.). Retrieved January 20, 2004, from http://www.eg3.com/softd/compiler.htm
Cooper, K., & Torczon, L. (2003). Engineering a compiler. Burlington, MA: Morgan Kaufmann.
Cooper, K., Kennedy, K., & Torczon, L. (2003). COMP 412 overview of the course. Retrieved January 20, 2004, from http://www.owlnet.rice.edu/~comp412/Lectures/L01Intro.pdf
Crenshaw, J. (1997). Let's build a compiler. Retrieved January 20, 2004, from http://compilers.iecc.com/crenshaw/
Free Software Foundation. (1991). GNU General Public License. Retrieved January 20, 2004, from http://www.fsf.org/licenses/gpl.html
GCC homepage. (2004, January 26). Retrieved January 26, 2004, from http://gcc.gnu.org/
Joch, A. (2001, January 22). Compilers, interpreters and bytecode. Retrieved January 20, 2004, from http://www.computerworld.com/softwaretopics/software/story/0,10801,56615,00.html
Lamm, E. (2001, December 8). Lambda the Great. Retrieved January 20, 2004, from http://lambda.weblogs.com/2001/12/08
The Lex and Yacc page. (n.d.). Retrieved January 20, 2004, from http://dinosaur.compilertools.net/
Mansour, S. (1999, June 5). A Tao of regular expressions. Retrieved January 20, 2004, from http://sitescooper.org/tao_regexps.html
Manzoor, K. (2001). Compilers, interpreters and virtual machines. Retrieved January 20, 2004, from http://homepages.com.pk/kashman/jvm.htm
Pizka, M. (1997). Design and implementation of the GNU INSEL compiler gic (Technical Report TUM–I 9713). Munich, Germany: Munich University of Technology.
Proebsting, T. (n.d.). Todd Proebsting's home page. Retrieved January 20, 2004, from http://research.microsoft.com/~toddpro/
Rice compiler group. (n.d.). Retrieved January 20, 2004, from http://www.cs.rice.edu/CS/compilers/index.html
Terry, P. D. (1997). Compilers and compiler generators—An introduction with C++. London: International Thomson Computer Press.
The comp.compilers newsgroup. (2002). Retrieved January 20, 2004, from http://compilers.iecc.com/index.html
Why compilers are doomed. (2002, April 14). Retrieved January 20, 2004, from http://www.equi4.com/jcw/wiki.cgi/56.html
COMPUTER-SUPPORTED COOPERATIVE WORK Computer-supported cooperative work (CSCW) is the subarea of human-computer interaction concerned with the communication, collaboration, and work practices of groups, organizations, and communities, and with information technology for groups, organizations, and communities. As the Internet and associated networked computing activities have become pervasive, research in CSCW has expanded rapidly, and its central concepts and vocabulary are still evolving. For the purposes of this discussion, we understand cooperative work as any activity that includes or is intended to include the coordinated participation of at least two individuals; we take computer support of such work to be any information technology used to coordinate or carry out the shared activity (including archiving of the records of an activity to allow subsequent reuse by another). Several themes dominate research and practice in CSCW: studies of work, in which activities and especially tool usage patterns are observed, analyzed, and interpreted through rich qualitative descriptions; design and use of computer-mediated communication (CMC) systems and of groupware, designed to aid with collaborative planning, acting, and sense making; and analyses of the adoption and adaptation of CSCW systems.
A Personal Story—Social Context in Computer-Supported Cooperative Work (CSCW) In the early 1980s, our research group at the IBM Watson Research Center focused on the early stages of learning word processing systems, like the IBM Displaywriter. We carried out an extensive set of studies over several years. In these investigations, we noticed that people tried to minimize the amount of rote learning they engaged in, preferring to adopt action-oriented approaches in their own learning. Eventually, we developed a description of the early stages of learning to use computer applications that helped to define new design approaches and learning support. But this work also made us wonder what more advanced learning might be like. To investigate this, my colleague John Gould and I visited an IBM customer site, to observe experienced users of Displaywriters as they worked in their everyday environments. These individuals were competent and confident in their use of the software. However we observed a pattern of distributed expertise: Each member of the staff had mastered one advanced function. Whenever someone needed to use an advanced function, she contacted the corresponding expert for personal, one-on-one coaching. This was a win-win situation: the requestors received customized help, and the specialized experts earned an increase in status. These field observations taught us the importance of people’s social context in the use and evaluation of information technology, something we now take for granted in CSCW. Mary Beth Rosson
Studies of Work A fundamental objective of CSCW is to understand how computers can be used to support everyday work practices. Early research in the 1980s focused on workflow systems. This approach codifies existing business procedures (for example, relating to the hiring of a new employee) in a computer model and embeds the model in a tracking system that monitors execution of the procedures, providing reminders, coordination across participants, and assurance that appropriate steps are followed. Computerized workflow systems are highly rational technological tools whose goal is to support the effective execution of normative procedures. Ironically, a major lesson that emerged from building and studying the use of these systems is that exceptions to normative business procedures are pervasive in real activity, and that handling such exceptions characteristically involves social interactions that need to be fluid and nuanced in order to succeed. Indeed, the failure of direct and rational workflow support was to a considerable extent the starting point for modern CSCW, which now emphasizes balance between structured performance
and learning support and flexibility in the roles and responsibilities available to human workers. Studies of work often employ ethnographic methods adapted from anthropology. In ethnographic research, the activities of a group are observed over an extended period of time. This allows collaborative activity to be seen in context. Thus, tasks are not characterized merely in terms of the steps comprising procedures, but also in terms of who interacts with whom to carry out and improvise procedures, what tools and other artifacts are used, what information is exchanged and created, and the longer-term collateral outcomes of activity, such as personal and collective learning and the development of group norms and mutual trust. This work has demonstrated how, for example, the minute interdependencies and personal histories of doctors, nurses, patients, administrators, and other caregivers in the functioning of a hospital must be analyzed to properly understand actions as seemingly simple as a doctor conveying a treatment protocol to a nurse on the next shift.
Sometimes the observer tries to be invisible in ethnographic research, but sometimes the investigator joins the group as a participant-observer. Typically video recordings of work activities are made, and various artifacts produced in the course of the work are copied or preserved to enable later analysis and interpretation. Ethnographic methods produce elaborate and often voluminous qualitative descriptions of complex work settings. These descriptions have become central to CSCW research and have greatly broadened the notion of context with respect to understanding human activity. Theoretical frameworks such as activity theory, distributed cognition, and situated action, which articulate the context of activity, have become the major paradigms for science and theory in CSCW. Much of what people do in their work is guided by tacit knowledge. A team of engineers may not realize how much they know about one another’s unique experience, skills, and aptitudes, or how well they recruit this knowledge in deciding who to call when problems arise or how to phrase a question or comment for best effect. But if an analyst observes them at work, queries them for their rationale during problem-solving efforts, and asks for reflections on why things happen, the tacit knowledge that is uncovered may point to important trade-offs in building computerized support for their work processes. For instance, directing a question to an expert colleague provides access to the right information at the right time, but also establishes and reinforces a social network. Replacing this social behavior with an automated expert database may answer the query more efficiently, but may cause employees to feel more disconnected from their organization. A persistent tension in CSCW studies of work springs from the scoping of activities to be supported. Many studies have shown how informal communication—dropping by a coworker’s office, encountering someone in the hall, sharing a coffee— can give rise to new insights and ideas and is essential in creating group cohesion and collegiality, social capital to help the organization face future challenges. But communication is also time consuming and often
ambiguous, entailing clarifications and confirmations. And of course informal interactions are also often unproductive. Balancing direct support for work activities with broader support for building and maintaining social networks is the current state of the classic workflow systems challenge.
Computer-Mediated Communication The central role of communication in the behavior of groups has led to intense interest in how technology can be used to enable or even enhance communication among individuals and groups. Much attention has been directed at communication among group members who are not colocated, but even for people who share an office, CMC channels such as e-mail and text chat have become pervasive. Indeed e-mail is often characterized as the single most successful CSCW application, because it has been integrated so pervasively into everyday work activities. The medium used for CMC has significant consequences for the communicators. Media richness theory suggests that media supporting video or voice are most appropriate for tasks that have a subjective or evaluative component because the nonverbal cues provided by a communicator's visual appearance or voice tone provide information that helps participants better understand and evaluate the full impact of one another's messages. In contrast, text-based media like e-mail or chat are better for gathering and sharing objective information. Of course, even text-based channels can be used to express emotional content or subjective reactions to some extent; a large and growing vocabulary of character-based icons and acronyms is used to convey sadness, happiness, surprise, and so on. Use of CMC has also been analyzed from the perspective of the psychologist Herbert Clark's theory of common ground in language—the notion that language production, interpretation, and feedback rely extensively on communicators' prior knowledge about one another, the natural language they are using, the setting they are in, and their group and cultural affiliations. In CMC settings some of this information may be missing. Furthermore, many of the acknowledgement and feedback mechanisms that
humans take for granted in face-to-face conversation (for example, head nods and interjected uh-huhs and so on) become awkward or impossible to give and receive in CMC. The theory of common ground argues that these simple acknowledgement mechanisms are crucial for fluid conversation because they allow conversation partners to monitor and track successful communication: A head nod or an uh-huh tells the speaker that the listener understands what the speaker meant, is acknowledging that understanding, and is encouraging the speaker to continue. Despite the general acknowledgement that text-based CMC media such as e-mail and chat are relatively poor at conveying emotion and subjective content, these channels have advantages that make them excellent choices for some tasks. E-mail, for example, is usually composed and edited in advance of sending the message; it can be read and reviewed multiple times; and it is very easily distributed to large groups. E-mail is also easy to archive, and its text content can be processed in a variety of ways to create reusable information resources. Because e-mail is relatively emotion-free, it may be appropriate for delicate or uncomfortable communication tasks. With so many CMC options, people are now able to make deliberate (or tacit) choices among CMC channels, using a relatively informal and unobtrusive medium like text chat for low-cost interaction, more formally composed e-mail for business memos, and video or audio conferencing for important decision-making tasks. The relative anonymity of CMC (particularly with text-based channels) has provoked considerable research into the pros and cons of anonymous communication. Communicators may use their real names or screen names that only loosely convey their identity; in some situations virtual identities may be adopted explicitly to convey certain aspects of an invented personality or online persona. Anonymity makes it easier to express sensitive ideas and so can be very effective when brainstorming or discussion is called for but social structures would otherwise inhibit a high degree of sharing. However the same factors that make anonymity an aid to brainstorming also can lead to rude or inappropriate exchanges and may make it difficult to establish common ground
and to build trusting relationships. Indeed, there have been a number of well-publicized episodes of cruel behavior in CMC environments such as chatrooms and MUDs (multiuser dungeons or domains). During the 1990s, cell phones, pagers, personal digital assistants, and other mobile devices rendered people and their work activities more mobile. As a consequence, the context of CMC became quite varied and unpredictable. A research area that has developed in response to users’ changing environments is context-aware computing, wherein the technology is used not only to support work activities, but also to gather information about the users’ situation. For example, it is relatively straightforward to set up distinct settings for how a cell phone will operate (e.g., ring tone or volume) at work, home, outdoors, and so on, but it takes time and attention to remember to activate and deactivate them as needed. Thus the goal is to build devices able to detect changes in people’s environment and to activate the appropriate communication options or tasks. Whether such mode changes take place automatically or are managed by the individual, the resulting context information can be important to collaborators, signaling if and when they can initiate or return to a shared activity.
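A context-aware device of the kind described above can be reduced to a small table of communication profiles plus a rule for applying them when the sensed context changes. The sketch below assumes that some sensing layer already reports a coarse label such as "work" or "home"; the profile contents and function names are hypothetical.

```python
# Hypothetical communication profiles keyed by a coarse context label.
PROFILES = {
    "work":     {"ring_tone": "silent",  "volume": 0, "forward_to_voicemail": True},
    "home":     {"ring_tone": "classic", "volume": 5, "forward_to_voicemail": False},
    "outdoors": {"ring_tone": "loud",    "volume": 9, "forward_to_voicemail": False},
}

def apply_profile(context: str, device_settings: dict) -> dict:
    """Activate the communication options appropriate to the detected context."""
    profile = PROFILES.get(context, PROFILES["home"])   # fall back to a default
    device_settings.update(profile)
    # Collaborators could also be notified of the change, signaling whether the
    # user can currently initiate or return to a shared activity.
    return device_settings

settings = apply_profile("work", {})
print(settings)   # {'ring_tone': 'silent', 'volume': 0, 'forward_to_voicemail': True}
```

Whether such a switch happens automatically or at the user's request is exactly the design question raised in the paragraph above.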
Groupware CSCW software is often categorized by the timing of the collaboration it supports: Synchronous groupware supports interaction at the same point in time, while asynchronous groupware supports collaboration across time. Another distinction is the collaborators' relative location, with some groupware designed for colocated interaction and some for distributed activities. For example, group decision support systems are typically used for synchronous and colocated interaction: As part of a face-to-face meeting, group members might use a shared online environment to propose, organize, and prioritize ideas. In contrast, an online forum might be used for asynchronous discussions among distributed group members.
A Personal Story—Internet Singing Lessons Having immigrated to the United States from India at an early age, I have always had a problem mastering the fine melodic nuances required to sing traditional Hindi songs. This problem has limited my singing repertoire. Last winter, during a religious meeting of Indian immigrants living in the Ann Arbor area, I was struck by how well a young Indian man sang a haunting Hindu chant. Later that evening I asked him to help me improve how I sang Hindu chants, which he did willingly. However, he soon informed me that he was returning to India the following week as he was in the U.S. on a temporary work visa. Because I was disappointed in losing such a willing teacher, my friend suggested a technological solution. He suggested that I set up an account with Yahoo! Messenger, and buy a stereo headset through which we could continue our music interaction. Yahoo! Messenger is an instant messaging system that enables logged-in users to exchange text messages, and to talk free of charge on the Internet. When my friend returned to India, we had to deal with two problems. First, we had to deal with the time difference: India is 10½ hours ahead of the U.S. Second, we had to deal with the problem that my friend only had access to an Internet connection at the office where he worked. This is because computers and Internet connections are still quite expensive for the average Indian. We therefore decided that the best time for undisturbed instant voice messaging would be at 7:30 a.m. Indian Standard Time, when other employees had not yet arrived in my friend's office. This time also worked out well for me because it would be 9:00 p.m. (EST), the time when I liked to pluck on my guitar and sing. The above plan worked well—on February 8th, 2004, I had my first “transcontinental singing lesson.” Despite a slight delay in sound transmission due to the Internet bandwidth problem, my friend was able to correct the fine melodic nuances that I missed when I sang my favorite Hindu chant. I can now sing a Hindu chant with nuances approved by a singing teacher sitting in front of a computer many oceans away. Suresh Bhavnani
A longstanding goal for many groupware developers has been building support for virtual meetings—synchronous group interactions that take place entirely online as a substitute for traditional face-to-face meetings. As businesses have become increasingly international and distributed, support for virtual meetings has become more important. A virtual meeting may use technology as simple as a telephone conference call or as complex as a collaborative virtual environment (CVE) that embodies attendees and their work resources as interactive objects in a three-dimensional virtual world. Because virtual meetings must rely on CMC, attendees have fewer communication cues and become less effective at turn taking, negotiation, and other socially rich interaction. It is also often difficult to access and interact with meeting documents in a CVE, particularly when the meeting agenda is open and information needs to evolve during the meeting. Some researchers have argued that online meetings will never equal face-to-face interaction, and that researchers should focus instead on the special qualities offered by a virtual medium—for example, the archiving, reviewing,
and revising of content that is a natural consequence of working together online. When collaborators meet online, participant authentication is an important issue. Many work situations have policies and procedures that must be respected; for example, meetings may have a specified attendee list or restricted documents, or decisions may require the approval of a manager. Enforcing such restrictions creates work for both the organizer of the activity (who must activate the appropriate controls) and the participants (who must identify themselves if and when required). Depending on a group's culture and setting, the meeting organizers may choose to make no restrictions at all (for example, they may meet in an online chatroom and rely on group members to self-enforce relevant policies and group behavior), or they may rely on a set of roles (such as leader, attendee, or scribe) built into the groupware system to manage information access and interaction.
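Role-based restrictions of this sort are often expressed as a simple permission table. The sketch below is a generic illustration with invented role names and permissions, not the access model of any particular groupware product.

```python
# Permissions granted to each built-in meeting role (names are invented examples).
ROLE_PERMISSIONS = {
    "leader":   {"open_meeting", "share_document", "speak", "approve_decision"},
    "scribe":   {"speak", "edit_minutes"},
    "attendee": {"speak"},
}

def can(user_roles, action: str) -> bool:
    """Return True if any of the user's roles grants the requested action."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

def request(user: str, roles, action: str):
    """Enforce the meeting policy for a single request."""
    if can(roles, action):
        print(f"{user}: {action} permitted")
    else:
        print(f"{user}: {action} denied -- requires a different role")

request("ana", ["leader"], "share_document")      # permitted
request("raj", ["attendee"], "approve_decision")  # denied
```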
A significant technical challenge for synchronous groupware is ensuring data consistency. When collaborators are able to communicate or edit shared data in parallel, there is the possibility that simultaneous requests will conflict: One participant might correct the spelling of a word at the same time that another member deletes a phrase containing the word, for example. The simplest technique for avoiding consistency problems is to implement a floor control mechanism that permits only one participant at a time to have the virtual pen, with others waiting until it is passed to them. Because such mechanisms can be awkward and slow, many groupware systems have explored alternatives, including implicit locking of paragraphs or individual words, and fully optimistic serialization, which processes all input in the order in which it is received, with the assumption that well-learned social protocols of turn taking and coordination will reduce conflict and ensure smooth operation. Many other technical challenges plague the smooth operation of groupware. For instance, it is quite common for collaborators to be interacting with rather different hardware and software platforms. Although work groups may settle on a standard set of software, not all group members may follow all aspects of the standard, and beyond the work group, there may be no standards. Thus interoperability of data formats, search tools, editing or viewing software, and analysis tools is a constant concern. As work settings have become more mobile and dynamic, the variety of technical challenges has increased: Some members at a virtual meeting may join by cell phone, while others may use a dedicated broadband network connection. It is increasingly common for groupware systems to at least provide an indicator of such variation, so that collaborators can compensate as necessary (for example, by recognizing that a cell phone participant may not be able to see the slides presented at a meeting). The general goal of promoting awareness during CSCW interactions has many facets. During synchronous work, groupware often provides some form of workspace awareness, with telepointers or miniaturized overviews showing what objects are selected or in view by collaborators. In more extended collaborations, partners depend on social awareness to
know which group members are around, available for interaction, and so on. Social awareness can be provided through mechanisms such as buddy lists, avatars (online representations of group members), or even regularly updated snapshots of a person in their work setting. For a shared project that takes place over weeks or months, collaborators need activity awareness: They must be aware of what project features have changed, who has done what, what goals or plans are currently active, and how to contribute. However, promoting activity awareness remains an open research topic; considerable work is needed to determine how best to integrate across synchronous and asynchronous interactions, what information is useful in conveying status and progress, and how this information can be gathered and represented in a manner that supports rather than interrupts collaborative activities.
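Returning to the data-consistency discussion above, the sketch below contrasts the two approaches in miniature: a floor-control lock that lets only the holder of the virtual pen edit, and fully optimistic serialization that applies every edit in the order received. It is a schematic illustration under those assumptions, not the conflict-resolution machinery of any real groupware toolkit.

```python
import queue

class FloorControl:
    """Only the participant holding the virtual pen may edit."""
    def __init__(self):
        self.holder = None

    def request_floor(self, user):
        if self.holder is None:
            self.holder = user
            return True
        return False                      # wait until the pen is passed

    def release_floor(self, user):
        if self.holder == user:
            self.holder = None

class OptimisticSession:
    """Apply every edit in the order received, trusting social turn-taking."""
    def __init__(self, document: list):
        self.document = document
        self.incoming = queue.Queue()     # edits submitted by all participants

    def submit(self, user, position, text):
        self.incoming.put((user, position, text))

    def process_all(self):
        while not self.incoming.empty():
            user, position, text = self.incoming.get()
            self.document.insert(position, f"{text}  [{user}]")

session = OptimisticSession(["Agenda"])
session.submit("ana", 1, "Item: budget")
session.submit("raj", 1, "Item: schedule")
session.process_all()
print(session.document)   # raj's later edit lands ahead of ana's at index 1
```

The final print statement shows why optimistic schemes lean on social protocols: both edits are accepted, but their interleaving depends purely on arrival order.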
Adoption and Adaptation of CSCW Systems Even when great care is taken in the design and implementation of a CSCW system, there is no guarantee that it will be successfully adopted and integrated into work practices—or that when it is adopted it will work as originally intended. Many case studies point to a sociotechnical evolution cycle: Initially, delivered CSCW systems do not fit onto existing social and organizational structures and processes. During a process of assimilation and accommodation, the organization changes (for example, a new role may be defined for setting up and facilitating virtual meetings) in concert with the technology (for example, a set of organization-specific templates may be defined to simplify agenda setting and meeting management). Several implications follow from this view of CSCW adoption. One is that participatory design of the software is essential—without the knowledge of praxis provided by the intended users, the software will not be able to evolve to meet their specific needs; furthermore if users are included in the design process, introduction of the CSCW system into the workplace will already have begun by the time the system is deployed. Another implication is that
CSCW software should have as open an architecture as possible, so that when the inevitable need for changes is recognized months or years after deployment, it will be possible to add, delete, or otherwise refine existing services. A third implication is that organizations seeking CSCW solutions should be ready to change their business structures and processes—and in fact should undertake business process reengineering as they contribute to the design of a CSCW system. A frequent contributing factor in groupware failure is uneven distribution of costs and benefits across organizational roles and responsibilities. There are genuine costs to collaboration: When an individual carries out a task, its subtasks may be accomplished in an informal and ad hoc fashion, but distributing the same task among individuals in a group is likely to require more advance planning and negotiation, recordkeeping, and explicit tracking of milestones and partial results. Collaboration implies coordination. Of course the benefits are genuine as well: One can assign tasks to the most qualified personnel, one gains multiple perspectives on difficult problems, and social recognition and rewards accrue when individuals combine efforts to reach a common goal. Unfortunately, the costs of collaboration are often borne by workers, who have new requirements for online planning and reporting, while its benefits are enjoyed by managers, who are able to deliver on-time results of higher quality. Therefore, when designing for sociotechnical evolution, it is important to analyze the expected costs and benefits and their distribution within the organization. Equally important are mechanisms for building social capital and trust, such that individuals are willing to contribute to the common good, trusting that others in the group will reward or care for them when the time comes. Critical mass is another determinant of successful adoption—the greater the proportion of individuals within an organization who use a technology, the more sense it makes to begin using it oneself. A staged adoption process is often effective, with a high-profile individual becoming an early user and advocate who introduces the system to his or her group. This group chronicles its adoption experience and passes the technology on to other groups, and so on. By the time the late adopters begin to use
the new technology, much of the sociotechnical evolution has taken place, context-specific procedures have been developed and refined in situ, and there are local experts to assist new users. As more and more of an organization’s activities take place online—whether through e-mail or videoconferencing or shared file systems—via CSCW technology, the amount of online information about the organization and its goals increases exponentially. The increased presence of organizational information online has generated great interest in the prospects for organizational memory or knowledge management. The hope is that one side effect of carrying out activities online will be a variety of records about how and why tasks are decomposed and accomplished, and that these records can provide guidance to other groups pursuing similar goals. Of course once again, there are important cost-benefit issues to consider: Recording enough information to be helpful to future groups takes time, especially if it is to be stored in any useful fashion, and the benefit in most cases will be enjoyed by other people. One solution is to give computers the job of recording, organizing, and retrieving. For example, even a coarse-grained identification of speakers making comments in a meeting can simplify subsequent browsing of the meeting audiotape.
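A coarse-grained speaker index of the kind just mentioned can be as simple as a list of labeled time intervals. The sketch below uses invented timestamps and speaker labels to show how a browsing tool could jump directly to one person's comments in a recorded meeting.

```python
# Coarse-grained speaker index for a recorded meeting (times in seconds, invented data).
SEGMENTS = [
    (0,   95,  "chair"),
    (95,  240, "engineer"),
    (240, 310, "chair"),
    (310, 500, "customer rep"),
]

def comments_by(speaker: str):
    """Return the (start, end) intervals during which the given speaker was talking."""
    return [(start, end) for start, end, who in SEGMENTS if who == speaker]

# A browsing tool could use these intervals to skip directly through the audiotape.
print(comments_by("chair"))   # [(0, 95), (240, 310)]
```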
Research Directions Much of the active research in CSCW is oriented toward new technologies that will enhance awareness, integrate multiple devices, populations, and activities, and make it possible to visualize and share rich data sets and multimedia documents. The need to interconnect people who are using diverse devices in diverse settings entails many research challenges, some related to the general issues of multiplatform computing and others tied to understanding and planning for the social and motivational differences associated with varied work settings. The rapidly expanding archives in organizations offer many research opportunities related to data processing and analysis as well as information visualization and retrieval. At the same time, these digital storehouses raise important questions about individual privacy and identity—the more information an organization
collects about an individual, the more opportunity there is for inappropriate access to and use of this information. A methodological challenge for CSCW is the development of effective evaluation methods. Field studies and ethnographic analyses yield very rich data that can be useful in understanding system requirements and organizational dynamics. But analyzing such detailed records to answer precise questions is time consuming and sometimes impossible due to the complexity of real-world settings. Unfortunately, the methods developed for studying individual computer use do not scale well to the evaluation of multiple users in different locations. Because social and organizational context are key components of CSCW activities, it is difficult to simulate shared activities in a controlled lab setting. Groupware has been evolving at a rapid rate, so there are few if any benchmark tasks or results to use for comparison studies. One promising research direction involves fieldwork that identifies interesting collaboration scenarios; these are then scripted and simulated in a laboratory setting for more systematic analysis. In the 1980s human-computer interaction focused on solitary users finding and creating information using a personal computer. Today, the focus is on several to many people working together at a variety of times and in disparate places, relying heavily on the Internet, and communicating and collaborating more or less continually. This is far more than a transformation of human-computer interaction; it is a transformation of human work and activity. It is still under way, and CSCW will continue to play a large role.
Mary Beth Rosson and John M. Carroll
See also Collaboratories; Ethnography; MUDs; Social Psychology and HCI
FURTHER READING
Ackerman, M. S. (2002). The intellectual challenge of CSCW: The gap between social requirements and technical feasibility. In J. M. Carroll (Ed.), Human-computer interaction in the new millennium (pp. 303–324). New York: ACM Press.
Baecker, R. M. (1993). Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration. San Francisco: Morgan-Kaufmann. Beaudouin-Lafon, M. (Ed). (1999). Computer supported co-operative work. Chichester, UK: John Wiley & Sons. Bikson, T. K., & Eveland, J. D. (1996). Groupware implementation: Reinvention in the sociotechnical frame. In Proceedings of the Conference on Computer Supported Cooperative Work: CSCW ’96 (pp. 428–437). New York: ACM Press. Carroll, J. M., Chin, G., Rosson, M .B., & Neale, D. C. (2000). The development of cooperation: Five years of participatory design in the virtual school. In Designing interactive systems: DIS 2000 (pp. 239–251). New York: ACM Press. Carroll, J. M., & Rosson, M.B. (2001). Better home shopping or new democracy? Evaluating community network outcomes. In Proceedings of Human Factors in Computing Systems: CHI 2001 (pp. 372–379). New York: ACM Press. Dourish, P., & Bellotti, V. (1992). Awareness and coordination in shared workspaces. In Proceedings of the Conference on Computer Supported Cooperative Work: CSCW ’92 (pp. 107–114). New York: ACM Press. Grudin, J. (1994). Groupware and social dynamics: Eight challenges for developers. Communications of the ACM, 37(1), 92–105. Gutwin, C., & Greenberg, S. (1999). The effects of workspace awareness support on the usability of real-time distributed groupware. ACM Transactions on Computer-Human Interaction, 6(3), 243–281. Harrison, S., & Dourish, P. (1996). Re-placing space: The roles of place and space in collaborative systems. In Proceedings of the Conference on Computer Supported Cooperative Work: CSCW ’96 (pp. 67–76). New York: ACM Press. Hughes, J., King, V., Rodden, T., & Andersen, H. (1994). Moving out from the control room: Ethnography in system design. In Proceedings of the Conference on Computer Supported Cooperative Work: CSCW ’94 (pp. 429–439). New York: ACM Press. Hutchins, E. (1995). Distributed cognition. Cambridge, MA: MIT Press. Malone, T. W., & Crowston, K. (1994). The interdisciplinary study of coordination. ACM Computing Surveys, 26(1), 87–119. Markus, M. L. (1994). Finding a happy medium: Explaining the negative effects of electronic communication on social life at work. ACM Transactions on Information Systems, 12(2), 119–149. Nardi, B. A. (1993). A small matter of programming. Cambridge, MA: MIT Press. Nardi, B. A. (Ed). (1996). Context and consciousness: Activity theory and human-computer interaction. Cambridge, MA: MIT Press. Olson, G. M., & Olson, J. S. (2000). Distance matters. Human Computer Interaction, 15(2–3), 139–179. Orlikowski, W. J. (1992). Learning from notes: Organizational issues in groupware implementation. In Proceedings of the Conference on Computer Supported Cooperative Work: CSCW ’92 (pp. 362–369). New York: ACM Press. Roseman, M., & Greenberg, S. (1996). Building real time groupware with Groupkit, a groupware toolkit. ACM Transactions on Computer Human Interaction, 3(1), 66–106. Sproull, L., & Kiesler, S. (1991). Connections: New ways of working in the networked organization. Cambridge, MA: MIT Press. Streitz, N. A., Geißler, J., Haake, J., & Hol, J. (1994). DOLPHIN: Integrated meeting support across local and remote desktop environments and liveboards. In Proceedings of the Conference on Computer Supported Cooperative Work: CSCW ’94 (pp. 345–358). New York: ACM Press.
Suchman, L. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge, UK: Cambridge University Press. Sun, C., & Chen, D. (2002). Consistency maintenance in real-time collaborative graphics editing systems. ACM Transactions on Computer-Human Interaction, 9(1), 1–41. Tang, J., Yankelovich, N., Begole, J., Van Kleek, M., Li, F., & Bhalodia, J. (2001). Connexus to awarenex: Extending awareness to mobile users. In Proceedings of Human Factors in Computing Systems: CHI 2001 (pp. 221–228). New York: ACM Press. Winograd, T. (1987/1988). A language/action perspective on the design of cooperative work. Human-Computer Interaction, 3(1), 3–30.
CONSTRAINT SATISFACTION Constraint satisfaction refers to a set of representation and processing techniques useful for modeling and solving combinatorial decision problems; this paradigm emerged from the artificial intelligence community in the early 1970s. A constraint satisfaction problem (CSP) is defined by three elements: (1) a set of decisions to be made, (2) a set of choices or alternatives for each decision, and (3) a set of constraints that restrict the acceptable combinations of choices for any two or more decisions. In general, the task of a CSP is to find a consistent solution—that is, a choice for every decision such that all the constraints are satisfied. More formally, each decision is called a variable, the set of alternative choices for a given variable is the set of values or domain of the variable, and the constraints are defined as the set of allowable combinations of assignments of values to variables. These combinations can be given in extension as the list of consistent tuples, or defined in intension as a predicate over the variables.
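To make the definition concrete, here is a minimal Python sketch (illustrative code, not part of the original article; the dictionary layout and the name is_solution are assumptions). It represents a small CSP as the three elements just listed, with each constraint defined in intension as a predicate over its scope, and checks whether a complete assignment satisfies every constraint.

```python
# A toy CSP: three variables, their domains, and two constraints given as
# (scope, predicate) pairs, i.e., defined in intension.
csp = {
    "variables": ["X", "Y", "Z"],
    "domains": {"X": {1, 2, 3}, "Y": {1, 2, 3}, "Z": {1, 2, 3}},
    "constraints": [
        (("X", "Y"), lambda x, y: x != y),  # X and Y must differ
        (("Y", "Z"), lambda y, z: y < z),   # Y must be less than Z
    ],
}

def is_solution(assignment, csp):
    """True if every variable is assigned and every constraint is satisfied."""
    if set(assignment) != set(csp["variables"]):
        return False
    return all(pred(*(assignment[v] for v in scope))
               for scope, pred in csp["constraints"])

print(is_solution({"X": 1, "Y": 2, "Z": 3}, csp))  # True
print(is_solution({"X": 2, "Y": 2, "Z": 3}, csp))  # False: X equals Y
```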
The 4-Queen Problem A familiar example of a CSP is the 4-queen problem. In this problem, the task is to place four queens on a 4×4 chessboard in such a way that no two queens attack each other. One way to model the 4-queen problem as a CSP is to define a decision variable for each square on the board. The square can be either empty (value 0) or have a queen (value 1). The constraints specify that exactly four of the decision variables have value 1 (“queen in this square”) and that there cannot be two queens in the same row, column, or diagonal. Because there are sixteen variables (one for each square) and each can take on two possible values, there are a total of 2¹⁶ (65,536) possible assignments of values to the decision variables. There are other ways of modeling the 4-queen problem within the CSP framework. One alternative is to treat each row on the board as a decision variable. The values that can be taken by each variable are the four column positions in the row. This formulation yields 4⁴ (256) possibilities. This example illustrates how the initial formulation or model affects the number of possibilities to be examined, and ultimately the performance of problem solving.
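A short Python sketch (again illustrative; the names N, consistent, and solutions are assumptions) makes the row-based formulation concrete: one variable per row whose value is that row's column, so no two queens can share a row by construction. Enumerating all 4⁴ = 256 candidate assignments and filtering out those that share a column or a diagonal leaves exactly two solutions.

```python
from itertools import product

N = 4

def consistent(cols):
    """No two queens share a column or a diagonal (rows differ by the model)."""
    for r1 in range(N):
        for r2 in range(r1 + 1, N):
            if cols[r1] == cols[r2] or abs(cols[r1] - cols[r2]) == abs(r1 - r2):
                return False
    return True

# Brute-force the 4**4 = 256 candidate assignments.
solutions = [cols for cols in product(range(N), repeat=N) if consistent(cols)]
print(len(solutions), solutions)  # 2 solutions: (1, 3, 0, 2) and (2, 0, 3, 1)
```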
CSP Representations A CSP is often represented as an undirected graph (or network), which is a set of nodes connected by a set of edges. This representation opens up the opportunity to exploit the properties and algorithms developed in graph theory for processing and solving CSPs. In a constraint graph, the nodes represent the variables and are labeled with the domains of the variables. The edges represent the constraints and link the nodes corresponding to the variables to which the constraints apply. The arity of a constraint designates the number of variables to which the constraint applies, and the set of these variables constitutes the scope of the constraint. Constraints that apply to two variables are called binary constraints and are represented as edges in the graph. Constraints that apply to more than two variables are called nonbinary constraints. While, early on, most research has focused on solving binary CSPs, techniques for solving nonbinary CSPs are now being investigated.
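As a rough illustration of the constraint-graph view (the data and names below are assumptions, not drawn from the literature cited here), the following sketch builds an adjacency structure for a small binary CSP: each node is a variable labeled with its domain, and each constraint of arity 2 contributes one edge between the two variables in its scope.

```python
# Nodes: variables labeled with their domains.
domains = {"A": {1, 2, 3}, "B": {1, 2, 3}, "C": {1, 2, 3}}

# Binary constraints: (scope, predicate) pairs; every scope here has arity 2.
constraints = [
    (("A", "B"), lambda a, b: a != b),
    (("B", "C"), lambda b, c: b != c),
    (("A", "C"), lambda a, c: a != c),
]

# Edges: one per binary constraint, linking the variables in its scope.
graph = {v: set() for v in domains}
for scope, _ in constraints:
    if len(scope) == 2:  # arity 2, hence a binary constraint
        x, y = scope
        graph[x].add(y)
        graph[y].add(x)

print(graph)  # e.g. {'A': {'B', 'C'}, 'B': {'A', 'C'}, 'C': {'A', 'B'}}
```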
The Role of CSPs in Science Beyond puzzles, CSPs have been used to model and solve many tasks (for example, temporal reasoning, graphical user interfaces, and diagnosis) and have been applied in many real-world settings (for example, scheduling, resource allocation, and product configuration and design). They have been used
in various areas of engineering, computer science, and management to handle decision problems. A natural extension of the CSP is the constrained optimization problem (COP), where the task is to find an optimal solution to the problem given a set of preferences and optimization criteria. The problems and issues studied in the constraint processing (CP) community most obviously overlap with those investigated in operations research, satisfiability and theoretical computer science, databases, and programming languages. The 1990s have witnessed a sharp increase in the interactions and cross-fertilization among these areas. A special emphasis is made in CP to maintain the expressiveness of the representation. Ideally, a human user should be able to naturally express the various relations governing the interactions among the entities of a given problem without having to recast them in terms of complex mathematical models and tools, as would be necessary in mathematical programming. The area of constraint reformulation is concerned with the task of transforming the problem representation in order to improve the performance of problem solving or allow the use of available solution techniques. Sometimes such transformations are truthful (that is, they preserve the essence of the problem), but often they introduce some sufficient or necessary approximations, which may or may not be acceptable in a particular context.
Solution Methods The techniques used to solve a CSP can be divided into two categories: constraint propagation (or inference) and search. Further, search can be carried out as a systematic, constructive process (which is exhaustive) or as an iterative repair process (which often has a stochastic component). Constraint Propagation Constraint propagation consists in eliminating, from the CSP, combinations of values for variables that cannot appear in any solution to the CSP. Consider for example two CSP variables A and B representing two events. Assume that A occurred between 8 a.m. and 12 p.m. (the domain of A is the interval [8, 12]), B occurred between 7 a.m. and 11 a.m. (the
domain of B is the interval [7, 11]), and B occurred one hour after A (B-A ≥ 1). It is easy to infer that the domains of A and B must be restricted to [8, 10] and [9, 11] respectively, because B cannot possibly occur before 9, or A after 10, without violating the constraint between A and B. This filtering operation considers every combination of two variables in a binary CSP. It is called 2-consistency. A number of formal properties have been proposed to characterize the extent to which the alternative combinations embedded in a problem description are likely to yield consistent solutions, as a measure of how “close is the problem to being solved.” These properties characterize the level of consistency of the problem (for example, k-consistency, minimality, and decomposability). Algorithms for achieving these properties, also known as constraint propagation algorithms, remain the subject of intensive research. Although the cost of commonly used constraint propagation algorithms is a polynomial function of the number of variables of the CSP and the size of their domains, solving the CSP remains, in general, an exponential-cost process. An important research effort in CP is devoted to finding formal relations between the level of consistency in a problem and the cost of the search process used for solving it. These relations often exploit the topology of the constraint graph or the semantic properties of the constraint. For example, a tree-structured constraint graph can be solved backtrack-free after ensuring 2-consistency, and a network of constraints of bounded differences (typically used in temporal reasoning) is solved by ensuring 3-consistency. Systematic Search In systematic search, the set of consistent combinations is explored in a tree-like structure starting from a root node, where no variable has a value, and considering the variables of the CSP in sequence. The tree is typically traversed in a depth-first manner. At a given depth of the tree, the variable under consideration (current variable) is assigned a value from its domain. This operation is called variable instantiation. It is important that the value chosen for the current variable be consistent with the instantiations of the past variables. The process of checking the consistency of a value for the current variable
with the assignments of past variables is called backchecking. It ensures that only instantiations that are consistent (partial solutions) are explored. If a consistent value is found for the current variable, then this variable is added to the list of past variables and a new current variable is chosen from among the un-instantiated variables (future variables). Otherwise (that is, no consistent value exists in the domain of the current variable), backtracking is applied. Backtracking undoes the assignment of the previously instantiated variable, which becomes the current variable, and the search process attempts to find another value in the domain of this variable. The process is repeated until all variables have been instantiated (thus yielding a solution) or backtrack has reached the root of the tree (thus proving that the problem is not solvable). Various techniques for improving the search process itself have been proposed. For systematic search, these techniques include intelligent backtracking mechanisms such as backjumping and conflict-directed backjumping. These mechanisms attempt to remember the reasons for failure and exploit them during search in order to avoid exploring barren portions of the search space, commonly called thrashing. The choices of the variable to be instantiated during search and that of the value assigned to the variable are handled, respectively, by variable and value ordering heuristics, which attempt to reduce the search effort. Such heuristics can be applied statically (that is, before the search starts) or dynamically (that is, during the search process). The general principles that guide these selections are “the most constrained variable first” and “the most promising value first.” Examples of the former include the least domain heuristic (where the variable with the smallest domain is chosen for instantiation) and the minimal-width heuristic (where the variables are considered in the ordering of minimal width of the constraint graph). Iterative-Repair Search In iterative repair (or iterative improvement) search, all the variables are instantiated (usually randomly) regardless of whether or not the constraints are satisfied. This set of complete instantiations, which is not necessarily a solution, constitutes a state. Iterative-repair search operates by moving from one
state to another and attempting to find a state where all constraints are satisfied. This move operator and the state evaluation function are two important components of an iterative-repair search. The move is usually accomplished by changing the value of one variable (thus the name local search). However, a technique operating as a multiagent search allows any number of variables to change their values. The evaluation function measures the cost or quality of a given state, usually in terms of the number of broken constraints. Heuristics, such as the min-conflict heuristic, are used to choose among the states reachable from the current state (neighboring states). The performance of iterative-repair techniques depends heavily on their ability to explore the solution space. The performance is undermined by the existence in this space of local optima, plateaux, and other singularities caused by the nonconvexity of the constraints. Heuristics are used to avoid falling into these traps or to recover from them. One heuristic, a breakout strategy, consists of increasing the weight of the broken constraints until a state is reached that satisfies these constraints. Tabu search maintains a list of states to which search cannot move back. Other heuristics use stochastic noise such as random walk and simulated annealing. Blending Solution Techniques Constraint propagation has been successfully combined with backtrack search to yield effective lookahead strategies such as forward checking. Combining constraint propagation with iterative-repair strategies is less common. On the other hand, randomization, which has been for a long time utilized in local search, is now being successfully applied in backtrack search.
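The two families of methods can be illustrated with a short Python sketch (the function names revise and backtrack and the toy data are assumptions, not code from the sources cited below). The first part applies 2-consistency filtering to the temporal example given earlier, shrinking the domains of A and B from [8, 12] and [7, 11] to [8, 10] and [9, 11]; the second part is a minimal systematic search that instantiates variables in sequence, backchecks each candidate value against past assignments, and backtracks when a domain is exhausted.

```python
# --- Constraint propagation: filter each domain against the constraint ---

def revise(domains, x, y, pred):
    """Remove values of x that have no supporting value of y under pred(vx, vy)."""
    revised = False
    for vx in set(domains[x]):
        if not any(pred(vx, vy) for vy in domains[y]):
            domains[x].discard(vx)
            revised = True
    return revised

domains = {"A": set(range(8, 13)), "B": set(range(7, 12))}  # A in [8,12], B in [7,11]
after = lambda a, b: b - a >= 1                             # B occurs at least 1 hour after A
revise(domains, "A", "B", after)                            # A loses 11 and 12
revise(domains, "B", "A", lambda b, a: after(a, b))         # B loses 7 and 8
print(sorted(domains["A"]), sorted(domains["B"]))           # [8, 9, 10] [9, 10, 11]

# --- Systematic search: depth-first instantiation with backchecking ---

def backtrack(variables, domains, constraints, assignment=None):
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(variables):
        return assignment                      # every variable instantiated
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Backchecking: the candidate value must be consistent with past variables.
        ok = all(pred(value, assignment[v2])
                 for (v1, v2), pred in constraints
                 if v1 == var and v2 in assignment)
        ok = ok and all(pred(assignment[v1], value)
                        for (v1, v2), pred in constraints
                        if v2 == var and v1 in assignment)
        if ok:
            result = backtrack(variables, domains, constraints,
                               {**assignment, var: value})
            if result is not None:
                return result
    return None                                # domain exhausted: backtrack

# Example: three variables that must all take different values.
neq = lambda a, b: a != b
print(backtrack(["X", "Y", "Z"],
                {"X": [1, 2], "Y": [1, 2], "Z": [1, 2, 3]},
                [(("X", "Y"), neq), (("X", "Z"), neq), (("Y", "Z"), neq)]))
# {'X': 1, 'Y': 2, 'Z': 3}
```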
Research Directions The use of constraint processing techniques is widespread due to the success of the constraint programming paradigm and the growing number of commercial tools and industrial achievements. While research on the above topics remains active, effort is also being invested in the following directions: user interaction; discovery and exploitation of symmetry relations; propagation algorithms for high-arity constraints
and for continuous domains; preference modeling and processing; distributed search techniques; empirical assessment of problem difficulty; and statistical evaluation and comparison of algorithms. Berthe Y. Choueiry See also Artificial Intelligence; N-grams
FURTHER READING Bistarelli, S., Montanari, U., & Rossi, F. (1997). Semiring-based constraint satisfaction and optimization. Journal of the ACM, 44(2), 201–236. Borning, A., & Duisberg, R. (1986). Constraint-based tools for building user interfaces. ACM Transactions on Graphics, 5(4), 345–374. Cohen, P. R. (1995). Empirical methods for artificial intelligence. Cambridge, MA: MIT Press. Dechter, R. (2003). Constraint processing. San Francisco: Morgan Kaufmann. Ellman, T. (1993). Abstraction via approximate symmetry. In Proceedings of the 13th IJCAI (pp. 916–921). Chambéry, France. Freuder, E. C. (1982). A sufficient condition for backtrack-free search. JACM, 29(1), 24–32. Freuder, E. C. (1985). A sufficient condition for backtrack-bounded search. JACM, 32(4), 755–761. Freuder, E. C. (1991). Eliminating interchangeable values in constraint satisfaction problems. In Proceedings of AAAI-91 (pp. 227–233). Anaheim, CA. Gashnig, J. (1979). Performance measurement and analysis of certain search algorithms. Pittsburgh, PA: Carnegie-Mellon University. Glaisher, J. W. L. (1874). On the problem of the eight queens. Philosophical Magazine, 4(48), 457–467. Glover, F. (1989). Tabu Search—Part I. ORSA Journal on Computing, 1(3), 190–206. Gomes, C. P. (2004). Randomized backtrack search. In M. Milano (Ed.), Constraint and Integer Programming: Toward a Unified Methodology (pp. 233–291). Kluwer Academic Publishers. Haralick, R. M., & Elliott, G. L. (1980). Increasing tree search efficiency for constraint satisfaction problems. Artificial Intelligence, 14, 263–313. Hogg, T., Huberman, B. A., & Williams, C. P. (Eds.). (1996). Special volume on frontiers in problem solving: Phase transitions and complexity. Artificial Intelligence, 81(1–2). Burlington, MA: Elsevier Science. Hooker, J. (2000). Logic-based methods for optimization: Combining optimization and constraint satisfaction. New York: Wiley. Hoos, H. H., & Stützle, T. (2004). Stochastic local search. San Francisco: Morgan Kaufmann. Kirkpatrick, S., Gelatt, J. C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598), 671–680. Liu, J., Jing, H., & Tang, Y. Y. (2002). Multi-agent oriented constraint satisfaction. Artificial Intelligence, 136(1), 101–144. Minton, S., et al. (1992). Minimizing conflicts: A heuristic repair method for constraint satisfaction and scheduling problems. Artificial Intelligence, 58, 161–205.
Montanari, U. (1974). Networks of constraints: Fundamental properties and application to picture processing. Information Sciences, 7, 95–132. Prosser, P. (1993). Hybrid algorithms for the constraint satisfaction problem. Computational Intelligence, 9(3), 268–299. Régin, J.-C. (1994). A filtering algorithm for constraints of difference in constraint satisfaction problems. In Proceedings from the National Conference on Artificial Intelligence (AAAI 1994) (pp. 362–437). Seattle, WA. Revesz, P. (2002). Introduction to constraint databases. New York: Springer. Marriott, K., & Stuckey, P. J. (1998). Programming with constraints: An introduction. Cambridge, MA: MIT Press. Tsang, E. (1993). Foundations of constraint satisfaction. London, UK: Academic Press. Yokoo, M. (1998). Distributed constraint satisfaction. New York: Springer.
CONVERGING TECHNOLOGIES Human-computer interaction (HCI) is a multidisciplinary field arising chiefly in the convergence of computer science, electrical engineering, information technology, and cognitive science or psychology. In the future it is likely to be influenced by broader convergences currently in progress, reaching out as far as biotechnology and nanotechnology. Together, these combined fields can take HCI to new levels where it will unobtrusively but profoundly enhance human capabilities to perceive, to think, and to act with maximum effectiveness.
The Basis for Convergence During the twentieth century a number of interdisciplinary fields emerged, bridging the gaps between separate traditionally defined sciences. Notable examples are astrophysics (astronomy plus physics), biochemistry (biology plus chemistry), and cognitive science (psychology plus neurology plus computer science). Many scientists and engineers believe that the twenty-first century will be marked by a broader unification of all of the sciences, permitting a vast array of practical breakthroughs—notably in the convergence of nanotechnology, biotechnology, information technology, and cognitive technology—
based on the unification of nanoscience, biology, information science, and cognitive science. HCI itself stands at the junction between the last two of these four, and it has the potential to play a major role in the emergence of converging technologies. A number of scientific workshops and conferences, organized by scientists and engineers associated with the U.S. National Science Foundation and building upon the United States National Nanotechnology Initiative, have concluded that nanoscience and nanotechnology will be especially important in convergence. Nanoscience and nanotechnology concern scientific research and engineering (respectively) at the nanoscale, the size range of physical structures between about 1 nanometer and 100 nanometers in shortest dimension. A nanometer is 1 billionth of a meter, or 1 millionth of a millimeter, and a millimeter is about the thickness of a dime (the thinnest U.S. coin). Superficially, nanoscience and nanotechnology
seem remote from HCI because the human senses operate at a much larger scale. However, we can already identify a number of both direct and indirect connections, and as work at the nanoscale promotes convergence between other fields it will create new opportunities and challenges for HCI. The largest single atoms, such as those of uranium, are just smaller than 1 nanometer. The structures of complex matter that are fundamental to all sciences originate at the nanoscale. That is the scale at which complex inorganic materials take on the characteristic mechanical, electrical, and chemical properties they exhibit at larger scales. The nanoscale is where the fundamental structures of life arise inside biological cells, including the human DNA (deoxyribonucleic acid) molecule itself. The double helix of DNA has the proportions of a twisted piece of string, about 2.5 nanometers thick but as much as 4 centimeters (40 million nanometers) long
[Photo caption: The BRLESC-II, a solid-state digital computer introduced in 1967. It was designed to be 200 times faster than the ORDVAC computer it replaced. Photo courtesy of the U.S. Army.]
if uncoiled. The synaptic gaps between neurons in the human brain, and the structures that contain the neurotransmitter chemicals essential to their functioning, are on the order of 20 to 50 nanometers. Nanotechnology and nanoscience are chiefly a partnership of physics, chemistry, and materials science (an interdisciplinary field at the intersection of physics, chemistry, and engineering that deals with the properties of materials, including composite materials with complex structures). In the near term nanotechnology offers engineering a host of new materials, including powders with nanoscale granules, thin coatings that transform the properties of surfaces, and composite materials having nanoscale structure that gives them greater strength, durability, and other characteristics that can be precisely designed for many specific uses. In the midterm to long term, nanotechnology is expected also to achieve practical accomplishments with complex nanostructures, including new kinds of electronic components and nanoscale machines. Biotechnology applies discoveries in biology to the invention and production of products that are valuable for human health, nutrition, and economic well-being. The traditional application areas for biotechnology are medicine and agriculture, including the production of chemicals and construction materials having organic origins. Biotechnology has a long history, extending back thousands of years to ancient industries such as fermentation of alcohol, tanning of hides, dyeing of clothing, and baking of bread. The pace of innovation accelerated throughout the nineteenth and twentieth centuries, leading to the latest developments in genomics (a branch of biotechnology concerned with applying the techniques of genetics and molecular biology to the genetic mapping and DNA sequencing of sets of genes or the complete genomes of selected organisms) and a growing understanding of the structures and processes inside the living cell. Information technology is a creation of the second half of the twentieth century, revolutionizing traditional communication technologies through the introduction of electronic computation. It comprises computers, information systems, and communication networks such as Internet and the World
Wide Web, both hardware and software. Many of the early applications have been new ways of accomplishing old tasks, for example, word processors, digital music and television, and more recently digital libraries. The integration of mobile computing with the Internet is expected to unleash a wave of radically different innovations, many of which cannot even be imagined today, connected to ubiquitous availability of information and of knowledge tools. Cognitive science is the study of intelligence, whether human, nonhuman animal, or machine, including perception, memory, decision, and understanding. It is itself a convergence of fields, drawing upon psychology, social psychology, cultural anthropology, linguistics, economics, sociology, neuroscience, artificial intelligence, and machine learning. The fundamental aim is a profound understanding of the nature of the human mind. By the beginning of the twenty-first century a new universe of cognitive technologies clearly was opening up, especially in partnerships between humans and computers. The result could be technologies that overcome breakdowns in human awareness, analysis, planning, decision making, and communication. Each of these four fields is a fertile field of scientific research and technological development, but in combination they can achieve progress much more rapidly and broadly than they can alone. Following are examples of the science and engineering opportunities in each of the six possible pairs. Nanotechnology–Biotechnology Research at the nanoscale can reveal the detailed, dynamic geometry of the tiny structures that carry out metabolism, movement, and reproduction inside the living cell, thereby greatly expanding biological science. Biology provides conceptual models and practical tools for building inorganic nanotechnology structures and machines of much greater complexity than currently possible. Nanotechnology–Information Technology Nanoelectronic integrated circuits will provide the fast, efficient, highly capable hardware to support new systems for collecting, managing, and distributing information wherever and whenever it is
needed. Advances in information technology will be essential for the scientific analysis of nanoscale structures and processes and for the design and manufacture of nanotechnology products. Nanotechnology–Cognitive Technology New research methods based on nanoscale sensor arrays will enable neuroscientists to study the fine details of neural networks in the brain, including the dynamic patterns of interaction that are the basis of human thought. Cognitive science will help nanoscientists and educators develop the most readily intelligible models of nanoscale structures and the innovative curriculum needed for students to understand the world as a complex hierarchy of systems built up from the nanoscale. Biotechnology–Information Technology Principles from evolutionary biology can be applied to the study of human culture, and biologically inspired computational methods such as genetic algorithms (procedures for solving a mathematical problem in a finite number of steps that frequently involve repetition of an operation) can find meaningful patterns in vast collections of information. Bioinformatics, which consists of biologically oriented databases with lexicons for translating from one to another, is essential for managing the huge trove of data from genome (the genetic material of an organism) sequencing, ecological surveys, large-scale medical and agricultural experiments, and systematic comparisons of evolutionary connections among thousands of species. Biotechnology–Cognitive Technology Research techniques and instruments developed in biotechnology are indispensable tools for research on the nature and dynamics of the nervous system, in both humans and nonhuman animals, understood as the products of millions of years of biological evolution. Human beings seem to have great difficulty thinking of themselves as parts of complex ecological systems and as the products of evolution by natural selection from random variation, so advances will be needed to design fresh approaches to scientific education and new visualization tools to help people understand biology and biotechnology correctly. Information Technology–Cognitive Technology Experiments on human and nonhuman animal behavior depend upon computerized devices for data collection and on information systems for data analysis, and progress can be accelerated by sharing information widely among scientists. Discoveries by cognitive scientists about the ways the human mind carries out a variety of judgments provide models for how machines could do the same work, for example, to sift needed information from a vast assembly of undigested data.
HCI Contributions to Convergence Attempting to combine two scientific disciplines would be futile unless they have actually moved into adjacent intellectual territories and proper means can be developed to bridge between them. Disciplines typically develop their own distinctive assumptions, terminologies, and methodologies. Even under the most favorable conditions, transforming tools are needed, such as new concepts that can connect the disparate assumptions of different disciplines, ontologies—category schemes and lexicons of concepts in a particular domain—that translate language across the cultural barriers between disciplines, and research instrumentation or mathematical analysis techniques that can be applied equally well in either discipline. Because many of these transforming tools are likely to be computerized, human-computer interaction research will be essential for scientific and technological convergence. One of the key ways of developing fresh scientific conceptualizations, including models and metaphors that communicate successfully across disciplinary barriers, is computer visualization. For example, three-dimensional graphic simulations can help students and researchers alike understand the structures of complex molecules at the nanoscale, thus bridging between nanoscience and molecular biology, including genomics and the study of the structures inside the living cell. In trying to understand the behavior of protein molecules, virtual reality (VR)
may incorporate sonification (the use of sounds to represent data and information) in which a buzzing sound represents ionization (the dissociation of electrons from atoms and molecules, thus giving them an electric charge), and haptics (relating to the sense of touch) may be used to represent the attraction between atoms by providing a counteracting force when a VR user tries to pull them apart. For data that do not have a natural sensory representation, a combination of psychology and user-centered design, focusing on the needs and habitual thought patterns of scientists, will identify the most successful forms of data visualization, such as information spaces that map across the conceptual territories of adjacent sciences. HCI is relevant not only for analyzing statistical or other data that have already been collected and computerized, but also for operating scientific instruments in real time. Practically every kind of scientific research uses computerized instruments today. Even amateur astronomical telescopes costing under $500 have guidance computers built into them. In the future expensive computerized instruments used in nanoscience, such as atomic force microscopes (tools for imaging individual atoms on a surface, allowing one to see the actual atoms), may provide haptic feedback and three-dimensional graphics to let a user virtually feel and see individual atoms when manipulating them, as if they have been magnified 10 million times. In any branch of science and engineering, HCI-optimized augmented cognition and augmented reality may play a useful role, and after scientists and engineers in different fields become accustomed to the same computer methods for enhancing their abilities, they may find it easier to communicate and thus collaborate with each other. For example, primate cognitive scientists, studying the behavior of baboons, may collaborate with artificial-intelligence researchers, and both can employ augmented reality to compare the behavior of a troop of real animals with a multiagent system designed to simulate them. Internet-based scientific collaboratories can not only provide a research team at one location with a variety of transforming tools, but also let researchers from all around the world become members of the team through telepresence.
Researchers in many diverse sciences already have established a shared data infrastructure, such as international protein structure and genomics databases and the online archives that store thousands of social and behavioral science questionnaire datasets. The development of digital libraries has expanded the range of media and the kinds of content that can be provided to scholars, scientists, and engineers over the Internet. Grid computing, which initially served the supercomputing community by connecting geographically distributed “heavy iron” machines, is maturing into a vast, interconnected environment of shared scientific resources, including data collection instrumentation, information storage facilities, and major storehouses of analytic tools. As more and more research traditions join the grid world, they will come to understand each other better and find progressively more areas of mutual interest. This convergence will be greatly facilitated by advances in human-computer interaction research.
Implications for Computing Because HCI already involves unification of information and cognitive technologies, distinctive effects of convergence will primarily occur in unification with the two other realms: nanotechnology and biotechnology. Nanotechnology is likely to be especially crucial because it offers the promise of continued improvement in the performance of computer components. Already a nanoscale phenomenon called the "giant magnetoresistance" (GMR) effect has been used to increase the data density on mass production computer hard disks, giving them much greater capacity at only slight cost. The two key components of a computer hard disk are a rotatable magnetic disk and a read-and-write head that can move along the radius of the disk to sense the weak magnetism of specific tiny areas on the disk, each of which represents one bit (a unit of computer information equivalent to the result of a choice between two alternatives) of data. Making the active tip of the read-and-write head of precisely engineered materials constructed in thin (nanoscale) layers significantly increases its sensitivity. This sensitivity, in turn, allows the disk to be formatted into a larger number of smaller areas, thereby increasing its capacity.
Since the beginning of the human-computer interaction field, progress in HCI has depended not only on the achievements of its researchers, but also on the general progress in computer hardware. For example, early in the 1970s the Alto computer pioneered the kind of graphical user interface employed by essentially all personal computers at the end of the twentieth century, but its memory chips were too expensive, and its central processing unit was too slow. A decade later the chips had evolved to the point where Apple could just barely market the Macintosh, the first commercially successful computer using such an interface. Today many areas of HCI are only marginally successful, and along with HCI research and development, increased power and speed of computers are essential to perfect such approaches as virtual reality, real-time speech recognition, augmented cognition, and mobile computing. Since the mid-1960s the density of transistors on computer chips has been doubling roughly every eighteen months, and the cost of a transistor has been dropping by half. So long as this trend continues, HCI can count on increasingly capable hardware. At some point, possibly before 2010, manufacturers will no longer be able to achieve progress by cramming more and more components onto a chip of the traditional kind. HCI progress will not stop the next day, of course, because a relatively long pipeline of research and development exists and cannot be fully exploited before several more years pass. Progress in other areas, such as parallel processing and wireless networking, will still be possible. However, HCI would benefit greatly if electronic components continued to become smaller and smaller because this miniaturization means they will continue to get faster, use progressively less power, and possibly also be cheaper. Here is where nanotechnology comes in. Actually, the transistors on computer chips have already shrunk into the nanoscale, and some of them are less than 50 nanometers across. However, small size is only one of the important benefits of nanotechnology. Equally important are the entirely new phenomena, such as GMR, that do not even exist at larger scales. Nanotechnologists have begun exploring alternatives to the conventional microelectronics that we have been using for decades, notably molecular logic gates (components made of individual molecules
that perform logical operations) and carbon nanotube transistors (transistors made of nanoscale tubes composed of carbon). If successful, these radically new approaches require development of an entire complex of fresh technologies and supporting industries; thus, the cost of shifting over to them may be huge. Only a host of new applications could justify the massive investments, by both government and industry, that will be required. Already people in the computer industry talk of “performance overhang,” the possibility that technical capabilities have already outstripped the needs of desirable applications. Thus, a potential great benefit for HCI becomes also a great challenge. If HCI workers can demonstrate that a range of valuable applications is just beyond the reach of the best computers that the old technology can produce, then perhaps people will have sufficient motivation to build the entire new industries that will be required. Otherwise, all of computer science and engineering may stall. During the twentieth century several major technologies essentially reached maturity or ran into social, political, or economic barriers to progress. Aircraft and automobiles have changed little in recent years, and they were certainly no faster in 2000 than in 1960. The introduction of high-definition television has been painfully slow, and applications of haptics and multimodal augmented reality outside the laboratory move at a snail’s pace. Space flight technology has apparently stalled at about the technical level of the 1970s. Nuclear technology has either been halted by technical barriers or blocked by political opposition, depending on how one prefers to analyze the situation. In medicine the rate of introduction of new drugs has slowed, and the great potential of genetic engineering is threatened by increasing popular hostility. In short, technological civilization faces the danger of stasis or decline unless something can rejuvenate progress. Technological convergence, coupled with aggressive research at the intersections of technical fields, may be the answer. Because HCI is a convergent field itself and because it can both benefit from and promote convergence, HCI can play a central role. In addition to sustaining progress as traditionally defined, convergence enables entirely new
applications. For example, nanotechnology provides the prospect of developing sensors that can instantly identify a range of chemicals or microorganisms in the environment, and nano-enabled microscale sensor nets can be spread across the human body, a rain forest, and the wing of an experimental aircraft to monitor their complex systems of behavior.
Paradigm Transformation Convergence is not just a matter of hiring a multidisciplinary team of scientists and engineers and telling them to work together. To do so they need effective tools, including intellectual tools such as comprehensive theories, mathematical techniques for analyzing dynamic systems, methods for visualizing complex phenomena, and well-defined technical words with which to talk about them. Decades ago historian Thomas Kuhn described the history of science as a battle between old ways of thought and new paradigms (frameworks) that may be objectively better but inevitably undergo opposition from the old-guard defenders of the prevailing paradigm. His chief example was the so-called Copernican Revolution in astronomy, when the notion that the Earth is the center of the universe was displaced by a new notion that the sun is the center of the solar system and of a vast, centerless universe far beyond. The problem today is that many paradigms exist across all branches of science and engineering. Some may be equivalent to each other, after their terms are properly translated. Others may be parts of a larger intellectual system that needs to be assembled from them. However, in many areas inferior paradigms that dominate a particular discipline will need to be abandoned in favor of one that originated in another discipline, and this process is likely to be a hard-fought and painful one taking many years. The human intellectual adventure extends back tens of thousands of years. In their research and theoretical work on the origins of religion, Rodney Stark and William Sims Bainbridge observed that human beings seek rewards and try to avoid costs— a commonplace assumption in economics and other branches of social science. To solve the problems they faced every day, ancient humans sought
explanations—statements about how and why rewards may be obtained and costs are incurred. In the language of computer science, such explanations are algorithms. Some algorithms are very specific and apply only under certain narrowly defined circumstances. If one wants meat, one takes a big stick from the forest, goes into the meadow, and clobbers one of the sheep grazing there. If one wants water, one goes to the brook at the bottom of the valley. These are rather specific explanations, assuming that only one meadow, one kind of animal, one brook, and one valley exist. As the human mind evolved, it became capable of working out much more general algorithms that applied to a range of situations. If one wants meat, one takes a club, goes to any meadow, and sees what one can clobber there. If one wants water, the bottoms of deep valleys are good places to look. In the terms of artificial intelligence, the challenge for human intelligence was how to generalize, from a vast complexity of experience, by reasoning from particular cases to develop rules for solving particular broad kinds of problems. Stark and Bainbridge noted how difficult it is for human beings to invent, test, and perfect very general explanations about the nature of the universe and thereby to find empirically good algorithms for solving the problems faced by our species. In other words, science and technology are difficult enterprises that could emerge only after ten thousand years of civilization and that cannot be completed for many decades to come. In the absence of a desired reward, people often will accept algorithms that posit attainment of the reward in the distant future or in some other non-verifiable context. Thus, first simple magic and then complex religious doctrines emerged early in human history, long before humans had accurate explanations for disease and other disasters, let alone effective ways of dealing with them. If the full convergence of all the sciences and technologies actually occurs, as it may during the twenty-first century, one can wonder what will become not only of religion but of all other forms of unscientific human creativity, what are generally called the “humanities.” The U.S. entomologist and sociobiologist Edward O. Wilson has written about the convergence that
is far advanced among the natural sciences, calling it “consilience,” and has wondered whether the humanities and religion will eventually join in to become part of a unified global culture. Here again human-computer interaction may have a crucial role to play because HCI thrives exactly at the boundary between humans and technology. During the first sixty years of their existence, computers evolved from a handful of massive machines devoted to quantitative problems of engineering and a few physical sciences to hundreds of millions of personal tools, found in every school or library, most prosperous people’s homes, and many people’s pockets. Many people listen to music or watch movies on their computers, and thousands of works of literature are available over the Internet. A remarkable number of digital libraries are devoted to the humanities, and the U.S. National Endowment for the Humanities was one of the partner agencies in the Digital Library Initiative led by the U.S. National Science Foundation. The same HCI methods that are used to help scientists visualize complex patterns in nature can become new ways of comprehending schools of art, tools for finding a desired scholarly reference, or even new ways of creating the twenty-second-century equivalents of paintings, sculptures, or symphonies. The same virtual reality systems that will help scientists collaborate across great distances can become a new electronic medium, replacing television, in which participants act out roles in a drama while simultaneously experiencing it as theater. Cyberinfrastructure resources such as geographic information systems, automatic language translation machines, and online recommender systems can be used in the humanities as easily as in the sciences. The conferences and growing body of publications devoted to converging technologies offer a picture of the world a decade or two in the future when information resources of all kinds are available at all times and places, organized in a unified but malleable ontology, and presented through interfaces tailored to the needs and abilities of individual users. Ideally, education from kindergarten through graduate school will be organized around a coherent set of concepts capable of structuring reality in ways that are simultaneously accurate and congenial to
human minds of all ages. Possibly no such comprehensive explanation of reality (an algorithm for controlling nature) is possible. Or perhaps the intellectuals and investors who must build this future world may not be equal to the task. Thus, whether it succeeds or fails, the technological convergence movement presents a huge challenge for the field of human-computer interaction, testing how well we can learn to design machines and information systems that help humans achieve their maximum potential. William Sims Bainbridge See also Augmented Cognition; Collaboratories FURTHER READING Atkins, D. E., Drogemeier, K. K., Feldman, S. I., Garcia-Molina, H., Klein, M. L., Messerschmitt, D. G., Messina, P., Ostriker, J. P., & Wright, M. H. (2003). Revolutionizing science and engineering through cyberinfrastructure. Arlington, VA: National Science Foundation. Kuhn, T. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press. Roco, M. C., & Bainbridge, W. S. (2001). Societal implications of nanoscience and nanotechnology. Dordrecht, Netherlands: Kluwer. Roco, M. C., & Bainbridge, W. S. (2003). Converging technologies for improving human performance. Dordrecht, Netherlands: Kluwer. Roco, M. C., & Montemagno, C. D. (Eds.). (2004). The coevolution of human potential and converging technologies. Annals of the New York Academy of Sciences, 1013. New York: New York Academy of Sciences. Stark, R., & Bainbridge, W. S. (1987). A theory of religion. New York: Toronto/Lang. Wilson, E. O. (1998). Consilience: The unity of knowledge. New York: Knopf.
CYBERCOMMUNITIES For many people, the primary reason for interacting with computers is the ability, through computers, to communicate with other people. People form cybercommunities by interacting with one another through computers. These cybercommunities are conceived of as existing in cyberspace, a conceptual realm created through the networking and interconnection that computers make possible.
Cybercommunity Definition and History The prefix cyber first appeared in the word cybernetics, popularized by Norbert Wiener (1894–1964) in the 1940s to refer to the science of "control and communication in the animal and the machine" (the subtitle of Wiener's 1948 book Cybernetics). Since that time, cyber has prefixed many other words to create new terms for various interconnections between computers and humans. One of the terms, cyberspace, has become a popular metaphor for the perceived location of online interactions. Coined by William Gibson in his 1984 novel Neuromancer, cyberspace originally referred to a graphical representation of computerized data to which people connected through direct electrical links to the brain. Since then, the term has come to mean any virtual forum in which people communicate through computers, whether the form of communication involves text, graphics, audio, or combinations of those. Cybercommunities predate widespread use of the Internet, with the first forming in localized systems called bulletin board services (BBSs). BBSs usually ran on a single computer, and participants connected through modems and a local phone line. This meant that most participants lived within a limited geographical area. Thus many BBSs were able to hold occasional face-to-face get-togethers, enhancing community relationships. Communication on BBSs was usually asynchronous; that is, people logged on at different times and posted messages in various topical forums for others to read and respond to later. (E-mail and similar bulletin boards now available on the World Wide Web are also asynchronous forms of online communication, while the various types of online chat and instant messaging are considered to be synchronous forms of communication, since participants are present on a forum simultaneously and can spontaneously respond to each other's communications.) From the earliest days of the Internet and its military-funded precursor, the Arpanet (established in 1969), online participants began forming cybercommunities. E-mail immediately emerged as the largest single use of the Internet, and remained so until 2002, when it was matched by use of the World Wide Web (an information-exchange service
on the Internet, first made available in 1991 and given a graphical interface in 1993). People also began using the Internet in the early 1980s to run bulletin board services such as Usenet, which, unlike the earlier local BBSs, could now be distributed to a much larger group of people and accessed by people in widely dispersed geographical locations. Usenet expanded to include many different cybercommunities, most based around a common interest such as Linux programming or soap operas. Existing Cybercommunities Cybercommunities have risen in number with the increasing availability and popularity of the Internet and the World Wide Web. Even within the short overall history of cybercommunities, some cybercommunities have been short-lived. However, there are several, begun in the early days of computer networking, that still exist online and therefore present a useful view of factors involved in the formation and maintenance of online communities. One of the oldest still-extant cybercommunities is The WELL, which began in 1985 as a local BBS in the San Francisco Bay Area in California. Laurence Brilliant, a physician with an interest in computer conferencing, and Stewart Brand, editor of the Whole Earth Review and related publications, founded The WELL with the explicit goal of forming a virtual community. One savvy method the founders used to attract participants was to give free accounts to local journalists, many of whom later wrote about their participation, generating further interest and publicity. In the early years, when most participants lived in the same geographical area, The WELL held monthly face-to-face meetings. Currently owned by Salon.com, The WELL is now accessible through the World Wide Web. Another venerable cybercommunity, LambdaMOO, also began as an experiment in online community. In contrast to The WELL, LambdaMOO provided a forum for synchronous communication and allowed people to create a virtual environment within which to interact. LambdaMOO is an example of a type of program called a MUD (for multiuser dimension or multiuser dungeon). MUDs are similar to online chatrooms, but also allow
participants to create their own virtual spaces and objects as additions to the program, with which they and others can then interact. These objects enhance the feel of being in a virtual reality. Created by the computer scientist Pavel Curtis as a research project for Xerox, LambdaMOO opened in 1990. A 1994 article about it in Wired magazine led to a significant increase in interest in it and to dramatic population growth. Pavel Curtis has moved on to other projects, and LambdaMOO is no longer associated with Xerox. But although it has undergone considerable social changes over the years, it still attracts hundreds of participants. MUDs began as interactive text-based roleplaying games inspired by similar face-to-face roleplaying games such as Dungeons and Dragons (hence dungeon in one expansion of the acronym). More recently, similar online games have become available with the enhancement of a graphical interface. People have used MMORPGs (massively multiplayer online role-playing games) such as Everquest as forums for socializing as well as gaming, and cybercommunities are forming amongst online gamers. Web logs, or blogs, are a relatively new and increasingly popular platform for cybercommunities. Blogs are online journals in which one can post one’s thoughts, commentary, or reflections, sometimes also allowing others to post comments or reactions to these entries. Many blogs provide a forum for amateur (or, in some cases, professional) journalism, however others more closely resemble online personal diaries. There are many different programs available for blogging, and some give specific attention to community formation. LiveJournal, for instance, enables each participant to easily gather other journals onto a single page, making it easy to keep up with friends’ journals. Links between participants are also displayable, enabling people to see who their friends’ friends are and to easily form and expand social networks. Community Networks Some cybercommunities grow out of existing offline communities. In particular, some local municipalities have sought to increase citizen participation in
Welcome to LambdaMOO
Below is an introduction to the cybercommunity LambdaMOO, as presented on www.lamdamoo.info:

LambdaMOO is sort of like a chat room. It’s a text-only based virtual community of thousands of people from all over the world. It’s comprised of literally thousands of “rooms” that have been created by the users of LambdaMOO, and you endlessly navigate (walk around) north, south, etc. from room to room, investigating, and meeting people that you can interact with to your hearts content. You get there not thru an HTML browser like Netscape or IE but through another program called TELNET (search). Your computer most likely has Telnet but enhanced versions can be found. (Telnet address: telnet://lambda.moo.mud.org:8888/). You can try the Lambda button at the top of this page to see if all goes well. If so, a window will open and you’ll be able to log in.

When you get the hang of it, you can create a character who has a name and a physical description, and who can be “seen” by all who meet you. As you walk around from room to room you are given a description of the room and a list of contents (including other people). You can “look” at each person to get a more detailed description and when you do, they see a message stating that you just checked them out. You can talk to them and they see your words in quotes, like reading spoken words in a book. You can also emote (communicate with body language) using gestures such as a smile or a nod of the head. In time you’ll learn to create your own rooms or other objects, which are limited only by your imagination. There are many people to meet up with and build “cyber-friendships” with.

When you first get there you’ll be asked to log in. First timers can sign in as a guest. After that you can apply for a permanent character name and password. Give it a try and if you see me around say hi.

Felis~Rex
Status: Programmer/108/33%Fogy/PC
Parent: Psychotic Class of Players
Seniority: 1320/4093, (33%)
MOO-age: 108 months. (1995 January 28, Saturday)
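The sidebar above notes that LambdaMOO is reached not through a Web browser but through telnet, a program that simply opens a plain text connection to a remote server and relays whatever each side types. As a rough illustration of how little client-side machinery such a text-only cybercommunity requires, the following sketch opens the same kind of raw TCP connection from a short Python script and prints the server's welcome text. It is a minimal sketch, not an official client: the host and port are copied from the telnet address quoted in the sidebar, whether that address is still reachable is an assumption, and the guest login command shown may differ from what the server actually expects.

```python
# Minimal sketch of what a telnet-style MOO client does: open a plain TCP
# connection, read the text the server sends, and send text commands back.
# Host and port come from the telnet address quoted in the sidebar; whether
# that address is still live, and the exact guest login syntax, are assumptions.
import socket

HOST, PORT = "lambda.moo.mud.org", 8888

with socket.create_connection((HOST, PORT), timeout=10) as conn:
    # Read the welcome banner the server prints to every new connection.
    banner = conn.recv(4096).decode("utf-8", errors="replace")
    print(banner)

    # Many MOOs accept a guest login of roughly this form (an assumption here).
    conn.sendall(b"connect guest\r\n")
    print(conn.recv(4096).decode("utf-8", errors="replace"))
```

Everything a participant sees in such a community arrives as lines of text over a connection of this kind, which is one reason early MUDs and MOOs demanded so little of the network and of the client machine.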
the local government and community by forming community computer networks that allow people access to government officials and provide forums for community discussion. The first of these, the Public Electronic Network (PEN), started in Santa Monica, California, in 1989. It was particularly successful in providing access to low-income citizens who might not otherwise have had access to computers or computer networks. In more recent years, some offline communities have gone beyond providing an online forum specifically related to the community and have also sought to promote computer use and connectivity in general. For instance, the town of Blacksburg, Virginia, with the help of Virginia Polytechnic Institute and State University (known as Virginia Tech and located in town) and local businesses, is attempting to provide the infrastructure necessary to bring Internet connectivity to every household in town and in the surrounding rural area. This project, called the Blacksburg Electronic Village (BEV) has had several goals, including expanding the local economy through the promotion of high-tech industry, increasing citizen access to online resources, and promoting a stronger sense of community. Recent evaluations by project leaders indicate that BEV has been more successful in the first two areas than in the third. Access Issues As the BEV project recognized, in order to participate in cybercommunities, people need access to computers and to computer networks, especially to the Internet and the World Wide Web. Although such access has been expanding rapidly, people in poorer nations and disadvantaged populations in more affluent countries still have limited access to the Internet, if they have it at all. This issue has been particularly salient for community networks, which are often created with the specific goal of making it possible for disadvantaged groups to access and influence their local governmental structures. Thus many community networks, in addition to setting up websites, have provided publicly accessible terminals for the use of those who do not have access to computers at home or at work.
As more and more online sites use multimedia and bandwidth-intensive enhancements (that is, enhancements that can only be successfully transmitted across a wide range—or band—of electromagnetic frequencies), speed of access has also become a crucial issue. People with older equipment—slower modems and computer processors—are disadvantaged in their ability to access online materials, especially at multimedia sites. Some governments, notably in South Korea and Japan, have sought to address that problem by subsidizing the development of broadband networks, enabling widespread relatively inexpensive access in those countries to high-speed Internet connections. In addition to access to equipment and networks, people need the skills that enable them to use that access. Research has also shown that people are unlikely to take advantage of the availability of computers and the Internet if they do not consider computer-related activities useful and do not have social support for such activities from people they know, especially their peers. This is particularly apparent in wealthier nations such as the United States, where the usefulness of and accessibility to online resources is taken for granted by more affluent members of society but where such online resources are less likely to be perceived as desirable by members of less affluent communities. To address that problem, several nonprofit groups in the United States have set up community computing centers in poorer neighborhoods, where they provide both training in necessary computer skills and a communitybased context for valuing such skills. Another approach to broadening community access to the Internet has been to integrate Internet connections into the construction of new buildings or entire neighborhoods. However, these types of developments also benefit only those who can afford to buy into them. Interfaces The direct-brain interfaces envisioned by Gibson, if possible at all, are likely many years in the future (although there have been some promising early experiments in recent years, including one in which a blind person was given partial sight through a video feed wired to the optical nerve). Most people cur-
rently access and participate in cybercommunity through personal computers. Usually, these computers are connected to the Internet by a modem or other wired connection to an Internet service provider. However, wireless services are increasing, and in some countries, most notably Japan, cell phones are commonly used to access the Internet and to communicate textually with others. In other countries, including the United States, people are also beginning to use cell phones and personal digital assistants (PDAs) for these purposes. Most communication in cybercommunities occurs through text, although some forums use graphics or voice communication, often supplemented by text. Some of the oldest existing cybercommunities are still text-only and therefore require a high level of literacy as well as comfort with computers. Early text-based forums were not always particularly easy to use, either. The WELL’s original interface was notoriously difficult to work with. This meant that only those with an understanding of computers and a strong interest in the possibilities of cybercommunity had the motivation and ability to participate. Currently, The WELL has a much more accessible Web interface and a concomitantly more diverse population of users. As available Internet bandwidth and computer processing speeds have increased, cybercommunities are able to use graphical representations of people and objects within the cyberspace. One of the earliest examples, from 1985, was Habitat, a role-playing game and socializing space that emulated an offline community. Habitat featured a local economy (based on points rather than real money) and such social structures as a church and sheriff ’s office. Habitat used two-dimensional cartoon-like drawings to represent people and objects within the forum. Many current graphical worlds also use flat cartoon-type representations. Habitat originated the use of the term avatar to refer to the representation of people in such graphical worlds, and that term has persisted in most such systems. The technical difficulties inherent in rendering three-dimensional spaces through which characters can move and in which people can manipulate virtual objects, along with the high level of computer processing power required to make possible three-
dimensional virtual spaces, have slowed the development of cybercommunities using three-dimensional spaces and avatars. One such community, Active Worlds (introduced in 1995), provides a three-dimensional view of the environment similar to those first used in first-person shooter computer games (games in which you see on the screen what your character sees, rather than watching your character move about) such as Doom and Quake. Technical considerations, including the simple problem of the amount of “real estate” available on a computer screen, meant that participants in the early years of Active Worlds could see only the twelve closest avatars. This contrasts with text-only interactive forums such as MUDs and chat, in which thirty to fifty participants can be simultaneously involved in overlapping textual conversations. Graphical interfaces provide both limitations and enhancements to online communications.
Identity in Cybercommunities
Aside from the more technical aspects of interface design, cybercommunities have also had to grapple with the question of self-representation. How do participants appear to one another? What can they know about one another at the outset, and what can they find out? How accountable are cybercommunity members for their words and behavior within the virtual space? In purely text-based systems such as chat forums or MUDs, participants are generally expected to provide some sort of description or personal information, although on some systems it is understood that this information may be fanciful. On LambdaMOO, for instance, many participants describe themselves as wizards, animals, or creatures of light. However, each participant is required to choose a gender for their character, partly in order to provide pronoun choice for text generated by the MUD program, but also indicating the assumed importance of this aspect of identity. In a divergence from real life, LambdaMOO provides ten choices for gender identification. Despite this, most participants choose either male or female. LambdaMOO participants choose what other personal information they wish to reveal. On other MUDs, especially those intended as professional spaces or as forums for discussions
relating to “real life” (offline life), participants may be required to provide e-mail addresses or real names. In graphical forums, participants are represented both by the textual information they provide about themselves and by their avatar. Design choices involved in avatar creation in different virtual spaces often reveal important underlying social assumptions, as well as technical limitations. In the early years on Active Worlds, for instance, participants were required to choose from a limited number of existing predesigned avatars. In part this stemmed from the difficulties of rendering even nominally human-seeming avatars in the three-dimensional space. However, the particular avatars available also revealed biases and assumptions of the designers. In contrast to MUDs such as LambdaMOO, all avatars were human. At one point, participants exploited a programming loophole to use other objects, such as trees and walls, as personal representations, but this loophole was quickly repaired by the designers, who felt strongly that human representations promoted better social interaction. Active Worlds’ avatars also displayed a very limited range of human variation. Most were white, and the few non-white avatars available tended to display stereotypical aspects. For instance, the single Asian avatar, a male, used kung-fu moves, the female avatars were all identifiable by their short skirts, and the single black male avatar sported dreadlocks. Since then, programming improvements and feedback from users has enabled Active Worlds to improve their graphics (the avatars now have distinct facial features) and expand their representational offerings. In two-dimensional graphical environments such as Worlds Away (introduced in 1995), variation tended to be greater from the beginning, and participants were given the ability to construct avatars from components. They could even change avatar appearance at will by (for example) buying new heads from the “head shop.” In some systems, participants can also import their own graphics to further customize their online self-representation. Cybercommunities with greater ties to offline communities also have to deal with interface and representations issues. In order to provide community access to as wide a range of townspeople as possible, networks such as PEN need an easy-to-use inter-
face that can recover well from erroneous input. Names and accountability are another crucial issue for community forums. PEN found that anonymity tended to facilitate and perhaps even encourage “flaming” (caustic criticism or verbal abuse) and other antisocial behavior, disrupting and in some cases destroying the usefulness of the forums for others. Conflict Management and Issues of Trust Cybercommunities, like other types of communities, must find ways to resolve interpersonal conflicts and handle group governance. In the early years of the Internet, users of the Internet were primarily white, male, young, and highly educated; most were connected to academic, government, or military institutions, or to computing-related businesses. However, in the mid-1990s the Internet experienced a great increase in participation, especially from groups who had previously been on private systems not connected to the Internet, notably America Online (AOL). This sudden change in population and increase in diversity of participants created tensions in some existing cybercommunities. In one now-famous Usenet episode in 1993, participants in a Usenet newsgroup called alt.tasteless, a forum for tasteless humor frequented primarily by young men, decided to stage an “invasion” of another newsgroup, rec.pets.cats, whose participants, atypically for Usenet newsgroups at the time, were largely women, older than Usenet participants in general, and in many cases relatively new to the Internet. The alt.tasteless participants flooded rec.pets.cats with gross stories of cat mutilation and abuse, disrupting the usual discussions of cat care and useful information about cats. Some of the more computer-savvy participants on rec.pets.cats attempted to deal with the disruption through technical fixes such as kill files (which enable a participant to automatically eliminate from their reading queue messages posted by particular people), but this was difficult for participants with less understanding of the somewhat arcane Usenet system commands. The invaders, meanwhile, found ways around those fixes. The conflict eventually spread to people’s offline lives, with some rec.pets.cats participants receiving physical threats, and at least one alt.tasteless participant having their Internet access terminated for abusive be-
havior. Eventually, the invaders tired of their sport and rec.pets.cats returned to normal. Some newsgroups have sought to avoid similar problems by establishing a moderator, a single person who must approve all contributions before they are posted to the group. In high traffic groups, however, the task of moderation can be prohibitively time-consuming. LambdaMOO experienced a dramatic population surge in the late 1990s, causing not only social tensions, but also technical problems as the LambdaMOO computer program attempted to process the increasing numbers of commands. LambdaMOO community members had to come up with social agreements for slowing growth and for limiting commands that were particularly taxing on the server. For instance, they instituted a limit on the numbers of new participants that could be added each day, started deleting (“reaping”) the characters and other information of participants who had been inactive for several months, and set limits on the number of new virtual objects and spaces that participants could build. This created some tension as community members attempted to find fair ways to determine who would be allowed to build and how much. The solution, achieved through vote by participants, was to create a review board elected by the community that would reject or approve proposed projects. Designers of cybercommunity forums have also had to consider what types of capabilities to give participants and what the social effects of those capabilities might be. For instance, Active Worlds originally did not allow participants to have private conversations that were not visible to all other participants in the same virtual space. The designers felt that such conversations were antisocial and might lead to conflicts. However, participants continued to request a command that enabled such “whispered” conversations, and also implemented other programs, such as instant messaging, in order to work around the forum’s limitations. The designers eventually acquiesced and added a whisper command. Similarly, some MUDs have a command known as mutter. Rather than letting you talk only to one other person, as is the case with whisper, mutter lets you talk to everyone else in the virtual room except a designated person; in other words, it enables you to talk behind a person’s back while that person is
present—something not really possible offline. As a positive contribution, this command can allow community members to discuss approaches to dealing with a disruptive participant. However, the command can also have negative consequences. The use of avatars in graphical forums presents another set of potential conflicts. In the twodimensional space of Worlds Away, participants found that they could cause another participant to completely disappear from view by placing their own avatar directly on top of the other’s. With no available technical fix for this problem, users had to counter with difficult-to-enforce social sanctions against offenders. Trust The potential for conflicts in cybercommunities is probably no greater than that in offline communities. On the one hand, physical violence is not possible online (although in theory escalating online conflicts can lead to offline violence). On the other hand, the difficulty in completely barring offenders from a site (since people can easily reappear using a different e-mail address) and the inability to otherwise physically enforce community standards has increased cybercommunities’ vulnerability to disruption. In some cases, the greater potential for anonymity or at least pseudonymity online has also facilitated antisocial behavior. Many cybercommunities have therefore tried to find ways to enhance trust between community members. Some have sought to increase accountability by making participants’ e-mail addresses or real life names available to other participants. Others have set rules for behavior with the ultimate sanction being the barring of an individual from the forum (sometimes technologically tricky to implement). LambdaMOO, for instance, posts a set of rules for polite behavior. Because it is also one of the most famous (and most documented) cybercommunities, LambdaMOO’s opening screen also displays rules of conduct for journalists and academic researchers visiting the site. LiveJournal requires potential participants to acquire a code from an existing user in order to become a member, which it is hoped ensures that at least one person currently a member of the community
vouches for the new member. LiveJournal is considering abandoning this practice in favor of a complex system of interpersonal recommendations that give each participant a trust rating, theoretically an indication of their trustworthiness and status within the community. Although perhaps not as complex, similar systems are in use at other online forums. Slashdot, a bulletin board service focusing primarily on computer-related topics, allows participants to rank postings and then to filter what they read by aggregated rank. A participant can, for instance, decide to read only messages that achieve the highest average rating, as averaged from the responses of other participants. The online auction site eBay has a feedback system through which buyers and sellers rate one another’s performance after each transaction, resulting in a numerical score for each registered member. Each instance of positive feedback bestows a point, and each instance of negative feedback deletes one. A recent change in the way auctions are displayed now lists a percentage of positive feedback for each seller. Users can also read the brief feedback messages left for other users. These features are intended to allow users to evaluate a person’s trustworthiness prior to engaging in transactions with that person. The degree to which these types of trustpromotion systems work to foster and enhance community is unclear. Participants in various cybercommunities continue to consider issues of trust and to work on technological enhancements to the virtual environment that will help suppress antisocial behavior and promote greater community solidarity. Future Directions As cybercommunities first developed, mainstream media commentary discussed a variety of hyperbolic fears and hopes. People feared that cybercommunities would replace and supplant other forms of community and that cybercommunities were less civilized, with greater potential for rude and antisocial behavior. On the other hand, people also hoped that cybercommunities might provide forms of interconnectedness that had otherwise been lost in modern life. Some people also suggested that cybercommunities could provide a forum in which pre-
vious prejudices might be left behind, enabling a utopian meeting of minds and ideas. So far, it appears that cybercommunities tend to augment rather than supplant people’s other social connections. They appear to contain many of the same positive and negative social aspects present in offline communities. Further, many cybercommunities emerge from existing offline groups, also include an offline component (including face-toface contact between at least some participants), or utilize other technologies such as the telephone to enhance connections. Whatever form cybercommunities take in the future, their presence and popularity from the earliest days of computer networks makes it clear that such interconnections will continue to be a significant part of human-computer interaction. Lori Kendall See also Avatars; Digital Divide; MUDs FURTHER READING Baym, N. K. (2000). Tune in, log on: Soaps, fandom, and online community. Thousand Oaks, CA: Sage. Belson, K., & Richtel, M. (2003, May 5). America’s broadband dream is alive in Korea. The New York Times, p. C1. Benedikt, M. (Ed.). (1992). Cyberspace: First steps. Cambridge, MA: MIT Press. Blackburg Electronic Village. (n.d.) About BEV. Retrieved August 12, 2003, from http://www.bev.net/about/index.php Cherny, L. (1999). Conversation and community: Chat in a virtual world. Stanford, CA: CSLI Publications. Damer, B. (1998). Avatars! Berkeley, CA: Peachpit Press. Dibbell, J. (1998). My tiny life: Crime and passion in a virtual world. New York: Henry Holt and Company. Gibson, W. (1984). Neuromancer. New York: Ace Books. Hafner, K. (2001). The Well: A Story of love, death & real life in the seminal online community. Berkeley, CA: Carroll & Graf. Hampton, K. (2001). Living the wired life in the wired suburb: Netville, glocalization and civil society. Unpublished doctoral dissertation, University of Toronto, Ontario, Canada. Herring, S. C., with D. Johnson & T. DiBenedetto. (1995). “This discussion is going too far!” Male resistance to female participation on the Internet. In M. Bucholtz & K. Hall (Eds.), Gender articulated: Language and the socially constructed self (pp. 67–96). New York: Routledge. Jones, S. (Ed.). (1995). Cybersociety: Computer-mediated communication and community. Thousand Oaks, CA: Sage. Jones, S. (Ed.). (1997). Virtual culture: Identity and communication in cybersociety. London: Sage.
Kavanaugh, A., & Cohill, A. (1999). BEV research studies, 1995– 1998. Retrieved August 12, 2003, from http://www.bev.net/about/ research/digital_library/docs/BEVrsrch.pdf Kendall, L. (2002). Hanging out in the virtual pub. Berkeley, CA: University of California Press. Kiesler, S. (1997). Culture of the Internet. Mahwah, NJ: Lawrence Erlbaum Associates. McDonough, J. (1999). Designer selves: Construction of technologicallymediated identity within graphical, multi-user virtual environments. Journal of the American Society for Information Science, 50(10), 855–869. McDonough, J. (2000). Under construction. Unpublished doctoral dissertation, University of California at Berkeley. Morningstar, C., & Farmer, F. R. (1991). The lessons of Lucasfilm’s Habitat. In M. Benedikt (Ed.), Cyberspace: First steps (pp. 273–302). Cambridge, MA: The MIT Press. Porter, D. (1997). Internet culture. New York: Routledge. Renninger, K. A., & Shumar, W. (Eds.). (2002). Building virtual communities. Cambridge, UK: Cambridge University Press. Rheingold, H. (1993). The virtual community: Homesteading on the electronic frontier. Reading, MA: Addison-Wesley. Smith, M., & Kollock, P. (Eds.). (1999). Communities and cyberspace. New York: Routledge. Taylor, T. L. (2002). Living digitally: Embodiment in virtual worlds. In R. Schroeder (Ed.), The social life of avatars: Presence and interaction in shared virtual environments. London: Springer Verlag. Turkle, S. (1995). Life on the screen: Identity in the age of the Internet. New York: Simon & Schuster. Wellman, B. (2001). The persistence and transformation of community: From neighbourhood groups to social networks. Report to the Law Commission of Canada. Retrieved August 12, 2003, from http:// www.chass.utoronto.ca/~wellman/publications/lawcomm/ lawcomm7.htm Wellman, B., & Haythornthwaite, C. (Eds.). (2002). The Internet in everyday life. Oxford, UK: Blackwell. Wellman, B., Boase, J., & Chen, W. (2002). The networked nature of community online and offline. IT & Society, 1(1), 151–165. Weiner, N. (1948). Cybernetics, or control and communication in the animal and the machine. Cambridge, MA: MIT Press. WELL, The. (2002). About the WELL. Retrieved August, 2003, from http://www.well.com/aboutwell.html
CYBERSEX The term cybersex is a catch-all word used to describe various sexual behaviors and activities performed while on the Internet. The term does not indicate that a particular behavior is good or bad, only that the sexual behavior occurred in the context of the Internet. Examples of behaviors or activities that may be considered cybersex include sexual conversations in Internet chatrooms, retrieving sexual media (for example, photographs, stories, or videos) via the
Internet, visiting sex-related websites, masturbating to sexual media from the Internet, engaging in sexualized videoconferencing activities, creating sexual materials for use/distribution on the Internet, and using the Internet to obtain/enhance offline sexual behaviors. A broader term used to describe Internet sexual behavior is “online sexual activity” (OSA), which includes using the Internet for any sexual purpose, including recreation, entertainment, exploration, or education. Examples of OSA are using online services to meet individuals for sexual /romantic purposes, seeking sexual information on the Internet (for instance, about contraception and STDs), and purchasing sexual toys/paraphernalia online. What distinguishes cybersex from OSA is that cybersex involves online behaviors that result in sexual arousal or gratification, while other online sexual activities may lead to offline sexual arousal and gratification. Sexual arousal from cybersex is more immediate and is due solely to the online behavior.
Venues Many people assume that the World Wide Web is the main venue for cybersex. In fact, the Web represents only a small portion of the places where cybersex activities can occur. Other areas of the Internet where cybersex may take place include the following: Newsgroups— This area serves as a bulletin board where individuals can post text or multimedia messages, such as sexual text, pictures, sounds, and videos; ■ E-mail—E-mail can be used for direct communication with other individuals or groups of individuals. In the case of cybersex, the message may be a sexual conversation, story, picture, sound, or video; ■ Chatrooms—Both sexualized conversation and multimedia can be exchanged in chatrooms. Casual users are familiar with Web-based chatting such as Yahoo Chat or America Online (AOL) Chat. Most Web-based chat areas have sections dedicated to sexual chats. However, the largest chat-based system is the Internet Relay Chat (IRC), an area largely unfamiliar to most casual users. In addition to text-based chatting, ■
IRC contains a number of chatrooms specifically dedicated to the exchange of pornography through “file servers”; ■ Videoconferencing/Voice Chatting—The use of these areas is rapidly increasing. As technology improves and connection speeds increase, the use of the Internet for “live” cybersex sessions will become commonplace. Videoconferencing combined with voice chat constitutes a high-tech version of a peep show mixed with an obscene phone call; and ■ Peer-to-Peer File Sharing—Software packages such as Napster and Kazaa have made file sharing a popular hobby. Casual users of this software know its use for exchanging music files, but any file can be shared on the network, including sexual images, sounds, and videos.
Statistics Although the term cybersex often has negative connotations, research in this area suggests that nearly 80 percent of individuals who engage in Internet sex report no significant problems in their lives associated with their online sexual activities. Although this may be an underestimate since the research relied on the self-reports of respondents, it is safe to assume that the majority of individuals who engage in cybersex behavior report this activity to be enjoyable and pleasurable, with few negative consequences. However, there are individuals who engage in cybersex who do report significant negative consequences as a result of their online sexual behavior. These individuals often report that their occupational, social, or educational life areas have been negatively impacted or are in jeopardy as a result of their sexual use of the Internet. Often these individuals report a sense of being out of control or compulsive in their sexual use of the Internet and often compare it to addictions like gambling, eating, shopping, or working. Several large-scale studies estimate the percentage of individuals who are negatively impacted by cybersex behaviors. While exact numbers are impossible given the size of the Internet, estimates are
that from 11 to 17 percent of individuals who engaged in cybersex report some consequences in their life and score moderately high on measures of general sexual compulsivity. In addition, approximately 6 percent report feeling out of control with their Internet sexual behavior and scored high on measures of sexual compulsivity.
Healthy Versus Problematic Cybersex One of the difficulties in defining cybersex as either healthy or problematic is the fact that there are few agreed-upon definitions about what constitutes sexually healthy behavior. Society has clearly delineated some behaviors as unhealthy, for example, sex with children or other non-consenting partners. However, people disagree about whether masturbation, multiple affairs, bondage, and fetishes are healthy or unhealthy. In the world of cybersex, these same gray areas exist between healthy and unhealthy and are often even more difficult to define since the behavior does not include actual sexual contact. It is also important not to assume that frequency is the key factor in determining whether an individual is engaged in unhealthy cybersex. Some individuals engage in cybersex at a high frequency and have few problems, while others who engage in it only a few hours a week have significant negative consequences. Physician and researcher Jennifer Schneider proposed three criteria to help determine if someone’s behavior has become compulsive—that is, whether the person has crossed the line from a “recreational” to a “problematic” user of cybersex. The three criteria are (1) loss of freedom to choose whether to stop the behavior; (2) negative consequences as a result of the behavior; and (3) obsessive thinking about engaging in the behavior. The Internet Sex Screening Test (ISS) described by counseling professor David Delmonico and professor of school psychology Jeffrey Miller can be used to conduct initial screening of whether an individual has a problem with cybersex.
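Schneider's three criteria amount to a simple decision rule, and the passage stresses that time spent online is deliberately not part of it. The sketch below restates that rule in code purely to make the logic concrete; it is not the Internet Sex Screening Test or any validated clinical instrument, the field names are invented for this example, and treating the three criteria as jointly required is an assumption of the sketch, since the source lists the criteria without specifying a cutoff.

```python
# Illustrative restatement of Schneider's three criteria for compulsive use.
# Not a clinical instrument: the data class, field names, and the choice to
# require all three criteria are assumptions made for this sketch. Note that
# hours_per_week is recorded but never consulted, mirroring the point that
# frequency alone does not determine whether cybersex use is problematic.
from dataclasses import dataclass

@dataclass
class UsageReport:
    lost_ability_to_stop: bool      # criterion 1: loss of freedom to choose to stop
    negative_consequences: bool     # criterion 2: harm resulting from the behavior
    obsessive_thinking: bool        # criterion 3: preoccupation with the behavior
    hours_per_week: float           # carried along, but irrelevant to the rule below

def looks_compulsive(report: UsageReport) -> bool:
    """Flag a report only when all three of Schneider's criteria are met."""
    return (report.lost_ability_to_stop
            and report.negative_consequences
            and report.obsessive_thinking)

# Heavy but untroubled use is not flagged; lighter use meeting the criteria is.
print(looks_compulsive(UsageReport(False, False, False, hours_per_week=20)))  # False
print(looks_compulsive(UsageReport(True, True, True, hours_per_week=4)))      # True
```

The point of the sketch is only that the determination hinges on these qualitative criteria rather than on any threshold of hours spent online.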
The Appeal of the Internet With an estimated 94 million users accessing it regularly, it is difficult to dispute the Internet’s widespread appeal. In 2001 Delmonico, Moriarity, and marriage and family therapist Elizabeth Griffin, proposed a model called “the Cyberhex” for understanding why the Internet is so attractive to its users. Their model lists the following six characteristics: Integral: The Internet is nearly impossible to avoid. Even if a cybersex user decided to never use the Internet again, the integral nature of the Internet would make that boundary nearly impossible, since many need the Internet for work, or to access bank information, and so on. In addition, public availability, the use of e-mail, and other activities like shopping and research make the Internet a way of life that is integrated into our daily routines. Imposing: The Internet provides an endless supply of sexual material 7 days a week, 365 days a year. The amount of information and the imposing nature of marketing sexual information on the Internet contributes to the seductiveness of the world of cybersex. Inexpensive: For a relatively small fee, twenty to forty dollars per month, a user can access an intoxicating amount of sexual material on the Internet. In the offline world such excursions can be cost-prohibitive to many. Isolating: Cybersex is an isolating activity. Even though interpersonal contact may be made during the course of cybersex, these relationships do not require the same level of social skills or interactions that offline behaviors require. The Internet becomes a world in itself, where it is easy to lose track of time, consequences, and real-life relationships. The isolation of cybersex often provides an escape from the real world, and while everyone takes short escapes, cybersex often becomes the drug of choice to anesthetize any negative feelings associated with real-life relationships. Interactive: While isolating in nature, the Internet also hooks individuals into pseudorelationships. These pseudorelationships often approximate reality without running the risks of real relationships—like emotional and physical vulnerability and intimacy. This close approximation to reality can be fuel for the fantasy life of those who experience problems with their cybersex behaviors.
Cybersex Addiction
T
he Center for Online and Internet Addiction (www .netaddiction.com) offers the following test to help diagnose cybersex addiction:
1. Do you routinely spend significant amounts of time in chat rooms and private messaging with the sole purpose of finding cybersex? 2. Do you feel preoccupied with using the Internet to find on-line sexual partners? 3. Do you frequently use anonymous communication to engage in sexual fantasies not typically carried out in real-life? 4. Do you anticipate your next on-line session with the expectation that you will find sexual arousal or gratification? 5. Do you find that you frequently move from cybersex to phone sex (or even real-life meetings)? 6. Do you hide your on-line interactions from your significant other? 7. Do you feel guilt or shame from your on-line use? 8. Did you accidentally become aroused by cybersex at first, and now find that you actively seek it out when you log on-line? 9. Do you masturbate while on-line while engaged in erotic chat? 10. Do you provide less investment with your real-life sexual partner only to prefer cybersex as a primary form of sexual gratification? Source: Are you addicted to cybersex. Center for Online and Internet Addiction. Retrieved March 23, 2004, from http://www.netaddiction.com/ resources/cybersexual_addiction_test.htm
Intoxicating: This is what happens when the preceding five elements are added together. This combination makes for an incredibly intoxicating experience that is difficult for many to resist. The intoxication of the Internet is multiplied when cybersex is involved since behaviors are reinforced with one of the most powerful rewards, sex. Any single aspect of the Internet can be powerful enough to entice a cybersex user. However, it is typically a combination of these six factors that draws problematic cybersex users into their rituals
and leads to their loss of control over their cybersex use.
Special Populations Engaged in Cybersex The following subgroups of cybersex users have been studied in some detail: Males and Females: In the early to mid-1990s there were three times as many males online as females. Recent research shows that the gap has closed and that the split between male and female Internet users is nearly fifty-fifty. As a result, research on cybersex behavior has also included a significant number of females who engage in cybersex. Most of this research suggests that men tend to engage in more visual sex (for example, sexual media exchange), while women tend to engage in more relational sex (for example, chatrooms and e-mail). Females may find the Internet an avenue to sexual exploration and freedom without fear of judgment or reprisal from society. In this way, the Internet can have genuine benefits. Gays and Lesbians: Researchers have reported that homosexuals tend to engage in cybersex at higher levels than heterosexuals, which may be because they don’t have to fear negative cultural responses or even physical harm when they explore sexual behaviors and relationships on the Internet. Some homosexuals report that cybersex is a way to engage in sexual behavior without fear of HIV or other sexually transmitted diseases. By offering homosexuals a safe way to explore and experience their sexuality, the Internet gives them freedom from the stigma often placed on them by society. Children and Adolescents: Studies conducted by AOL and Roper Starch revealed that children use the Internet not only to explore their own sexuality and relationships, but also to gather accurate sexual health information. Since many young adults have grown up with the Internet, they often see it through a different lens than adults. Children, adolescents, and young adults use the Internet to seek answers to a multitude of developmental questions, including sexuality, which they may be afraid to address directly with other adults. Although the Internet can
be useful in educating children and adolescents about sexuality, it can also be a dangerous venue for the development of compulsive behavior and victimization by online predators. Although the effect of hardcore, explicit pornography on the sexual development of children and adolescents has yet to be researched, early exposure to such pornography may impact their moral and sexual development. Physically or Developmentally Challenged People: Only recently have questions been raised about the appropriate use of the Internet for sexual and relational purposes among physically challenged individuals. This area warrants more research and exploration, but initial writings in this area suggest that the Internet can confer a tremendous benefit for sexual and relationship exploration for persons with disabilities. While sex on the Internet can be a positive experience for these subpopulations, it can also introduce the people in these groups to the same problems associated with cybersex that other groups report.
Implications Cybersex is changing sexuality in our culture. The positive side is that sexual behavior is becoming more open and varied, and better understood. The negative implications are that sexuality may become casual, trivial, and less relational. The pornography industry continues to take advantage of the new technologies with the primary goal of profit, and these new technologies will allow for faster communication to support better video and voice exchanges. The eventual development of virtual reality technologies online will further enhance the online sexual experience, and perhaps make the sexual fantasy experience more pleasurable than real life. These technological advances will continue to alter the way we interact and form relationships with others. Researchers are just starting to realize the implications of sex on the Internet. Theories like Cyberhex are helpful in understanding why people engage in cybersex, but the best methods for helping those struggling with cybersex have yet to be discovered. However, society will continue to be impacted by the Internet and cybersex. Parents, teachers, and others who
have not grown up with the Internet will fail future generations if they discount the significant impact it can have on social and sexual development. Continued research and education will be necessary to help individuals navigate the Internet and the world of cybersex more safely. David L. Delmonico and Elizabeth J. Griffin See also Chatrooms; Cybercommunities FURTHER READING Carnes, P. J. (1983). Out of the shadows. Minneapolis, MN: CompCare. Carnes, P. J., Delmonico, D. L., Griffin, E., & Moriarity, J. (2001). In the shadows of the Net: Breaking free of compulsive online behavior. Center City, MN: Hazelden Educational Materials. Cooper, A. (Ed.). (2000). Sexual addiction & compulsivity: The journal of treatment and prevention. New York: Brunner-Routledge. Cooper, A. (Ed.). (2002). Sex and the Internet: A guidebook for clinicians. New York: Brunner-Routledge. Cooper, A., Delmonico, D., & Burg, R. (2000). Cybersex users, abusers, and compulsives: New findings and implications. Sexual Addiction and Compulsivity: Journal of Treatment and Prevention, 7, 5–29. Cooper, A., Scherer, C., Boies, S. C., & Gordon, B. (1999). Sexuality on the Internet: From sexual exploration to pathological expression. Professional Psychology: Research and Practice, 30(2), 154–164. Delmonico, D. L. (1997). Internet sex screening test. Retrieved August 25, 2003, from http://www.sexhelp.com/ Delmonico, D. L., Griffin, E. J., & Moriarity, J. (2001). Cybersex unhooked: A workbook for breaking free of compulsive online behavior. Wickenburg, AZ: Gentle Path Press. Delmonico, D. L., & Miller, J. A. (2003). The Internet sex screening test: A comparison of sexual compulsives versus non-sexual compulsives. Sexual and Relationship Therapy, 18(3), 261–276. Robert Starch Worldwide, Inc. (1999). The America Online/Roper Starch Youth Cyberstudy. Author. Retrieved on December 24, 2003, from http://www.corp.aol.com/press/roper.html/ Schneider, J. P. (1994). Sex addiction: Controversy within mainstream addiction medicine, diagnosis based on the DSM-III-R and physician case histories. Sexual Addiction & Compulsivity: The Journal of Treatment and Prevention, 1(1), 19–44. Schneider, J. P., & Weiss, R. (2001). Cybersex exposed: Recognizing the obsession. Center City, MN: Hazelden Educational Materials. Tepper, M. S., & Owens, A. (2002). Access to pleasure: Onramp to specific information on disability, illness, and other expected changes throughout the lifespan. In A. Cooper (Ed.), Sex and the Internet: A guidebook for clinicians. New York: BrunnerRoutledge. Young, K. S. (1998). Caught in the Net. New York: Wiley. Young, K. S. (2001). Tangled in the web: Understanding cybersex from fantasy to addiction. Bloomington, IN: 1st Books Library.
CYBORGS A cyborg is a technologically enhanced human being. The word means cybernetic organism. Because many people use the term cybernetics for computer science and engineering, a cyborg could be the fusion of a person and a computer. Strictly speaking, however, cybernetics is the science of control processes, whether they are electronic, mechanical, or biological in nature. Thus, a cyborg is a person, some of whose biological functions have come under technological control, by whatever means. The standard term for a computer-simulated person is an avatar, but when an avatar is a realistic copy of a specific real person, the term cyclone is sometimes used, a cybernetic clone or virtual cyborg.
Imaginary Cyborgs The earliest widely known cyborg in literature, dating from the year 1900, is the Tin Woodman in The Wonderful Wizard of Oz by L. Frank Baum. Originally he was a man who earned his living chopping wood in the forest. He wanted to marry a beautiful Munchkin girl, but the old woman with whom the girl lived did not want to lose her labor and prevailed upon the Wicked Witch of the East to enchant his axe. The next time he went to chop wood, the axe chopped off his left leg instead. Finding it inconvenient to get around without one of his legs, he went to a tinsmith who made a new one for him. The axe then chopped off his right leg, which was then also replaced by one made of tin. This morbid process continued until there was nothing left of the original man but his heart, and when that finally was chopped out, he lost his love for the Munchkin girl. Still, he missed the human emotions that witchcraft and technology had stolen from him, and was ready to join Dorothy on her journey to the Emerald City, on a quest for a new heart. This story introduces one of the primary themes associated with cyborgs: the idea that a person accepts the technology to overcome a disability. That is, the person is already less than complete, and the technology is a substitute for full humanity, albeit an inferior one. This is quite different from assimilating
new technology in order to become more than human, a motive severely criticized by the President’s Council on Bioethics in 2003. A very different viewpoint on what it means to be “disabled” has been expressed by Gregor Wolbring, a professor at the University of Calgary. Who decides the meanings of disability and normality is largely a political issue, and Wolbring argues that people should generally have the power to decide for themselves. He notes the example of children who are born without legs because their mothers took thalidomide during pregnancy, then forced to use poorly designed artificial legs because that makes them look more normal, when some other technology would have given them far better mobility.
C makes it easy to shoot yourself in the foot. C++ makes it harder, but when you do, it blows away your whole leg. Bjarne Stroustrup
A common variation on the disability theme is the hero who suffers a terrible accident, is rebuilt, and becomes a cyborg superhero. A well-known example is The Six Million Dollar Man, a television series that aired 1973–1978 and was based on the 1972 novel Cyborg by Martin Caidin. Test pilot Steve Austin is severely injured in a plane crash, then rebuilt with bionic (biological plus electronic) technology. A spinoff series, The Bionic Woman (1976–1978) focuses on tennis player Jaime Sommers who is similarly disabled in a parachute accident. Both become superhero special agents, perhaps to justify the heavy investment required to insert and maintain their bionics. An especially striking example is the motion picture Robocop (1987). Policeman Alex Murphy lives in a depressing future Detroit, dominated by a single, exploitative corporation. To control the increasingly violent population, the corporation develops robot police possessing overwhelming firepower but lacking the judgment to interact successfully with human beings. Thus, when Murphy is blown to pieces by criminals, the corporation transforms him into a cyborg that combines human judgment with machine power. The corporation denies Murphy the right to be considered human, thereby forcing him to
become its enemy. This reflects a second persistent literary theme associated with cyborgs: They reflect the evils of an oppressive society in which technology has become a tool by which the masters enslave the majority. By far the most extensive treatment of the idea that cyborg technology is wicked can be found in the Dalek menace from the long-running BBC television series, Dr. Who. Sometimes mistaken for robots, Daleks are metal-clad beings that resemble huge salt shakers, wheeled trash cans, or British post boxes. They became extremely popular villains since their first appearance in 1963. Two low-budget feature films that retold the first TV serials added to their fame, Dr. Who and the Daleks (1965) and Dr. Who: Daleks Invasion Earth 2150 A.D. (1966). Inside a Dalek’s metal shell lurks a helpless, sluggish creature with vestigial claws, yet the combination of biology and technology gave it the possibility of conquering the universe. Their motto describes how they treat all other living creatures: “Exterminate.” The secret of their origins is revealed in the 1975 serial, “Genesis of the Daleks.” The protagonist of Dr. Who, The Doctor, lands his time machine on the battlescarred planet Skaro, just as the nuclear war between the Thals and the Kaleds reaches its climax. Davros, the evil (and disabled) Kaled scientist, recognizes that chemical weapons are causing his people to mutate horribly, and rather than resist this trend, he accelerates it, transforming humans into the vile Dalek cyborgs.
Real Cyborg Research Since human beings began wearing clothing, the boundary between ourselves and our technology has blurred. Arguably, everybody who wears a wristwatch or carries a cell phone is already a cyborg. But the usual definition implies that a human body has been modified, typically by insertion of some nonbiological technology. In the early years of the twentieth century, when surgeons first gained technological control over pain and infection, many brave or irresponsible doctors began experimenting with improvements to their patients. Sir William Arbuthnot Lane, the British royal physician, theorized that many illnesses were caused by a sluggish movement of food
through the bowels that supposedly flooded the system with poisonous toxins. Diagnosing this chronic intestinal stasis in many cases, Lane performed surgery to remove bands and adhesions, and free the intestines to do their job. Some of his colleagues operated on neurotic patients, believing that moving the abdominal organs into their proper places could alleviate mental disorders. Later generations of doctors abandoned these dangerous and useless procedures, but one of Lane’s innovations has persisted. He was the first to “plate” a bone—that is, to screw a supportive metal plate onto a broken bone. Today many thousands of people benefit from artificial hip and knee joints. In World War I, even before the introduction of antibiotics, rigorous scientific techniques were sufficiently effective to prevent death from infection in most wounded cases, thereby vastly increasing the number of people who survived with horrendous war-caused disabilities. The Carrel-Dakin technique was especially impressive, employing an antiseptic solution of sodium hypochlorite in amazingly rigorous procedures. Suppose a soldier’s leg had been badly torn by an artillery shell. The large and irregular wound would be entirely opened up and cleaned. Then tubes would be placed carefully in all parts of the wound to drip the solution very slowly, for days and even for weeks. Daily, a technician would take samples from every part of the wound, examining them under the microscope, until no more microbes were seen and the wound could be sewn up. Restorative plastic surgery and prosthetics could often help the survivors live decent lives. In the second half of the twentieth century, much progress was achieved with transplants of living tissue—such as kidneys from donors and coronary artery bypass grafts using material from the patient. Inorganic components were also successfully introduced, from heart valves to tooth implants. Pacemakers to steady the rhythm of the heart and cochlear implants to overcome deafness are among the relatively routine electronic components inserted into human bodies, and experiments are being carried out with retina chips to allow the blind to see. There are many difficult technical challenges, notably how to power artificial limbs, how to connect large components to the structure of the human body
safely, and how to interface active or sensory components to the human nervous system. Several researchers, such as Miguel Nicolelis of Duke University, have been experimenting with brain implants in monkeys that allow them to operate artificial arms, with the hope that this approach could be applied therapeutically to human beings in the near future.
Visions of the Future Kevin Warwick, professor of Cybernetics at Reading University in Britain, is so convinced of the near-term prospects for cyborg technology, that he has experimented on his own body. In 1998, he had surgeons implant a transponder in his left arm so a computer could monitor his movements. His first implant merely consisted of a coil that picked up power from a transmitter and reflected it back, letting the computer know where he was so it could turn on lights when he entered a room. In 2002 he had neurosurgeons connect his nervous system temporarily to a computer for some very modest experiments, but in the future he imagines that implants interfacing between computers and the human nervous system will allow people to store, playback, and even share experiences. He plans someday to experiment with the stored perceptions associated with drinking wine, to see if playing them back really makes him feel drunk. His wife, Irena, has agreed that someday they both will receive implants to share feelings such as happiness, sexual arousal, and even pain. In the long run, Warwick believes, people will join with their computers to become superhuman cyborgs. In so doing, they will adopt a radically new conception of themselves, including previously unknown understandings, perceptions, and desires. Natasha Vita-More, an artist and futurist, has sketched designs for the cyborg posthuman she calls Primo, based on aesthetics and general technological trends. Although she is not building prototypes or experimenting with components at the present time, she believes that her general vision could be achieved within this century. Primo would be ageless rather than mortal, capable of upgrades whenever an organ wore out or was made obsolete by technical progress, and able to change gender
whenever (s)he desires. Nanotechnology would give Primo 1,000 times the brainpower of a current human, and thus capable of running multiple viewpoints in parallel rather than being locked into one narrow frame of awareness. Primo’s senses would cover a vastly wider bandwidth, with sonar mapping onto the visual field at will, an internal grid for navigating and moving anywhere like an acrobatic dancer with perfect sense of direction, and a nervous system that can transmit information from any area of the body to any other instantly. Primo’s nose could identify any chemical or biological substance in the environment, and smart skin will not only protect the body, but provide vastly enhanced sensations. Instead of the depression and envy that oppress modern humans, (s)he would be filled with ecstatic yet realistic optimism. The old fashioned body’s need to eliminate messy wastes will be transcended by Primo’s ability to recycle and purify. William J. Mitchell, the director of Media Arts and Sciences at the Massachusetts Institute of Technology, argues that we have already evolved beyond traditional homo sapiens by become embedded in a ubiquitous communication network. The title of his book, ME++: The Cyborg Self and the Networked City (2003), offers a nice metaphor derived from the C language for programming computers. C was originally developed by a telephone company (Bell Labs) and has become possibly the most influential language among professional programmers, especially in the modular version called C++. In C (and in the Java language as well), “++” means to increment a number by adding 1 to it. Thus, C++ is one level more than C, and ME++ is one level more than me, in which
technology takes me above and beyond myself. A person who is thoroughly plugged in experiences radically transformed consciousness: "I construct, and I am constructed, in a mutually recursive process that continually engages my fluid, permeable boundaries and my endlessly ramifying networks. I am a spatially extended cyborg" (Mitchell 2003, 39).

William Sims Bainbridge

FURTHER READING

Bainbridge, W. S. (1919). Report on medical and surgical developments of the war. Washington, DC: Government Printing Office.
Barnes, B. A. (1977). Discarded operations: Surgical innovation by trial and error. In J. P. Bunker, B. A. Barnes, & F. Mosteller (Eds.), Costs, risks, and benefits of surgery (pp. 109–123). New York: Oxford University Press.
Baum, L. F. (1900). The wonderful wizard of Oz. Chicago: G. M. Hill.
Bentham, J. (1986). Doctor Who: The early years. London: W. H. Allen.
Caidin, M. (1972). Cyborg. New York: Arbor House.
Haining, P. (Ed.). (1983). Doctor Who: A celebration. London: W. H. Allen.
Mitchell, W. J. (2003). ME++: The cyborg self and the networked city. Cambridge, MA: MIT Press.
Nicolelis, M. A. L., & Srinivasan, M. A. (2003). Human-machine interaction: Potential impact of nanotechnology in the design of neuroprosthetic devices aimed at restoring or augmenting human performance. In M. C. Roco & W. S. Bainbridge (Eds.), Converging technologies for improving human performance (pp. 251–255). Dordrecht, Netherlands: Kluwer.
President's Council on Bioethics. (2003). Beyond therapy: Biotechnology and the pursuit of happiness. Washington, DC: President's Council on Bioethics.
Warwick, K. (2000). Cyborg 1.0. Wired, 8(2), 144–151.
Wolbring, G. (2003). Science and technology and the triple D (Disease, Disability, Defect). In M. C. Roco & W. S. Bainbridge (Eds.), Converging technologies for improving human performance (pp. 232–243). Dordrecht, Netherlands: Kluwer.
D

DATA MINING
DATA VISUALIZATION
DEEP BLUE
DENIAL-OF-SERVICE ATTACK
DESKTOP METAPHOR
DIALOG SYSTEMS
DIGITAL CASH
DIGITAL DIVIDE
DIGITAL GOVERNMENT
DIGITAL LIBRARIES
DRAWING AND DESIGN
DATA MINING

Data mining is the process of automatic discovery of valid, novel, useful, and understandable patterns, associations, changes, anomalies, and statistically significant structures from large amounts of data. It is an interdisciplinary field merging ideas from statistics, machine learning, database systems and data warehousing, and high-performance computing, as well as from visualization and human-computer interaction. It was engendered by the economic and scientific need to extract useful information from the data that has grown phenomenally in all spheres of human endeavor. It is crucial that the patterns, rules, and models that are discovered be valid and generalizable not only in the data samples already examined, but also in
future data samples. Only then can the rules and models obtained be considered meaningful. The discovered patterns should also be novel, that is, not already known to experts; otherwise, they would yield very little new understanding. Finally, the discoveries should be useful as well as understandable. Typically data mining has two high-level goals: prediction and description. The former answers the question of what and the latter the question of why. For prediction, the key criterion is the accuracy of the model in making future predictions; how the prediction decision is arrived at may not be important. For description, the key criterion is the clarity and simplicity of the model describing the data in understandable terms. There is sometimes a dichotomy between these two aspects of data mining in that the most accurate prediction model for a problem may not be easily understandable, and the
most easily understandable model may not be highly accurate in its predictions.
Steps in Data Mining

Data mining refers to the overall process of discovering new patterns or building models from a given dataset. There are many steps involved in the mining enterprise. These include data selection, data cleaning and preprocessing, data transformation and reduction, data mining task and algorithm selection, and finally, postprocessing and the interpretation of discovered knowledge. Here are the most important steps (a brief code sketch follows the list):

Understand the application domain: A proper understanding of the application domain is necessary to appreciate the data mining outcomes desired by the user. It is also important to assimilate and take advantage of available prior knowledge to maximize the chance of success.

Collect and create the target dataset: Data mining relies on the availability of suitable data that reflects the underlying diversity, order, and structure of the problem being analyzed. Therefore, it is crucial to collect a dataset that captures all the possible situations relevant to the problem being analyzed.

Clean and transform the target dataset: Raw data contain many errors and inconsistencies, such as noise, outliers, and missing values. An important element of this process is the unduplication of data records to produce a nonredundant dataset. Another important element of this process is the normalization of data records to deal with the kind of pollution caused by the lack of domain consistency.

Select features and reduce dimensions: Even after the data have been cleaned up in terms of eliminating duplicates, inconsistencies, missing values, and so on, there may still be noise that is irrelevant to the problem being analyzed. These noise attributes may confuse subsequent data mining steps, produce irrelevant rules and associations, and increase computational cost. It is therefore wise to perform a dimension-reduction or feature-selection step to separate those attributes that are pertinent from those that are irrelevant.

Apply data mining algorithms: After performing the preprocessing steps, apply appropriate data mining algorithms—association rule discovery, sequence mining, classification tree induction, clustering, and so on—to analyze the data.

Interpret, evaluate, and visualize patterns: After the algorithms have produced their output, it is still necessary to examine the output in order to interpret and evaluate the extracted patterns, rules, and models. It is only by this interpretation and evaluation process that new insights on the problem being analyzed can be derived.
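In modern practice several of these steps are often chained together in a small script. The sketch below is a minimal, hypothetical illustration that assumes the open-source scikit-learn and pandas libraries, an invented file of labeled (numeric) customer records, and an invented "churned" label column; it compresses cleaning, normalization, feature selection, mining, and evaluation into one pipeline, while domain understanding and interpretation remain human work around the code.

```python
# Minimal sketch of the core data mining steps, assuming scikit-learn and
# pandas are available. The file name, column names, and parameters are
# hypothetical and chosen only for illustration.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split

# Collect and create the target dataset (here: a flat file of labeled records).
data = pd.read_csv("customers.csv")           # hypothetical dataset
X = data.drop(columns=["churned"])            # predictor attributes (assumed numeric)
y = data["churned"]                           # class label to predict

# Clean/transform, select features, and apply a mining algorithm in sequence.
pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),    # handle missing values
    ("scale", StandardScaler()),                     # normalize attributes
    ("select", SelectKBest(f_classif, k=10)),        # feature selection
    ("model", DecisionTreeClassifier(max_depth=5)),  # classification tree induction
])

# Hold out data so the model's validity on future samples can be estimated.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
pipeline.fit(X_train, y_train)

# Interpret and evaluate: accuracy on unseen data approximates generalizability.
print("accuracy on held-out data:", pipeline.score(X_test, y_test))
```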
Data Mining Tasks

In verification-driven data analysis the user postulates a hypothesis, and the system tries to validate it. Common verification-driven operations include querying and reporting, multidimensional analysis, and statistical analysis. Data mining, on the other hand, is discovery driven—that is, it automatically extracts new hypotheses from data. The typical data mining tasks include the following:

Association rules: Given a database of transactions, where each transaction consists of a set of items, association discovery finds all the item sets that frequently occur together, and also the rules among them. For example, 90 percent of people who buy cookies also buy milk (60 percent of grocery shoppers buy both). A small numerical sketch of these measures follows this list.

Sequence mining: The sequence-mining task is to discover sequences of events that commonly occur together. For example, 70 percent of the people who buy Jane Austen's Pride and Prejudice also buy Emma within a month.

Similarity search: An example is the problem where a person is given a database of objects and a "query" object, and is then required to find those objects in the database that are similar to the query object. Another example is the problem where a person is given a database of objects, and is then required to find all pairs of objects in the database that are within some distance of each other.

Deviation detection: Given a database of objects, find those objects that are the most different from the other objects in the database—that is, the outliers. These objects may be thrown away as noise, or they may be the "interesting" ones, depending on the specific application scenario.
Classification and regression: This is also called supervised learning. In the case of classification, someone is given a database of objects that are labeled with predefined categories or classes. They are required to develop from these objects a model that separates them into the predefined categories or classes. Then, given a new object, the learned model is applied to assign this new object to one of the classes. In the more general situation of regression, instead of predicting classes, real-valued fields have to be predicted.

Clustering: This is also called unsupervised learning. Here, given a database of objects that are usually without any predefined categories or classes, the individual is required to partition the objects into subsets or groups such that elements of a group share a common set of properties. Moreover, the partition should be such that the similarity between members of the same group is high and the similarity between members of different groups is low.
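To make the association-rule task concrete, the toy sketch below computes the support and confidence of a single candidate rule over a handful of invented market-basket transactions. It only illustrates the two measures; a real discovery algorithm such as Apriori would enumerate all frequent item sets and derive every rule above minimum support and confidence.

```python
# Toy illustration of support and confidence for one association rule,
# "customers who buy cookies also buy milk." The transactions are invented.
transactions = [
    {"cookies", "milk", "bread"},
    {"cookies", "milk"},
    {"cookies", "juice"},
    {"milk", "eggs"},
    {"cookies", "milk", "eggs"},
]

def rule_stats(transactions, antecedent, consequent):
    n = len(transactions)
    with_antecedent = [t for t in transactions if antecedent <= t]
    with_both = [t for t in with_antecedent if consequent <= t]
    support = len(with_both) / n                        # fraction buying both
    confidence = len(with_both) / len(with_antecedent)  # of cookie buyers, fraction also buying milk
    return support, confidence

support, confidence = rule_stats(transactions, {"cookies"}, {"milk"})
print(f"support={support:.2f}, confidence={confidence:.2f}")
# Prints: support=0.60, confidence=0.75 for this invented data.
```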
Challenges in Data Mining

Many existing data mining techniques are usually ad hoc; however, as the field matures, solutions are being proposed for crucial problems like the incorporation of prior knowledge, handling missing data, adding visualization, improving understandability, and other research challenges. These challenges include the following:

Scalability: How does a data mining algorithm perform if the dataset has increased in volume and in dimensions? This may call for some innovations based on efficient and sufficient sampling, or on a trade-off between in-memory and disk-based processing, or on an approach based on high-performance distributed or parallel computing.

New data formats: To date, most data mining research has focused on structured data, because it is the simplest and most amenable to mining. However, support for other data types is crucial. Examples include unstructured or semistructured (hyper)text, temporal, spatial, and multimedia databases. Mining these is fraught with challenges, but it is necessary because multimedia content and digital libraries proliferate at astounding rates.

Handling data streams: In many domains the data changes over time and/or arrives in a constant stream. Extracted knowledge thus needs to be constantly updated.

Database integration: The various steps of the mining process, along with the core data mining methods, need to be integrated with a database system to provide common representation, storage, and retrieval. Moreover, enormous gains are possible when these are combined with parallel database servers.

Privacy and security issues in mining: Privacy-preserving data mining techniques are invaluable in cases where one may not look at the detailed data, but one is allowed to infer high-level information. This also has relevance for the use of mining for national security applications.

Human interaction: While a data mining algorithm and its output may be readily handled by a computer scientist, it is important to realize that the ultimate user is often not the developer. In order for a data mining tool to be directly usable by the ultimate user, issues of automation—especially in the sense of ease of use—must be addressed. Even for computer scientists, the use and incorporation of prior knowledge into a data mining algorithm is often a challenge; they too would appreciate data mining algorithms that can be modularized in a way that facilitates the exploitation of prior knowledge.

Data mining is ultimately motivated by the need to analyze data from a variety of practical applications—from business domains such as finance, marketing, telecommunications, and manufacturing, or from scientific fields such as biology, geology, astronomy, and medicine. Identifying new application domains that can benefit from data mining will lead to the refinement of existing techniques, and also to the development of new methods where current tools are inadequate.

Mohammed J. Zaki

FURTHER READING

Association for Computing Machinery's special interest group on knowledge discovery and data mining. Retrieved August 21, 2003, from http://www.acm.org/sigkdd.
Dunham, M. H. (2002). Data mining: Introductory and advanced topics. Upper Saddle River, NJ: Prentice Hall.
Han, J., & Kamber, M. (2000). Data mining: Concepts and techniques. San Francisco: Morgan Kaufmann.
Hand, D. J., Mannila, H., & Smyth, P. (2001). Principles of data mining. Cambridge, MA: MIT Press.
Kantardzic, M. (2002). Data mining: Concepts, models, methods, and algorithms. Somerset, NJ: Wiley-IEEE Press.
Witten, I. H., & Frank, E. (1999). Data mining: Practical machine learning tools and techniques with Java implementations. San Francisco: Morgan Kaufmann.
DATA VISUALIZATION

Data visualization is a new discipline that uses computers to make pictures that elucidate a concept, phenomenon, relationship, or trend hidden in a large quantity of data. By using interactive three-dimensional (3D) graphics, data visualization goes beyond making static illustrations or graphs and emphasizes interactive exploration. The pervasive use of computers in all fields of science, engineering, medicine, and commerce has resulted in an explosive growth of data, presenting people with unprecedented challenges in understanding data. Data visualization transforms raw data into pictures that exploit the superior visual processing capability of the human brain to detect patterns and draw inferences, revealing insights hidden in the data. For example, data visualization allows us to capture trends, structures, and anomalies in the behavior of a physical process being modeled or in vast amounts of Internet data. Furthermore, it provides us with a visual and remote means to communicate our findings to others. Since publication of a report on visualization in scientific computing by the U.S. National Science Foundation in 1987, both government and industry have invested tremendously in data-visualization research and development, resulting in advances in visualization and interactive techniques that have helped lead to many scientific discoveries, better engineering designs, and more timely and accurate medical diagnoses.
Visualization Process

A typical data-visualization process involves multiple steps, including data generation, filtering, mapping, rendering, and viewing. The data-generation step can be a numerical simulation, a laboratory experiment, a collection of sensors, an image scanner, or a recording of Web-based business transactions. Filtering removes noise, extracts and enhances features, or rescales data. Mapping derives appropriate representations of data for the rendering step. The representations can be composed of geometric primitives such as points, lines, polygons, and surfaces, supplemented with properties such as colors, transparency, and textures. Whereas the visualization of a computerized tomography (CT) scan of a fractured bone should result in an image of a bone, plenty of room for creativity exists when making a visual depiction of the trend of a stock market or the chemical reaction in a furnace. Rendering generates two-dimensional or three-dimensional images based on the mapping results and other rendering parameters, such as the lighting model, viewing position, and so forth. Finally, the resulting images are displayed for viewing. Both photorealistic and nonphotorealistic rendering techniques exist for different purposes of visual communication. Nonphotorealistic rendering, which mimics how artists use brushes, strokes, texture, color, layout, and so forth, is usually used to increase the clarity of the spatial relationship between objects, improve the perception of an object's shape and size, or give a particular type of media presentation. Note that the filtering and mapping steps are largely application dependent and often require domain knowledge to perform. For example, the filtering and mapping steps for the visualization of website structure or browsing patterns would be quite different from those of brain tumors or bone fractures. A data-visualization process is inherently iterative. That is, after a visualization is made, the user should be able to go back to any previous steps, including the data-generation step, which consists of a numerical or physical model, to make changes such that more information can be obtained from the revised visualization. The changes may be made in a systematic way or by trial and error. The goal is to improve the model and understanding of the corresponding problem via this visual feedback process.
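The filter–map–render sequence can be pictured with a small synthetic example. The sketch below assumes the numpy and matplotlib libraries; the data, the moving-average filter, and the choice of color map are arbitrary stand-ins for the application-dependent decisions described above.

```python
# Minimal filter -> map -> render sketch using numpy and matplotlib.
# The data are synthetic; a real application would substitute its own
# data-generation, filtering, and mapping choices.
import numpy as np
import matplotlib.pyplot as plt

# Data generation: a noisy measurement of an underlying signal.
x = np.linspace(0, 10, 200)
raw = np.sin(x) + np.random.normal(scale=0.3, size=x.size)

# Filtering: reduce noise with a simple moving average.
kernel = np.ones(9) / 9
filtered = np.convolve(raw, kernel, mode="same")

# Mapping: derive visual properties (color, size) from the data values.
colors = plt.cm.viridis((filtered - filtered.min()) / np.ptp(filtered))
sizes = 20 + 80 * np.abs(filtered)

# Rendering and viewing: draw the mapped representation on the screen.
plt.scatter(x, filtered, c=colors, s=sizes)
plt.xlabel("x")
plt.ylabel("filtered value")
plt.title("Filter, map, and render steps on a synthetic signal")
plt.show()
```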
Computational Steering Data visualization should not be performed in isolation. It is an integral part of data analysis and the scientific discovery process. Appropriate visualization tools integrated into a modeling process can greatly enhance scientists’ productivity, improve the efficiency of hardware utilization, and lead to scientific breakthroughs. The use of visualization to drive scientific discovery processes has become a trend. However, we still lack adequate methods to achieve computational steering—the process of interacting with as well as changing states, parameters, or resolution of a numerical simulation—and to be able to see the effect immediately, without stopping or restarting the simulation. Consequently, the key to successful data visualization is interactivity, the ability to effect change while watching the changes take effect in real time on the screen. If all the steps in the modeling and visualization processes can be performed in a highly interactive fashion, steering can be achieved. The ability to steer a numerical model makes the visualization process a closed loop, becoming a scientific discovery process that is self-contained. Students can benefit from such a process because they can more easily move from concepts to solutions. Researchers can become much more productive because they can make changes according to the interactive graphical interpretation of the simulation states without restarting the simulation every time. Computational steering has been attempted in only a few fields. An example is the SCIRun system used in computational medicine. To adopt computational steering, researchers likely must redesign the computational model that is required to incorporate feedback and changes needed in a steering process. More research is thus needed to make computational steering feasible in general.
Computer-Assisted Surgery During the past ten years significant advances have been made in rendering software and hardware technologies, resulting in higher fidelity and real-time visualization. Computer-assisted surgery is an application of such advanced visualization technologies with a direct societal impact. Computer
visualization can offer better 3D spatial acuity than humans have, and computer-assisted surgery is more reliable and reproducible. However, computerassisted surgery has several challenges. First, the entire visualization process—consisting of 3D reconstruction, segmentation, rendering, and image transport and display—must be an integrated part of the end-to-end surgical planning and procedure. Second, the visualization must be in real time, that is, flicker free. Delayed visual response could lead to dangerous outcomes for a surgery patient. Most important, the visualization must attain the required accuracy and incorporate quantitative measuring mechanisms. Telesurgery, which allows surgeons at remote sites to participate in surgery, will be one of the major applications of virtual reality and augmented reality (where the virtual world and real world are allowed to coexist and a superimposed view is presented to the user). Due to the distance between a patient and surgeons, telesurgery has much higher data visualization, hardware, and network requirements. Nevertheless, fast-improving technology and decreasing costs will make such surgery increasingly appealing. Whether and how much stereoscopic viewing can benefit surgeons remains to be investigated. The most needed advance, however, is in interface software and hardware.
User Interfaces for Data Visualization

Most data visualization systems supply the user with a suite of visualization tools that requires the user to be familiar with both the corresponding user interfaces and a large visualization parameter space (a multidimensional space which consists of those input variables used by the visualization program). Intuitive and intelligent user interfaces can greatly assist the user in the process of data exploration. First, the visual representation of the process of data exploration and results can be incorporated into the user interface of a visualization system. Such an interface can help the user to keep track of the visualization experience, use it to generate new visualizations, and share it with others. Consequently, the interface needs to display not only the visualizations but also the visualization process to the user. Second, the
task of exploring large and complex data and visualization parameter space during the mapping step can be delegated to an intelligent system such as a neural network. One example is to turn the 3D segmentation problem into a simple 2D painting process for the user, leaving the neural network to classify the multidimensional data. As a result, the user can focus on the visualizations rather than on the user interface widgets (e.g., a color editor, plotting area, or layout selector) for browsing through the multidimensional parameter space. Such next-generation user interfaces can enhance data understanding while reducing the cost of visualization by eliminating the iterative trial-and-error process of parameter selection. For routine analysis of large-scale data sets, the savings can be tremendous.
Research Directions

The pervasiveness of the World Wide Web in average people's lives has led to a data explosion. Some data are relevant to some people's needs, but most are not. Nevertheless, many people do their everyday jobs by searching huge databases of information distributed in locations all over the world. A large number of computer services repeatedly operate on these databases. Information visualization, a branch of visualization, uses visual-based analysis of data with no spatial references, such as large amounts of text and documents. A data mining step (the procedure to reduce the size, dimensionality, and/or complexity of a data set), which may be considered as the filtering step, usually precedes the picture-making step of visualization. The mapping step often converts reduced relations into graphs or charts. Most information visualizations are thus about displaying and navigating 2D or 3D graphs. People need new reduction, mapping, and navigation methods so that they can manage, comprehend, and use the fast-growing information on the World Wide Web. Other important research directions in data visualization include improving the clarity of visualizations, multidimensional and multivariate data (a data set with a large number of dependent variables) visualization, interaction mechanisms for large and shared display space, visualization designs guided
by visual perception study, and user studies for measuring the usability of visualization tools and the success of visualizations.

Kwan-Liu Ma

See also Information Spaces; Sonification

FURTHER READING

Johnson, C., & Parker, S. (1995). Applications in computational medicine using SCIRun: A computational steering programming environment. The 10th International Supercomputer Conference (pp. 2–19).
Ma, K.-L. (2000). Visualizing visualizations: Visualization viewpoints. IEEE Computer Graphics and Applications, 20(5), 16–19.
Ma, K.-L. (2004). Visualization—A quickly emerging field. Computer Graphics, 38(1), 4–7.
McCormick, B., DeFanti, T., & Brown, M. (1987). Visualization in scientific computing. Computer Graphics, 21(6).
DEEP BLUE

In 1997, the chess machine Deep Blue fulfilled a long-standing challenge in computer science by defeating the human world chess champion, Garry Kasparov, in a six-game match. The idea that a computer could defeat the best humanity had to offer in an intellectual game such as chess brought many important questions to the forefront: Are computers intelligent? Do computers need to be intelligent in order to solve difficult or interesting problems? How can the unique strengths of humans and computers best be exploited?
Early History Even before the existence of electronic computers, there was a fascination with the idea of machines that could play games. The Turk was a chess-playing machine that toured the world in the eighteenth and nineteenth centuries, to much fanfare. Of course the technology in the Turk was mainly concerned with concealing the diminutive human chess master hidden inside the machine.
In 1949, the influential mathematician Claude Shannon (1916–2001) proposed chess as an ideal domain for exploring the potential of the then-new electronic computer. This idea was firmly grasped by those studying artificial intelligence (AI), who viewed games as providing an excellent test bed for exploring many types of AI research. In fact, chess has often been said to play the same role in the field of artificial intelligence that the fruit fly plays in genetic research. Although breeding fruit flies has no great practical value, they are excellent subjects for genetic research: They breed quickly, have sufficient variation, and it is cheap to maintain a large population. Similarly, chess avoids some aspects of complex real-world domains that have proven difficult, such as natural-language understanding, vision, and robotics, while having sufficient complexity to allow an automated problem solver to focus on core AI issues such as search and knowledge representation. Chess programs made steady progress in the following decades, particularly after researchers abandoned the attempt to emulate human thought processes and instead focused on doing a more thorough and exhaustive exploration of possible move sequences. It was soon observed that the playing strength of such “brute-force” chess programs correlated strongly with the speed of the underlying computer, and chess programs gained in strength both from more sophisticated programs and faster computers.
Deep Blue

The Deep Blue computer chess system was developed in 1989–1997 by a team (Murray Campbell, A. Joseph Hoane, Jr., and Feng-hsiung Hsu) from IBM's T. J. Watson Research Center. Deep Blue was a leap ahead of the chess-playing computers that had gone before it. This leap resulted from a number of factors, including:
■ a computer chip designed specifically for high-speed chess calculations,
■ a large-scale parallel processing system, with more than five hundred processors cooperating to select a move,
■ a complex evaluation function to assess the goodness of a chess position, and
■ a strong emphasis on intelligent exploration (selective search) of the possible move sequences.
The first two factors allowed the full Deep Blue system to examine 100–200 million chess positions per second while selecting a move, and the complex evaluation function allowed Deep Blue to make more informed decisions. However, a naive brute-force application of Deep Blue’s computational power would have been insufficient to defeat the top human chess players. It was essential to combine the computer’s power with a method to focus the search on move sequences that were “important.” Deep Blue’s selective search allowed it to search much more deeply on the critical move sequences. Deep Blue first played against world champion Garry Kasparov in 1996, with Kasparov winning the six-game match by a score of 4-2. A revamped Deep Blue, with improved evaluation and more computational power, won the 1997 rematch by a score of 3.5–2.5.
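The general search idea behind such systems can be illustrated with a depth-limited minimax search and alpha-beta pruning, shown below as a generic sketch. This is a textbook illustration, not Deep Blue's actual algorithm: Deep Blue combined special-purpose hardware, a far richer evaluation function, and selective-search extensions, and the `game` interface here is purely hypothetical.

```python
# Generic depth-limited minimax with alpha-beta pruning. A textbook sketch,
# not Deep Blue's code; `game` is a hypothetical object providing
# legal_moves(), make(), undo(), is_over(), and evaluate().
import math

def alphabeta(game, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    # Stop at the depth limit or a finished game and score the position.
    if depth == 0 or game.is_over():
        return game.evaluate()          # static evaluation of the position
    if maximizing:
        best = -math.inf
        for move in game.legal_moves():
            game.make(move)
            best = max(best, alphabeta(game, depth - 1, alpha, beta, False))
            game.undo(move)
            alpha = max(alpha, best)
            if alpha >= beta:           # prune: the opponent will avoid this line
                break
        return best
    else:
        best = math.inf
        for move in game.legal_moves():
            game.make(move)
            best = min(best, alphabeta(game, depth - 1, alpha, beta, True))
            game.undo(move)
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```

Selective search, as emphasized in Deep Blue, extends this basic scheme by spending extra depth on move sequences judged "important" rather than searching all lines to a uniform depth.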
Human and Computer Approaches to Chess

It is clear that systems like Deep Blue choose moves using methods radically different from those employed by human experts. These differences result in certain characteristic strengths of the two types of players. Computers tend to excel at the short-range tactical aspects of a game, mainly due to an extremely thorough investigation of possible move sequences. Human players can only explore perhaps a few dozen positions while selecting a move, but can assess the long-term strategic implications of these moves in a way that has proven difficult for a computer. The combination of human and computer players has proven to be very powerful. High-level chess players routinely use computers as part of their preparation. One typical form of interaction would have the human player suggest strategically promising moves that are validated tactically by the computer player.
Research Areas While computer programs such as Deep Blue achieve a very high level of play, most of the knowledge in such systems is carefully crafted by human experts. Much research is needed to understand how to have future systems learn the knowledge necessary to play the game without extensive human intervention. TD-gammon, a neural-network system that plays world-class backgammon, was an early leader in this area. Determining the best mode for humans and computers to interact in a more cooperative manner (for example, with one or the other acting as assistant, trainer, or coach) is another area worthy of further research. Game-playing programs that are based on large-scale searches have problems in translating the search results into forms that humans can deal with easily. Some types of games, such as the Chinese game Go, have too many possible moves to allow a straightforward application of the methods used for chess, and pose significant challenges. Games with hidden information and randomness, such as poker or bridge, also require new and interesting approaches. Interactive games, which employ computer-generated characters in simulated worlds, can be more realistic and entertaining if the characters can behave in intelligent ways. Providing such intelligent characters is a key goal for future AI researchers.
Murray Campbell

See also Artificial Intelligence

FURTHER READING

Campbell, M., Hoane, A. J., & Hsu, F. (2002). Deep Blue. Artificial Intelligence, 134(1–2), 57–83.
Frey, P. W. (Ed.). (1983). Chess skill in man and machine. New York: Springer-Verlag.
Hsu, F. (2002). Behind Deep Blue: Building the computer that defeated the world chess champion. Princeton, NJ: Princeton University Press.
Laird, J. E., & van Lent, M. (2000). Human-level AI's killer application: Interactive computer games. AI Magazine, 22(2), 15–26.
Marsland, T. A., & Schaeffer, J. (Eds.). (1990). Computers, chess, and cognition. New York: Springer-Verlag.
Newborn, M. (2003). Deep Blue: An artificial intelligence milestone. New York: Springer-Verlag.
Schaeffer, J. (2001). A gamut of games. AI Magazine, 22(3), 29–46.
Schaeffer, J., & van den Herik, J. (Eds.). (2002). Chips challenging champions: Games, computers, and artificial intelligence. New York: Elsevier.
Shannon, C. (1950). Programming a computer for playing chess. Philosophical Magazine, 41, 256–275.
Standage, T. (2002). The Turk: The life and times of the famous eighteenth-century chess-playing machine. New York: Walker & Company.

DENIAL-OF-SERVICE ATTACK

A denial-of-service (DoS) attack causes the consumption of a computing system's resources—typically with malicious intent—on such a scale as to compromise the ability of other users to interact with that system. Virgil Gligor coined the term denial-of-service attack in reference to attacks on operating systems (OS) and network protocols. Recently the term has been used specifically in reference to attacks executed over the Internet. As governments and businesses increasingly rely on the Internet, the damage that a DoS attack can cause by the disruption of computer systems has provided incentive for attackers to launch such attacks and for system operators to defend against such attacks.

Evolution of Denial-of-Service Attacks

DoS vulnerabilities occur when a poor resource-allocation policy allows a malicious user to allocate so many resources that insufficient resources are left for legitimate users. Early DoS attacks on multiuser operating systems involved one user spawning a large number of processes or allocating a large amount of memory, which would exhaust the memory available and result in operating system overload. Early network DoS attacks took advantage of the fact that the early Internet was designed with implicit trust in the computers connected to it. The unin-
tended result of this trust was that users paid little attention to handling packets (the fundamental unit of data transferred between computers on the Internet) that did not conform to standard Internet protocols. When a computer received a malformed packet that its software was not equipped to handle, it might crash, thus denying service to other users. These early DoS attacks were relatively simple and could be defended against by upgrading the OS software that would identify and reject malformed packets. However, network DoS attacks rapidly increased in complexity over time. A more serious threat emerged from the implicit trust in the Internet’s design: The protocols in the Internet themselves could be exploited to execute a DoS attack. The difference between exploiting an Internet software implementation (as did the previous class of DoS attacks) and exploiting an Internet protocol itself was that the former were easy to identify (malformed packets typically did not occur outside of an attack) and once identified could be defended against, whereas protocol-based attacks could simply look like normal traffic and were difficult to defend against without affecting legitimate users as well. An example of a protocol attack was TCP SYN flooding. This attack exploited the fact that much of the communication between computers over the Internet was initiated by a TCP handshake where the communicating computers exchanged specialized packets known as “SYN packets.” By completing only half of the handshake, an attacker could leave the victim computer waiting for the handshake to complete. Because computers could accept only a limited number of connections at one time, by repeating the half-handshake many times an attacker could fill the victim computer’s connection capacity, causing it to reject new connections by legitimate users or, even worse, causing the OS to crash. A key component of this attack was that an attacker was able to hide the origin of his or her packets and pose as different computer users so the victim had difficulty knowing which handshakes were initiated by the attacker and which were initiated by legitimate users. Then another class of DoS attack began to appear: distributed denial-of-service (DDoS) attacks. The difference between traditional DoS attacks
and the DDoS variant was that attackers were beginning to use multiple computers in each attack, thus amplifying the attack. Internet software implementation and protocol attacks did not require multiple attackers to be successful, and effective defenses were designed (in some cases) against them. A DDoS attack, however, did not require a software implementation or protocol flaw to be present. Rather, a DDoS attack would consist of an attacker using multiple computers (hundreds to tens of thousands) to send traffic at the maximum rate to a victim’s computer. The resulting flood of packets was sometimes enough to either overload the victim’s computer (causing it to slow to a crawl or crash) or overload the communication line from the Internet to that computer. The DDoS attacker would subvert control of other people’s computers for use in the attack, often using flaws in the computer’s control code similar to Internet software implementation DoS attacks or simply attaching the attack control codes in an e-mail virus or Internet worm. The presence of attacking computers on many portions of the Internet gave this class of attacks its name.
Defense against DoS Attacks Defending against DoS attacks is often challenging because the very design of the Internet allows them to occur. The Internet’s size requires that even the smallest change to one of its fundamental protocols be compatible with legacy systems that do not implement the change. However, users can deploy effective defenses without redesigning the entire Internet. For example, the defense against Internet software implementation DoS attacks is as simple as updating the software on a potential victim’s computer; because the packets in this type of attack are usually malformed, the compatibility restriction is easy to meet. Defending against a protocol-level attack is more difficult because of the similarity of the attack itself to legitimate traffic. Experts have proposed several mechanisms, which mostly center on the concept of forcing all computers initiating a handshake to show that they have performed some amount of “work” during the handshake. The expectation is that an attacker will not have the computing power to
impersonate multiple computers making handshake requests. Unfortunately, this class of defenses requires a change in the Internet protocol that must be implemented by all computers wanting to contact the potential victim’s computer. Moreover, more protocol-compliant solutions involve placing between the victim’s computer and the Internet specialized devices that are designed to perform many handshakes at once and to pass only completed handshakes to the victim. The DDoS variant of DoS attacks is the most difficult to defend against because the attack simply overwhelms the victim’s computer with too many packets or, worse, saturates the victim’s connection to the Internet so that many packets are dropped before ever reaching the victim’s computer or network. Some businesses rely on overprovisioning, which is the practice of buying computer resources far in excess of expected use, to mitigate DDoS attacks; this practice is expensive but raises the severity of an attack that is necessary to disable a victim. Proposed defenses against this type of attack—more so than proposed defenses against other types of attacks— have focused on changing Internet protocols. Many proposals favor some type of traceback mechanism, which allows the victim of an attack to determine the identity and location of the attacking computers, in the hope that filters can be installed in the Internet to minimize the flood of traffic while leaving legitimate traffic unaffected. At the time of this writing, no DDoS defense proposal has been accepted by the Internet community.
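The "work"-based handshake defenses mentioned above can be pictured as a hash-based client puzzle, in the spirit of the client-puzzle papers cited below: the server issues a random challenge, the client must find a nonce whose hash has a prescribed number of leading zero bits, and the server verifies the answer with a single cheap hash before committing connection state. The sketch below is a simplified illustration of the concept, not any deployed protocol.

```python
# Simplified hash-based client puzzle, illustrating proof-of-work style
# DoS defenses. This is an illustration of the idea only, not a real protocol.
import hashlib
import os

DIFFICULTY_BITS = 20  # server-tunable cost; expected work grows as 2**DIFFICULTY_BITS

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def issue_challenge() -> bytes:
    # Server side: a fresh random challenge per connection attempt.
    return os.urandom(16)

def solve_challenge(challenge: bytes) -> int:
    # Client side: brute-force a nonce; this is the imposed "work."
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int) -> bool:
    # Server side: verification costs only one hash.
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS

challenge = issue_challenge()
nonce = solve_challenge(challenge)
assert verify(challenge, nonce)
```

The asymmetry is the point of the design: an attacker pretending to be thousands of clients would have to pay the puzzle cost thousands of times, while the server's verification cost stays negligible.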
The Future DoS attacks are likely to trouble the Internet for the foreseeable future. These attacks, much like urban graffiti, are perpetrated by anonymous attackers and require a substantial investment to defend against, possibly requiring a fundamental change in the Internet’s protocols. Although several DoS attacks have succeeded in bringing down websites of well-known businesses, most attacks are not as wildly successful, nor have all businesses that have been victimized reported attacks for fear of publicizing exactly how weak their computing infrastructure is.
We must wait to see whether DoS attacks will further threaten the Internet, provoking the acceptance of radical defense proposals, or will simply fade into the background and become accepted as a regular aspect of the Internet.

Adrian Perrig and Abraham Yaar

See also Security; Spamming

FURTHER READING

Aura, T., Nikander, P., & Leiwo, J. (2000). DoS-resistant authentication with client puzzles. Security Protocols—8th International Workshop.
Gligor, V. D. (1983). A note on the denial of service problem. Proceedings of 1983 Symposium on Security and Privacy (pp. 139–149).
Gligor, V. D. (1986). On denial of service in computer networks. Proceedings of International Conference on Data Engineering (pp. 608–617).
Gligor, V. D. (2003). Guaranteeing access in spite of service-flooding attacks. Proceedings of the Security Protocols Workshop.
Savage, S., Wetherall, D., Karlin, A., & Anderson, T. (2000). Practical network support for IP traceback. Proceedings of ACM SIGCOMM 2000 (pp. 295–306).
Wang, X., & Reiter, M. K. (2003). Defending against denial-of-service attacks with puzzle auctions. Proceedings of the 2003 IEEE Symposium on Security and Privacy (pp. 78–92).
Yaar, A., Perrig, A., & Song, D. (2003). Pi: A path identification mechanism to defend against DDoS attacks. IEEE Symposium on Security and Privacy (pp. 93–107).
DESKTOP METAPHOR

The desktop metaphor is at work when the interface of an interactive software system is designed so that its objects and actions resemble objects and actions in a traditional office environment. For example, an operating system designed using the desktop metaphor represents directories as labeled folders and text documents as files. In graphical user interfaces (GUIs), the bitmap display and pointing devices such as a mouse, a trackball, or a light pen are used to create the metaphor: The bitmap display presents a virtual desk, where documents can be created, stored, retrieved, reviewed, edited, and discarded. Files, folders, the trash can (or recycle bin), and so
forth are represented on the virtual desktop by graphical symbols called icons. Users manipulate these icons using the pointing devices. With pointing devices, the user can select, open, move, or delete the files or folders represented by icons on the desktop. Users can retrieve information and read it on the desktop just as they would read actual paper documents at a physical desk. The electronic document files can be stored and organized in electronic folders just as physical documents are saved and managed in folders in physical file cabinets. Some of the accessories one finds in an office are also present on the virtual desktop; these include the trash can, a clipboard, a calendar, a calculator, a clock, a notepad, telecommunication tools, and so on. The metaphor of the window is used for the graphical boxes that let users look into information in the computer. Multiple windows can be open on the desktop at once, allowing workers to alternate quickly between multiple computer applications (for example, a worker may have a word processing application, a spreadsheet application, and an Internet browser open simultaneously, each in its own window). Computer users can execute, hold, and resume their tasks through multiple windows.
BITMAP An array of pixels, in a data file or structure, that corresponds bit for bit with an image.
Beginning in the late 1970s, as personal computers and workstations became popular among knowledge workers (people whose work involves developing and using knowledge—engineers, researchers, and teachers, for example), the usability of the computers and the productivity of those using them became important issues. The desktop metaphor was invented in order to make computers more usable, with the understanding that more-usable computers would increase users’ productivity. The desktop metaphor enabled users to work with computers in a more familiar, more comfortable manner and to spend less time learning how to use them. The invention of the desktop metaphor greatly enhanced the quality of human-computer interaction.
Historical Overview

In the 1960s and 1970s, several innovative concepts in the area of HCI were originated and implemented using interactive time-shared computers, graphics screens, and pointing devices.

Sketchpad

Sketchpad was a pioneering achievement that opened the field of interactive computer graphics. In 1963, Ivan Sutherland used a light pen to create engineering drawings directly on the computer screen for his Ph.D. thesis, Sketchpad: A Man-Machine Graphical Communications System. His thesis initiated a totally new way to use computers. Sketchpad was executed on the Lincoln TX-2 computer at MIT. A light pen and a bank of switches were the user interface for this first interactive computer graphics system. Sketchpad also pioneered new concepts of memory structures for storing graphical objects, rubber-banding of lines (stretching lines as long as a user wants) on the screen, the ability to zoom in and out on the screen, and the ability to make perfect lines, corners, and joints.

NLS

The scientist and inventor Doug Engelbart and his colleagues at the Stanford Research Institute (SRI) introduced the oN-Line System (NLS) to the public in 1968. They also invented the computer mouse. The NLS was equipped with a mouse and a pointer cursor for the first time; it also was the first system to make use of hypertext. Among other features, the system provided multiple windows, an online context-sensitive help system, outline editors for idea development, two-way video conferencing with shared workspace, word processing, and e-mail.

Smalltalk

In the early 1970s, Alan Kay and his colleagues at Xerox's Palo Alto Research Center (PARC) invented an object-oriented programming language called Smalltalk. It was the first integrated programming environment, and its user interface was designed using the desktop metaphor. It was designed not only for expert programmers of complex software, but also for novice users, including children: Its
designers intended it to be an environment in which users learned by doing.

In 1981 Xerox PARC integrated the innovations in the fields of human-computer symbiosis, personal computing, object-oriented programming languages, and local-area networks and arrived at the Xerox 8010 Star information system. It was the first commercial computer system that implemented the desktop metaphor, using a mouse, a bitmap display, and a GUI. In interacting with the system, users made use of windows, icons, menus, and pointing devices (WIMP). Most of the workstations and personal computers that were developed subsequently, including Apple Computer's Lisa (1983) and Macintosh (1984) and Microsoft's Windows (1985), were inspired by Star; like the Star system, they too adopted the desktop metaphor.

Apple's Lisa was designed to be a high-quality, easy-to-use computer for knowledge workers such as secretaries, managers, and professionals in general office environments. Its design goals were:

User friendliness: The developers of the Lisa wanted users to use the computer not only because doing so was part of their job, but also because it was fun to use. The users were expected to feel comfortable because the user interface resembled their working environment.

Standard method of interaction: A user was provided with a consistent look and feel in the system and all applications, which meant that learning time could be dramatically decreased and training costs lowered.

Gradual and intuitive learning: A user should be able to complete important tasks easily with minimal training. The user should not be concerned with more sophisticated features until they are necessary. Interaction with the computer should be intuitive; that is, the user should be able to figure out what he or she needs to do.

Error protection: A user should be protected from obvious errors. For example, Lisa allowed users to choose from a collection of possible operations that were proper for the occasion and the object. By limiting the choices, fatal errors and obvious errors could be avoided. Any error from a user should be processed in a helpful manner by generating a warning message or providing a way of recovering from the error.
Personalized interaction: A user could set up attributes of the system in order to customize the interaction with the system. The personalized interaction did not interfere with the standard interaction methods.

Multiple tasks: Because workers in office environments perform many tasks simultaneously and are often interrupted in their work, Lisa was designed to be able to hold the current work while users attended to those interruptions and to other business. The idea was that the user should be able to switch from one task to another freely and instantly.

Apple's Lisa provided knowledge workers with a virtual desktop environment complete with manipulable documents, file folders, calculators, electronic paper clips, wastebasket, and other handy tools. The documents and other office-based objects were represented by naturalistic icons. The actions defined for the icons, such as selecting, activating, moving, and copying, were implemented by means of mouse operations such as clicking, double-clicking, and dragging. Lisa users did not have to memorize commands such as "delete" ("del"), "remove" ("rm"), or "erase" in order to interact with the system.

The first version of Microsoft Windows was introduced in 1985. It provided an interactive software environment that used a bitmap display and a mouse. The product included a set of desktop applications, including a calendar, a card file, a notepad, a calculator, a clock, and telecommunications programs. In 1990 Windows 3.0 was introduced, the first real GUI-based system running on IBM-compatible PCs. It became widely popular. The Windows operating system evolved through many more incarnations, and in the early 2000s was the most popular operating system in the world.
Research Directions

The desktop metaphor, implemented through a graphical user interface, has been the dominant metaphor for human-computer interfaces since the 1980s. What will happen to the human-computer interaction paradigm in the future? Will the desktop metaphor continue to dominate? It is extremely difficult to predict the future in the computer world. However, there are several pioneering researchers
exploring new interaction paradigms that could replace the desktop-based GUI.
OPERATING SYSTEM Software (e.g., Windows 98, UNIX, or DOS) that enables a computer to accept input and produce output to peripheral devices such as disk drives and printers.
A Tangible User Interface

At present, interactions between humans and computers are confined to a display, a keyboard, and a pointing device. The tangible user interface (TUI) proposed by Hiroshi Ishii and Brygg Ullmer at MIT's Media Lab in 1997 bridges the space between humans and computers in the opposite direction. The user interface of a TUI-based system can be embodied in a real desk and other real objects in an office environment. Real office objects such as actual papers and pens could become meaningful objects for the user interface of the system. Real actions on real objects can be recognized and interpreted as operations applied to the objects in the computer world, so that, for example, putting a piece of paper in a wastebasket could signal the computer to delete a document. This project attempts to bridge the gap between the computer world and the physical office environment by making digital information tangible. While the desktop metaphor provides the users with a virtual office environment, in a TUI the physical office environment, including the real desktop, becomes the user interface. Ishii and Ullmer designed and implemented a prototype TUI called metaDESK for Tangible Geospace, a physical model of landmarks such as the MIT campus. The metaDESK was embodied in real-world objects and regarded as a counterpart of the virtual desktop. The windows, icons, and other graphical objects in the virtual desktop corresponded to physical objects such as activeLENS (a physically embodied window), phicon (a physically embodied icon—in this case, models of MIT buildings such as the Great Dome and the Media Lab building), and so forth. In the prototype system, the activeLENS was equivalent to a window of the virtual desktop and was used in navigating and examining the three-dimensional
views of the MIT geographical model. Users could physically control the phicons by grasping and placing them so that a two-dimensional map of the MIT campus appears on the desk surface beneath the phicons. The locations of the Dome and the Media Lab buildings on the map should match with the physical locations of the phicons on the desk.

Ubiquitous Computing

In 1988 Mark Weiser at Xerox PARC introduced a computing paradigm called ubiquitous computing. The main idea was to enable users to access computing services wherever they might go and whenever they might need them. Another requirement was that the computers be invisible to the users, so the users would not be conscious of them. The users do what they normally do and the computers in the background recognize the intention of the users and provide the best services for them. This means that users do not have to learn how to operate computers, how to type on a keyboard, or how to access the Internet. Therefore, the paradigm requires that new types of computing services and computer systems be created. New technologies such as context awareness, sensors, and intelligent distributed processing are required. Their interaction methods must be based on diverse technologies such as face recognition, character recognition, gesture recognition, and voice recognition.
OPEN-SOURCE SOFTWARE Open-source software permits sharing of a program’s original source code with users, so that the software can be modified and redistributed to other users.
As new computing services and technologies are introduced, new types of computing environments and new interaction paradigms will emerge. The desktop metaphor will also evolve to keep pace with technological advances. However, the design goals of the user interfaces will not change much. They should be designed to make users more comfortable, more effective, and more productive in using their computers.

Jee-In Kim
See also Alto; Augmented Reality; Graphical User Interface

FURTHER READING

Goldberg, A. (1984). Smalltalk-80: The interactive programming environment. Reading, MA: Addison-Wesley.
Ishii, H., & Ullmer, B. (1997). Tangible bits: Towards seamless interfaces between people, bits and atoms. In Proceedings of CHI '97 (pp. 234–241). New York: ACM Press.
Kay, A. (1993). The early history of Smalltalk. ACM SIGPLAN Notices, 28(3), 69–95.
Kay, A., & Goldberg, A. (1977). Personal dynamic media. IEEE Computer, 10(3), 31–42.
Myers, B., Ioannidis, Y., Hollan, J., Cruz, I., Bryson, S., Bulterman, D., et al. (1996). Strategic directions in human computer interaction. ACM Computing Survey, 28(4), 794–809.
Perkins, R., Keller, D., & Ludolph, F. (1997). Inventing the Lisa user interface. Interactions, 4(1), 40–53.
Shneiderman, B. (1998). Designing the user interface: Strategies for effective human-computer interaction (3rd ed.). Reading, MA: Addison-Wesley.
Weiser, M. (1991). The computer for the 21st century. Scientific American, 256(3), 94–104.
DIALOG SYSTEMS

Speech dialog systems are dialog systems that use speech recognition and speech generation to allow a human being to converse with a computer, usually to perform some well-defined task such as making travel reservations over the telephone. A dialog is a two-way interaction between two agents that communicate. Dialogs are incremental and can be adapted dynamically to improve the effectiveness of the communication. While people communicate efficiently and effectively using dialog, computers do not typically engage in dialogs with people. More common are presentation systems, which are concerned with the effective presentation of a fixed content, subject to a limited number of constraints. Unlike dialogs, presentations are planned and displayed in their entirety (without intermediate feedback from the user) and thus do not allow the system to monitor the effectiveness of the presentation or allow the user to interrupt and request clarification. Dialog systems have been less common than presentation systems because they
are more resource intensive; however, despite the extra costs, the need to reach a broader community of users and the desire to create systems that can perform tasks that require collaboration with users have led to a shift toward more dialog-based systems. Speech is a good modality for remote database access systems, such as telephone information services, which would otherwise require a human operator or a tedious sequence of telephone keystrokes. Spoken interaction is also useful when a user's hands are busy with other tasks, such as operating mechanical controls, or for tasks for which the sound of the user's speech is important, such as tutoring speakers in oral reading. Speech interaction can also make computers more accessible to people with vision impairments. Speech dialog systems have been used for tutoring in oral reading; for providing information about public transportation, train schedules, hotels, and sightseeing; for making restaurant and real estate recommendations; for helping people diagnose failures in electronic circuits; and for making travel reservations. Spoken dialog is most successful when the scope of the task is well-defined and narrow, such as providing airline reservations or train schedules, because the task creates expectations of what people will say—and the more limited the scope, the more limited the expectations. These expectations are needed for the system to interpret what has been said; in most cases the same group of speech sounds will have several different possible interpretations, but the task for which the dialog system is used makes one of the interpretations by far the most likely.
The Architecture of Speech Dialog Systems

Speech dialog systems include the following components or processes: speech recognition, natural-language parsing, dialog management, natural-language generation, and speech synthesis. There is also an application or database that provides the core functionality of the system (such as booking a travel reservation) and a user interface to transmit inputs from the microphone or telephone to the speech-recognition component.
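The division of labor among these components can be sketched in a few lines of code. The function names, the sample utterance, and the slot names below are invented for illustration only; they do not correspond to any particular system or toolkit.

```python
# A minimal, hypothetical sketch of the component chain described above.
# All names and sample data are placeholders, not a real toolkit's API.

def recognize_speech(audio: bytes) -> str:
    """Speech recognition: map the audio signal onto words."""
    return "i want to fly from boston to denver on friday"

def parse(words: str) -> dict:
    """Natural-language parsing: map words onto a slot-and-filler request."""
    return {"intent": "book_flight", "origin": "boston",
            "destination": "denver", "date": "friday"}

def lookup_flights(request: dict) -> list:
    """Stand-in for the application or database behind the dialog system."""
    return [{"flight": "XA123", "departs": "08:10"}]

def manage_dialog(request: dict) -> dict:
    """Dialog management: ask for a missing slot or answer from the database."""
    for slot in ("origin", "destination", "date"):
        if slot not in request:
            return {"act": "request", "slot": slot}
    return {"act": "inform", "flights": lookup_flights(request)}

def generate_text(act: dict) -> str:
    """Natural-language generation: turn the system's decision into words."""
    if act["act"] == "request":
        return f"What {act['slot']} would you like?"
    return f"I found {len(act['flights'])} flight(s)."

def synthesize(text: str) -> bytes:
    """Speech synthesis: convert the response text back to audio."""
    return text.encode("utf-8")  # placeholder for a real synthesizer

audio_in = b"..."
print(generate_text(manage_dialog(parse(recognize_speech(audio_in)))))
```

In a deployed system each of these steps is far more elaborate, as the following sections describe.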
Speech Recognition

Understanding speech involves taking the sound input and mapping it onto a command, request, or statement of fact to which the application can respond. Speech recognition is the first step: mapping the audio signal onto words in the target language. Early approaches, such as HEARSAY II and HARPY, which were developed in the 1970s, were based on rule-based artificial intelligence and were not very successful. Current approaches are based on statistical models of language that select the most probable interpretation of each sound unit given the immediately preceding or following ones and the context of the task (which determines the vocabulary). For spoken-dialog systems, the level of speech recognition quality required is known as telephone-quality, spontaneous speech (TQSS). This level of recognition is necessary if spoken-dialog applications such as reservation services are to succeed over a telephone line. TQSS is more difficult to understand than face-to-face speech because over the telephone the audio signal normally includes background and channel noise, acoustical echo and channel variations, and degradation due to bandwidth constraints. Moreover, spontaneous speech includes pauses, disfluencies (such as repetitions and incomplete or ill-formed sentences), pronunciation variations due to dialects, as well as context-dependent formulations and interruptions or overlapping speech (known as “barge-in”). One way to make speech recognition more accurate is to limit the vocabulary that the system allows. To determine a sublanguage that will be sufficiently expressive to allow people to use the application effectively and comfortably, two techniques are generally used: One approach is to observe or stage examples of two people engaged in the domain task; the other is to construct a simulated man-machine dialog (known as a Wizard of Oz [WOZ] simulation) in which users try to solve the domain task. In a WOZ simulation, users are led to believe they are communicating with a functioning system, while in reality the output is generated by a person who simulates the intended functionality of the system. This approach allows the designers to see what language people will use in response to the limited vocabulary of the proposed system and
its style of dialog interaction. In either case, a vocabulary and a language model can be obtained that may require only a few thousand (and possibly only a few hundred) words.

Natural-Language Parsing

Natural-language parsing maps the sequence of words produced by the speech recognizer onto commands, queries, or propositions that will be meaningful to the application. There are a variety of approaches to parsing; some try to identify general-purpose linguistic patterns, following a so-called syntactic grammar, while others look for patterns that are specific to the domain, following a semantic grammar. Some systems use simpler approaches, such as word spotting, pattern matching, or phrase spotting. The output of the parser will typically be a slot-and-filler structure called a case frame, in which phrases in the input are mapped to slots corresponding to functions or parameters of the application. A key requirement of parsing for spoken dialog is that the parser be able to handle utterances that do not form a complete sentence or that contain the occasional grammatical mistake. Such parsers are termed robust. Syntactic and semantic parsers work best when the input is well formed structurally. Simpler methods such as pattern matching and phrase spotting can be more flexible about structural ill-formedness, but may miss important syntactic variations such as negations, passives, and topicalizations. Also, the simpler approaches have little information about how to choose between two different close matches. To be useful in practice, the parser must also be fast enough to work in real time, which is usually possible only if the analysis is expectation driven. By the late 1990s, most spoken-dialog systems still focused on getting the key components to work together; only a few had achieved real-time interpretation and generation of speech.

Dialog Management

Dialog management involves interpreting the representations created by the natural-language parser and deciding what action to take. This process is often the central one and drives the rest of the
system. Dialog management may involve following a fixed pattern of action defined by a grammar (for example, answering a question), or it may involve reasoning about the users’ or the system’s current knowledge and goals to determine the most appropriate next step. In the second case, dialog management may also keep track of the possible goals of the users and their strategies (plans) for achieving them. It may also try to identify and resolve breakdowns in communication caused by lack of understanding, misunderstanding, or disagreement. One factor that distinguishes dialog managers is the distribution of control between the system and the user. This has been referred to as initiative, or the mode of communication, with the mode being considered from the perspective of the computer system. When the computer has complete control, it is responsible for issuing queries to the user, collecting answers, and formulating a response. This has been called directive mode. At the opposite extreme, some systems allow the user to have complete control, telling the system what the user wants to do and asking the system to provide answers to specific queries. This is known as passive mode. In the middle are systems that share initiative with the user. The system may begin by issuing a query to the user (or receiving a query from the user), but control may shift if either party wishes to request clarification or to obtain information needed for a response. Control may also shift if one party identifies a possible breakdown in communication or if one party disagrees with information provided by the other. Dialogs have been shown to be more efficient if control can shift to the party with the most information about the current state of the task.

Natural-Language Generation

Natural-language generation is used to generate answers to the user’s queries or to formulate queries for the user in order to obtain the information needed to perform a given task. Natural-language generation involves three core tasks: content selection (deciding what to say), sentence planning (deciding how to organize what to say into units), and realization (mapping the planned response onto a grammatically correct sequence of words).
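A toy example can make the three generation tasks concrete. The train-schedule domain, the template, and the function names below are invented for illustration; template-based realization, one common shortcut, is discussed in the next paragraph.

```python
# A toy sketch of the three generation steps for a train-schedule answer.
# The domain, templates, and data are invented for illustration only.

def select_content(trains, origin, destination):
    """Content selection: decide what to say (here, the next departure)."""
    return min(trains, key=lambda t: t["departs"])

def plan_sentence(train, origin, destination):
    """Sentence planning: organize the chosen content into a message."""
    return {"template": "next_train", "origin": origin,
            "destination": destination, "time": train["departs"]}

TEMPLATES = {
    "next_train": "The next train from {origin} to {destination} leaves at {time}."
}

def realize(message):
    """Realization: map the planned message onto a word sequence,
    here by filling a fixed template rather than searching a grammar."""
    return TEMPLATES[message["template"]].format(**message)

trains = [{"departs": "14:05"}, {"departs": "15:40"}]
msg = plan_sentence(select_content(trains, "Utrecht", "Amsterdam"),
                    "Utrecht", "Amsterdam")
print(realize(msg))  # The next train from Utrecht to Amsterdam leaves at 14:05.
```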
Historically, natural-language generation components have not run in real time, with the realization component being an important bottleneck. These systems can be slow if they follow an approach that is essentially the inverse of parsing—taking a structural description of a sentence, searching for grammar rules that match the description, and then applying each of the rules to produce a sequence of words. As a result, many spoken-dialog systems have relied on preformulated answers (canned text). More recently, real-time approaches to text generation have been developed that make use of fixed patterns or templates that an application can select, thereby bypassing the need to search within the generation grammar.

Speech Synthesis

Speech synthesis allows the computer to respond to the user in spoken language. This may involve selecting and concatenating pieces of prerecorded speech or generating speech two sounds at a time, a method known as diphone-based synthesis. (A diphone is a pair of sounds.) Databases of utterances to be prerecorded for a domain can be determined by analyzing the utterances produced by a human performing the same task as the information system and then selecting the most frequent utterances. Diphone-based synthesis also requires a database of prerecorded sound; however, instead of complete utterances, the database contains a set of nonsense words that together cover all phone-to-phone transitions in the target output language. When the synthesizer needs to generate a pair of sounds, it selects a word that contains that sound pair (diphone) and uses the corresponding portion of the recording. Although these basic components of speech dialog systems can be combined in a number of ways, there are three general approaches: pipelined architectures, agent-based architectures, and hub-and-spoke architectures. In a pipelined architecture, each component in the sequence processes its input and initiates the next component in the sequence. Thus, the audio interface would call the speech recognizer, which would call the natural-language parser, and so on, until the speech synthesis
component is executed. In an agent-based approach, a centralized component (typically the dialog manager) initiates individual components and determines what parameters to provide them. This may involve some reasoning about the results provided by the components. In a hub-and-spoke architecture there is a simple centralized component (the hub) that brokers communication among the other components but performs no reasoning. Since 1994, a hub-and-spoke architecture called Galaxy Communicator has been under development. It has been proposed as a standard reference architecture that will allow software developers to combine “plug-and-play”-style components from a variety of research groups or commercial vendors. The Galaxy Communicator effort also includes an open-source software infrastructure.
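The brokering role of the hub can be sketched as follows. This is only a schematic illustration of the hub-and-spoke idea; the class and the message format are invented and are not the Galaxy Communicator API.

```python
# A schematic hub that routes messages among registered components.
# Illustrative only; this is not the Galaxy Communicator interface.

class Hub:
    def __init__(self):
        self.components = {}   # component name -> handler(message) -> message
        self.routes = {}       # message type -> component name

    def register(self, name, handler, accepts):
        self.components[name] = handler
        for message_type in accepts:
            self.routes[message_type] = name

    def dispatch(self, message):
        """Broker a message to whichever component handles its type.
        The hub itself performs no reasoning about the content."""
        target = self.routes[message["type"]]
        return self.components[target](message)

hub = Hub()
hub.register("recognizer", lambda m: {"type": "words", "text": "hello"},
             accepts=["audio"])
hub.register("parser", lambda m: {"type": "frame", "intent": "greet"},
             accepts=["words"])

reply = hub.dispatch(hub.dispatch({"type": "audio", "data": b"..."}))
print(reply)  # {'type': 'frame', 'intent': 'greet'}
```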
Dialog System Toolkits

Creating speech dialog systems is a major undertaking because of the number and complexity of the components involved. This difficulty is mitigated by the availability of a number of software toolkits that include many, if not all, of the components needed to create a new spoken-dialog system. Currently such toolkits are available both from commercial vendors (such as IBM, which markets ViaVoice toolkits for speech recognition and speech synthesis) and from academic institutions. Academic institutions generally distribute their software free for noncommercial use, but sell licenses for commercial applications. Below, we consider a few of the major (academically available) speech dialog toolkits. In addition to these toolkits, a number of institutions distribute individual components useful in building speech dialog systems. For example, the Festival Speech Synthesis System developed by the University of Edinburgh has been used in a number of applications. The Communicator Spoken Dialog Toolkit, developed by researchers at Carnegie Mellon University, is an open-source toolkit that provides a complete set of software components for building and deploying spoken-language dialog systems for both desktop and telephone applications. It is built on top of the Galaxy Communicator software
infrastructure and is distributed with a working implementation for a travel-planning domain. It can be downloaded freely from the Internet. The group also maintains a telephone number connected to its telephone-based travel-planning system that anyone can try. The Center for Spoken Language Research (CSLR) at the University of Colorado in Boulder distributes the Conversational Agent Toolkit. This toolkit includes modules that provide most of the functionality needed to build a spoken-dialog system, although code must be written for the application itself. As a model, CSLR distributes its toolkit with a sample (open-source) application for the travel domain that can be used as a template; it is based on the Galaxy Communicator hub architecture. TRINDIKIT is a toolkit for building and experimenting with dialog move engines, which are mechanisms for updating what a dialog system knows on the basis of dialog moves (single communicative actions, such as giving positive feedback) and information states (the information stored by the dialog system). It was developed in the TRINDI and SIRIDUS projects, two European research projects that investigate human-machine communication using natural language. TRINDIKIT specifies formats for defining information states, rules for updating the information state, types of dialog moves, and associated algorithms.
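The information-state idea behind TRINDIKIT can be illustrated with a small update loop. The state fields and the rule bodies below are invented for illustration and do not follow TRINDIKIT's actual formats.

```python
# A simplified information-state update loop in the style described above;
# the rule and state formats are invented and are not TRINDIKIT syntax.

info_state = {"shared_beliefs": set(), "pending_questions": []}

def update(state, move):
    """Apply a dialog move (a single communicative action) to the
    information state (what the dialog system currently knows)."""
    kind, content = move
    if kind == "ask":
        state["pending_questions"].append(content)
    elif kind == "answer" and state["pending_questions"]:
        question = state["pending_questions"].pop()
        state["shared_beliefs"].add((question, content))
    elif kind == "positive_feedback":
        pass  # acknowledge; nothing new is added to the state
    return state

for move in [("ask", "destination?"), ("answer", "Paris"),
             ("positive_feedback", None)]:
    info_state = update(info_state, move)

print(info_state["shared_beliefs"])  # {('destination?', 'Paris')}
```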
The Evaluation of Speech Dialog Systems

Prior to 1990, methods of evaluating speech dialog systems concentrated on the number of words that the speech recognizer identified correctly. In the early 1990s, there was a shift to looking at the quality of the responses provided by spoken-dialog systems. For example, in 1991, the U.S. Defense Advanced Research Projects Agency community introduced a metric that evaluates systems based on the number of correct and incorrect answers given by the system. Systems are rewarded for correct answers and penalized for bad answers, normalized by the total number of answers given. (The effect is that it is better to give a nonanswer such as “I do not understand”
or “please rephrase your request” than to give an incorrect answer.) This approach relies on the existence of a test database with a number of sample sentences from the domain along with the correct answer, as well as a set of answers from the system to be evaluated. Starting in the late 1990s, approaches to evaluating dialog success have looked at other measures, such as task-completion rates and user satisfaction (as determined by subjective questionnaires). Subjective factors include perceived system-response accuracy, likeability, cognitive demand (how much effort is needed to understand the system), habitability (how comfortable or natural the system is to use), and speed. There has also been success in predicting user satisfaction or task completion on the basis of objectively observable features of the dialog, such as task duration, the number of system words per turn, the number of user words per turn, the number of overlapping turns, sentence error rates, and perceived task completion. Statistical methods such as multiple regression models and classification trees are then used to predict user satisfaction and task-completion scores.
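The reward-and-penalty scheme described above amounts to a simple ratio. The function below is an illustrative reconstruction of that idea, not the exact DARPA formula.

```python
# Illustrative scoring in the spirit of the reward/penalty metric described
# above (not the exact DARPA specification): correct answers add to the
# score, incorrect answers subtract from it, and nonanswers such as
# "please rephrase your request" count as neither.

def dialog_score(correct: int, incorrect: int, no_answer: int) -> float:
    total = correct + incorrect + no_answer
    return (correct - incorrect) / total if total else 0.0

# Declining to answer scores better than answering incorrectly:
print(dialog_score(correct=70, incorrect=20, no_answer=10))  # 0.5
print(dialog_score(correct=70, incorrect=30, no_answer=0))   # 0.4
```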
The Research Community for Speech Dialog Systems

Research on speech dialog systems is interdisciplinary, bringing together work in computer science, engineering, linguistics, and psychology. There are a number of journals, conferences, and workshops through which researchers and developers of spoken-dialog systems disseminate their work. Important journals include Computer Speech and Language and Natural Language Engineering. Conferences most focused on such systems include Eurospeech and Interspeech (the International Conference on Spoken Language Processing). In addition, the Special Interest Group on Discourse and Dialog (SIGdial) organizes an annual workshop. SIGdial is a Special Interest Group (SIG) of both the Association for Computational Linguistics and the International Speech Communication Association (ISCA). SIGdial is an international, nonprofit cooperative organization that includes researchers from academia, industry,
and government. Among its goals are promoting, developing, and distributing reusable discourse-processing components; encouraging empirical methods in research; sharing resources and data among the international community; exploring techniques for evaluating dialog systems; promoting standards for discourse transcription, segmentation, and annotation; facilitating collaboration between developers of various system components; and encouraging student participation in the discourse and dialog community.

Susan W. McRoy

See also Natural-Language Processing, Open Source Software, Speech Recognition, Speech Synthesis
FURTHER READING

Allen, J. F., Schubert, L. K., Ferguson, G., Heeman, P., Hwang, C. H., Kato, T., et al. (1995). The TRAINS project: A case study in building a conversational planning agent. Journal of Experimental and Theoretical Artificial Intelligence, 7, 7–48.
Bernsen, N. O., Dybkjaer, H., & Dybkjaer, L. (1998). Designing interactive speech systems: From first ideas to user testing. New York: Springer Verlag.
Fraser, N. (1997). Assessment of interactive systems. In D. Gibbon, R. Moore, & R. Winski (Eds.), Handbook of standards and resources for spoken language systems (pp. 564–614). New York: Mouton de Gruyter.
Grosz, B. J., & Sidner, C. (1986). Attention, intention, and the structure of discourse. Computational Linguistics, 12(3), 175–204.
Haller, S., Kobsa, A., & McRoy, S. (Eds.). (1999). Computational models for mixed-initiative interaction. Dordrecht, Netherlands: Kluwer Academic Press.
Huang, X. D., Alleva, F., Hon, H. W., Hwang, M. Y., Lee, K. F., & Rosenfeld, R. (1993). The Sphinx II Speech Recognition System: An overview. Computer Speech and Language, 7(9), 137–148.
Jurafsky, D., & Martin, J. (2000). Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition. Upper Saddle River, NJ: Prentice-Hall.
Larsson, S., & Traum, D. (2000). Information state and dialogue management in the TRINDI Dialogue Move Engine Toolkit [Special issue on best practice in spoken dialogue systems]. Natural Language Engineering, 6(3–4), 323–340.
Luperfoy, S. (Ed.). (1998). Automated spoken dialog systems. Cambridge, MA: MIT Press.
McRoy, S. W. (Ed.). (1998). Detecting, repairing, and preventing human-machine miscommunication [Special issue]. International Journal of Human-Computer Studies, 48(5).
McRoy, S. W., Channarukul, S., & Ali, S. S. (2001). Creating natural language output for real-time applications. Intelligence: New Visions of AI in Practice, 12(2), 21–34.
McRoy, S. W., Channarukul, S., & Ali, S. S. (2003). An augmented template-based approach to text realization. Natural Language Engineering, 9(2), 1–40.
Minker, W., Bühler, D., & Dybkjær, L. (2004). Spoken multimodal human-computer dialog in mobile environments. Dordrecht, Netherlands: Kluwer.
Mostow, J., Roth, S. F., Hauptmann, A., & Kane, M. (1994). A prototype reading coach that listens. In Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI-94) (pp. 785–792). Seattle, WA: AAAI Press.
Pellom, B., Ward, W., Hansen, J., Hacioglu, K., Zhang, J., Yu, X., & Pradhan, S. (2001, March). University of Colorado dialog systems for travel and navigation. Paper presented at the Human Language Technology Conference (HLT-2001), San Diego, CA.
Roe, D. B., & Wilpon, J. G. (Eds.). (1995). Voice communication between humans and machines. Washington, D.C.: National Academy Press.
Seneff, S., Hurley, E., Lau, R., Pau, C., Schmid, P., & Zue, V. (1998). Galaxy II: A reference architecture for conversational system development. Proceedings of the 5th International Conference on Spoken Language Processing, 931–934.
Smith, R., & Hipp, D. R. (1995). Spoken natural language dialog systems: A practical approach. New York: Oxford University Press.
Smith, R., & van Kuppevelt, J. (Eds.). (2003). Current and new directions in discourse and dialogue. Dordrecht, Netherlands: Kluwer.
van Kuppevelt, J., Heid, U., & Kamp, H. (Eds.). (2000). Best practice in spoken dialog systems [Special issue]. Natural Language Engineering, 6(3–4).
Walker, M., Litman, D., Kamm, C., & Abella, A. (1998). Evaluating spoken dialogue agents with PARADISE: Two case studies. Computer Speech and Language, 12(3), 317–347.
Walker, M. A., Kamm, C. A., & Litman, D. J. (2000). Towards developing general models of usability with PARADISE [Special issue on best practice in spoken dialogue systems]. Natural Language Engineering, 6(3–4).
Wilks, Y. (Ed.). (1999). Machine conversations. Dordrecht, Netherlands: Kluwer.
DIGITAL CASH

The use of digital cash has increased in parallel with the use of electronic commerce; as we purchase items online, we need to have ways to pay for them electronically. Many systems of electronic payment exist.
Types of Money

Most systems of handling money fall into one of two categories:
1. Token-based systems store funds as tokens that can be exchanged between parties. Traditional currency falls in this category, as do many types of stored-value payment systems, such as subway fare cards, bridge and highway toll systems in large metropolitan areas (e.g., FastPass, EasyPass), and electronic postage meters. These systems store value in the form of tokens, either a physical token, such as a dollar bill, or an electronic register value, such as is stored by a subway fare card. During an exchange, if the full value of a token is not used, then the remainder is returned (analogous to change in a currency transaction)—either as a set of smaller tokens or as a decremented register value. Generally, if tokens are lost (e.g., if one’s wallet is stolen or one loses a subway card), the tokens cannot be recovered.

2. Account-based systems charge transactions to an account. Either the account number or a reference to the account is used to make payment. Examples include checking accounts, credit card accounts, and telephone calling cards. In some instances, the account is initially funded and then spent down (e.g., checking accounts); in other instances, debt is increased and periodically must be paid (e.g., credit cards). In most account-based systems, funds (or debt) are recorded by a trusted third party, such as a bank. The account can be turned off or renumbered if the account number is lost.

The more complex an electronic payment system is, the less likely consumers are to use it. (As an example, a rule of thumb is that merchants offering “one-click ordering” for online purchases enjoy twice the order rate of merchants requiring that payment data be repeatedly entered with each purchase.)
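The two categories can be contrasted with a schematic sketch; the classes below are illustrative and do not model any real payment system.

```python
# A schematic contrast between the two categories described above.
# Illustrative only; not a model of any real payment product.

class StoredValueCard:
    """Token-based: value lives in the card's register; if the card is
    lost, the remaining value is generally lost with it."""
    def __init__(self, balance_cents: int):
        self.balance_cents = balance_cents

    def pay(self, amount_cents: int) -> None:
        if amount_cents > self.balance_cents:
            raise ValueError("insufficient stored value")
        self.balance_cents -= amount_cents  # decremented register value

class BankAccount:
    """Account-based: funds (or debt) are recorded by a trusted third
    party, so the account can be frozen or renumbered if compromised."""
    def __init__(self, number: str, balance_cents: int):
        self.number = number
        self.balance_cents = balance_cents
        self.frozen = False

    def charge(self, amount_cents: int) -> None:
        if self.frozen:
            raise RuntimeError("account is frozen")
        self.balance_cents -= amount_cents  # may go negative (debt)

    def reissue(self, new_number: str) -> None:
        self.number = new_number  # possible because a third party keeps the record
```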
Electronic Payment Using Credit Cards

The most common form of electronic payment on the Internet today is credit card payment. Credit cards are account based. They are issued by financial institutions to consumers and in some cases to
organizations. A consumer presents the credit card number to a merchant to pay for a transaction. On the World Wide Web credit card account numbers are typically encrypted using the Secure Socket Layer (SSL) protocol built into most Web browsers. The merchant often attempts to verify the card holder by performing address verification (checking numbers appearing in an address) or by using a special verification code (typically printed on the reverse side of the credit card). In the United States credit card users typically enjoy strong rights and can reverse fraudulent transactions. Although the SSL protocol (in typical configurations) provides strong encryption preventing third parties from observing the transaction, risks still exist for the credit card holder. Many merchants apply inadequate security to their database of purchases, and attackers have gained access to large numbers of credit cards stored online. Moreover, some merchants charge incorrect amounts (or charge multiple times) for credit card transactions. Although fraudulent transactions are generally reversible for U.S. residents, time and effort are required to check and amend such transactions. In some instances, criminals engage in identity theft to apply for additional credit by using the identity of the victim. To reduce these risks, some experts have proposed a system that uses third parties (such as the bank that issued the card) to perform credit card transactions. A notable example of this type of system is Verified by Visa. However, the additional work required to configure the system has deterred some consumers, and as a result Verified by Visa and similar systems remain largely unused. The most elaborate of these systems was the Secure Electronic Transactions (SET) protocol proposed by MasterCard International and Visa International; however, the complexity of SET led to its being abandoned. Credit card systems are usually funded by a fee charged to the merchant. Although rates vary, typical fees are fifty cents plus 2 percent of the purchase amount.
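A quick calculation with the fee schedule just cited shows why, as discussed under micropayments below, conventional credit card processing breaks down for very small purchases. The purchase amounts are arbitrary examples.

```python
# Effective cost of the typical fee schedule cited above: fifty cents
# plus 2 percent of the purchase amount. The amounts are illustrative.

def merchant_fee(amount_dollars: float) -> float:
    return 0.50 + 0.02 * amount_dollars

for amount in (100.00, 20.00, 0.25):
    fee = merchant_fee(amount)
    print(f"${amount:7.2f} purchase -> ${fee:.2f} fee "
          f"({100 * fee / amount:.1f}% of the sale)")

# A $100.00 purchase costs the merchant about 2.5% in fees, a $20.00
# purchase about 4.5%, and a $0.25 purchase about 202% -- more than the
# item itself.
```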
Third-Party Payment Accounts

A merchant must be able to process credit card payments. This processing is often inconvenient for small merchants, such as people who sell items in online
auctions. As a result, a market has opened for third-party payment processors. Today, the largest third-party payment processor is PayPal, owned by the eBay auction service. Third-party payment processor systems are account based. Consumers can pay for third-party purchases in three ways: by paying from an account maintained with the third party, by paying from a credit card account, and by paying from a checking account. Merchants’ rates for accepting funds from a credit card account through a third-party processor are slightly higher than their rates for accepting funds through a conventional credit card transaction. Third-party payment accounts are convenient because they are simple to use and provide consumers with protection against being overcharged. However, they tend not to provide the same degree of protection that a credit card-funded purchase provides. Because third-party payment accounts are widely used with auction systems, where fraud rates are unusually high, the degree of protection is a serious consideration.
Smartcards and Other Stored-Value Systems

Stored-value systems store value on a card that is used as needed. Smartcards are a token-based payment system that uses an integrated circuit embedded in the card to pay for purchases. They are widely used in Europe for phone cards and in the GSM cellular telephone system. Mondex is a consumer-based system for point-of-sale purchases using smartcards. Smartcard use is limited in Asia and rare in North America. (In North America only one major vendor, American Express, has issued smartcards to large numbers of users, and in those cards the smartcard feature is currently turned off.) Experts have raised a number of questions about the security of smartcards. Successful attacks conducted by security testers have been demonstrated against most smartcard systems. Experts have raised even deeper questions about the privacy protection provided by these systems. For example, in Taiwan, where the government has been moving to switch from paper records to a smartcard system for processing National Health Insurance payments,
considerable public concern has been raised about potential privacy invasions associated with the use of health and insurance records on a smartcard system. A number of devices function like a smartcard but have different packaging. For example, some urban areas have adopted the FastPass system, which allows drivers to pay bridge and highway tolls using radio link technology. As a car passes over a sensor at a toll booth, value stored in the FastPass device on the car is decremented to pay the toll. The state of California recently disclosed that it uses the same technology to monitor traffic flow even when no toll is charged. The state maintains that it does not gather personal information from FastPass-enabled cars, but experts say that it is theoretically possible.
Anonymous Digital Cash

A number of researchers have proposed anonymous digital cash payment systems. These would be token-based systems in which tokens would be issued by a financial institution. A consumer could “blind” such tokens so that they could not be traced to the consumer. Using a cryptographic protocol, a consumer could make payments to merchants without merchants being able to collect information about the consumer. However, if a consumer attempted to copy a cryptographic token and use it multiple times, the cryptographic protocol would probably allow the consumer’s identity to be revealed, allowing the consumer to be prosecuted for fraud. Anonymous digital cash payment systems have remained primarily of theoretical interest, although some trials have been made (notably of the DigiCash system pioneered by David Chaum). Anonymous payment for large purchases is illegal in the United States, where large purchases must be recorded and reported to the government. Moreover, consumers generally want to record their purchases (especially large ones) to have maximum consumer protection. Some researchers have demonstrated that anonymous digital cash payment systems are not compatible with atomic purchases (that is, guaranteed exchange of goods for payment). The principal demand for anonymous payment appears to be for transactions designed to evade taxes, transactions of contraband, and transactions of socially undesirable material.
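The "blinding" step mentioned above can be illustrated with Chaum-style RSA blind signatures using textbook-sized numbers. The sketch below is a toy: it omits the hashing, padding, larger keys, and double-spending detection that real anonymous cash protocols require, and it is not a complete digital cash scheme.

```python
# A toy illustration of Chaum-style RSA blind signatures. Not secure:
# the key is tiny and no hashing or padding is used.

import math
import random

# Bank's toy RSA key pair (n and e are public; d is the bank's secret).
p, q = 61, 53
n = p * q                                # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))        # private exponent (Python 3.8+)

token = 1234                             # the consumer's token (a serial number)

# The consumer blinds the token with a random factor r before sending it.
r = random.randrange(2, n)
while math.gcd(r, n) != 1:               # r must be invertible modulo n
    r = random.randrange(2, n)
blinded = (token * pow(r, e, n)) % n

# The bank signs the blinded value without ever seeing the token itself.
blind_signature = pow(blinded, d, n)

# The consumer removes the blinding factor, leaving an ordinary signature.
signature = (blind_signature * pow(r, -1, n)) % n

# Anyone can check the signature against the bank's public key (n, e),
# yet the bank cannot link the signed token to the signing request.
assert pow(signature, e, n) == token
print("bank signature on an unseen token:", signature)
```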
Micropayments

One of the most interesting types of electronic payment is micropayments. In many instances consumers wish to purchase relatively small-value items. For example, consider a website that vends recipes. Each recipe might sell for only a few cents, but in volume the revenue could be considerable. (Similarly, consider a website that offers online digital recordings of songs for ninety-nine cents each.) Currently, making small payments online using traditional payment methods is not feasible. For example, as mentioned, credit card companies typically charge merchants a processing fee of fifty cents plus 2 percent of the purchase amount, which clearly makes credit card purchases impractical for items that cost less than fifty cents. Most merchants refuse to deal with small single-purchase amounts and require that consumers either buy a subscription or purchase the right to buy large numbers of items. For example, newspaper websites that offer archived articles typically require that consumers purchase either a subscription to access the articles or a minimum number of archived articles—they refuse to sell archived articles individually. To enable small single purchases, a number of researchers have proposed micropayment systems that are either token based or account based. An example of an account-based micropayment system is the NetBill system designed at Carnegie Mellon University. This system provides strong protection for both consumers and merchants and acts as an aggregator of purchase information. When purchases across a number of merchants exceed a certain threshold amount, that amount is charged in a single credit card purchase. An example of a token-based micropayment system is the Peppercoin system proposed by Ron Rivest and Silvio Micali and currently being commercialized. Peppercoin uses a unique system of “lottery tickets” for purchases. For example, if a consumer wishes to make a ten-cent purchase, he might use a lottery ticket that is worth ten dollars with a probability of 1 percent. The expected value paid by the consumer would be the same as the price of the item purchased, but any single charge would be large enough to justify being charged through a traditional payment mechanism (such as a credit card).
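The arithmetic behind the lottery-ticket example can be checked directly. The short simulation below illustrates only the expected-value argument; it is not Peppercoin's actual protocol.

```python
# Expected-value check of the lottery-ticket example above: a ten-cent
# purchase is paid with a ticket worth $10.00 that is actually charged
# with probability 1 percent. Illustrative statistics only.

import random

price, ticket_value, win_probability = 0.10, 10.00, 0.01
print("expected charge per purchase:", ticket_value * win_probability)  # 0.10

random.seed(1)  # fixed seed so the simulation is repeatable
purchases = 100_000
charged = sum(ticket_value for _ in range(purchases)
              if random.random() < win_probability)
print("average charge over many purchases:", round(charged / purchases, 3))
# The average approaches the ten-cent price, but each charge that does
# occur is $10.00 -- large enough to process with a conventional
# credit card transaction.
```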
Despite the promise of micropayment systems, they remain largely unused. Most merchants prefer to support small-value items by using Web-based advertising or subscriptions. Nonetheless, advocates of micropayment systems maintain that such systems enable new classes of electronic commerce.
Challenges for Digital Cash

Although digital cash is being increasingly used, a number of challenges remain. The principal challenge is associating payment with delivery of goods (often known as the “atomic swap” or “fair exchange” problem). Merchants also need to be protected from buyers who use stolen payment information, and consumers need to be protected from merchants who inadequately protect payment information (or, even worse, engage in fraud). Finally, effective payment methods need to be developed and accepted to support both large and small purchases. A balance must be reached between consumers who want anonymous purchases and government authorities who want to tax or record purchases. These challenges make digital cash a rapidly developing research area.

J. D. Tygar

See also E-business

FURTHER READING

Chaum, D., Fiat, A., & Naor, M. (1990). Untraceable electronic cash. In G. Blakley & D. Chaum (Eds.), Advances in cryptology (pp. 319–327). Heidelberg, Germany: Springer-Verlag.
Electronic Privacy Information Center. (2003). Privacy and human rights 2003. Washington, DC: Author.
Evans, D., & Schmalensee, R. (2000). Paying with plastic: The digital revolution in buying and borrowing. Cambridge, MA: MIT Press.
Kocher, P., Jaffe, J., & Jun, B. (1999). Differential power analysis. In M. Wiener (Ed.), Advances in cryptology (pp. 388–397). Heidelberg, Germany: Springer-Verlag.
Mann, R., & Winn, J. (2002). Electronic commerce. Gaithersburg, MD: Aspen Publishers.
O’Mahony, D., Peirce, M., & Tewari, H. (2001). Electronic payment systems for e-commerce (2nd ed.). Norwood, MA: Artech House.
Tygar, J. D. (1998). Atomicity in electronic commerce. Networker, 2(2), 23–43.
Wayner, P. (1997). Digital cash: Commerce on the Net (2nd ed.). San Francisco: Morgan-Kaufmann.
DIGITAL DIVIDE

There is both optimism and pessimism about the ultimate impact of the digital revolution on individual, societal, and global well-being. On the optimistic side are hopes that access to information and communication technologies, particularly the Internet, will facilitate a more equitable distribution of social, economic, and political goods and services. On the pessimistic side are beliefs that lack of access to these technologies will exacerbate existing inequalities, both globally and among groups within societies. The phrase digital divide was coined to refer to this gap between the technology haves and have-nots—between those who have access to information and communications technologies, most notably the Internet, and those who do not. The overriding concern is that a world divided by geographic, religious, political, and other barriers will become further divided by differing degrees of access to digital technologies.
Evidence of a Digital Divide

Evidence supporting the existence of a global digital divide is overwhelming. Of the estimated 430 million people online in 2001, 41 percent resided in the United States and Canada. The remaining Internet users were distributed as follows: 25 percent in Europe, 20 percent in the Asian Pacific (33 percent of this group in Asia, 8 percent in Australia and New Zealand), 4 percent in South America, and 2 percent in the Middle East and Africa. Even among highly developed nations there are vast differences in Internet access. For example, in Sweden 61 percent of homes have Internet access, compared to 20 percent of homes in Spain. In light of the global digital divide, evidence that Internet use is rapidly increasing takes on additional significance. According to data compiled by a variety of sources, the rise in Internet use extends throughout both the developed and developing world. The rapidly increasing global reach of the Internet intensifies concerns about its potential to exacerbate existing global economic and social disparities.
HomeNetToo Tries to Bridge Digital Divide

Begun in the fall of 2000, HomeNetToo was an eighteen-month field study of home Internet use in low-income families. Funded by an Information Technology Research grant from the National Science Foundation, the project recruited ninety families who received in-home instruction on using the Internet, and agreed to have their Internet use recorded and to complete surveys on their experiences. In exchange, each family received a new home computer, Internet access, and in-home technical support. The comments of the HomeNetToo participants about their computer use provide a broad range of views about the pleasures and problems of computer interactions:
“If I’m stressed out or depressed or the day is not going right, I just get on the computer and just start messing around and I come up with all sorts of things like ‘okay, wow.’ ” “You get a lot of respect because you have a computer in your house. I think people view you a little differently.” “A lot of times I’m real busy, and it was hard for me to get a turn on the computer too. My best chance of getting time on the computer is I get up at 6 AM and the rest of the family gets up at seven. So if I finish my bath and get ready quickly I can get on before anyone else is up. And I can have an hour space to do whatever I want while they’re sleeping and getting up and dressed themselves.”
“When somebody’s on the computer whatever it is they’re doing on that computer at that time, that’s the world they’re in…it’s another world.”
“I feel like I don’t have time ...who has time to watch or play with these machines. There’s so much more in life to do.”
“With the computer I can do things…well, I tell the computer to do things nobody else will ever know about, you know what I am saying? I have a little journal that I keep that actually nobody else will know about unless I pull it up.”
“Instead of clicking, I would like to talk to it and then say ‘Can I go back please?’ ” “They talk in computer technical terms. If they could talk more in layman’s terms, you know, we could understand more and solve our own problems.”
“I escape on the computer all the time...I like feeling ‘connected to the world’ and I can dream.”
Source: Jackson, L. A., Barbatsis, G., von Eye, A., Biocca, F. A., Zhao, Y., & Fitzgerald, H. E. (2003). Implications for the digital divide of Internet use in low-income families. IT & Society, 1(5), 219–244.
Evidence for a digital divide within the United States is a bit more controversial, and has shifted from irrefutable in 1995 to disputable in 2002. In its first Internet report in 1995, the U.S. Department of Commerce noted large disparities in Internet access attributable to income, education, age, race or ethnicity, geographic location, and gender. By its fifth Internet report in 2002, all of these disparities had shrunk substantially, but only a few had disappeared entirely. Although 143 million U.S. citizens now have access to the Internet (54 percent of the population), gaps attributable
to the following factors have been observed in all surveys to date:

■ Income: Income is the best predictor of Internet access. For example, only 25 percent of households with incomes of less than $15,000 had Internet access in 2001, compared to 80 percent of households with incomes of more than $75,000.
■ Education: Higher educational attainment is associated with higher rates of Internet use. For example, among those with bachelor’s degrees or
better, over 80 percent use the Internet, compared to 40 percent of those with only a high school diploma.
■ Age: Internet use rates are highest between the ages of twelve and fifty; they drop precipitously after age fifty-five.
■ Race or ethnicity: Asian/Pacific Islanders and whites are more likely to use the Internet (71 percent and 70 percent, respectively) than are Hispanics and African-Americans (32 percent and 40 percent, respectively). However, growth in Internet use has been greater among the latter than the former groups.

The gender gap so evident in the 1995 U.S. Department of Commerce survey disappeared by the 2002 survey. However, gender-related differences remain. Among those over sixty years old, men had higher Internet use rates than did women. Among the twenty-to-fifty-year-old group, women had higher Internet use rates than did men. Also diminishing if not disappearing entirely are gaps related to geographic location. Internet use rates in rural areas climbed to 53 percent in 2002, almost as high as the national average, but use rates for central-city residents were only 49 percent, compared to 57 percent for urban residents outside the central city. In addition to the five Internet reports by the U.S. Department of Commerce, a number of other organizations have been tracking Internet use and issues related to the digital divide. The Pew Internet and American Life Project devoted one of its several reports to African-Americans and the Internet, focusing on how African-Americans’ Internet use differs from whites’ use. These differences are important to understanding the racial digital divide in the United States and are potentially important to understanding global digital-divide issues that may emerge as access to the Internet becomes less problematic. The Pew Internet and American Life Project reported the following findings:
■ African-Americans are more likely than whites to use the Internet to search for jobs, places to live, entertainment (for example, music and videos), religious or spiritual information, and health care information, and as a means to pursue hobbies and learn new things.
■ African-Americans are less likely than whites to say the Internet helps them to stay connected to family and friends.
■ Women and parents are driving the growth of the African-American Internet population.
■ Mirroring the pattern of gender differences in the general population, African-American women are much more likely than African-American men to search for health, job, and religious information online. African-American men are much more likely than African-American women to search for sports and financial information and to purchase products online.
■ Compared with older African-Americans, those under age thirty are more likely to participate in chat rooms, play games, and use multimedia sources. Older African-Americans are more likely to search for religious information than are younger African-Americans.
■ The gap in Internet access between African-Americans and whites is closing, but African-Americans still have a long way to go. Moreover, those with access to the Internet do not go online as often on a typical day as do whites, and online African-Americans do not participate on a daily basis in most Web activities at the same level as do online whites.
A number of researchers have also been interested in race differences in U.S. Internet access. Donna Hoffman and Thomas Novak, professors of management at Vanderbilt University, examined the reasons for race differences in Internet access and concluded that income and education cannot fully explain them. Even at comparable levels of income and education, African-Americans were less likely to have home PCs and Internet access than were whites. The psychologist Linda Jackson and her colleagues have found race differences in Internet use among college students who had similar access to the Internet. The United States is not the only country to report a domestic digital divide. In Great Britain the digital divide separates town and country, according to a 2002 joint study by IBM and Local Futures, a research and strategy consultancy. According to the study’s findings, Britain’s digital divide may soon
grow so wide that it will not be bridgeable. People in Great Britain’s rural areas currently do not have the same degree of access to new technologies, such as cell phones, as do people in cities and the areas surrounding them.
Why Is There a Digital Divide?

The global digital divide appears to have an obvious cause. In the absence of evidence to the contrary, it is reasonable to assume that the divide is attributable to differing degrees of access to digital technologies, especially the Internet. Of course there are a host of reasons why access may be lacking, including the absence of necessary infrastructure, government policy, and abject poverty. Regardless of the specific factor or factors involved, the access explanation assumes that if access were available, then the global divide would disappear. In other words, Internet access would translate readily into Internet use. Explaining the U.S. digital divide in terms of access to digital technologies is a bit more problematic. Indeed, some have argued that there is no digital divide in the United States and that the so-called information have-nots are really information want-nots. Those advocating this perspective view the U.S. Department of Commerce 2002 report as evidence that individuals without access have exercised their free choice to say no to the Internet in favor of higher priorities. Moreover, those who argue that the divide is disappearing say that because the growth rate in Internet use is much higher for low-income groups than it is for high-income groups (25 percent as opposed to 15 percent), the gap between rich and poor will eventually be negligible without any intervention from government or the private sector. Those who argue that a digital divide persists in the United States despite increasing low-income access suggest that the divide be reconceptualized to focus on use rather than access. This reconceptualization highlights the importance of understanding people’s motivations for Internet use and nonuse, an understanding that will be even more important if the global digital divide proves to be more than a matter of access to digital technologies.
The Divide between Digital Use and Nonuse

Why do individuals choose to use or not use the Internet, assuming they have access to it? A number of studies have examined people’s motivations for using or not using the Internet. According to the “uses and gratifications” model of media use, individuals should use the Internet for the same reasons they use other media, namely, for information, communication, entertainment, escape, and transactions. Research generally supports this view, although the relative importance of these different motivations varies with demographic characteristics of the user and changes in the Internet itself. For example, older users are more likely to use the Internet for information, whereas younger users are more likely to use it for entertainment and escape. Entertainment and escape motives are more important today than they were when the World Wide Web was first launched in 1991. A report issued in 2000 by the Pew Internet and American Life Project focused specifically on why some Americans choose not to use the Internet. The authors noted that 32 percent of those currently without Internet access said they would definitely not be getting access—about 31 million people. Another 25 percent of non-Internet users said they probably would not get access. Reasons for not going online centered on beliefs that the Internet is a dangerous place (54 percent), that the online world has nothing to offer (51 percent), that Internet access is too expensive (39 percent), and that the online world is confusing and difficult to navigate (36 percent). The strongest demographic predictor of the decision not to go online was age. Older Americans apparently perceived few personal benefits to participating in the online world; 87 percent of those sixty-five and older did not have Internet access, and 74 percent of those over fifty who were not online said they had no plans to go online. In contrast, 65 percent of those under fifty said they planned to get Internet access in the near future. Ipsos-Reid, a research firm, used an international sample to examine people’s reasons for not going online. Their findings, published in 2000, were similar to the Pew report findings: Thirty-three percent
of respondents said they had no intention of going online. Their reasons included lack of need for the online world (40 percent), lack of a computer (33 percent), lack of interest in going online (25 percent), lack of necessary technical skills, and general cost concerns (16 percent). The Children’s Partnership, which also published a report in 2000 on why people do not go online, offered four reasons why low-income and underserved Americans may choose to stay away from the Internet. First, the Internet may lack the local information of interest to low-income and underserved Americans; second, there may be literacy barriers; third, there may be language barriers; and fourth, the lack of cultural diversity on the Internet may keep them from participating. Lack of local information disproportionately affects users living on limited incomes. Literacy barriers come into play because online content is often directed at more educated Internet users, particularly users who have discretionary money to spend online. Reading and understanding Web content may be especially difficult for the less educated and those for whom English is a second language (32 million Americans). An estimated 87 percent of the documents on the Internet are in English. The lack of cultural diversity on the Internet may be rendering the Internet less interesting to millions of Americans. Others have argued that access alone may not be enough to produce equity in Internet use in the United States. Gaps will persist due to differences in education, interest in Web topics, and interpersonal contact with others familiar with these topics. All of these factors may affect how eagerly an individual seeks out and consumes information on the Internet.
Whose Responsibility Is the Digital Divide?

Opinions vary about whose responsibility it is to address the digital divide, whether it be the global divide, the U.S. divide, or the divide between users and nonusers. At the global level, in June 2002 the United Nations’ telecommunications agency argued that it would take concerted global action to keep the digital divide from growing. The U.N. adopted a resolution
to organize a world summit on the information society, the first in Geneva in 2003, and the second in Tunisia in 2005. The summits are expected to promote universal access to the information, knowledge, and communications technologies needed for social and economic development. In April 2002, Erkki Liikanen, the European commissioner for the Enterprise Directorate General and the Information Society Directorate General, argued that developing countries must be included in the shift to a networked, knowledge-based global economy. He stressed the importance of strong political leadership, top-level involvement, and contributions from both the public and private sectors. In 2000, the European Commission launched an action plan, the goal of which was to bring all of Europe online by 2002. As a result of this action plan, decision making on telecommunications and e-commerce regulation accelerated and Internet access has moved to the top of the political agenda in all European Union member countries. In the coming years the focus will move to the user and usage of the Internet. The goal is to encourage more profound and inclusive use of the Internet. In the United States a number of nonprofit organizations have looked to the federal government to address the digital divide. For example, upon release of the U.S. Department of Commerce’s fifth digital divide report in 2002, the Benton Foundation issued a policy brief stating that “Targeted [government] funding for community technology is essential to maintain national digital divide leadership” (Arrison 2002). The government, however, continues to minimize the importance of the digital divide, asserting that for all intents and purposes it no longer exists. Thus, while some call for broad-based approaches to eliminating the global digital divide and government intervention to eliminate the U.S. digital divide, others argue that nothing at all needs to be done, that market forces will bridge the digital divide without any other action being taken. Still others believe that access to and use of digital technologies, particularly the Internet, are neither necessary for everyday life nor solutions to social and economic problems in the United States or elsewhere.

Linda A. Jackson
See also Economics and HCI; Internet—Worldwide Diffusion
FURTHER READING

Arrison, S. (2002, April 19). Why digital dividers are out of step. Retrieved July 17, 2003, from http://www.pacificresearch.org/press/opd/2002/opd_02-04-19sa.html
Associated Press. (2002, June 22). U.N. warns on global digital divide. Retrieved July 18, 2003, from http://lists.isb.sdnpk.org/pipermail/comp-list/2002-June/001053.html
BBC News. (2002, March 10). Digital divisions split town and country. Retrieved July 18, 2003, from http://news.bbc.co.uk/2/hi/science/nature/1849343.stm
Carvin, A. (2000). Mind the gap: The digital divide as the civil rights issue of the new millennium. Multimedia Schools, 7(1), 56–58. Retrieved July 17, 2003, from http://www.infotoday.com/mmschools/jan00/carvin.htm
Cattagni, A., & Farris, E. (2001). Internet access in U.S. public schools and classrooms: 1994–2000 (NCES No. 2001-071). Retrieved July 18, 2003, from http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2001071
Children’s Partnership. (2000). Online content for low-income and underserved Americans: The digital divide’s new frontier. Retrieved July 17, 2003, from http://www.childrenspartnership.org/pub/low_income/
Cooper, M. N. (2002, May 30). Does the digital divide still exist? Bush administration shrugs, but evidence says “yes.” Retrieved July 18, 2003, from http://www.consumerfed.org/DigitalDivideReport20020530.pdf
Digital Divide Network staff. (2003). Digital divide basics fact sheet. Retrieved July 18, 2003, from http://www.digitaldividenetwork.org/content/stories/index.cfm?key=168
eEurope. (1995–2002). An information society for all. Retrieved July 18, 2003, from http://europa.eu.int/information_society/eeurope/index_en.htm
European Union. (2002, May 4). e-Government and development: Bridging the gap. Retrieved July 18, 2003, from http://europa.eu.int/rapid/start/cgi/guesten.ksh?p_action.gettxt=gt&doc=SPEECH/02/157|0|RAPID&lg=EN&display=
Gorski, P. (2002, Fall). Dismantling the digital divide: A multicultural education framework. Multicultural Education, 10(1), 28–30.
Hoffman, D. L., & Novak, T. P. (1998, April). Bridging the racial divide on the Internet. Science, 280, 390–391.
Hoffman, D. L., Novak, T. P., & Schlosser, A. E. (2000). The evolution of the digital divide: How gaps in Internet access may impact electronic commerce. Journal of Computer-Mediated Communication, 5(3), 1–57.
Jackson, L. A., Ervin, K. S., Gardner, P. D., & Schmitt, N. (2001a). The racial digital divide: Motivational, affective, and cognitive correlates of Internet use. Journal of Applied Social Psychology, 31(10), 2019–2046.
Jackson, L. A., Ervin, K. S., Gardner, P. D., & Schmitt, N. (2001b). Gender and the Internet: Women communicating and men searching. Sex Roles, 44(5–6), 363–380.
Jackson, L. A., von Eye, A., Biocca, F., Barbatsis, G., Fitzgerald, H. E., & Zhao, Y. (2003, May 20–24). The social impact of Internet use: Findings from the other side of the digital divide. Paper presented at the twelfth International World Wide Web Conference, Budapest, Hungary.
Lenhart, A. (2000). Who’s not online: 57% of those without Internet access say they do not plan to log on. Washington, DC: Pew Internet & American Life Project. Retrieved July 18, 2003, from http://www.pewinternet.org/reports/pdfs/Pew_Those_Not_Online_Report.pdf
Local Futures. (2001). Local futures research: On the move—mobile and wireless communications. Retrieved July 18, 2003, from http://www.localfutures.com/article.asp?aid=41
National Telecommunications and Information Administration, Economics and Statistics Administration. (n.d.). A nation online: How Americans are expanding their use of the Internet. Retrieved July 18, 2003, from http://www.ntia.doc.gov/ntiahome/dn/html/toc.htm
Ombwatch. (2002, August 18). Divided over digital gains and gaps. Retrieved July 18, 2003, from http://www.ombwatch.org/article/articleview/1052/
The relevance of ICT in development. (2002, May–June). The Courier ACP-EU, 192, 37–39. Retrieved July 17, 2003, from http://europa.eu.int/comm/development/body/publications/courier/courier192/en/en_037_ni.pdf
Spooner, T., & Rainie, L. (2000). African-Americans and the Internet. Washington, DC: Pew Internet & American Life Project. Retrieved July 18, 2003, from http://www.pewinternet.org/reports/pdfs/PIP_African_Americans_Report.pdf
UCLA Center for Communication Policy. (2000). The UCLA Internet report: Surveying the digital future. Retrieved July 18, 2003, from http://www.ccp.ucla.edu/UCLA-Internet-Report-2000.pdf
UCLA Center for Communication Policy. (2003). The UCLA Internet report: Surveying the digital future, year three. Retrieved July 18, 2003, from http://www.ccp.ucla.edu/pdf/UCLA-Internet-Report-Year-Three.pdf
U.S. Department of Commerce. (1995). Falling through the Net: A survey of the “have nots” in rural and urban America. Retrieved July 18, 2003, from http://www.ntia.doc.gov/ntiahome/fallingthru.html
U.S. Department of Commerce. (2000). Falling through the Net: Toward digital inclusion. Retrieved July 18, 2003, from http://search.ntia.doc.gov/pdf/fttn00.pdf
U.S. Department of Commerce. (2002). A nation online: How Americans are expanding their use of the Internet. Retrieved July 18, 2003, from http://www.ntia.doc.gov/ntiahome/dn/anationonline2.pdf
Weiser, E. B. (2002). The functions of Internet use and their social and psychological consequences. Cyberpsychology and Behavior, 4(2), 723–743.
DIGITAL GOVERNMENT Electronic government (e-government) is intimately connected to human-computer interaction (HCI). Critical HCI issues for e-government include technical and social challenges and interactions between the two. First, at a broad, societal level, the adaptation of
government and civic engagement to increasingly computerized environments raises political, organizational, and social questions concerning use, the appropriate contexts or environments for use, reciprocal adaptation mechanisms, learning and the design of government work, the design of political and civic communities of interest, and the design of nations themselves as well as international governance bodies. Second, HCI focuses on human characteristics and their relationship to computing. The human characteristics of greatest importance to e-government include cognition, motivation, language, social interaction, and ergonomics or human factors issues. The usability and feasibility of e-government require a deep understanding by designers of individual, group, and societal cognition and behavior. On the technological side, HCI is concerned with the outputs and processes of design and development of systems and interfaces. Third, HCI and e-government intersect in the design of computer systems and interface architectures. Design questions apply to input and output devices, interface architectures (including all types of dialogue interfaces for individuals and shared spaces for multiple users), computer graphics, maps, visualization tools, and the effects of these systems and interface architectures on the quality of interaction among individuals, groups, and government. Fourth, HCI examines the development process itself, ranging from how designers and programmers work to the evaluation of human-computer systems in terms of feasibility, usability, productivity, and efficiency and, more recently, their likelihood to promote and sustain democratic processes. These issues may be described separately; however, e-government projects require attention to several of them simultaneously. For example, user-friendly and socially effective applications that cannot be implemented in a government setting for reasons of privacy, fairness, cost, or user resistance prove infeasible for e-government. Multiple constraints and demands therefore make this area challenging for governments. Electronic government is typically defined as the production and delivery of information and services inside government and between government and the
public using a range of information and communication technologies (ICTs). The public includes individuals, interest groups, and organizations, including nonprofit organizations, nongovernmental organizations, firms, and consortia. The definition of e-government used here also includes e-democracy, that is, civic engagement and public deliberation using digital technologies. Governments in industrialized and developing countries are experimenting with interactive systems to connect people with government information and officials. Many observers have claimed that interactive technologies will revolutionize governance. We must wait to see how and to what extent individuals and groups will use computing to affect civic engagement and how governments will use computing to influence political and civic spheres.
Development Paths of E-government Initial efforts by government agencies to develop e-government entailed simply digitizing and posting static government information and forms on the World Wide Web using the language, displays, and design of existing paper-based documents. Beginning during the 1990s and continuing into the present many government agencies have begun to adapt operations, work, and business processes and their interface with the public to simplify and integrate information and services in online environments. The federal governments of the United States, Canada, Finland, and Singapore are among those at the forefront of e-government in terms of the amount of information and interactivity available to the public and attention to system development and interface architecture. The country-level Web portal designed to help people navigate and search information for entire federal governments is one of the key types of e-government initiatives. The U.S. government Web portal (www.FirstGov.gov) is an interface with a search tool meant to serve as a single point of entry to U.S. government information and services. The federal government of Singapore developed a single Web portal, called “Singov” (www.gov.sg), to simplify access to government information for visitors, citizens, and businesses. Similarly, the Web portal for the government of
Canada (www.canada.gc.ca) was designed in terms of three main constituents: Canadians, non-Canadians, and Canadian business.
Organizational Redesign through Cross-Agency Integration During the 1990s several federal agencies and state governments created “virtual agencies”—online sources of information and services from several agencies organized in terms of client groups. For example, during the early 1990s the U.S. federal government developed the websites students.gov, seniors.gov, and business.gov to organize and display information using interfaces designed specifically for these populations with a single point of entry into a government portal focused on each population’s interests. By the end of the administration of President Bill Clinton approximately thirty cross-agency websites existed in the U.S. federal government. Beginning in 2001, the U.S. federal government continued this development path by undertaking more than twenty-five cross-agency e-government projects. The development path shifted from a loose confederation of interested designers in the government to an enterprise approach to e-government managed and controlled centrally and using lead agencies to control projects. The desire for internal efficiencies, as much as desire for service to the public, drives these projects. Several payroll systems are being consolidated into a few payroll systems for the entire government. Multiple and abstruse (difficult to comprehend) requirements for finding and applying for government grants are being streamlined into one federal online grants system called “e-grants.” Myriad rulemaking processes in agencies throughout the federal government, although not consolidated, have been captured and organized in the interface architecture of one Web portal, called “e-rulemaking.” The website recreation.gov uses an architecture that organizes recreation information from federal, state, and local governments. System design and interface architecture simplify search, navigation, and use of information concerning recreational activities, recreational areas, maps, trails, tourism sites, and weather reports by location.
Standardization, consolidation, and integration of information, operations, and interfaces with the public have been the key drivers for e-government in most federal government efforts. The ability to digitize visual government information is an additional development path for e-government. To note one example: The U.S. House Committee on Government Reform Subcommittee on Technology, Information Policy, Intergovernmental Relations, and the Census Web-casts its hearings and makes testimony before the committee searchable online. Previously, testimony was not searchable until a transcript of a hearing was produced—a process that could take up to six months. Considerable human, financial, and technical resources are required to design, develop, build, and maintain state-of-the-art e-government. For this reason, many local governments in poor economic areas or in cities and towns with small and medium populations lack resources to build interactive e-government unless resources are provided by federal and state governments. In the United States some of the most developed state-level e-government sites are in the states of Washington and Virginia. Municipal government websites vary dramatically in quality and level of development.
Interactivity and E-government Interactive e-government services include online tax payments, license applications and renewals, and grants applications and renewals. The city of Baltimore website (http://www.ci.baltimore.md.us/) has won awards for implementation of computing technology in government. The website allows citizens to pay parking fines, property taxes, and water and other bills. Users can search crime statistics by geographic area within the city and track several city services, including trash removal and street cleaning. The city of Baltimore has implemented an online version of the 311 service available in some other large U.S. cities, which allows citizens to request city information and services over the telephone. Citizens can report and then track the status of a request for city services, including removal of abandoned vehicles, repair of potholes, removal of graffiti, and requests for a change in traffic signs. These requests
not only provide interactivity but also promote government compliance and accountability to voters by making provision of city services more transparent to the public. Interactivity is increasing as governments continue to develop systems and as citizens adapt to government online. To note a few trends: In the United States the number of online federal tax filings increased from 20,000 in 1999 to 47 million, or about 36 percent of individual filings, in 2002. The Environmental Protection Agency reports that it saves approximately $5 million per year in printing and mailing costs by providing information digitally to the public. Public health agencies at all levels of government increasingly access centralized information online through the Centers for Disease Control and Prevention of the U.S. Public Health Service.
Usability and E-government Usability studies in HCI examine the ease and efficiency with which users of a computer system can accomplish their goals as well as user satisfaction with a system. Usability in e-government is important because it is likely to affect public participation in ways that might result in unequal access or discrimination due to biases built into design and architecture. One area of usability concerns disabled people. Many governments around the world have passed laws to ensure usability for disabled users. Section 508 of the U.S. Rehabilitation Act (29 U.S.C. 794d), as amended by the Workforce Investment Act of 1998 (P.L. 105-220), 7 August 1998, mandates a set of requirements for U.S. federal government sites to assist disabled users. These requirements include standards for Web-based software and applications, operating systems, telecommunications products, personal computers, video, and multimedia products. Major federal services initiatives have been delayed and others upgraded to ensure compliance with Section 508 requirements. Disabilities increase as a population ages and chiefly include visual impairment and decreases in cognitive and motor skills important in an online environment. A research initiative, Toolset for Making Web Sites Accessible to Aging Adults in a Multicultural Environment (http://www.cba.nau.edu/facstaff/becker-a/Accessibility/main.html), focuses on the development of tools for government agencies to assess the usability of systems and sites for the elderly as well as standards of measurement for evaluating such sites. Developers will use evaluation tools to measure a site's accessibility in terms of reading complexity and potential usability issues such as font size and font style, background images, and text justification. Transformational tools will convert a graphical image to one that can be seen by users with color-deficiency disabilities. Developers are creating simulation tools to model many of the problems that elderly users experience, such as yellowing and darkening of images. Finally, compliance tools will be designed to modify webpages to comply with usability requirements for the elderly. Other U.S. researchers are working with the Social Security Administration, the Census Bureau, and the General Services Administration to better provide for their visually impaired users in a project entitled “Open a Door to Universal Access.” Project researchers are building and prototyping key technologies for disabled employees at the partner agencies. These technologies will later be transferred to the private sector for wider dissemination in work settings. Usability includes all elements of accessibility, including “look and feel,” readability, and navigability. For example, usability research focused on local government websites indicates that the reading level required to comprehend information on websites often exceeds that of the general population, raising concerns about accessibility, comprehension, interpretation, and associated potential for discrimination. Ongoing research regarding e-government and usability focuses primarily on development of tools for usability, including navigability and information representation in text, tabular, graphical, and other visual forms.
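The kinds of evaluation tools described above can be illustrated with a small, self-contained sketch. The example below is not part of the NAU toolset or of any Section 508 checker; it simply estimates the Flesch-Kincaid reading grade of a page's text and flags two easily detected accessibility problems (images without alt text and very small font sizes). The syllable counter is a rough heuristic, and the sample HTML is invented for the demonstration.

```python
# Minimal illustrative audit: reading grade level plus two simple accessibility flags.
# Standard library only; not a substitute for a real accessibility evaluation.
import re
from html.parser import HTMLParser

class PageAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text = []
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.issues.append("image without alt text")
        if tag == "font" and attrs.get("size") in ("1", "2"):
            self.issues.append("very small font size")

    def handle_data(self, data):
        self.text.append(data)

def count_syllables(word):
    # Crude heuristic: count groups of vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59

if __name__ == "__main__":
    html = """<html><body><img src='seal.gif'>
        <p>Applicants must furnish documentation substantiating eligibility
        prior to adjudication of benefits.</p></body></html>"""
    auditor = PageAuditor()
    auditor.feed(html)
    print(f"Estimated reading grade level: {flesch_kincaid_grade(' '.join(auditor.text)):.1f}")
    for issue in auditor.issues:
        print("Accessibility issue:", issue)
```

Real accessibility audits combine many more checks, such as color contrast, keyboard navigation, and form labeling, but even a crude reading-grade estimate makes the concern about reading level on government sites concrete.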
Internet Voting One of the most important developments in e-government, with great significance for issues in HCI, is Internet voting. People have debated three main possibilities for Internet voting. First, computerized voting can be used at polling places in a “closed
system” within a secure computer local area network (LAN). Local votes would be recorded from individual voting consoles and tallied at local polling stations. Second, voting consoles or kiosks can be located in areas widely accessible to the general population, such as public libraries or shopping malls. Third, Internet voting might take place from remote locations, such as homes or offices. Many observers predicted that Internet voting would simplify voting processes and thereby increase voter participation. These predictions are far from reality at present. Current systems and architectures lack the security and reliability required for Internet voting of the third type. In addition to questions of feasibility, experts are uncertain of how Internet voting would affect participation and the cognitive, social, and political process of voting itself. A current research study, Human Factors Research on Voting Machines and Ballot Design (http://www.capc.umd.edu/rpts/MD_EVoteHuFac .html), focuses on the human-machine interface in voting. Given the prominence of issues surrounding traditional voting methods during the 2000 U.S. presidential election, researchers from the University of Maryland are developing a process to evaluate several automated voting methods and ballot designs. The study compares technologies such as optical scanning and digital recording of electronic equipment and evaluates the effect of various voting methods and ballot designs on the precision with which voters’ intentions are recorded and other critical variables.
Representing Complex Government Information Government statistics are a powerful source of information for policymakers and the public. Large, democratic governments produce and distribute a vast quantity of statistical information in printed and electronic form. Yet, vital statistics continue to be stored in databases throughout governments and in forms that are not easily accessible, navigable, or usable by most citizens. A U.S. project called “Quality Graphics for Federal Statistics” (http://www.geovista.psu.edu/grants/dg-qg/intro.html) focuses on development of graphical tools to simplify complex information. This project will develop and assess quality graphics for federal statistical summaries considering perceptual and cognitive factors in reading, interaction, and interpretation of statistical graphs, maps, and metadata (data about data). The project addresses four areas: conversion of tables to graphs, representation of metadata, interaction of graphs and maps, and communication of the spatial and temporal relationships among multiple variables. The project uses Web-based “middleware”—software which connects applications—to enable rapid development of graphics for usability testing. Another research project, Integration of Data and Interfaces to Enhance Human Understanding of Government Statistics: Toward the National Statistical Knowledge Network (http://ils.unc.edu/govstat/), takes a different HCI approach. Members of the civically engaged public often struggle to access and combine the vast and increasing amount of statistical data—often in a variety of formats—available from government agency websites. Researchers working in cooperation with government agencies are developing standardized data formats and studying social processes to facilitate integration of search results. In addition, the project's research team is developing a solutions architecture to accommodate users with a variety of communications and hardware needs and providing for broad-based usability requirements.
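To make the first of the four areas, conversion of tables to graphs, more concrete, the sketch below renders a small statistical table as a time-series chart with matplotlib. It is purely illustrative: the numbers are invented placeholders rather than federal statistics, and the projects' actual middleware and graphic-quality criteria go far beyond a single line chart.

```python
# Illustrative table-to-graph conversion; the data values are made up for the demo.
import matplotlib.pyplot as plt

table = {
    "year":       [1999, 2000, 2001, 2002],
    "households": [26.2, 41.5, 50.5, 54.6],   # hypothetical percentages
}

fig, ax = plt.subplots(figsize=(5, 3))
ax.plot(table["year"], table["households"], marker="o")
ax.set_xlabel("Year")
ax.set_ylabel("Households online (%)")
ax.set_title("Example: a table rendered as a time-series graph")
ax.set_xticks(table["year"])
ax.grid(True, linewidth=0.3)
fig.tight_layout()
fig.savefig("households_online.png", dpi=150)
```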
Ways Forward The technological potential exists for individuals, groups, and communities to participate in and shape government in new ways. Some observers speculate that increased access to government online will lead to greater interest, knowledge, and discussion of politics. The Internet might allow citizens to organize and mobilize resources in powerful new ways. The Internet enables groups and communities to deliberate in new, possibly more effective ways. Some observers have also speculated that computing will lead to direct democracy, with individuals voting on a wide range of issues. Currently, little evidence shows that this potential is being realized. Those groups already civically engaged are using
computing to enhance their activities. The propensity to simplify and distort information in public discourse is not abated by changes in media. Access to the Internet and to a wide range of computerized information and communication tools remains unequal, divided roughly between people with more education and people with less, and it is highly correlated with income and political participation; this unequal access creates a digital divide in e-government in spite of advances in HCI. Lack of literacy and lack of computer literacy worsen the digital divide in access. Disparities among rich and poor nations parallel digital-divide challenges within countries. Yet, innovations in several developing countries and in rural areas invite some degree of optimism. Rural farmers and craftspeople are beginning to connect through the Internet to enhance their economic well-being. Rural communities in China are using the Internet, as yet on a modest scale, to decry local corruption and in some cases have forced the central government to intervene in local affairs. Interfaces for preliterate populations are being developed. Human-computer interaction begins with the study of the mutual adaptation of social and technical systems. We cannot predict the path or the outcome of the many and varied complex adaptation processes now in play. One of the chief lessons for designers of e-government has been to focus on tools for building and sustaining democracy rather than merely on efficiency. As researchers learn more about human cognition, social interaction, and motivation in computer-mediated environments and as designers develop new tools and interfaces to encompass a wider range of activities and discourse in online environments, large-scale adaptation continues among societies, governments, and technology. Jane E. Fountain and Robin A. McKinnon See also Online Voting; Political Science and HCI
FURTHER READING Abramson, M. A., & Means, G. E. (Eds.). (2001). E-government 2001 (ISM Center for the Business of Government). Lanham, MD: Rowman & Littlefield.
Alvarez, R. M. (2002). Ballot design options, California Institute of Technology. Retrieved February 17, 2004, from http://www.capc .umd.edu/rpts/MD_EVote_Alvarez.pdf Ceaparu, I. (2003). Finding governmental statistical data on the Web: A case study of FedStats. IT & Society, 1(3), 1–17. Retrieved February 17, 2004, from http://www.stanford.edu/group/siqss/itandsociety/ v01i03/v01i03a01.pdf Conrad, F. G. (n.d.). Usability and voting technology: Bureau of Labor Statistics. Retrieved February 17, 2004, from http://www.capc.umd .edu/rpts/MD_EVote_Conrad.pdf David, R. (1999). The web of politics: The Internet’s impact on the American political system. New York: Oxford University Press. Dutton, W. H. (1999). Society on the line: Information politics in the digital age. Oxford, UK: Oxford University Press. Dutton, W. H., & Peltu, M. (1996). Information and communication technologies—Visions and realities. Oxford, UK: Oxford University Press. Echt, K. V. (2002). Designing Web-based health information for older adults: Visual considerations and design directives. In R. W. Morrell (Ed.), Older adults, health information, and the World Wide Web (pp. 61–88). Mahwah, NJ: Lawrence Erlbaum Associates. Fountain, J. E. (2001). Building the virtual state: Information technology and institutional change. Washington, DC: Brookings Institution Press. Fountain, J. E. (2002). Information, institutions and governance: Advancing a basic social science research program for digital government. Cambridge, MA: National Center for Digital Government, John F. Kennedy School of Government. Fountain, J. E., & Osorio-Urzua, C. (2001). The economic impact of the Internet on the government sector. In R. E. Litan & A. M. Rivlin (Eds.), The economic payoff from the Internet re volution (pp. 235–268). Washington, DC: Brookings Institution Press. Harrison, T. M., & Zappen, J. P. (2003). Methodological and theoretical frameworks for the design of community information systems. Journal of Computer-Mediated Communication, 8(3). Retrieved February 17, 2004, from http://www.ascusc.org/jcmc/ vol8/issue3/harrison.html Harrison, T. M., Zappen, J. P., & Prell, C. (2002). Transforming new communication technologies into community media. In N. W. Jankowski & O. Prehn (Eds.), Community media in the information age: Perspectives and prospects (pp. 249–269). Cresskill, NJ: Hampton Press Communication Series. Hayward, T. (1995). Info-rich, info-poor: Access and exchange in the global information society. London: K. G. Saur. Heeks, R. (Ed.). (1999). Reinventing government in the information age: International practice in IT-enabled public sector reform. London and New York: Routledge. Hill, K. A., & Hughes, J. E. (1998). Cyberpolitics: Citizen activism in the age of the Internet. Lanham, MD: Rowman & Littlefield. Holt, B. J., & Morrell, R. W. (2002). Guidelines for website design for older adults: The ultimate influence of cognitive factors. In R. W. Morrell (Ed.), Older adults, health information, and the World Wide Web (pp. 109–129). Mahwah, NJ: Lawrence Erlbaum Associates. Internet Policy Institute. (2001). Report of the National Workshop on Internet Voting: Issues and research agenda. Retrieved February 17, 2004, from http://www.netvoting.org Kamarck, E. C., & Nye, J. S., Jr. (2001). Governance.com: Democracy in the information age. Washington, DC: Brookings Institution Press.
Margolis, M., & Resnick, D. (2000). Politics as usual: The cyberspace “revolution.” Thousand Oaks, CA: Sage. Nass, C. (1996). The media equation: How people treat computers, televisions, and new media like real people and places. New York: Cambridge University Press. Norris, P. (2001). Digital divide: Civic engagement, information poverty, and the Internet worldwide. Cambridge, UK: Cambridge University Press. O’Looney, J. A. (2002). Wiring governments: Challenges and possibilities for public managers. Westport. CT: Quorum Books. Putnam, R. (2000). Bowling alone: The collapse and revival of American community. New York: Simon & Schuster. Rash, W. (1997). Politics on the nets: Wiring the political process. New York: Freeman. Schwartz, E. (1996). Netactivism: How citizens use the Internet. Sebastapol, CA: Songline Studios. Wilheim, A. G. (2000). Democracy in the digital age: Challenges to political life in cyberspace. New York: Routledge.
DIGITAL LIBRARIES For centuries the concept of a global repository of knowledge has fascinated scholars and visionaries alike. Yet, from the French encyclopedist Denis Diderot's L'Encyclopédie to the British writer H. G. Wells's book World Brain to the Memex (a desktop system for storing and retrieving information) proposed by Vannevar Bush, director of the U.S. Office of Scientific Research and Development, to Ted Nelson's Project Xanadu (a vision of an information retrieval system based on hyperlinks among digital content containers), the dream of such an organized and accessible collection of the totality of human knowledge has been elusive. However, recent technological advances and their rapid deployment have brought the far-reaching dream into renewed focus. The technologies associated with computing, networking, and presentation have evolved and converged to facilitate the creation, capture, storage, access, retrieval, and distribution of vast quantities of data, information, and knowledge in multiple formats. During the late 1980s and early 1990s the term digital libraries emerged to denote a field of interest to researchers, developers, and practitioners. The term encompasses specific areas of development such as electronic publishing, online databases, information retrieval, and data mining—the process of information extraction with the goal of discovering
hidden facts or patterns within databases. The term digital libraries has been defined in many ways. For example:
■ “The Digital Library is the collection of services and the collection of information objects that support users in dealing with information objects available directly or indirectly via electronic/digital means” (Fox and Urs 2002, 515).
■ “Digital libraries are organizations that provide the resources, including the specialized staff, to select, structure, offer intellectual access to, interpret, distribute, preserve the integrity of, and ensure the persistence over time of collections of digital works so that they are readily available for use by a defined community or set of communities” (Fox and Urs 2002, 515).
■ “A collection of information which is both digitized and organized” (Lesk 1997, 1).
■ “Digital libraries are a set of electronic resources and associated technical capabilities for creating, searching, and using information . . . they are an extension and enhancement of information storage and retrieval systems that manipulate digital data in any medium (text, images, sounds, static or dynamic images) and exist in distributed networks” (Borgman et al. 1996, online).
Clifford Lynch, a specialist in networked information, made a clear distinction between content that is born digital and content that is converted into digital format and for which an analogue counterpart may or may not continue to exist. The definitions of digital libraries have considerable commonality in that they all incorporate notions of purposefully developed collections of digital information, services to help the user identify and access content within the collections, and a supporting technical infrastructure that aims to organize the collection contents as well as enable access and retrieval of digital objects from within the collections. Yet, the term digital libraries may have constrained development in that people have tended to restrict their view of digital libraries to a digital version of more traditional libraries. They have tended to focus on textual content rather than on the full spectrum of content types—data, audio, visual images, simulations, etc. Much of the development to date has
focused on building collections and tools for organizing and extracting knowledge from them. Experts only recently have acknowledged the role of the creators and users of knowledge and the contexts in which they create and use. Dagobert Soergel, a specialist in the organization of information, characterizes much of the digital library activity to date as “still at the stage of ‘horseless carriage’; to fulfill its fullest potential, the activity needs to move on to the modern automobile” (Soergel 2002, 1).
Key Concepts Whereas researchers have expended considerable effort in developing digital libraries, theoretical
Vannevar Bush on the “Memex”
Scientist Vannevar Bush's highly influential essay “As We May Think” (1945) introduced the idea of a device he called the “memex”—inspiring others to develop digital technologies that would find and store a vast amount of information.

The owner of the memex, let us say, is interested in the origin and properties of the bow and arrow. Specifically he is studying why the short Turkish bow was apparently superior to the English long bow in the skirmishes of the Crusades. He has dozens of possibly pertinent books and articles in his memex. First he runs through an encyclopedia, finds an interesting but sketchy article, leaves it projected. Next, in a history, he finds another pertinent item, and ties the two together. Thus he goes, building a trail of many items. Occasionally he inserts a comment of his own, either linking it into the main trail or joining it by a side trail to a particular item. When it becomes evident that the elastic properties of available materials had a great deal to do with the bow, he branches off on a side trail which takes him through textbooks on elasticity and tables of physical constants. He inserts a page of longhand analysis of his own. Thus he builds a trail of his interest through the maze of materials available to him.

Source: Bush, V. (1945, July). As we may think. The Atlantic Monthly, 176(1). Retrieved March 25, 2004, from http://www.theatlantic.com/unbound/flashbks/computer/bushf.htm
development has been elusive. Because of the multidisciplinary roots of the field, the different perspectives, and the lack of consensus on definition, we can have difficulty understanding the basic constructs of digital libraries. In its simplest interpretation, the term digital libraries brings together the notions of digital computing, networking, and content with those of library collections, services, and community. Researchers are giving attention to the 5S framework developed by Edward A. Fox, director of the Digital Libraries Laboratory at Virginia Tech, Marcos André Gonçalves of Digital Libraries Research, and Neill A. Kipp of Software Architecture. This framework defines streams, structures, spaces, scenarios, and societies to relate and unify the concepts of documents, metadata (descriptions of data or other forms of information content), services, interfaces, and information warehouses that are used to define and explain digital libraries:
■ Streams: sequences of information-carrying elements of all types—can carry static content and dynamic content
■ Structures: specifications of how parts of a whole are arranged or organized, for example, hypertext, taxonomies (systems of classification), user relationships, data flow, work flow, and so forth
■ Spaces: sets of objects and operations performed on those objects, for example, measure, probability, and vector spaces (a form of mathematical representation of sets of vectors) used for indexing, visualizations, and so forth
■ Scenarios: events or actions that deliver a functional requirement, for example, the services that are offered—data mining, information retrieval, summarization, question answering, reference and referral, and so forth
■ Societies: understanding of the entities and their interrelationships, individual users, and user communities
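The 5S constructs can be pictured concretely as a minimal data model. The sketch below is interpretive only: the field choices are assumptions made for illustration, and the formal framework is defined mathematically by Fox, Gonçalves, and Kipp rather than by any particular code.

```python
# Interpretive sketch of the 5S constructs as simple data structures.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Stream:                 # sequence of information-carrying elements
    elements: List[bytes] = field(default_factory=list)

@dataclass
class Structure:              # how parts of a whole are organized, e.g. hypertext links
    relations: Dict[str, List[str]] = field(default_factory=dict)

@dataclass
class Space:                  # objects plus operations over them, e.g. a vector space for indexing
    objects: List[str] = field(default_factory=list)
    measure: Callable[[str, str], float] = lambda a, b: 0.0

@dataclass
class Scenario:               # events or actions that deliver a service, e.g. "search", "summarize"
    name: str
    steps: List[str] = field(default_factory=list)

@dataclass
class Society:                # users, communities, and their interrelationships
    members: List[str] = field(default_factory=list)

@dataclass
class DigitalLibrary:         # a digital library relates all five constructs
    streams: List[Stream]
    structures: List[Structure]
    spaces: List[Space]
    scenarios: List[Scenario]
    societies: List[Society]
```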
Digital Libraries Today A report from the President’s Information Technology Advisory Committee (PITAC) in 2001 acknowledges the need for much more work to be accomplished before we can think of digital libraries as fully successful in the United States. The report
identifies deficiencies in digital content availability and accessibility: Less than 10 percent of publicly available information is available in digital form, and less than 1 percent of the digital content is indexed, and therefore identifiable, via Web search engines. Thus, the “visible Web” is still small relative to the total potential Web. The report goes on to acknowledge the need to create digital library collections at a faster rate and much larger scale than are currently available. The report also identifies the need for improved metadata standards and mechanisms for identifying and providing access to digital library content and the need to advance the state of the art in user interfaces so that digital library users with different needs and circumstances can use interfaces better suited to their contexts. The PITAC report acknowledges that much of the progress to date in digital libraries has resulted from the federal government's investments through multiagency digital-library research and development initiatives and through provision of access to libraries of medical and scientific data. In 1993 the National Science Foundation (NSF) funded Mosaic, the first Web browser to run on multiple platforms, thereby encouraging wide-scale access to digital content via the Internet and the Web. In 1994 the Digital Libraries Initiative (DLI)—involving NSF, Defense Advanced Research Projects Agency (DARPA), and National Aeronautics and Space Administration (NASA)—funded six university-led consortia to conduct research and development to make large distributed digital collections accessible and interoperable. In 1998 the program was expanded to include the National Institutes of Health/National Library of Medicine (NIH/NLM), the Library of Congress, National Endowment for the Humanities (NEH), Federal Bureau of Investigation (FBI), National Archives and Records Administration (NARA), the Smithsonian Institution, and the Institute for Museum and Library Services. Other federal agencies compiled some of the largest publicly accessible databases, such as Earth-observing satellite data, weather data, climate data, and so forth. Most recently, new forms of digital data library collections have been initiated, including digital libraries of molecules, cells, genomes, proteins, and so forth. The PITAC report calls for the federal
government to play a more aggressive and proactive role in the provision of digital content to all and to use digital library technologies and content to transform the way it serves its citizens. Another key area identified by the PITAC report is the long-term preservation of digital libraries, with its attendant opportunities and challenges. Experts see a slow and steady leakage of digital content from the Web as content is updated, archived, or removed. They also see a need both for standards for digital preservation and for archival processes for periodic transfer or transformation to new formats, media, and technologies. Finally, the PITAC report says the issue of intellectual property rights needs to be addressed for digital libraries to achieve their full potential. In particular, the PITAC committee sought clarification on access to information subject to copyright, the treatment of digital content of unknown provenance or ownership, policies about federally funded digital content, and the role of the private sector.
The Significance for HCI The first decade of digital library research and development provided ample evidence that our ability to generate and collect digital content far exceeds our ability to organize, manage, and effectively use it. We need not look further than our own experiences with the growth of digital content and services on the Web. Although the Web may be perceived by the majority of the using public as a “vast library,” it is not a library in several important aspects. Experts acknowledge the importance of understanding how people interact with digital libraries, how their needs relate to new types of information available, and the functionality that is needed by these new types of information. Numerous experts have called for more “user-centric” approaches to the design and operation of digital libraries. However, these same calls tend to still see the user involved only in reaction to the development of certain collections. Thus, “user-centric” seems to mean “user involvement” rather than “placement of the user and potential user at the center of digital library activity.” For a truly user-centric approach to emerge, we must start by understanding user need and
A meteorologist at the console of the IBM 7090 electronic computer in the Joint Numerical Weather Prediction Unit, Suitland, Maryland, circa 1965. This computer was used to process weather data for short and long-range forecasts, analyses, and research. Photo courtesy of the U.S. Weather Bureau.
context. This understanding includes recognizing users both as individuals and as part of their social context. Who the users are, what they are trying to do, and how they interact with others are all meaningful areas of discovery for future digital library development. Social groups—be they families, work groups, communities of practice, learning communities, geographic communities, and so on— grow up with, create, structure, accept, and use information and knowledge. Digital library content, tools, and services are needed to support these groups and the individuals whom they encompass. Issues of trust, reputation, belief, consistency, and uncertainty of information will continue to prevail,
especially in digital distributed environments in which people question assumptions about identity and provenance. Similarly, economic, business, and market frameworks will complicate digital library use and development. For people interested in human-computer interaction, digital libraries offer a complex, widespread environment for the research, development, and evaluation of technologies, tools, and services aimed at improving the connection of people to digital environments and to each other through those environments. Effective interaction between human and computer is essential for successful digital libraries.
Research Digital-library research has made significant progress in demonstrating our ability to produce digital versions of traditional library collections and services. However, what began as an effort to create “digital” libraries has been transformed into something much more dynamic than was originally envisioned. The idea of curated, network-accessible repositories was (and remains) a fundamental need of scholarly inquiry and communication, as was the idea that these repositories should support information in multiple formats, representations, and media. However, not until people made serious efforts to build such resources, particularly for nontextual digital content (audio, image, and video, for example), did they realize that this venture would stretch the limits of existing disciplinary boundaries and require involvement of new interdisciplinary collaborations. The NSF recently sponsored a workshop of digital-library scholars and researchers to frame the longterm research needed to realize a new scholarly inquiry and communication infrastructure that is ubiquitous in scope and intuitive and transparent in operation. Five major research directions were recommended. The first direction is expansion of the range of digital content beyond traditional text and multimedia to encompass all types of recorded knowledge and artifacts (data, software, models, fossils, buildings, sculptures, etc.). This content expansion requires improved tools for identification, linkage, manipulation, and visualization. The second research direction is the use of context for retrieving information. Such context has two dimensions: the relationships among digital information objects and the relationship between these objects and users’ needs. In particular, because of continually accumulating volumes of digital content, such context needs to be automatically and dynamically generated to the extent possible. However, automated tools could be supplemented with contextual additions from the using community (akin to reader input to the Amazon.com website, for example). Involvement of the using community in building the knowledge environment will also build a sense of ownership and stewardship relative to the particular content and services of interest. The third research direction is the integration of information spaces into
everyday life. Such integration requires customized and customizable user interfaces that encompass dynamic user models (with knowledge of the history, needs, preferences, and foibles of the users and their individual and social roles). The fourth direction is the reduction of data to actionable information. This reduction requires developing capabilities to reduce human effort and provide focused, relevant, and useful information to the user; to do this again requires an in-depth understanding of the users and their individual and social contexts. The fifth research direction is to improve accessibility and productivity through developments in information retrieval, image processing, artificial intelligence, and data mining.
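One way to see the second research direction, the use of context for retrieval, in miniature is a re-ranking sketch that combines both dimensions of context: relationships among information objects and a user's recent activity as a proxy for user needs. Everything in the example (object identifiers, link sets, scores, and the boost weight) is invented for illustration; real digital-library retrieval models are far more elaborate.

```python
# Hedged illustration: boost keyword results that are linked to objects the user
# has worked with recently. Data and weights are placeholders, not a real system.
related = {                      # relationships among digital objects
    "census-2000-tables": {"census-2000-maps", "acs-methodology"},
    "acs-methodology": {"census-2000-tables"},
}
recently_used = {"census-2000-maps"}          # stand-in for the user's context

def rerank(results, alpha=0.3):
    """results: list of (object_id, keyword_score); returns the list re-ranked."""
    def context_bonus(obj):
        return alpha if related.get(obj, set()) & recently_used else 0.0
    return sorted(results, key=lambda r: r[1] + context_bonus(r[0]), reverse=True)

hits = [("acs-methodology", 0.62), ("census-2000-tables", 0.55), ("budget-faq", 0.60)]
for obj, score in rerank(hits):
    print(obj, score)            # census-2000-tables now ranks first
```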
Realizing the Potential The PITAC report offers a vision for digital libraries (universally accessible collections of human knowledge): All citizens anywhere anytime can use an Internet-connected digital device to search all of human knowledge. Via the Internet, they can access knowledge in digital collections created by traditional libraries, museums, archives, universities, government agencies, specialized organizations, and even individuals around the world. These new libraries offer digital versions of traditional library, museum, and archive holdings, including text, documents, video, sound and images. But they also provide powerful new technological capabilities that enable users to refine their inquiries, analyze the results, and change the form of the information to interact with it, such as by turning statistical data into a graph and comparing it with other graphs, creating animated maps of wind currents over time, or exploring the shapes of molecules. Very-high-speed networks enable groups of digital library users to work collaboratively, communicate with each other about their findings, and use simulation environments, remote scientific instruments, and streaming audio and video. No matter where digital information resides physically, sophisticated search software can find it and present it to the user. In this vision, no classroom, group or person is ever isolated from the world's greatest knowledge resources. (PITAC 2001, 1)
Clearly underlying this vision is the notion of engaged communities of both information providers and information users. Sometimes called “knowledge
communities,” these communities, defined by a shared interest in knowing or wanting to know about a subject area, are in constant flux. Understanding the dynamics of knowledge communities, why, when, and how they form or cease to function, will be important to the realization of the PITAC vision. Similarly, researchers need to acknowledge the social construction of knowledge and the roles of various members in the communities over time. Traditional publishers, libraries, museums, archives, and other information collection and distribution entities that are bound by the physicality of their collections and audiences can clearly be represented in virtual environments. However, the real power of the emerging technologies is their unleashing of human creativity, connection, and collaboration in their creation, discovery, and sharing of new knowledge. Developing technologies that are more human-centric in their design and function is a critical element in achieving this future. Perhaps the greatest potential change that may result from digital libraries of the future will be in the institutional framework. When collection content no longer needs to be physically colocated, when service providers no longer need to be physically close to their intended user communities, and when the roles of provider and user blend, people will question the continued need for physical institutions and information-professional roles. Such a future may well see librarians, museum professionals, and others working within knowledge communities, not just as providers to those communities. As digital libraries and their contents are dispersed across the Internet, and as permanent availability and access to those contents are assured, the need for individual institutions to “own” and house collections and service access points (the means by which individuals can request and receive service, i.e. an online catalog, a physical library, a reference desk, or an online help desk) will diminish. For institutions whose reputations have grown with the growth and maintenance of their scholarly library collections, how will this future play out? Although the opportunities are significant and the technological developments astounding, the abilities of institutions to change at a similar pace are not clear. Issues of trust and control are likely to
constrain the kinds of institutional developments that we envision. José-Marie Griffiths See also Information Organization; Information Retrieval
FURTHER READING Atkins, D. (1999). Visions for digital libraries. In P. Schauble & A. F. Smeaton (Eds.), Summary report of the series of joint NSF-EU working groups on future directions for digital libraries research (pp. 11–14). Washington, DC: National Science Foundation. Bishop, A. P., & Starr, S. L. (1996). Social informatics of digital library use and infrastructure. Annual Review of Information Science and Technology (ARIST), 31, 301–401. Borgman, C. L., Bates, M. J., Cloonan, M. V., Efthimiadis, E. N., Gilliland-Swetland, A., Kafai, Y., Leazer, G. H., & Maddox, A. B. (1996). Social aspects of digital libraries: Final report to the National Science Foundation. Los Angeles: Graduate School of Library & Information Studies, UCLA. Retrieved January 26, 2004, from http://dlis.gseis.ucla.edu/DL/UCLA_DL_Report.html Bush, V. (1945). As we may think. In J. Nyce & P. Kahn (Eds.), From Memex to hypertext: Vannevar Bush and the mind’s machine (pp. 85–110). San Diego, CA: Academic Press. Diderot, D., & le Rond D’ Alembert, J. (Eds.). (1758–1776). Encyclopedie ou dictionnaire raisonne des sciences, des arts et des métiers, par une societe de gens de letteres (Encyclopedia or rational dictionary of sciences, arts, and the professions, by a society of people of letters) (2nd ed). Luca, Italy: André Le Breton. Fox, E. A., Gonçalves, M. A., & Kipp, N. A. (2002). Digital libraries. In H. Adelsberger, B. Collis, & J. Pawlowski (Eds.), Handbook on information systems (pp. 623–641). Berlin: Springer-Verlag. Fox, E. A., & Urs, S. R. (2002). Digital libraries. Annual Review of Information and Science and Technology (ARIST), 46, 503–589. Griffiths, J.-M. (1998). Why the Web is not a library. In B. Hawkins & P. Battin (Eds.), The mirage of continuity: Reconfiguring academic information resources for the twenty-first century (pp. 229–246). Washington, DC: Council on Library and Information Resources, Association of American Universities. Lesk, M. (1997). Practical digital libraries: Books, bytes and bucks. San Francisco: Morgan Kaufmann. Lynch, C. A. (2002). Digital collections, digital libraries, and the digitization of cultural heritage information. First Monday, 7(5). National Science Foundation. (2003, June). Report of the NSF workshop on digital library research directions. Chatham, MA: Wave of the Future: NSF Post Digital Library Futures Workshop. Nelson, T. H. (1974). Dream machines: New freedoms through computer screens—A minority report (p. 144). Chicago: Nelson. President’s Information Technology Advisory Committee, Panel on Digital Libraries. (2001). Digital libraries: Universal access to human knowledge, report to the president. Arlington, VA: National Coordination Office for Information Technology Research and Development.
Soergel, D. (2002). A framework for digital library research: Broadening the vision. D-Lib Magazine, 8(12). Retrieved January 26, 2004 from http://www.dlib.org/dlib/december02/soergel/12soergel.html Waters, D. J. (1998). The Digital Library Federation: Program agenda. Washington, DC: Digital Libraries, Council of Library and Information Resources. Wells, H. G. (1938). World brain. Garden City, NY: Doubleday, Doran.
DRAWING AND DESIGN Ever since the Sketchpad system of computer graphics pioneer Ivan Sutherland, designers have dreamed of using drawing to interact with intelligent systems. Built in the early 1960s, Sketchpad anticipated modern interactive graphics: The designer employed a light pen to make and edit a drawing and defined its behavior by applying geometric constraints such as parallel, perpendicular, and tangent lines. However, the widespread adoption of the windows-mouse interface paradigm on personal computers in the 1980s relegated pen-based interaction to a specialized domain, and for many years little research was done on computational support for freehand drawing. The re-emergence of stylus input and flat display output hardware in the 1990s renewed interest in pen-based interfaces. Commercial software has mostly focused on text interaction (employing either a stylized alphabet or full-fledged handwriting recognition), but human-computer interfaces for computer-aided design must also support sketching, drawing, and diagramming. Computer-aided design (CAD) is widely used in every design discipline. CAD software supports making and editing drawings and three-dimensional computer graphics models, and in most design firms, computer-aided design applications have replaced the old-fashioned drawing boards and parallel rules. Digital representations make it easier for a design team to share and edit drawings and to generate computer graphics renderings and animated views of a design. The predominant use of computers in design is simply to make and edit drawings and models, leaving it to human designers to view, evaluate, and make design decisions. However, computational design assistants are being increasingly brought in to help not only with creating drawings and mod-
els, but also with design evaluation and decision making. Despite the almost universal adoption of computer-aided design software, it is typically used in the later—design development—phases of a design process, after many of the basic design decisions have already been made. One reason for this, and a primary motivation for supporting sketching, diagramming, and drawing interfaces in computer-aided design, is that during the conceptual phases many designers prefer to work with pencil and paper. The history of computers and human-computer interaction shows a strong tendency to favor a problem-solving approach, and computer languages have quite properly focused on requiring programmers to state problems precisely and definitely. This has, in turn, colored a great deal of our software, including computer-aided design, which demands of its users that they be able to precisely articulate what they are doing at all times. Yet designing in particular, and drawings more generally, seem at least sometimes ill-suited to this historical paradigm. Although the goal of designing is to arrive at definite design decisions that make it possible to construct an artifact, during the process designers are often quite willing to entertain (or tolerate) a great deal of uncertainty. This makes building human-computer interfaces for computer-aided design an interesting challenge, and one that may ultimately demand new forms of computational representations. The development of freehand interfaces for computer-aided design will certainly depend on technical advances in pen-based interaction. However, successful drawing-based interfaces for design will ultimately also be informed by research on design processes (how designing works and how people do design) as well as by the efforts of cognitive psychologists to understand the role of drawing and visual representations in thinking. Along with the development of freehand-drawing software systems, research on design and visual cognition has recently enjoyed a resurgence of interest. In addition to human-computer interaction, relevant work is being done in design research, artificial intelligence, and cognitive science. An increasing number of conferences, workshops, and journals are publishing work in this growing research area.
Drawing as an Interface to Everything People sketch, diagram, and draw to compose, consider, and communicate ideas. Information about the ideas is coded in the lines and other drawing marks and the spatial relationships among them. Some people may think of drawings as belonging to the realm of aesthetics, or as ancillary representations to “real” thinking carried out in words or mathematics, but many disciplines—from logic to sports, from physics to music, from mathematics and biology to design—employ drawings, sketches, and diagrams to represent ideas and reason about them. Many scientific and engineering disciplines use well-defined formal diagram languages such as molecular structure diagrams, analog and digital circuit diagrams, or Unified Modeling Language (UML) diagrams in human-computer interactions. Drawing is seldom the sole vehicle for communicating information, but in many domains it is either a primary representation or an important auxiliary one, as a look at whiteboards in any school or company will confirm. Drawing plays a special role in design. In physical domains such as mechanics, structural engineering, and architecture, and in graphic (and graphical user interface) designs, a design drawing correlates directly with the ultimate artifact: The drawing's geometry corresponds directly to the geometry of the artifact being designed. For example, a circle represents a wheel. In other disciplines, a diagram may correlate symbolically, for example a supply-demand curve in economics. Even in domains where a graphic representation only abstractly represents the artifact being designed, drawing supports the supposing, proposing, and disposing process of design decision making. For these reasons, drawing can be an interaction modality to a wide range of computational processes and applications—drawing as an interface to everything.
What’s in a Drawing? Drawings range from conceptual diagrams to rough sketches to precisely detailed drawings. The purposes of these representations differ, although designers may employ them all in the course of designing: Beginning with a conceptual diagram of an idea, they
develop it through a series of sketches, ultimately producing a precise and detailed drawing. Both diagram and sketch are typically preliminary representations used in early design thinking to capture the essence of an idea or to rapidly explore a range of possibilities. A diagram employs shapes and spatial relations to convey essentials concisely. A sketch, however, is often more suggestive than definitive, and it may convey details while simultaneously avoiding specificity. A schematic drawing involves more detail and complexity than a diagram and is usually intended as a more precise and definitive representation. Features of a drawing that are potentially relevant include shape and geometry, topology, curvature and points of inflection of lines, absolute and relative dimensions, positions of drawing marks and spatial relationships among them, line weights, thickness and color, speed and sequence of execution, and relationships with nearby text labels. In any particular drawing only some of these features may be relevant. For example, a diagram of digital logic is largely indifferent to geometry but the drawing topology (connections among the components) is essential to its meaning. On the other hand, in a schematic drawing of a mechanism or a sketch map, geometry and scale are essential. In addition to the information that a designer decides deliberately to communicate, a drawing also conveys information about the designer’s intent, that is, metainformation about the designing process. For example, the speed with which a sketch is executed, the extent to which a designer overtraces drawing marks, the pressure of the pen, or the darkness of the ink all offer important information. Intense overtracing in one area of the drawing may indicate that the designer is especially concerned with that part of the design, or it may reveal that the drawing represents several alternative design decisions. A quickly made sketch may reflect broad, high-level thinking, whereas a slow drawing may reveal a high degree of design deliberation. During design brainstorming it is common to find several sketches and diagrams on the same sheet of paper or whiteboard; they may be refinements of a single design, alternative designs, designs for different components of the artifact, or even representations of entirely unrelated ideas.
Input Issues Two different approaches to building pen-based interaction systems—ink based and stroke based—are currently being followed, and each has certain advantages. An ink-based system registers the drawing marks the user makes in an array of pixels captured by a video camera or scanner, which serves as input for an image-processing system to parse and interpret. A stroke-based system records the motion of the user’s pen, usually as a sequence of x,y (and sometimes pressure and tilt) coordinates. To an ink-based system any drawing previously made on paper can serve as scanned input, whereas one that is stroke-based must capture input as it is produced. This makes dynamic drawing information such as velocity, pen pressure, and timing available to stroke-based systems. Many stroke-based systems, for example, use timing information to segment drawing input into distinct drawing elements, or glyphs. Designers traditionally distinguish between freehand and hard-line drawings. Freehand drawings are typically made with only a stylus, whereas hard-line drawings are made using a structured interface, previously a triangle and parallel rule, today the menus and tool palettes of a conventional computer-aided design program. The structured interface has certain advantages: In selecting drawing elements from a tool palette the designer also identifies them, eliminating the need for the low-level recognition of drawing marks that freehand drawing systems typically require. While this helps the computer program to manage its representation of the design, many designers feel that the structured interface imposes an unacceptable cognitive load and requires a greater degree of commitment and precision than is appropriate, especially during the early phases of designing. Designers also complain that menus and tool systems get in the way of their design flow. A freehand drawing conveys more subtle nuances of line and shape than a hard-line drawing. Freehand drawings are often less formal and precise and more ambiguous than hard-line representations, all arguably advantageous characteristics in the early phases of design thinking. Some computer-based drawing systems automatically replace hand-drawn sketchy shapes and lines with “beautified” ones. Other systems retain the
user’s original drawing, even if the system has recognized sketched components and could replace them with precise visual representations. Many designers consider the imprecise, rough, and suggestive nature of a sketch or diagram to be of great value and therefore prefer a hand-drawn sketch to a refined, geometrically precise beautified drawing. On the other hand, some users strongly prefer to work with perfectly straight lines and exact right angles rather than crude-looking sketches. This depends at least in part on the user’s own experience with drawing: Novices are more likely to feel uncomfortable with their sketching ability and prefer to work with beautified drawings, whereas seasoned designers tend to see the nuances of their hand-drawn sketches as positive characteristics. Whether beautification is considered helpful or harmful also depends in part on the drawing’s intended purpose.
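Only a stroke-based system sees this kind of dynamic data, and a small illustration may make the point concrete. The following Python sketch, offered as an illustrative assumption rather than a description of any particular system, segments a stream of timestamped pen samples into glyphs wherever the pen pauses, and computes a rough drawing speed of the kind a recognizer can use as metainformation. The Point layout and the 0.3-second pause threshold are hypothetical choices.

from dataclasses import dataclass
from typing import List

@dataclass
class Point:
    x: float
    y: float
    t: float              # timestamp in seconds
    pressure: float = 1.0

def segment_into_glyphs(points: List[Point], pause: float = 0.3) -> List[List[Point]]:
    """Split a stream of pen samples into glyphs wherever the pen is idle
    (or lifted) for longer than `pause` seconds."""
    glyphs: List[List[Point]] = []
    current: List[Point] = []
    for p in points:
        if current and (p.t - current[-1].t) > pause:
            glyphs.append(current)
            current = []
        current.append(p)
    if current:
        glyphs.append(current)
    return glyphs

def average_speed(glyph: List[Point]) -> float:
    """Rough drawing speed: distance covered divided by elapsed time."""
    if len(glyph) < 2:
        return 0.0
    dist = sum(((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
               for a, b in zip(glyph, glyph[1:]))
    duration = glyph[-1].t - glyph[0].t
    return dist / duration if duration > 0 else 0.0

An ink-based system, starting from scanned pixels, has no access to the timing used here and must infer structure from the image alone.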
Recognition Issues A great deal of research in interactive drawing aims at recognizing sketches, diagrams, and drawings for semantic information processing by intelligent systems that apply domain knowledge to reason about designs. Once the system has extracted from the drawing the semantics of a proposed design, various knowledge-based design aids, such as simulation programs, expert systems, case-based reasoning tools, and other automated advisors, can be brought to bear. An interface that recognizes and interprets the design semantics of sketches and diagrams enables a designer to employ these programs in the early phases of designing. For example, a program that recognizes the components and connections of a mechanical diagram can construct and execute a computer simulation of the mechanism. A program that recognizes the layout of an architectural floor plan can retrieve from a database other similar or analogous floor plans. A program that recognizes a sketched layout of a graphical user interface can generate code to construct that interface. A variety of recognition approaches have been explored, including visual-language parsing and statistical methods. Parsing approaches consider a drawing as an expression in a visual language composed of glyphs (simple drawing marks such as
arrows, circles, and rectangles) arranged in various spatial relations into configurations. Typically a low-level recognizer first identifies the glyphs. Some systems restrict glyphs to a single stroke, requiring, for example, that a box be drawn without lifting the pen; others allow multiple-stroke glyphs, allowing the box to be drawn as four distinct strokes. After the glyph recognizer has identified the basic drawing elements, the parser identifies legal visual expressions by matching the drawing against grammar rules. Smaller visual units—initially glyphs, then configurations arranged in specific spatial relations—make up more complex visual expressions. Each design domain has its own visual language, so parsing approaches to general-purpose sketch recognition must either be told which visual language to use or must determine this information from the context. Statistical methods such as Bayesian networks and hidden Markov models have proved successful in other kinds of recognition, notably speech recognition and natural-language understanding. Statistical techniques make it possible to build visual-language recognizers without having to manually construct a grammar for each domain-specific language. One argument leveled against sketch recognition is that people are highly sensitive to recognizer failure and will not tolerate imperfect recognizer performance. Experience (for instance, with speech-to-text systems and early handwriting recognizers) shows that users become quite frustrated unless recognition is extremely reliable, that is, has accuracy rates above 99 percent. On the other hand, unlike speech and character recognition—where it can be assumed that the input has only one intended interpretation—uncertainty in various forms may be more acceptable in drawing, especially when a designer wants to preserve ambiguity. For sketch recognition, then, methods of sustaining ambiguity and vagueness may be at least as important as accuracy. An intermediate approach to recognition asks the user to label the elements of a sketch rather than attempt low-level glyph recognition. In this hybrid approach the user enters a freehand drawing; then after the user has labeled the elements (choosing from a palette of symbols) the system can reason about the drawing’s spatial organization.
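To make the parsing idea more concrete, the following Python sketch shows how a single grammar rule might combine already-recognized glyphs into a configuration, here a “link” formed by an arrow whose endpoints lie near two boxes. The Glyph and Arrow layouts and the tolerance value are hypothetical; a real visual-language parser applies many such rules and handles a far richer set of spatial relations.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Glyph:
    kind: str                                  # e.g., "box", "circle", "arrow"
    bbox: Tuple[float, float, float, float]    # (xmin, ymin, xmax, ymax)

@dataclass
class Arrow(Glyph):
    tail: Tuple[float, float] = (0.0, 0.0)
    head: Tuple[float, float] = (0.0, 0.0)

def near(point: Tuple[float, float],
         bbox: Tuple[float, float, float, float], tol: float = 10.0) -> bool:
    # Spatial relation: the point falls within the box, allowing a tolerance.
    x, y = point
    xmin, ymin, xmax, ymax = bbox
    return (xmin - tol) <= x <= (xmax + tol) and (ymin - tol) <= y <= (ymax + tol)

def match_link_rule(arrow: Arrow, boxes: List[Glyph]) -> Optional[Tuple[Glyph, Glyph]]:
    """Grammar rule: box --arrow--> box forms a LINK configuration."""
    source = next((b for b in boxes if near(arrow.tail, b.bbox)), None)
    target = next((b for b in boxes if near(arrow.head, b.bbox)), None)
    if source is not None and target is not None and source is not target:
        return (source, target)
    return None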
Designers often sketch during the early stages of design thinking, and therefore a sketch may serve the dual purpose of (1) recording what the designer already has decided and (2) exploring possible alternatives. Sketches in general will vary along the dimensions of ambiguity and precision, and even within a single sketch some parts may record definite and precise design decisions while other parts are vague, amorphous, and imprecise, representing work-in-progress exploration. Recognition-based drawing systems must be able to deal with these extremes as well as with the range of representations in between, and they must also be able to determine autonomously—from the drawing itself—what degree of ambiguity and imprecision the designer intended to convey. For example, a recognition-based system might be able to distinguish between its own failure to recognize precise input and a drawing that is deliberately indeterminate. The ability of a system to sustain ambiguous and imprecise representations is for this reason especially important, and this may pertain not only to the interface-recognition algorithms, but also to any back-end processes behind the interface that later represent or reason about the designs. A recognizer can support imprecision and ambiguity in several ways. Recognition-based interfaces can catch, resolve, or mediate potential errors and ambiguities at input time, for example, by presenting the user with a sorted list of alternative interpretations. Visual-language interpreters can employ fuzzy-logic techniques, representing match probabilities in the parse, or they may allow the parse to carry multiple alternative interpretations. Rather than requiring an entire drawing to represent a single visual sentence, a recognizer may take a bottom-up approach that identifies some parts of the drawing while allowing others to remain uninterpreted.
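One simple way to realize the “sorted list of alternative interpretations” mentioned above is to let each drawing element carry an N-best list of candidate readings rather than a single committed one. The Python sketch below is a minimal illustration under assumed names and thresholds, not a rendering of any system cited in this article.

from dataclasses import dataclass
from typing import List

@dataclass
class Interpretation:
    label: str      # e.g., "resistor", "rectangle", "table"
    score: float    # recognizer confidence, 0..1

class AmbiguousElement:
    """Keeps every plausible reading of a drawing element so that both the
    interface and any back-end reasoner can work with the full list."""

    def __init__(self, candidates: List[Interpretation]):
        self.candidates = sorted(candidates, key=lambda c: c.score, reverse=True)

    def top(self, n: int = 3) -> List[Interpretation]:
        return self.candidates[:n]

    def is_ambiguous(self, margin: float = 0.15) -> bool:
        # Treat the element as ambiguous when the best and second-best
        # readings are close in confidence.
        if len(self.candidates) < 2:
            return False
        return (self.candidates[0].score - self.candidates[1].score) < margin

    def resolve(self, chosen_label: str) -> None:
        # Called if and when the user picks one reading from the sorted list.
        self.candidates = [c for c in self.candidates if c.label == chosen_label]

An interface might present top() to the user only when is_ambiguous() returns true, leaving deliberately vague parts of the sketch uninterpreted until the designer chooses to commit.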
Avoiding Recognition: Annotation and Multimodal Systems Another response to the problem of recognition is to avoid it entirely and simply manage drawings as design representations independent of their semantic content. This approach is taken in systems that
treat drawings as components of a collection of multimodal conversations. Despite a popular myth of the lone creative designer, real-world design typically involves a team of participants that includes experts from a variety of design disciplines as well as other stakeholders, and a process that can range in duration from weeks to years. The record of the designing process (the design history) can therefore include successive and alternative versions over time and the comments of diverse participants, along with suggestions, revisions, discussions, and arguments. Sketches, diagrams, and drawings are important elements in the record of this design history. Design drawings are inevitably expressions in a larger context of communication that includes spoken or written information, photographs and video, and perhaps computational expressions such as equations or decision trees. This gives rise to a wide range of multimodalities. For example, a designer may (a) mark up or ”redline” a drawing, photograph, 3D model, or video to identify problems or propose changes, or add text notes for similar reasons; (b) insert a drawing to illustrate an equation or descriptive text or code; (c) annotate a drawing with spoken comments, recording an audio (or video) track of a collaborative design conversation as the drawing is made or attaching audio annotations to the drawing subsequently. Associated text and audio/video components of the design record can then be used in conjunction with the drawing; for example, text can be indexed and used to identify the role, function, or intentions of the accompanying drawings.
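Because this approach sidesteps recognition, the engineering problem becomes one of record keeping: associating each drawing with its annotations and making the textual parts searchable. The Python data model below is a minimal sketch under assumed field names, not a description of any particular system.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Annotation:
    kind: str          # "text", "redline", "audio", or "video"
    content: str       # the text itself, or a path/URL to the media file
    author: str
    created: datetime

@dataclass
class DesignRecordEntry:
    drawing_file: str                     # the sketch or diagram itself
    version: int
    annotations: List[Annotation] = field(default_factory=list)

    def index_terms(self) -> List[str]:
        """Naive index over textual annotations, so a drawing can later be
        retrieved by the words used to discuss it."""
        terms: List[str] = []
        for a in self.annotations:
            if a.kind in ("text", "redline"):
                terms.extend(w.lower().strip(".,;:!?") for w in a.content.split())
        return sorted(set(terms))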
From Sketch to 3D Designers in physical domains such as mechanical, product, and industrial engineering and architecture often sketch isometric and perspective drawings to describe three-dimensional artifacts. Therefore, sketch-recognition research has long sought to build systems that can generate three-dimensional models from two-dimensional sketches. Although this goal has not yet been achieved in the general case of arbitrary 2D sketches, a variety of approaches have been pursued, each with particular strengths and limitations, and each supporting specific kinds of sketch-to-3D constructions. Recent representative
efforts include SKETCH!, Teddy, Chateau, SketchVR, and Stilton. Despite its name, the SKETCH! program does not interpret line drawings; rather, the designer controls a 3D modeler by drawing multistroke gestures, for example, three lines to indicate a corner of a rectangular solid. Teddy enables a user to generate volumes with curved surfaces (such as teddy bears) by “inflating” 2D curve drawings. It uses simple heuristics to generate a plausible model from a sketch. Chateau is a “suggestive” interface: It offers alternative 3D completions of a 2D sketch as the user draws, asking in effect, “Do you mean this? Or this?” SketchVR generates three-dimensional models from 2D sketches by extrusion. It identifies symbols and configurations in the drawing and replaces them in the 3D scene with modeling elements chosen from a library. In Stilton, the user draws on top of the display of a 3D scene; the program uses heuristics about likely projection angles to interpret the sketch.
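Of these constructions, extrusion is the easiest to state precisely: a closed two-dimensional outline is swept vertically into a prism. The Python sketch below shows only that geometric step, with illustrative types and no symbol recognition or model library behind it.

from typing import List, Tuple

Point2D = Tuple[float, float]
Point3D = Tuple[float, float, float]
Quad = Tuple[int, int, int, int]

def extrude(outline: List[Point2D], height: float) -> Tuple[List[Point3D], List[Quad]]:
    """Lift a closed 2D outline (say, a recognized wall polygon in a floor
    plan) into a prism: a bottom ring, a top ring, and one quad per edge."""
    bottom = [(x, y, 0.0) for x, y in outline]
    top = [(x, y, height) for x, y in outline]
    vertices = bottom + top
    n = len(outline)
    faces: List[Quad] = []
    for i in range(n):
        j = (i + 1) % n
        faces.append((i, j, n + j, n + i))   # bottom_i, bottom_j, top_j, top_i
    return vertices, faces

# Example: extrude([(0, 0), (4, 0), (4, 3), (0, 3)], height=2.7)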
The Future Much of the personal computer era has been dominated by interfaces that depend on text or on interacting with mouse-window-menu systems. A renewed interest in sketch-based interaction has led to a new generation of systems that manage and interpret hand-drawn input. Today, human-computer interaction research is enabling computer-aided design software to take advantage of sketching, drawing, and diagramming, which have long been essential representations in design, as well as in many other activities. Progress in freehand-drawing interaction research will go hand in hand with research in design processes and cognitive studies of visual and diagrammatic reasoning. Mark D. Gross See also Evolutionary Engineering; Pen and Stylus Input
FURTHER READING Davis, R. (2002). Sketch understanding in design: Overview of work at the MIT AI lab. In R. Davis, J. Landay & T. F. Stahovich (Eds.), Sketch understanding: Papers from the 2002 AAAI Symposium
(pp. 24–31). Menlo Park, CA: American Association for Artificial Intelligence (AAAI). Do, E. Y.-L. (2002). Drawing marks, acts, and reacts: Toward a computational sketching interface for architectural design. AIEDAM (Artificial Intelligence for Engineering Design, Analysis and Manufacturing), 16(3), 149–171. Forbus, K., Usher, J., & Chapman, V. (2003). Sketching for military courses of action diagrams. In International Conference on Intelligent User Interfaces (pp. 61–68). San Francisco: ACM Press. Goel, V. (1995). Sketches of thought. Cambridge MA: MIT Press. Gross, M. D., & Do, E. Y.-L. (2000). Drawing on the back of an envelope: A framework for interacting with application programs by freehand drawing. Computers and Graphics, 24(6), 835–849. Igarashi, T., & Hughes, J. F. (2001). A suggestive interface for 3-D drawing. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST) (pp. 173–181). New York: ACM Press. Igarashi, T., Matsuoka, S., & Tanaka, H. (1999). Teddy: A sketching interface for 3-D freeform design. In Proceedings of the SIGGRAPH 1999 Annual Conference on Computer Graphics (pp. 409–416). New York: ACM Press/Addison-Wesley Publishing Co. Kurtoglu, T., & Stahovich, T. F. (2002). Interpreting schematic sketches using physical reasoning. In R. Davis, J. Landay, & T. Stahovich. (Eds.), AAAI Spring Symposium on Sketch Understanding (pp. 78–85). Menlo Park, CA: AAAI Press. Landay, J. A., & Myers, B. A. (1995). Interactive sketching for the early stages of interface design. In CHI ’95—Human Factors in Computing Systems (pp. 43–50). Denver, CO: ACM Press. Larkin, J., & Simon, H. (1987). Why a diagram is (sometimes) worth 10,000 words. Cognitive Science, 11, 65–99. Mankoff, J., Hudson, S. E., & Abowd, G. D. (2000). Providing integrated toolkit-level support for ambiguity in recognition-based
interfaces. In Proceedings of the Human Factors in Computing (SIGCHI) Conference (pp. 368–375). The Hague, Netherlands: ACM Press. Negroponte, N. (1973). Recent advances in sketch recognition. In AFIPS (American Federation of Information Processing) National Computer Conference, 42, 663–675. Boston: American Federation of Information Processing. Oviatt, S., & Cohen, P. (2000). Multimodal interfaces that process what comes naturally. Communications of the ACM, 43(3), 45–53. Pinto-Albuquerque, M., Fonseca, M. J., & Jorge, J. A. (2000). Visual languages for sketching documents. In Proceedings, 2000 IEEE International Symposium on Visual Languages (pp. 225–232). Seattle, WA: IEEE Press. Saund, E., & Moran, T. P. (1994). A perceptually supported sketch editor. Paper presented at the ACM Symposium on User Interface Software and Technology, Marina del Rey, CA. Sutherland, I. (1963). Sketchpad: A man-machine graphical communication system. In Proceedings of the 1963 Spring Joint Computer Conference (pp. 329–346). Baltimore: Spartan Books. Suwa, M., & Tversky, B. (1997). What architects and students perceive in their sketches: A protocol analysis. Design Studies, 18, 385–403. Turner, A., Chapman, D., & Penn, A. (2000). Sketching space. Computers and Graphics, 24, 869–876. Ullman, D., Wood, S., & Craig, D. (1990). The importance of drawing in the mechanical design process. Computers and Graphics, 14(2), 263–274. Zeleznik, R., Herndon, K. P., & Hughes, J. F. (1996). SKETCH: An interface for sketching 3-D scenes. In SIGGraph ’96 Conference Proceedings (pp. 163–170). New York: ACM Press.
E

E-BUSINESS
EDUCATION IN HCI
ELECTRONIC JOURNALS
ELECTRONIC PAPER TECHNOLOGY
ELIZA
E-MAIL
EMBEDDED SYSTEMS
ENIAC
ERGONOMICS
ERRORS IN INTERACTIVE BEHAVIOR
ETHICS
ETHNOGRAPHY
EVOLUTIONARY ENGINEERING
EXPERT SYSTEMS
EYE TRACKING
E-BUSINESS Although barriers of time and space now matter less, new challenges arise when people conduct electronic business (e-business). These challenges stem from two fundamental sources: global customers’ cultural values and culturally sensitive technology applications. Important cultural questions affect e-business and global customers. Such questions include (1) why is culture important to consider when conducting e-business? and (2) how do companies leverage their information technology (IT) applications in light of cultural differences exhibited by global customers? Answering these questions can help companies that use IT in a multicultural market.
The Technological Revolution A new landscape for conducting e-business has arisen with the proliferation of technologies that facilitate e-business, such as information communication technologies (ICT) (any communication device or application encompassing radio, television, cellular phones, satellite systems, etc.); enterprise resource planning (ERP) (any software system designed to support and automate the business processes of medium and large businesses); electronic data interchange (EDI) (the computer-to-computer exchange of business documents between organizations in standard electronic formats); and manufacturing resource planning (MRP) (a system for effectively managing material requirements in a manufacturing process). Agility, flexibility, speed, and change are the conditions for developing e-business models.
In addition, by using information and telecommunication systems, companies are able to communicate with their global customers where barriers such as time zones, currencies, languages, and legal systems are reduced or eliminated. As a result, global customers can be reached anywhere and at any time. Services and products can be obtained whenever, wherever, and by whomever. The digital economy is blazing a new path for doing business where the notion of “value through people” becomes the driving force for a successful model of e-business. With the advent of the World Wide Web, business is increasingly moving into an online environment. Traditional brick-and-mortar businesses have evolved into “click-and-mortar” businesses. Additionally, the Internet has changed from a communications tool used mostly by scientists to a business tool used by companies to reach millions of customers across the globe. As a result, the Internet has become a powerful business resource because its technology enables firms to conduct business globally (Simeon 1999). In addition, online sales easily penetrate global markets. Some companies treat Web customers as a new type of audience—so united in their use of the Internet that national differences no longer apply. Other companies, such as IBM, Microsoft, and Xerox, have developed local versions of their websites. These versions run off regional servers, address technical issues (such as the need to display different character sets), and provide information about local services and products. Occasionally they reflect aesthetic differences—such as cultural biases for or against certain colors—but few companies actively consider cultural variations that might enhance the delivery of their products.
What Are E-business and E-commerce? The terms e-business and e-commerce have slightly different meanings. E-business is “. . . a broader term that encompasses electronically buying, selling, servicing customers, and interacting with business partners and intermediaries over the Internet. Some experts see e-business as the objective and e-commerce as the means of achieving that objective” (Davis and Benamati 2003, 8).
In essence, e-business means any Internet or network-enabled business; for example, companies can buy parts and supplies from each other, collaborate on sales promotion, and conduct joint research. On the other hand, e-commerce is a way of doing business using purely the Internet as a means, whether the business occurs between two partners (business to business—B2B), between a business and its customers (business to customers—B2C), between customers (C2C), between a business and employees (B2E), or between a business and government (B2G). According to Effy Oz (2002), an expert in information technology and ethics, there are three categories of organizations that want to incorporate the Web into their e-business: (1) organizations that have a passive presence online and focus on online advertising, (2) organizations that use the Web to improve operations, and (3) organizations that create stand-alone transaction sites as their main or only business. Although the ultimate goal of business is profit generation, e-commerce is not exclusively about buying and selling. Instead, the real goal of e-commerce is to improve efficiency by the deployment of technologies. Factors that influence the development of e-commerce are a competitive environment, strategic commitment of the company, and the required competencies. Thus, the definition of e-commerce has a more restricted application than that of e-business.
Understanding Cultural Concepts Explained below are three different categories of culture. The first category is national culture, in which differences in cultural values are described along four key dimensions. First is the individualism-collectivism dimension, which denotes a culture’s level of freedom and independence of individuals. Second is the power-distance dimension, which denotes the levels of inequality expected and accepted by people in their jobs and lives. Third is the uncertainty-avoidance dimension, which denotes how societies deal with the unknown aspects of a different
environment and how much people are willing to accept risks. Fourth is the masculinity-femininity dimension, which denotes a culture’s ranking of values such as being dominant, assertive, tough, and focused on material success. The second category of culture is related to organizational culture. According to Edgar H. Schein, an organizational psychologist, organizational culture “is a property of a group. It arises at the level of department, functional groups, and other organizational units that have a common occupational core and common experience. It also exists at every hierarchical level of the organizations and at the level of the whole organization” (Schein 1999, 13–14). Thus, intense organizational culture can result in manifestations such as the phrases “the way we do things around here,” “the rites and rituals of our company,” “our company climate,” “our common practices and norms,” and “our core values.” A third category of culture can be called “information technology culture.” Information technology culture often overlaps national and organizational cultures. Indeed, IT culture is part of the organizational culture, which determines whether the user (i.e., customer) accepts or resists the technology to be used. IT culture can be defined as the sets of values and practices shared by those members of an organization who are involved in IT-related activities (i.e., programming, system analysis and design, and database management), such as information system professionals and IT managers.
Global Customers: Challenges of Cultural Differences IT applications in the context of e-business have become more important because today companies of all sizes and in all sectors are adopting the principles of cultural diversity, as opposed to cultural convergence, when reaching out to global customers. Some questions that are worth considering are why global customers resist new IT implementation, how organizational culture affects new customers’ attitudes toward new IT implementation, and why many companies fail to consider the role
of culture when developing and implementing IT applications. Companies have difficulty in understanding or even recognizing cultural factors at a deeper level because the factors are complex and subtle. Companies’ understanding of cultural factors is normally only superficial, which is why people have difficulty observing the magnitude of the impact of such factors on the success or failure of e-business companies. Although people have conducted an increasing amount of research in global IT, this research has been primarily limited to descriptive cross-cultural studies where comparison analyses were made between technologies in different national cultures. A universal interface should not be mistakenly considered as one interface for all customers. The concept of universalism is somewhat misleading in this context. The most important goal is to ensure that customers feel at home when exploring the Internet. Fundamentally, cultural factors have strong influences on global customers’ preferences. Each customer has his or her own culturally rooted values, beliefs, perceptions, and attitudes. When loyal customers are satisfied with the way they have been buying goods and services, they resist changes. Making purchases online is less desirable to many customers. The fact that customers cannot touch or smell the products that they want makes some resistant to e-business. Customers also can be resistant because they lack the skills to use new technologies and an understanding of how e-business is conducted. Different ethnic cultures demonstrate different cognitive reactions, requiring different environmental stimuli (Tannen 1998). Similarly, Web-marketing psychology depends on different mixtures of cognitive and behavioral elements (Foxall 1997). Language, values, and infrastructure can also be barriers to e-business. For example, the preference of many Chinese people for a cash-based payment system or “cash-on-delivery” is the main obstacle to conducting e-business in China. The phenomenon can be explained by factors such as a lack of real credit cards, a lack of centralized settlement systems (the ability for credit cards to be used anywhere), and a lack of trust in conducting business via the Internet (Bin, Chen, and Sun 2003).
When people of Malaysia and Australia were asked to evaluate eight websites from their countries, the findings confirmed that the subjects had no preference for one-half of the websites but had a preference associated with subject nationality for the other one-half (Fink and Laupase 2000). A study of mobile-phone use in Germany and China showed that people accepted support information and rated it as more effective when it was culturally localized. Similar studies have shown that cultural factors influence how long customers stay on the Internet, how likely they are to buy products online when the content is presented in their native language, and the usability of Web design elements. Researchers believe that culturally specific elements increase international participation in conducting e-business more than do genre-specific elements. Interestingly, the role of culture in user interface design can be seen in localization elements that serve as cultural markers. These cultural markers are influenced by a specific culture or specific genre (Barber and Badre 1998). Examples of cultural markers are interface design elements that reflect national symbols, colors, or forms of spatial organization. After reviewing hundreds of websites from different countries and in different languages, Barber and Badre posited that different cultural groups prefer different types of cultural markers.
E-business Strategies Because global customers come from all over the world, their demands, needs, and values are more divergent than similar. Cultural context and cultural distance may have an impact on how goods and services can be delivered to them—that is, on marketing channels and logistics. Hence, e-business companies must fully understand the values that affect customers’ preferences. Companies need to tailor their products to customers’ electronic requirements. Selling products electronically means that businesses must consider international channels of distribution that fit with customers’ values. The electronic environment can become a barrier to successful business endeavors. For example, in some cultures a business transaction is best conducted face
to face. Customers can negotiate better with a seller because such a setting allows the reciprocal communication of interests and intentions. After seller and customers establish rapport, this rapport creates a more trusting relationship. This trusting relationship could lead to repeated transactions. Trusting the seller or buyer is crucial in certain cultures. Companies that conduct e-business need to find new mechanisms and strategies that overcome such cultural differences. In a situation such as the study of Chinese customers, where credit cards were not the common system of payment, a pragmatic strategy might be to buy online and pay offline. The findings in the research of customer interfaces for World Wide Web transactions indicate that there are significant cultural variations in why people used the Internet (O’Keefe et al. 2000). Their U.S. subjects used the Internet solely to search for information, whereas their Hong Kong subjects used the Internet to communicate socially. A wise e-business strategy for a company is thus to enhance personal competence for Western values and to seek out social relationships and shared loyalty for Eastern values. Another e-business strategy emphasizes leveraging technology. Electronic businesses have two options when designing websites for customers in different countries—design one website for all or “localized” websites for each country. If the audience crosses national borders, a single website may be appropriate. For instance, websites exist for Arctic researchers and astronomers. However, this strategy is less likely to be successful when no overriding professional or occupational focus unifies the audience. The alternative is for companies to develop local versions of their websites. These local versions may be run off regional servers to enhance performance or to display different character sets. They also can emphasize different product lines. Unfortunately, unless the company is highly decentralized, variations in the basic message or mode of presentation that might enhance delivery of its products to people in another culture are rarely seen. Melissa Cole and Robert O’Keefe (2000) believe that Amazon.com and Autobytel.com (an auto sales website) have transcended global differences by employing a standardized transaction-oriented inter-
face. Such an interface may be practical for people who have a limited goal (such as deciding which book to buy) but may not be practical for people who do not. Because different audiences use the Internet for different purposes, standardized features may not be practical for all the nuances of cultural values. Designing interfaces for people who are searching for social relationships, rather than seeking information, imposes different requirements on Web retailers and designers. Culture has significant impacts on global customers and software designers. The merging concepts of culture and usability have been termed “cultural user interfaces” by Alvin Yeo (1996) and “culturability” by Barber and Badre (1998). Yeo talks about culture’s effect on overt and covert elements of interface design. Tangible, observable elements such as character sets and calendars are overt and easy to change, whereas metaphors, colors, and icons may reflect covert symbols or taboos and be difficult to recognize and manipulate. Barber and Badre assert that what is user friendly to one nation or culture may suggest different meanings and understandings to another. Therefore, efforts to build a generic global interface may not be successful. Instead, cultural markers should be programmatically changed to facilitate international interactions.
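One pragmatic way to make cultural markers “programmatically” changeable is to keep them in per-locale profiles that the interface consults at render time. The Python sketch below is purely illustrative: the locales, colors, and payment options are assumptions meant to echo the examples discussed above (for instance, offering cash on delivery to customers in China), not empirical recommendations.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CulturalProfile:
    """Cultural markers and transaction preferences for one locale.
    All values here are illustrative guesses, not research findings."""
    language: str
    date_format: str
    accent_color: str                       # colors can carry cultural meaning
    payment_methods: List[str] = field(default_factory=list)
    relationship_features: bool = False     # e.g., community or chat elements

PROFILES: Dict[str, CulturalProfile] = {
    "en-US": CulturalProfile("English", "MM/DD/YYYY", "#0055aa",
                             ["credit_card"]),
    "zh-CN": CulturalProfile("Chinese", "YYYY-MM-DD", "#cc0000",
                             ["cash_on_delivery", "bank_transfer"],
                             relationship_features=True),
}

def profile_for(locale: str, default: str = "en-US") -> CulturalProfile:
    # Fall back to a default profile rather than failing on an unknown locale.
    return PROFILES.get(locale, PROFILES[default])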
Implications Because of globalization and competitiveness in international business, many multinational and local companies have considered implementing technologies to conduct e-business. Among the primary factors that companies must consider are the effect of culture on customers’ technology acceptance and customers’ cultural values. Companies should address the appropriateness of management policies and practices across countries. For example, managers need to make decisions concerning certain situations such as whether a global company can override national cultural differences and when local policies are best. IT provides vast opportunities for companies to compete in the global and electronic arena. At the same time, customers from different cultures can differ significantly in their perceptions, beliefs, at-
titudes, tastes, selection, and participation in e-business. Hence, companies need to fully understand cultural variances in order to make decisions on which e-business strategies work best. Some basic questions for future research would be: (1) What makes for universally appealing IT practices? (2) Does acceptability or familiarity drive global IT use? (3) How does one successfully introduce technology applications that are unusual or not appropriate in a country? (4) How can cultural differences be considered in the planning of IT practices? In a nutshell, companies and Web designers need to be sensitive to the different needs of global customers and to build strategies and interfaces that consider cultural assumptions and characteristics. Taking advantage of national differences and preferences provides resource-based competencies and competitive advantages for e-businesses. Companies need a more innovative e-business model. With new e-business practices, success is centered on people’s values, agility, speed, flexibility, and change. Hence, the common business phrase “Think globally, act locally” may not be as practical as “Think locally, act globally.” Reaching out to global customers means reflecting their local cultures, language, and currency. Norhayati Zakaria See also Anthropology and HCI; Ethnography; Website Design FURTHER READING Barber, W., & Badre, A. (1998). Culturability: The merging of culture and usability. Human Factors and the Web. Retrieved March 1, 2004, from http://www.research.att.com/conf/hfweb/proceedings/barber/ index.htm Bin, Q., Chen, S., & Sun, S. (2003). Cultural differences in e-commerce: A comparison between the U.S. and China. Journal of Global Information Management, 11(2), 48–56. Cole, M., & O’Keefe, R. M. (2000). Conceptualizing the dynamics of globalization and culture in electronic commerce. Journal of Global Information Technology Management, 3(1), 4–17. Cooper, R. B. (1994). The inertia impact of culture on IT implementation. Information and Management, 17(1), 17–31. Davis, W. S., & Benamati, J. (2003). E-commerce basics: Technology foundations and e-business applications. New York: Addison-Wesley. Fink, D., & Laupase, R. (2000). Perceptions of web site design characteristics: A Malaysian/Australian comparison. Internet Research, 10(1), 44–55.
Foxall, G. R. (1997). Marketing psychology: The paradigm in the wings. London: Macmillan. Hofstede, G. (1980). Culture's consequences: International differences in work-related values. Beverly Hills, CA: Sage. Honald, P. (1999). Learning how to use a cellular phone: Comparison between German and Chinese users. Technical Communication: Journal of the Society for Technical Communication, 46(2), 196–205. Janssens, M., Brett, J. M., & Smith, F. J. (1995). Confirmatory crosscultural research: Testing the viability of a corporate-wide safety policy. Academy of Management Journal, 38, 364–382. Johnston, K, & Johal, P. (1999). The Internet as a “virtual cultural region”: Are extant cultural classifications schemes appropriate? Internet Research: Electronic Networking Applications and Policy, 9(3), 178–186. Kowtha, N. R., & Choon, W. P. (2001). Determinants of website development: A study of electronic commerce in Singapore. Information & Management, 39(3), 227–242. O'Keefe, R., Cole, M., Chau, P., Massey, A., Montoya-Weiss, M., & Perry, M. (2000). From the user interface to the consumer interface: Results from a global experiment. International Journal of Human Computer Studies, 53(4), 611–628. Oz, E. (2002). Foundations of e-commerce. Upper Saddle River, NJ: Pearson Education. Rosenbloom, B., & Larsen, T. (2003). Communication in international business-to-business marketing channels: Does culture matter? Industrial Marketing Management, 32(4), 309–317. Ryan, A. M., McFarland, L., Baron, H., & Page, R. (1999). An international look at selection practices: Nation and culture as explanations for variability in practice. Personnel Psychology, 52, 359–391. Sanders, M. (2000). World Net commerce approaches hypergrowth. Retrieved March 1, 2004, from http://www.forrester.com/ER/ Research/Brief/0,1317,9229,FF.html Schein, E. H. (1999). The corporate culture survival guide: Sense and nonsense about cultural change. San Francisco: Jossey-Bass. Simeon, R. (1999). Evaluating domestic and international web-sites strategies. Internet Research: Electronic Networking Applications and Policy, 9(4), 297–308. Straub, D., Keil, M., & Brenner, W. (1997). Testing the technology acceptance model across cultures: A three country study. Information & Management, 31(1), 1–11. Tannen, R. S. (1998). Breaking the sound barrier: Designing auditory displays for global usability. Human Factors and the Web. Retrieved March 1, 2004, from http://www.research.att.com/conf/hfweb/ proceedings/tannen/index.htm Wargin, J., & Dobiey, D. (2001). E-business and change: Managing the change in the digital economy. Journal of Change Management, 2(1), 72–83. Yeo, A. (1996). World-wide CHI: Cultural user interfaces, a silver lining in cultural diversity. SIGCHI Bulletin, 28(3), 4–7. Retrieved March 1, 2004, from http://www.acm.org/sigchi/bulletin/1996.3/ international.html.
ECONOMICS AND HCI See Denial-of-Service Attack; Digital Cash; E-business; Hackers
EDUCATION IN HCI Education in human-computer interaction (HCI) teaches students about the development and use of interactive computerized systems. Development involves analysis, design, implementation, and evaluation, while use emphasizes the interplay between the human users and the computerized systems. The basic aim of instruction in HCI is that students learn to develop systems that support users in their activities. Education in HCI is primarily conducted in two contexts: academia and industry. HCI is an important element in such diverse disciplines as computer science, information systems, psychology, arts, and design. Key elements of HCI are also taught as industry courses, usually with more focus on the design and development of interactive systems.
Development of HCI as a Discipline The first education programs in computer science and computer engineering were developed in the 1970s and 1980s. They dealt extensively with hardware and software; mathematics was the main supporting discipline. As the field of computer science has developed, other disciplines have been added to accommodate changes in use and technology. HCI is one such discipline; it has been added to many computer science curricula during the 1990s and early 2000s. In order to promote a more unified and coherent approach to education in HCI, the Special Interest Group on Human-Computer Interaction (SIGCHI), part of the Association for Computing Machinery (ACM), the world’s oldest and largest international computing society, decided in 1988 to initiate development of curriculum recommendations. The result of this work was published in 1992 under the title ACM SIGCHI Curricula for Human-Computer Interaction. The report defined the discipline and presented six main content areas. The report also provided four standard courses: CS1 (User Interface Design and Development), CS2 (Phenomena and Theories of Human-Computer Interaction), PSY1 (Psychology of Human-Computer
A Personal Story—Bringing HCI Into the “Real World” In teaching HCI concepts, I often try to make connections to interaction with the real world. One of the classrooms in which I teach is adjacent to a chemistry lab. A solid wooden door connects the two rooms. Until recently, a large white sign with red lettering was posted on the door, visible to all in the classroom, reading, “Fire door. Do not block.” I found nothing remarkable about this arrangement until one day I noticed that the door has no knob, no visible way of opening it. Further examination showed that the hinges are on the inside of the door, so that it opens into the classroom. A bit of thought led to the realization that the door is for the students in the chemistry lab; if a fire breaks out in the lab they can escape into the classroom and then out into the corridor and out of the building. All well and good, but where does that leave students in the classroom? Imagine a fire alarm going off and the smell of smoke in the air. My students rush to what looks to be the most appropriate exit, and find that there's no way of opening the door marked “Fire door,” and that pushing on it is not the solution in any case. When I describe this scenario to my HCI students in the classroom, as an example of inadequate design in our immediate surroundings, it usually gets a few chuckles, despite the context. Still, they can learn a few lessons about design from this example. Messages are targeted at specific audiences, and messages must be appropriate for their audience. Here we have two potential audiences, the students in each of the two adjoining rooms. For the students in the chemistry lab, the sign would be perfectly appropriate if it were visible on the other side of the door. For the students in the classroom, less information would actually improve the message: “Important: Do not block this door” would be sufficient. This avoids drawing attention to the function of the door, functionality that is not targeted at those reading the sign. In general, conveying an unambiguous message can be difficult and requires careful thought. The sign no longer hangs on the door, which now stands blank. Robert A. St. Amant
Interaction), and MIS1 (Human Aspects of Information Systems). CS1 and CS2 were designed to be offered in sequence in a computer science or computer engineering department. CS1 focused on HCI aspects of software, dealing primarily with practical development of interfaces. It was defined as a general course that complemented basic programming and software engineering courses. CS2 was for students specializing in HCI, and it examined HCI in a broader context, presented more-refined design and evaluation techniques, and placed more emphasis on scientific foundations. The PSY1 course was designed to be offered in a psychology, human factors, or industrial engineering department. It stressed the theoretical and empirical foundations of human-computer interaction. Here too the emphasis was more on design and evaluation techniques and less on implementation. The MIS1 course was designed to be offered in an information systems department. It focused on
use in order to contribute to consumer awareness of interactive systems. It emphasized the role of computers in organizations and evaluation of the suitability of technological solutions. Although the students were not thought of as system builders, the ACM SIGCHI report recommended teaching program design and implementation as well as the use of tools such as spreadsheets and databases that have considerable prototyping and programming capability. This classical curriculum has been very influential as a framework and source of inspiration for the integration of HCI into many educational programs. Several textbooks have been created to cover these areas, including Prentice-Hall’s 1993 Human-Computer Interaction, Addison-Wesley’s 1994 Human-Computer Interaction, and Addison-Wesley’s 1998 Designing the User Interface. A classical reference for the graduate level is Readings in Human-Computer Interaction: Toward the Year 2000, published in 1995 by Morgan Kaufmann.
Typical Problems HCI literature includes numerous guidelines and methods for analyzing users’ work, for implementation, and for evaluation. There are also many discussions of specific designs for interactive systems, since the systematic design of a user interface is an essential activity in the development process. However, although courses expose students to a rich variety of systems, as soon as the students are confronted with the task of designing a new system, they are equipped with very little in the way of methodologies. Current education in HCI also does not pay enough attention to the interplay between design and implementation. Design and implementation can be seen as separate activities, but the tools used for implementation support certain designs and impede others. When the two activities are treated separately, this fundamental relation is ignored. Another weakness of many introductory courses is that they focus solely on design and implementation and fail to stress the importance of evaluation—of defining and measuring usability in a systematic manner. Within a single course, it is impossible to master all the issues involved in the development of a user interface, but students should be exposed to all the issues and understand their importance and how they are related. If they only learn about design and implementation and not about evaluating the usability of their products, we risk ending up with systems that are attractive on the surface but are of no practical use to a real user. The opposite approach is to focus primarily on evaluation from the very beginning. Students learn to evaluate the usability of an existing system through a course in usability engineering, which they can take in the first semester of an undergraduate program. Field evaluations and other, more complicated, forms of evaluations can then be introduced in later semesters.
New Challenges HCI education continues to be challenged by new technological developments. The PC revolution that occurred in the middle of the 1990s and the widespread use of graphical user interfaces required more focus on graphical design. Many courses have adapted to these developments.
Since the late 1990s, small mobile computers and Web-based applications have presented new challenges. The design of interfaces for such technologies is only supported to a very limited extent by the methods and guidelines that are currently taught in many HCI courses. Textbooks that deal with these challenges are beginning to appear. Web Site Usability, published in 1999 by Morgan Kaufmann, teaches design of Web-based applications. HCI courses have to some extent been adjusted to include brief descriptions of novel systems and devices to inspire students to use their imaginations, but most education in HCI still focuses on designing and developing traditional computer systems. Guidelines for developing interactive interfaces typically include careful analysis of the context of use, which has traditionally been work activities. Yet the new technologies are used in a multitude of other contexts, such as entertainment, and these new contexts must be taken into consideration for future guidelines.
Integrating Practical Development For students of HCI truly to understand the nature of the field, they must try putting their knowledge into action. There are two basically different ways of giving students experience with practical development: through course exercises and student projects. The ACM SIGCHI curriculum contains proposals for limited development tasks that students can solve as exercises in a course. CS1 encourages a focus on design and implementation, using interface libraries and tools. CS2 suggests having students begin from less well-defined requirements, thereby changing the focus more toward user work and task analysis. It is suggested that the students also complete design, implementation, and evaluation activities. The problem with such exercises is that they are limited in time and therefore tend to simplify the challenges of interface development. In addition, exercises are usually conducted in relation to just one course. Therefore, they usually involve topics from that one course only. A more radical approach is to have students work on projects that involve topics from a cluster of
courses. There are some courses of study in which HCI is one element in a large project assignment that student teams work to complete. These courses introduce general issues and support work with the project assignment—for example, an assignment to develop a software application for a specific organization might be supported with courses in HCI, analysis and design, programming, and algorithmics and data structures. This basic pedagogical approach introduces students to theories and concepts in a context that lets the students see the practical applications of those theories and concepts. Projects undertaken during different semesters can be differentiated by overall themes. Such themes might reflect key challenges for a practitioner—for example, software development for a particular organization or design of software in collaboration with users. Using projects as a major building block in each semester increases an educational program’s flexibility, for while the content of a course tends to be static and difficult to change, the focus of the projects is much easier to change and can accommodate shifting trends in technology or use. Thus while courses and general themes of the projects can be fixed for several years, the content of the projects can be changed regularly, so that, for example, one year students work on administrative application systems and the next on mobile devices. Managers from organizations that hire students after graduation have emphasized the importance of projects. The students get experience with large development projects that are inspired by actual real-world problems. In addition, the students learn to work with other people on solving a task. The managers often say that a student with that sort of training is able to become a productive member of a project team in a very short time.
The Future In the last decades of the twentieth century, HCI was integrated into many educational programs, and there are no signs that the subject will diminish in importance in the years to come. On the contrary, one can expect that many programs that have a basic focus on computing and information systems but that lack courses in HCI will take up the subject.
There are a growing number of cross-disciplinary programs that involve development and use of computers. In several of these, HCI is becoming a key discipline among a number of scientific approaches that are merged and integrated in one institutional setting. Finally, multidisciplinary education programs with an explicit and strong focus on design are beginning to appear. These programs handle the challenge from emerging technologies by using an overall focus on design to treat such diverse disciplines as computer science, architecture, industrial design, communication and interaction theory, culture and organization theory, art, media, and aesthetics. The goal is to educate students to think of themselves as designers who possess a rich and constructive understanding of how modern information technology can be used to support human interaction and communication. HCI will be a core subject in such programs. Jan Stage See also Classrooms
FURTHER READING Baecker, R. M., Grudin, J., Buxton, W. A. S., & Greenberg, S. (Eds.). (1995). Readings in human-computer interaction: Toward the year 2000 (2nd ed.). Los Altos, CA: Morgan Kaufmann. Dahlbom, B. (1995). Göteborg informatics. Scandinavian Journal of Information Systems, 7(2), 87–92. Denning, P. J. (1992): Educating a new engineer. Communications of the ACM, 35(12), 83–97. Dix, A., Finlay, J., Abowd, G., & Beale, R. (1993). Human-computer interaction. Hillsdale, NJ: Prentice-Hall. Hewett, T. T., Baecker, R., Card, S., Carey, T., Gasen, J., Mantei, M., et al. (1992). ACM SIGCHI curricula for human-computer interaction. New York: ACM. Retrieved July 24, 2003, from http://www. acm.org/sigchi/cdg/ Kling, R. (1993): Broadening computer science. Communications of the ACM, 36(2), 15–17. Mathiassen, L., & Stage, J. (1999). Informatics as a multi-disciplinary education. Scandinavian Journal of Information Systems, 11(1), 13–22. Nielsen, J. (1993). Usability engineering. San Francisco: Morgan Kaufmann. Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., & Carey, T. (1995). Human-computer interaction. Reading, MA: AddisonWesley. Rubin, J. (1994). Handbook of usability testing. New York: Wiley.
Shneiderman, B. (1998). Designing the user interface (3rd ed.). Reading, MA: Addison-Wesley. Skov, M. B., & Stage, J. (2003). Enhancing usability testing skills of novice testers: A longitudinal study. Proceedings of the 2nd Conference on Universal Access in Computer-Human Interaction. Mahwah, NJ: Lawrence Erlbaum. Spool, J. M., Scanlon, T., Schroeder, W., Snyder, C., & DeAngelo, T. (1999). Web site usability. Los Altos, CA: Morgan Kaufmann.
ELECTRONIC JOURNALS Scholarly journals, which contain substantive research articles along with other materials such as letters to the editor, book reviews, and announcements of meetings, trace their origins back to 1665, with Le Journal des Sçavans (trans., “Journal of the experts”) in Paris and Proceedings of the Royal Society of London in London. These journals developed to share scientific discoveries among interested parties and to establish who was first to have made a given discovery or to have advanced a given theory. Peer review is an important part of publication in scholarly journals. It is a system whereby scholars who are experts in the same field as the author (the author’s peers) read, comment on, and recommend publication or rejection of an article. This process is usually single-blind (the author does not know who the reviewers are, but the reviewers know who the author is) or double-blind (the author does not know who the reviewers are and the reviewers do not know the identity of the author), which gives both readers and authors increased confidence in the validity of the published articles. Although it has been criticized from time to time, peer review remains one of the most valued aspects of publication in scholarly journals, which are also referred to as peer-reviewed or refereed journals.
Status of Electronic Journals Today Today, according to Ulrich’s Periodicals Directory, there are approximately 15,000 peer-reviewed journals actively published in all fields. (This number should be considered approximate, as new journals are constantly being launched and old ones constantly ceasing publication. In addition, journals
sometimes change their titles, making it difficult to arrive at an exact figure.) Beginning in the 1960s, the first attempts were made to convert scholarly journals or articles from journals into digital format. As information technologies and telecommunications infrastructure developed, digital, or electronic, journals have become a viable alternative to print. As of 2003, over 80 percent (approximately 12,000) of peer-reviewed journals are available in some electronic form. Fulltext Sources Online, published twice a year by Information Today, Inc., lists by title the scholarly journals, magazines, newspapers, and newsletters that are available in some digital form. The number of listings in Fulltext Sources Online grew from about 4,400 in 1993 to over 17,000 by the end of 2002. The formats of electronic journals (or e-journals) vary considerably, however.
Electronic Journals: Journal Focused or Article Focused
E-journals can be categorized as either journal focused or article focused. Journal-focused e-journals are complete replacements for print, providing an entire journal and, often, even more information than is available in any print version. A journal-focused e-journal generally has a recognizable journal title, an editorial process, a collection of articles on related topics, and may even have volume and issue numbers. These complete e-journals often permit browsing through tables of contents and often feature a search engine that lets readers search for specific information. Complete electronic journals provide the same branding function that print journals provide. They are typically available directly from the primary journal publisher, usually for a subscription charge. Article-focused e-journals are just databases of separate articles extracted from print or electronic versions of the complete journal. Commercial databases of separate articles may be available either from the primary publisher or from an aggregator service such as ProQuest, InfoTrac, or EbscoHost. Article-focused e-journals typically emphasize searching over browsing and mix articles from many different
journals. In these databases it is selected articles, rather than complete journal titles, that are made available. Even within journal-focused e-journals, there are many variations. The scholars Rob Kling and Ewa Callahan describe four kinds of electronic journals: pure e-journals distributed only in digital form; e-p-journals, which are primarily distributed electronically, but are also distributed in paper form in a limited way; p-e-journals, which are primarily distributed in paper form, but are also distributed electronically; and p- + e-journals, which have parallel paper and electronic editions. Electronic journals may be mere replicas of a print version, with papers presented in PDF format for handy printing, or they may provide a new e-design with added functionality, color graphics, video clips, and links to data sets. Both browsing and searching may be possible, or only one or the other. The availability of back issues also varies considerably. The American Astronomical Society has an advanced electronic-journals system, with added functions, links to other articles and to data sets, and extensive back files of old issues. Aggregators of electronic-journal articles are companies that act as third parties to provide access to journal articles from a variety of publishers. The advantage of an aggregator or a publisher that offers many titles is, of course, the availability of many articles from many journals in just one system. The system may offer articles from a wide variety of publishers, and the originals may be print, electronic, or both.
Publishers of Scholarly Journals
From their early days, scholarly journals were published by scholarly societies, commercial publishers, university presses, and government agencies. These main categories of publishers continue today with both print and electronic-journal publishing. Journals are not distributed equally among these categories, however. Societies may be the most visible to scholars, yet only approximately 23 percent of scholarly journals are published by societies. They have a core constituency to serve, and publishing activities are almost always seen as a money-making venture to pay
for member services. Members may receive a subscription to a print or electronic journal with their society membership or, increasingly, pay extra for it. Society publishers' main source of revenue is subscriptions paid for by libraries. Some say that for-profit companies (commercial publishers) should not publish scholarly publications because research and scholarship should be freely available to all. A for-profit company owes its primary allegiance to its shareholders and the "bottom line" rather than solely to the propagation of knowledge. Subscription fees create a barrier: only those who can pay, or who belong to an institution that can pay, have access to important research information. Still, in scholarly journal publishing, commercial publishers such as Elsevier Science, Wiley, and Springer publish the largest percentage of the scholarly journals, and that percentage is growing. For-profit publishers range from those giants to relatively tiny publishers, and together they publish approximately 40 percent of all scholarly journals. Libraries are the main subscribers to both print and electronic journals and provide access to library constituents either by password or by Internet protocol address (the numeric address that corresponds to an Internet location). University presses mostly publish monographs, but universities and other educational institutions also account for about 16 percent of scholarly journals. Other publishers, mostly government agencies, contribute 21 percent of the titles published. Many scientists and social scientists prefer electronic journals for the convenience of desktop access and additional functions, such as the ability to e-mail an article to a colleague. E-journals also allow scholars to save time locating and retrieving articles. Since almost all electronic journals have a subscription charge, libraries are the main customers, providing seamless access for faculty, students, staff, and researchers.
Article-Focused Alternatives to E-journals
Article-focused e-journals, being collections of articles organized in subject-related databases,
are particularly good for in-depth reading over time or for access to articles that come from unfamiliar sources. They extend, rather than replace, a library's journal collection and, like journals, are provided to library constituents on a secure basis through passwords or other authentication. Article databases are changing the nature of scholarship: In the late 1970s, scientists and social scientists read articles from an average of thirteen journal titles each year; with electronic-journal databases they now read from an average of twenty-three journal titles.
In addition to taking advantage of aggregators' article databases, readers can also choose to get individual articles from special electronic services, such as the Los Alamos/Cornell arXiv.org service or those linked to by the Department of Energy Office of Scientific and Technical Information's PrePrint Network (http://www.osti.gov/preprints/). These services provide access to articles that may be preprints of articles that will be submitted to peer-reviewed journals by the author, postprints (copies of articles that are also published in journals), or papers that will never be submitted to traditional journals. Individual electronic articles may also be accessed at an author's website or at institutional repositories. The Open Archives Initiative has led the way in alternatives to traditional journal publishing and has inspired related initiatives that move the responsibility for distributing scholarship from publishers to the scholars themselves or to the scholars' institutions. Institutional repositories are now at the early planning and development stage, but ideally will include the entire intellectual capital of a university faculty, including papers, data, graphics, and other materials. The Open Archives Initiative promotes software standards for establishing institutional or individual e-print services (access to digital "preprints" or "postprints"), so many institutions are establishing OAI-compliant sites. E-print services are well established in some academic disciplines, in particular high-energy physics and astrophysics. They are not as common in disciplines such as medicine and chemistry, which rely heavily on peer review.
The Impact of E-publishing Alternatives
The fact that authors are now using a variety of publishing venues leads to worries about duplicate versions, as it is hard to tell which is the definitive or archival version of a paper when multiple versions of the same paper are posted over time. Also, it may be difficult to distinguish low-quality papers from high-quality papers when it is so easy for all papers to be posted. The positive impact of speedy access to research literature overshadows these fears in many scholars' minds, however, and so far some scholars and students report being able to assess the definitiveness and quality of articles without too much difficulty.
All of the new electronic models, formats, and choices show us clearly that scholarly publishing is at a crossroads. To understand what impact these new options for reading and publishing scholarly materials may have, it is useful first to consider what the traditional structure and fundamental purposes of scholarly publishing have been. Traditionally, many people have been involved in the business of moving scholarly ideas from the hands of the author to the hands of the reader. If the people and stages involved are seen as links in a chain, the first link is the author and the last link is the reader, but there are many intervening links: peer review, editing, distribution, indexing, subscription, and so forth. Each link adds value, but it also adds costs and time delays. Some of the links are by-products of a print distribution system and reflect the limitations of print access. Electronic distribution may be one way to cut out the intervening links, so an article moves directly from the author to the reader. But it is important to remember the functions of those links and the value they add. Peer review, for example, adds authority; editing adds quality; distribution adds accessibility; and archiving adds longevity. Online alternatives that protect these functions to some degree will be the most successful in the long run, although the relative value versus cost of these functions is hotly debated.
The Future
Online journals today range from simplistic (and quite old-fashioned-looking) ASCII texts (texts that rely on the American Standard Code for Information Interchange, or ASCII, for data transmission) of individual articles available from aggregator services such as Lexis-Nexis to complex multimedia and interactive electronic journals available on the publisher's website. Fully electronic journals without print equivalents are still rare, but they are expected to become more common in many disciplines. Fully electronic journals can be highly interactive and can include multimedia, links to data sets, and links to other articles; they can also encourage a sense of community among their readers. Therefore their impact on scholarship in the future is likely to continue to grow.
Carol Tenopir
See also Digital Libraries
FURTHER READING
Borgman, C. L. (2000). From Gutenberg to the Global Information Infrastructure: Access to information in the networked world. Cambridge, MA: MIT Press.
Fjallbrant, N. (1997). Scholarly communication: Historical development and new possibilities. Retrieved July 28, 2003, from http://internet.unib.ktu.lt/physics/texts/schoolarly/scolcom.htm
Ginsparg, P. (2001). Creating a global knowledge network. Retrieved July 28, 2003, from http://arxiv.org/blurb/pg01unesco.html
Harnad, S. (2001). For whom the gate tolls? How and why to free the refereed research literature online through author/institution self-archiving, now. Retrieved July 28, 2003, from http://cogprints.soton.ac.uk/documents/disk0/00/00/16/39/index.html
King, D. W., & Tenopir, C. (2001). Using and reading scholarly literature. In M. E. Williams (Ed.), Annual review of information science and technology: Vol. 34. 1999–2000 (pp. 423–477). Medford, NJ: Information Today.
Kling, R., & Callahan, E. (2003). Electronic journals, the Internet, and scholarly communication. In B. Cronin (Ed.), Annual review of information science and technology: Vol. 37. 2003 (pp. 127–177). Medford, NJ: Information Today.
Meadows, A. J. (1998). Communicating research. New York: Academic Press.
Nature Webdebates. (2001). Future e-access to the primary literature. Retrieved July 28, 2003, from http://www.nature.com/nature/debates/e-access/
Page, G., Campbell, R., & Meadows, A. J. (1997). Journal publishing (2nd ed.). Cambridge, UK: Cambridge University Press.
Peek, R. P., & Newby, G. B. (1996). Scholarly publishing: The electronic frontier. Cambridge, MA: MIT Press.
Pullinger, D., & Baldwin, C. (2002). Electronic journals and user behaviour. Cambridge, UK: Deedot Press.
Rusch-Feja, D. (2002). The Open Archives Initiative and the OAI protocol for metadata harvesting: Rapidly forming a new tier in the scholarly communication infrastructure. Learned Publishing, 15(3), 179–186.
Schauder, D. (1994). Electronic publishing of professional articles: Attitudes of academics and implications for the scholarly communication industry. Journal of the American Society for Information Science, 45(2), 73–100.
Tenopir, C., & King, D. W. (2000). Towards electronic journals: Realities for scientists, librarians, and publishers. Washington, DC: Special Libraries Association.
Tenopir, C., King, D. W., Boyce, P., Grayson, M., Zhang, Y., & Ebuen, M. (2003). Patterns of journal use by scientists through three evolutionary phases. D-Lib Magazine, 9(5). Retrieved July 29, 2003, from http://www.dlib.org/dlib/may03/king/05king.html
Weller, A. C. (2001). Editorial peer review: Its strengths and weaknesses. Medford, NJ: Information Today.
ELECTRONIC PAPER TECHNOLOGY
For nearly two thousand years, ink on paper has been the near-universal way to display text and images on a flexible, portable, and inexpensive medium. Paper does not require any external power supply, and images and text can be preserved for hundreds of years. However, paper is not without limitations. Paper cannot be readily updated with new images or text sequences, nor does it remain lightweight when dealing with large quantities of information (for example, books). Nevertheless, although laptop computers have enabled people to carry around literally thousands of documents and images in a portable way, they still have not replaced ink on paper. Imagine a thin film that possesses the look and feel of paper, but whose text and images could be readily changed with the press of a button. Imagine downloading an entire book or newspaper from the web onto this thin medium, rolling it up, and taking it to work with you. The technology to make this and similar concepts possible is currently being developed. There are several different approaches to creating what has become known as electronic ink or electronic paper.
Ink on paper is a very powerful medium for several reasons. Not only is it thin, lightweight, and inexpensive, but ink on paper reflects ambient light, has extraordinary contrast and brightness, retains its text and images indefinitely, has essentially a 180˚ viewing angle (a viewing angle is the angle at which something can be seen correctly), is flexible, bendable, and foldable, and, perhaps most importantly, consumes no power. Objectively speaking, paper is an extraordinary technology. Creating a new electronic technology that will serve as a successful paper surrogate and match all the positive attributes of paper is no easy task. In fact, it is one of the biggest challenges facing technologists today. Broadly defined, electronic display materials that can be used in electronic paper applications can be made from a number of different substances, reflect ambient light, have a broad viewing angle, have a paper-like appearance, and, most importantly, have bistable memory. Bistable memory—a highly sought-after property—is the ability of an electrically created image to remain indefinitely without the application of any additional electrical power. There are currently three types of display technologies that may make electronic paper or ink applications possible. These technologies are bichromal rotating ball dispersions, electrophoretic devices, and cholesteric liquid crystals.
Rotating Ball Technology: Gyricon Sheets
A Gyricon sheet is a thin layer of transparent plastic in which millions of small beads or balls, analogous to the toner particles in a photocopier cartridge, are randomly dispersed in an elastomer sheet. The beads are held within oil-filled cavities within the sheet; they can rotate freely in those cavities. The beads are also bichromal in nature; that is, the hemispheres are of two contrasting colors (black on one hemisphere and white on the other hemisphere). Because the beads are charged, they move when voltage is applied to the surface of the sheet, turning one of their colored faces toward the side of the sheet that will be viewed. The beads may rotate
all the way in one direction or the other, in which case the color viewed will be one of the contrasting colors or the other, or they may rotate partially, in which case the color viewed will be a shade between the two. For example, if the contrasting colors are black and white, then complete rotation in one direction will mean that black shows, complete rotation in the other will mean white shows, and partial rotation will mean a shade of gray. The image formed by this process remains stable on the sheet for a long time (even days) with no additional electrical addressing. This innovative technology was pioneered at Xerox's Palo Alto Research Center and is currently being commercialized by Gyricon Media. Given contrasting colors of black and white, the white side of each bead has a diffuse white reflecting appearance that mimics the look and effect of paper, while the other side of the ball is black to create optical contrast. Gyricon displays are typically made with 100-micrometer balls. An important factor in this technology's success is the fact that the many millions of bichromal beads that are necessary can be inexpensively fabricated. Molten white and black (or other contrasting colors) waxlike plastics are introduced on opposite sides of a spinning disk, which forces the material to flow to the edges of the disk, where it forms a large number of ligaments (small strands) protruding past the edge of the disk. These ligaments, black on one side and white on the other, quickly break up into balls as they travel through the air and solidify. The speed of the spinning disk controls the balls' diameter. There are many applications envisioned for this type of display technology. As a paper substitute (electronic paper), a Gyricon sheet can be recycled several thousand times; it could be fed through a copy machine such that its old image is erased and the new one is presented, or a wand can be pulled across the paperlike surface to create an image. If the wand is given a built-in input scanner, it becomes multifunctional: It can be a printer, copier, fax, and scanner all in one. This technology is very cheap because the materials used and the manufacturing techniques are inexpensive.
Electrophoretic Technology
Electrophoretic materials are particles that move through a medium in response to electrical stimulation. Researchers at the Massachusetts Institute of Technology pioneered a technique to create microcapsules with diameters of 30–300 micrometers that encase the electrophoretic materials, which may be white particles in a dark dye fluid or black and white particles in a clear fluid. They have coined the name electronic ink (or e-ink) to identify their technology. Material containing these microcapsules is then coated onto any conducting surface. By encapsulating the particles, the researchers solved the longstanding problem of electrophoretic materials' instability. (Electrophoretic materials have tendencies toward particle clustering, agglomeration, and lateral migration.) Because the particles are encapsulated in discrete capsules, they cannot diffuse or agglomerate on any scale larger than the capsule size. In the technology using white particles in a dark dye, when a voltage of one polarity is applied to a surface that has been coated with this material, the tiny white encapsulated particles are attracted to the top electrode surface so that the viewer observes a diffuse white appearance. By changing the polarity of the applied voltage, the white particles then migrate back to the rear electrode, where they are concealed by the dye and the pixel appears dark to the viewer. In both states the white particles stay in their location indefinitely, even after the voltage is removed. Gray scale is possible by controlling the degree of particle migration with the applied voltage. This innovative technology is currently being commercialized by E Ink. In the system using black and white particles in a clear fluid, each microcapsule contains positively charged white particles and negatively charged black particles suspended in a transparent fluid. When one polarity of the voltage is applied, the white particles move to the top of the microcapsule, where they become visible to the user (this part appears white). At the same time, the oppositely charged black particles are pulled to the bottom of the microcapsules, where they are no longer visible to the viewer. By reversing this process, the black particles migrate to the top of
the capsule and the white particles to the bottom, which now makes the surface appear dark at that spot.
Cholesteric Liquid Crystals
Cholesteric liquid crystal materials also have many of the positive attributes of paper, and they have the added advantage of being amenable to full color. The optical and electrical properties of a cholesteric liquid crystal material allow it to form two stable textures when sandwiched between conducting electrodes. The first is a reflective planar texture with a helical twist whose pitch, p, can be tuned to reject a portion of visible light: When the material is placed on a black background, the viewer sees a brilliant color reflection. The second is a focal conic texture that is relatively transparent. The reflection bandwidth (Δλ) in the perfect planar texture is approximately 100 nanometers (100 billionths of a meter). This narrow selected reflection band is different from the broadband white reflection of the Gyricon and electronic ink reflective displays. Upon application of a voltage, V1, the planar structure transforms into the focal conic state, which is nearly transparent to all wavelengths in the visible-light range. The black background is then visible, and an optical contrast is created between reflecting color pixels and black pixels. In this state, the voltage can be removed and the focal conic state will remain indefinitely, creating a bistable memory between the reflecting planar state and the transparent focal conic state. In order to revert from the focal conic state back to the planar reflecting texture, the molecules must transition through a highly aligned state, which requires the application of a voltage V2, which is slightly higher than V1. Abruptly turning off the voltage after the aligned state results in the planar texture.
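The selective reflection described above follows the standard relations for cholesteric liquid crystals (textbook results, not spelled out in this article): the center of the reflected band is set by the helical pitch and the average refractive index, and the width of the band by the birefringence.

```latex
\lambda_0 = \bar{n}\, p, \qquad \Delta\lambda = \Delta n \, p,
\qquad \text{where } \bar{n} = \tfrac{1}{2}(n_o + n_e), \quad \Delta n = n_e - n_o .
```

For typical liquid crystal birefringence and a pitch chosen to place λ0 in the visible range, Δλ comes out on the order of 100 nanometers, consistent with the bandwidth quoted above.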
There are ways in which the planar texture can be altered to make it more paperlike in its reflectivity. Gray scale is inherent in cholesteric liquid crystal technology since the focal conic domains can be controlled with different levels of voltage. Since cholesteric liquid crystal materials are transparent, they can be vertically integrated to create a true color addition scheme. Although stacking creates more complicated driving circuitry, it preserves resolution and brightness levels since the pixels are vertically integrated rather than spatially arranged across the substrate plane, as is the case with conventional liquid crystal displays. The technology was developed at Kent State University and is now being commercialized by Kent Displays. Cholesteric liquid crystal materials are being developed for document viewers, electronic newspapers and books, and information signs.
Gregory Philip Crawford
See also Cathode Ray Tubes; Liquid Crystal Display
FURTHER READING
Comiskey, B., Albert, J. D., Yoshizawa, H., & Jacobson, J. (1998). An electrophoretic ink for all-printed reflective electronic displays. Nature, 394(6690), 253–255.
Crawford, G. P. (2000). A bright new page in portable displays. IEEE Spectrum, 37(10), 40–46.
Sheridon, N. K., Richley, E. A., Mikkelsen, J. C., Tsuda, D., Crowley, J. C., Oraha, K. A., et al. (1999). The gyricon rotating ball display. Journal of the Society for Information Display, 7(2), 141.
ELIZA
The computer program Eliza (also known as "Doctor") was created by the U.S. computer scientist Joseph Weizenbaum (b. 1923) as an artificial intelligence application for natural language conversation. Considered a breakthrough when published, Eliza was named after the character Eliza Doolittle, who learned how to speak proper English in G. B. Shaw's play Pygmalion. Weizenbaum developed this program in the 1960s while a computer scientist at MIT (1963–1988). Eliza is actually only one specialized script running on a general conversational shell program that could have various scripts with different content. The Eliza script presents the computer's conversational role as a mock Rogerian (referring to the U.S. psychologist Carl Rogers) client-centered psychotherapist while the user plays the role of a client. At the time the program was
so convincing that many people believed that they were talking with a human psychotherapist.
Eliza as Psychotherapist
In client-centered sessions a psychotherapist reflects back what the client says to invite further responses instead of offering interpretations. If a client reports a dream about "a long boat ride," Eliza might respond with "Tell me about boats." Most users would not immediately assume that the program is ignorant of even the basic facts about boats. Weizenbaum designed Eliza to take advantage of the user's projected illusion of understanding as a way of masking the program's profound lack of real-world knowledge. He also carefully noted that the assumption that a program understands what the user says is one made by the user. In 1966 the popular understanding of mainframe computers as electronic brains superior to human capabilities was so strong that most people did indeed project vast knowledge and understanding onto any computer. So, despite flaws and limitations that later users would immediately notice, the first users attributed so much credibility to Eliza's responses that "some subjects have been very hard to convince that ELIZA is not human" (Weizenbaum 1966, 42). Scientists were also impressed with Eliza's potential. Nobel Prize winner Herbert Simon (economist, computer scientist, psychologist, and philosopher of science, 1916–2001) suggested that the U.S. Congress fund the development of Eliza as a substitute for human psychotherapists, and some psychoanalysts attempted to develop more sophisticated scripts for the program. Such efforts inevitably failed. In fact, Weizenbaum criticized Eliza's reception and popularity and asserted that such programs should not be used for sensitive interactions.
Eliza as Conversational Interface
Nevertheless, Eliza's computerized techniques for conversational interaction were a significant improvement in human-computer interfaces. Eliza was among the first artificial intelligence programs that
Talking with ELIZA
Joseph Weizenbaum reports the following as a "typical conversation" with ELIZA (Weizenbaum 1966).
CLIENT: Men are all alike.
ELIZA: IN WHAT WAY
They're always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE
He says I'm depressed much of the time.
I AM SORRY TO HEAR YOU ARE DEPRESSED
It's true. I am unhappy.
DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY
I need some help, that much seems certain.
WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP
Perhaps I could learn to get along with my mother.
TELL ME MORE ABOUT YOUR FAMILY
My mother takes care of me.
allowed for a conversation between people and computers. Eliza opened up the possibility of immediate user input and computer response in ways that were much closer to natural human interactions. A crucial property of human-computer interfaces disclosed by Weizenbaum's program is that they must maintain the user's "sense of being heard and understood" (Weizenbaum 1966, 42). In that sense Eliza satisfied one aspect of the test for artificial intelligence devised by Alan M. Turing (mathematician and founder of computer science and cryptology).
WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU
My father.
YOUR FATHER
You are like my father in some ways.
WHAT RESEMBLANCE DO YOU SEE
You are not very aggressive but I think you don't want me to notice that.
WHAT MAKES YOU THINK I AM NOT VERY AGGRESSIVE
You don't argue with me.
WHY DO YOU THINK I DON'T ARGUE WITH YOU
You are afraid of me.
DOES IT PLEASE YOU TO BELIEVE I AM AFRAID OF YOU
My father is afraid of everybody.
WHAT ELSE COMES TO MIND WHEN YOU THINK OF YOUR FATHER
Bullies.
DOES THAT HAVE ANYTHING TO DO WITH THE FACT THAT YOUR BOYFRIEND MADE YOU COME HERE
Source: Weizenbaum, Joseph. (1966, January). ELIZA—A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1). Retrieved March 22, 2004, from http://i5.nyu.edu/~mm64/x52.9265/january1966.html
Several features of the interactive environment and conversational interface contributed to Eliza's credibility. An environmental feature easily taken for granted today was MIT's MAC (Multi-Access Computer) time-sharing operating system, which allowed multiple users to have quick response times to their individual input. Eliza appeared to speak back to the user the way another person would. A user could generate input spontaneously at the teletype machine and have the program respond to that specific input conversationally at the same teletype—
not unlike today's Internet chat rooms, only with responses generated by a “bot” (robot). Compared to submitting a stack of punched cards and waiting a day for a printout, Eliza's interface was positively friendly.
Interface Problems and How Eliza Solves Them
Weizenbaum's program dealt with several specific interface problems: identifying keywords, discovering minimal context, choosing and calculating appropriate responses (transformations), generating responses for input without any keywords, and, most importantly, allowing for the design of separate, changeable scripts that encode the content, that is, the particular keywords and transformations for a given conversational role. Thus, the shell program that computes responses and a script provide an interface to the content encoded in that script.
The program first scans the user's input sentence to see if any of the words are in its dictionary of keywords. If a keyword is found, then the sentence is "decomposed" by matching it to a list of possible templates. The design of the templates is what discovers some minimal context for the user's input. In one of Weizenbaum's examples, the sentence "It seems that you hate me" is matched to a template for the keywords "YOU" and "ME": (0 YOU 0 ME). The "0" in the template stands for any number of filler words. The template is used to break up the input sentence into four groups: (1) It seems that (2) YOU (3) hate (4) ME. This decomposition is then matched to one of several possible "reassembly" rules that can be used to generate a response. In this case the one chosen is: (WHAT MAKES YOU THINK I : 3 : YOU). The response then substitutes the third part of the input sentence, "hate," into the response "What makes you think I hate you" (Weizenbaum 1966, 38). That is the basic operation of Eliza, although the program has many more technical nuances. The real ingenuity comes from designing the decomposition and reassembly rules that make up the script. We can
easily see how merely reusing input words by putting them into canned sentences leads to a loss of meaning.
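Weizenbaum's original was written for MIT's time-sharing system, not in a modern language; what follows is only a minimal Python sketch of the scan-decompose-reassemble cycle just described. The single rule encodes the (0 YOU 0 ME) decomposition and its reassembly from the example above; the fallback replies and function names are illustrative, not taken from Weizenbaum's script.

```python
import random
import re

# One rule in the spirit of Weizenbaum's example: the decomposition template
# (0 YOU 0 ME) becomes a regular expression whose second group captures
# part 3 (the words between YOU and ME), and the reassembly rule reuses it.
RULES = [
    (re.compile(r"(.*)\byou\b(.*)\bme\b(.*)", re.IGNORECASE),
     lambda m: f"What makes you think I {m.group(2).strip()} you"),
]

# Canned replies for input that matches no keyword template.
FALLBACKS = ["Please go on.", "Tell me more about that."]

def respond(sentence: str) -> str:
    """Return an Eliza-style transformation of the user's sentence."""
    for pattern, reassemble in RULES:
        match = pattern.search(sentence)
        if match:
            return reassemble(match)
    return random.choice(FALLBACKS)

print(respond("It seems that you hate me"))
# -> What makes you think I hate you
```

A real script contains many keywords, several decomposition templates per keyword, and ranked reassembly rules; the point here is only the basic cycle.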
Achievements and Continued Influence
The program's real achievement was as an example of a conversational interface for some useful content. This kind of interface is successful for a narrow, theoretically well-defined, or foreseeable field of interactions such as solving simple arithmetic problems. Eliza quickly entered into intellectual and popular culture and continues to be discussed and cited forty years later. The program has many variants, including the psychiatrist Kenneth Colby's Parry (short for paranoid schizophrenic), the program Racter, described as "artificially insane," and many more sophisticated descendants.
William H. Sterner
See also Dialog Systems; Natural-Language Processing
FURTHER READING
Bobrow, D. G. (1965). Natural language input for a computer problem solving system (Doctoral dissertation, MIT, 1965), source number ADD X1965.
Colby, K. M., Watt, J. B., & Gilbert, J. P. (1966). A computer method of psychotherapy: Preliminary communication. The Journal of Nervous and Mental Disease, 142(2), 148–152.
Lai, J. (Ed.). (2000). Conversational interfaces. Communications of the ACM, 43(9), 24–73.
Raskin, J. (2000). The humane interface—New directions for designing interactive systems. New York: Addison-Wesley.
Rogers, C. (1951). Client-centered therapy: Current practice, implications and theory. Boston: Houghton Mifflin.
Turing, A. M. (1981). Computing machinery and intelligence. In D. R. Hofstadter & D. C. Dennett (Eds.), The mind's I—Fantasies and reflections on self and soul (pp. 53–68). New York: Bantam Books. (Reprinted from Mind, 59[236], 433–460)
Turkle, S. (1984). The second self—Computers and the human spirit. New York: Simon & Schuster.
Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45.
Weizenbaum, J. (1967). Contextual understanding by computers. Communications of the ACM, 10(8), 474–480.
Weizenbaum, J. (1976). Computer power and human reason—From judgment to calculation. San Francisco: W. H. Freeman.
Winograd, T. (1972). Understanding natural language. New York: Academic Press.
E-MAIL
Electronic mail, also called "e-mail" or simply "email," is a system for exchanging text messages between computers. Invented in 1971, e-mail came into very widespread usage in the 1990s and is considered by many to be the most important innovation in personal communications since the telephone. E-mail has changed the way businesses, social groups, and many other kinds of groups communicate.
History of E-mail
E-mail was invented in 1971 by Ray Tomlinson, who was a scientist at BBN in Cambridge, Massachusetts. (The first-ever e-mail message, probably "QWERTY UIOP," was sent as a test between two computers on Tomlinson's desk. Many, but not all, e-mail messages sent since then have been more informative.) This was not the first text message sent via computer, but it was the first sent between computers using the now-standard addressing scheme. The Internet, or Arpanet as it was then called, had come into existence a few years earlier and was used by scientists at a few locations. Users of the Arpanet system already used messaging, but one could only send messages to other users at the same location (e.g., user "TomJones" at State U might easily leave a message for "SallySmith" at the same location). Tomlinson was working on a way to send files between mainframes using a file-transfer program called CPYNET. He decided to extend the messaging system so that users could send messages to other users anywhere in the Arpanet system. One of the problems facing Tomlinson was addressing. How would TomJones at State U indicate that he wanted to send a message to SallySmith at TechU, not State U? Tomlinson chose the @ symbol as the centerpoint for his new addressing system. Information on the right of the @ would indicate the
location, and information on the left would indicate the user, so a message for SallySmith@TechU would arrive at the right place. The @ symbol was an obvious choice, according to Tomlinson, because it was a character that never appeared in names and already had the meaning "at," so it was appropriate for addressing. All e-mail addresses still include this symbol.
E-mail has grown exponentially for three decades since. In the 1970s and 1980s it grew until it was a standard throughout American universities. Starting in 1988 it moved out into the nonuniversity population, promoted by private companies such as CompuServe, Prodigy, and America Online. A study of e-mail growth between 1992 and 1994 showed traffic doubling about every twelve months: 279 million messages sent in November of 1992, 508 million the next year, and topping the 1 billion messages per month mark for the first time in November of 1994 (Lyman and Varian 2000). Not only were more people getting e-mail accounts, but the people who had them were sending more and more messages. For more and more groups, there was enough "critical mass" that e-mail became the preferred way of communicating. By the early twenty-first century e-mail was no longer a novelty, but a standard way of communicating throughout the world between all kinds of people.
Format of E-mail Messages
At its most basic, e-mail is simply a text message with a valid address marked by "To:". Imagine that TomJones@stateu.edu now wants to send an e-mail message to joe@techu.edu. The part of the message after the @ sign refers to an Internet domain name. If the e-mail is to be delivered correctly, this domain must be registered in the Domain Name System (DNS), just as Web pages must be. Likely, TomJones's university keeps a constantly updated list of DNS entries (a "DNS lookup service") so that it knows where to send Tom's outgoing mail. The computer receiving Tom's message must have an e-mail server or know how to forward to one, and must have an account listed for "joe." If either of these
A Personal Story—The Generation Gap When I first went off to college, e-mail was something your university offered only to those savvy enough to take advantage of it. It was an exclusive club: People who could send mail and have it arrive in seconds, rather than the usual two or three days that the U.S. Postal Service required. And so, a freshman at college, I talked with my parents almost every day for free, via e-mail, while my friends racked up large phone bills calling home. The formality of a written letter, or even a phone call, was a treat saved only for a special occasion. But it took some time for my mother to warm to this interaction; to her, e-mail was only on the computer, not personal like a letter could be. Even today, it is more like a second language to her. By the time I graduated from college, e-mail was commonplace and ubiquitous. Despite the diaspora of my college friends across the country, my phone bill remained small, my e-mail rate high, until suddenly a new technology burst onto the scene. In 1997 I started using Instant Messenger (IM), leaving a small window open on the corner of my screen. As my friends slowly opted in we gravitated toward the peripheral contact of the “buddy list” and away from the more formal interaction of e-mail. Gradually, I realized that long e-mail threads had been replaced by quick, frequent IM interaction: a brief question from a friend, a flurry of activity to plan a night out. But I’ve become a bit of a fuddy-duddy; the technology has passed me by. Recently I added a young acquaintance to my buddy list. He mystified me by sending brief messages: "Hi!" To this I would reply, "What's up? Did you have a question?" This would confuse him—why would he have a question? I finally realized that we used the medium in different ways. To me, IM was a path for getting work done, a substitute for a quick phone call or a short e-mail. To him, the presence of friends on his buddy list was simply the warmth of contact, the quick hello of a friend passing by on the Web. Observing his use is fascinating; he has well over a hundred friends on his list, and generally keeps a dozen or more conversations occurring simultaneously. No wonder I rated no more than a quick hello in his busy world! I tried to keep up once, but found I could not match his style of use of the medium. As new technologies arise, their new users will no doubt take to them with a gusto and facility that we cannot fully comprehend. It is our job as designers to ensure that we offer these users the flexibility and control to make of these new media what they will, and not limit them by the boundaries of our own imagination. Alex Feinman
is not correct, the message will be “bounced” back to the original sender. E-mail can be sent to multiple recipients by putting multiple e-mail addresses in the “To” field separated by commas or by using the “cc” field or “bcc” field. CC stands for “Carbon Copy,” and is a convention taken from office communications long predating e-mail. If you receive an e-mail where you are listed under the CC field, this means that you are not the primary intended recipient of the message, but are being copied as a courtesy. Recipients listed in the CC field are visible to all recipients. BCC in contrast stands for “Blind Carbon Copy,” and contents of this field are not visible to message recipients. If you receive a BCC message, other recipients will not see that you were copied on the message, and you will not see other BCC recipients.
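The division of labor described above (split the address at the @ sign, then ask DNS where mail for that domain should go) can be sketched in a few lines of Python. The lookup of MX (mail exchanger) records is how mail servers actually route messages; the sketch relies on the third-party dnspython package and a placeholder address, both of which are assumptions rather than anything specified in this article.

```python
import dns.resolver  # third-party "dnspython" package (assumed installed)

def route(address: str) -> None:
    """Split an address at the @ sign and ask DNS which hosts accept
    mail for the domain (its MX, or mail exchanger, records)."""
    user, _, domain = address.rpartition("@")
    print(f"user: {user!r}, domain: {domain!r}")
    try:
        for record in dns.resolver.resolve(domain, "MX"):
            print("deliver to:", record.exchange)
    except Exception as error:
        # Unknown domain or no mail servers: the message would "bounce."
        print("message would bounce:", error)

route("joe@example.com")  # placeholder address
```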
Standard e-mail messages also contain other, nonessential fields, usually including a "From" field identifying the sender and a "Subject" field summarizing the content. Other optional fields are:
■ MIME type: Describes the file format for attachments
■ HTML formatting: Indicates that the message contains formatting, graphics, or other elements described in the standard Web HTML format
■ Reply-To: Can list a "reply to" address that may be different from the sender. This is useful for lists that want to avoid individual replies being accidentally sent to the entire group.
■ SMS: Indicates that the e-mail can be sent to a device using the Short Message Service (SMS) protocol
used by cell phones and other handheld devices
■ Priority: Can be interpreted by some e-mail programs to indicate different priority statuses
These are only a few of the more common optional fields that may be included in an e-mail. When an e-mail is sent using these optional features, the sender cannot be sure that the recipient's e-mail software will be able to interpret them properly. No organization enforces these as standards, so it is up to developers of e-mail server software and e-mail client software to include or not include them. Companies such as Microsoft and IBM may also add specialized features that work only within their systems. E-mail with specialized features that is sent outside of the intended system doesn't usually cause undue problems, however; there will just be extra text included in the e-mail header that can be disregarded by the recipient.
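As a concrete illustration of the header fields described above, the sketch below uses Python's standard email library to assemble a message carrying several of them. The addresses and the outgoing server name are placeholders, not anything drawn from this article.

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "tomjones@stateu.example"            # placeholder addresses throughout
msg["To"] = "joe@techu.example, sallysmith@techu.example"
msg["Cc"] = "advisor@stateu.example"               # visible to every recipient
msg["Bcc"] = "archive@stateu.example"              # hidden from other recipients when sent
msg["Subject"] = "Draft agenda"
msg["Reply-To"] = "dogtalk-l@lists.example"        # replies go here instead of to the sender
msg.set_content("Plain-text body goes here.")

print(msg.as_string())  # the raw header-plus-body form that travels between servers

# Handing the message to an outgoing SMTP server would look roughly like:
#   import smtplib
#   with smtplib.SMTP("mail.stateu.example") as smtp:
#       smtp.send_message(msg)
```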
E-mail Lists
An important technological development in the history of e-mail was the e-mail list. Lists are one-to-many distributions. A message sent to an e-mail list address (for example, the address of a list named "dogtalk-l") is sent by an individual and received by everyone subscribed to the list. One popular way of administering lists is using ListServ software, which was first developed in 1986 for use on IBM mainframes and is currently marketed by Lsoft (www.lsoft.com). ListServ software has the advantage that membership is self-administered: you don't need a moderator's help to subscribe, unsubscribe, or change membership options; these are done by sending messages that are interpreted and carried out automatically by the server. For example, Tom Jones could subscribe himself to an open list by sending the message SUBSCRIBE dogtalk-l to the appropriate listserv address. And, just as important, he could unsubscribe himself later by sending the e-mail UNSUBSCRIBE dogtalk-l. There are also a wide variety of options for list subscriptions, such as receiving daily digests or subscribing anonymously.
Another popular way of administering groups is through online services such as Yahoo Groups. These
groups are administered through buttons and links on the group web page, not text commands. These groups may also include other features such as online calendars or chatrooms.
E-mail lists, like most other groups, have certain group norms that they follow, and newcomers should take note of them. Some of these are options that are set by the list administrator:
■ Is the list moderated or unmoderated? In moderated lists, an administrator screens all incoming messages before they are sent to the group. In unmoderated lists, messages are immediately posted.
■ Does the list by default "Reply to all"? When users hit the "Reply" button to respond to a list message, will they by default be writing to the individual who sent the message, or to the entire group? Not all lists are the same, and many embarrassments have resulted from failure to notice the differences. Users can always manually override these defaults, simply by changing the recipient of their messages in the "To" line.
Lists also have group norms that are not implemented as features of the software, but are important nonetheless. How strictly are list members expected to stick to the topic? Is the purpose of the list social or purely informational? Are commercial posts welcome or not? Listserv software can be configured to send an automatic “Welcome” message to new members explaining the formal and informal rules of the road.
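The subscription commands described above are themselves ordinary e-mail messages whose body carries the command. Reusing the message-building approach sketched earlier, a subscription request might be assembled and handed to an outgoing server as follows; the listserv address and server name are placeholders, and a real request would need genuine ones substituted in.

```python
import smtplib
from email.message import EmailMessage

command = EmailMessage()
command["From"] = "tomjones@stateu.example"      # placeholder addresses
command["To"] = "listserv@lists.example"         # the server's command address, not the list itself
command.set_content("SUBSCRIBE dogtalk-l")       # or "UNSUBSCRIBE dogtalk-l" to leave

with smtplib.SMTP("mail.stateu.example") as smtp:  # placeholder outgoing server
    smtp.send_message(command)
```

The command goes to the listserv's administrative address rather than to the list address itself, which would broadcast it to every subscriber.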
Social Characteristics of E-mail
Academic researchers in the fields of communications, psychology, and human-computer interaction were quick to recognize that this radical new communications method could have effects on both individuals and organizations. This research area, which encompasses the study of e-mail and other online media, is referred to as the study of "Computer-Mediated Communications," abbreviated CMC. Some well-established characteristics of e-mail are:
Casual Style
Electronic mail was very quickly recognized to have some unique effects on communication style, and possibly to have long-term effects on the groups that use it. Casual style is one common marker of e-mail communication. Many people use the verb "talk" rather than "write," as in "I'll talk to you on e-mail" rather than "I'll write you an e-mail." E-mail never developed the formal salutations and closings of letters: few e-mails begin with "Dear Mr. Jones" or end with "Sincerely, Sally Smith." In 1978 one early e-mail user observed: "One could write tersely and type imperfectly, even to an older person in a superior position and even to a person one did not know very well, and the recipient took no offense. The formality and perfection that most people expect in a typed letter did not become associated with network messages, probably because the network was so much faster, so much more like the telephone" (J.C.R. Licklider, quoted in Vezza 1978). The casual style is partly a result of the unique early-Internet "hacker" culture, but also partly a result of the medium itself. E-mail messages are often delivered in a few seconds, lending a feeling of immediacy. The fact that e-mail is easily deleted and not printed on paper lends a feeling of impermanence (although this is illusory, as many legal defendants are now finding!). While in some settings, such as when conducting corporate or legal business, e-mails are now expected to be formal and guarded in the manner of a letter, in general the literary genre of e-mail remains one of casualness and informality.
E-mail, along with other means of Computer-Mediated Communications, also lends a feeling of social distance. Individuals feel less close and less inhibited via e-mail than they do face-to-face with message recipients. The social distance of e-mail has a number of good and bad effects.
Self-Disclosure via E-mail
Online communication with strangers also leads to a feeling of safety, because the relationship can be more easily controlled. Many support groups for highly personal issues thrive as e-mail lists. Individuals may use an online forum to disclose feelings
and share problems that they would be extremely reluctant to discuss with anyone face-to-face. Online dating services often arrange e-mail exchanges prior to phone or face-to-face meetings. Lack of social cues may sometimes promote an artificial feeling of closeness that Joseph Walther calls a "hyperpersonal" effect (Walther 1996). Individuals may imagine that others are much closer to themselves in attitudes than they really are, and this may lead to highly personal revelations being shared online that would rarely be communicated face-to-face.
Egalitarianism
Text-only communication does not convey status cues or other information that tends to reinforce social differences between individuals. E-mail is believed to promote egalitarian communication (Dubrovsky, Kiesler, and Sethna 1991). Lower-level employees can easily send e-mails to executives whom they would never think to phone or visit, loosening restraints on corporate communication and potentially flattening corporate hierarchies. It has also been observed that students who rarely contribute verbally in classes will contribute more via e-mail or other online discussion, probably because of the increased social distance and reduced inhibition (Harasim 1990).
Negative Effects: Flaming and Distrust
The social distance and lack of inhibition can have negative effects as well. E-mail writers more easily give in to displays of temper than they would in person. In person, blunt verbal messages are often presented with body language and tone of voice to alleviate anger, but in e-mail these forms of communication are not present. Recipients of rude e-mails may more easily feel insulted, and respond in kind. Insulting, angry, or obscene e-mail is called "flaming." In one early experimental study comparing e-mail and face-to-face discussions, researchers counted 34 instances of swearing, insults, and name-calling, behaviors that never occurred in a face-to-face group performing the same task (Siegel et al. 1986). For similar reasons, it is often harder to build trust through e-mail. Rocco (1998) found that groups using e-mail could not solve a social
dilemma that required trust building, but groups working face-to-face could do so easily. Beyond these interpersonal difficulties that can occur online, there are some practical limitations of e-mail as well. The asynchronous nature of e-mail makes it difficult to come to group decisions (see Kiesler and Sproull 1992). Anyone who has tried to use e-mail to set up a meeting time among a large group of busy people has experienced this difficulty.
Culture Adapts to E-mail
These observations about the effects of e-mail were made relatively early in its history, before it had become as widespread as it currently is. As with all new technologies, however, culture rapidly adapts. It has not taken long, for example, for high-level business executives to assign assistants to screen e-mails the way they have long done for phone calls. It is probably still the case that employees are more likely to exchange e-mail with top executives than to have a phone or personal meeting with them, but the non-hierarchical utopia envisioned by some has not yet arrived.
A simple and entertaining development helps e-mail senders convey emotion a little better than plain text alone. "Emoticons" are sideways drawings made with ASCII symbols (letters, numbers, and punctuation) that punctuate texts. The first emoticon was probably : ) which, when viewed sideways, looks like a smiley face. This emoticon is used to alert a recipient that comments are meant as a joke, or in fun, which can take the edge off of blunt or harsh statements.
Most experienced e-mail users also develop personal awareness and practices that aid communication. Writers learn to reread even short messages for material that is overly blunt, overly personal, or otherwise ill-conceived. If harsh words are exchanged via e-mail, wise coworkers arrange a time to meet face-to-face or on the phone to work out differences. If a group needs to make a decision over e-mail, such as setting a meeting time, they adopt practices such as having the first sender propose multiple-choice options (should we meet Tuesday at 1 or Wednesday at 3?) or assigning one person to collect all scheduling constraints.
Groups also take advantage of e-mail's good characteristics to transform themselves in interesting
ways. Companies are experimenting with more virtual teams, and allowing workers to telecommute more often, because electronic communications make it easier to stay in touch. Universities offer more off-campus class options than ever before for the same reason. Organizations may take on more democratic decision-making practices, perhaps polling employees as to their cafeteria preferences or parking issues, because collecting opinions by e-mail is far easier than previous methods of many-to-many communication.
Future of E-mail
Electronic mail has been such a successful medium of communication that it is in danger of being swamped by its own success. People receive more electronic mail than they can keep up with, and struggle to filter out unwanted e-mail and process the relevant information without overlooking important details. Researchers have found that e-mail for many people has become much more than a communication medium (Whittaker and Sidner 1996). For example, many people do not keep a separate address book to manage their personal contacts, but instead search through their old e-mail to find colleagues' addresses when needed. People also use their overcrowded e-mail "inboxes" as makeshift calendars, "to-do" lists, and filing systems. Designers of high-end e-mail client software are trying to accommodate these demands by incorporating new features such as better searching capability, advanced filters, and "threading" to help users manage documents (Rohall and Gruen 2002). E-mail software is often integrated with electronic calendars and address books to make it easy to track appointments and contacts. And e-mail is increasingly integrated with synchronous media such as cell phones, instant messaging, or pagers to facilitate decision making and other tasks that are difficult to accomplish asynchronously.
The Spam Problem
A larger, more insidious threat to e-mail comes in the form of "spam" or "junk" e-mail. Spam refers to unwanted e-mail sent to many recipients. The
term was first used to describe rude but fairly innocuous e-mailings, such as off-topic comments sent to group lists, or personal messages accidentally sent to a group. But spam has taken on a more problematic form, with unscrupulous mass-marketers sending unsolicited messages to thousands or even millions of e-mail addresses. These spammers are often marketing shady products (video spy cameras, pornographic websites) or, worse, soliciting funds in e-mail scams.
These professional spammers take advantage of two of the characteristics of e-mail that have made it so popular: its flexibility and inexpensiveness. Spammers usually forge the "from" line of the e-mails they send, so that their whereabouts cannot be easily blocked. (Messages usually include Web addresses hosted in nations where it would be difficult to shut them down.) Spammers also take advantage of the fact that e-mail is essentially free for senders. The only significant cost of e-mail is borne by recipients, who must pay to store e-mail until it can be read or deleted. Low sending cost means that spammers can afford to send out advertisements that get only a minuscule fraction of responses. The effect of this spamming is that users are often inundated with hundreds of unwanted e-mails, storage requirements for service providers are greatly increased, and the marvelously free and open world of international e-mail exchange is threatened.
What is the solution to spam? Many different groups are working on solutions, some primarily technical, some legal, and some economic or social. Software companies are working on spam "filters" that can identify and delete spam messages before they appear in a user's inbox. The simplest ones work on the basis of keywords, but spammers quickly developed means around these with clever misspellings. Other filters only let through e-mails from known friends and colleagues. But most users find this idea distasteful: isn't the possibility of finding new and unexpected friends and colleagues one of the great features of the Internet? Research continues on filters that use more sophisticated algorithms, such as Bayesian filtering, to screen out a high percentage of unwanted e-mail.
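The idea behind Bayesian filtering can be shown with a toy naive Bayes scorer; the training messages, tokenization, and add-one smoothing below are illustrative assumptions, and production filters are far more elaborate than this sketch.

```python
import math
from collections import Counter

def train(spam_docs, ham_docs):
    """Count word frequencies in known spam and known legitimate ("ham") messages."""
    spam_counts = Counter(w for d in spam_docs for w in d.lower().split())
    ham_counts = Counter(w for d in ham_docs for w in d.lower().split())
    return spam_counts, ham_counts, len(spam_docs), len(ham_docs)

def spam_probability(message, spam_counts, ham_counts, n_spam, n_ham):
    """Naive Bayes estimate of P(spam | words), with add-one smoothing."""
    vocab_size = len(set(spam_counts) | set(ham_counts))
    spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
    log_spam = math.log(n_spam / (n_spam + n_ham))
    log_ham = math.log(n_ham / (n_spam + n_ham))
    for word in message.lower().split():
        log_spam += math.log((spam_counts[word] + 1) / (spam_total + vocab_size))
        log_ham += math.log((ham_counts[word] + 1) / (ham_total + vocab_size))
    return 1 / (1 + math.exp(log_ham - log_spam))  # convert log scores to a probability

model = train(["buy cheap spy cameras now", "cheap pills buy now"],
              ["meeting tuesday at one", "draft of the paper attached"])
print(round(spam_probability("buy cheap cameras", *model), 2))  # close to 1 for this toy data
```

The deliberate misspellings mentioned above are, among other things, an attempt to evade the word statistics that a filter like this relies on.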
There are also attempts afoot to outlaw spam. In December 2003 the U.S. Congress passed a bill (CAN-SPAM, 2004), designed to limit spamming. This bill would, among other things, mandate that commercial e-mailers provide "opt-out" options to recipients and prohibit false e-mail return addresses and false subject headings. This bill will not eliminate the problem, because most spam currently originates outside of the United States. Similar multinational efforts may eventually have an effect, however. Individuals can also purchase antispam software or antispam services that will delete some (but not all) unwanted e-mails. The best way to avoid receiving spam is never to list your e-mail address on your website in machine-readable text. Many spam lists are assembled by automatic spider software that combs through webpages looking for the telltale @ sign. If you still want your e-mail address to be available on the Web, two simple ways around this are to replace the @ symbol in your e-mail address with the word "at" or to create a graphic of your e-mail address and use it as a substitute for the text. Despite these challenges, electronic mail has carved itself an essential place in the social world of the twenty-first century and should continue to grow in importance and usefulness for many years to come.
Nathan Bos
See also Internet in Everyday Life; Spamming
FURTHER READING
Bordia, P. (1997). Face-to-face versus computer-mediated communication. Journal of Business Communication, 34, 99–120.
CAN-SPAM legislation. Retrieved March 31, 2004, from http://www.spamlaws.com/federal/108s877.html
Crocker, D. E-mail history. Retrieved March 31, 2004, from www.livinginternet.com
Dubrovsky, V. J., Kiesler, S., & Sethna, B. N. (1991). The equalization phenomenon: Status effects in computer-mediated and face-to-face decision-making groups. Human-Computer Interaction, 6, 119–146.
Garton, L., & Wellman, B. (1995). Social impacts of electronic mail in organizations: A review of the research literature. In B. R. Burleson (Ed.), Communications Yearbook, 18. Thousand Oaks, CA: Sage.
Harasim, L. M. (Ed.). (1990). Online education: Perspectives on a new environment (pp. 39–64). New York: Praeger.
Hardy, I. R. (1996). The evolution of ARPANET e-mail. History thesis, University of California at Berkeley. Retrieved March 31, 2004, from http://www.ifla.org/documents/internet/hari1.txt
Kiesler, S., & Sproull, L. S. (1992). Group decision-making and communication technology. Organizational Behavior and Human Decision Processes, 52, 96–123.
Lyman, P., & Varian, H. R. (2000). How much information. Retrieved March 31, 2004, from http://www.sims.berkeley.edu/how-much-info
Rocco, E. (1998). Trust breaks down in electronic contexts but can be repaired by some initial face-to-face contact. In Proceedings of Human Factors in Computing Systems, CHI 1998 (pp. 496–502).
Rohall, S. L., & Gruen, D. (2002). Re-mail: A reinvented e-mail prototype. In Proceedings of Computer-Supported Cooperative Work 2002. New York: Association for Computing Machinery.
Siegel, J., Dubrovsky, V., Kiesler, S., & McGuire, T. W. (1986). Group processes in computer-mediated communication. Organizational Behavior and Human Decision Processes, 37, 157–186.
Sproull, L., & Kiesler, S. (1991). Connections: New ways of working in the networked organization. Cambridge, MA: The MIT Press.
Vezza, A. (1978). Applications of information networks. In Proceedings of the IEEE, 66(11).
Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23, 3–43.
Whittaker, S., & Sidner, C. (1996). E-mail overload: Exploring personal information management of e-mail. In Proceedings of Computer-Human Interaction. New York: ACM Press.
Zakon, R. H. (1993). Hobbes' Internet timeline. Retrieved March 31, 2004, from http://www.zakon.org/robert/internet/timeline/
EMBEDDED SYSTEMS
Embedded systems use computers to accomplish specific and relatively invariant tasks as part of a larger system function—as when, for example, a computer in a car controls engine conditions. Computers are embedded in larger systems because of the capability and flexibility that is available only through digital systems. Computers are used to control other elements of the system, to manipulate signals directly and in sophisticated ways, and to take increasing responsibility for the interface between humans and machines. Prior to this embedding of computers in larger systems, any nontrivial system control required the design and implementation of complex mechanisms or analog circuitry. These special-purpose dedicated mechanisms and circuits were often difficult to design, implement, adjust, and maintain. Once implemented, any significant changes to them were impractical. Further, there were severe limits on the types of control that were feasible using this approach.
The embedding of computers in larger systems enables the implementation of almost unlimited approaches to control and signal processing. A computer can implement complex control algorithms that can adapt to the changing operation of a larger system. Once a computer has been embedded in a larger system, it can also be used to provide additional functionality, such as communications with other computers within or outside the larger system that it serves. It can also be used to support improved interfaces between machines and human operators. In addition, an embedded computing system can be updated or altered through the loading of new software, a much simpler process than is required for changes to a dedicated mechanism or analog circuit. People living in modern technological societies come into contact with many embedded systems each day. The modern automobile alone presents several examples of embedded systems. Computerbased engine control has increased fuel efficiency, reduced harmful emissions, and improved automobile starting and running characteristics. Computerbased control of automotive braking systems has enhanced safety through antilock brakes. Embedded computers in cellular telephones control system management and signal processing, and multiple computers in a single handset handle the human interface. Similar control and signal-processing functions are provided by computers in consumer entertainment products such as digital audio and video players and games. Embedded computing is at the core of high-definition television. In health care, many people owe their lives to medical equipment and appliances that could only be implemented using embedded systems.
Defining Constraints
The implementation and operation constraints on embedded systems differentiate these systems from general-purpose computers. Many embedded systems require that results be produced on a strict schedule or in real time. Not all embedded systems face this constraint, but it is imposed much more on embedded systems than on general-purpose computers. Those familiar with personal computers rarely think
of the time required for the computer to accomplish a task because results are typically returned very quickly, from the average user's point of view. Some personal-computing operations, such as very large spreadsheet calculations or the editing of large, high-resolution photographs, may take the computer a noticeable amount of time, but even these delays are rarely more than an inconvenience. In contrast, embedded systems can operate at extremely high speeds, but if an embedded system violates a real-time constraint, the results can be catastrophic. For example, an automobile engine controller may need to order the injection of fuel into a cylinder and the firing of a sparkplug at a rate of thousands of injections and sparks each second, and timing that deviates by even less than one-thousandth of a second may cause the engine to stall. Systems that involve control or signal processing are equally intolerant of results that come early or late: Both flaws are disruptive. Limited electrical power and the need to remove heat are challenges faced by the designers of many embedded systems because many embedded applications must run in environments where power is scarce and the removal of heat is inconvenient. Devices that operate on batteries must strike a balance between demand for power, battery capacity, and operation time between charges. Heat removal is a related problem because heat production goes up as more power is used. Also, embedded systems must often fit within a small space to improve portability or simply to comply with space constraints imposed by a larger system. Such space constraints exacerbate the problem of heat removal and thus further favor designs that limit power consumption. A cellular telephone, for example, features embedded systems that are hampered by significant power and space constraints. A less obvious example is the avionics package for a general-aviation aircraft. Such a system must not draw excessive power from the aircraft's electrical system, and there may be little space available for it in the aircraft cockpit. Users of older personal computers learned to expect frequent computer failures requiring that the users restart the computer by pressing a combination of keys or a reset button. Newer personal computers are more robust, but many embedded systems demand even greater robustness and cannot rely on human intervention to address failures that might arise.
Users of personal computers accept that software often includes bugs, but the same users expect that their hardware—in this context, devices such as household appliances, automobiles, and telephones—will operate without problems. Traditionally, such devices have been very robust because they were relatively simple. The embedding of computing into these sorts of devices offers potential for greater functionality and better performance, but the consumer still expects the familiar robustness. Further, embedded computing is often found in systems that are critical to the preservation of human life. Examples include railroad signaling devices and medical diagnostic and assistive technology such as imaging systems and pacemakers. These systems must be robust when first placed in service and must either continue to operate properly or fail only in ways that are unlikely to cause harm. Further, as mentioned above, these systems must operate without human intervention for extended periods of time. Most current embedded systems operate in isolation, but some perform their functions with limited monitoring and direction from other computers. As with general-purpose computing, there appears to be a trend toward increasing the interoperability of embedded systems. While increasing the interaction among embedded systems offers the potential for new functionality, networking of embedded computing devices also increases security concerns.
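The hard real-time constraint described above can be illustrated with a small, purely illustrative sketch. Desktop Python is not a real-time environment, and the one-millisecond period below is an assumption made for the example rather than a real engine specification; the point is only that a periodic controller must check whether each cycle finished before its deadline and must treat an overrun as a fault.

```python
import time

PERIOD_S = 0.001  # assumed 1 ms control period, for illustration only

def control_step():
    # Placeholder for reading sensors and commanding actuators.
    pass

def run(cycles=1000):
    """Run a periodic loop and count how many cycles missed their deadline."""
    missed = 0
    next_deadline = time.monotonic() + PERIOD_S
    for _ in range(cycles):
        control_step()
        now = time.monotonic()
        if now > next_deadline:
            # A real embedded controller would treat this as a fault, not just
            # count it; a late result can be as damaging as a wrong one.
            missed += 1
            next_deadline = now + PERIOD_S
        else:
            # Busy-waiting stands in for a hardware timer interrupt.
            while time.monotonic() < next_deadline:
                pass
            next_deadline += PERIOD_S
    return missed

print("missed deadlines:", run())
```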
An Illustrative Example
Although embedded systems pervade such everyday devices, people tend to think of them in conjunction with cutting-edge technology, such as the various spacecraft developed and deployed by NASA. The first embedded computer used by NASA in a manned spacecraft was developed for the Gemini program in the early 1960s. That computer was used for guidance and navigation. (The Mercury program preceding Gemini involved manned space flight, but the flights were simple enough to be controlled from the ground.) The NASA programs following Gemini placed increasing reliance on embedded computers to accomplish a range of tasks required for the successful completion of manned space missions.
Unmanned space flights have needed embedded computers to provide flight control for spacecraft too far away to tolerate control from Earth. Closer to Earth, however, the modern automobile may contain a hundred embedded computers, each with greater computational capabilities than the single computer that traveled on the Gemini space flights. Embedded computer engine control was introduced in the late 1970s to satisfy emissions requirements while maintaining good performance. Those who have operated automobiles from before the days of embedded systems will recall that those automobiles were more difficult to start when the weather was too cold or too hot or too wet. Automobiles of that era were also less fuel efficient, emitted more pollution, and had performance characteristics that varied with driving and environmental conditions more than is the case today. Embedded computer engine control addresses these variations by adapting the engine control in response to sensed environmental and engine operation data. The next element in the automobile drive train is the transmission. The first cars with automatic transmissions typically suffered from poorer performance and fuel economy than cars with manual transmissions. Modern automatic transmissions controlled by embedded computers, by comparison, compare favorably with manual transmissions in both performance and economy. The computer control supports the selection of different shifting strategies depending on whether the driver prefers sports driving or economy driving. Further, manufacturers can match a single transmission to a wide range of engines by changing the software in the transmission controller. The embedded transmission system can also be configured to communicate with the embedded engine system to generate better performance and economy than each system could achieve operating independently. Other familiar automotive capabilities provided through embedded systems include cruise control, control of antilock brakes, traction control, active control of vehicle suspension, and control of steering for variable power assist or four-wheel steering. Automobile interior climate and accessories such as
wipers and power windows may be controlled by embedded systems. In some instances late-model automobiles that have been recalled to the factory have had the required repair accomplished entirely through an embedded computer software change. Embedded communication and navigation systems for automobiles are now available, and these systems are more complex than those used in the early space program. In addition, the human interface between the automobile and its driver is now managed by one or more embedded systems. In the early 1980s, several automakers replaced analog human interfaces with computer-based interfaces. Some of those interfaces were not well received. Later-model automobiles retained the computer control of the interfaces, but returned to the more familiar analog appearance. For example, many drivers prefer the dial speedometer to a digital display, so even though the speedometer is actually controlled by a computer, auto designers reverted from digital to analog display.
Increasing Dependence
Embedded computers can be used to implement far more sophisticated and adaptive control for complex systems than would be feasible with mechanical devices or analog controllers. Embedded systems permit the human user to interact with technology as the supervisor of the task rather than as the controller of the task. For example, in an automobile, the engine controller frees the driver from having to recall a particular sequence of actions to start a car in cold weather. Similarly, the automatic transmission controller releases the driver from tracking engine speed, load, and gear, leaving the driver free to concentrate on other important driving tasks. The computer's ability to manage mundane tasks efficiently is one of the great assets of embedded systems. Unfortunately, the increased complexity that embedded systems make possible and the increased separation between the user and the machine also introduce new potential dangers. Embedded systems make previously impractical applications practical. Prior to the late 1970s, mobile telephone service was cumbersome and expensive because of limited capabilities to manage the
available radio communication channels. Embedded computing initially made the modern cellular telephone industry feasible because computers embedded within the cellular telephone base stations provided radio channel management and efficient hand-offs as mobile users moved from cell to cell. Newer digital cell phones include computers embedded within the handsets to improve communication and power efficiency. Without embedded computing, the explosive expansion of the cell phone industry could not have occurred. Implanted pacemakers help maintain the human heart’s proper pumping rhythm and have improved the quality and duration of life for many people, but those people are now dependent upon this embedded system.
The Future
Embedded systems will certainly expand in functionality, influence, and diversity for the foreseeable future. The digital technology required to implement embedded systems continues to improve, and computer hardware is becoming more powerful, less expensive, smaller, and more capable of addressing electrical power considerations. In parallel, techniques for the production of robust real-time software are steadily improving. Digital communication capability and access is also expanding, and thus future embedded systems are more likely to exhibit connectivity outside of their larger systems. Lessons learned from early interfaces between humans and embedded systems, coupled with the improvements in embedded computing, should yield better interfaces for these systems. Embedded systems are likely to become so woven into everyday experience that we will be unaware of their presence.
Ronald D. Williams
See also Fly-by-Wire; Ubiquitous Computing
FURTHER READING
Graybill, R., & Melhem, R. (2002). Power aware computing. New York: Kluwer Academic Press/Plenum.
Hacker, B. (1978). On the shoulders of Titans: A history of Project Gemini. Washington, DC: NASA Scientific and Technical Information Office.
Jeffrey, K. (2001). Machines in our hearts: The cardiac pacemaker, the implantable defibrillator, and American health care. Baltimore: Johns Hopkins University Press.
Jurgen, R. (1995). Automotive electronics handbook. New York: McGraw-Hill.
Leveson, N. (1995). Safeware: System safety and computers. Reading, MA: Addison-Wesley.
Shaw, A. (2001). Real-time systems and software. New York: John Wiley & Sons.
Stajano, F. (2002). Security for ubiquitous computing. West Sussex, UK: John Wiley & Sons.
Vahid, F., & Givargis, T. (2002). Embedded systems design: A unified hardware/software introduction. New York: John Wiley & Sons.
Wolf, W. (2001). Computers as components. San Francisco: Morgan Kaufmann Publishers.

ENIAC
The Electronic Numerical Integrator and Computer (ENIAC), built at the University of Pennsylvania between 1943 and 1946, was the first electronic digital computer that did useful work. Large analog computers had existed since Vannevar Bush and his team had built the differential analyzer in 1930. Depending on one's definition, the first digital computer may have been the experimental Atanasoff-Berry machine in 1940. Unlike its predecessors, ENIAC possessed many of the features of later digital computers, with the notable exceptions of a central memory and fully automatic stored programs. In terms of its goals and function, ENIAC was the first digital supercomputer, and subsequent supercomputers have continued in the tradition of human-computer interaction that it established. They tend to be difficult to program, and a technically adept team is required to operate them. Built from state-of-the-art components, they involve a demanding trade-off between performance and reliability. Their chief purpose is to carry out large numbers of repetitious numerical calculations, so they emphasize speed of internal operation and tend to have relatively cumbersome methods of data input and output. Remarkably, ENIAC solved problems of kinds that continue to challenge supercomputers more than a half-century later: calculating ballistic trajectories, simulating nuclear explosions, and predicting the weather.
A technician changes a tube in the ENIAC computer during the mid-1940s. Replacing a faulty tube required checking through some 19,000 possibilities. Photo courtesy of the U.S. Army.
Historians debate the relative importance of various members of the ENIAC team, but the leaders were the physicist John W. Mauchly, who dreamed of a computer to do weather forecasting, and the engineer J. Presper Eckert. After building ENIAC for the U.S. Army, they founded a company to manufacture computers for use in business as well as in government research. Although the company was unprofitable, their UNIVAC computer successfully transferred the ENIAC technology to the civilian sector when they sold out to Remington Rand in 1950. Both development and commercialization of digital computing would have been significantly delayed had it not been for the efforts of Mauchly and Eckert.
The problem that motivated the U.S. Army to invest in ENIAC was the need for accurate firing tables for aiming artillery during World War II. Many new models of guns were being produced, and working out detailed instructions for hitting targets at various distances empirically by actually shooting the guns repeatedly on test firing ranges was costly in time and money. With data from a few test firings, one can predict a vast number of specific trajectories mathematically, varying such parameters as gun angle and initial shell velocity. The friction of air resistance slows the projectile second by second as it flies, but air resistance depends on such factors as the momentary speed of the projectile and its altitude.
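The interval-by-interval method described in the next paragraph can be sketched in a few lines of modern code. The drag model and constants below are illustrative assumptions, not the Army's actual firing-table equations; the point is simply that each short time step updates the shell's velocity and position from the state computed in the previous step.

```python
import math

G = 9.81          # gravity, m/s^2
DT = 0.01         # time step, seconds
K_DRAG = 0.0002   # assumed drag coefficient per unit mass (illustrative)

def trajectory(speed, angle_deg):
    """Step a shell forward in time until it returns to the ground."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    x = y = t = 0.0
    while y >= 0.0:
        v = math.hypot(vx, vy)
        # Drag opposes the current direction of motion and grows with speed.
        ax = -K_DRAG * v * vx
        ay = -G - K_DRAG * v * vy
        vx += ax * DT
        vy += ay * DT
        x += vx * DT
        y += vy * DT
        t += DT
    return x, t

rng, flight_time = trajectory(speed=500.0, angle_deg=45.0)
print(f"range: {rng:,.0f} m, time of flight: {flight_time:.1f} s")
```

A human "computer" with a desk calculator had to carry out essentially this loop by hand, one time step at a time, which is why a single trajectory took so long to tabulate.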
Thus, the accuracy of calculations is improved by dividing the trajectory into many short intervals of time and figuring the movement of the projectile in each interval on the basis of the output of the preceding intervals and changing parameters. At the dedication ceremony for ENIAC in 1946, the thirty-second trajectory of an artillery shell was calculated to demonstrate the machine's effectiveness. Using desk calculators, people would take three days to complete the job, compared with thirty minutes on the best ballistics analog computer, the differential analyzer. ENIAC did the calculation accurately in twenty seconds—less than the time the shell would be in the air. Because World War II had ended by the time ENIAC was ready, the first real job it did was evaluating the original design for the thermonuclear (hydrogen) bomb, finding that the design was flawed and causing the atomic scientists to develop a better approach. Filling 167 square meters in a large room, the 27-metric-ton ENIAC was constructed in a "U" shape, with the panels and controls facing inward toward an area where the operators worked. ENIAC was built with about eighteen thousand vacuum tubes, consuming 174 kilowatts of electric power and keeping the room quite hot. Many experts had been skeptical that the machine could work because vacuum tubes frequently burned out, but taking great care in testing the tubes and running them below their specifications kept failures in use to about six hundred a year. ENIAC had both an IBM card reader and an automatic card punch, used chiefly for output and input of data calculated during one run that would be used later in another run; the cards were not used to enter programs. The computer was programmed largely by plugging in equipment and connecting by means of cables the twenty accumulators (electronic adders) that performed the calculations. Hundreds of flashing lights on the accumulators gave the operators clues about how the work was progressing. The calculations were done in the decimal system, rather than binary, and parameters were input manually by setting rotary switches. Switches also controlled local program-control circuits. To set parameters for a given run, the programmers held
paper instructions in one hand and used the other hand to turn rotary switches on the tall function tables, one 0–9 switch for each digit. Arranged in rows from head to ankle height, these switches had a simple color coding to reduce errors: Every fifth row of knobs was red and the others black; the plates behind the knobs alternated shiny with black, three columns at a time. Multiplication, division, and square-root calculation were handled by specially built components that could be plugged in as needed. A master programmer unit handled conditional ("if-then") procedures. Programmers might require a month to write a program for ENIAC and from a day to a week to set up and run one, but this was not as inefficient as it seems because after the machine was ready to do a particular job, a large number of runs could be cranked out rapidly with slight changes in the parameters. ENIAC continued to do useful work for the military until October 1955. Parts of this pioneering machine are on display at the National Museum of American History in Washington, D.C., along with videos of Presper Eckert explaining how it was operated.
William Sims Bainbridge
See also Atanasoff-Berry Computer; Supercomputers
FURTHER READING
McCartney, S. (1999). ENIAC: The triumphs and tragedies of the world's first computer. New York: Walker.
Metropolis, N., Howlett, J., & Rota, G.-C. (Eds.). (1980). A history of computing in the twentieth century. New York: Academic Press.
Stern, N. (1981). From ENIAC to UNIVAC: An appraisal of the Eckert-Mauchly computers. Bedford, MA: Digital Press.
Weik, M. H. (1961, January/February). The ENIAC story. Ordnance, 3–7.
ERGONOMICS
The field of human factors and ergonomics plays an important and continuing role in the design of
human-computer interfaces and interaction. Researchers in human factors may specialize in problems of human-computer interaction and system design, or practitioners with human factors credentials may be involved in the design, testing, and implementation of computer-based information displays and systems. Human factors and ergonomics can be defined as the study, analysis, and design of systems in which humans and machines interact. The goal of human factors and ergonomics is safe, efficient, effective, and error-free performance. Human factors researchers and practitioners are trained to create systems that effectively support human performance: Such systems allow work to be performed efficiently, without harm to the worker, and prevent the worker from making errors that could adversely affect productivity or, more importantly, have adverse effects on the worker or others. Research and practice in the field involve the design of workplaces, systems, and tasks to match human capabilities and limitations (cognitive, perceptual, and physical), as well as the empirical and theoretical analysis of humans, tasks, and systems to gain a better understanding of human-system interaction. Methodologies include controlled laboratory experimentation, field and observational studies, and modeling and computer simulation. Human factors and ergonomics traces its roots to the formal work descriptions and requirements of the engineer and inventor Frederick W. Taylor (1856–1915) and the detailed systems of motion analysis created by the engineers Frank and Lillian Gilbreth (1868–1924, 1878–1972). As human work has come to require more cognitive than physical activities, and as people have come to rely on increasingly sophisticated computer systems and automated technologies, human factors researchers and practitioners have naturally moved into the design of computerized, as well as mechanized, systems. Human factors engineering in the twenty-first century focuses on the design and evaluation of information displays, on advanced automation systems with which human operators interact, and on the appropriate role of human operators in supervising and controlling computerized systems.
An Ergonomics Approach to Human-Computer Interaction
When it comes to designing computer systems, human factors and ergonomics takes the view that people do not simply use computers; rather, they perform tasks. Those tasks are as various as controlling aircraft, creating documents, and monitoring hospital patients. Thus, when analyzing a computer system, it is best to focus not on how well people interact with it (that is, not on how well they select a menu option, type in a command, and so forth), but on how well the system allows them to accomplish their task-related goals. The quality of the interface affects the usability of the system, and the more congruent the computer system is with the users' task- and goal-related needs, the more successful it will be. David Woods and Emilie Roth (researchers in cognitive engineering) describe a triad of factors that contribute to the complexity of problem solving: the world to be acted on, the agents (automated or human), and how the task is represented. In human-computer systems, the elements of the triad are, first, aspects of the task and situation for which the system is being employed; second, the human operator or user; and third, the manner in which information relevant to the task is represented or displayed. The role of the computer interface is to serve as a means of representing the world to the human operator. Research related to human factors and ergonomics has addressed numerous topics related to human-computer interaction, including the design of input devices and interaction styles (for example, menus or graphical interfaces), computer use and training for older adults, characteristics of textual displays, and design for users with perceptual or physical limitations. Areas within human factors and ergonomics that have direct applicability to the design of human-computer systems include those focused on appropriate methodologies and modeling techniques for representing task demands, those that deal with issues of function allocation and the design of human-centered automation, and those concerned with the design of display elements that are relevant to particular tasks.
Methodologies and Modeling Frameworks for Task and Work Analysis
John Gould and Clayton Lewis (1985) claimed that a fundamental component of successful computer system design is an early and continual focus on system users and the tasks that they need to perform. In addition, Donald Norman (1988) has suggested that for a computer system to be successful, users must be given an appropriate model of what the system does, and how; information on system operation must be visible or available; and users must have timely and meaningful feedback regarding the results of their actions. Users should never be unable to identify or interpret the state of the computer system, nor should they be unable to identify or execute desired actions. Therefore, human factors and ergonomics research that focuses on human-computer system design devotes considerable energies to analysis of system components, operator characteristics, and task requirements, using task and work analysis methods. A hierarchical task analysis (HTA), as described by Annett and Duncan (1967), decomposes a task into a hierarchical chain of goals, subgoals, and actions. Plans are associated with goals to specify how and when related subgoals and activities are carried out, and can take on a variety of structures. For instance, plans can have an iterative structure, describing activities that are repeated until some criterion is met. Plans may also describe a set of strictly sequential activities, or they may describe activities that can be done in parallel. In HTA, as in other forms of task analysis, one can specify task demands, criteria for successful completion, knowledge or skills required, information needed, and likely errors associated with each step. In human-computer system design HTA is used to help ensure that display information content corresponds to identified activities and that the organization of displays and control activities matches task requirements, and to provide additional support (in the form of better information displays, training, or task requirements) to reduce the likelihood of error.
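A hierarchical task analysis can be represented directly as a nested data structure. The fragment below is a hypothetical example (the goals and plans are invented for illustration, not drawn from a published HTA): each goal carries a plan stating when its subgoals are carried out, and a simple walk of the tree prints the decomposition.

```python
# A hypothetical HTA fragment for "send an e-mail message".
hta = {
    "goal": "0. Send an e-mail message",
    "plan": "Do 1-3 in order",
    "subgoals": [
        {"goal": "1. Open a new message window", "plan": None, "subgoals": []},
        {
            "goal": "2. Compose the message",
            "plan": "Do 2.1, then 2.2; repeat 2.2 until the text is complete",
            "subgoals": [
                {"goal": "2.1 Enter the recipient address", "plan": None, "subgoals": []},
                {"goal": "2.2 Type the message text", "plan": None, "subgoals": []},
            ],
        },
        {"goal": "3. Press Send", "plan": None, "subgoals": []},
    ],
}

def print_hta(node, depth=0):
    """Print goals, subgoals, and plans with indentation showing the hierarchy."""
    indent = "  " * depth
    print(f"{indent}{node['goal']}")
    if node["plan"]:
        print(f"{indent}  plan: {node['plan']}")
    for sub in node["subgoals"]:
        print_hta(sub, depth + 1)

print_hta(hta)
```

In practice each node would also record the task demands, required knowledge, information needs, and likely errors noted above, so that display content can be checked against every activity in the hierarchy.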
Another task-analytic framework that features goal decomposition is GOMS (goals, operators, methods, and selection rules), developed by Card, Moran, and Newell (1983) and applied in subsequent research. Tasks are decomposed into task goals and the methods, or sequences of operators (actions), that can accomplish them. Selection rules describe the means by which the operator selects a particular method. GOMS models have been used to describe, predict, and analyze human interactions with computing systems. For instance, GOMS models that have been constructed to describe user interactions with computer systems include actions such as keystrokes or mouse movements, along with more cognitive actions such as reading the screen. The psychologist David Kieras (1997) has described how GOMS models can be used to predict learning and task execution times for software systems; GOMS (along with a variation supporting the modeling of parallel activities) has also been used to model large-scale human-computer systems in order to predict task times for a system under design. Christine Mitchell (1987) has developed a third modeling framework, operator function modeling, which has been applied to the design of information displays. In this framework, systems goals, subgoals, and activities are represented as a set of interconnected nodes; each node (corresponding to a goal, subgoal, or activity) has the potential to change its state in response to external inputs or the states of higher-level goals. This technique makes it possible for researchers to model human operators' actions in real time, which in turn helps them be sure that the necessary information is displayed at the appropriate time. Other forms of analyses and modeling within human factors, particularly in the subdiscipline of cognitive engineering, have focused specifically on the cognitive challenges associated with complex human-computer systems. By modeling the complexities that cause decision-making and problem-solving difficulties for human operators, researchers can develop solutions, such as more-effective interfaces, that mitigate those difficulties.
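Returning to the GOMS framework described above, keystroke-level predictions of execution time can be sketched with a short calculation. The operator times below are rough, commonly cited keystroke-level estimates treated here as assumptions rather than as values quoted from Card, Moran, and Newell's text, and the method being timed (deleting a file with a mouse and a confirmation click) is invented for illustration.

```python
# Approximate keystroke-level operator times in seconds (assumed values).
OPERATOR_TIMES = {
    "K": 0.28,  # press a key or button
    "P": 1.10,  # point with a mouse to a target
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(method):
    """Sum operator times for a sequence of keystroke-level operators."""
    return sum(OPERATOR_TIMES[op] for op in method)

# Hypothetical method: delete a file by pointing at it, clicking,
# then pointing at a confirmation button and clicking again.
delete_by_mouse = ["H", "M", "P", "K", "M", "P", "K"]
print(f"predicted execution time: {predict_time(delete_by_mouse):.2f} s")
```

Comparing such predicted times for alternative methods is how GOMS-style models are used to choose between candidate interface designs before any user testing takes place.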
Data collection methods frequently used in cognitive engineering work and task analyses include interviews with experts in the area under consideration and observation of practitioners. Some methods in cognitive task analysis focus on the identification and explication of real-world decisions made by experts. Other methods include cognitive work analysis, an iterative set of analyses and modeling efforts that address goals and resources relating to the tasks that must be performed, strategies for performing the tasks, the influence of the sociotechnical environment on system performance, and the knowledge and skills required of operators. (These methods have been developed and described in monographs by Jens Rasmussen, Annelise Mark Pejtersen, and L. P. Goodstein; and Kim Vicente.) An important component is identifying the complexities and constraints that adversely affect the behavior of actual users of the system; those constraints are often represented using abstraction-hierarchy models. Abstraction hierarchies are multilevel system models in which each level of the model corresponds to a description of the system at a different level of abstraction. Higher levels of abstraction represent the system in terms of its purpose and functions, whereas lower levels represent the system in terms of its physical implementation. Abstraction hierarchy models lay out the purposes of the overall system, the functions and systems available to achieve those purposes, and the constraints on their use or implementation. The cognitive work analysis paradigm also uses decision-ladder descriptions of tasks to be performed. A decision-ladder description represents stages of processing and resultant knowledge states of either human or automated agents, starting with the initial instance of a need for response, moving through the observation and classification of information, and ending with the selection and execution of an action. (The actual stages are activation, observation, state recognition, goal selection, task and procedure selection, and implementation.) The model provides explicitly for shortcuts between knowledge states and information-processing stages, allowing intermediary steps to be bypassed. Such shortcuts might be appropriate, for example, if an expert was immediately able to select a course of action based on recognition of a situation or state.
Methods in cognitive task and work analysis have contributed to the design of information displays for numerous types of complex systems, including process control systems, military command and control systems, and information systems.
Function Allocation and Automation Design
Another key research area within human factors and ergonomics that has direct application to the design of human-computer systems is the appropriate allocation of functions between human operators and automated systems. While early efforts in function allocation tended to rely on fixed lists of functions better suited to humans or machines and an either-or approach, more recent allocation schemes have focused on a more human-centered approach. Within these schemes, allocations can range from complete human control to complete automation, and there can be intermediate stages in which humans can override automated actions or choose from and implement actions recommended by the automated system. Other models have focused on the differing roles of human operation and automation at different stages in the decision process (the information gathering stage, the analysis stage, the actual decision making, and the implementation stage). Selection of an appropriate level of automation may also be dynamic, changing based on external task demands and circumstances. Related to the problem of function allocation are considerations such as the degree of trust and reliance operators place on automated systems, the extent to which operators are aware of and understand the functioning of the automation, and the degree to which the use of information displays can mitigate any difficulties in these areas. For instance, research has found that information displays showing how well an automated decision-making element functions can improve human operators' judgment with regard to using the aid. Other research has studied how automation may also affect operators' degree of awareness regarding system function
by keeping them effectively “out of the loop,” particularly for more complex systems.
Ergonomic Studies of Display Elements
A third way in which human factors and ergonomics have made specific contributions to the design of human-computer interfaces is in the area of display elements and their ability to convey task- and goal-related information to human operators. For instance, one line of human factors and ergonomics display research has focused on how to represent uncertainty (an important contributor to task complexity) in graphical form. Researchers have investigated the use of shapes such as ellipses or rings, linguistic phrases, color, or sound to convey positional uncertainty and blurred icons to convey state uncertainty. A display-design methodology called ecological interface design, developed by Kim Vicente, applies outcomes from cognitive work analysis to the design of information displays for complex systems. This approach aims to design displays that support activities such as fault identification and diagnosis, as well as normal system monitoring and control, by making goal-relevant constraints and properties in the system visible through the interface. Principles in ecological interface design have been applied and tested in a variety of applications, including process control and aviation. Military command and control systems make use of interfaces that offer similar goal- and function-related information. Researchers have also studied properties of so-called "object" displays, which integrate multiple pieces of information into one graphical form. Holistic properties, or emergent features, defined by the values or configurations of individual elements of the graphical form, are used to convey information related to higher-level goals or system states. For instance, a star (or polygon) display graphs the values of system state variables on individual axes. When the system is operating under normal conditions, a symmetric polygon is formed; deviations from normal are easily identified (as shown in Figure 1).
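The star (polygon) display just described is straightforward to sketch with a plotting library. The code below is a minimal, hypothetical illustration using matplotlib's polar axes: eight normalized state variables are drawn as a closed polygon, so a symmetric shape signals normal operation and a dent or bulge signals a deviation (compare Figure 1). The variable values are invented for the example.

```python
import math
import matplotlib.pyplot as plt

def star_display(ax, values, **style):
    """Plot normalized state variables V1..Vn as a closed polygon on polar axes."""
    n = len(values)
    angles = [2 * math.pi * i / n for i in range(n)]
    # Close the polygon by repeating the first point.
    ax.plot(angles + angles[:1], list(values) + [values[0]], **style)

normal = [1.0] * 8                                   # all variables at nominal level
deviant = [1.0, 1.0, 0.4, 1.0, 1.0, 1.6, 1.0, 1.0]   # two variables off-normal

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.set_xticks([2 * math.pi * i / 8 for i in range(8)])
ax.set_xticklabels([f"V{i + 1}" for i in range(8)])
star_display(ax, normal, linestyle="-", label="normal")
star_display(ax, deviant, linestyle="--", label="deviation")
ax.legend()
plt.show()
```

The emergent feature here is the overall symmetry of the shape: an operator does not need to read any individual axis to notice that something has departed from normal.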
FIGURE 1. Example of an object display. When the system is operating normally (solid line), based on the state of variables V1–V8, a regular polygon appears. Deviations (shown as a dashed line) are easily perceptible.
Case Study: The Application of Cognitive Analysis Methods to Display Design
As noted above, methods in cognitive work and task analysis have been employed in the design of numerous complex human-machine systems, impacting the design of information displays, allocation of functions between humans and automated components, and the design of tasks and training requirements. Cognitive analyses were used in the design of a new naval surface vessel, as described by Ann M. Bisantz and colleagues (2003). Information sources for the analyses included interviews with domain experts and other subject-matter experts, design documents, and written operational requirements. The analyses were part of a multiyear design effort and were performed early in the design of the vessel. The effort focused on information displays, manning requirements, and human-automation function allocation for the command-and-control center of the ship. However, at the point of the design process when the analyses took place, choices regarding manning
(the number of personnel that would be available to run the ship), the use of automation, and subsequent tasks that would be assigned to personnel were still to be determined. Thus, it was not possible for the designs to focus on detailed plans and specifications for the command-and-control workstations. Instead, models and research findings from cognitive engineering were used to make recommendations regarding display areas and content as follows.
Work Domain Models
As described above, functional abstractions of the work domain can help significantly with the design of information displays, because such abstractions make the goals of the work domain, as well as the functional and physical resources available to accomplish those goals, explicit. Additionally, work
domain models can be used to identify potential conflicts, such as when a system might need to be utilized to accomplish multiple goals simultaneously, or when the use of a system to accomplish one goal might negatively impact another goal. These conflicts and interactions suggest requirements for information displays that would let system controllers recognize the conflicts and make appropriate decisions. For instance, Figure 2 shows a portion of a work domain model of the naval environment. Notice that the gun system (a physical system) may be required to accomplish multiple high-level goals: It may be required for self defense against mines as well as for support of on-shore missions (land attack). Because different people may be responsible for coordinating and implementing these two goals, this potential conflict indicates the need for specific
FIGURE 2. Work domain example showing how multiple goals can rely on one physical system, indicating requirements for the contents of information displays for those selecting among goals or utilizing the gun system. (Diagram nodes include Self Defense Against Undersea Threats, Land Attack Missions, Self Defense Against Mines, Neutralize Mines, Naval Surface Fire Support, Counterbattery, Deliver Ordnance for Land Attack, Short Range Delivery, Mine Neutralization Systems, and Gun System.) Figure reprinted from Bisantz et al. (2003).
FIGURE 3. Portion of a work domain analysis indicating potential goal interactions involved in sensor management that would need to be displayed to operators. (Diagram nodes include Defense of Systems and Personnel, Achieve Assigned Missions, Signature Maintenance, Battlespace Awareness, Maneuvering Systems, Battlespace Sensors, and Environmental Sensors; annotations note that use of some sensors makes the ship detectable, that moving may disambiguate sensor data, that use of sensors is necessary for offensive functions, and that knowledge of environmental conditions can impact the choice of sensor settings.) Source: Bisantz et al. (2003).
information display content, such as alerts or messages to subsystem controllers if the gun system is unavailable or if there are plans for its use, as well as alerts to higher-level commanders that if both the land-attack and mine-defense goals are active, there may be a constraint in the availability of the gun system. Another example involves the management, use, and configuration of sensor systems, as shown in Figure 3. The configuration of certain types of sensors depends on environmental factors such as ocean conditions. In some circumstances it may be necessary to maneuver the ship to disambiguate sensor data. This movement may in turn make the ship detectable to enemy forces, thus creating a conflict between offensive and defensive ship goals. Again, such conflicts indicate the need for information to be displayed to sensor operators as well as to higher-level commanders, who may need to prioritize these potentially conflicting goals. More generally, the work domain analyses led to recommendations regarding display areas that supported communication among controllers with different areas of responsibility (for example, ship
defense and offensive support), display indications regarding mission areas outside a controller's primary area of responsibility, and displays for higher-level commanders regarding mission goals and priorities (Bisantz et al. 2003).
Decision Ladder Descriptions
The application of the decision ladder formalism to describe operators' tasks also led to recommendations for display design. As noted above, the decision ladder describes tasks in terms of the stages of activation, observation, state recognition, goal selection, task and procedure selection, and implementation, and explicitly allows shortcuts and non-sequential paths through this sequence. Application of this method to the description of tasks in the undersea warfare domain indicated that many of the tasks comprised primarily observation and state recognition activities (rather than intensive goal selection or task-planning activities), thus suggesting that information displays that highlighted potential patterns and supported training or practice with pattern recognition would be valuable.
Cross-linked Functional Matrices
A third form of analysis, cross-linked functional matrices, also led directly to display-design recommendations. As part of the ongoing design effort, systems engineers were developing detailed functional decompositions of the functions and tasks that the ship would be required to perform. These breakdowns were utilized to make recommendations regarding automation and display requirements for each function, as well as to document the cognitive tasks associated with the function and to make recommendations on the contents of workstation display areas that would support those tasks and functions. For instance, one ship function is to filter tracks, that is, to apply filtering techniques to tracks (unknown contacts picked up by radar or other sensing systems) and remove tracks that are nonthreatening from the display. To filter tracks successfully, the primary display supporting monitoring and supervisory control activities should include an alert or indication that tracks are being filtered. The display also must provide access to detailed information, including, for example, the filtering algorithms that have been employed to determine the significance of the track. These requirements, as well as those of ship functions that support the task of supervising the classification and identification of tracks, were specified. Finally, when the display needs of all the ship's functions had been specified, a set of workstation display areas and their content was defined. Overall, eleven display areas were identified; these included the local-area picture, the task scheduling and status areas, the tactical picture area, the communications area, and the goals and high-level constraints area, among others. Interface prototypes were then implemented and tested, and provided early validation of the utility of the display area concept. Importantly, this study focused on identifying appropriate information content for the displays, rather than on aspects such as interaction style, hardware requirements, or screen organization or design.
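The track-filtering function described above can be sketched as a small routine. Everything here is hypothetical; the track fields, the threat rule, and the alert text are invented for illustration and are not drawn from the actual shipboard design. The sketch shows the two display requirements identified in the analysis: nonthreatening tracks are removed from the displayed set, and the display receives an explicit indication that filtering is active, together with access to the rule that was applied.

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    speed_kts: float
    range_nm: float
    identity: str  # e.g., "friendly", "neutral", "unknown"

def is_threatening(track, max_range_nm=30.0):
    """Illustrative rule: keep unknown contacts inside a range threshold."""
    return track.identity == "unknown" and track.range_nm <= max_range_nm

def filter_tracks(tracks):
    """Return (tracks to display, alert text describing the filtering applied)."""
    displayed = [t for t in tracks if is_threatening(t)]
    removed = len(tracks) - len(displayed)
    alert = (f"Track filtering active: {removed} nonthreatening tracks hidden "
             f"(rule: unknown identity within 30 nm)")
    return displayed, alert

tracks = [
    Track("T1", 18.0, 12.0, "unknown"),
    Track("T2", 22.0, 55.0, "unknown"),
    Track("T3", 15.0, 8.0, "friendly"),
]
displayed, alert = filter_tracks(tracks)
print(alert)
print([t.track_id for t in displayed])
```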
The Future
As work continues to become more and more cognitive in nature, and as workplaces become increasingly computerized, the field of human factors and
ergonomics will no doubt continue to increase its focus on research and methodologies appropriate for the design of complex human-computer systems. In the early twenty-first century, common themes within this work are the identification of system and task demands on the operator, concern for overall human-system effectiveness, and optimal application of methodologies and models to answer information requirements with supportive information displays.
Ann M. Bisantz
See also Task Analysis
FURTHER READING
Annett, J., & Duncan, K. D. (1967). Task analysis and training design. Occupational Psychology, 41, 211–221.
Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775–779.
Bennett, K. B., Toms, M. L., & Woods, D. D. (1993). Emergent features and configural elements: Designing more effective configural displays. Human Computer Interaction, 35, 71–97.
Billings, C. E. (1997). Aviation automation: The search for a human-centered approach. Mahwah, NJ: Lawrence Erlbaum Associates.
Bisantz, A. M., Roth, E. M., Brickman, B., Lin, L., Hettinger, L., & McKinney, J. (2003). Integrating cognitive analysis in a large scale system design process. International Journal of Human-Computer Studies, 58, 177–206.
Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum.
Dix, A., Finlay, J., Abowd, G., & Beale, R. (1998). Human-computer interaction. London: Prentice Hall.
Endsley, M., & Kaber, D. B. (1999). Level of automation effects on performance, situation awareness and workload in a dynamic control task. Ergonomics, 42(3), 462–492.
Finger, R., & Bisantz, A. M. (2002). Utilizing graphical formats to convey uncertainty in a decision-making task. Theoretical Issues in Ergonomics Science, 3(1), 1–24.
Gilbreth, F., & Gilbreth, L. (1919). Applied motion study. London: Sturgis and Walton.
Gould, J. D., & Lewis, C. (1985). Designing for usability: Key principles and what designers think. Communications of the ACM, 28(3), 300–311.
Gray, W. D., John, B. E., & Atwood, M. E. (1993). Project Ernestine: Validating a GOMS analysis for predicting and explaining real-world task performance. Human Computer Interaction, 8(3), 237–309.
Helander, M., Landauer, T. K., & Prabhu, P. V. (1997). Handbook of human-computer interaction (2nd ed.). Amsterdam: Elsevier Science-North Holland.
Hoffman, R. R., Crandall, B., & Shadbolt, N. (1998). Use of the critical decision method to elicit expert knowledge: A case study in the methodology of cognitive task analysis. Human Factors, 40(2), 254–276.
Hutchins, E. L., Hollan, J. D., & Norman, D. A. (1986). Direct manipulation interfaces. In D. A. Norman & S. W. Draper (Eds.), User centered system design (pp. 87–124). Hillsdale, NJ: Lawrence Erlbaum.
Kirschenbaum, S. S., & Arruda, J. E. (1994). Effects of graphic and verbal probability information on command decision-making. Human Factors, 36(3), 406–418.
Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40, 153–184.
Martinez, S. G., Bennett, K. B., & Shattuck, L. (2001). Cognitive systems engineering analyses for army tactical operations. In Proceedings of the Human Factors and Ergonomics Society 44th Annual Meeting (pp. 523–526). Santa Monica, CA: Human Factors and Ergonomics Society.
Mitchell, C. M., & Miller, R. A. (1986). A discrete control model of operator function: A methodology for information display design. IEEE Transactions on Systems, Man, and Cybernetics, SMC-16(3), 343–357.
Muir, B. M. (1987). Trust between humans and machines, and the design of decision aids. International Journal of Man-Machine Studies, 27, 527–539.
Norman, D. A. (1988). The psychology of everyday things. New York: Basic Books.
Parasuraman, R., Sheridan, T., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics, 30(3), 286–297.
Rasmussen, J., Pejtersen, A. M., & Goodstein, L. P. (1994). Cognitive systems engineering. New York: Wiley and Sons.
Roth, E. M., Patterson, E. S., & Mumaw, R. J. (2002). Cognitive engineering: Issues in user-centered system design. In J. J. Marciniak (Ed.), Encyclopedia of software engineering (2nd ed., pp. 163–179). New York: Wiley Interscience, John Wiley and Sons.
Salvendy, G. (Ed.). Handbook of human factors. New York: Wiley and Sons.
Stammers, R. B., & Shephard, A. (1995). Task analysis. In J. R. Wilson & E. N. Corlett (Eds.), Evaluation of human work (pp. 144–168). London: Taylor and Francis.
Taylor, F. W. (1911). The principles of scientific management. New York: Norton and Company.
Vicente, K. J. (1999). Cognitive work analysis. Mahwah, NJ: Erlbaum.
Vicente, K. J. (2002). Ecological interface design: Progress and challenges. Human Factors, 44(1), 62–78.
Woods, D. D., & Roth, E. M. (1988). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human computer interaction (pp. 3–35). Amsterdam: Elsevier.
ERRORS IN INTERACTIVE BEHAVIOR
Designing interactive systems to reduce error and increase error detection and recovery is an important—and often frustrating—goal. "Human error"
is an everyday term with different meanings for different communities of practitioners and researchers, but the fact that different users of the same term may refer to very different phenomena does not seem to be widely recognized. A further difficulty is that although many researchers collect error data, there is no established research tradition to experimentally manipulate and study error as a phenomenon. In short, the subject of errors is broad, but the study of errors is shallow. After reviewing the state of the art in human error research, this article examines studies of the cognitive processes that lead to errors in interactive behavior.
Assessment of Current Research
Errors may affect productivity, user satisfaction, and the safety of property and lives. Errors may be diagnosed as training problems, design problems, system problems, or organizational problems, and the diagnosis will determine the remediation. Training problems are remediated by better documentation or user training. Design problems are fixed by redesign. Systems problems require a wider look at the complete task environment of the user to determine incompatibilities or conflicts between the design of multiple devices used by the same worker or in the functions and responsibilities of the worker. Finally, organizational problems (for example, strong pressures to reduce costs at the expense of all else, including safety) may require remediations as diverse as the adoption of new procedures, changes in organizational structure, or the replacement of top management. Although legitimate, the breadth of phenomena covered by the term "human error" has tended to get in the way of understanding its nature, detection, and correction. To some degree, these different meanings have caused communication difficulties and have occasionally resulted in turf battles in which different communities argue for the primacy of their level of analysis (for instance, cognitive, systems, or organizational). More generally, these various meanings muddy the waters because distinctions that are important within one level of analysis are lost or blurred by attempts to cast all error phenomena within the same framework.
To Err Is Technological
CHICAGO (ANS)—Humans may err, but computers are supposed to be accurate all the time. Except, of course, they're not. And as humans rely more and more on technology, they have to make allowances for the error factor of both the hardware and the people operating it, researchers are finding. In recent studies conducted in flight simulators, pilots who relied solely on automated decision aids—designed to reduce human error—often found themselves the victims of unintended consequences that might have proved deadly in an actual flight. According to University of Illinois at Chicago psychologist Linda Skitka, who has been studying the phenomenon with a teammate for five years, people working with computerized systems are prone to two kinds of errors. First, when they are told by a computer to do a task, many do it without double-checking the machine's accuracy, despite the fact they've been told the system is not fail-safe. The researchers dubbed this an error of commission. For example, the test pilots were told to go through a five-step checklist to determine whether or not an engine was on fire. One of the elements was a computerized warning signal. When they received the signal, the pilots all turned off the defective engine—without running through the other four steps. It turned out that a completely different engine had been on fire. When asked about their decision, all the pilots said they had run through the entire checklist when in fact they had not. "Most of these systems are being designed by engineers who think the way to get rid of human error is to engineer the human out of the equation," said Skitka. "To some extent, that's right. But to the extent that we still have human operators in the system, we need to take a look at the human-computer interaction and be more sensitive to the human side." The second common mistake, which researchers classified as an error of omission, takes place when a computer fails to detect a mishap and human operators miss it too because they haven't run through a manual checklist. It was an error of omission that led to the crash of a Korean Air jet in 1983 after being shot down over Soviet airspace, Skitka said. The pilot allegedly never double-checked the autopilot program to make sure it was following the correct flight path. It wasn't, she said. Indeed, in studying anonymous near-accident reports filed with the airlines by pilots, Skitka found that many mistakes involved pilots programming the flight computer to do specific tasks but not bothering to check that it was performing those tasks. The studies were conducted at the NASA Ames Research Center in California and at the University of Illinois and have left Skitka suspicious of any task that involves highly technical systems that monitor events. That includes work in the nuclear energy and shipping industries and even hospital intensive care units, where monitors are relied on for life-and-death decisions, she said. Better technical design and operator training are potential solutions, she said. Perhaps the biggest problem is that many of the tasks that need to be performed in automated situations are dull. Those tasks need somehow to be made more interesting so humans don't go into autopilot themselves, she said. "I'm still a fan of automation but now we've introduced new possibilities for human error," said Skitka. "(Computers) are never going to be able to be programmed for every possible contingency. We have to make sure we keep that human factor in our equation."
Source: To err is technological, new research finds. American News Service, October 5, 2000.

To err is human—and to blame it on a computer is even more so. —Robert Orben

This situation holds even among the communities of researchers and practitioners interested in human factors and human-computer interaction. As one of the most influential thinkers on the topic complained, "the need for human error data for various purposes has been discussed for decades, yet no
acceptable human error data bank has emerged" (Rasmussen 1987, 23). However, even within the human factors and human-computer interaction communities, these "various purposes" are somewhat independent, and attempts to shoehorn them into one approach may be an obstacle to progress rather than a path toward it. For example, major reviews of human errors discuss the cognitive, systems, and organizational perspectives on human error. Although each perspective is important, it is difficult to know how progress in understanding the roots of error at, for instance, the organizational level will either provide or be aided by insights into, for instance, the errors encountered in the course of routine interactive behavior. Although confusion of levels of analysis is a major obstacle to understanding human error, it is not the only one. Perhaps equally damaging is the way in which errors are collected and classified. Following a tradition that goes back at least to William James, the most famous error taxonomies simply amass a large number of naturally occurring slips as reported anecdotally by friends, colleagues, the current researchers, and prior researchers. These errors are then subjected to a largely informal analysis that sorts the errors into the taxonomic categories favored by the researchers. Attempts to compare how any given error is classified within or between taxonomies bring to mind the complaint that "cognitive theory is radically underdetermined by data" (Newell 1992, 426). Although some of these taxonomies rely on cognitive theory as the basis of their classifications, all lack the mechanisms to predict errors. Hence, their explanatory power is only post hoc and incidental. Indeed, a 1997 survey of the literature on errors led the researchers to conclude that errors in routine interactive behavior are regarded primarily as the result of some stochastic process. Such a view discourages the systematic study of the nature and origin of this class of errors.
Approaches to the Study of Error in Interactive Behavior
An analysis of errors at the cognitive level avoids neither the confusion nor the shallowness endemic to the study of human error. Arguably the dominant cognitive account of errors distinguishes among knowledge-based, rule-based, and skill-based errors. Knowledge-based errors occur when a user lacks the requisite knowledge—for example, if the only route you know from your home to work is habitually crowded during rush hour, you will undoubtedly spend a lot of time waiting in traffic. If you do not know an alternative route, then obviously you will not be able to take it. Rule-based errors result from learning a mal-rule or applying the correct rule but on the wrong occasion. For example, if you know two routes to work and you also know that one is the fastest during rush hour and the other is the fastest on the off hours, taking the wrong route becomes a rule-based error. You have the correct knowledge, but you picked the wrong rule. An error is "skill-based" when knowledge is available, the correct rule is selected, but a slip is made in executing the rule. For example, you intend to take your rush-hour route to work, but at the critical intersection you take the turn for the route that is fastest during the off hours. The same behavior, e.g., "taking the wrong route during rush hour," can result from lack of knowledge, misapplication of a rule, or a slip. Hence, the knowledge-based, rule-based, and slip-based approach to errors is neither as neat and clean nor as theory-based as it may first appear. Whether an error is classified as skill-based, rule-based, or knowledge-based may depend more on the level of analysis than on its ontogeny. Unfortunately, the view that errors in routine interactive behavior are stochastic is reinforced by the difficulties of systematically studying such errors. Indeed, it is almost a tautology to assert that errors in routine interactive behavior are rare. This rarity may have encouraged the naturalistic approach in which researchers and their confederates carry around notebooks with the goal of noting and
recording the occasional error. Naturalistic approaches have an important role to play in documenting the importance and frequency of error. However, they have not been particularly productive in understanding the cognitive mechanisms that determine the nature, detection, and correction of errors in interactive behavior.
New Directions
The rarity of errors requires research programs that will capture and document errors, not retrospectively, but as they occur. The systematic study of such errors of interactive behavior involves three interrelated paths. The first path entails creating a task environment designed to elicit a particular type of error. The second involves collecting errorful and error-free behaviors and subjecting both to a fine-grained analysis. A cost of these two approaches is that they require collecting vast amounts of correct behavior to amass a small database of errorful behavior. For example, in 2000 cognitive researcher Wayne Gray reported that out of 2,118 goal events (either initiating or terminating a goal) only 76, or 3.6 percent, could be classified as errors. The third path entails building integrated models of cognition that predict the full range of behavior, including reaction time, correct performance, and errors. The study of the cognitive mechanisms that produce errors has been hampered by the long tradition in psychology of attempting to understand the mind by studying each mental function in isolation. Fortunately, contrasting trends exist. For example, the pioneering researchers Stuart Card, Thomas Moran, and Allen Newell are credited with bringing to HCI the attempt "to understand in detail the involvement of cognitive, perceptual, and motor components in the moment-by-moment interaction a person encounters when working at a computer" (Olson and Olson 2003, 493). Indeed, building on this work, the noted HCI investigator Bonnie John developed a task analysis notation that captures the ways in which embodied cognition (cognition, perception, and action) is responsive to small changes in the task environment. This approach is called CPM-GOMS (CPM—critical path
method and cognitive, perceptual, and motor; GOMS—goals, operators, methods, and selection rules). Despite a strong push from the cognitive HCI community, within the larger cognitive community the emphasis on an embodied cognition interacting with a task environment to accomplish a task has been a minority position. Fortunately, its status seems to have changed as we now have six approaches to embodied cognition and at least two mechanistic approaches capable of modeling the control of interactive behavior. The components of interactive behavior can be studied by focusing on the mixture of cognition, perception, and action that takes approximately one-third of a second to occur. As human rationality is bounded by limits to working memory, attention, and other cognitive functions, the exact mix of operations depends on the task being performed and the task environment. Understanding how the task environment influences the mix of operations is the key to understanding human error in interactive behavior, as the following four examples show:
A GOAL STRUCTURE ANALYSIS OF THE NATURE, DETECTION, AND CORRECTION OF ERRORS
In 2000 Gray provided a goal structure analysis of errors made programming a VCR. A cognitive model was written that used the same goal structure as humans, with the goals and subgoals analyzed down to those that take approximately 1 second to occur (a grain roughly three times as coarse as that used for the analysis of embodied cognition). This level of analysis allowed second-by-second comparisons of human behavior with model behavior (that is, model tracing). Places in which the model and a given human on a given trial diverged were considered potential errors. Each potential error was inspected to determine whether it represented a true error or a failure of the model to capture the richness and diversity of human goal structures. "True errors" were cataloged according to the actions that the model would have had to take to duplicate the error. This taxonomy avoided the use of more ambiguous terms such as "knowledge-based," "rule-based," and "slip-based," or "capture errors," "description errors," and "mode
errors." Thus, model tracing was used to provide a rigorous and objective taxonomy with which to characterize the nature, detection, and correction of errors.
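The model-tracing idea can be illustrated with a small sketch. The following Python fragment is not Gray's model or data; it is a hypothetical illustration, with invented action names, of the general procedure: align the action sequence a cognitive model predicts with the actions a participant actually produced, and flag every divergence as a potential error to be inspected by hand.

    # A toy illustration of model tracing: compare the action sequence a cognitive
    # model predicts for a trial with the actions a participant actually produced,
    # and flag every divergence as a *potential* error to be inspected by hand.
    # (Action names are invented; this is not Gray's model or data.)
    from difflib import SequenceMatcher

    def trace_potential_errors(predicted, observed):
        """Return the divergences between model and human action sequences."""
        divergences = []
        for tag, i1, i2, j1, j2 in SequenceMatcher(None, predicted, observed).get_opcodes():
            if tag != "equal":  # 'replace', 'delete', or 'insert' marks a divergence
                divergences.append({
                    "type": tag,
                    "model_expected": predicted[i1:i2],
                    "human_did": observed[j1:j2],
                })
        return divergences

    # Example: the model expects the start time to be set before the end time;
    # this participant skips the start-time subgoal entirely.
    model_actions = ["select-program-mode", "set-start-time", "set-end-time",
                     "set-channel", "confirm"]
    human_actions = ["select-program-mode", "set-end-time", "set-channel", "confirm"]

    for divergence in trace_potential_errors(model_actions, human_actions):
        print(divergence)  # each flagged divergence is then inspected by hand

Real model tracing runs a cognitive model alongside the participant rather than comparing against a fixed predicted sequence, but the comparison logic is the same in spirit: only the divergences become candidates for classification as true errors.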
A MEMORY ACTIVATION ANALYSIS OF POSTCOMPLETION ERROR
Postcompletion errors are device-specific errors made after the target task has been accomplished. A classic postcompletion error is making copies of a paper but forgetting to remove the original. Computational modelers Mike Byrne and Susan Bovair showed that a model that was sensitive to the working memory demands of the task environment could duplicate the pattern of human postcompletion errors.
LEAST-EFFORT TRADEOFFS BETWEEN KNOWLEDGE IN-THE-WORLD AND KNOWLEDGE IN-THE-HEAD
Researchers Wayne Gray and Wai-Tat Fu were able to show an increase in errors in interactive behavior due to least-effort tradeoffs between reliance on knowledge in-the-world and knowledge in-the-head. Subjects in two conditions of a VCR programming task could acquire show information either by looking at a show information window (Free Access) or by moving the mouse and clicking on the gray box that covered a field of the window (Gray Box). Subjects in a third condition were required to memorize the show information before they began programming (Memory Test). Results showed that the Gray Box condition made the most errors, followed by Free Access, and then the Memory Test. The results were interpreted to mean that the increased perceptual-motor costs of information acquisition led the Free Access and Gray Box groups to an increased reliance on error-prone memory.
INTEGRATED MODEL OF COGNITION
In a 2002 paper researchers Erik Altmann and Gregory Trafton proposed a goal-activation model of how people remember the states of the world they want to achieve. In subsequent work, this model was applied to yield predictions about the cognitive effects of interruptions on task performance (for instance, being interrupted by the phone while writing a paper). For the cognitive level of analysis, this work demonstrates that the basic research agenda of producing integrated
models of cognitive processing is the key to understanding, detecting, and correcting human errors.
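For readers unfamiliar with activation-based accounts, the following sketch conveys the flavor of this class of model. It assumes an ACT-R-style base-level activation equation, in which an item's activation grows with each use and decays as a power function of the time since each use, and it uses invented rehearsal times and an invented interference level. It illustrates the general mechanism, not the specific published model of Altmann and Trafton.

    # An illustrative ACT-R-style base-level activation function: a goal's
    # activation grows each time the goal is used or rehearsed and decays as a
    # power function of the time since each use.  A suspended goal whose
    # activation falls below that of competing items is predicted to be skipped
    # (as in a postcompletion error) or to be slow and error-prone to resume
    # after an interruption.  All numbers here are invented for illustration.
    import math

    def base_level_activation(use_times, now, decay=0.5):
        """B = ln( sum over past uses j of (now - t_j) ** (-decay) )."""
        return math.log(sum((now - t) ** (-decay) for t in use_times if t < now))

    goal_uses = [0, 2, 4, 6, 8, 10]   # the goal is rehearsed while being worked on
    interference = 1.0                # assumed activation of competing goals

    # An interruption begins at t = 10 s; check the suspended goal afterward.
    for now in (11, 20, 40, 70):
        activation = base_level_activation(goal_uses, now)
        verdict = ("likely resumed" if activation > interference
                   else "resumption failure more likely")
        print(f"t = {now:3d} s   activation = {activation:5.2f}   ({verdict})")

With these invented parameters the goal is still above the interference level one second after the interruption begins but drops below it within a few seconds and keeps decaying, which is the kind of quantitative prediction that can be compared against human resumption times and errors.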
Applying a Bounded Rationality Framework Errors are infrequent, but not rare. Their infrequency has discouraged many from studying errors within the experimental laboratory and may have discouraged a rigorous, theory-based approach to understanding how cognitive processes interact with the task environment to produce errors. The naturalistic approach to errors is enticing, but a hundred years of this approach has not yielded much progress. Although the importance of errors must be judged by their effect on everyday life, the study of the nature, detection, and correction of errors must be pursued in the laboratory. For those concerned with human errors in HCI, a fruitful path is to pursue the errors that emerge from the interaction of embodied cognition with a task being performed in a given task environment. This bounded rationality framework focuses on the mixture of cognition, perception, and action that takes approximately 1/3 of a sec to occur. The goal of this work is the creation of powerful theories that would allow researchers and practitioners to predict the nature and probable occurrence of errors within a given task environment. Wayne D. Gray See also Cognitive Walkthrough; User Modeling FURTHER READING Allwood, C. M. (1984). Error detection processes in statistical problem solving. Cognitive Science, 8, 413–437. Allwood, C. M., & Bjorhag, C. G. (1990). Novices debugging when programming in Pascal. International Journal of Man-Machine Studies, 33(6), 707–724. Allwood, C. M., & Bjorhag, C. G. (1991). Training of Pascal novices error handling ability. Acta Psychologica, 78(1–3), 137–150. Altmann, E. M., & Trafton, J. G. (2002). Memory for goals: an activation-based model. Cognitive Science, 26(1), 39–83. Anderson, J. R., Bothell, D., Byrne, M. D., & Lebiere, C. (2002). An integrated theory of the mind. Retrieved October 17, 2002, from http://act-r.psy.cmu.edu/papers/403/IntegratedTheory.pdf
Anderson, J. R., & Lebiere, C. (Eds.). (1998). Atomic components of thought. Hillsdale, NJ: Erlbaum. Ballard, D. H., Hayhoe, M. M., Pook, P. K., & Rao, R. P. N. (1997). Deictic codes for the embodiment of cognition. Behavioral and Brain Sciences, 20(4), 723–742. Ben-Zeev, T. (1995). The nature and origin of rational errors in arithmetic thinking: Induction from examples and prior knowledge. Cognitive Science, 19(3), 341–376. Berry, D. C. (1993). Slips and errors in learning complex tasks. In G. M. Davies & R. H. Logie (Eds.), Memory in everyday life. Advances in psychology (pp. 137–159). Amsterdam: North-Holland/Elsevier. Byrne, M. D. (2001). ACT-R/PM and menu selection: Applying a cognitive architecture to HCI. International Journal of Human-Computer Studies, 55(1), 41–84. Byrne, M. D., & Bovair, S. (1997). A working memory model of a common procedural error. Cognitive Science, 21(1), 31–61. Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Erlbaum. Elkerton, J., & Palmiter, S. L. (1991). Designing help using a GOMS model: An information retrieval evaluation. Human Factors, 33(2), 185–204. Gray, W. D. (1995). VCR-as-paradigm: A study and taxonomy of errors in an interactive task. In K. Nordby, P. Helmersen, D. J. Gilmore, & S. A. Arnesen (Eds.), Human-Computer Interaction—Interact'95 (pp. 265–270). New York: Chapman & Hall. Gray, W. D. (2000). The nature and processing of errors in interactive behavior. Cognitive Science, 24(2), 205–248. Gray, W. D., & Boehm-Davis, D. A. (2000). Milliseconds matter: An introduction to microstrategies and to their use in describing and predicting interactive behavior. Journal of Experimental Psychology: Applied, 6(4), 322–335. Gray, W. D., & Fu, W.-t. (2001). Ignoring perfect knowledge in-the-world for imperfect knowledge in-the-head: Implications of rational analysis for interface design. CHI Letters, 3(1), 112–119. Gray, W. D., & Fu, W.-t. (in press). Soft constraints in interactive behavior: The case of ignoring perfect knowledge in-the-world for imperfect knowledge in-the-head. Cognitive Science. Gray, W. D., John, B. E., & Atwood, M. E. (1993). Project Ernestine: Validating a GOMS analysis for predicting and explaining real-world performance. Human-Computer Interaction, 8(3), 237–309. Gray, W. D., Palanque, P., & Paternò, F. (1999). Introduction to the special issue on: Interface issues and designs for safety-critical interactive systems. ACM Transactions on Computer-Human Interaction, 6(4), 309–310. Heckhausen, H., & Beckmann, J. (1990). Intentional action and action slips. Psychological Review, 97(1), 36–48. James, W. (1985). Psychology: The briefer course. Notre Dame, IN: University of Notre Dame Press. (Original work published 1892.) John, B. E. (1990). Extensions of GOMS analyses to expert performance requiring perception of dynamic visual and auditory information. In J. C. Chew & J. Whiteside (Eds.), ACM CHI'90 Conference on Human Factors in Computing Systems (pp. 107–115). New York: ACM Press. John, B. E. (1996). TYPIST: A theory of performance in skilled typing. Human-Computer Interaction, 11(4), 321–355. Kieras, D. E., & Meyer, D. E. (1997). An overview of the EPIC architecture for cognition and performance with application to human-computer interaction. Human-Computer Interaction, 12(4), 391–438.
Newell, A. (1992). Precis of unified theories of cognition. Behavioral and Brain Sciences, 15(3), 425–437. Nooteboom, S. G. (1980). Speaking and unspeaking: Detection and correction of phonological and lexical errors in spontaneous speech. In V.A. Fromkin (Ed.), Errors in linguistic performance: Slips of the tongue, ear, pen, and hand (pp. 87–95). San Francisco: Academic Press. Norman, D. A. (1981). Categorization of action slips. Psychological Review, 88(1), 1–15. Ohlsson, S. (1996a). Learning from error and the design of task environments. International Journal of Educational Research, 25(5), 419–448. Ohlsson, S. (1996b). Learning from performance errors. Psychological Review, 103(2), 241–262. Olson, G. M., & Olson, J. S. (2003). Human-computer interaction: Psychological aspects of the human use of computing. Annual Review of Psychology, 54, 491–516. Payne, S. J., & Squibb, H. R. (1990). Algebra mal-rules and cognitive accounts of error. Cognitive Science, 14(3), 445–481. Rasmussen, J. (1987). The definition of human error and a taxonomy for technical system design. In J. Rasmussen, K. Duncan, & J. Leplat (Eds.), New technology and human error (pp. 23–30). New York: Wiley. Reason, J. (1990). Human error. New York: Cambridge University Press. Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63, 129–138. Trafton, J. G., Altmann, E. M., Brock, D. P., & Mintz, F. E. (2003). Preparing to resume an interrupted task: Effects of prospective goal encoding and retrospective rehearsal. International Journal of Human-Computer Studies, 58(5), 583–603. VanLehn, K. A. (1990). Mind bugs: The origins of procedural misconceptions. Cambridge, MA: MIT Press. Vicente, K. J. (2002). Ecological interface design: Progress and challenges. Human Factors, 44(1), 62–78.
ETHICS Philosophical interest in the ethical implications of the development and application of computer technology emerged during the 1980s, pioneered by, among others, Terrell Ward Bynum, Deborah Johnson, Walter Maner—usually credited with coining the phrase computer ethics—and James Moor. These philosophers and others laid the foundations for a field of study that, for a number of years, encompassed three central lines of inquiry: (1) ethical questions and challenges to social, moral, and political values raised by changes in society and individual lives, (2) the nature of computer ethics itself, and (3) ethical obligations of professional experts in computer and information technologies and engineering. More recently the field has broadened to include strands from neighboring disciplines.
Ethics, Values, and the Impacts of Computer and Information Technologies Incorporating most of the work in the field, this line of inquiry focuses on the impacts of computing and information technologies that raise ethical questions as well as questions about moral, political, and social values in societies and in individuals’ lives. Many of the issues that emerged early on, such as intellectual property, responsibility, crime, privacy, autonomy, free speech, and quality of life, have remained important and have evolved alongside developments in the technologies themselves. Philosophers engaged in the study of impacts have approached their subject from at least two perspectives. In one they have asked about the nature of moral obligations in light of particular changes, thus being concerned with right and wrong actions of people. In the other they have been concerned with the status of particular values in society and how these are affected by technology-induced changes. In the case of intellectual property, philosophical interest focused on moral obligations owed to the creators and owners of software. Philosophers, like their colleagues in law, recognized key metaphysical (relating to a branch of philosophy that is concerned with the fundamental nature of reality and being) differences between computer software and traditional forms of intellectual property and sought to understand whether and in what ways these differences affect the extent and nature of property protection that software deserves. By the mid-1990s and into the present, as the Internet and World Wide Web developed and increased in popularity, most of the attention given to intellectual property has been focused on controversial questions concerning digital representations of a wide range of intellectual and cultural works (including text, images, music, and video), peer-to-peer file sharing, and even Web-linking (the use of Web hyperlinks to move from one web page to another). From the perspective of values, philosophers have questioned social and legal decisions that have shaped the relative strength and standing of intellectual property in the face of other values, such as freedom to share.
Computer technology raised questions about attributing moral responsibility for harmful consequences of action as philosophers and others noted the increasing use of computer systems in control functions, sometimes replacing human controllers, sometimes mediating human action, sometimes automating complex sequences of tasks. Ethical concerns went hand in hand with technical concerns. Where computer scientists and engineers worried about correctness, reliability, safety, and dependability, philosophers asked whether increasing reliance on computer-controlled automation is warranted and whether, secondarily, it leads to a diminishment of accountability for malfunctions, dangers, and harms due to computerization. Another intriguing line of questions was taken up by philosophers such as Kari Coleman, Arthur Kuflik, James Moor, and John Snapper. This line concerned responsibility and was spurred by actual and predicted advances in artificial intelligence. It asked whether aspects of human agency, such as life-and-death decisions, should ever be delegated to computers no matter what the relative competency levels. A twist in this line of questions is whether a time will come when humans will have moral obligations to intelligent machines. An issue related to that of responsibility is the nature and severity of computer crime and the variety of harms wrought on others in the context of computer-mediated communications and transactions. Philosophers participated in early debates over whether actions such as gaining unauthorized access to computer systems and networks should be judged as crimes or whether such judgment should be reserved for cases where clear damage results, as in the cases of transmitting computer viruses and worms and posting obscene or threatening materials. Privacy has been one of the most enduring issues in this category. Philosophers have focused attention on privacy as a social, political, and individual value threatened by developments and applications of computer and information technologies. Philosophers have participated in the chorus of voices, which also includes scholars of law, policy, and social science and privacy advocates, that has denounced many of these developments and applications as dangerously erosive of privacy. As with other issues, the
nature of the activities that raise concern shifts through time as a result of evolving technologies and their applications. The earliest applications to take the limelight were large government and corporate databases. Cries of "Big Brother" resulted in various legal constraints, including, most importantly, the U.S. Privacy Act of 1974. Over time, dramatic reductions in the cost of hardware and improvements in the capacities to collect, store, communicate, retrieve, analyze, manipulate, aggregate, match, and mine data led to a proliferation in information gathering throughout most sectors of society and an amplification of early concerns. In parallel with these developments, we experienced an upsurge in identification and surveillance technologies, from video surveillance cameras to biometric (relating to the statistical analysis of biological observations and phenomena) identification to techniques (such as Web cookies) that monitor online activities. Each of these developments has attracted the concern of a broad constituency of scholars, practitioners, and activists who have applied their areas of knowledge to particular dimensions of the developments. Philosophers, such as Judith DeCew, Jeroen van den Hoven, James Moor, Anton Vedder, and Helen Nissenbaum, have taken up two challenges in particular: (1) improving conceptual understanding of privacy and the right to privacy and (2) refining theoretical underpinnings and providing a systematic rationale for protecting the right to privacy. Finally, a category of questions concerning "quality of life" asks, more generally, about the ways computer and information technologies have impinged on core human values. We could include in this category a variety of concerns, starting with the digital divide—the possibility that computer technology has increased the socioeconomic gap between those groups of people with power and wealth and historically disadvantaged socioeconomic, racial, and gender groups. Such questions concerning social justice within societies have been extended to the global sphere and the vastly different levels of access available in countries around the globe. Another element in the category of "quality of life" concerns the impacts on relationships, such as those among friends, romantic partners, family members, and teachers and students, made by computers
and digital networking technologies. Many researchers have pointed to the enormous positive potential of collaborating online, building community, and accessing vast troves of information. However, some philosophers have asked whether the intrusion of digital technologies debases these spheres of life—replacing the actual with the virtual, replacing face-to-face communication with mediated communication, replacing family and intimate interactions with chat rooms and online games, and replacing human teachers and mentors with computerized instruction—and deprives them of their essentially human character and consequently deprives us of meaningful opportunities for emotional, spiritual, and social growth. The influence of Continental philosophers, including Edmund Husserl and Emmanuel Levinas, is more apparent here than in previously mentioned areas, where Anglo-American, analytical thought tends to dominate.
Metaethics of Computer and Information Technology Many philosophers leading the inquiry of ethics and information technology have raised questions about the nature of the inquiry itself, asking whether anything is unique, or uniquely interesting, about the moral and political issues raised by information technology. The continuum of responses is fairly clear, from a view that nothing is philosophically unique about the issues to the view that settings and capacities generated by computer and information technologies are so novel and so distinctive that they demand new theoretical approaches to ethics. The more conservative approaches assume that we can reduce the problems in computer ethics (for example, any of those mentioned earlier) to the more familiar terms of ethics and applied ethics, generally. From there the problems are accessible to standard ethical theories. For example, although transmitting computer viruses is a novel phenomenon, after we cast it as simply a new form of harming others’ property, it can be treated in those familiar terms. Other philosophers, such as Luciano Floridi, have suggested that because these technologies create new forms of agency or new loci of value
itself, new ethical theories are required to resolve problems. James Moor, in an essay entitled “What Is Computer Ethics?,” offers something in between. Computer ethics deserves attention because it raises not only policy questions that are new (such as “Should we allow computer programs to be privately owned?”) but also novel conceptual questions about the very nature of a computer program, whether more like an idea, a process, or a piece of writing. These conceptual puzzles, particularly acute in the case of privacy, explain why we continue to struggle to resolve so many of the controversial questions that privacy raises.
Computer Ethics as Professional Ethics
Some contributors to the field of computer ethics have seen its greatest potential as a guide for computer scientists, engineers, and other experts in the technologies of computing and information, thus placing it in the general area of professional ethics. Early proponents of this idea, Donald Gotterbarn and Keith Miller, added their voices to those of socially concerned computer scientists and engineers who—starting with Norbert Wiener and Joseph Weizenbaum, followed by Terry Winograd, Peter Neumann, and Alan Borning—exhorted their colleagues to participate actively in steering social deliberation, decision, and investment toward socially, politically, and morally positive ends and also to warn of dangers and possible misuse of the powerful technologies of computation and information. In this area, as in other areas of professional ethics, such as legal and medical ethics, key questions included the duties accruing to computer scientists and engineers as a consequence of their specialized knowledge and training. In the area of system reliability, for example, computer engineers such as Nancy Leveson have devoted enormous energy to articulating the duty to produce, above all, safe systems, particularly in life-critical areas. Responding to calls for greater focus on professional duties, at least two major professional organizations—the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers—have developed codes of professional ethics. Two issues remain controversial. One issue deals with the nature and limits of professional codes. The philosopher Michael Davis has provided a thoughtful account of the role of codes of conduct in encouraging ethical professional practice, in contrast to John Ladd, who has challenged the very possibility that professional codes of conduct can rightly be thought of as codes of ethics. The other issue, specific to the professions within computer technologies, asks whether they are sufficiently similar to traditional professions of law and medicine to warrant the label of "professions."
Porous Borders
Although the philosophical community pursuing inquiry into ethical implications of information technology remains relatively small, its intellectual borders are fluid. In the decades since its emergence, it has been enriched by developments in the literatures and methods of neighboring fields. In turn, many of the works produced within those fields have been influenced by the work of ethicists. A few examples, where cross-disciplinary flow has been particularly active, bear mentioning. One example is information law, which emerged into prominence roughly a decade after philosophical issues of privacy, intellectual property, free speech, and governance spurred many of its core works by legal scholars such as Lawrence Lessig, Yochai Benkler, James Boyle, Pamela Samuelson, Jerry Kang, and Niva Elkin-Koren. As a result of these works, philosophical studies have paid greater attention to issues of public values, the direct effects of policy on values, and the meaning for society of key court rulings. A second prominent influence has come from the areas of philosophy and social study of science and technology, where theoretical writings and empirical studies of scholars such as Langdon Winner, Albert Borgmann, Bruno Latour, Wiebe Bijker, Andrew Feenberg, and Donald MacKenzie have inspired novel approaches to many of the substantive issues of ethics and information technology. Ideas such as the social shaping of technical systems and the values embodied in system design have focused
philosophical attention on design and development details of specific systems and devices, opening a line of work that views the design of systems and devices not as a given but rather as a dependent variable. Although influenced by such ideas, ethicists approach them with a different goal, not only seeking to describe but also to evaluate systems in terms of moral, political, and social values. Philosophers who have pursued these lines of inquiry include Deborah Johnson, Jeroen van den Hoven, and Philip Brey, who has interpreted many key works in social constructivism (an approach to the social and humanistic study of technology that cites social factors as the primary determinants of technical development) for philosophical audiences and developed a concept of disclosive ethics (a philosophical approach which holds that system design may “disclose” ethical implications). Interest in design as a dependent variable has also led to collaborations among philosophers, computer scientists, and researchers and designers of human-computer interfaces who have been inspired by the complex interplay between computer systems and human values. These collaborations are important test beds of the idea that a rich evaluation of technology can benefit from simultaneous consideration of several dimensions: not only technical design, for example, but also, ideally, empirical effects on people and an understanding of the values involved. For example, Lucas Introna and Helen Nissenbaum studied a search-engine design from the point of view of social and political values. Their study tried to achieve an in-depth grasp of the workings of the search engine’s system, which they evaluated in terms of fairness, equality of access to the Web, and distribution of political power within the new medium of the Web. Other researchers who reach across disciplines include Jean Camp and Lorrie Cranor, as well as Batya Friedman, Peter Kahn, and Alan Borning, who have developed value-sensitive design as a methodology for developing computer systems that take multiple factors into consideration. Helen Nissenbaum See also Law and HCI; Privacy; Value Sensitive Design
FURTHER READING
Adam, A. (2002). Cyberstalking and Internet pornography: Gender and the gaze. Ethics and Information Technology, 2(2), 133–142. Brey, P. (1997). Philosophy of technology meets social constructivism. Techne: Journal of the Society for Philosophy and Technology, 2(3–4). Retrieved March 24, 2004, from http://scholar.lib.vt.edu/ejournals/SPT/v2n3n4/brey.html Bynum, T. W. (2001). Computer ethics: Its birth and its future. Ethics and Information Technology, 3(2), 109–112. Dreyfus, H. L. (1999). Anonymity versus commitment: The dangers of education on the Internet. Ethics and Information Technology, 1(1), 15–21. Elkin-Koren, N. (1996). Cyberlaw and social change: A democratic approach to copyright law in cyberspace. Cardozo Arts & Entertainment Law Journal, 14(2), 215–end. Floridi, L. (1999). Information ethics: On the philosophical foundations of computer ethics. Ethics and Information Technology, 1(1), 37–56. Gotterbarn, D. (1995). Computer ethics: Responsibility regained. In D. G. Johnson & H. Nissenbaum (Eds.), Computers, ethics, and social values (pp. 18–24). Englewood Cliffs, NJ: Prentice Hall. Grodzinsky, F. S. (1999). The practitioner from within: Revisiting the virtues. Computers and Society, 29(1), 9–15. Introna, L. D. (2001). Virtuality and morality: On (not) being disturbed by the other. Philosophy in the Contemporary World, 8(1), 31–39. Introna, L. D., & Nissenbaum, H. (2000). Shaping the Web: Why the politics of search engines matters. The Information Society, 16(3), 169–185. Johnson, D. G. (2001). Computer ethics (3rd ed.). Upper Saddle River, NJ: Prentice Hall. Johnson, D. G. (2004). Computer ethics. In H. Nissenbaum & M. E. Price (Eds.), Academy and the Internet (pp. 143–167). New York: Peter Lang. Moor, J. H. (1985). What is computer ethics? Metaphilosophy, 16(4), 266–275. Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79, 119–158. Spinello, R. A., & Tavani, H. T. (Eds.). (2001). Readings in cyberethics. Sudbury, MA: Jones and Bartlett. van den Hoven, J. (1994). Towards principles for designing political-administrative information systems. Information and Public Sector, 3, 353–373.
ETHNOGRAPHY Ethnography has several meanings. It means the study and recording of human culture; it can also mean the work produced as a result of that study—a picture of a people. Today, however, someone interested in human-computer interaction (HCI)
is more likely to encounter ethnography as a term that describes a research method. As a research method, ethnography is widely used in a multitude of ways, but unfortunately also misused.
Principles of Ethnography Ethnography is most properly understood as a research methodology. This methodology, which can be described as participant observation, or, more informally, fieldwork, is rooted in the social science of anthropology. Anthropology is the discipline that attempts to develop a holistic account of everything having to do with human beings and their activities—humankind in space and time. Ethnography developed specifically as the chief approach of cultural anthropology, one branch of anthropology. Cultural and social anthropologists use long-term participation and observation to develop deeply contextualized accounts of contemporary ways of life. This distinguishes them from, for example, archaeological anthropologists, who aim to construct similarly holistic understandings of past cultures. What is distinctive about ethnography is its particular way of knowing, what a philosopher would call its epistemological approach. (Epistemology is the philosophical study of how we come to know what we know, or how we justify our belief that we know.) Ethnographers pay as much attention to the ways in which people (including ethnographers themselves) perceive what they know as they do to how people act because ethnographers believe that the ways in which people act are affected by their cultural milieu. Even when focused on actions, ethnographers pay close attention to what those actions reveal about what people take as known. That is, ethnographic methodology accepts the importance of cultural construction to both ideas and action. Further, ethnography attempts to incorporate this acceptance into both the generation and representation of anthropological knowledge. This approach differs from modernist or positivist approaches to knowledge, such as those informing statistical reasoning, which attempt to isolate “facts” from their knowers and from knowers’ culturally dependent acts of knowing.
At the same time, ethnographers generally accept the existence of a reality external to the constructs of the knower. In accepting this external reality, and therefore also acknowledging the value of talk regarding what is knowable about it, ethnography differs from, for example, postmodernism, which tends to limit discussion to human apprehensions rather than the effect of these apprehensions on the world. Intellectually between postmodernism and positivism, ethnography is an empirical research strategy that strives to account satisfactorily for the dynamics of different cultures, or of human culture in general, without trying to construct transcendent laws of human action.
History of Ethnography
Historically, ethnography emerged out of late-nineteenth-century British anthropology. Historians of anthropology credit the British anthropologist W. H. R. Rivers (1864–1922) as the first full practitioner of ethnography. Before Rivers, anthropologists generally relied entirely on data collected by missionaries or colonial officials; at most, they had only brief personal experience of the cultures they analyzed. Rivers stressed the importance of long-term, in-depth exposure to the culture of interest, so that the contexts of cultural "facts" such as kin terms or magical practices could be more fully grasped. The German-born expatriate Franz Boas (1858–1942) brought ethnography to anthropology in the United States, both by his own fieldwork and through teaching his students, notably Ruth Benedict (1887–1948) and Margaret Mead (1901–1978). Partly because of the great variety characteristic of the Native American cultures that were its principal foci, U.S. ethnography came to emphasize the particularity of each culture. The anthropologist usually credited with "inventing" ethnography, however, is the Polish nobleman Bronislaw Malinowski (1884–1942). His long 1922 introduction to Argonauts of the Western Pacific is still presented as the classic statement of ethnography. Of particular note in Malinowski's approach to ethnography was his stress on the need for the ethnographer to develop an "emotional dependency" upon the "native" informants. Only if the
ethnographer were cut off from regular contact with his or her culture of origin could he or she hope to develop the "insider" perspective still taken as the hallmark of good ethnographic writing or film. Under Malinowski's influence, the participative aspect of ethnographic observation developed a particular quality. Even though the ethnographer knows that his or her knowledge of the culture is not yet complete, he or she tries to participate in cultural activity, performing an indigenous role as well as possible. The hope is that informants will critique ethnographers' performance and thereby accelerate the pace of the ethnographers' learning. This very active form of research differs markedly from the approach of social scientists focused on minimizing their impact on the society under observation. Ethnographic perspectives blended with sociology's tradition of field research, especially in the founding of new subfields such as community studies. After World War II it became popular to model sociological studies as closely as possible on laboratory science, but nonetheless ethnography maintained a vigorous and continuous presence in important theoretical and empirical fields. For example, ethnographic studies were a key part of the 1960s displacement of industrial sociology (with its focus on bureaucracy) by the sociology of work (which focused on the actual labor process). Sociological ethnography did not retain all aspects of anthropological ethnography's epistemology, however. For example, Malinowski claimed that an ethnographer's position as an outsider gave special insights, as he or she would be able to see aspects of a culture less visible to insiders, who took the culture for granted. Sociological ethnographers who were studying their own culture obviously could not claim to have an outsider's insights. Interestingly, while Malinowski stressed how culture influenced the ways of knowing of the targets of ethnographic investigation, he believed the ethnographic knowledge of the professional anthropologist to be fully scientific in the positivist sense. He did not in general address the cultural biases and assumptions the anthropologist brought to the field. Such contradictions made ethnography vulnerable to critiques in the 1980s. Not only did those critiques spawn an interest in new, experimental forms of
ethnographic representation (including more literary forms, such as poetry), they also fed directly into subaltern studies—the study of oppressed peoples. The trend toward privileging the native's cultural understanding over the outsider's, itself part of the critique of anthropology's role in colonialism, is influential in contemporary cultural studies. Despite the internal anthropological critiques, the cachet of ethnography grew considerably during the 1980s in other fields. It was frequently featured as the methodology of preference in feminist critiques of social science and was drawn on by those advocating field studies in social psychology. Today, most cultural anthropologists and many other scholars continue to practice ethnography, drawing on this long and rich tradition of practice and critique.
Ethnography Meets Human-Computer Interaction
In research on computing, ethnography has been as much a basis for critique as a methodology. The constructs of early HCI—for example, the notion of the "(individual) man [sic]–(individual) machine interface"—were critiqued, and social scientists began undertaking fuller, contextual ethnographic studies of computing. In 1991, for example, the anthropologist David Hakken argued that, instead of striving after "human-centered" computing, a better goal would be "culture-centered" computing. That is, rather than trying to design systems to meet universal characteristics, designers should orient systems toward specific cultural frameworks. Field study of actual computing encouraged some computer scientists to broaden their conception of the nature of interfaces by developing systems for computer-supported collaborative (or collective) work. Terry Winograd and Fernando Flores's 1986 theoretical critique of positivism in computer science (Understanding Computers and Cognition: A New Foundation for Design) drew on ethnography. Geoffrey Bowker is among the many computer-science ethnographers who now refer to the antiformalist trends in computer science as social informatics.
Anthropological ethnographers such as Henrik Sinding-Larsen and Lucy Suchman were central to perhaps the single most influential tradition of social informatics, variously referred to in Europe as the Scandinavian approach or as user participation in systems development and called participatory design in the United States. For a variety of practical as well as political reasons, Nordic systems developers wanted to broaden the influence of users over systems development, and ethnography seemed like a good way to gain entry into the users' world. Kristen Nygaard, arguably the computer scientist with the most sustained influence on this trend, understood the potential value of ethnography. He recruited Sinding-Larsen onto an Oslo hospital project called Florence, one of the three projects recognized as foundational to the Scandinavian approach. Through their participation in the annual Øksøyen and IRIS (Information Systems Research in Scandinavia) and decennial Computers in Context conferences, ethnographers like Suchman, Jeanette Blomberg, and Julian Orr brought ethnography into a continuing dialogue with Nordic and, later, U.S. systems development. From its inception in projects such as Florence, however, the relationship between ethnography and HCI studies has been complex. The relationship has spawned a wide variety of approaches as well as misunderstandings. For example, the frequent failure of a computerized system to perform in the manner intended might be a consequence either of its design or of something in the context of its use. To figure out which, one has to investigate the use context. One could do this by making the laboratory more like the use context (one approach to "usability"), or one could examine how the system is actually used in the real world, through ethnographic studies of use. Because "usability studies" and "use studies" sound similar and have some things in common, they were sometimes glossed together, even though they have very different epistemological underpinnings. What one learned, or concluded could not be learned, depended upon training and professionally preferred reading style (Allwood and Hakken 2001). The medical anthropologist Diana Forsythe has chronicled how a different misunderstanding emerged from the bizarre courting dance of ethnography and artificial
intelligence: Beginning in the mid-1980s, the former moved from being an object of derision for those trained in rigorous natural science to a privileged technique, albeit distorted from its anthropological form. In the process of domesticating ethnography for informatics, some informaticians turned themselves into self-trained ethnographers, while other social scientists (e.g., Crabtree) developed a “quick and dirty” ethnography. To build effective object-oriented as opposed to relational databases, one needs a good understanding of the notional “things” relevant to a work process. After some initial efforts to identify these themselves, informaticians turned the task over to social scientists who, after a week or so “hanging out” in a work site, could generate a list of apparently relevant notions. Such lists, however, are likely to lack the depth of interrelational and contextual understanding that would come from longer, more intense participant observation. In “quick and dirty” appropriations of fieldwork, ethnography ceases to be an epistemology and is reduced to a technique, one of several qualitative research tools. In the late 1990s, before her untimely death, Forsythe despaired of these developments. The computer scientist Jonas Løwgren and the cultural anthropologist James Nyce have criticized HCI researchers interested in ethnography for wanting to do ethnography but only managing an ethnographic gaze. This gaze, not ethnography in the full sense, has been incorporated into practices as diverse as program evaluation and educational research, with uneven consequences. When their limitations are understood, the various appropriations of the ethnographic gaze can be of substantial value, but it should not be confused with ethnography in the full sense. It is ironic that just when ethnography was under attack in its home base of cultural anthropology, its general popularity as a way of knowing was spreading very widely. Too often, however, research self-labeled as ethnographic violates one or more of the epistemological premises at the core of anthropological ethnography. As a result, the term is used to cover such a broad array of approaches as to have lost some of its meaning. Its methods are often qualitative, but ethnography is not just qualitative methods. Indeed, ethnographers often also deploy quantitative methods in
their data collection and invoke numbers in their analyses. Good ethnography integrates various kinds of information, but particularly information derived from active participation, which is at the center of the idea of ethnography.
Current Ethnography of HCI
Fortunately, today there is a rich body of good ethnographic HCI research. One type of study focuses specifically on the work of people developing computer systems. Forsythe's 2001 Studying Those Who Study Us: An Anthropologist in the World of Artificial Intelligence includes several ethnographic studies of computer scientists in the process of developing artificial intelligence and knowledge-engineering systems. The cultural anthropologist Gary Downey uses ethnography to study the education of computer engineers, while the anthropologist Stefan Helmreich focuses on computer scientists who see themselves as developing artificial forms of life in code. Hakken's Cyborgs@cyberspace? reports on the results of his ethnographic study of Nordic systems developers. Another body of HCI ethnography looks at computing among actual users in the general population. The sociologist Susan Leigh Star's The Cultures of Computing (1995) contains several good examples. Hakken's The Knowledge Landscapes of Cyberspace (2003) deals with knowledge management in commercial organizations, social services, and schools. A new group of design anthropologists are trying to put this knowledge to work in product development. A final body of HCI ethnography reaches back to an earlier tradition by placing computing practices in broad social contexts. Computing Myths, Class Realities (1993) is one example, as is The Internet: An Ethnographic Approach (1999), which examines Internet use in Trinidad. As computing becomes more densely integrated into non-Western social formations, studies such as Jonah Blank's Mullahs on the Mainframe (2001) will provide well-considered insights to those who seek to integrate technical and economic development work. In sum, there is reason to expect continuing expansion of rich ethnography in the study of HCI, as well as new appropriations of the ethnographic gaze. In evaluating these latter especially, it is useful to keep in mind that, to the cultural anthropologist, ethnography is more than an array of methods; it is a way of knowing.
David Hakken
See also Anthropology and HCI; Sociology and HCI
FURTHER READING

Allwood, C. M., & Hakken, D. (2001). ‘Deconstructing use’: Diverse discourses on ‘users’ and ‘usability’ in information system development and reconstructing a viable use discourse. AI & Society, 15, 169–199.
Blank, J. (2001). Mullahs on the mainframe: Islam and modernity among the Daudi Bohras. Chicago: University of Chicago Press.
Blomberg, J. (1998). Knowledge discourses and document practices: Negotiating meaning in organizational settings. Paper presented at the annual meeting of the American Anthropological Association, Philadelphia, PA.
Bowker, G., Star, S. L., Turner, W., & Gasser, L. (1997). Introduction. In G. Bowker, S. L. Star, W. Turner, & L. Gasser (Eds.), Social science, technical systems, and cooperative work: Beyond the great divide (pp. x–xxiii). Mahwah, NJ: Lawrence Erlbaum Associates.
Clifford, J., & Marcus, G. (Eds.). (1986). Writing culture: The poetics and politics of ethnography. Berkeley and Los Angeles: University of California Press.
Crabtree, A. (1998). Ethnography in participatory design. Paper presented at the Participatory Design Conference, Seattle, WA.
Downey, G. (1998). The machine in me: An anthropologist sits among computer engineers. New York: Routledge.
Ehn, P. (1988). Work-oriented design of computer artifacts. Stockholm: Almqvist & Wiksell.
Forsythe, D. (2001). Studying those who study us: An anthropologist in the world of artificial intelligence. Palo Alto, CA: Stanford University Press.
Hakken, D. (1991). Culture-centered computing: Social policy and development of new information technology in England and the United States. Human Organization, 50(4), 406–423.
Hakken, D. (1999). Cyborgs@cyberspace?: An ethnographer looks to the future. New York: Routledge.
Hakken, D. (2003). The knowledge landscapes of cyberspace. New York: Routledge.
Hakken, D., & Andrews, B. (1993). Computing myths, class realities. Boulder, CO: Westview Press.
Helmreich, S. (1999). Silicon second nature: Culturing artificial life in a digital world. Berkeley and Los Angeles: University of California Press.
Miller, D., & Slater, D. (1999). The Internet: An ethnographic approach. Oxford, UK: Berg.
Nyce, J., & Løwgren, J. (1995). Toward foundational analysis in human-computer interaction. In P. J. Thomas (Ed.), The social and interactional dimensions of human-computer interfaces (pp. 37–46). Cambridge, UK: Cambridge University Press.
Orr, J. (1996). Talking about machines: An ethnography of a modern job. Ithaca, NY: Cornell University Press.
Sinding-Larsen, H. (1987). Information technology and management of knowledge. AI & Society, 1, 93–101.
Star, S. L. (Ed.). (1995). The cultures of computing. Oxford, UK: Blackwell.
Suchman, L. (1987). Plans and situated actions. Cambridge, UK: Cambridge University Press.
Van Maanen, J. (1983). The fact of fiction in organizational ethnography. In J. Van Maanen (Ed.), Qualitative methodology (pp. 37–55). Newbury Park, CA: Sage.
Whyte, W. F. (1991). Participatory action research. Newbury Park, CA: Sage.
Winograd, T., & Flores, F. (1986). Understanding computers and cognition: A new foundation for design. Norwood, NJ: Ablex.
EVOLUTIONARY ENGINEERING

How can people make computers solve problems without having to program them with every detail needed for a solution? One answer is to endow the computers with some form of artificial intelligence that allows them to make judgments independently. Another answer makes computers assist humans in breeding solutions, much the way farmers breed animals and plants. This is called genetic programming, or evolutionary programming, and it opens the possibility of very different relationships between people and their machines.

Since the dawn of the agricultural age thousands of years ago, people have understood that artificial selection can gradually transform populations of living creatures to make them more useful for human beings. For example, people who herd cattle might slaughter the cows that give little milk, while allowing the best dairy cows to survive and have offspring. Over generations, this will produce cows that give more milk. Genesis 30–31 reports that Jacob fully understood how to breed sheep and goats for desirable characteristics in biblical times. In the mid-nineteenth century, Charles Darwin developed the concept of natural selection, which has become a key analytic principle in biology. In the twentieth century, historians, social scientists, and computer scientists began to apply evolutionary ideas to technology.
Technological Evolution People like to think that new technology is invented by heroic geniuses such as Leonardo da Vinci and Thomas Alva Edison, but many scholars argue that innovation is more like a process of biological evolution in which individuals play only minor roles. In his classic 1922 study of social change, the sociologist William F. Ogburn acknowledged that inventors tend to be more intelligent than the average. But, he explained, inventions are collective additions to an existing cultural base that cannot occur unless the society has already gained a certain level of expertise in the particular area. For example, the telegraph was created by combining many existing elements, such as electricity, coils, batteries, signaling, and alphabet codes. A personal computer connected to the World Wide Web combines a television set (monitor), telephone (Internet connection), and typewriter (keyboard), plus a myriad of small electronic and programming innovations, each of which has its own heritage reaching back in time to earlier inventions such as the telegraph and the electronic calculator. Ogburn said that technical evolution takes place in four interrelated steps: invention, accumulation, diffusion, and adjustment. He devoted an entire chapter of his influential book to a long list of inventions and discoveries that were made independently by two or more people, illustrating the fundamental role of culture in preparing the way for such innovations. These inventions accumulate, such that an advancing culture possesses more of them each year, and they diffuse from their original application and geographic area to others. Finally, society must adjust to new technologies, often by innovating in other fields, as happened in recent history when the spread of the Web made new kinds of e-businesses possible, which in turn required both programming and economic innovations. From this perspective, invention is very much like genetic mutation in biology. Many mutations are disadvantageous and die out, and the same is true for inventions. From the vast menagerie of inventions produced, only some survive, spread, and combine with other successful inventions in a manner analogous to sexual reproduction, producing offspring that in turn may combine with other inventions if they survive in the harsh environment of the market-
place. Like living creatures, technological innovations inherit characteristics from their predecessors, exhibit great diversity caused by both mutation and recombination, and undergo selection that allows some to survive while others become extinct.
Genetic Programming In 1975 the computer scientist John Holland offered an evolutionary approach to programming that he called "genetic plans," but that today is known as genetic algorithms. In biology, the genetic code is carried by DNA molecules, which are long strings of nucleotide bases denoted by four letters of the alphabet: A (adenine), C (cytosine), G (guanine), and T (thymine). By analogy, genetic algorithms typically employ strings of letters or numbers that can be read by the software system as if they were short programs specifying a series of actions. For example, URDL might instruct the cursor of the computer to go through a series of moves on the screen: up, right, down, and left. A genetic algorithm system contains many of these strings, perhaps thousands of them. Each can be interpreted as different instructions for solving the same well-specified problem. The URDL code might be instructions for moving the cursor through a maze on the computer screen. (URDL would not be a very good solution to the maze problem, however, because it merely returns the cursor to where it started.) All the strings currently in the system are tested to see how far each one takes the cursor from the start of the maze toward the goal. That is, the system evaluates the fitness of each string. Then it performs selection, removing strings like URDL that have very poor fitness and copying strings that have high fitness. Selection cannot be too harsh, or it will eliminate all variation among the strings, and variation is essential to evolution. One way to increase variation is through mutation—randomly adding, subtracting, or substituting a letter in a string. Suppose the first four moves in the maze are UURR, and the path then branches both up and down. Then both UURRU and UURRD will have high fitness, and both UURRL and UURRR will have lower fitness. After repeating these steps many times, this process will result in a population of long strings that represent
good paths through the maze, perhaps even many copies of the one best solution. Selection, reproduction, and mutation are enough for evolution to take place as it does in microorganisms, but humans and many other complex life forms have one more important process: sexuality. From an evolutionary perspective, sex has one great advantage. It combines genes from different lineages. Imagine that you are breeding dairy cows. Perhaps Bossie the cow produces lots of milk but is mean and kicks you whenever you approach. Bertha is sweet tempered but produces little milk. Assuming these characteristics are genetically determined, you let both Bossie and Bertha have many babies. Then you breed the offspring of one with the offspring of the other, hoping to get a variety of mixtures of their characteristics. Perhaps one of Bossie's and Bertha's granddaughters, Bessie, combines their virtues of abundant milk and sweet disposition. She becomes the mother of your herd of ideal dairy cows. Another granddaughter, Bortha, is mean and unproductive. She becomes roast beef. The equivalent of sexual reproduction in genetic algorithms is called crossover. After checking the fitness of each string in the system, the program makes some new strings by adding part of one high-fitness string to part of another. In the maze example, UUDUU and RLRRL could produce a string of higher fitness, UURRU, through crossover. Adding crossover to a genetic algorithm generally allows it to find better solutions quicker, especially if the problem is difficult. Each of the strings evolving through a genetic algorithm is a kind of computer program, and this method can actually be used to write software. In recent years, experiments have shown the approach can solve a very wide range of engineering problems, in hardware as well as software.
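The loop just described can be sketched in a few lines of Python. This is only an illustrative sketch of the general technique, not code from any system discussed here: the small maze, the twelve-move strings, and the population settings are all invented for the example.

import random

# A hypothetical 5 x 5 maze: '#' marks walls, start at top left, goal at bottom right.
MAZE = ["S.#..",
        ".#...",
        "...#.",
        "#.#..",
        "....G"]
MOVES = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}

def fitness(path):
    """Walk the move string from the start; 0 means the goal was reached."""
    r, c = 0, 0
    for m in path:
        dr, dc = MOVES[m]
        nr, nc = r + dr, c + dc
        if 0 <= nr < 5 and 0 <= nc < 5 and MAZE[nr][nc] != "#":
            r, c = nr, nc                      # legal move; otherwise stay put
    return -(abs(r - 4) + abs(c - 4))          # negative distance to the goal

def mutate(path):
    i = random.randrange(len(path))            # substitute one random letter
    return path[:i] + random.choice("UDLR") + path[i + 1:]

def crossover(a, b):
    i = random.randrange(1, len(a))            # single-point crossover
    return a[:i] + b[i:]

population = ["".join(random.choice("UDLR") for _ in range(12)) for _ in range(60)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)  # evaluate the fitness of every string
    parents = population[:20]                   # selection: the fittest third survives
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children             # reproduction with variation

best = max(population, key=fitness)
print(best, fitness(best))                      # fitness 0 means a complete path

Dropping the crossover step and relying on mutation alone still works in this sketch, but, as noted above, adding crossover generally lets the system find good solutions more quickly.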
Engineering Applications A team led by the computer scientist John Koza at Stanford University has been using genetic programming to design electronic circuits, including filters for audio systems optimized to block some frequencies and pass others, controllers such as automobile cruise control devices, and circuit generators that produce
desired outputs. In a way, designing electronic circuits is the opposite of our maze example, because it involves assembling electronic components into a maze of circuits having the desired characteristics, rather than finding the best path through a pre-existing maze. A number of researchers have used genetic programming to design the details of neural networks, a kind of artificial intelligence approach that can be built as analog electronic circuits but are usually expressed digitally in software instead. One of the classic challenges for genetic programming was designing truss structure for use in large machines and architectural buildings. The problem is to design an open structure of beams and girders that can reach horizontally out from a wall a given distance and support a given weight, and to do so with the minimum amount of material. This problem involves many variables, because beams and girders of various sizes can be connected in many different ways to form the truss, so it is a nice computational challenge. It is also relatively straightforward to test the quality of the solutions produced by the computer, either through mathematical analysis or through actually building a design and seeing how much weight it can support before breaking. The same approach can be useful in chemical engineering, for example to determine an optimum combination of raw materials and temperature to produce high quality plastics reliably at low cost. A research team at Brandeis University, called DEMO (Dynamical and Evolutionary Machine Organization) and including the scientists Jordan Pollack and Gregory Hornby, has used genetic programming to design robots and virtual creatures. DEMO believes that evolutionary engineering will be greatly facilitated by two related technological developments: improvements in the quality of computer aided mechanical design, including simulations, and development of inexpensive methods for rapid prototyping and manufacture of single items. Together, these two innovations could achieve both the cost-effectiveness of mass production and the customer satisfaction of high-quality skilled craftsmanship with unique designs. Combined with genetic programming, these methods could produce an evolutionary system in which machines constantly
improve over time, the better to serve the needs of human beings.

In the GOLEM (Genetically Organized Lifelike Electro Machines) project, the DEMO team has used genetic programming to design populations of simple robots that are then actually built, so that their characteristics can be compared in the real world, rather than just in computer simulations. These very simple robots have one task only: to creep across the floor. They are constructed out of piston-like actuators and bars that form a structure that can hinge at the connections. In the computer, a population of simulated robots evolves by means of genetic algorithms through hundreds of generations. Mutation occasionally adds or subtracts a bar or actuator at a randomly chosen point in the structure. Selection tests how far each of the simulated robots can creep in a given period of time, and those that go the farthest have more offspring in the next generation. A number of the robots have actually been fabricated and can indeed crawl around the laboratory. The DEMO team set up the computer and fabrication system, and established its goal, but the robots' design was carried out automatically by evolution.
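A GOLEM-style loop can be sketched the same way, with the genome now a variable-length list of parts rather than a fixed string. The physics simulation that scores crawling distance is replaced here by a stand-in function, since the actual DEMO simulator is not described in this entry; every name and number below is illustrative only.

import random

PART_TYPES = ("bar", "actuator")

def random_robot(n_parts=6):
    # Genome: an ordered list of (part type, length) pairs; purely illustrative.
    return [(random.choice(PART_TYPES), random.uniform(0.1, 1.0)) for _ in range(n_parts)]

def mutate(robot):
    robot = list(robot)
    i = random.randrange(len(robot) + 1)
    if random.random() < 0.5 or len(robot) < 2:
        robot.insert(i, (random.choice(PART_TYPES), random.uniform(0.1, 1.0)))  # add a part
    else:
        robot.pop(min(i, len(robot) - 1))                                       # remove a part
    return robot

def crawl_distance(robot):
    # Stand-in for the physics simulation: rewards a rough balance of actuators
    # and bars, plus noise.  A real system would simulate the structure crawling.
    actuators = sum(1 for kind, _ in robot if kind == "actuator")
    bars = len(robot) - actuators
    return min(actuators, bars) + random.uniform(0.0, 0.1)

population = [random_robot() for _ in range(30)]
for generation in range(100):
    population.sort(key=crawl_distance, reverse=True)
    survivors = population[:10]                   # those that creep farthest reproduce
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=crawl_distance)
print(len(best), "parts; simulated distance", round(crawl_distance(best), 2))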
Challenges and Opportunities While evolutionary computing can solve many engineering problems, it is used relatively seldom outside research studies. One reason is that the calculations are time consuming, especially in figuring the relative fitness of all the strings. Parallel processing helps here, because it will allow all the fitness evaluations to happen simultaneously, and John Koza's team works with a cluster of a thousand computers. But there is also a human-computer interaction barrier, because we do not yet have convenient systems that professional engineers can use to tell the genetic algorithm what kind of solution it is supposed to evolve. A related problem concerns how to give the computer the real-world information it can use to evaluate the fitness of competing designs. In the near future we can imagine that engineers will be able to define problems through a comfortable multimedia computer interface, and highly
intelligible output will describe solutions as they evolve. Manufacturing companies could unobtrusively use their customers to evaluate the fitness of designs by abandoning mass production of standard models in favor of near infinite variety. The result could be an approach that radically enhances human creativity at the same time it inspires people to think about engineering design in a fresh way.
William Sims Bainbridge

See also Artificial Intelligence

FURTHER READING

Bainbridge, W. S. (in press). The evolution of semantic systems. Annals of the New York Academy of Science.
Basalla, G. (1988). The evolution of technology. Cambridge, UK: Cambridge University Press.
Deb, K., & Gulati, S. (2001). Design of truss-structures for minimum weight using genetic algorithms. Finite Elements in Analysis and Design, 37(5), 447–465.
Dennett, D. C. (1995). Darwin's dangerous idea. New York: Simon & Schuster.
Gallagher, J. C., & Vigraham, S. (2002). A modified compact genetic algorithm for the intrinsic evolution of continuous time recurrent neural networks. In W. B. Langdon, E. Cantú-Paz, K. Mathias, R. Roy, D. Davis, R. Poli et al. (Eds.), GECCO 2002: Proceedings of the Genetic and Evolutionary Computation Conference (pp. 163–170). San Francisco: Morgan Kaufmann.
Goodman, E. D., Seo, K., Rosenberg, R. C., Fan, Z., Hu, J., & Zhang, B. (2002). Automated design methodology for mechatronic systems using bond graphs and genetic programming. In 2002 NSF Design, Service, and Manufacturing Grantees and Research Conference (pp. 206–221). Arlington, VA: National Science Foundation.
Holland, J. H. (1975). Adaptation in natural and artificial systems. Cambridge, MA: MIT Press.
Hornby, G. S., & Pollack, J. B. (2001). Evolving L-systems to generate virtual creatures. Computers and Graphics, 25(6), 1041–1048.
Hornby, G. S., & Pollack, J. B. (2002). Creating high-level components with a generative representation for body-brain evolution. Artificial Life, 8(3), 223–246.
Koza, J. R. (1992). Genetic programming. Cambridge, MA: MIT Press.
Koza, J. R., Keane, M. A., & Streeter, M. J. (2003). Evolving inventions. Scientific American, 288(2), 52–59.
Li, Y., Rangaiah, G. P., & Ray, A. K. (2003). Optimization of styrene reactor design for two objectives using a genetic algorithm. International Journal of Chemical Reactor Engineering, 1, A13.
Miller, G. (2000). Technological evolution as self-fulfilling prophecy. In J. Ziman (Ed.), Technological innovation as an evolutionary process (pp. 203–215). Cambridge, UK: Cambridge University Press.
Ogburn, W. F. (1922). Social change. New York: Huebsch.
Pollack, J. B., Lipson, H., Hornby, G., & Funes, P. (2001). Three generations of automatically designed robots. Artificial Life, 7(3), 215–223.

EXPERT SYSTEMS
Expert systems (ES) are computer systems that capture and store human problem-solving knowledge (expertise) so that it can be utilized by less knowledgeable people. An alternate term is knowledge-based expert systems. Expert systems imitate human experts’ reasoning processes in solving specific problems and disseminate scarce knowledge resources, leading to improved, consistent results. As the knowledge in an expert system is improved and becomes more accurate, the system may eventually function at a higher level than any single human expert can in making judgments in a specific, usually narrow, area of expertise (domain). Expert systems are part of artificial intelligence (the subfield of computer science that is concerned with symbolic reasoning and problem solving). They use a symbolic approach to representing knowledge and simulate the process that experts use when solving problems. Knowledge, once captured through the knowledge acquisition process, must be represented, typically as production rules (knowledge representation methods in which knowledge is formalized into rules containing an “IF” part and a “THEN” part and, optionally, an “ELSE” part). However, additional knowledge representations (formalisms for representing facts and rules about a subject or a specialty) exist; each problem has a natural fit with one or more knowledge representations. To be useful, knowledge must be utilized through a reasoning process implemented in the inference engine (the expert system component that performs reasoning [thinking]). The structure of expert systems is important, as are the application areas to which expert systems have been successfully applied.
Expertise, Experts, and Knowledge Acquisition Expertise is the extensive, task-specific knowledge acquired from training, reading, and experience. Through their expertise, experts make better and faster decisions than nonexperts in solving complex problems. A person requires much time to become an expert. Novices become experts only incrementally. Experts have a high skill level in solving specific problem types. Experts generally are good at recognizing and formulating problems, solving problems quickly and properly, explaining their solutions, learning from experience, restructuring knowledge, breaking rules when necessary, and determining relevant factors. When faced with new problems, their solutions tend to be pretty good. To mimic a human expert, we can implement an ES that directly incorporates human expertise. Typically, an ES can explain how it obtains solutions and why it asks for specific information and estimate its measure of confidence in its solutions, as would an expert. The objective of an expert system is to transfer expertise from an expert to a computer system and then to other nonexpert humans. This transfer involves knowledge acquisition, knowledge representation, knowledge inferencing, and, finally, knowledge transfer to the user. Knowledge is stored in a knowledge base and reasoned with by an inference engine. Knowledge acquisition is the process of extracting, structuring, and organizing knowledge and transferring it to the knowledge base and sometimes to the inference engine. Formally, knowledge is a collection of specialized facts, procedures, and judgment rules. A knowledge engineer applies AI methods to applications requiring expertise, including developing expert systems. Knowledge engineering involves knowledge acquisition, representation, validation, inferencing, explanation, and maintenance. Knowledge engineering also involves the cooperation of human experts in explicitly codifying the methods used to solve real problems. The most common method for eliciting knowledge from an expert is interviews. The knowl-
edge engineer interviews expert(s) and develops an understanding of a problem. Then he or she identifies an appropriate knowledge representation and inferencing approach. Acquired knowledge must be organized and stored in a knowledge base. A good knowledge representation naturally represents the problem domain. People have developed many useful knowledge representation schemes through the years. The most common are production rules and frames. Rules are used most frequently. Any number of representations may be combined into a hybrid knowledge representation. Most commercial (called production) ES are rule based. Knowledge is stored as rules, as are the problem-solving procedures. Knowledge is presented as production rules in the form of condition-action pairs: “IF this condition occurs, THEN some action will (or should) occur.” This action may include establishing a fact to be true, false, or true to some degree, which is known as a “confidence level.” For example, a rule may state: “IF the light bulb is off, THEN move the switch to the ON position.” Each rule is an autonomous chunk of expertise. When utilized by the inference engine, the rules behave synergistically (relating to combined action or operation). Rules are the most common form of knowledge representation because they are easily understood and naturally describe many real situations. A frame includes all the knowledge about a particular object. This knowledge is organized in a hierarchical structure that permits a diagnosis of knowledge independence. Frames are used extensively in ES. Frames are like index cards that describe conditions and solutions. Given a real problem, a case-based reasoning (CBR) inference engine searches the frames to identify a closest match to solve a problem. This procedure is similar to an expert remembering a specific situation that is like or close to the new one encountered.
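The two representations can be made concrete with a small sketch. The slot names and the light-bulb rule below simply restate the examples given above in data-structure form; they are illustrative, not a standard schema.

# A production rule as a condition-action pair (IF / THEN, with an optional ELSE).
light_rule = {
    "if":   {"light_bulb": "off"},
    "then": {"action": "move switch to ON position"},
    "else": None,
}

# A frame: all the knowledge about one kind of object, organized as named slots.
# The slot names here are hypothetical illustrations.
light_bulb_frame = {
    "is_a": "electrical_device",
    "states": ["on", "off", "burned_out"],
    "typical_fault": "burned-out filament",
    "repair": "replace bulb",
}

def rule_applies(rule, facts):
    """True when every condition in the rule's IF part matches the known facts."""
    return all(facts.get(k) == v for k, v in rule["if"].items())

facts = {"light_bulb": "off"}
if rule_applies(light_rule, facts):
    print(light_rule["then"]["action"])   # -> move switch to ON position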
Inferencing and Explanation

An expert system literally can reason (that is, think). After the knowledge in the knowledge base is at a sufficiently high level of accuracy, it is ready to be
used. The inference engine is a computer program that accesses the knowledge and controls one or more reasoning processes. The inference engine directs a search through the knowledge base. It asks for facts (for the IF part of rules) that it needs in order to “fire” rules and reach their conclusions (the THEN part). The program decides which rule to investigate, which alternative to eliminate, and which attribute to match. The most common inferencing methods for rule-based systems are backward and forward chaining. Backward chaining is a goal-driven approach to problem solving. One starts from an expectation of what is to happen (hypothesis), then seeks evidence (facts) to support (or contradict) the expectation. An ES starts with a goal to be verified as either true or false. Then it looks for a rule that has that goal in its conclusion. It then checks the premise of that rule in an attempt to satisfy this rule. When necessary, the ES asks the user for facts that it needs to know. If the search for a specific solution fails, the ES repeats by looking for another rule whose conclusion is the same as before. The process continues until all the possibilities that apply are checked or until the initially checked rule (with the goal) is satisfied. If the goal is proven false, then the next goal is tried. Forward chaining is a data-driven approach. It starts with all available information (facts) and tries to reach conclusions. The ES analyzes the problem by looking for the facts that match the IF portion of its IF-THEN rules. As each rule is tested, the program works its way toward one or more conclusions. Typically, backward chaining is utilized in diagnostic systems such as those for the medical or equipment repair areas, whereas forward chaining is utilized in financial and accounting applications. Automatic control applications, such as those that run steel mills and clay processing plants, typically use forward chaining. Human experts are often asked to explain their decisions. Likewise, ESs must also be able to explain their actions. An ES must clarify its reasoning, recommendations, or other actions. The explanation facility does this. Rule-based ES explanation traces rules that are fired as a problem is solved. Most ES explanation facilities include the “why?” question
(when the ES asks the user for some information); advanced systems include the "how?" question (how a certain conclusion was reached).

A key issue in expert systems is the fuzziness of the decision-making process. Typical problems have many qualitative aspects (the engine sounds funny), and often when a rule reaches a conclusion, the expert may feel it is right only about seven times out of ten. This level of confidence must be considered. Certainty theory performs this task. Certainty factors (CF) express belief in an event (fact or hypothesis) based on evidence (or the expert's assessment) along a scale, for example, anywhere from 0 (completely false) to 1 (completely true). The certainty factors are not probabilities but rather indicate how true a particular conclusion is.
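A tiny forward-chaining engine with certainty factors might be sketched as follows. The rules, the numbers, and the convention used to combine confidences (discount by the rule's CF and keep the stronger of repeated conclusions) are invented for illustration; real systems use more elaborate certainty calculi.

# Each rule: (required facts, concluded fact, certainty factor of the rule).
RULES = [
    ({"engine_cranks", "engine_wont_start"}, "fuel_problem", 0.7),
    ({"fuel_problem", "fuel_gauge_empty"}, "out_of_fuel", 0.9),
]

def forward_chain(facts):
    """facts: dict mapping fact -> certainty in [0, 1]. Fires rules until nothing changes."""
    changed = True
    while changed:
        changed = False
        for needed, conclusion, cf in RULES:
            if needed.issubset(facts):
                # Conclusion certainty: weakest supporting fact, discounted by the rule CF.
                belief = cf * min(facts[f] for f in needed)
                if belief > facts.get(conclusion, 0.0):
                    facts[conclusion] = belief
                    changed = True
    return facts

print(forward_chain({"engine_cranks": 1.0,
                     "engine_wont_start": 0.9,
                     "fuel_gauge_empty": 0.8}))
# out_of_fuel ends up with certainty 0.9 * min(0.7 * 0.9, 0.8), roughly 0.57

Backward chaining would instead start from a goal such as out_of_fuel and work back through the rules that conclude it, asking for facts only as they are needed.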
Expert System Components and Shells The three major components in every expert system are the knowledge base, inference engine, and user interface. An expert system may contain the following additional components: knowledge acquisition subsystem, blackboard (workplace), explanation subsystem (justifier), knowledge refining system, and user(s). Most expert systems do not contain the knowledge refinement component. When a system makes an error, the error is captured and examined, and the knowledge base is updated through the knowledge acquisition subsystem. The knowledge base contains the relevant knowledge necessary for understanding, formulating, and solving problems. It includes facts such as the problem situation and theory of the problem area and special heuristics (rules that direct the use of knowledge to solve specific problems). The inference engine may include general-purpose problem solving and decision-making rules. The inference engine is the brain of the ES. It is also known as the “control structure” or “rule interpreter” (in rule-based ES). The inference engine provides a methodology for reasoning about information in the knowledge base to reach conclusions. It provides directions about how to use the system’s knowledge by developing the agenda that organizes and controls the steps taken whenever consultation is performed.
Expert systems contain a language processor for friendly communication between the user and the computer. This communication can best be performed through menus in a graphical user interface. Some ESs use natural language processors. Knowledge engineers generally use a software tool called a “shell.” Expert system shells (computer programs that facilitate the relatively easy implementation of a specific expert system, similar to the concept of a Decision Support System generator) include the major components of the expert systems (except for the knowledge itself). Generally a shell can represent knowledge in only one or two ways (for example, rules and frames) and manipulate them in a limited number of ways. Using a shell, the knowledge engineer can focus on the knowledge because the shell manages the knowledge, the interface, the inferencing method(s), and the inferencing rules. Only the knowledge need be added to build an expert system. Examples of some commercial rulebased shells are Corvid Exsys, Ginesys K-Vision, CLIPS, and JESS. Most shells run directly on World Wide Web servers. Users and knowledge engineers access them with Web browsers.
Expert Systems Application Areas

Expert systems can be classified by their general problem areas. Expert system classes include:

■ Interpreting: Inferring situation descriptions from observations
■ Predicting: Inferring likely consequences of given situations
■ Diagnosing: Inferring system malfunctions from observations
■ Designing: Configuring objects under constraints
■ Planning: Developing plans to achieve goals
■ Monitoring: Comparing observations to plans, flagging exceptions
■ Debugging: Prescribing remedies for malfunctions
■ Repairing: Executing a plan to administer a prescribed remedy
■ Instructing: Diagnosing, debugging, and correcting student performance
■ Controlling: Interpreting, predicting, repairing, and monitoring system behaviors

Successful expert system examples include MYCIN (medical diagnosis, Stanford University), XCON (computer system configuration, Digital Equipment Corporation), Expert Tax (tax planning, Coopers & Lybrand), Loan Probe (loan evaluation, Peat Marwick), La-Courtier (financial planning, Cognitive Systems), LMOS (network management, Pacific Bell), LMS (production planning, scheduling, and management, IBM), and Fish-Expert (disease diagnosis, north China). The Nestle Foods Corporation developed an expert system to provide accurate information and advice on employee pension funds. America Online utilizes an expert system to assist its help desk personnel. Many help desks incorporate expert systems that may be accessed either by the organization's personnel or customers.

Expert systems enable organizations to capture the scarce resource of expertise and make it available to others. Expert systems affect an organization's bottom line by providing expertise to nonexperts. Expert systems often provide intelligent capabilities to other information systems. This powerful technology will continue to have a major impact on knowledge deployment for improving decision-making.

Jay E. Aronson

See also Artificial Intelligence; Information Organization
FURTHER READING

Allen, B. P. (1994). Case-based reasoning: Business applications. Communications of the ACM, 37(3), 40–42.
Aronson, J. E., Turban, E., & Liang, T. P. (2004). Decision support systems and intelligent systems (7th ed.). Upper Saddle River, NJ: Prentice Hall.
Awad, E. M. (1996). Building expert systems: Principles, procedures, and applications. Minneapolis/St. Paul, MN: West Publishing.
Dean, T., Allen, J., & Aloimonos, Y. (2002). Artificial intelligence: Theory and practice. Upper Saddle River, NJ: Pearson Education POD.
Feigenbaum, E., & McCorduck, P. (1983). The fifth generation. Reading, MA: Addison-Wesley.
Giarratano, J. C. (1998). Expert systems: Principles and programming. Pacific Grove, CA: Brooks Cole.
Hart, A. (1992). Knowledge acquisition for expert systems. New York: McGraw-Hill.
Jackson, P. (1999). Introduction to expert systems (3rd ed.). Reading, MA: Pearson Addison-Wesley.
Kolodner, J. (1993). Case-based reasoning. Mountain View, CA: Morgan Kaufmann.
Russell, S. J., & Norvig, P. (2003). Artificial intelligence: A modern approach (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
Winston, P. H. (1992). Artificial intelligence (3rd ed.). Reading, MA: Addison-Wesley.
Zahedi, F. (1993). Intelligent systems for business. Belmont, CA: Wadsworth.
EYE TRACKING Eye-movement-based, or gaze-based, interaction has been suggested as a possible easy, natural, and fast noncommand style of interaction with computers. Indeed eye typing, in which the user gazes at letters to type, has often been cited as the prototypical example of an eye-based, noncommand, interactive application. Such gaze-based interaction relies on the interface’s ability to track the position and movement of the user’s eyes. In general, the purpose of eye tracking can be diagnostic or interactive. In diagnostic applications, eye movements are recorded so that scene elements that the user looked at and possibly paid attention to can be analyzed and evaluated later. Used diagnostically, eye trackers help evaluate the user’s attentive behavior. Interface usability studies can use eye movement data to test the visibility of some feature of an interface display, such as desktop or webpage elements. For example, the researcher Joseph H. Goldberg and his colleagues reported in 2002 on the use of eye movements to evaluate the design of webpages, including the positioning of webpage portals. In 1998 the psychologist Keith Rayner reviewed a host of similar diagnostic eye-movement applications. This article concentrates on interactive, realtime applications of eye tracking.
Eye-Tracking Devices The movements of the eyes have been measured and studied for more than a hundred years. Early techniques, however, were either invasive (for example, involving the embedding of a scleral coil in a contact lens worn by the user), or not particularly precise, such as early video-oculography, which relied on
frame-by-frame study of the captured video stream of the eye. Relatively recent advances in eye-tracking technology, which have resulted in increasingly accurate, comfortable, unobtrusive, and inexpensive eye trackers, have made possible the adoption of eye trackers in interactive applications. Today’s eye trackers typically employ a video camera to capture an image of both the eye’s pupil and the corneal reflection of a nearby on- or off-axis light source (usually infrared because infrared light is invisible to the naked eye). Computer and video-processing hardware are then employed to calculate the eye’s point of regard (defined as the point on the screen or other stimulus display being looked at by the viewer) based on these two optical features. Such video-based corneal-reflection eye trackers have become the device of choice in interactive applications. Interactive Control: Early Application Examples Demonstrations of two early interactive uses of eye trackers were published in 1990 by Robert Jacob and by India Starker and Richard Bolt. Starker and Bolt used eye movements to navigate in, and interact with objects within, a graphical fantasy world. Jacob used eye movements to enable the user to select objects on a desktop display (in this case Navy ships that could be selected by gaze to obtain information about them). Both examples of interactive eye-tracking applications exposed a key difficulty with the use of eye movements for interaction, which Jacob called the Midas-touch problem. When eye movements are used for selection of objects shown on a computer display, simulating a mouse-based interface, selection confirmation may be problematic. With a mouse, the user typically confirms selection of interface features by clicking mouse buttons. The eyes, however, cannot register button clicks. The result is that, just as everything Midas touched indiscriminately turned to gold, anything gazed at can potentially be selected. To allow the user to select objects via eye movements, Jacob proposed a measure of dwell time to activate selection; that is, the eyes would have to rest on an object for a certain amount of time—for example, 500 milliseconds. Starker and Bolt used a similar thresholding technique to zoom in on objects of interest in their virtual 3D world.
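The dwell-time remedy for the Midas-touch problem can be sketched as a small filter over the gaze samples delivered by the tracker. The 500-millisecond threshold follows the figure mentioned above; the object layout, sampling rate, and class design are invented for illustration.

DWELL_MS = 500  # dwell threshold suggested in the text

class DwellSelector:
    def __init__(self, objects):
        self.objects = objects            # name -> (x, y, width, height)
        self.current = None               # object currently being dwelt upon
        self.dwell_start = None

    def hit(self, x, y):
        for name, (ox, oy, w, h) in self.objects.items():
            if ox <= x <= ox + w and oy <= y <= oy + h:
                return name
        return None

    def update(self, x, y, t_ms):
        """Feed one gaze sample; returns the selected object name, or None."""
        target = self.hit(x, y)
        if target != self.current:        # gaze moved to a new object (or to none)
            self.current, self.dwell_start = target, t_ms
            return None
        if target is not None and t_ms - self.dwell_start >= DWELL_MS:
            self.dwell_start = t_ms       # avoid repeated selections while gaze rests
            return target
        return None

selector = DwellSelector({"ship_icon": (100, 100, 80, 40)})
for t in range(0, 700, 20):              # simulated 50 Hz gaze samples on the icon
    picked = selector.update(120, 110, t)
    if picked:
        print("selected", picked, "at", t, "ms")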
The Midas-touch problem is related to a more general concern associated with eye movements, namely, eye movement analysis. During normal visual perception, the fovea (the retinal region of highest acuity) is moved about the visual field through very fast eye movements known as saccades. Saccadic eye movements are so fast, and as a result the scene is so blurred during a saccade, that the viewer is rendered effectively blind during these brief periods of foveal repositioning. Areas or regions of interest are inspected with more scrutiny during a relatively steadier form of gaze known as a fixation. Characteristic eye movements include pursuit of moving objects (pursuit movements) and involuntary eye rotation counteracting head movement (vestibular movements), among others. For interactive uses, fixations are usually of greatest interest, because fixated locations are highly correlated with locations to which the user is devoting the greatest visual attention. Eye-movement analysis is used to characterize the type of eye movement recorded by the eye tracker. This is a particularly important component of eye tracking since without analysis it may not be known whether regions in the visual field are simply being glanced at, fixated, or passed over by saccades.

Several different techniques for eye movement analysis are available; the two most popular are based on the eye's position or its velocity. Both seek to identify fixations in the time course of eye movements. Position-based methods typically classify fixations a priori by measuring the deviation of sampled eye movements from some recently calculated central location. Velocity-based techniques, on the other hand, start by defining saccades, since these are often easier to locate in the eye movement time course. Fixations are generally then assumed to be composed of eye movement points that lie outside the brief saccadic periods. Just as in position-based techniques, a threshold, this time of velocity, is used to locate saccades in an effort to classify fixations. The important point here is that regardless of which technique is used, some form of eye movement analysis is usually required to make sense of eye movement patterns.

Interactive Control: Recent Advances

Eye-tracking equipment has improved greatly from the early interactive applications of the 1990s
and earlier, fertilizing research in a variety of interactive scenarios. Eye typing is still an exemplary interactive application, particularly since it provides computer access and a means of communication for certain disabled users, such as quadriplegics. Among the research teams that have examined interactive gaze-based object selection in virtual reality are Vildan Tanriverdi (working with Robert Jacob) and Nathan Cournia and his colleagues. In this interactive situation, eye-based pointing, based on a raycasting approach (selecting objects in the virtual world by shooting an invisible ray along the line of sight) was found to be more efficient than arm-based pointing when arm pointing relied on the arm extension approach (lifting the arm to point as if a laser were attached to one’s fingertip). Research on general gaze-based selective eyetracking applications has also continued, with advances in applications in which gaze is used as an indirect indicator of the user’s intent. Arguing that gaze-based pointing is an interactive strategy that inappropriately loads the visual channel with a motor control task, Shumin Zhai and his colleagues contend that gaze-based functionality is fundamentally at odds with users’ natural mental model, in which the eye searches for and takes in information while coordinating with the hand for manipulation of external objects. Zhai and colleagues provide an alternative to direct gaze-based selection of user interface objects by using gaze as an indirect accelerator of the mouse pointer. Other gaze-based interaction styles use gaze indirectly to manipulate interface objects without necessarily requiring the user’s awareness of the eye tracker. An example of this class of eye-tracking modality is gaze-contingent graphical rendering of complex scenes. By exploiting knowledge of the user’s gaze and the limited capacity of human peripheral vision, the system is able to focus its limited resources on the display regions projected onto the user’s fovea. This technique is particularly well suited to applications burdened with rendering complex graphical data. David Luebke and his colleagues have developed a technique for gaze-contingent rendering in which 3D graphical objects are rendered at high resolution only when the user is focusing on them directly. These view-dependent level-of-detail
techniques thus aim to degrade the spatial resolution of 3D graphical objects imperceptibly. In a related method focused on degrading temporal resolution, Carol O’Sullivan and her colleagues interactively manipulated the precision of 3D object collisions in peripheral areas so that the gap between colliding objects in the periphery was larger due to lower precision. Indirect gaze-based pointing approaches have also been developed to support computer-mediated communication systems. A prevalent problem in such systems is the lack of eye contact between participants and the lack of visual deictic (“look at this”) reference over shared media. In multiparty teleconferencing systems, these deficiencies lead to confusion over who is talking to whom and what others are talking about. Roel Vertegaal and his fellow researchers offer a solution to both problems by tracking each participant’s gaze. Their gaze-aware multiparty communication system provides still images or video of remotely located participants in a virtual teleconferencing and document-sharing system. Containing pictures or video of participants’ faces, 3D boxes rotate to depict a participant’s gaze direction, alleviating the problem of taking turns during communication. Furthermore, a gaze-directed spot of light is shown over a shared document to indicate the user’s fixated regions and thereby provide a deictic reference.
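The gaze-contingent rendering idea described earlier in this section reduces, at its simplest, to choosing a level of detail from each object's angular distance to the point of regard. The sketch below illustrates that mapping; the eccentricity bands and the pixels-per-degree conversion are invented placeholders rather than values from any of the systems cited.

import math

# Hypothetical eccentricity bands (degrees of visual angle) -> level of detail.
LOD_BANDS = [(2.0, "high"), (5.0, "medium"), (float("inf"), "low")]

def eccentricity_deg(gaze_px, object_px, pixels_per_degree=40.0):
    """Angular distance between the gaze point and an object center, both in pixels."""
    dx = object_px[0] - gaze_px[0]
    dy = object_px[1] - gaze_px[1]
    return math.hypot(dx, dy) / pixels_per_degree

def level_of_detail(gaze_px, object_px):
    ecc = eccentricity_deg(gaze_px, object_px)
    for limit, lod in LOD_BANDS:
        if ecc <= limit:
            return lod

gaze = (960, 540)                                  # point of regard on a 1920 x 1080 display
for name, center in {"teapot": (1000, 560), "column": (300, 200)}.items():
    print(name, level_of_detail(gaze, center))
# The teapot, near the fovea, renders at high detail; the column falls back to low detail.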
Future Directions While many interactive eye-tracking applications have successfully been developed, current state-ofthe-art eye trackers suffer from the requirement of calibration. In most cases, especially those requiring high levels of accuracy, the eye tracker must be calibrated to each user before the device can be operated. Even more problematic, some eye trackers suffer from calibration drift; that is, during interactive sessions the eye-tracker accuracy degrades, requiring recalibration. Eye trackers are steadily improving, and not all eye trackers require extensive calibration. Operating at a coarse resolution and utilizing facial recognition subsystems, some eye trackers provide general gaze direction with a substantially reduced calibration requirement. The current goal
of eye-tracking technology research is to eliminate the necessity for calibration altogether. An autocalibrating eye tracker, possibly based on multiple camera input, may soon be available. Anticipating improved sensing technologies, such as autocalibrating eye trackers, an emerging human-computer interaction strategy is concerned with designing attentive user interfaces, or AUIs, in which the interface is aware of the direction of the user’s (visual) attention. By tracking the user’s eyes, AUIs attempt to match the characteristics of computer displays to certain characteristics of human vision, such as the distinction in human vision between foveal and peripheral vision. Such AUIs make better use of limited resources by tailoring display content to that which is useful to human vision (e.g., a small fraction of the display at the point of regard is displayed at high resolution while peripheral regions are displayed at lower resolution, matching the resolvability of peripheral human vision while simultaneously conserving computational resources). As the technology matures, eye trackers will endure as a significant component in the design of interactive systems. Andrew T. Duchowski
FURTHER READING

Baudisch, P., DeCarlo, D., Duchowski, A. T., & Geisler, W. S. (2003, March). Focusing on the essential: Considering attention in display design. Communications of the ACM, 46(3), 60–66.
Bowman, D. A., & Hodges, L. F. (1997). An evaluation of techniques for grabbing and manipulating remote objects in immersive virtual environments. In Proceedings of the Symposium on Interactive 3D Graphics (pp. 35–38). Providence, RI: ACM.
Cournia, N., Smith, J. D., & Duchowski, A. T. (2003, April). Gaze- vs. hand-based pointing in virtual environments. In Proceedings of CHI '03 (Short Talks and Interactive Posters) (pp. 772–773). Fort Lauderdale, FL: ACM.
Duchowski, A. T. (2003). Eye tracking methodology: Theory & practice. London: Springer-Verlag.
Goldberg, J. H., & Kotval, X. P. (1999). Computer interface evaluation using eye movements: Methods and constructs. International Journal of Industrial Ergonomics, 24, 631–645.
Goldberg, J. H., Stimson, M. J., Lewenstein, M., Scott, N., & Wichansky, A. M. (2002). Eye tracking in web search tasks: Design implications. In Proceedings of Eye Tracking Research & Applications (ETRA) (pp. 51–58). New Orleans, LA: ACM.
Jacob, R. J. (1990). What you look at is what you get: Eye movement-based interaction techniques. In Proceedings of CHI '90 (pp. 11–18). Seattle, WA: ACM.
Luebke, D., Hallen, B., Newfield, D., & Watson, B. (2000). Perceptually driven simplification using gaze-directed rendering (Technical Report CS-2000-04). Charlottesville: University of Virginia.
Majaranta, P., & Raiha, K.-J. (2002). Twenty years of eye typing: Systems and design issues. In Proceedings of Eye Tracking Research & Applications (ETRA) (pp. 15–22). New Orleans, LA: ACM.
Nielsen, J. (1993, April). The next generation GUIs: Non-command user interfaces. Communications of the ACM, 36(4), 83–99.
O'Sullivan, C., Dingliana, J., & Howlett, S. (2002). Gaze-contingent algorithms for interactive graphics. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind's eye: Cognitive and applied aspects of eye movement research (pp. 555–571). Oxford, UK: Elsevier Science.
Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124(3), 372–422.
Salvucci, D. D., & Goldberg, J. H. (2000). Identifying fixations and saccades in eye-tracking protocols. In Proceedings of Eye Tracking Research & Applications (ETRA) (pp. 71–78). Palm Beach Gardens, FL: ACM.
Starker, I., & Bolt, R. A. (1990). A gaze-responsive self-disclosing display. In Proceedings of CHI '90 (pp. 3–9). Seattle, WA: ACM.
Tanriverdi, V., & Jacob, R. J. K. (2000). In Proceedings of CHI '00 (pp. 265–272). The Hague, Netherlands: ACM.
Vertegaal, R. (1999). The GAZE groupware system: Mediating joint attention in multiparty communication and collaboration. In Proceedings of CHI '99 (pp. 294–301). Pittsburgh, PA: ACM.
F

FACIAL EXPRESSIONS

The human face provides information that regulates a variety of aspects of our social life. Verbal and non-verbal communication are both aided by our perception of facial motion; visual speech effectively complements verbal speech. Facial expressions help us identify our companions and inform us of their emotional state. In short, faces are windows into the mechanisms that govern our emotional and social lives. In the effort to fully understand the subtlety and informativeness of the face and the complexity of its movements, face perception and face processing are becoming major topics of research by cognitive scientists, sociologists, and most recently, researchers in human-computer interaction (HCI), computer vision, and computer graphics. The automation of human face processing by a
computer will be a significant step towards developing an effective human-machine interface. Towards this end, it is important to understand the ways in which a system that can interpret facial gestures (analysis) and automate that interpretation and production (synthesis) might enhance human-computer interaction. Also essential is understanding how meaning is derived from the complex rigid and non-rigid motions associated with the face; this is perhaps key to the machine perception of human expression.
Psychological Importance of Facial Expressions

The face is a multi-signal, multi-message response system capable of tremendous flexibility and
specificity. It is the site for sensory inputs and the communicative outputs. In 1978 Paul Ekman, a psychologist at the University of California medical school, described how faces convey information via four general classes of signals:

■ static facial signals: permanent features of the face, like the bony structure and soft tissue masses, contributing to facial appearance;
■ slow facial signals: changes in facial appearance over time, wrinkles, texture, and so on;
■ artificial signals: exogenously determined features, such as eyeglasses and cosmetics; and
■ rapid facial signals: phasic changes in neuromuscular activity leading to visible changes in facial appearance.
All four of these classes contribute to facial recognition; however, only the rapid signals convey messages via emotions in a social context. The neuropsychology of facial expressions supports the view that facial movements express emotional states, and that the two cerebral hemispheres are differently involved in the control and interpretation of facial expressions.

Some of the initial work studying the relationship between facial expressions and emotions was undertaken in the nineteenth century by Charles Darwin and C. B. Duchenne du Boulogne, a neurophysiologist. Their work still has a strong influence on the research techniques used to examine expression perception. Until recently the majority of studies on facial expressions have continued to examine the perception of posed expressions in static photographs. Most of these studies suggest that seven universal categories of expressions can be recognized by members of all cultures, both literate and preliterate. Researchers are now beginning to study facial expressions in spontaneous and dynamic settings to avoid the potential drawbacks of using static expressions and to acquire more realistic samples. The problem, of course, is how to categorize active and spontaneous facial expressions in order to extract information about the underlying emotional states.
Representations of Facial Motion

After the seminal nineteenth-century work of Duchenne and Darwin, the most significant work
on quantitative and qualitative analysis of facial expressions was undertaken by Paul Ekman and W. Friesen, who produced a widely used system for describing visually distinguishable facial movements in 1978. This system, called the Facial Action Coding System, or FACS, is based on the enumeration of all "action units" that cause facial movements; the combination of these action units results in a large set of possible facial expressions. FACS coding is done by individuals trained to categorize facial motion based on the anatomy of facial activity—that is, how muscles singly and in combination change facial appearance. Some muscles give rise to more than one action unit, and the correspondence between action units and muscle units is approximate. A FACS coder, after undergoing extensive training, codes expressions by dissecting an expression, decomposing it into the specifics that produced the motion. This coding is done by analyzing the relationship between components of the expressions and judgments made by the coders from static photographs and more recently from videos.

In 1978 John Bassili, a psychologist at the University of Toronto, argued that because facial muscles are fixed in certain spatial arrangements, the deformations of the elastic surface of the face to which they give rise during facial expressions may be informative in the recognition of facial expressions. To verify his claim, Bassili conducted experiments by covering the faces of actors with black makeup and painting white spots at random locations. Faces were divided into upper and lower regions (to correlate with FACS data for upper and lower regions) and recognition studies were conducted. This study showed that in addition to the spatial arrangement of facial features, the movement of the surface of the face does serve as a source of information for facial recognition. This observation is significant because it suggests that the muscles and the connected skin can be used to model facial action dynamically.

Recently the video-coding community has proposed a new standard for the synthesis of facial action called Facial Animation Parameters (FAPs). FAPs provide a basic parameterization of facial movement that resembles the efforts of the earlier facial animation systems. Underlying this parameterization are implicit hooks into the FACS system. The FAP
model extends the earlier representations by defining a set of sixty-eight parameters. Additionally, these parameters are lumped into two separate levels: visemes (representing mouth posture correlated to a phoneme) and expressions (representing expressions like joy, sadness, anger, disgust, fear, and surprise with a single parameter). All the FAPs are constructed by combining lower-level facial actions and are widely used to synthesize facial motions, primarily for low-bandwidth telecommunication applications.
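The two-level idea can be sketched with a handful of made-up low-level parameters standing in for the full set of sixty-eight. The parameter names, the expression tables, and the intensity value below are invented; they only illustrate how one high-level expression parameter expands into many low-level facial actions.

# Low-level parameters: signed displacements of facial feature points (made-up units).
NEUTRAL = {"raise_l_brow": 0.0, "raise_r_brow": 0.0,
           "stretch_l_lipcorner": 0.0, "stretch_r_lipcorner": 0.0, "open_jaw": 0.0}

# High-level expressions expand into low-level displacement patterns.
EXPRESSIONS = {
    "joy":      {"stretch_l_lipcorner": 0.8, "stretch_r_lipcorner": 0.8, "open_jaw": 0.2},
    "surprise": {"raise_l_brow": 0.9, "raise_r_brow": 0.9, "open_jaw": 0.7},
}

def apply_expression(expression, intensity):
    """Scale an expression's low-level pattern by a single intensity parameter."""
    face = dict(NEUTRAL)
    for param, amount in EXPRESSIONS[expression].items():
        face[param] = amount * intensity
    return face

print(apply_expression("joy", 0.5))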
Tracking Facial Motion

To support natural human-computer interaction, there have been several attempts to track facial expressions from video sequences. Kenji Mase, while a researcher at NTT and the ATR Media Integration and Communications Research Laboratories in Kyoto, Japan, was the first to introduce an analysis of video sequences and to present a method to track action units using optical flow. The results of this 1991 approach showed the usefulness of motion estimation using optical flow for observing facial motion. A few years later, Irfan Essa and Alex Pentland at the Massachusetts Institute of Technology employed a similar approach, but added a muscle-based face model to extract finer facial movements. Around the same time, M. J. Black from Xerox PARC, working with Yaser Yacoob and Larry Davis at the University of Maryland, extended this approach to include the use of local parameterized models of image motion to track faces; their method required an affine model of the different regions of the face for tracking. In 1996 D. DeCarlo and D. Metaxas added a deformable model of the face to estimate motion and shape for tracking, which served as a constraint in the measurement of motion. All these approaches show that tracking and estimation of facial movements become more robust when information about a model of the face is combined with very detailed measurements of facial movement. They also show that the more detailed the model (resulting in higher computational cost), the better the tracking.

In 1993 Demetri Terzopoulos and Keith Waters at the University of Toronto introduced a sophisti-
method that tracked linear facial features to estimate the corresponding parameters of a 3-D wire frame face model, which made it possible to reproduce and recognize facial expressions. They used contour tracking, which required that facial features be highlighted with make-up for robust tracking. In 1996 Irfan Essa and his colleagues at the Massachusetts Institute of Technology and the Georgia Institute of Technology developed a similar method. At the cost of significantly higher computation, their approach incorporated a more detailed model of the face using finite element techniques and very fine pixel-by-pixel measurements of image motion, coupled with feature tracking to measure facial action. To aid in the tracking and analysis of facial motions, their approach combined the measurement of motion from a video with a model. Both approaches generate very good interpretations and animations of facial expressions, and researchers are currently working on other methods that rely on simple feature tracking and color/motion tracking for tracking faces.
Recognition of Facial Motion
Recognition of facial expressions can be achieved by categorizing a set of predetermined facial motions as in FACS, rather than determining the motion of each facial point independently. Black and Yacoob use local parameterized models of image motion to measure FACS and related parameters for recognition. These methods show an 86 percent overall accuracy in correctly recognizing expressions over their database of 105 expressions (which included data from live subjects and television shows). Mase, using a smaller set of data (thirty test cases), obtained an accuracy of 80 percent. Both methods rely on FACS combinations to recognize expressions. A 1999 research project by Jeffrey Cohn and his colleagues at Carnegie Mellon University reported 91, 88, and 81 percent agreement between manual FACS codes and their automated system for brow, eye, and mouth movements, respectively. Their system uses hierarchical motion estimation coupled with feature point tracking to measure facial movements. Their database contains about a hundred university students making expressions as instructed. In a 1999
paper, Marian Stewart Bartlett and her colleagues reported an accuracy of 91 percent over eighty sequences (each sequence is six frames) containing six upper-face FACS actions. They used a hybrid approach that combined spatial variation, feature tracking, and motion estimation within a neural network framework. The last two methods are, however, not recognizing expressions per se, but are comparing them to FACS codes that were validated by human experts.

In 1991 Paul Ekman suggested that the timing of expressions was an important cue in detecting the difference between true and fake facial expressions and emotions, and in the mid-1990s Essa and his colleagues proposed an approach based on a growing body of psychological research that argued that it was the dynamics of the expression, rather than detailed spatial deformations, that was important in expression recognition. They moved away from a static analysis of expressions (which is how the FACS model was developed) toward a whole-face analysis of facial dynamics in motion sequences, which could only be achieved by an automated "perception" of facial motion in image sequences within a dynamic estimation framework. Using over fifty-two video sequences of twelve subjects, they were able to achieve a recognition accuracy of 98 percent in recognizing expressions of happiness, anger, disgust, and surprise. Several new methods for extracting FAPs (as opposed to FACS action units) from video have also been introduced to support the automatic extraction of FAP values, which in turn could aid in the recognition of facial expressions. An important aspect of this new work is to aid bandwidth-limited communication with 3-D faces.
Expression and Emotions
The relationship between emotions and expressions has been well studied in the psychological literature. However, quantifiable representations that connect what can be observed and modeled to what can be inferred with knowledge from other sources (like context and familiarity with the person being interacted with) also play a major role. To this end, it is important to rely on some of the other sensors (like
audio for affect) and to develop systems that allow for extended interactions, so that a system can observe users for an extended time to "get to know" them. Significant progress has been made in building systems that can recognize expressions in a noninvasive manner. With the growth of robots and interactive environments, it is easy to predict that technology will soon be developed in which the real-time interpretation of expressions will be key.

Irfan Essa

See also Affective Computing; Gesture Recognition
FURTHER READING

Bartlett, M. S., Hager, J. C., Ekman, P., & Sejnowski, T. J. (1999). Measuring facial expressions by computer image analysis. Psychophysiology, 36(2), 253–263.
Bassili, J. (1978). Facial motion in the perception of faces and of emotional expression. Journal of Experimental Psychology, 4, 373–379.
Black, M. J., & Yacoob, Y. (1997). Recognizing facial expressions in image sequences using local parameterized models of image motion. International Journal of Computer Vision, 25(1), 23–48.
Bruce, V., & Burton, M. (1992). Processing images of faces. Norwood, NJ: Ablex Publishing.
Cohn, J., Zlochower, A., Lien, J., & Kanade, T. (1999). Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding. Psychophysiology, 36, 35–43.
Darrell, T., Essa, I., & Pentland, A. (1996). Task-specific gesture modeling using interpolated views. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(12).
Darwin, C. (1965). The expression of the emotions in man and animals. Chicago: University of Chicago Press.
DeCarlo, D., & Metaxas, D. (2000). Optical flow constraints on deformable models with applications to face tracking. International Journal of Computer Vision, 38(2), 99–127.
Duchenne, G.-B. (1990). The mechanism of human facial expression: Studies in emotion and social interaction. Cambridge University Press; Editions de la Maison des Sciences de l'Homme.
Eisert, P., & Girod, B. (1998). Analyzing facial expression for virtual conferencing. IEEE Computer Graphics & Applications, 18(5).
Ekman, P. (1978). Facial signs: Facts, fantasies and possibilities. In T. Sebeok (Ed.), Sight, sound and sense. Bloomington: Indiana University Press.
Ekman, P. (1991). Telling lies: Clues to deceit in the marketplace, politics, and marriage. New York: Norton.
Ekman, P., & Friesen, W. (1978). Facial action coding system. Palo Alto, CA: Consulting Psychologists Press.
Essa, I., Basu, S., Darrell, T., & Pentland, A. (1996). Modeling, tracking and interactive animation of faces and heads using input from video. In Proceedings of Computer Animation Conference 1996 (pp. 68–79). New York: IEEE Computer Society Press.
Essa, I., & Pentland, A. (1997). Coding, analysis, interpretation, and recognition of facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 757–763.
Mase, K. (1991). Recognition of facial expressions from optical flow. IEICE Transactions, Special Issue on Computer Vision and its Applications, E74(10).
MPEG. (1999). Overview of the MPEG-4 standard. Technical Report ISO/IEC JTC1/SC29/WG11 N2725. International Organisation for Standardization (ISO), Seoul, South Korea. Retrieved April 8, 2004, from http://drogo.cselt.stet.it/mpeg/standards/mpeg-4/mpeg-4.htm.
Parke, F., & Waters, K. (1996). Computer facial animation. Wellesley, MA: AK Peters.
Pelachaud, C., Badler, N., & Viaud, M. (1994). Final report to NSF of the standards for facial animation workshop. Philadelphia: National Science Foundation, University of Pennsylvania. Retrieved April 8, 2004, from http://www.cis.upenn.edu/hms/pelachaud/workshop_face/workshop_face.html.
Picard, R. (1997). Affective computing. Cambridge, MA: MIT Press.
Tao, H., Chen, H., Wu, W., & Huang, T. (1999). Compression of MPEG-4 facial animation parameters for transmission of talking heads. IEEE Transactions on Circuits and Systems for Video Technology, 9(2), 264.
Waters, K., & Terzopoulos, D. (1992a). The computer synthesis of expressive faces. Philosophical Transactions of the Royal Society of London, B, 335(1273), 87–93.
Waters, K., & Terzopoulos, D. (1992b). Modelling and animating faces using scanned data. The Journal of Visualization and Computer Animation, 2(4), 123–128.
Yacoob, Y., & Davis, L. (1994). Computing spatio-temporal representations of human faces. In Proceedings of the Computer Vision and Pattern Recognition Conference (pp. 70–75). New York: IEEE Computer Society.
FLY-BY-WIRE Fly-by-wire is a phrase used to describe situations in which computers are used as an indispensable mediating agent between a human operator and a medium of transportation. As the term suggests, it first saw the light of day in aircraft applications; now, however, the term is also often taken to include systems in which computers are used in the control of automobiles. The traditional means of controlling an aircraft is to connect the pilot controls to the control surfaces by means of mechanical or hydraulic linkages. By contrast, in a fly-by-wire system, the operator’s commands are wholly or partially fed into a computer, which then determines the appropriate control settings. Perhaps the first complete fly-by-wire system was built at what is now the NASA Dryden Flight Research
Center in Edwards, California, in 1972: The test platform was an F-8C Crusader aircraft. Since then, fly-by-wire has been used widely in military aircraft; it is currently also used in many civilian passenger aircraft, including most Airbus models (starting with the Airbus A320 in 1988) and the Boeing 777. In the 1990s computers also came to be embedded into the control of automobiles. Antilock braking, in which the computer derives inputs from the driver’s brake pedal and rapidly pulses brakes where necessary, is an example. Other drive-by-wire approaches are in development, including mechanisms that will allow cars to sense the distance to the car in front and slow down to keep this spacing sufficient for safety. There are several advantages to using fly-by-wire technology. First, the weight of a fly-by-wire control system is generally much less than that of traditional controls. In aircraft, this is no small advantage. Second, since computers are so much faster than humans, fly-by-wire makes quicker reactions to rapid changes in the controlled system or the operating environment possible. This is especially useful in aircraft applications where, to improve maneuverability, changes are made to the airframe (structure of the aircraft) that render it intrinsically less stable. In such cases, the control must be extremely agile to respond to the onset of instability before the instability becomes irrecoverable and the airplane crashes. Another advantage to fast reaction is the potential for reduced drag, as a result of an improved trim setting of the controls. Third, a well-designed system can reduce both the physical and mental workload of the pilot. Removing the direct mechanical-hydraulic linkage from the cockpit controls to the control surfaces reduces the physical effort required to handle them, and having a mediating system that displays information in an appropriate way reduces the mental strain on the pilot. Lives can be lost if fly-by-wire systems fail. For this reason, a great deal of effort is spent in ensuring that they are highly reliable. A commonly quoted requirement for civilian aircraft is that the system should not fail catastrophically at a rate of more than once every billion ten-hour flights. This number is so low that fault-tolerant techniques must be used to ensure that even if some components of the system
fail, there is enough reserve capacity for the system to continue operating. The system must also have the intelligence to switch the more critical part of the workload out of malfunctioning units to those that are still running. (See the discussion of fault tolerance below.)
Basics of Fly-By-Wire Operation
When a pilot applies an input to a control, this is recognized and translated into a digital signal, which is sent to the processors at the heart of the fly-by-wire system. These processors then compute the appropriate control output to be sent to the actuators that drive the control surfaces (for example, the ailerons, rudder, spoilers, and flaps). The bulk of the complexity lies in the computation of the control output: A fly-by-wire system does not merely translate the pilot commands to actuator signals; it also takes the current aircraft state and the safe-operating envelope into account as it makes its calculations. From this arises one of the major controversies in fly-by-wire control: To what extent must the fly-by-wire system protect the aircraft from potentially incorrect or harmful inputs from the pilot?

One school of thought requires that the pilot be empowered to override the system at will in an emergency, to almost "fly the wings off the airplane" if that is what is required to maintain safety and save lives. The argument in favor of this approach is that the actual envelope of safe (or at least survivable) operation is somewhat broader than the formal safety envelope. The pilot of a commercial jetliner can be expected to have, through years of training and experience, the judgment to decide what risks are worth taking in a grave emergency, one that may not have been foreseen in its entirety by the designers of the fly-by-wire system. The other school of thought would have the fly-by-wire system be the ultimate arbiter of what control inputs are safe, and modulate the pilot's commands when this is deemed appropriate. Proponents of this viewpoint may argue that the probability of the pilot's making a mistake is sufficiently large that the system should be able to winnow out commands that it believes to be dangerous. Both schools would probably agree that when the pilot's input is obviously harmful, it should be
appropriately mediated by the system; the question is where the line should be drawn. For example, one significant source of danger to aircraft is pilot-induced oscillations (PIOs). A PIO occurs when the pilot and the aircraft dynamically couple in such a way as to induce instability. For example, if the pilot applies too great an input to attempt to correct a deviation from the desired-state trajectory, the system can deviate in the other direction. The pilot might then overcorrect for this second deviation; the system can deviate some more, and so on. PIOs are a known potential cause of loss of aircraft. One important application of fly-by-wire systems is to recognize when such PIOs might occur and appropriately modulate the pilot's commands to the control surfaces.
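The command path just described can be sketched in a few lines of Python. This is purely an illustration of the idea, not any certified flight-control law; the limits, gains, and names such as clamp and compute_surface_command are assumptions introduced for the example.

```python
# Minimal sketch of an envelope-protecting fly-by-wire loop (illustrative only).
# All limits and gains are hypothetical; real control laws are far more elaborate.

PITCH_CMD_LIMIT = 15.0   # assumed maximum commanded pitch rate (deg/s)
MAX_BANK_ANGLE = 67.0    # assumed bank-angle limit of the safety envelope (deg)
MAX_ROLL_RATE = 20.0     # assumed maximum commanded roll rate (deg/s)

def clamp(value, low, high):
    """Restrict a value to the closed interval [low, high]."""
    return max(low, min(high, value))

def compute_surface_command(stick_input, aircraft_state):
    """Translate pilot stick deflections (-1..1) into normalized surface
    commands, modulating them so the aircraft stays inside the envelope."""
    requested_pitch_rate = stick_input["pitch"] * PITCH_CMD_LIMIT

    # Envelope protection: taper roll authority as bank angle nears its limit.
    bank = aircraft_state["bank_angle"]
    roll_authority = clamp(1.0 - abs(bank) / MAX_BANK_ANGLE, 0.0, 1.0)
    requested_roll_rate = stick_input["roll"] * roll_authority * MAX_ROLL_RATE

    # These normalized values would be converted to signals for the actuators.
    return {"elevator": clamp(requested_pitch_rate / PITCH_CMD_LIMIT, -1.0, 1.0),
            "aileron": clamp(requested_roll_rate / MAX_ROLL_RATE, -1.0, 1.0)}

# Full roll input near the bank limit is attenuated rather than passed through.
print(compute_surface_command({"pitch": 0.8, "roll": 1.0}, {"bank_angle": 60.0}))
```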
Response Time: The Key Parameter
The response time of a fly-by-wire system is key to the quality of control provided by the system. Fly-by-wire systems may fail not only by providing the wrong output (or no output at all), but also by providing the correct output too late. The tasks that run on the system must therefore be scheduled in such a way that the real-time deadlines are met. Task-scheduling algorithms are a very active area of research in real-time computing. Typically, one assigns tasks to processors and then the operating system of each processor executes a scheduling algorithm to schedule them (that is, to decide what task is to run at what time). One example of a scheduling algorithm is the "earliest deadline first" policy, in which, as the term implies, the task with the earliest deadline is picked to run. The designer must ensure that there are enough computational resources to meet all task deadlines: To do this requires a careful analysis that takes into account how long execution of each task will take in a worst-case scenario, which tasks need to be run and at what rate, the efficiency of the scheduling algorithm, and the capabilities of the individual processor.
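The earliest-deadline-first policy can be illustrated with a short, non-preemptive sketch. The task set, release times, and deadlines below are invented for the example; a real real-time operating system would also handle preemption, periodic releases, and worst-case execution-time analysis.

```python
import heapq

# Illustrative task set: (release_time, deadline, execution_time, name).
tasks = [
    (0, 10, 3, "read sensors"),
    (0, 4, 2, "update control law"),
    (1, 6, 1, "drive actuators"),
]

def edf_schedule(tasks):
    """Run tasks to completion, always dispatching the ready task with the
    earliest absolute deadline (non-preemptive for brevity)."""
    time, schedule = 0, []
    pending = sorted(tasks)        # ordered by release time
    ready = []                     # heap ordered by deadline
    while pending or ready:
        while pending and pending[0][0] <= time:
            _, deadline, exec_time, name = pending.pop(0)
            heapq.heappush(ready, (deadline, exec_time, name))
        if not ready:              # nothing released yet: idle until next release
            time = pending[0][0]
            continue
        deadline, exec_time, name = heapq.heappop(ready)
        start, time = time, time + exec_time
        schedule.append((name, start, time, "OK" if time <= deadline else "MISSED"))
    return schedule

for entry in edf_schedule(tasks):
    print(entry)
```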
Fault Tolerance
Since lives depend on the successful operation of fly-by-wire systems, the systems must exhibit extraordinarily
low failure rates. We have already mentioned the effective failure rate of one in ten billion per flying hour. These failure rates are far lower than those of individual processors. For this reason, it is important that fly-by-wire systems be fault-tolerant. Fault tolerance means that the system can tolerate a certain number of faults and still function acceptably. This requires the presence of redundancy, that is, extra capacity in the system that can be utilized whenever components fail. Redundancy can be classified into four broad categories: hardware, software, time, and information.

Hardware Redundancy
Hardware redundancy is the presence of additional hardware and the associated controls required to manage it. Broadly speaking, there are two ways in which faults can be recovered through hardware redundancy: forward and backward. In forward recovery, the system is able to mask the effects of failure so that no time is lost. Triple-modular redundancy (TMR), in which there are three processors and a voting circuit, is an example of hardware redundancy that makes forward recovery possible. The three processors execute the same code, and their output is voted on. The majority result of the vote—that is, the output produced by the majority of processors—is the output of the TMR module. It is easy to see that if one of the three processors were to fail, the two remaining processors would constitute the majority and would mask the incorrect output of the faulty processor. Another example of hardware redundancy is the use of multiple actuators for the same control surface: If one actuator fails and pushes in the wrong direction, the remaining actuators should have enough physical capacity to overwhelm it and manage to configure the surface correctly.

In backward recovery, the system recognizes a faulty output and then reruns the program on a functioning processor. The faulty output is typically recognized by means of an acceptance test. An acceptance test alerts the system to a fault when it detects output outside an acceptable range. For example, if the pressure sensor in a submersible reports that the pressure at a depth of 200 meters is 1 atmosphere, the acceptance test would flag an error.
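A TMR majority vote and a crude acceptance test can be sketched as follows. This is only an illustration of the two ideas; the depth-pressure rule of thumb and the function names are assumptions made for the example.

```python
from collections import Counter

def tmr_vote(outputs):
    """Return the majority value from three redundant computations,
    masking a single faulty unit; raise an error if no majority exists."""
    value, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one unit disagrees")
    return value

def acceptance_test(depth_m, pressure_atm):
    """Rough range check: seawater adds roughly 1 atm per 10 m of depth,
    so flag readings that fall far outside that expectation."""
    expected = 1.0 + depth_m / 10.0
    return abs(pressure_atm - expected) < 0.5 * expected

print(tmr_vote([42, 42, 41]))      # -> 42; the single faulty output is outvoted
print(acceptance_test(200, 1.0))   # -> False, as in the submersible example
```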
Acceptance tests are not perfect. There is always the chance that an erroneous output will fall within the acceptable range of output or that an unusual set of circumstances will cause a correct output to fall outside what is designated as the acceptable range. If the acceptable range is too narrow, the number of correct outputs falsely tagged as incorrect tends to increase; if it is too wide, a larger number of faulty outputs will slip through. Devising an acceptable compromise is a difficult and important problem.

The designer of hardware redundancy must ensure that correlated failures are kept to a minimum. For example, the control lines that connect the pilot inputs to the elevator and rudder controls must be spaced as far apart as possible, to make it less likely that a single event, such as the collapse of a portion of the floor, severs all of them. Also, the design must be such that faults are prevented from spreading. For example, a short circuit in one part of the system should not trigger a chain reaction that burns out a substantial number of units. Likewise, erroneous data must be kept from spreading. Fault- and error-containment zones are generally established to suitably isolate each subset of hardware from the others.

Software Redundancy
Software redundancy is crucial because software in high-performance systems tends to be far more complex than hardware, and software faults can pose a considerable risk to correct functioning. Modern software-engineering techniques go a long way toward reducing the number of faults per thousand lines of code; however, this is still not sufficiently low to meet the stringent requirements of fly-by-wire, and redundancy techniques are often used. Software redundancy consists of using multiple versions of software to perform the same function and then voting on the results. One might, for example, have three independent teams of software developers write an engine controller module, run each version on a different member of a TMR cluster, and vote on the results. The hope is that the number of coincident failures (i.e., multiple software versions failing on the same input) is small enough for the system to be sufficiently reliable.
Designers go to great lengths to reduce the chances of coincident failure. The teams developing the various versions are generally kept isolated from one another to avoid mistakes being copied from one team to another. Different programming languages may be used to reduce the chances of common errors being caused by a commonality of language. They may also attempt to use different algorithms to do the same function, to reduce the chance of numerical instability striking multiple versions for the same input.

Time and Information Redundancy
Time redundancy means having enough time to rerun failed computations. The vast majority of hardware failures are transient: They go away after some time. A simple but powerful approach is to wait for a while and then retry the same computation on the same processor. If the failure is transient and the waiting time has been long enough for the transient effect to die away, this approach will work. Time redundancy is obviously also necessary to effect backward recovery.

Information redundancy relies on codes that detect and correct errors; it is useful primarily in dealing with faults in memory or in communication. Memory is often subject to transient upsets. For example, when energetic charged particles (like alpha particles) pass through a memory cell, they can affect its contents. While the charged particles do not usually cause permanent harm to the physical structure, the content has suffered a spurious change. Communication is also subject to error because of noisy channels. Noisy channels pose the greatest problem in the case of wireless communication, less of a problem when electric cables are used, and the least problem when optical fibers are used. Codes render data more resilient by setting up internal correlations that can be exploited to detect (and possibly correct) erroneous bits.
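A time-redundant retry guarded by an acceptance test can be written in a few lines. The retry count, delay, and sensor range below are invented for the illustration; they are not taken from any particular system.

```python
import time

def run_with_time_redundancy(computation, acceptance_test, retries=3, delay_s=0.01):
    """Retry a computation whose transient failures are expected to clear
    after a short wait; both the delay and the retry count are illustrative."""
    for _ in range(retries):
        result = computation()
        if acceptance_test(result):
            return result
        time.sleep(delay_s)   # wait for the transient effect to die away
    raise RuntimeError("computation failed the acceptance test on every retry")

# Toy usage: a 'sensor' that glitches once, then returns a plausible value.
readings = iter([9999.0, 101.3])
result = run_with_time_redundancy(
    computation=lambda: next(readings),
    acceptance_test=lambda r: 0.0 <= r <= 2000.0)
print(result)   # -> 101.3
```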
Example: Boeing 777 Fly-By-Wire System
The Boeing 777 fly-by-wire system makes extensive use of redundancy to ensure high reliability. The
pilot controls are received in analog form by four actuator control electronics (ACE) units, converted to digital form, and sent to the primary flight computers (PFC). The PFC complex (described in more detail below), which also receives sensor data on altitude, temperature, speed, and so forth, executes the flight control algorithms and sends its outputs back to the ACE units. These units then convert this digital output to analogue outputs to specify the control surface settings. The PFC complex has three PFC channels: left, center, and right. Each channel consists of three lanes, which are full-fledged computational systems, built around the AMD 29050, Motorola 68040, and the Intel 80486 processors, respectively. Thus, each channel uses a diversity of hardware to reduce the chances of coincident failures. By contrast, software diversity is not used: The same code (actually, with slight modifications to suit the three different processors), in the Ada programming language, runs on each of the three channels. Communication is over three bidirectional buses of about 2-megabits-per-second capacity. Each PFC channel can transmit on exactly one of these buses: These buses are labeled left, center, and right, indicating which channel is entitled to transmit on that bus. Every lane on every channel monitors each of the buses. Similarly, each ACE unit can broadcast on exactly one bus: two are permitted to use the left bus, and one each on the center and right buses. The three lanes of each channel can be in one of three states: command, standby, or monitor. Only the command lane of each channel is permitted to transmit on the bus. To reduce the chance of coincident failures, the system tries to ensure that the command lanes of each channel use a different processor. For example, the command lanes may be chosen such that the Intel lane of the first channel, the Motorola lane of the second channel, and the AMD lane of the third channel are the designated command lanes. Of course, failures among the lanes may make it impossible to ensure such hardware diversity among the command lanes. If the command lane processor is detected (by the other lanes in its channel) to have failed four times, it is replaced by another lane from its channel.
In normal functioning, therefore, each of the three buses carries the output of its corresponding channel. The input values used by each channel are obtained by selecting the middle value of the copies read on the buses. Each ACE has its own set of actuators to which it relays the commands received from the PFCs. For the sake of fault-tolerance, an individual control surface may be driven by more than one actuator.
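Mid-value selection of the three redundant bus copies can be expressed very compactly. The sketch below is a generic illustration of the technique and is not drawn from the actual Boeing implementation.

```python
def mid_value_select(left, center, right):
    """Return the middle of three redundant copies of a signal.
    A single wildly wrong channel can never be the selected value."""
    return sorted([left, center, right])[1]

# A faulty right channel reporting an absurd command is simply ignored.
print(mid_value_select(2.1, 2.0, 250.0))   # -> 2.1
```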
Analysis
The increasing capability and decreasing cost of modern processors have made fly-by-wire systems more practical. Their original use was in military aircraft; however, starting in the late 1980s, fly-by-wire has been used in civilian aircraft as well. Sophisticated fault-tolerant algorithms have to be used to meet the stringent reliability requirements, and suitable task-assignment and scheduling algorithms are used to ensure that critical tasks meet their deadlines.

The very high reliability requirements of fly-by-wire systems pose a challenge to the process of estimating their reliability and certifying them as fit to fly. Although many reliability models for software exist in the literature, none of them is completely convincing, and much more research needs to be done in this area.

Fly-by-wire can provide control capability far superior to traditional control techniques. One intriguing application, in the wake of the 2001 terrorist incidents involving aircraft, is using fly-by-wire to implement no-fly zones around important buildings. These zones can be made part of the specified flight constraints, and the system programmed to ensure that pilot commands to breach these zones are countermanded, while still retaining flight safety.

C. M. Krishna
FURTHER READING

Aidemark, J., Vinter, J., Folkesson, P., & Karlsson, J. (2002). Experimental evaluation of a time-redundant execution for a brake-by-wire application. Proceedings of the International Conference
on Dependable Systems and Networks (DSN02) (pp. 210–218). Cupertino, CA: IEEE CS Press. Briere, D., & Traverse, P. (1993). AIRBUS A320/A330/A340 electrical flight controls: A family of fault-tolerant systems. Proceedings of the Fault-Tolerant Computing Symposium (FTCS-23) (pp. 616–623). Cupertino, CA: IEEE CS Press. Carter, J., & Stephenson, M. (1999). Initial flight test of the production support flight control computers at NASA Dryden Flight Research Center (NASA Technical Memorandum TM-1999-206581). Washington, DC: NASA. Cataldo, A., Liu, X., & Chen, Z. (2002). Soft walls: Modifying flight control systems to limit the flight space of commercial aircraft. Retrieved July 18, 2003, from http://buffy.eecs.berkeley.edu/Research Summary/03abstracts/acataldo.1.html deGroot, A., Hooman, J., Kordon, F., Paviot-Adet, E., Iounier, Lemoine, M., et al. (2001). A survey: Applying formal methods to a softwareintensive system. In 6th IEEE International Symposium on High Assurance Systems Engineering (HASE ’01) (pp. 55–64). Cupertino, CA: IEEE CS Press. Droste, C. S., & Walker, J. E. (2001). The general dynamics case study on the F-16 fly-by-wire flight control system. Reston, VA: American Institute of Aeronautics and Astronautics. Fielding, C. (2001). The design of fly-by-wire flight control systems. Retrieved July 18, 2003, from http://www.shef.ac.uk/acse/ukacc/ activities/flybywire.pdf Johnson, B. (1989). The design and analysis of fault-tolerant digital systems. New York: Addison-Wesley. Knight, J. C. (2002). Safety-critical systems: Challenges and directions. In 24th International Conference on Software Engineering (ICSE ’02) (pp. 547–550). Cupertino, CA: IEEE CS Press. Krishna, C. M., & Shin, K. G. (1997). Real-time systems. New York: McGraw-Hill. Leveson, N. (1995). Safeware: System safety in the computer age. New York: Addison-Wesley. Littlewood, D., Popov, P., & Strigini, L. (2001). Modeling software design diversity: A review. ACM Computing Surveys, 33(2), 177– 208. Perrow, C. (1999). Normal accidents. Princeton, NJ: Princeton University Press. Peterson, I. (1996). Fatal defect: Chasing killer computer bugs. New York: Vintage Books. Riter, R. (1995). Modeling and testing a critical fault-tolerant multiprocess system. In Proceedings of the Fault-Tolerant Computing Symposium (pp. 516–521). Cupertino, CA: IEEE CS Press. Schmitt, V., Morris, J. W., & Jenney, G. (1998). Fly-by-wire: A historical and design perspective. Warrendale, PA: Society of Automotive Engineers. Storey, N. (1996). Safety-critical computer systems. New York: Addison-Wesley. Thomas, M., & Ormsby, B. (1994). On the design of side-stick controllers in fly-by-wire aircraft. ACM SIGAPP Applied Computing Review, 2(1), 15–20. Voas, J. (1999). A world without risks: Let me out! In 4th IEEE International Symposium on High Assurance Systems Engineering (HASE ’99) (p. 274). Cupertino, CA: IEEE CS Press. Yeh, Y.C. (1998). Design considerations in Boeing 777 fly-bywire computers. In 3rd IEEE High Assurance Systems Engineering Conference (HASE) (pp. 64–73). Cupertino, CA: IEEE CS Press.
FONTS
Font terminology derives from letterpress printing. A font comprises the elements of printed language (the letters, numbers, punctuation marks, and other items), called a character set. The traditional use of the term font changed with digital technology. Font once meant a character set at one size, in one style, of one typeface, within one family. "Helvetica bold italic 12," for example, is the twelve-point font of the bold style of the italic typeface from the Helvetica family. Font in today's common use generalizes the more specific terms typeface (a character set's intrinsic design and visual properties), type style (a typeface variation), and typeface family (font character sets, components, and variations that comprise a group through shared identifying characteristics). This article uses font in the general common usage.
Historical Background
Language is the unified and systematic means of communication by standardized sounds and signs that function through convention. For European culture and its offshoots in the New World, this means alphabetic sign groups, written and read horizontally from left to right. History identifies a three-stage development toward the alphabet. At first there are forms for objects, called pictograms; then there are forms for ideas, called ideograms; finally there come to be forms for sounds, called phonograms. These signs and symbols evolved from nonstandard forms without an organizational system to standard forms and systems and thence to the alphabet.

The visual form of Western European written languages derives from the Roman monumental capitals, or majuscules, that date from about 100 CE, and the Carolingian minuscules that date from about 800 CE. These writing styles evolved and varied widely from about 1350 to 1500, when Renaissance scribes formalized them as the humanistic minuscule, or roman style, and the humanistic cursive, or italic, style.
Johannes Gutenberg (c. 1395–1468) invented the first reliable means of casting individual, moveable, and reusable metal printing type. The first printers intended printed books to rival scribes’ manuscripts, and so the printed roman type replicated the humanistic minuscule and the humanistic cursive writing styles. Printed books replaced manuscripts, and designed fonts replaced pen-based replications. Fonts that evolved in Italy, France, and England from about 1470 to about 1815 weathered critical evaluation over time and remain standards. Fonts designed after about 1815 generally were extensions, revivals, or variations of these earlier models.
Style Properties
Style properties of fonts are width (the character width excluding serifs, serifs being the horizontal terminations of vertical strokes), weight (the vertical stroke thickness), contrast (the horizontal stroke thickness compared to the weight), serif form (bracketed, unbracketed, or none), x-height (the height of the small letters), and curve axis (either inclined or vertical). Table 1 gives the style properties of four common fonts.

Font size is the body height measured in points and is not the letter size. In digital type, one point is one seventy-second of an inch. A metal type piece is an elongated rectangle. The body end shows the character in high relief for making the stamp-like ink transfer onto paper. Most languages require a uniform character line, and so require a uniform height system. Type characters and type bodies vary in width but not in height, and do not take up the entire body surface. Letter size varies among fonts in the same point size. Garamond, for example, has a smaller x-height than Helvetica, and so appears smaller at the same font size. Digital font size retains the definition that font size is body size, although the font body—also called the bounding box—is no longer physical. Normative text sizes range from eight points to twelve points, and headings usually start at fourteen points but can be larger. Most current software can set font sizes in point decimals, enabling fine adjustments.
TABLE 1. Style Properties in Approximate Percentages for the Respective Letter

Property      Typeface 1     Typeface 2     Typeface 3          Typeface 4
width         78.0% h        83.5% h        80.0% h             76.5% h
weight        19.0% h        12.5% h        13.5% h             15.3% h
contrast      18.5% v        48.0% v        88.9% v             42.5% v
serif form    unbracketed    bracketed      none (sans serif)   bracketed
x height      59.3% h        60.0% h        73.0% h             68.0% h
curve axis    vertical       rotated left   vertical            rotated left

h = capital letter height; v = vertical stroke thickness (weight)
(In the printed table, each column is headed by a specimen letter from the typeface it describes.)
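Because one digital point is one seventy-second of an inch, as noted above, converting a font size in points to physical or pixel dimensions is simple arithmetic. The short sketch below assumes a display resolution given in dots per inch; the 96-dpi figure is only an example value, not a universal standard.

```python
POINTS_PER_INCH = 72.0   # definition of the digital point

def points_to_inches(points):
    """Convert a body size in points to inches."""
    return points / POINTS_PER_INCH

def points_to_pixels(points, dpi=96.0):
    """Convert a body size in points to device pixels at a given resolution."""
    return points * dpi / POINTS_PER_INCH

print(points_to_inches(12))          # -> 0.1666...  (a 12-point body)
print(points_to_pixels(12, dpi=96))  # -> 16.0 pixels on an assumed 96-dpi display
```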
Font Design
Knowledge of font style evolution and the visual properties of the written and printed archetypes provides a formal understanding of font design. A sense of font design aesthetics differs with the intellect and sensibility of the designer, and the design and visual challenges of creating a new font are the same as they always were. Only technological factors have changed.

Drawing is the traditional approach to visual investigation and the design process, and font design often begins with drawings on paper. Design drawings resolve intercharacter and intracharacter visual problems. While no single character, or glyph, is perfect, the designer strives for balanced compromise that yields an aesthetic and functional whole. Designers manually mark final outline drawings for anchor points and spacing side bearings. Technicians then plot anchor points and side bearings into a font-specific application designed to compile digital font formats and descriptor files. Testing and editing follow and continue until the font is approved for release and distribution.
Digital Font Formats
Originally, digital typesetting required markup language to control expensive raster image processors (RIP) and photo-based output equipment. In
the mid-1980s, Adobe Systems, Apple Computer, and Monotype Corporation collaborated to offer PostScript fonts on the LaserWriter printer. When combined with the Macintosh WYSIWYG ("what you see is what you get") display and Aldus PageMaker, which was the first desktop publishing program, digital graphics and typesetting became widely accessible. Though PostScript fonts, also known as Type 1 fonts, are widespread, OpenType and Unicode are emerging industry standards. OpenType is a one-file format developed in the mid-1990s jointly by Adobe and Microsoft. It has two main advantages: cross-platform compatibility, and extended Latin and non-Latin language application. OpenType combines existing formats with additional extensions compatible with operating systems for Mac, Windows, and Unix (with FreeType, an open-source font engine). OpenType also replaces ISO-Latin encoding with Unicode encoding (the mapping of character codes to glyphs). (Unicode is an international encoding standard that assigns a unique code number to each character. A glyph is a visual representation of a character.) Unicode enables OpenType to have multiple and multilingual character sets, some with more than 65,000 characters, and OpenType is compatible with PostScript Type 1 fonts.
A Personal Story—Our Most Memorable “Nightmare” Many participants in a digital workflow experience unforeseen problems, and most clients, writers, editors, designers, developers, and publishers have a most memorable project story—their worst nightmare. Here is an anecdote to exemplify such a situation concerning fonts. Several years ago, a client in Asia contracted us to design an English-language publication produced in Europe. We exchanged the files by e-mail without incident between the client in Asia and our studio in America. We were careful about cross-platform issues such as: Mac to PC conversion, different software packages and versions, even different font publishers and versions. Throughout the project stages, we had no content or file problems, and the client approved the project for production in Europe. After several weeks we got a call from the frantic client that the project was on hold due to massive typographic errors! Text letters had changed to different letters, letters had transposed, and so on. We could not imagine how such a situation could happen. However, it was true. Everyone set aside finding fault, and worked together for a solution. In review, we determined that the project began on a PC for writing and editing, changed to a Mac for design, changed back to PC for client approval, and then was e-mailed to a European service provider who probably ran a language translation program on the project. Still, cross-platform projects can work. While we do not know the true cause of the problem, we suspect changes in text encoding caused the changes in letter assignments. We learned that human-to-human interaction together with human-computer interaction is good business. We learned to plan for technical problems, and that early and frequent communication with everyone throughout the workflow does reduce risk and increases resolution when problems do happen. Thomas Detrie and Arnold Holland
Other font formats serve specific computing platforms, purposes, and devices. These font formats include ClearType, which improves font legibility on liquid crystal display (LCD) screens; GX fonts, which improve font handling in interactive graphics; and TrueType fonts, which contain complete font information in one file.

PostScript Fonts
In 1984 Adobe Systems introduced the programming language PostScript to define shapes in outline using Bezier curves (curves defined by two anchoring end points and two middle points that can be moved to change the shape of the curve). PostScript fonts have no specific size. A PostScript output device renders the characters in specific sizes as designated. Because mathematical terms specify outline fonts, they require less memory than visual bitmap-font data. Outline fonts retain smooth contours when slanted, rotated, or scaled to any size.
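A cubic Bezier segment of the kind used in outline fonts can be evaluated with a few lines of code. This is a generic illustration of the underlying mathematics, not the PostScript language itself, and the sample control points are arbitrary.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1].
    p0 and p3 are the anchoring end points; p1 and p2 are the middle
    (control) points that shape the curve."""
    u = 1.0 - t
    x = (u**3 * p0[0] + 3 * u**2 * t * p1[0] +
         3 * u * t**2 * p2[0] + t**3 * p3[0])
    y = (u**3 * p0[1] + 3 * u**2 * t * p1[1] +
         3 * u * t**2 * p2[1] + t**3 * p3[1])
    return (x, y)

# Sample a segment that might describe part of a letterform's bowl;
# the control points below are arbitrary values chosen for the example.
points = ((0, 0), (10, 40), (60, 40), (70, 0))
outline = [cubic_bezier(*points, t / 10.0) for t in range(11)]
print(outline[0], outline[5], outline[10])
```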
Two main components comprise PostScript fonts: a screen font (bitmap file) and a PostScript printer font (outline file). The two files are necessary because the technology used for video screen display is different from the technology used for a printed reproduction. The screen font is a low-resolution screen pixel, or picture element, representation of the printer font. Despite a technique that uses a grayscale to make the letter edges appear smooth, the limited digital information of a screen font prevents high-resolution output. The screen-font file contains screen font sizes and styles. Standard sizes are nine point, ten point, twelve point, fourteen point, eighteen point, and twenty-four point. Standard styles are roman, italic, bold, and bold italic.

Multiple-Master Fonts
Artisans adjusted early fonts to be optimal for each point size. Giambattista Bodoni (1740–1815) displays 144 cuts of his roman type in the first volume of Manuale Tipografico. Creating optimal designs for each font declined early for economic
reasons, despite the development of machines like Benton’s pantograph punch cutter (1885), which made it easier to cut master patterns. Standard digital fonts have traditional style variation—the attributes of weight and width. In 1989 Adobe Systems introduced multiple-master technology, which allowed for the manipulation of fonts along multiple axes (in this case the term axis represents qualities such as the weight, width, and optical size of a character). The master designs determine the dynamic range for each axis in a font, and PostScript technology enables interpolation between the master designs. The dynamic range of a multiplemaster font with two axes, weight and width, covers permutations from light condensed through bold expanded. Multiple-master fonts with an optical-size axis can improve legibility in smaller-sized fonts by, for example, opening closed letter parts, reducing contrast, strengthening serifs, and increasing width. For larger-sized fonts, the optical-size axis can add refinements such as increased contrast, thinner serifs, and decreased width. TrueType Fonts TrueType is outline font technology that combines the screen font and the printer font into one file. TrueType fonts have no specific sizes and can have benefits in a mixed-platform environment. However, TrueType fonts seldom print well on PostScript output equipment. TrueType technology is different from PostScript technology, so a PostScript printer must interpret a TrueType font or substitute a PostScript font. This often increases raster image processor time. Because of their incompatibility, TrueType and PostScript fonts are best not used in the same document. Some fonts may print as bitmaps or not at all. Even when a PostScript font and a TrueType font have the same name, their metrics (the size and spacing limits) are different. This can create type ID conflicts, and if the font does print, it may reflow and cause different line breaks.
Font Management Font management reduces font problems to improve computer performance, usability, and project work-
flow. Common font problems are font organization, font omission, and pseudo fonts. A font filing system makes it easier to collect and send the fonts associated with a document, or to reconstruct a document if problems occur. A font filing system also helps prevent errors caused by using the wrong font, mixing font formats, or mixing font publishers. The Mac operating system (OS) and Microsoft’s Windows operating system handle fonts. On machines using a UNIX operating system, however, fonts are part of a Windows application installed widely on UNIX machines. Many fonts available in Windows are not available for UNIX, so cross-platform documents need font formats accessible to both systems. Generally, it is good to keep system folder fonts to a minimum. Each font added to your system is active and takes up random-access memory (RAM), the short-term memory the computer uses to store information in process. (This is as opposed to using storage memory, called read-only memory, or ROM). When there are many fonts using up RAM, there is less RAM available for applications. Many digital font publishers supply standard fonts. However, standard fonts from different publishers are not identical. It is best to avoid mixing publishers within the same font family. Computers do not distinguish fonts with the same name from different publishers, or always substitute one publisher for another. This can alter document appearance and increase processing times. Therefore, computer users are advised to create separate font folders and to file each font by name. Then one can have subfolders for different publishers’ versions of the same font. (For example, in a folder for Garamond typefaces, one might have subfolders labeled Adobe Garamond and Agfa Garamond for versions of that font from those two publishers.) Missing fonts, missing font components, and pseudo fonts are common font problems. Consider, for example, the calligraphy font Apple Chancery. Users can use the toolbar style buttons B (bold) and I (italic) to create Apple Chancery Bold Italic. However, Apple Chancery Bold Italic is only a screen representation. Apple Chancery Bold Italic does not exist as a printer font. A non-PostScript printer may
268 ❘❙❚ BERKSHIRE ENCYCLOPEDIA OF HUMAN-COMPUTER INTERACTION
emulate the font for printing but a PostScript output device will substitute a default font or not print the font. Only active PostScript fonts will print on PostScript equipment. Font utility programs add convenience and productivity to font tasks and can help reduce font management problems. In addition to handling the problems mentioned above, font utility programs can create bitmapped fonts from PostScript outline fonts for accurate display. They can interpolate missing font sizes and help improve fonts printed on non-PostScript output devices. Font utility programs can list the font style variations together on a menu. Normally, the application font menu displays active fonts alphabetically by attribute, not alphabetically by name. This increases the tendency to make style variations with the font style buttons located on the application tool bar. A font family list helps avoid pseudo fonts, and makes it easier to select active fonts. Font utility programs also enable easier font activation or deactivation, as well as the designation of font sets. Font sets provide job-specific font lists. When a supplier requests the font list for a document, that list is the same as the font set. Although we have discussed fonts from a technical angle and, for the purposes of this encyclopedia, focused on digital fonts, fonts can also be discussed from an aesthetic, cultural, or linguistic perspective. Typography, a subject closely connected to fonts, places heavy emphasis on readability issues, usually in the context of traditional printing technology. Font work associated with computer printing and display focuses mainly on legibility issues in the generation of new letter designs. Thomas Detrie and Arnold Holland See also Laser Printer; Unicode
FURTHER READING

Adobe Systems. (1999). PostScript language reference (3rd ed.). Reading, MA: Addison-Wesley.
Baecker, R. M., & Marcus, A. (1990). Human factors and typography for more readable programs. Reading, MA: Addison-Wesley. Berry, W. T., & Poole H. E. (1966). Annals of printing. London: Blandford Press. Bigmore, E. C., & Wyman, C. W. H., (Eds.). (1978). A bibliography of printing. New Castle, DE: Oak Knoll Books. Bringhurst, R. (1992). The elements of typographic style. Vancouver, Canada: Hartley and Maerks. Carter, R., Day, B., & Meggs, P. (2002). Typographic design: Form and communication. Hoboken, NJ: John Wiley & Sons. Müller-Brockman, J. (1985). Grid systems in graphic design. New York: Hastings House. Dowding, G. (1961). An introduction to the history of printing types. London: Wace. Frutiger, A. (1980). Type, sign, symbol. Zürich, Switzerland: ABC Verlag. Frutiger, A. (1989). Signs and symbols: Their design and meaning. New York: Van Nostrand Reinhold. Gerstner, K. (1974). Compendium for literates: A system of writing. Cambridge, MA: MIT Press. Jaspert, W., Pincus, B., Turner, W., & Johnson, A. F. (1970). The encyclopaedia of type faces. New York, NY: Barnes & Noble. Johnson, A. F. (1966). Type designs: Their history and development. London: Deutsch. Karow, P. (1994). Digital typefaces: Description and formats. New York: Springer-Verlag. Karow, P. (1994). Font technology: Methods and tools. New York: Springer-Verlag. Lawson, A. S. (1971). Printing types: An introduction. Boston: Beacon Press. McLean, R. (1980). The Thames & Hudson manual of typography. New York: Thames & Hudson. McGrew, M. (1993). American metal typefaces of the twentieth century. New Castle, DE: Oak Knoll Books. Morison, S. (1999). A tally of types. Boston: David R. Godine. Moxon, J. (1958). Mechanick exercises on the whole art of printing, 1683–84. London: Oxford University Press. Muir, P. H., & Carter, J. (Eds.). (1983). Printing and the mind of man. Munich, Germany: Karl Pressler. Prestianni, J. (Ed.). (2002). Calligraphic type design in the digital age: An exhibition in honor of the contributions of Hermann and Gudrun Zapf. Corte Madera, CA: Gingko Press. Prust, Z. A. (1997). Graphic communications: The printed image. Tinley Park, IL: Goodheart-Willcox. Ruegg, R., & Frölich, G. (1972). Basic typography. Zürich, Switzerland: ABC Verlag. Ruder, E. (1981). Typographie: A manual of design. New York: Hastings House. Shneiderman, B. (1998). Designing the user interface: Strategies for effective human-computer interaction. Reading, MA: AddisonWesley. Steinberg, S. H. (1996). Five hundred years of printing. New Castle, DE: Oak Knoll Books. Sutton, J., & Bartram, A. (1968). An atlas of typeforms. New York: Hastings House. Updike, D. B. (1980). Printing types: Their history, forms, and use. New York: Dover.
GAMES
Games are activities that are designed to entertain players and that are governed by rules. Games are competitive, usually pitting players against each other or against tasks set by the rules. Typically the rules define ways to win and lose. The elements common to all games are:

■ Competition: Games involve competition between two or more players or at least a challenge to a single player.
■ Rules: Games must have rules; after all, a game is its rules.
■ Goals: Players must have a drive to play a game, be it for a high score, personal gain, physical survival, exploration, or just entertainment.
■ Purpose: Games must have a point, no matter how banal. The point is to some extent a goal in itself.
Modern types of games are computer or video games. They are a type of interactive entertainment in which the player controls electronically generated images that appear on a video display screen. Such games include video games played in the home on special machines or home computers and those played in arcades. A computer game is not always a video game or vice versa. The usual distinction today is subtle: a game is a "computer game" if it is played on a general-purpose computer; a game is a "video game" if it is played on a computer that is specialized for game play. Computer games feature a large collection of direct controls exploiting the full computer keyboard,
whereas video games tend to use more layers of menus or activity sequences via a game controller. One difference between computer games and video games arises from the fact that computers have high-resolution monitors, optimized for one person watching at close range, whereas video game consoles use a much lower-resolution commercial television as their output device, optimized for watching at a greater distance by more than one person. As a result, most computer games are intended for single-player or networked multiplayer play, whereas many video games are intended for local multiplayer play, with all players viewing the same TV set.
Games History
To understand the concept of games and its relation to computer technologies, it helps to know how games have shaped the games industry so far. In 1951 Ralph Baer, senior engineer at Loral Electronics, suggested creating an interactive game that people could play on their television sets. His idea was not developed at the time, but it is the first example of anyone considering new technology as a medium for playing games. In fact, ten years passed before the first real computer game was created. In 1961 students at the Massachusetts Institute of Technology (MIT) designed and programmed the game Spacewar. Spacewar ran on the university's programmed data processor (PDP-1), a computer that took up the floor space of a small house. The trackball was invented to control the game. For a further decade Spacewar was the limit of video gaming, and therefore video gaming was available to only a select few people with access to large-scale computing resources such as the PDP series machines.

In 1971 Nutting Associates released Computer Space, a game based on Spacewar and developed by Nolan Bushnell. It was the first arcade video game, but it was not very popular, and only fifteen hundred units were sold. Bushnell attributed this failure to the game being too complicated, noting that people weren't willing to read instructions (Winter 2004). He decided to go into business for himself, calling his new company "Atari." In 1972, only a year after Computer Space, Atari released Pong, Bushnell's second arcade video game.
That year also brought the release of the first home games console, the Magnavox Odyssey, developed by Ralph Baer based on the idea he’d had twenty-one years previously at Loral Electronics. The Odyssey connected to a television set and came with twelve games and two hand controls. More than 100,000 units were sold during its first year, but its success was short lived. In 1974 Atari released Home Pong. It used a new technology, the microchip. Atari’s approach meant that a single chip could be used to perform all the operations required to play the game; this game became known as “Pong on a chip.” The Odyssey used separate discrete circuits for each operation (collision detection, on-screen scoring, etc.), which meant that it was much more expensive to produce. The Odyssey is the first example of a killer technology (an application for the technology so innovative and fascinating that a large number of people would be compelled to buy it), but it was soon priced out of the market as a succession of games using large-scale integrated (LSI) chips similar to Atari’s was released by numerous competing companies. The number of Pong-type games on the market continued to increase until the arrival of the next killer technology. In 1976 the Fairchild company released its Video Entertainment System (later renamed “Channel-F”). This system was a programmable console, meaning that it was not limited to a specific game or set of games preprogrammed into it. Atari released its own programmable console, the Atari 2600, in 1977. Both consoles ran games from plug-in cartridges. Other consoles followed based on this model. The first single-circuit board computer, the Apple I, also was released in 1976. This computer began a surge in the popularity and availability of home computers, and by the early 1980s a number of competing systems had emerged, including the now-ubiquitous IBM PC. This was a time when many companies were trying to predict what the next killer technology would be. Some believed that the answer was a fusion of games console and personal computer. Mattel and Coleco released upgrade packs to convert their consoles into personal computers. Neither product was successful. The belief that personal computer technology would kill games consoles led most manufacturers to
abandon games consoles. Atari switched from games to manufacturing its own line of personal computers, and Mattel and Coleco bowed out of the computer industry altogether in order to manufacture their own games console. These actions turned out to reflect poor judgment because the relatively high cost of personal computers led many who wanted just a games console to look elsewhere. In 1985 the Nintendo Entertainment System (NES) provided exactly what games players wanted—a low-cost dedicated games console. It was quickly followed in 1986 by the rival Sega Master System.

In 1989 Nintendo released the Game Boy, a handheld games console. By 1991 the Game Boy had been joined by competition from the Atari Lynx and the Sega Game Gear, both technically superior to the Game Boy. The Game Boy had an eight-bit processor (a computer circuit able to process data of eight-bit length that can be addressed and moved between storage and the computer processor) and a monochrome LCD screen with no built-in lighting. Its rivals both featured full-color backlit screens, and the Lynx even featured a sixteen-bit processor at a time when sixteen-bit games consoles were new. However, both the Lynx and the Game Gear fell by the wayside, and the Game Boy remained popular for twelve years until its successor, the Game Boy Advance, was released in 2001. The Game Boy's real killer technology was its portability. Its rivals were heavy and cumbersome by comparison, and their backlit screens used up so much power that they had a limited battery life.

Also in 1989 Sega released its sixteen-bit games console, the Genesis (known as the "MegaDrive" in Japan and Europe). Nintendo replied in 1991 with the Super Nintendo Entertainment System (SNES). By the mid-1990s these sixteen-bit consoles, together with the growing success of multimedia PCs (driven by the widespread adoption of the CD-ROM drive), combined to kill off the midrange personal computers such as the Atari ST and Commodore Amiga, which could not compete with the games consoles in price and could no longer compete with IBM PCs and Apple Macs in performance. Since then progress has been driven by the move toward faster processors and larger storage space, but a move toward technological convergence also has
occurred. Most PCs and games consoles can now play movies and music as well as video games.
Game Design In interactive entertainment all games are designed to create an enjoyable, involving, challenging experience for the player. The interactive entertainment industry classifies its game titles by genre. Genres are important concepts for game design and can often influence the technologies used in a game. Some of the common genres are action, adventure, strategy, role-playing, and simulation; other taxonomies (systems of classification) exist. Game design is a broad process that involves everything from the target audience to game play mechanics to the atmosphere conveyed by a game’s content. It is a complex process that requires a complete understanding of technology, game theory, storytelling, marketing, team leadership, and project management. The objective is for the design to be as complete as possible while remaining flexible enough to accommodate unexpected changes to the game specification. Designers lay out the design in a primary document called the “game specification,” which is used as the main reference in the development process.
Game Development Process When computer games first became mainstream during the 1980s one person usually designed and programmed a game, and that process is still the case for simple games for low-end platforms such as mobile phones and personal digital assistants (PDAs), or even simple two-dimensional games for PCs. However, to create a competitive game with a three-dimensional engine, an involving story, and hundreds of megabytes of media content, a game studio employing many multitalented people must work for four or five years. Teams work simultaneously to create the game according to the game specification. An average game studio will have three main teams: programming, art, and level design (the creation of environment, stages, or missions playable by a gamer in any type of computer or video game). Programmers work on coding the game engine (the core software component of a computer
game) and level-designing tools. Level designers use these tools to lay out the structure of the game, including content from the artists and storytelling from the game specification. The art team creates game objects (the components of games that can be rendered and with which the player can interact; today they mostly occur in three dimensions—3D) from concept sketches. The nature of game programming most often is governed by the current standards in game technology. These standards change so quickly that game engines might undergo upgrades even during the development of a game. Game implementation is as dependent on the technological standards of the games industry as it is on the requirements of a game specification. During the past decade interactive entertainment has made huge advances in computing. The best known of these advances are in 3D rendering hardware and software. Equally important are other advances in game technology, such as artificial intelligence, algorithms (detailed sequences of actions or procedures to perform or accomplish some task or solve a problem) for multiplayer networked games, software and hardware for sound media, and input peripherals such as controllers. Computer games have had a profound effect on computing and human-computer interaction (HCI). Computer games have been sources of innovations in several areas, including platforms (computers and consoles), application programming interfaces (API), graphics and sound hardware and software, peripherals, game logic, and artificial intelligence.
Platform and Language To date more than sixty-five video game systems have been marketed. Recent and well-known systems include the Dreamcast, GBA, GameCube, Nintendo 64, PlayStation, PlayStation 2, Xbox, PSP, and N-Gage. New systems are introduced almost every year. The most popular programming languages for computer game implementation are C and C++. They are fast, compiled (an executable program is created from source code), high-level languages that work on practically all platforms. The most accepted platform for computer games is
Microsoft Windows, namely the Win32 programming environment. The main reason for this acceptance is the availability of the Microsoft DirectX game development API, which together with C++ is the industry standard. On console platforms such as the PlayStation 2, the GameCube, or even the GBA, C++ is again used as the programming language. However, specialized console hardware devices, such as the PlayStation 2’s vertex processing unit (a hardware unit that runs small programs applying transformation, lighting, or other rendering effects to every vertex to be processed), are often accessed by low-level assembly (a symbolic language that is converted by a computer into executable machine-language programs) calls rather than by higher-level APIs.
Video Technology and 3D Graphics Computer games are best known for graphics because such visuals are the most important aspect of human-computer interaction, and computer games succeed largely through their exploitation of the computer interface. From the early days of the PC until the Super VGA (Video Graphics Array) era, video cards were measured by their 2D visual quality, the number of colors they could display, and the resolutions they supported. 2D games render animated sprites (small bitmap images often used in animated games, but also used as a synonym for ‘icon’) and a background directly to a video card’s screen buffer (the area of the computer’s memory (RAM) that holds the pixel data for the image being displayed). Early 3D games, such as Flight Simulator, Descent, and iD Software’s Wolfenstein 3D and Doom, used software rendering (techniques such as raycasting and polygon rasterization that produce views of a virtual three-dimensional scene on a computer) to calculate the pixel (the smallest picture element of a digital image) values rendered to the screen buffer. Unfortunately, software 3D rendering could not handle a scene with more than a few hundred polygons, limiting games to relatively low screen resolutions, and was not powerful enough to remove many visual flaws from the rendering process. 3D games soon evolved to the point that they needed more complex scenes with higher numbers
of polygons. Although software rendering engines worked, they were not fast enough. Thus, engineers became interested in implementing common 3D algorithms in hardware. During the mid-1990s several 3D APIs were candidates for hardware implementation. OpenGL (based on SGI’s (Silicon Graphics Incorporated) IRIS GL) was a successful standard API because of its ease of use and extensive industry support. Microsoft’s Direct3D was new to graphics but was supported by Windows 95. Hardware vendors also proposed their own APIs, such as NEC’s PowerVR (used in the Sega Dreamcast) and 3Dfx’s Glide. The game that revolutionized 3D graphics was iD Software’s Quake, the first 3D shooter (a game that presents a lot of violent fighting and shooting and depicts the action from the perspective of the player’s character) to feature entirely 3D environments and polygon-modeled characters instead of sprites. Although Quake ran on most computers with its excellent software renderer, iD Software also developed a version called “GLQuake” that rendered using OpenGL. This development occurred around the same time that Microsoft released a hardware OpenGL driver for Windows 95 and 3Dfx shipped its first Voodoo 3D accelerator card (a special printed circuit board, usually plugged into one of the computer’s expansion slots, that performs 3D rendering calculations in dedicated hardware). For the first time consumer-level hardware ran a 3D software API that could be used for real-time 3D graphics. When graphics accelerators first hit the market, the main visual improvements were smooth textures and lighting effects. Today hardware supports many more features. As hardware progresses, the capabilities of real-time 3D graphics are quickly approaching those of prerendered, offline raytracing software. Although real-time 3D is rasterized rather than raytraced, a variety of texturing and rendering methods is used to produce lifelike, surreal, and impressive effects. Many advanced rendering techniques devised to improve realism, some approaching photorealistic quality, are now standard in most graphics engines. OpenGL and Direct3D are the only APIs left on the playing field of PC graphics.
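The sprite-and-screen-buffer model described above can be made concrete with a minimal sketch in C++. The pixel format, the buffer layout, and the use of a "magic" transparent color are assumptions chosen for illustration rather than the conventions of any particular game or video card.

#include <cstdint>
#include <vector>

// A minimal illustration of 2D sprite rendering: copy a small bitmap
// (the sprite) into a larger array of pixels (the screen buffer),
// skipping a designated "transparent" color so the background shows through.
struct Sprite {
    int width = 0;
    int height = 0;
    std::vector<uint32_t> pixels;   // row-major color values
};

struct ScreenBuffer {
    int width = 0;
    int height = 0;
    std::vector<uint32_t> pixels;   // row-major color values
};

constexpr uint32_t kTransparent = 0xFF00FFu;  // assumed "magic" transparent color

// Blit the sprite so its top-left corner lands at (x, y), clipping at the edges.
void drawSprite(ScreenBuffer& screen, const Sprite& sprite, int x, int y) {
    for (int sy = 0; sy < sprite.height; ++sy) {
        for (int sx = 0; sx < sprite.width; ++sx) {
            int dx = x + sx, dy = y + sy;
            if (dx < 0 || dy < 0 || dx >= screen.width || dy >= screen.height)
                continue;                                   // clip off-screen pixels
            uint32_t color = sprite.pixels[sy * sprite.width + sx];
            if (color == kTransparent)
                continue;                                   // leave the background visible
            screen.pixels[dy * screen.width + dx] = color;  // overwrite the framebuffer pixel
        }
    }
}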
Artificial Intelligence Artificial intelligence (AI) in games generally differs from artificial intelligence in academic computer science. Much of this difference has to do with different goals. In games AI typically needs to exhibit only enough realism that its actions are consistent with the game world and rules. AI as a computer science discipline has a broader goal—to pass the Turing test (a behavioral test conceived by Alan Turing in 1950 designed to test whether or not a system is intelligent), in other words, to behave indistinguishably from a human. Sometimes game logic (the internal mechanism of a game that performs all the tasks needed for it to work) and AI share this goal. However, games are less strict about what is accepted as artificial intelligence. Artificial intelligence in games is limited by the constraint to perform as quickly as possible, leaving time for other game logic (e.g., collision detection and response) and rendering while maintaining a smooth frame rate. This constraint does not allow for complex AI systems, such as a neural network (a general-purpose learning technique applicable to almost any problem that can be regarded as pattern recognition in some form), genetic algorithms, or even advanced knowledge-based systems. As a result, behavior for AI game agents tends to be determined by relatively simple finite state machines (FSM) (abstract machines that have only a finite, constant amount of memory) with cleverly designed, predefined rules. Regardless of the differences, many concepts from academic AI are used in games. Chief among the concepts is the rules-based system, a fairly generic concept considering that all AI characters, as players in a game, must adhere to game rules and formulate their own rules by which to play. AI players are equally goal based; however, often their goals are different from those of the human player, depending on whether the game is competitive or cooperative. AI systems can be implemented using FSMs, which analyze scenarios on a per-context basis. FSMs are often paired with fuzzy logic (an approach to computing based on “degrees of truth” rather than the usual “true or false” Boolean logic on which the modern computer is based), which expresses the factors in a decision as matters of degree rather than as binary values.
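A minimal sketch in C++ of the finite-state-machine approach with predefined rules follows. The states, distances, and thresholds are invented for illustration; a real game tunes such rules for each character and level.

// A guard character with four states and simple, predefined transition rules.
enum class GuardState { Patrol, Chase, Attack, Flee };

struct Guard {
    GuardState state = GuardState::Patrol;
    float health = 100.0f;
};

// Called once per game update ("tick") with the current distance to the player.
void updateGuard(Guard& g, float distanceToPlayer) {
    // Highest-priority rule: a badly hurt guard runs away regardless of state.
    if (g.health < 20.0f) { g.state = GuardState::Flee; return; }

    switch (g.state) {
    case GuardState::Patrol:
        if (distanceToPlayer < 30.0f) g.state = GuardState::Chase;        // player spotted
        break;
    case GuardState::Chase:
        if (distanceToPlayer < 5.0f)       g.state = GuardState::Attack;  // close enough to strike
        else if (distanceToPlayer > 50.0f) g.state = GuardState::Patrol;  // lost the player
        break;
    case GuardState::Attack:
        if (distanceToPlayer >= 5.0f) g.state = GuardState::Chase;        // player backed away
        break;
    case GuardState::Flee:
        break;  // stays in Flee until healed (not modeled here)
    }
}

Each frame the guard consults only its current state and a handful of numeric thresholds, which is why this style of AI leaves ample processor time for rendering and other game logic.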
The most sophisticated aspects of game AI are often better termed “game logic.” Specifically, pathfinding (the problem of determining a path in a complex environment) is a challenging problem, particularly for enemy bots in first-person shooters and movable units in real-time strategy games. The most commonly used algorithms in pathfinding are Dijkstra’s algorithm for simple pathfinding in static mazelike environments, and A* for pathfinding in more open areas, particularly with obstacles moving dynamically. In role-playing games involving nonplayer characters (NPCs), the human player interacts with AI whose rules are defined by a script. These NPCs use prewritten dialogues. A well-scripted NPC can often act more appropriately, if not realistically, in the context of a game than can a simple artificial life agent (a virtual creature controlled by software, such as a creature in Black & White), which can understand the raw logic of game theory but can only poorly grasp the player’s relationship with the game world. NPC scripting plays an even greater role in defining AI’s rules-based system. For example, NPCs in BioWare’s role-playing game Baldur’s Gate use a rules-based script to determine which weapons to use in which combat situations.
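To make the pathfinding discussion concrete, the following sketch implements A* search on a grid of walkable and blocked cells, using the Manhattan distance as its heuristic. It is an illustration rather than the code of any particular engine; production games add movement costs, diagonal moves, hierarchical maps, and path smoothing.

#include <cstdlib>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

struct Cell { int x, y; };

// Returns the length of the shortest 4-connected path from start to goal,
// or -1 if the goal cannot be reached. grid[y][x]: 0 = walkable, 1 = blocked.
int aStarPathLength(const std::vector<std::vector<int>>& grid, Cell start, Cell goal) {
    const int h = static_cast<int>(grid.size());
    const int w = static_cast<int>(grid[0].size());
    auto heuristic = [&](Cell c) { return std::abs(c.x - goal.x) + std::abs(c.y - goal.y); };

    // Priority queue ordered by f = g + h, smallest first.
    using Node = std::pair<int, std::pair<int, int>>;            // (f, (x, y))
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> open;
    std::vector<std::vector<int>> best(h, std::vector<int>(w, -1));  // best g found per cell

    best[start.y][start.x] = 0;
    open.push({heuristic(start), {start.x, start.y}});

    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};

    while (!open.empty()) {
        auto [f, xy] = open.top(); open.pop();
        Cell c{xy.first, xy.second};
        int g = best[c.y][c.x];
        if (c.x == goal.x && c.y == goal.y) return g;            // goal reached
        if (f > g + heuristic(c)) continue;                      // stale queue entry
        for (int i = 0; i < 4; ++i) {
            int nx = c.x + dx[i], ny = c.y + dy[i];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;  // off the map
            if (grid[ny][nx] == 1) continue;                       // blocked cell
            int ng = g + 1;
            if (best[ny][nx] == -1 || ng < best[ny][nx]) {
                best[ny][nx] = ng;
                open.push({ng + heuristic(Cell{nx, ny}), {nx, ny}});
            }
        }
    }
    return -1;  // no path exists
}

Dropping the heuristic term (treating it as zero) turns the same code into Dijkstra's algorithm, which is why the two are often discussed together.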
Networking Overall, games have not changed networking nearly as much as networking has changed games. The multiplayer aspect, particularly on the PC, is essential to modern computer gaming in practically all genres. The most prominent game network software is Microsoft’s DirectPlay, part of the DirectX API. It builds a wrapper (a software layer that hides the low-level details of the system beneath it) on top of TCP/IP and other network protocols, allowing programmers to quickly connect game machines on a network and exchange packets (units of binary data capable of being routed through a computer network) without having to code a sophisticated communication system from scratch. With broadband Internet finally starting to spread to the consumer mainstream, online gaming is beginning to take off. Market research has suggested
that as many as 114 million people could be playing online games by 2006. Online games fall into a number of genres. At its simplest level an online game could be an interactive website that allows visitors to play simple games against each other, but many video games now come with an online competitive element, allowing a player to connect to the Internet and challenge friends or complete strangers. Online “communities” have grown up around such games, and developers often expend significant resources catering to them.
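The packet-exchange idea can be illustrated with a generic sketch in C++. This is not the DirectPlay API itself; the field layout is invented for illustration, and a real protocol must also deal with byte order across platforms, versioning, and lost or reordered packets.

#include <cstdint>
#include <cstring>
#include <vector>

// The kind of small, fixed-layout packet a multiplayer game might exchange
// each frame to keep players' positions synchronized.
struct PlayerUpdate {
    uint32_t playerId;     // which player this update describes
    uint32_t sequence;     // increasing number so stale packets can be discarded
    float    x, y, z;      // player position in the game world
    float    heading;      // facing direction in radians
};

// Serialize the update into a flat byte buffer ready to hand to the
// network layer (for example, a TCP or UDP send call).
std::vector<uint8_t> serialize(const PlayerUpdate& u) {
    std::vector<uint8_t> buffer(sizeof(PlayerUpdate));
    std::memcpy(buffer.data(), &u, sizeof(PlayerUpdate));
    return buffer;
}

// Reconstruct the update on the receiving side. Returns false if the
// buffer is too short to contain a complete update.
bool deserialize(const std::vector<uint8_t>& buffer, PlayerUpdate& out) {
    if (buffer.size() < sizeof(PlayerUpdate)) return false;
    std::memcpy(&out, buffer.data(), sizeof(PlayerUpdate));
    return true;
}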
Controller Interaction with Games and Health Issues Since the early years of computer games and console gaming systems, the game controller has played an important role in human-computer interaction by allowing users to directly interact with video games. A game controller is a computer input device that takes data from the user and transfers it to the computer or gaming console, where the software interprets the data and performs the action that the user wanted. Examples of controllers are the typical gamepad, the flight simulator controller, the analogue joystick, the light gun (a pointing device for computers that is similar to a light pen), and Sony’s EyeToy. These devices all work in the same way: They take movement and action cues from the user, translate them into tiny electrical pulses that a computer can understand, and send them to the machine on which the game is being played. Controllers mostly have been designed to provide only input. However, now companies are designing controllers that provide feedback or output in the form of force feedback, also known as “haptic feedback,” which is a rumble or vibration of the controller. For example, players can now feel when they are being tackled in Madden NFL or feel the force of being shot in UT. This feature adds to the fun of the game. Game controllers were originally developed to bring the arcade experience to the home, either through video games on the computer or through
console gaming systems. When designing arcade cabinets, companies could create custom sticks and pads for each individual game, but building a new controller for every game designed for home consoles or computers would not be feasible. When the Atari 2600—one of the first gaming consoles—was released in 1977 it was bundled with a pair of square-based, broomstick-like joysticks, each with an eight-position lever and an action button. These joysticks were some of the first game controllers. The Atari joystick was built with economy and durability in mind but not ergonomics. A major problem with such a joystick is the way in which it is operated: The constant wrist movement needed to operate it can cause painful irritation of overused muscles and tendons, leading to RSI (repetitive strain injury) and other pathologies such as carpal tunnel syndrome. To avoid this health issue, developers began to design controllers with smaller, thumb-operated joysticks. Before thumb-operated joysticks were developed, developers used directional pads, such as the one on the original Nintendo controller. A directional pad offered four directions of movement: up, down, left, and right. Today controllers combine directional pads, thumb-operated joysticks, and many combinations of buttons to allow the user to play games safely.
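The input path just described (controller state in, game action out) can be sketched as follows. The GamepadState structure, the button masks, and the dead-zone value are hypothetical, not part of any real controller API.

#include <cmath>
#include <cstdint>

// A snapshot of the controller's state, as a console or driver might
// deliver it to the game once per frame.
struct GamepadState {
    float    leftStickX = 0.0f;   // -1.0 (full left) to +1.0 (full right)
    float    leftStickY = 0.0f;   // -1.0 (full down) to +1.0 (full up)
    uint32_t buttons    = 0;      // one bit per button
};

constexpr uint32_t kButtonJump = 1u << 0;   // hypothetical button assignments
constexpr uint32_t kButtonFire = 1u << 1;

struct Player {
    float x = 0.0f, y = 0.0f;
    bool  jumping = false;
    bool  firing  = false;
};

// Map the controller snapshot onto in-game actions for this frame.
void applyInput(Player& p, const GamepadState& pad, float speed, float dt) {
    // Ignore tiny stick movements (a "dead zone") so the character does not drift.
    const float deadZone = 0.15f;
    float dx = (std::fabs(pad.leftStickX) > deadZone) ? pad.leftStickX : 0.0f;
    float dy = (std::fabs(pad.leftStickY) > deadZone) ? pad.leftStickY : 0.0f;

    p.x += dx * speed * dt;
    p.y += dy * speed * dt;
    p.jumping = (pad.buttons & kButtonJump) != 0;
    p.firing  = (pad.buttons & kButtonFire) != 0;
}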
Games as Educational Tools People increasingly use games for education in business, management, marketing, medicine, and schools; even detective games have been put to educational use. Many video and computer games seek to provide a realistic 3D visual experience. Training simulators for people such as pilots, tanker captains, soldiers, and law enforcement officers similarly seek to provide a realistic experience. The U.S. Army puts soldiers through simulated training and missions, teaching not only tactics, but also teamwork and military principles. The U.S. Navy has found that use of Microsoft’s Flight Simulator game improves the performance of flight cadets. Games and training simulations are converging as techniques and technology from games are being
adapted for use in training simulations for the military, law enforcement, air traffic controllers, and operators of all kinds of physical equipment. Abdennour El Rhalibi See also Artificial Intelligence; Multiagent Systems; Three-Dimensional Graphics
FURTHER READING Adams, J. (2002). Programming role playing games with DirectX. Indianapolis, IN: Premier Press. Anderton, C. (1998). Digital home recording. San Francisco: Miller Freeman Books. Barron, T., & LostLogic. (2002). Multiplayer game programming. Roseville, CA: Prima Tech. Bates, B. (2001). Game design: The art and business of creating games. Roseville, CA: Prima Tech. Binmore, K. (1997). Fun and games: A text on game theory. Lexington, MA: D. C. Heath. Chapman, N. P. (2003). Digital media tools. Chichester, UK; Hoboken, NJ: Wiley Ed. Crooks, C. E., & Crooks, I. (2002). 3D game programming with Direct X 8.0. Hingham, MA: Charles River Media. Danby, J. M. A. (1997). Computer modeling: From sports to space flight, from order to chaos. Richmond, VA: William-Bell. DeLoura, M. (Ed.). (2000). Game programming gems. Hingham, MA: Charles River Media. Deloura, M. (Ed.). (2001). Game programming gems II. Hingham, MA: Charles River Media. Dempski, K. (2002). Real-time rendering tricks and techniques in DirectX. Indianapolis, IN: Premier Press. Engel, W. F. (Ed.). (2002). Direct3D ShaderX: Vertex and pixel shader tips and tricks. Plano, TX: Wordware Publishing. Hallford, N., & Hallford, J. (2001). Swords and circuitry: A designers guide to computer role-playing games. Roseville, CA: Prima Tech. Mulholland, A., & Hakal, T. (2001). Developer's guide to multiplayer games. Plano, TX: Wordware Publishing. Preece, R. S. (2002). Interaction design: Beyond human-computer interaction. New York: John Wiley & Sons. Rasmusen, E. (2001). Games and information: An introduction to game theory. Malden, MA: Blackwell Publishing. Rollings, A., & Morris, D. (2000). Game architecture and design. Scottsdale, AZ: Coriolis Group. Rouse, R. (2001). Game design, theory and practice. Plano, TX: Wordware Publishing. Thomas, L. C. (1984). Games, theory, and applications. New York: Halsted Press. Walsh, P. (2001). The Zen of Direct3D game programming. Roseville, CA: Prima Tech. Watt, A., & Policarpo, F. (2001). 3D games: Real-time rendering and software technology, 1. New York: ACM Press.
GENDER AND COMPUTING There is ample evidence of relationships between gender and computer attitudes, computer self-efficacy, computer anxiety, and computer use. In general, females have less favorable attitudes, lower self-efficacy, greater anxiety, and lower levels of use than do males. These gender differences are pervasive and persistent. They have been observed in the United States, the United Kingdom, Israel, Romania, Hong Kong, and other countries and in children and adults of all ages. They have been observed for at least three decades, using a variety of measuring instruments. Given persistent and pervasive gender differences in attitudes, feelings, and behavior regarding computers, it is no surprise that women are underrepresented in computing majors and computing careers. Of the 24,768 bachelor of science degrees in computer and information sciences awarded in the United States in 1996–1997, fewer than 7,000 went to women. Fewer than one in six doctorates in computer science go to women. Moreover, the gender gap has been increasing despite considerable efforts to reduce it. In Europe, for example, female enrollment in computer science courses declined from 28 percent in 1978 to 13 percent in 1985 and to 9 percent in 1998. In contrast, women's representation in other previously male-dominated fields, such as medicine and veterinary medicine, has increased dramatically. Women who have entered computing tend to be concentrated in “soft” areas such as marketing and sales and in support functions such as help desk and customer service—areas requiring good interpersonal skills, which are deemed more likely in women than men. Men dominate in technical areas, such as systems analysis and programming.
Explaining the Relationship between Gender and Computing Scholars from various disciplines and practitioners in various computer science industries have offered a variety of explanations for the relationship between
gender and computing. Some explanations focus on gender role socialization, which deters women from choosing careers that society deems atypical for females, such as computing. Other explanations focus on stereotypes about computing and how the social construction of computing as masculine may be keeping women away. Still others consider how gender differences in values may be implicated in women's decisions not to engage in computing activities or computing careers. Discrimination in the classroom and workplace, either blatant or indirect, is yet another explanation for the under-representation of women in computing.
Gender Role Socialization Research indicates that women and girls are systematically steered away from technical fields through school culture, traditional gender roles, and other societal pressures. Gender role socialization, particularly during the high school years, works against young women’s interest in computing. Females encounter traditional attitudes about computing among their peers, parents, and teachers. Computing environments, such as computer labs and clubs, are typically female-unfriendly. Not only are women in the minority in these settings, they are also less likely to be included in discussions there, especially informal conversations. When girls receive career advice, counselors may fail to mention nontraditional career choices or may actively discourage girls from pursuing them. Parents contribute to gender role socialization in the technology they make available to their children. In the United States, five times as many boys as girls have computers to use at home, with parents spending twice as much money on technology products for their sons as for their daughters. Video games, an important pathway to interest in computing, are played and made primarily by males. In addition to emphasizing competition and violence, which run counter to female socialization, video games typically relegate female characters to limited roles as damsels in distress or sideshow prostitutes. When women are the main characters in video games, they are usually provocatively dressed in ways that emphasize their sexuality (as, for example, with Lara Croft in the video game Tomb Raider).
“Computer Girl” Site Offers Support for Young Women
Calling itself “a bridge from high school to the computer world,” the Computer Girl website (www.computergirl.us) is sponsored by ACM-W (the Association for Computing Machinery’s Committee on Women in Computing). Moderated by Amy Wu, a high school student with a keen interest in computer science, the site offers advice from women in computing and connects girls with mentors and “big sisters” in the field. There are also links to relevant articles and information on scholarships for women in computing. Below is a typical question and answer exchange from the site:
Q: What kind of pre-college experience should I have?
A: Before college, make sure you have a computer at home, even if it means saving for months or buying a used one. Unless you are using a computer frequently (constantly), you do get rusty. Join a computer club or, better yet, start one of your own, particularly a girl-only club for both fun and for helpful support. Try to meet successful women in computing. Write letters, send emails, visit offices—whatever you can do to get advice and gain support for your interests. Take an internship with a technology company and even try to target a woman-owned tech company. Do your research, do your homework and don't give up!
The situation appears somewhat better in Eastern Europe than it does in Western Europe and the United States. The former Communist countries of Eastern Europe have historically produced proportionately more female technologists, engineers, and physicists than Western Europe or the United States. Soviet industrialization efforts, which emphasized both gender equality and the importance of technology, created a relatively gender-neutral view of technology. Thus, as late as the 1980s, as many if not more females than males were studying to be engineers in those countries. The majority of students in math and computing courses in Bulgaria and Romania are female, more than double the proportion of females in similar courses in the United Kingdom.
It is therefore unsurprising that gender differences in computer attitudes, self-efficacy, anxiety, and use are smaller in Eastern European countries than in the West. What is surprising is that gender differences exist at all, and in the same direction as in the West. Although the evidence is by no means as overwhelming as it is for the United States and Western European countries, females in Eastern European countries appear to have less computer self-efficacy and more computer anxiety than their male counterparts, and use computers less often than do their male counterparts.

Stereotypes and the Social Construction of Computing There exists a stereotype of computer users as myopically obsessed with computing to the exclusion of everything else, including people and relationships. Although both male and female computer users consider this stereotype to be “not me,” the image is more threatening for females than males because of the importance of people and relationships in the gender role socialization of females. The scholars Sherry Turkle and Seymour Papert argued in a 1990 paper that women and men may have different computer styles, and that the male style is more strongly supported by the computer culture. Females’ computer style is relational and characterized by efforts to connect and interact with objects—to have a relationship with objects and with the computer itself (for example, by anthropomorphizing the computer, giving it a personality). Males’ computer style is impersonal and characterized by efforts to distance oneself from objects, to command and conquer, to avoid relationships, either between objects or between self and objects, including the computer. The social construction of computing favors impersonal-style users. Studies of elementary and high school children reveal that girls have accepted the masculine view of computing. Girls are more likely than boys to believe they will not be good at computing, that they are not qualified for computing, that they will not enjoy computing, and that they will not be able to obtain a good position in computing. Girls believe that computing work is done in isolation, that it involves sitting at a computer screen all day, and that the primary
activity is either programming or office administration. They also believe that computing requires mathematics, and girls tend to feel less competent in mathematics than do boys. Similar findings regarding the nature of computing work were obtained in a 1998 study of adults in Australia, Hong Kong, the United Kingdom, and the United States. Women were more likely than men to believe that computing careers involved solitary work in front of a computer screen and would not involve teamwork. They were more likely than men to believe that computer work would not involve travel, and that the profession required technical, mathematical, and management skills. Though both genders believed that a computing career would require a continual updating of skills, women believed this more strongly than did men. Other findings from this study indicated that both genders assigned gendered preferences in computing subdisciplines. For example, both believed that women preferred multimedia and documentation to programming.

Narrowing the Gap
(ANS)—Two journal entries by 13-year-old Liliana Guzman capture the idea behind TechGYRLS, a national program designed to narrow the knowledge and skills gap between girls and boys in the realm of technology. “I like TechGYRLs because when you make a mistake there are no boys to laugh at you,” Guzman wrote in a journal recording her experiences in the program in Dallas last year, when she was in sixth grade. Her second entry said: “I think if you try you may get it, but if you give up, you won't get it.” Girls are not keeping up with boys when it comes to computers and other technologies, say experts in the field. “If technology is going to play a significant role in our future, something most experts agree is inevitable, and women are to be equal partners in shaping that future, we must find ways to capture and maintain girls' interest in computers and technology,” said Marla Williams, executive director of the Women's Foundation of Colorado. Williams' group recently released a report reviewing various computer games and software marketed to boys and girls and found a significant gap in their educational value. Boys' products develop their computer literacy and programming skills, it found, while products for girls emphasize fashion and hairdos. The report encouraged creation of higher-quality software titles of interest to girls. Capturing and maintaining girls' interest in computers is a goal shared by the founders of the YWCA-affiliated TechGYRLS and similar programs. They strive to expose girls to technology—and professional women who use it—in an all-girl setting. […] Pamela Cox, an elementary school teacher and TechGYRLS leader, said the 14-week, after-school program for 9- to 13-year-olds focuses on using a computer animation program as well as robotics construction kits. “The animation program requires them to do some basic programming,” Cox said. “They start by creating geometric figures, then they can add a picture to it. It's really neat. What's great is when they get to the point where they can help each other. All you have to do is get them started.” The Dallas program is one of seven around the country, all affiliated with the YWCA of the USA. . . . “We're working with girls coming into a phase of life where their decision may be to paint their nails or hang out in a mall,” said Khristina Lew, a YWCA spokeswoman. “Or they can focus on something that will have a positive effect on their lives. Girls can be shy or embarrassed about pursuing traditionally masculine careers. TechGYRLS is a program that can make them more comfortable with that idea and hopefully will stay with them as they grow up.” Karen Pirozzi
Source: Girls get help in keeping up with boys in computer skills. American News Service, 2000.

In discussing the social construction of computing, feminist scholars have suggested three reasons why women are staying away from computing: communicative processes that handicap women (e.g., excluding them from informal communications), social networks that favor men, and male claims to knowledge overlaid by gendered power relations (men having more
power than women). Feminists assert that it is this masculine discourse, embedded in a masculine computing culture, that is the major deterrent to women. Some feminist scholars have suggested that more sophisticated theorization about computing is needed to encourage women into the field. They point out that computing is a new and amorphous field consisting of a set of disparate and complex practices and technologies. Academic computing does not accurately reflect the field. It relies too heavily on mathematical formalism and ignores the creative approaches to computing that are needed in the workplace. Moreover, the image of the computer scientist as antisocial and uninterested in the end users is actually contrary to the requirements of most computing work for strong interpersonal skills and attention to the end user.

Gender Differences in Values The gender gap in computing may reflect gender differences in work values. Women place greater value than do men on interpersonal relationships at work and on work that helps others. The masculine construction of computing may lead women to perceive computing careers as incompatible with their work values. Research supports this view. Women state that they are actively rejecting computing careers in favor of careers that better satisfy their values. For example, previously male-dominated careers such as medicine and law have experienced a dramatic increase in the number of female participants, due at least in part to the construal of these professions as compatible with women's valuing of relationships and service to others. Other work values that have been implicated in women's decisions not to enter computing fields are the value of pay and job status, particularly when these values conflict with family values. Women value high pay and job status less than do men, and value work flexibility to accommodate family more than do males. Thus, the promise of high-paying, high-status computing jobs will be less attractive to women who are making career decisions than it will be to men.

Discrimination From elementary school to the workplace, computing environments have been characterized as unfriendly if not outright hostile to women. Females receive less encouragement to engage in computing activities,
their contributions are less likely to be acknowledged and valued, and they are more likely to be discouraged and even ridiculed for engaging in computing activities. While the downplaying of women's contributions is frequently unintentional, the cumulative effect is lower self-esteem and self-confidence, especially about the likelihood of success in computing. By adolescence females express less self-confidence about computing than do males, even when faced with objective evidence of equivalent performance. Women who do enter computing careers face subtle and sometimes not-so-subtle discrimination in the workplace. They are often assigned less-challenging work, passed over for promotion, and less often acknowledged for their contributions than are men with similar qualifications and job experience.
The Internet: A Special Case? The Internet, once the almost exclusive domain of males, is now an equal-opportunity technology, but only in the United States. Females comprise just over half of U.S. Internet users, a dramatic change from their 5 percent representation in 1995. However, males are the predominant Internet users in all other countries. For example, 75 percent of Brazilian, 84 percent of Russian, 93 percent of Chinese, and 96 percent of Arab Internet users are male. Within the United States, evidence suggests that although women are as likely as men to access the Internet, they do so less often and for shorter periods of time, visit fewer Web domains, and engage in fewer and different Internet activities than males. For example, women are more likely than men to use the Internet to communicate with family and friends, consistent with women’s stronger interpersonal orientation. Some researchers have argued that aspects of the online world discourage women from using the Internet. Pornography and sexual predators can make the Internet seem dangerous. Flaming—sending or posting abusive, hostile remarks electronically—is a form of direct aggression of the sort that women are socially conditioned (if not biologically predisposed) to avoid. Gender differences in communication dynamics in the real world may be replicated online. Women get less respect and acknowledgement for their contributions; men dominate online discussions.
Is the Relationship between Gender and Computing a Problem? There are at least two reasons for believing that the declining number of women in computing is a problem. First, the decision not to consider computing careers closes off potentially rewarding futures for women. Women typically work in jobs that pay less, have fewer benefits, and offer fewer opportunities for advancement than the jobs typically held by men, including computing jobs. Excluding such careers from consideration is particularly troublesome at a time when women are increasingly likely to be the main or sole providers for their families. Computing careers generally pay well, provide good benefits for families, and offer opportunities for personal growth and advancement. For women to dismiss computing, especially when dismissal is based on misperceptions or misinformation about the field, puts them at a disadvantage in terms of their own needs as workers and family providers. Second, the computing field needs women workers. Currently, and into the foreseeable future, there is a strong and unmet need for qualified workers in computing. One way to meet this need is to encourage more women to enter the field.
Characteristics of Women in Computing There are of course some women who have overcome gender socialization, gender stereotypes, the masculine construction of computing, and overt and subtle discrimination to find satisfying careers in computing. A considerable amount of research has focused on identifying what characteristics set these women apart, both from men in computing and from women in other careers. When asked their reasons for choosing computing as a career, these women gave responses that in some cases reflected their personal attributes and in some cases reflected external factors.

Personal Attributes Women who enter computing often do so because they perceive it as related to their other interests, which are often (but not always) people-oriented (medicine, for example).
In contrast, men who enter computing tend to be focused on computing itself. Women's interest in computing appears to develop more slowly and much later than men's interest: Men's interest often develops in early childhood as a result of playing video games, while women's interest often develops in high school as a result of using computers as a means to an end (for example, as a tool for completing school projects such as science reports). Women are more likely than men to begin their computing careers at a later stage (generally after age thirty), often following a first career or a career break. Women in computing often regard themselves as having high ability in math and science. They enjoy logical thinking and problem solving, and see computing careers as a good fit with their abilities.

External Factors Women in computing report having family, friends, and teachers who supported their interest and decision to enter computing. Institutional support also appears to be a factor: The availability of high-quality, affordable education at a desired location influenced some women's decisions to enter computing. In particular, recruiting strategies that emphasize the high quality and low cost of the institution's computing programs and that emphasize geographic factors such as nearness to family have been more successful than strategies that do not explicitly mention these factors. Finally, the high pay, favorable job prospects, and opportunities for challenging work and advancement that careers in computing offer are all reasons that women cited as important in their decision to enter computing.
Increasing the Representation of Women in Computing A number of recommendations are available on how best to increase the representation of women in computing. First, the image of computing needs to be reconstructed to reflect more accurately the diversity of skills and approaches that are desirable in the field today. In both curriculum design and pedagogy, greater emphasis should be placed on interpersonal, business, and multitasking management skills, all of
which are needed in today's computing careers, along with technical skills. Second, both boys and girls need to be provided with positive computing experiences early on (in elementary school or earlier). Computing environments should be made more welcoming to girls. For example, environments could focus more on cooperation as a means of problem solving than on competition to determine who has the “best” solution. Computing activities should take into account female socialization and stereotypic influences on interests and preferences. For example, multimedia computing activities or activities that involve cooperation and collaboration, particularly among same-gender others, have been shown to be well received by girls. Less clear is whether video games targeting girls are helping to achieve gender equity in computing. Third, young women need more complete and accurate information about computing careers. The diversity of skills and approaches needed in computing and connections between computing and other fields should be highlighted when educating students about computing careers. It should also be made clear how computing careers can satisfy a diverse set of values. Fourth, role models and mentors need to be as available to girls and women as they are to boys and men. Research on gender and computing has steadfastly and uniformly advocated the use of mentors and role models for recruiting and retaining women. Taken together, these recommendations may help more women discover satisfying careers in computing. Linda A. Jackson
See also Digital Divide; Sociology and HCI

FURTHER READING

American Association of University Women (AAUW). (2000). Tech-savvy: Educating girls in the new computer age. Washington, DC: AAUW Educational Foundation.
Balka, E., & Smith, R. (Eds.). (2000). Women, work and computerization. Boston: Kluwer.
Blair, K., & Takayoshi, P. (Eds.). (1999). Feminist cyberspace: Mapping gendered academic spaces. Stamford, CT: Albed.
Brosnan, M. J. (1998). Technophobia: The psychological impact of information technology. London: Routledge.
Brosnan, M. J., & Lee, W. (1998). A cross-cultural comparison of gender differences in computer attitudes and anxiety: The U.K. and Hong Kong. Computers in Human Behavior, 14(4), 359–377.
Cassell, J., & Jenkins, H. (Eds.). (1998). From Barbie to Mortal Kombat: Gender and computer games. Cambridge, MA: MIT Press.
Cone, C. (2001). Technically speaking: Girls and computers. In P. O'Reilly, E. M. Penn, & K. de Marrais (Eds.), Educating young adolescent girls (pp. 171–187). Mahwah, NJ: Lawrence Erlbaum Associates.
Durndell, A., & Haag, Z. (2002). Computer self-efficacy, computer anxiety, attitudes toward the Internet and reported experience with the Internet, by gender, in an East European sample. Computers in Human Behavior, 18, 521–535.
Gorriz, C., & Medina, C. (2000). Engaging girls with computers through software games. Communications of the Association of Computing Machinery, 43(1), 42–49.
Jackson, L. A., Ervin, K. S., Gardner, P. D., & Schmitt, N. (2001). Gender and the Internet: Women communicating and men searching. Sex Roles, 44(5–6), 363–380.
Kirkpatrick, H., & Cuban, I. (1998). Should we be worried? What the research says about gender differences in access, use, attitudes and achievement with computers. Educational Technology, 38(4), 56–61.
Margolis, J., & Fisher, A. (2002). Unlocking the clubhouse: Women in computing. Cambridge, MA: MIT Press.
Morahan-Martin, J. (1998). Males, females and the Internet. In J. Gackenbach (Ed.), Psychology and the Internet (pp. 169–198). San Diego, CA: Academic Press.
Panteli, N., Stack, J., & Ramsay, H. (2001). Gendered patterns in computing work in the late 1990s. New Technology, Work and Employment, 16(1), 3–17.
Robertson, M., Newell, S., Swan, J., Mathiassen, L., & Bjerknes, G. (2001). The issue of gender within computing: Reflections from the UK and Scandinavia. Information Systems Journal, 11(2), 111–126.
Sanger, J., Wilson, J., Davies, B., & Whittaker, R. (1997). Young children, videos and computer games. London: Falmer.
Schott, G., & Selwyn, N. (2000). Examining the “male, antisocial” stereotype of high computer users. Journal of Educational Computing Research, 23(3), 291–303.
Tapscott, D. (1998). Growing up digital: The rise of the Net generation. New York: McGraw-Hill.
Teague, G. J. (2002). Women in computing: What brings them to it, what keeps them in it? SIGCSE Bulletin, 34(2), 147–158.
Turkle, S., & Papert, S. (1990). Epistemological pluralism: Styles and cultures within the computer culture. Signs: Journal of Women in Culture and Society, 16(1), 128–148.
GEOGRAPHIC INFORMATION SYSTEMS A geographic information system (GIS; the abbreviation is used for the plural, geographic information systems, as well) is capable of performing just about any conceivable operation on geographic information, whether editing and compilation, analysis,
mining, summarizing, or visualization and display. Geographic information is a particularly well-defined type of information, since it refers specifically to the surface and near-surface of Earth and links observations and measurements to specific locations (for all intents and purposes the term geospatial is synonymous with the term geographic). Maps are the most familiar form of geographic information, so a GIS can be considered simplistically as a computerized collection of maps, but a far wider assortment of types of information can be included in a GIS than are included on a map, including customer records (e.g., records that a mail order company might keep on its customers) that are tagged with geographic locations such as street addresses, or images of Earth's surface from remote sensing satellites, or information gathered using the Global Positioning System (GPS). Today GIS development is a major application of computing technology, with an annual market for software, data, and services totaling on the order of $10 billion. The general public is likely to encounter GIS through Web-based services such as MapQuest that offer maps and driving directions computed from digital maps. Most municipal governments will use GIS to track, manage, and plan the use of their geographically based assets and activities, as will utility and telecommunication companies, resource management agencies, package delivery companies, and departments of transportation. GIS is extensively used in the military for tasks such as targeting missile systems, planning battlefield tactics, and gathering intelligence.
GIS Representations Two major forms of data representation are used in GIS: raster and vector. In raster form, an area is represented as an array of rectangular cells, and variation of some phenomenon of interest over the area is expressed through values (e.g., degrees of light or darkness, colors, or numeric values representing such properties as annual rainfall) assigned to the cells. This form is used for remotely sensed images from satellites, and today GIS users have easy access to a wide range of such images, from government sources such as NASA and the U.S. Geological Survey, to commercial sources such as IKONOS and Quickbird. Images of interest to GIS users will have spatial reso-
lutions (cell sizes) ranging down to 1 meter or less. Raster data is also the preferred format for digital elevation models (DEMs), which represent Earth's topographic surface through measurements at regular intervals. DEM data is available for most of the United States at 30-meter resolution, and for the world at 1-kilometer resolution. In vector representation, phenomena are represented as collections of points, lines, or areas, with associated attributes. Vector representation is widely used to disseminate data from the U.S. Census, for example, providing summary statistics for states, counties, cities, or census tracts, and representing each reporting zone as an area. Lines and areas are most often represented as sequences of straight-line segments connecting points, and as such are termed polylines and polygons respectively. Vector representation is also used for the street centerline databases that describe the locations of streets, roads, and highways, and are widely used to support way finding. A vector GIS is also capable of representing relationships between objects—for example, between points representing incidents of crime and the neighborhoods in which the crimes occurred. This capability allows places of work to be linked to workers' home locations, or connections to be made between bus routes. Because relationships are in general unaffected by stretching or distortion of the geographic space, they are generally termed topological data, to distinguish them from geometric data about object positions and shapes. A GIS database makes use of both raster and vector formats, and typically will contain several distinct layers, or representations of different phenomena over the same geographic area. For example, layers might include representations of maps of topography, soils, roads, rivers and lakes, and bedrock geology. By including all of these layers in a single database, it is possible to use the GIS to explore relationships and correlations, for example between soils and bedrock geology, and to combine layers into measures of suitability for various types of land use or vulnerability to pollution.
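A minimal sketch of the two representations, written in C++, might look as follows. The field names and the attribute scheme are assumptions made for illustration; production GIS formats such as GeoTIFF or the shapefile are far richer.

#include <string>
#include <vector>

// Raster layer: a rectangular array of cells, each holding one value
// (elevation, rainfall, a land-cover code, and so on). The georeference
// ties cell (0, 0) to a real-world location and gives the cell size.
struct RasterLayer {
    int    cols = 0, rows = 0;
    double originX = 0.0, originY = 0.0;   // world coordinates of the upper-left corner
    double cellSize = 0.0;                 // width and height of one cell in world units
    std::vector<double> values;            // row-major, cols * rows entries
};

// Vector layer: discrete features (points, polylines, polygons) built
// from coordinate pairs, each carrying attributes such as a name or a
// census count.
struct Point { double x = 0.0, y = 0.0; };

struct Feature {
    enum class Kind { Point, Polyline, Polygon } kind = Kind::Point;
    std::vector<Point> vertices;           // one point, a line's vertices, or a polygon ring
    std::string name;                      // e.g., a county or street name
    double attribute = 0.0;                // e.g., population or average rainfall
};

struct VectorLayer {
    std::vector<Feature> features;
};

A GIS database would hold many such layers registered to the same coordinate system, which is what makes cross-layer queries such as overlaying soils on bedrock geology possible.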
GIS Functions The most important parts of a GIS are those that support its basic functions, allowing users to compile, edit, store, and display the various forms of geographic
information. Defining location on Earth's surface can be a complex task, given the large number of alternative coordinate systems and projections available to mapmakers. A GIS thus needs the ability not only to convert between raster and vector representations, but also to overcome differences between coordinate systems (such as the latitude-longitude coordinate system), between map projections (such as the Mercator projection or the Lambert Conformal Conic projection), and between the various mathematical figures used to approximate the shape of Earth. In the United States, for example, geographic data may use either of two mathematical figures: the North American Datum of 1927, based on the Clarke ellipsoid of 1866, or the newer North American Datum of 1983, based on a unified global geodetic system. Many distinct coordinate systems are in use, ranging from the high-accuracy State-Plane Coordinate systems defined by each U.S. state to the lower-accuracy Universal Transverse Mercator system originally devised for military applications. Once the foundation for such basic operations has been built, a GIS developer can quickly add a vast array of functions and capabilities. These may include sophisticated algorithms for designing and printing hard-copy maps, algorithms to identify optimum routes for vehicles through street networks or optimum locations for new retail outlets, methods for computing correlations between data in different layers or for combining layers into measures of suitability, and methods for evaluating potential land use decisions. All these uses are termed spatial analysis; when applied to extremely large data sets in an exploratory mode they are termed data mining. The list of supported forms of spatial analysis is huge, and an industrial-strength GIS will offer literally thousands. In addition, there is an active market in extensions to basic GIS products, offered by third parties and designed to be compatible with a specific vendor's base product.
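Two of these basic operations can be sketched briefly in C++: projecting latitude and longitude to planar map coordinates (here a simplified spherical Mercator projection) and measuring the area of an arbitrarily shaped zone stored as a vector polygon (the so-called shoelace formula). Both functions are illustrations under stated simplifying assumptions, not the algorithms of any particular GIS product.

#include <cmath>
#include <cstddef>
#include <vector>

struct MapPoint { double x = 0.0, y = 0.0; };   // projected coordinates in meters

constexpr double kPi = 3.14159265358979323846;
constexpr double kEarthRadiusM = 6371000.0;     // mean Earth radius (spherical assumption)

// Forward spherical Mercator projection; a production GIS must also handle
// ellipsoidal datums such as NAD27, NAD83, and WGS84.
MapPoint latLonToMercator(double latDeg, double lonDeg) {
    const double lat = latDeg * kPi / 180.0;
    const double lon = lonDeg * kPi / 180.0;
    MapPoint p;
    p.x = kEarthRadiusM * lon;
    p.y = kEarthRadiusM * std::log(std::tan(kPi / 4.0 + lat / 2.0));
    return p;
}

// Area of a simple polygon whose vertices are given in meters in a planar,
// equal-area or local projection (for example, a State Plane zone).
// Feeding raw latitude/longitude values into this function would give
// meaningless units.
double polygonArea(const std::vector<MapPoint>& ring) {
    double sum = 0.0;
    const std::size_t n = ring.size();
    for (std::size_t i = 0; i < n; ++i) {
        const MapPoint& a = ring[i];
        const MapPoint& b = ring[(i + 1) % n];   // wrap around to close the ring
        sum += a.x * b.y - b.x * a.y;            // signed trapezoid contribution
    }
    return std::fabs(sum) / 2.0;                 // square meters
}

A real GIS would measure area in an equal-area or local projection rather than in Mercator coordinates, since the Mercator projection exaggerates sizes away from the equator.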
GIS as Human-Computer Interaction The original motivation for the development of GIS in the 1960s came from the need to automate certain basic operations. One was map editing, which is very difficult and time-consuming if performed
by hand, given the technical issues involved in moving or deleting hand-drawn lines on paper maps. The other was the measurement of the area of arbitrarily shaped zones on maps, as required, for example, in the task of inventorying land use, or the planning of new subdivisions. This use of GIS as a personal assistant that can perform tasks on geographic data that the user finds too tedious, expensive, inaccurate, or time-consuming to perform by hand, drove almost all of the first thirty years of GIS development. It required modes of human-computer interaction (HCI) that were suited to the task, providing a comparatively skilled user with easy access to information and the results of analysis. More recently, however, a number of other requirements have come to dominate developments in GIS-driven HCI. A GIS that is used in a vehicle to provide the driver with instructions on reaching his or her destination must convey the information without distracting the driver from the driving task. Many of the in-vehicle navigation systems that are now being installed in cars, either as original equipment or as optional accessories, provide for auditory instructions as well as visual output, and may include voice-recognition functions for input as well. HCI issues also arise when GIS must be designed for use by people with visual impairment, and there have been several interesting developments along these lines in the past ten years. Advanced GIS use requires a high level of skill and training on the part of its user. Courses in GIS often include advanced work in map projections and in spatial statistics. Thus another set of HCI issues arises in GIS applications that are designed for use by children, or by other groups whose knowledge of advanced GIS concepts is limited. Encarta, Microsoft’s CD-ROM encyclopedia, for example, offers a number of functions associated with geographic information, including simple mapmaking, and retrieval of information using maps as organizing frameworks. A child cannot be expected to understand map projections, so it is common for such systems to display information on a three-dimensional globe rather than on a flattened or projected Earth. A child cannot be expected to understand the cartographer's concept of scale or representative fraction, so such systems resort to clever metaphors as
the basis for specifying level of detail, for example by allowing the user to raise or lower the viewpoint relative to Earth's surface, revealing less and more detail respectively.

Geographic Information Systems Aid Land Conservation
(ANS)—Technology that helps a firm like Sears, Roebuck and Co. find a good department store location or steers a developer to an ideal housing site is also proving useful to nonprofit organizations concerned with land stewardship issues such as conservation, environmental justice and sustainable development. The technology is known as Geographic Information Systems, and computer giant Hewlett Packard and software maker Environmental Systems Research Institute Inc. have awarded grants totaling $6 million to put it in the hands of land preservation groups. While GIS analyzes large amounts of complicated information about an area of terrain and turns it into easily understood maps and graphics, it is costly and requires trained users and hefty computer hardware. According to Hewlett Packard executive Forrest Whitt, some people in the computer industry wanted to “even the playing field” by putting GIS technology employed by mineral exploiters and private developers into the hands of nonprofit groups. The New York Public Interest Research Group (NYPIRG) used grant money awarded in 1998 to help voters track the impact of political issues on their neighborhoods. For example, it can project graphic overlays on maps that illustrate land use issues affecting a neighborhood and sources of campaign dollars in a city election. “We were able to use this technology to depict the trends in pesticide use in the state,” said Steven Romalewski, the program's director, “as well as (mapping) where the majority of Mayor Rudolph Giuliani's campaign contributions were coming from.” He said many New York City voters seemed interested to learn it didn't come from one of the city's five boroughs. The Southern Appalachian Forest Coalition, an alliance in six states from Virginia to Alabama, used GIS technology to raise awareness of old-growth forest preservation and to identify remaining wild areas. The California Wildlands Project is using its GIS grant to create a statewide map for habitat conservation. John Maggio
Source: Land Conservation Groups Benefit from Development Technology. American News Service, March 9, 2000.

Virtual Reality and Uncertainty A GIS contains a representation of selected aspects of Earth's surface, combining raster and vector formats to achieve a representation using the binary alphabet of digital systems. When used at the office desk, as is typical of the vast majority of GIS applications, the representation in effect replaces the real world, limiting its user's perception of reality to the information contained in the database. The real geographic world is infinitely complex, revealing more detail the closer one looks apparently ad infinitum, so it follows that any database representation must be at best a generalization, abstraction,
approximation, or sampling of the real thing that it purports to represent. Only in certain very limited circumstances, such as the representation of objects that are truly mathematical, including the straight lines of land surveys, is it possible to achieve close to perfect representation. This fundamental principle of GIS representations has led to great interest in the topic of uncertainty, which can be defined as the difference between what the database tells the user about the real world and what the real world would reveal to the user if visited directly. In some cases uncertainties can be resolved from the user's own memory, particularly if the user has visited the area that is represented in the database. GIS use is thus always most successful if combined with personal knowledge. There are many sources of uncertainty, including measurement error (it is impossible to measure location on Earth's surface exactly), generalization
(the omission of local detail from representations in the interests of simplicity or limiting data volume), vagueness in the definitions of key terms (for example, there are variations in the way soil and land cover are classified), and confusion on the part of the user about what the data are intended to represent. This last type of uncertainty commonly arises when the user of a certain collection of data misunderstands the intent of the creator of the data, perhaps because of poor documentation. Uncertainty has been studied within several theoretical frameworks, including geostatistics, spatial statistics, and fuzzy-set theory. Each has its benefits, and each is suited to particular settings. Statistical approaches are most appropriate when uncertainty arises because of measurement error, or when it can be characterized using probabilistic models. Fuzzy sets, on the other hand, appear to be more appropriate when dealing with imperfect definitions, or when experts are uncomfortable making precise classifications of geographic phenomena.
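The difference between the probabilistic and fuzzy treatments of uncertainty can be made concrete with a small sketch. The land-cover class, the 60 percent canopy threshold, and the measurement-error model below are illustrative assumptions, not taken from any classification standard.

```python
# Minimal sketch: fuzzy vs. probabilistic description of land-cover uncertainty.
# The 60% tree-canopy threshold and the linear membership ramp are illustrative
# assumptions, not taken from any particular classification scheme.
from statistics import NormalDist

def forest_membership(canopy_fraction: float) -> float:
    """Fuzzy membership in the class 'forest' as a function of canopy cover.
    Below 20% canopy the cell is clearly not forest; above 60% it clearly is;
    in between, membership rises linearly."""
    if canopy_fraction <= 0.2:
        return 0.0
    if canopy_fraction >= 0.6:
        return 1.0
    return (canopy_fraction - 0.2) / 0.4

def forest_probability(measured_canopy: float, measurement_sd: float = 0.05) -> float:
    """Probabilistic alternative: the chance that true canopy cover exceeds 60%,
    assuming normally distributed measurement error."""
    return 1.0 - NormalDist(mu=measured_canopy, sigma=measurement_sd).cdf(0.6)

if __name__ == "__main__":
    for canopy in (0.15, 0.40, 0.58, 0.75):
        print(f"canopy={canopy:.2f}  fuzzy membership={forest_membership(canopy):.2f}"
              f"  P(true canopy > 60%)={forest_probability(canopy):.2f}")
```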
Augmented Reality and Mobile GIS In recent years the development of wireless networks and miniaturized devices has raised the possibility of a fully mobile GIS, no longer confined to the desktop. Laptop computers now have virtually the same computational power and storage capacity as desktop workstations, and wireless networks can provide bandwidths approaching those available via Ethernet and other local-area networks. Laptops are relatively cumbersome, however, with heavy battery consumption, and personal digital assistants (PDAs) offer better mobility with some sacrifice in computational power and storage capacity. Wearable computers are increasingly practical; with wearable computers a central processing unit and storage devices are packed into a cigar-box-sized package to be worn on the belt, and visual output is provided through devices clipped to eyeglasses. In summary, then, we are approaching a point at which it will be possible to use GIS anywhere, at any time. This clearly has the most significant implications when GIS is used in the study area, allowing the user to be in direct sensory contact with the phenomena being studied and analyzed.
Mobile GIS is already in use in many applications. Utility company workers, surveyors, and emergency incident managers already routinely have access to GIS capabilities through suitably configured PDAs, although these devices are more likely to be used for data input than for analysis. Information technologies are routinely used by scientific workers in the field to record observations, and GPS transponders are used to track animals to develop models of habitat use. For those uses of mobile GIS, the user is in contact both with the database and with the reality represented by the database. The term augmented reality (AR) is often used to describe those uses, since sensory reality is being extended through information technology, allowing the user to see things that are for one reason or another beyond the senses. AR can be used to see under the surface of the street when digging to install new pipes, allowing the construction crew to avoid accidentally damaging existing facilities. AR can be used to address the inability of visually impaired people to see their surroundings, and exciting developments have occurred recently in the development of systems to aid such personal wayfinding. AR can be used to superimpose historic views on the field of view, creating interesting opportunities in tourism. The ability of a cell phone user to see the locations of nearby businesses displayed in map form on the cell phone screen is also a form of AR. The long-term implications of AR are profound, since it gives people the ability to sense aspects of their surroundings that are beyond their senses. AR also presents concrete problems for HCI. The displays provided by laptop computers and PDAs are adversely affected by the strong light conditions typical of the outdoors. Displays that clip on eyeglasses offer comparatively high resolution (similar to a PDA), but make it very difficult to implement point-and-click interaction. Displays on cell phones are too small for many GIS applications, which tend to require large display areas for suitable resolution (it is difficult, for example, to annotate street maps with names on a cell phone screen). Reference has already been made to the problems of visual display for drivers. Input devices, such as one-handed keyboards, are also difficult to use. Finally, heads-up display, in which information from the GIS is superimposed directly on the field of view, requires head-mounted displays
that are much more cumbersome and difficult to use than a display clipped to the eyeglasses.
GIS and Data Sharing Traditional paper maps are very efficient repositories of geographic information, and as such represent major investments. The typical national topographic map sheet of the U.S. Geological Survey, for example, covering an area approximately 15 kilometers on a side at a scale of 1:24,000 costs on the order of $100,000 to create, and must be regularly updated with new information if it is to remain current. It takes some fifty thousand such sheets to cover the forty-eight contiguous states, and if the entire series were to be recreated today the total investment would be in excess of $5 billion. Remote sensing satellite programs require investments in the hundreds of millions; a 1993 study by the U.S. Office of Management and Budget found total annual investment in geographic information by federal agencies to exceed $4 billion. Not surprisingly, then, society has traditionally relied on national governments to make these kinds of investments, through national mapping agencies and national space programs. Only national governments have been able to afford the cost of the complex systems needed to create maps. Today, this picture is changing radically. Anyone with $100 can purchase a GPS receiver capable of determining location to better than 5 meters and can use it to make digital maps of local streets or property boundaries. Mapping software is available for the average PC, with the result that today virtually anyone can be a cartographer, making and publishing maps on the Internet. Moreover, national governments find it increasingly difficult to justify the kinds of annual expenditures needed to maintain mapping programs. In 1993 the U.S. National Research Council began to advocate the concept of a National Spatial Data Infrastructure (NSDI), a set of institutional arrangements and standards that would coordinate a new form of decentralized production of geographic information. The NSDI is intended to support a patchwork approach, replacing uniform, government-produced series of maps with coverage at varying scales produced as appropriate by local, state, or federal agencies. The NSDI
provides the format standards to ensure sufficient uniformity in these data sets as well as the metadata standards to allow parts of the patchwork to be described effectively. Since 1993 the United States has made enormous investments in information technology infrastructure for data sharing. In accordance with U.S. law, the vast bulk of geographic information produced by the federal government is in the public domain, free of copyright restrictions, and available for no more than the cost of reproduction. Today the amount of such data available from websites is on the order of petabytes (quadrillions of bytes), and growing rapidly. This free resource has in turn stimulated the development of a wide range of applications and an industry dedicated to adding value to data by making it easier to use, more current, or more accurate. The term geolibrary has been coined to describe the websites that provide geographic information. By definition a geolibrary is a library that can be searched based on geographic location—that is capable of answering queries of the form “What have you got about there?” The National Research Council has explored the concept and status of geolibraries in one of a series of reports pertaining to the NSDI. Geolibraries present interesting issues of user interface design. All allow users to display a world map and to zoom in to an area of interest, refining the search criteria with additional requirements. But the area of interest for many users is defined not by a location on a map or by coordinates, but by a place-name, and many users will not be able to easily locate that place-name on a map. This issue is solved through the use of a gazetteer, an index that converts placenames to coordinates. But most gazetteer entries provide only a point reference, which is problematic for extended features of complex shape, such as rivers or mountain ranges. The U.S. National Geospatial Data Clearinghouse is an example of a geolibrary that allows search and retrieval across a distributed archive, in effect allowing its users to visit and search several libraries simultaneously and with minimal effort. Such capabilities are made possible by metadata, the information that describes the contents of data sets in standard form, allowing vast catalogs of data to be searched quickly and easily. The dominant metadata
standard for geographic information is the Federal Geographic Data Committee's Content Standard for Digital Geospatial Metadata. It provides hundreds of potential fields for the description of the contents, lineage, quality, and production details of a data set.
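The gazetteer and metadata machinery described above can be sketched in a few lines: a place-name is converted to a bounding box, and each data set whose metadata footprint overlaps that box is returned. The gazetteer entries, catalog records, and field names below are invented for illustration and are not drawn from the FGDC content standard.

```python
# Sketch of a geolibrary query: place-name -> bounding box -> matching metadata.
# The gazetteer entries, metadata records, and field names are hypothetical.

GAZETTEER = {
    # place-name: (west, south, east, north) in decimal degrees (illustrative values)
    "Lake Tahoe": (-120.25, 38.75, -119.85, 39.30),
}

CATALOG = [
    {"title": "Land cover, Sierra Nevada", "bbox": (-121.0, 37.5, -118.5, 40.0)},
    {"title": "Street network, Chicago",  "bbox": (-88.0, 41.6, -87.5, 42.1)},
]

def boxes_overlap(a, b):
    """True if two (west, south, east, north) boxes intersect."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def search(place_name):
    """Return catalog entries whose footprint overlaps the named place."""
    bbox = GAZETTEER[place_name]          # the gazetteer converts a name to coordinates
    return [rec["title"] for rec in CATALOG if boxes_overlap(bbox, rec["bbox"])]

print(search("Lake Tahoe"))   # ['Land cover, Sierra Nevada']
```

Note that this sketch gives each gazetteer entry an extent; as observed above, many real gazetteer entries record only a point, which is exactly the limitation encountered with rivers and mountain ranges.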
Web-Based GIS Early efforts to build geolibraries, beginning in the mid 1990s, focused on the need to distribute data sets, by analogy to the traditional library whose responsibility ends when the book is placed in the hands of the reader. Under this model each user was required to maintain a full GIS capability, since all transformation and analysis occurred at the user’s end. Since the advent of the Web and the widespread popularity of Web browsers, more and more services have been developed by servers, with a consequent reduction in the complexity of the software the user must have to use the data sets. Today, a user of a standard Web browser can access services for many basic GIS operations. The task of geocoding, for example, which consists of converting street mailing addresses to coordinates, is now available from a number of sites, including MapQuest. Similarly it is possible to access remote services for converting place-names to coordinates, and it is expected that more and more GIS services will be available in this form by 2010. To be automatic and transparent, such Web services require adherence to standards, and many such standards have been developed in recent years by such organizations as the Open GIS Consortium. They allow a user's client GIS to request data from a geolibrary or a geocoding service from a provider, taking care of such issues as coordinate system transformation and clipping of data to match a user's study area. A vast range of mapping, geolibrary, and other services are now available and fully interoperable with popular GIS software. For example, a user of ArcGIS, a family of GIS software products created by ESRI (Environmental Systems Research Institute), might determine that a needed data set is not available on the desktop computer's hard drive, and might search ESRI's Geography Network website for suitable data. The search would be initiated over a distributed archive of registered
contributors and would return suitable hits. The user would then be able to use a chosen data set, but rather than copying it to the user’s client GIS, the data set would be accessed transparently over the Internet. Most GIS vendors now offer software to support the development of GIS services and Web-based mapping. Some services are available free, and others on a subscription basis. But it remains to be seen whether the provision of services is capable of providing sufficient cash flow to a company, and whether Web-based GIS is a viable long-term commercial proposition.
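The geocoding operation mentioned above can be illustrated with the simplest technique used in practice, interpolation along a street segment with a known house-number range. The segment coordinates and address range below are invented, and no real service such as MapQuest is being modeled.

```python
# Sketch of geocoding by address-range interpolation: a street segment with a
# known house-number range is assumed, and the address is placed proportionally
# along it. Real services are far more elaborate; the segment below is invented.

SEGMENTS = [
    # (street, low_number, high_number, (lon_start, lat_start), (lon_end, lat_end))
    ("Main St", 100, 198, (-73.9900, 40.7300), (-73.9850, 40.7310)),
]

def geocode(number: int, street: str):
    """Return an interpolated (lon, lat) for a street address, or None."""
    for name, low, high, start, end in SEGMENTS:
        if name == street and low <= number <= high:
            t = (number - low) / (high - low)        # position along the segment
            return (start[0] + t * (end[0] - start[0]),
                    start[1] + t * (end[1] - start[1]))
    return None

print(geocode(149, "Main St"))   # roughly halfway along the segment
```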
The IT Mainstream The history of GIS has been one of specialized application of information technology. The development of the first GIS in the 1960s required many original developments and inventions, including the first map scanner, the first topological data structure, and the first algorithm for map overlay. Today, however, the majority of the software in a GIS is industry standard, implementing mainstream solutions for operating systems, application development, object-oriented database design, and graphic interfaces. Undoubtedly GIS has become closer over the years to the IT mainstream, and today it is common for records in large database solutions to be tagged with geographic location. For example, the locations of credit card transactions are routinely tagged with location in space and time to support mining for evidence of misuse through the detection of anomalous behavior. Mainstream solutions are attractive to software developers because they allow massive economies of scale through the adoption of standard technologies that can serve many disparate applications. On the other hand it is clear that GIS applications will always be to some degree distinct from the mainstream. Cell phone mapping applications, for example, push the limits of available screen area and resolution. GIS database applications raise difficulties when the phenomena to be represented are fundamentally continuous rather than discrete: for example, roads and rivers are continuous features, not easily broken into the discrete chunks of database records. Topography deals with continuous surfaces not easily broken into squares or triangles for discrete representation. In all of these cases the need
for discrete representation causes downstream issues for applications (for example, representing continuously curved streets as straight lines with sharp bends leads to difficulties in simulating driver behavior). Moreover, GIS is about the representation of an infinitely complex real world, and its effective use will always require an understanding of the nature of that world, and the consequences of the inevitable generalization, approximation, and sampling that occur in digital representation.
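One consequence of discrete representation can be quantified with a small sketch: when a continuously curving street (modeled here as a circular arc) is stored as a handful of straight segments, the worst-case positional error depends on how many vertices are kept. The radius, arc angle, and vertex counts below are arbitrary illustrations.

```python
# Sketch: approximating a continuously curving street (here, a circular arc)
# with a small number of straight segments, and measuring the worst-case
# deviation. The arc radius and segment counts are arbitrary.
import math

def max_deviation(radius: float, n_segments: int, arc_angle: float = math.pi / 2) -> float:
    """Largest distance between a circular arc and its n-segment chord approximation.
    For a chord spanning angle a, the mid-chord sag is r * (1 - cos(a / 2))."""
    segment_angle = arc_angle / n_segments
    return radius * (1.0 - math.cos(segment_angle / 2.0))

# A 90-degree bend of 50 m radius stored with 2, 4, and 8 segments:
for n in (2, 4, 8):
    print(f"{n} segments: worst error = {max_deviation(50.0, n):.2f} m")
```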
Michael F. Goodchild

See also Navigation

FURTHER READING

Chrisman, N. R. (1997). Exploring geographic information systems. New York: Wiley.
Clarke, K. C. (1999). Getting started with geographic information systems (2nd ed.). Upper Saddle River, NJ: Prentice-Hall.
DeMers, M. N. (2000). Fundamentals of geographic information systems (2nd ed.). New York: Wiley.
Duckham, M., Goodchild, M. F., & Worboys, M. F. (2003). Fundamentals of geographic information science. New York: Taylor and Francis.
Hearnshaw, H. M., & Unwin, D. J. (Eds.). (1994). Visualization in geographical information systems. New York: Wiley.
Kennedy, M. (1996). The Global Positioning System and GIS: An introduction. Chelsea, MI: Ann Arbor Press.
Leick, A. (1995). GPS satellite surveying. New York: Wiley.
Longley, P. A., Goodchild, M. F., Maguire, D. J., & Rhind, D. W. (Eds.). (1999). Geographical information systems: Principles, techniques, management and applications. New York: Wiley.
Longley, P. A., Goodchild, M. F., Maguire, D. J., & Rhind, D. W. (2001). Geographic information systems and science. New York: Wiley.
MacEachren, A. M. (1995). How maps work: Representation, visualization, and design. New York: Guilford Press.
Medyckyj-Scott, D., & Hearnshaw, H. M. (Eds.). (1993). Human factors in geographical information systems. London: Belhaven Press.
National Research Council. (1993). Toward a coordinated spatial data infrastructure for the nation. Washington, DC: National Academy Press.
National Research Council. (1999). Distributed geolibraries: Spatial information resources. Washington, DC: National Academy Press.
O'Sullivan, D., & Unwin, D. J. (2002). Geographic information analysis. Hoboken, NJ: Wiley.
Peng, Z. R., & Tsou, M. H. (2003). Internet GIS: Distributed geographic information services for the Internet and wireless networks. Hoboken, NJ: Wiley.
Peuquet, D. J. (2002). Representations of space and time. New York: Guilford.
Raper, J. (2000). Multidimensional geographic information science. New York: Taylor and Francis.
Snyder, J. P. (1997). Flattening the earth: Two thousand years of map projections. Chicago: University of Chicago Press.
Worboys, M. F. (1995). GIS: A computing perspective. New York: Taylor and Francis.
Zhang, J. X., & Goodchild, M. F. (2002). Uncertainty in geographical information. New York: Taylor and Francis.

GESTURE RECOGNITION

The use of gesture, particularly hand gesture, as a means of communicating with computers and machines is attractive for several reasons. First, many researchers observe that humans possess great facility in performing gestures and appear to do so spontaneously. Second, the hands and arms always "come attached" to the human end of the human–computer interaction exchange. There is no need to hunt for the missing remote control or to equip the human with a communications device if the computer could observe the user and react accordingly. Third, as the space in which we interact extends from one screen to many, from small screen to large, and from the confines of the two-dimensional panel surface into the three-dimensional space beyond it, gestures present the promise of natural interaction that is able to match both the added expanse and dimensionality.
Organizing Concepts One may think of the human end of the interactive chain as being able to produce three key interactive signals: things that can be heard, seen, and felt (ignoring taste and smell as currently far-fetched for HCI). In this sense the computer’s input devices can be thought of as the sensory organs detecting the signals sent by its human partner. Under this formulation speech interfaces require auditory computer input, and the plethora of input devices by which the user moves a mouse or joystick or depresses keys would constitute the computer’s tactile sense. The sensory receptor for gesture is vision. One might relax this vision requirement to allow the use of various glove and magnetic, acoustic, or marker-based tracking technologies. For this discussion we shall include
these approaches with the caveat that they are intended as waypoints toward the goal of vision-based gesture understanding. To move beyond promise to practice, one needs to understand what the space of gestures is and what it can afford in interaction. We organize our discussion around a purpose taxonomy. Interactive “gesture” systems may be divided into three classes: (1) manipulative, (2) semaphoric, and (3) conversational. The human hands and arms are the ultimate multipurpose tools. We use them to modify objects around us (moving, shaping, hitting, etc.) to signal one another and in the general service of language. While the psychology and psycholinguistics of gesture is a very involved field our tripartite segmentation adequately covers the use of gesture in HCI. These distinctions are not perfunctory—they have great significance for the vision-based processing strategy employed as well as the design of the interactive system that utilizes the gesture. Manipulative Gesture Systems Manipulative gesture systems follow the tradition of Richard Bolt’s “Put-That-There” system, which permits direct manipulation. The user interacted with a large wall-size display moving objects around the screen, with the movements tracked by an electromagnetic device. As will be seen later, this work may also be classified as conversational since cotemporal speech is utilized for object manipulation. We extend the concept to cover all systems of direct control. The essential characteristic of manipulative systems is the tight feedback between the gesture and the entity being controlled. Since Bolt’s seminal work there has been a plethora of systems that implement finger tracking/pointing, a variety of “finger flying”–style navigation in virtual spaces or direct-manipulation interfaces, such as control of appliances, computer games, and robot control. Other manipulative applications include interaction with wind tunnel simulations, voice synthesizers, and an optical flow–based system that detects one of six gross full-body gestures (jumping, waving, clapping, drumming, flapping, marching) for controlling a musical instrument. Some of these approaches use special gloves or trackers, while others employ only camera-based visual tracking. Such
manipulative gesture systems typically use the shape of the hand to determine the mode of action (e.g., to navigate, pick something up, point, etc.), while the hand motion indicates the path or extent of the controlled motion. When used in a manipulative fashion, gesture interfaces have a lot in common with other direct manipulation interfaces, the only distinction being the "device" that is used. As such, many of the same design principles apply in building manipulative gesture interfaces. These include ensuring rapid enough visual feedback for the control, size of, and distance to targets of manipulation and considerations for fatigue and repetitive stress disorder (as when one has to maintain hand positions, poses, and attitudes by maintaining muscle tension). Gestures used in communication/conversation differ from manipulative gestures in several significant ways. First, because the intent of the latter is manipulation, there is no guarantee that the salient features of the hands are visible. Second, the dynamics of hand movement in manipulative gestures differ significantly from those in conversational gestures. Third, manipulative gestures may typically be aided by visual, tactile, or force feedback from the object (virtual or real) being manipulated, while conversational gestures are typically performed without such constraints. Gesture and manipulation are clearly different entities sharing between them possibly the feature that both may utilize the same body parts. Semaphoric Gesture Systems Semaphores are signaling systems in which the body's poses and movements are precisely defined to designate specific symbols within some alphabet. Traditionally, semaphores may involve the use of the human body and limbs, light flashes, flags, and the like. Although semaphore use inhabits a miniscule portion of the space of human gestures, it has attracted a large portion of vision-based gesture research and systems. Semaphore gesture systems predefine some universe of "whole" gestures gᵢ ∈ G. Taking a categorial approach, "gesture recognition" boils down to determining if some presentation pⱼ is a manifestation of some gᵢ. Such semaphores may be either static gesture poses or predefined stylized movements. Note that such systems are patently not sign language
recognition systems in that only isolated symbols are entertained. Sign languages include syntax, grammar, and all the dynamics of spoken language systems. Some attempts have been made to recognize isolated sign language symbols (e.g., finger spelling), but the distance between this and sign language understanding is as far as that between optical character recognition and natural language understanding. Semaphoric approaches may be termed communicative in that gestures serve as a universe of symbols to be communicated to the machine. A pragmatic distinction between semaphoric gestures and manipulative ones is that the former do not require the feedback control (e.g., hand–eye, force feedback, or haptic) necessitated for manipulation. Semaphoric gestures may be further categorized as being static or dynamic. Static semaphore gesture systems interpret the pose of a static hand to communicate the intended symbol. Examples of such systems include color-based recognition of the stretched-open palm where flexing specific fingers indicates menu selection, the application of orientation histograms (histograms of directional edges) for hand shape recognition, graph-labeling approaches where labeled edge segments are matched against a predefined graph of hand poses that simulate finger spelling, a "flexible-modeling" system in which the feature average of a set of hand poses is computed and each individual hand pose is recognized as a deviation from this mean, the application of global features of the extracted hand (using color processing) such as moments and aspect ratio to determine a set of hand shapes, model-based recognition using three-dimensional model prediction, and neural net approaches. In dynamic semaphore gesture systems, some or all of the symbols represented in the semaphore library involve predefined motion of the hands or arms. Such systems typically require that gestures be performed from a predefined viewpoint to determine which semaphore is being performed. Approaches include finite state machines for recognition of a set of editing gestures for an "augmented whiteboard," trajectory-based recognition of gestures for "spatial structuring," recognition of gestures as a sequence of state measurements, recognition of oscillatory gestures for robot control, and
"space-time" gestures that treat time as a physical third dimension. One of the most common approaches for the recognition of dynamic semaphoric gestures is based on the Hidden Markov Model (HMM). First applied by Yamato, Ohya, and Ishii in 1992 to the recognition of tennis strokes, it has been applied in a myriad of semaphoric gesture recognition systems. The power of the HMM lies in its statistical rigor and ability to learn semaphore vocabularies from examples. An HMM may be applied in any situation in which one has a stream of input observations formulated as a sequence of feature vectors and a finite set of known classifications for the observed sequences. HMM models comprise state sequences. The transitions between states are probabilistically determined by the observation sequence. HMMs are "hidden" in that one does not know which state the system is in at any time. Recognition is achieved by determining the likelihood that any particular HMM model may account for the sequence of input observations. Typically, HMM models for different gestures within a semaphoric library are rank ordered by likelihood, and the one with the greatest likelihood is selected. In a typical HMM application, Rigoll, Kosmala, and Eickeler (1997) were able to train a system to achieve 92.9 percent accuracy in recognizing twenty-four dynamic semaphores using manually segmented isolated semaphores. This study illustrates the weakness of such approaches, in that some form of presegmentation or other constraint is needed. Semaphores represent a miniscule portion of the use of the hands in natural human communication. A major reason for their dominance in the literature is that they are the low-hanging fruit. Conversational Gestures Conversational gestures are those gestures performed naturally in the course of human multimodal communication. This has been variously termed gesticulation or coverbal gestures. Such gestures are part of the language and proceed somewhat unwittingly (humans are aware of their gestures in that they are available to subjective description after they are performed, but they are often not consciously
constructed) from the mental processes of language production itself. The forms of these gestures are determined by personal style, culture, social makeup of the interlocutors, discourse context, and other factors. There is a large body of literature in psychology, psycholinguistics, neurosciences, linguistics, semiotics, and anthropology in gesture studies that lies beyond the scope of this article. We will list just two important aspects of gestures here. First, hand and arm gestures are made up of up to five phases: preparation, prestroke hold, stroke, poststroke hold, and retraction. Of these, only the stroke that bears the key semiotic content is obligatory. Depending on timing there may or may not be the pre- and poststroke holds. Preparations and retractions may be elided depending on the starting and termination points of strokes (a preparation may merge with the retraction of the previous gesture "phrase"). Second, there is a temporal synchrony between gesture and speech such that the gestural stroke and the "peak of the tonal phrase" are synchronized. There is a class of gestures that sits between pure manipulation and natural gesticulation. This class of gestures, broadly termed deictics (or pointing gestures), has some of the flavor of manipulation in its capacity of immediate spatial reference. Deictics also facilitate the "concretization" of abstract or distant entities in discourse and so are the subject of much study in psychology and linguistics. Following Bolt, work done in the area of integrating direct manipulation with natural language and speech has shown some promise in such combination. Earlier work involved the combination of the use of a pointing device and typed natural language to resolve anaphoric references. By constraining the space of possible referents by menu enumeration, the deictic component of direct manipulation was used to augment the natural language interpretation. Such systems have, for example, been employed for querying geographic databases. A natural extension of this concept is the combination of speech and natural language processing with pen-based gestures. The effectiveness of such interfaces is that pen-based gestures retain some of the temporal coherence with speech as with natural gesticulation, and this cotemporality was employed to support mutual disambiguation of the multimodal channels and the issuing of spatial commands to a map interface. Others have developed systems that resolve speech with deixes in regular video data. In Kendon's (1980) parlance, a class of conventionalized gestures that may or may not accompany speech are termed emblems. The North American "OK" hand gesture is a typical emblem. While the temporal speech–emblem relationship is different from that of free-flowing gesticulation, emblematic gestures in conjunction with speech have been proposed for such applications as map interaction. Another approach to coverbal gesticulation is to parse hand movements into gesture phases. Wilson, Bobick, and Cassell (1996), for example, developed a triphasic gesture segmenter that expects all gestures to be a rest-transition–stroke-transition-rest sequence (ignoring pre- and poststroke holds). They required that the hand return to rest after every gesture. In another work Kettebekov, Yeasin, and Sharma (2003) fused speech prosody and gesticular motion of a television weather reporter (in front of a green screen) to segment the phases and recognize two classes of gestures (deictics and contours). All gestures are constrained to have separate preparations and retractions. They employed an HMM formalization. Sowa and Wachsmuth (2000) describe a study based on a system for using coverbal iconic gestures for describing objects in the performance of an assembly task in a virtual environment. In this work subjects wearing electromagnetically tracked gloves describe contents of a set of five virtual parts (e.g., screws and bars) that are presented to them in a wall-size display. The authors found that "such gestures convey geometric attributes by abstraction from the complete shape. Spatial extensions in different dimensions and roundness constitute the dominant 'basic' attributes in [their] corpus … geometrical attributes can be expressed in several ways using combinations of movement trajectories, hand distances, hand apertures, palm orientations, hand shapes, and index finger direction" (www.techfak.uni-bielefeld.de/~tsowa/download/Porto.pdf). In essence, even with the limited scope of their experiment in which the imagery of the subjects was guided by a wall-size visual display, a panoply of
iconics relating to some (hard-to-predict) attributes of each of the five target objects were produced by the subjects. This author and colleagues approach conversational gestures from the perspective of the involvement of mental imagery in language production. The idea is that if gesticulation is the embodiment of the mental imagery that, in turn, reflects the “pulses” of language production, then one might be able to access discourse at the semantic level by gesturespeech analysis. They approach this using the psycholinguistic device of the “catchment” by which related discourse pieces are linked by recurrent gesture features (e.g., index to a physical space and a specific hand shape). The question becomes what computable features have the semantic range to carry the imagistic load. They demonstrate discourse segmentation by analyzing hand use, kinds of motion symmetries of two-handed gestures, gestural oscillations, and space-use distribution.
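Returning briefly to the semaphoric systems discussed earlier: at recognition time, an HMM-based recognizer amounts to scoring the observed feature sequence under each trained gesture model and reporting the most likely one. The two toy models, their probabilities, and the quantized "motion direction" observations below are invented for illustration; real systems learn these parameters from training data.

```python
# Sketch of semaphore classification with discrete-observation HMMs: each
# candidate gesture has its own model, the forward algorithm scores the
# observed feature sequence under each, and the highest-likelihood model wins.
# The toy models and the quantized observations (0=left, 1=right, 2=up, 3=down)
# are invented for illustration.

def forward_likelihood(obs, start_p, trans_p, emit_p):
    """P(observation sequence | model) by the standard forward algorithm."""
    n_states = len(start_p)
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in range(n_states)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * trans_p[s][s2] for s in range(n_states)) * emit_p[s2][o]
                 for s2 in range(n_states)]
    return sum(alpha)

MODELS = {
    # Each model: initial-state, state-transition, and emission probabilities.
    "wave":   ([1.0, 0.0],
               [[0.6, 0.4], [0.4, 0.6]],
               [[0.8, 0.1, 0.05, 0.05], [0.1, 0.8, 0.05, 0.05]]),  # left/right states
    "circle": ([0.25] * 4,
               [[0.1, 0.8, 0.05, 0.05]] * 4,
               [[0.7, 0.1, 0.1, 0.1]] * 4),
}

observed = [0, 1, 0, 1, 0, 1]          # alternating left-right motion
scores = {name: forward_likelihood(observed, *params) for name, params in MODELS.items()}
print(max(scores, key=scores.get))     # expected: 'wave'
```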
Possibilities of Gesture Use

Gesture use in human-computer interaction is a tantalizing proposition because of the human capacity for gesture and because such interfaces permit direct access to large and three-dimensional spaces. The user does not even need to manipulate an input device other than the appendages with which they come. We have laid out a "purpose taxonomy" by which we can group gesture interaction systems and by which design may be better understood.

Francis Quek

FURTHER READING

Bolt, R. A. (1980). Put-that-there. Computer Graphics, 14, 262–270.
Kendon, A. (1980). Gesticulation and speech: Two aspects of the process of utterance. Relationship Between Verbal and Nonverbal Communication, 207–227.
Kettebekov, A., Yeasin, M., & Sharma, R. (2003). Improving continuous gesture recognition with spoken prosody. Proceedings of the IEEE Conference on CVPR, 1, 565–570.
McNeill, D. (1992). Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press.
McNeill, D., Quek, F., et al. (2001). Catchments, prosody and discourse. Gesture, 1, 9–33.
Quek, F. (in press). The Catchment Feature Model: A device for multimodal fusion and a bridge between signal and sense. EURASIP Journal of Applied Signal Processing.
Quek, F., McNeill, D., et al. (2002). Multimodal human discourse: Gesture and speech. ACM Transactions on Computer-Human Interaction, 9(3), 171–193.
Rigoll, G., Kosmala, A., & Eickeler, S. (1997). High performance real-time gesture recognition using hidden Markov Models. In Proceedings of the International Gesture Workshop. Bielefeld, Germany, September 1997.
Sowa, T., & Wachsmuth, I. (2000). Coverbal iconic gestures for object descriptions in virtual environments: An empirical study. Post-Proceedings of the Conference of Gestures: Meaning and Use. Porto, Portugal, April 2000.
Wilson, A. D., Bobick, A. F., & Cassell, J. (1996). Recovering temporal structure of natural gesture. Proceedings of the International Conference on Face and Gesture Recognition. Killington, VT.
Yamato, J., Ohya, J., & Ishii, K. (1992). Recognizing human action in time-sequential images using hidden Markov Model. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 379–385.

GOVERNMENT AND HCI
See Digital Government; Law Enforcement; Online Voting; Political Science and HCI
GRAPHICAL USER INTERFACE

A graphical user interface, or GUI, is a style of human-computer interaction that presents computer objects and actions as images on the screen. Graphical user interfaces are mostly operated with a pointing device such as a mouse. It could be argued that without graphical user interfaces personal computers would not have become as successful and popular as they are today. The history of GUIs is usually traced back to the Xerox Star user interface introduced in 1981, but the underlying technology goes back further. The first graphics program was Sketchpad, developed by Ivan Sutherland in 1963. The first mouse-driven interface was developed by Douglas Engelbart in 1968. Another key innovator was Alan Kay, who developed the concept of the
Dynabook—an icon-driven notebook computer— in the 1970s. Kay also developed the Smalltalk-80 language and system (made available for public use in 1979–1980), which was the first wholly graphical user interface environment. At the time that Smalltalk-80 was developed, the main form of user interface was the command line, which required skilled users to type in complex command sequences. The users needed to know the names of the commands and the syntax of the commands for whatever computer operating system they were using. Operating systems such as Microsoft Windows and Linux still make a command line interface available, but most people seldom use it.
The Macintosh Finder

The early GUIs ran on specialist hardware that was not available to the general user. It took developments in cheap microelectronics in the early 1980s to allow the production of affordable personal computers. The first of these was the Apple Lisa. Prior to producing the Lisa, the newly formed Apple Computer had produced the Apple II, which had a command-line-driven interface. The Lisa adopted some of the interface concepts from the Xerox Star interface to become the first personal computer with a graphical interface. However, the GUI's memory requirements, demands on the central processing unit, and required hard-disk speed kept the Apple Lisa from becoming a commercial success. Apple followed the Lisa in 1984 with the more successful Apple Macintosh. The first Macintosh was an all-in-one unit with an integrated 9-inch (22.5-centimeter), black-and-white screen. The Macintosh Finder interface had all the GUI components we are familiar with today:
■ A mouse for selecting graphical objects. The Apple mouse had one button for all actions.
■ A toolbar of menus for each application, with common menus such as File, Edit, and Help appearing in each application.
■ Menus—lists of items that represent commands to be carried out on computer objects.
■ Icons—pictorial representations of computer objects, such as files and disks.
■ Windows—collections of icons that represent the contents of a folder, directory, or an application.
■ A trash or wastebasket icon for the removal of objects.
■ Scrollbars, which make it possible to view large numbers of icons in a window by moving horizontally or vertically through the window.
■ Graphical display of text in different fonts and styles, text which the user can edit directly.
In addition to the graphical elements on the screen, the Macintosh Finder interface introduced the following supporting concepts:
■ The Edit menu, which supported common cut, copy, and paste functions for all applications. This was supported in turn by the scrapbook, which made possible the exchange of information between applications via the Copy buffer.
■ Dragging as a means of copying or moving items from one folder to another, or from a folder to the wastebasket.
■ The ability to undo operations via the Undo menu item. This made it possible to correct mistakes. Similarly, items in the trash could be retrieved, as they were not immediately deleted.
■ Multiple selection of items either by dragging an area around them or using keyboard modifiers with mouse selection.
■ Resizable and moveable windows. The screen was treated as a virtual desktop and the windows as virtual sheets of paper that could be placed on top of one another or moved around the virtual desktop.
Taken together, the above features provided an interface that supported users in carrying out their tasks quite effectively. Unlike with a command line interface, users did not have to learn and remember command names or command syntax. Nor did users have to learn and remember complex hierarchies in order to find files and folders. Just as important, software developers had access to the components of the Finder user interface so that they could build graphical user interfaces for the Macintosh. Today there are many other GUI toolkits, including the Windows API and Java Swing, which help programmers build interfaces. A GUI toolkit gives the programmer access to buttons,
check boxes, windows, text boxes, scrollbars, menus, and icons for building the graphical interface for an application. Apple also provided a set of user interface guidelines so that developers could produce interfaces that were consistent with the Finder and with other Macintosh applications.
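The kind of building blocks such a toolkit provides can be sketched briefly. The example below uses Python's standard tkinter toolkit purely for compactness; it is not one of the toolkits named above, and the widget names and layout are illustrative rather than a recipe for any particular platform.

```python
# Sketch of the building blocks a GUI toolkit provides: a window with a menu
# bar (Edit menu), a scrollable list box, and a button wired to simple callbacks.
import tkinter as tk

root = tk.Tk()
root.title("Toolkit sketch")

# Menu bar with an Edit menu containing Cut, Copy, and Paste entries.
menubar = tk.Menu(root)
edit_menu = tk.Menu(menubar, tearoff=False)
for label in ("Cut", "Copy", "Paste"):
    edit_menu.add_command(label=label, command=lambda l=label: print(l))
menubar.add_cascade(label="Edit", menu=edit_menu)
root.config(menu=menubar)

# A list box with a vertical scrollbar, populated with placeholder items.
frame = tk.Frame(root)
scrollbar = tk.Scrollbar(frame)
listbox = tk.Listbox(frame, yscrollcommand=scrollbar.set)
scrollbar.config(command=listbox.yview)
for i in range(50):
    listbox.insert(tk.END, f"Document {i}")
listbox.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)
scrollbar.pack(side=tk.RIGHT, fill=tk.Y)
frame.pack(fill=tk.BOTH, expand=True)

# A button whose callback reports the current selection.
tk.Button(root, text="Open",
          command=lambda: print(listbox.curselection())).pack()

root.mainloop()
```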
Microsoft Windows

Microsoft introduced its first version of a desktop graphical user interface in 1985. It was not as full-featured as the Macintosh Finder interface, but it had the advantage of running on the cheaper hardware of IBM-compatible personal computers and was thus available to more people. There has been an ongoing debate concerning the relative merits of the Apple and Microsoft Desktop user interfaces. Usability professionals generally believe that Microsoft caught up with the quality of the Macintosh interface with the release of Windows 95. Microsoft Windows uses a two-button mouse with the left button used for selection and the right button used for pop-up menus, depending on the context.

The X Windows System

At the same time as the Macintosh appeared, researchers at MIT were developing a graphical user interface for the UNIX operating system, called the X Windows system. This was another important development in the evolution of GUIs, and the descendants of the X Windows system can be seen today on Linux. X Windows had some of the features of the Macintosh but also exploited the power of UNIX. UNIX is a multitasking operating system, which means that it can run several applications at the same time. An X Windows user could thus run several applications at the same time, each in their own window. This feature was later included in Apple and Microsoft user interfaces; it enables users to switch between tasks without having to stop and restart their work. X Windows uses a three-button mouse, with the user customizing the action of each button.

UNIX is simple. It just takes a genius to understand its simplicity.
Dennis Ritchie

The Design of Graphical User Interfaces

All the applications that we use today—word processing, spreadsheets, desktop publishing, e-mail tools, Web browsers, and so forth—are built on the same common "WIMP" elements: windows, icons, menus, and pointers. However, although GUIs provide the potential for improved interaction, it is possible to produce poor ones. The keys to successful graphical user interface design are attention to the sizing and layout of graphical items, particularly as these relate to Fitts's Law, the appropriate use and combination of colors, the correct sequencing of actions and mouse events, and the involvement of end users in the design process. Fitts's Law states that the speed and accuracy with which a user can select an on-screen object depends on the size of the object and how far the user has to move the pointer. The implications for the usability of graphical interfaces are that graphical features should be as large as practically possible given display constraints and that frequently used features should be grouped near one another to minimize the distance that the user has to move the cursor to activate them. Appropriate use and combination of colors is important for graphical user interfaces. Overuse of bright colors can be distracting; similarly, using the wrong combination of colors can result in color combinations that lack contrast and make text hard to read. Graphical user interfaces give users freedom to choose the order in which they carry out tasks. Certain interactions require a mode or state in the interface that alerts the user when there is a problem or confirms an action such as a permanent deletion. However, the incorrect use of modes can lock users in an unwanted state when they should be permitted to make their own choices. Such mode errors can be avoided by constructing dialogue diagrams of the state of the GUI.
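Fitts's Law, mentioned above, is commonly given (in one of several forms) as MT = a + b log2(2D/W), where D is the distance to the target, W is its width, and a and b are constants fitted to observed user performance. The coefficient values in the sketch below are placeholders, not measurements.

```python
# Illustrative Fitts's Law calculation: predicted time to acquire a target of
# width W at distance D, using the common form MT = a + b * log2(2D / W).
# The constants a and b are placeholders; in practice they are fitted to data
# collected from real users and devices.
import math

A_MS = 100.0   # intercept (ms), illustrative
B_MS = 150.0   # slope (ms per bit), illustrative

def movement_time_ms(distance_px: float, width_px: float) -> float:
    index_of_difficulty = math.log2(2.0 * distance_px / width_px)   # in bits
    return A_MS + B_MS * index_of_difficulty

# Doubling a button's width, or halving the distance to it, removes one bit of
# difficulty (here, 150 ms of predicted movement time):
print(movement_time_ms(400, 20))   # small, distant target
print(movement_time_ms(400, 40))   # same distance, twice the width
```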
The same graphical interfaces that help the majority of users also help the software developers who build them. The new techniques of visual programming, using such tools as Visual Basic or Macromedia Flash, enable developers to develop the graphical user interface rapidly and to add actions to the graphical elements as required. This capability supports rapid prototyping of the user interface and enables end users to get an early look at the developing application. At this point end users can experiment and discuss the interface with the developer so that it more closely matches their needs.
Accessible Graphical Interfaces Popular and useful though they are, GUIs have the potential to cause problems for users with visual and motor impairment. Such impairment may range from mild visual problems to complete blindness and includes color blindness and inability to use standard pointing devices. To avoid problems for such users, graphical user interfaces need to be augmented with accessibility features. For example, screen readers are a common computer tool for the visually impaired. Graphical user interfaces need to be built with appropriate text labels and audio cues to support such users. Keyboard shortcuts are also necessary to support those who have difficulty using the mouse and similar pointing devices. For people with color blindness, certain color combinations should be avoided, as people with the condition cannot distinguish the colors if they are next to each other. The most common form of color blindness is red-green (inability to distinguish red from green), followed by blue-green (inability to distinguish blue from green). Along with advances in microelectronics and telecommunications, graphical user interfaces are one of the cornerstones of the current digital revolution. Graphical user interfaces remove the barrier of complexity from computer use: People can work through the graphical interface on the task at hand rather than on the task of operating a computer. Graphical user interfaces have evolved since the 1960s from specialized workstations to everyone's desktop; now they are spreading to personal devices and everyday household appliances. If they
are carefully designed, GUIs can make all manner of devices easier to use. With careful attention to the needs of the intended users, GUIs can greatly assist us all in our lives, whether at work or play.

David England

See also Alto; Mouse

FURTHER READING

Dix, A. (1997). Human-computer interaction. New York: Prentice Hall.
Nielsen, J. (1994). Usability engineering. Boston: Academic Press.
Preece, J., Rogers, Y., & Sharp, H. (2002). Interaction design. New York: John Wiley.
Shneiderman, B. (2003). Designing the user interface (4th ed.). Reading, MA: Addison-Wesley.
GRID COMPUTING

As the Internet ushered humanity into the Information Age, communication and access to computing resources and data have become an integral part of life in the developed world. Scientists are attempting to harness the considerable resources made available through the Internet to offer computing, communication and data solutions for those who require massive amounts of computer processing power. One such solution is grid computing, also known as Internet computing, adaptive computing, meta-computing, global computing, and even planetary computing, referring to the much-acclaimed SETI@home Project, which depends on Internet-connected computers to Search for Extra-Terrestrial Intelligence (SETI).
History and Definition Ian Foster, a computer scientist at the University of Chicago, and Carl Kesselman, of the Information Sciences Institute at the University of Southern California, earned world recognition by proposing a new paradigm in distributed computing in the mid 1990s, which they referred to as “grid computing.”
Grid computing made it possible to use the vast array of new networks, including the Internet, to bring globally dispersed computing resources together. Grid computing provides computing power in much the same way that a power grid creates a single, reliable, pervasive source of energy by utilizing electricity generated from many suppliers, dispersed through many geographical regions. Fran Berman, Geoffrey Fox, and Tony Hey, editors of Grid Computing—Making the Global Infrastructure a Reality, defined grid computing as follows: [It] “integrates networking, communication, computation and information to provide a virtual platform for computation and data management, in the same way that the Internet integrates resources to form a virtual platform for information” (9). In essence, grid computing refers to a set of common standards, protocols, mechanisms, and tools that could be implemented to harness idle computing resources, data resources, specialized scientific instruments, and applications in order to create a coordinated and collaborative virtual supercomputer that would offer almost infinite processing power and storage space.
Grid Computing Applications

Although grid computing traces its inception to wide-area distributed supercomputing, today it is used to support the needs of myriad disciplines that need dispersed data resources and high computational power, such as high-energy physics, biophysics, molecular biology, risk analysis and modeling, financial modeling, scenario development, natural disaster modeling, geophysics and astrophysics, weather forecasting, computer simulation, and first-response coordination of emergency services. One major U.S. grid computing project is the National Science Foundation's $53 million Tera-Grid, which connects computing resources and provides one of the largest grids available. The Tera-Grid performs calculations at a speed of 13.6 teraflops (13.6 trillion floating-point operations per second), offers over 0.6 petabytes (millions of gigabytes) of disk space, and has a dedicated network interconnecting all the nodes at 40 gigabits per second. The high-energy physics lab of the European Organization for Nuclear Research (CERN) created a data grid to disseminate over 10 petabytes of the data they expect to generate from the new particle accelerator due to begin operations in 2006. SETI@home uses the principles of grid computing to harness the idle computing resources donated by almost 5 million personal computer users throughout the world. Commercial enterprises find grid computing a viable option for addressing their fluctuating computing needs. Organizations have found that subscribing to a grid network and sharing resources is more economical than investing in new resources. Many hardware vendors, including IBM, Hewlett-Packard, and Dell, offer solutions to commercial clientele that include such services as computing-on-demand, storage-on-demand, and networking-on-demand. These services, coupled with specialized applications and value-added services, make grid solutions very desirable to the commercial sector. Grid computing also has the capacity to offer its commercial customers end-to-end systems integration, management and automation, end-to-end security solutions, disaster recovery, higher performance levels, and reduced upfront investments.

Fundamentals of Grid Computing
A grid computing network consists of three core functionality areas that create a seamless computing environment for the user: grid fabric, grid middleware, and grid portals and applications. The grid fabric comprises all hardware components connected to the grid. These could range from personal computers to supercomputers that run on diverse software platforms like Windows or UNIX. The grid fabric could also include storage devices, data banks, and even specialized scientific instruments like radio telescopes. The grid fabric would also contain resource management systems that keep track of the availability of resources across the grid. The next layer of components on a grid network, known collectively as the grid middleware, may be further divided into two categories—core grid middleware and user-level grid middleware. Core grid middleware programs like Globus and Legion provide the basic functionality for the grid network. They include resource allocation and process management tools, resource identification and registration systems, and basic accounting and time management systems. Most importantly, core grid middleware also includes the core security components of the grid, which include local usage policy management and authentication protocols. The user-level grid middleware includes programming tools and resource management and scheduling tools to efficiently utilize globally dispersed grid resources. Programming environments such as GrADS help computing users develop and execute programs that suit their unique requirements. Middleware schedulers like AppLeS and Condor-G provide task scheduling to efficiently manage the available computing resources to complete queued tasks. The final functionality areas of the grid are the portals and applications used to access and utilize the grid resources. Web-enabled portals allow users to interact with distributed grid resources and choose the resources that are most compatible with their task requirements while adhering to their security and financial constraints. Most grids available today offer a suite of applications that are fully integrated into the grid network. These applications can be used to harness the vast computational power of the grid or to access remote data sets dispersed throughout the world in order to conduct simulations and data mining projects or to perform other complex calculations. A number of available grids cater to specific user groups with applications that address niche needs. For instance, the European Data Grid operated by CERN offers specialized computational and data resources to the high-energy physics community. Monash University's Virtual Laboratory project (http://www.gridbus.org/vlab/) offers applications and data resources for research in the area of molecular biology. NASA's Information Power Grid (http://www.ipg.nasa.gov/) provides computational resources for aerospace research communities, while the Earth System Grid (https://www.earthsystemgrid.org/) caters to geoscientists working on eco-systems and climatic modeling. Most grid computing networks could be categorized into three main classifications based on applications and user demands: computational grids, data grids, and service grids. Computational grids cater to consumers who require arbitrary levels of
processing power. When a complex mathematical model or simulation that requires immense computing power has to be processed, a consumer could use the vast distributed computation power available on the grid to perform the task. Data grids, on the other hand, are massive data repositories that often also integrate discipline-based applications. The Particle Physics Data Grid (www.ppdg.net), for example, provides high-energy and nuclear physicists with distributed data and computational resources. Since moving extremely large data sets over networks is cumbersome and inefficient, many data centers offer high-performance computing resources and specialized applications, which make data mining and analysis more effective, thus strengthening the global pervasiveness of data grids. Finally, service grids provide organizations with the ability to adopt Web service technologies to optimize their business processes. Especially in business-to-business environments, these service grids are crucial in creating interoperable, stable, secure interfaces for businesses to communicate and streamline their operations. Similar to a dynamic Web host, a service-grid-based Web host will be able to transfer resources and accommodate peak demand periods, ensuring that servers will not fail under high demand.
Grid Computing Issues
Interoperability and scalability are two concepts fundamental to grid computing. Thus a standards-based open architecture that fosters extensibility has been found to be most successful in implementing a grid network. A standard set of protocols is fundamental to the way resource owners and grid users access, utilize, and share resources. These protocols govern all aspects of interaction between various components and consumers while preserving local autonomy and the control of the resource owners. Resource owners can exercise control over their resources by prescribing various limitations and restrictions on when, how, and by whom the resources may be utilized. The grid should be able to integrate a range of technologies from various manufacturers, running on diverse software platforms
to create a single seamless entity that can handle the work demands of multiple users. A grid could include a few integrated resources and grow to include millions more. With such a complex network, the probability of a resource failing is high. Thus grids should be dynamic, resilient, and adaptive to detect failed resources and make necessary changes to accomplish the assigned tasks using the available resources effectively and efficiently. This dynamic nature of the grid creates a challenge for resource management and the scheduling applications that have to keep track of the ever-changing composition of the grid. Grid computing is a collaborative effort that brings together distributed computing resources to meet the high computational and data demands of consumers efficiently. It differs from other forms of high-performance computing in several ways. Supercomputers are often a single entity running on a single platform under one administrative domain that can be dedicated to a single task. While they possess the capacity for high throughput, they are not efficient at assembling dispersed data resources, nor can they be easily integrated with other technologies. Even though grid computing can offer virtually endless computing power, supercomputers are more effective for tasks requiring low-latency and high-bandwidth communications. Cluster computing often uses homogeneous interconnected PCs and workstations within a single administrative domain for high-throughput applications. While clusters work much the same way as grids, they are usually geographically restricted, smaller in the number of systems utilized, and rarely made available for public use.
How the Grid Works
Dispersed computing resources, or the grid fabric (including computers, databases, and specialized instruments), are integrated into the grid through the deployment of core middleware programs like Globus that support the basic access requests and authentication protocols. The core middleware on these machines is then able to recognize and respond to authorized users on the grid. At the same time, user-level grid middleware like GrADS can be used to
create grid-enabled applications or tools to harness necessary data and computing power to accomplish the desired task. The user then accesses the grid through a portal and, upon authentication, interacts with a resource broker. The resource broker is then capable of identifying the resources that match the computational and data needs. Upon identification of these resources, the data and program are transported, scheduled for execution, and processed, and the final results are then aggregated and delivered to the consumer. The resource broker follows the progress of the application and data process, making necessary changes to accommodate changing grid dynamics and resource failures. All these activities transpire seamlessly across different technologies and software and hardware platforms, and the consumer receives the final aggregated results, unaware of all the machines and tools that cooperated to deliver the final product. Grid computing makes it possible for a user to connect to a grid, access programs, data, and instruments dispersed throughout the globe, and interact with them seamlessly across diverse software and hardware platforms. Grid computing is a viable option to meet the growing computing needs of a world that is increasingly dependent on information acquisition and processing.
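The division of labor just described, in which fabric resources are registered with middleware, a broker matches requests to resources, and a portal gives the user a single point of access, can be made concrete with a small sketch. The class and method names below (GridResource, ResourceBroker, Portal, submit_job) are hypothetical stand-ins rather than the API of Globus, Legion, or any other real middleware; the sketch simply mirrors the workflow outlined above, with authentication and data staging omitted.

# A minimal, hypothetical sketch of the grid workflow described above.
# None of these names correspond to the API of any real grid middleware.

from dataclasses import dataclass

@dataclass
class GridResource:               # grid fabric: a machine, data bank, or instrument
    name: str
    platform: str                 # e.g., "UNIX" or "Windows"
    free_cpus: int

class ResourceBroker:             # user-level middleware: matches jobs to resources
    def __init__(self):
        self.fabric = []          # resources registered by the core middleware

    def register(self, resource):
        self.fabric.append(resource)

    def match(self, cpus_needed):
        # pick any registered resource with enough spare capacity
        for r in self.fabric:
            if r.free_cpus >= cpus_needed:
                r.free_cpus -= cpus_needed
                return r
        raise RuntimeError("no suitable resource is currently available")

class Portal:                     # what the user actually sees
    def __init__(self, broker):
        self.broker = broker

    def submit_job(self, description, cpus_needed):
        resource = self.broker.match(cpus_needed)   # broker finds a matching resource
        # ...here the data and program would be staged, run, and the results aggregated...
        return f"'{description}' dispatched to {resource.name} ({resource.platform})"

broker = ResourceBroker()
broker.register(GridResource("cluster-a", "UNIX", free_cpus=64))
broker.register(GridResource("pc-pool", "Windows", free_cpus=8))
print(Portal(broker).submit_job("climate simulation", cpus_needed=32))

A real broker would, of course, also weigh data location, cost, security policy, and current load before dispatching a job, but the basic matchmaking role is the same.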
Cavinda T. Caldera

FURTHER READING

Berman, F., Fox, G., & Hey, T. (2002). Grid computing—Making the global infrastructure a reality. Indianapolis, IN: Wiley.
Buyya, R. (2002). Economic based distributed resource management and scheduling for grid computing. Retrieved February 2, 2004, from http://www.cs.mu.oz.au/~raj/
Buyya, R., Abramson, D., & Giddy, J. (2000a). An economy driven resource management architecture for global computational power grids. Proceedings of the 2000 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2000). Las Vegas, NV: CSREA Press.
Buyya, R., Abramson, D., & Giddy, J. (2000b). Nimrod-G: An architecture for a resource management and scheduling system in a global computational grid. The 4th International Conference on High Performance Computing in Asia-Pacific Region (HPC Asia 2000). New York: IEEE Computer Society Press.
Chetty, M., & Buyya, R. (2002). Weaving computational grids: How analogous are they with electrical grids? Computing in Science and Engineering, 4, 61–71.
Foster, I., & Kesselman, C. (Eds.). (1999). The grid: Blueprint for a new computing infrastructure. Burlington, MA: Morgan Kaufmann.
Foster, I., & Kesselman, C. (2003). The grid (2nd ed.). Burlington, MA: Morgan Kaufmann.
Foster, I., Kesselman, C., & Tuecke, S. (2001). The anatomy of the grid: Enabling scalable virtual organizations. International Journal of Supercomputer Applications, 15(3).
Hagel, J., & Brown, J. S. (2002). Service grids: The missing link in web services. Retrieved February 2, 2004, from http://www.johnhagel.com/paper_servicegrid.pdf
Information power grid: NASA's computing and data grid. (2002). Retrieved February 2, 2004, from http://www.ipg.nasa.gov/ipgusers/globus/1-globus.html
National and international grid projects. Retrieved February 2, 2004, from http://www.escience-grid.org.uk/docs/briefing/nigridp.htm
Waldrop, M. M. (2002). Grid computing could put the planet's information-processing power on tap. Technology Review, May 2002.
GROUPWARE

Groupware refers to any software system that is designed to facilitate group work and interaction. Groupware has been around since the 1970s. In particular, e-mail and chat-based groupware have long histories. E-mail, mailing lists, bulletin boards, newsgroups, and wikis (collaboratively created websites) are examples of asynchronous groupware systems (that is, there can be a time delay between a message being sent, read, and then responded to). Chat systems, multiuser games, group editors, shared whiteboards, and teleconferencing tools are examples of synchronous groupware (comments that one participant sends are instantly visible to other participants, and multiple responses can be made instantly and simultaneously). Most groupware systems are designed with the "different place" assumption; that is, they assume that the users are distributed across the Internet, interacting with one another only through the groupware itself. There has been some work in developing "same place" groupware systems for users who are present in the same physical space. Two good examples of such collocated groupware systems are the Brown XMX shared editor for use in electronic classrooms and the MIT Intelligent Room project.
Early Groupware Systems: E-mail, Chat, and the Web
The first e-mail system was created by the computer engineer Ray Tomlinson in 1971 and became generally available on the Arpanet (Advanced Research Projects Agency Network, the precursor to the Internet) in 1972. It rapidly gained popularity as the Internet grew during the 1970s and 1980s. Although e-mail generally facilitates one-on-one interaction, the development of mailing list tools enabled it to support widely distributed group projects. The first newsgroups were developed in the early 1980s; they were similar to archived mailing lists except that users would send their text directly to a newsgroup rather than a list of users. The first chat-based groupware was the Arpanet "talk" command, released in 1972, three years after the establishment of the Arpanet in 1969. This command allowed one user to connect to another and to communicate by sending lines of text back and forth. It is still available in most UNIX operating systems. Multiuser chat became popular in 1984 with CompuServe's CB Simulator. This software was modeled after citizens band radio and provided a collection of chat rooms called channels that users could join. These early groupware systems led directly to Internet Relay Chat (IRC), ICQ, America Online Instant Messaging (AIM), and other instant messaging systems. From its inception, the World Wide Web was viewed as a tool for supporting group work. Tim Berners-Lee, the World Wide Web's inventor, described it as a "distributed heterogeneous collaborative multimedia information system" (Berners-Lee 1991). The collaborative aspect referred to the idea that anyone with access to a Web server could create webpages and thereby help build this web of multimedia information. This vision of collaboration has been further realized with the rise of wikis and related systems. Wikis are websites that visitors to the site can edit simply by clicking on the "edit" link at the bottom of each page and providing user or password data (if required). The Wikipedia, a wiki encyclopedia, is a good example of the use of wiki technology. By 2004, it had more than
150,000 articles in English and was being translated into a dozen languages. Anyone who visits the Wikipedia can edit existing pages or create new articles, and each article contains a link to all of its previous versions. Another successful web-based system is SourceForge, an open-source software development site. Software developers can come to the site and communicate asynchronously with other developers about their code; they can also check out and store multiple versions of their programs at the site. By 2004, SourceForge was hosting over 65,000 projects and 700,000 registered developers. All visitors to the site have access to the current code base, but only developers are allowed to make changes to it.

The Wide World of Wikis

From Wikipedia, which bills itself as "a multilingual free-content encyclopedia that will always belong to everyone."

Key Characteristics
A WikiWikiWeb enables documents to be authored collectively in a simple markup language using a web browser. Because most wikis are web-based, the term "wiki" is usually sufficient. A single page in a wiki is referred to as a "wiki page," while the entire body of pages, which are usually highly interconnected, is called "the wiki." "Wiki wiki" means "fast" in the Hawaiian language, and it is the speed of creating and updating pages that is one of the defining aspects of wiki technology. Generally, there is no prior review before modifications are accepted, and most wikis are open to the general public or at least to all persons who also have access to the wiki server. In fact, even registration of a user account is often not required.

History
Wiki software originated in the design pattern community for writing pattern languages. The Portland Pattern Repository was the first wiki, established by Ward Cunningham in 1995. Cunningham invented and named the wiki concept, and produced the first implementation of a wiki engine. Some people maintain that only the original wiki should be called Wiki (upper case) or the WikiWikiWeb. Ward's Wiki remains one of the most popular wiki sites. In the final years of the 20th century, wikis were increasingly recognized as a promising technology to develop private and public knowledge bases, and it was this potential that inspired the founders of the Nupedia encyclopedia project, Jimbo Wales and Larry Sanger, to use wiki technology as a basis for an electronic encyclopedia: "Wikipedia" was launched in January 2001. It was originally based on the UseMod software, but later switched to its own open source codebase, which has now been adopted by many other wikis.

Wiki Bus Tours
There are virtual guided "bus tours" taking visitors to various wiki sites. These consist simply of a page on each participating wiki called "TourBusStop," which gives the link to the next bus stop—basically, a type of web ring. Each bus stop page gives some info about that wiki, and one can choose to explore that particular wiki (thus "getting off the bus") or continue to the next wiki in the tour.

Source: Wikipedia. Retrieved March 10, 2004, from http://en.wikipedia.org/wiki/WikiWiki
Design Issues
The design of a groupware system has a definite effect on the interaction of the people using it. If the users do not like the resultant interaction, they may be severely hampered in performing the task that the groupware was supposed to help them with. In some cases they may be unable to perform their task at all, or they may rebel and refuse to use the system. To design successful groupware, one must understand the impact the technology will have on the task the group is trying to perform. Fundamentally, designing groupware requires understanding how people behave in groups. It also requires a good grasp of networking technology and how aspects of that
technology (for instance, delays in synchronizing views) can affect the user experience. Shortcomings in technology can render an otherwise promising tool useless, as minor issues of system responsiveness and reliability can become very significant when coupled with the dynamics of group interaction. Traditional issues of user interface design—for example, striving for a consistent interface and offering helpful feedback—are still relevant, since the technology still involves individuals. However, because the target users are groups, there are additional considerations. For instance, million-person groups behave differently from five-person groups, and the performance parameters of the technologies required to support the two types of groups are quite different. Likewise, groupware must be easier to use than software for single users, because the pace of use of an application is often driven by the pace of other users. Consequently, a difficult interface will accentuate disparities in user expertise, which can lead to frustrating delays and serious reductions in group productivity.
Coordination and Community
Because groupware necessarily supports a community of users performing some task, it must address not only the work the users perform that relates directly to the task, but also the work that users perform to stay coordinated during execution of that task. Users need ways to exchange information about the task at hand. They need to establish and follow conventions for activity, and they must be aware of what other users are doing. Users spend a portion of their time maintaining common ground. One method for minimizing disparity between users' viewpoints, WYSIWIS ("what you see is what I see"), creates systems that give users a similar viewpoint on the task. Another approach is to design groupware features that help the users be more aware of each other's actions. Groupware designers must also keep in mind the various social issues that arise in collaboration. When people are dealing with one another remotely, establishing and maintaining identity can become a difficult security issue that the groupware designer must
take into account. Designers need to know how homogeneous the users are, the roles people are likely to play, and who key decision makers are and what influence they have on the decision-making process. Groupware designers should investigate the effect the technology will have on the sense of group identity, culture, and environment that emerge in long-term collaboration. Additionally, groupware designers must consider how the system will deal with sensitive social issues such as anonymity and accountability of actions. Observational studies, usage studies, surveys, and prototyping are all important tools in designing successful groupware. Good development may require a spiral model of user-centered or participatory development, wherein developers observe users using a prototype version of the groupware system, collect and analyze data from that study, and redesign the software accordingly. Multiple cycles of design and testing are usually necessary to produce quality groupware. Strong analysis tools can help reduce development tasks by allowing the designer to understand difficulties that users encounter while trying to perform their tasks using the groupware. Methods such as ethnomethodology (where the social interaction between participants is examined in the context of their work) and various forms of discourse analysis have been successfully adapted to study the interaction that emerges. By carefully examining the recurring problems of coordination that users encounter, the designers can identify what parts of the system need to be redesigned, and they can then create a system that effectively supports the users.
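The WYSIWIS approach mentioned above amounts to driving every participant's display from a single shared state, so that each update is broadcast to all viewers at once. The following sketch is a minimal, hypothetical illustration of that idea; it ignores networking, latency, and conflicting edits, all of which a real shared editor or whiteboard must handle.

class SharedWhiteboard:
    """Toy WYSIWIS model: one shared state, broadcast to every participant."""

    def __init__(self):
        self.strokes = []            # the single authoritative state
        self.participants = []

    def join(self, participant):
        self.participants.append(participant)
        participant.render(self.strokes)        # late joiners see the same view

    def draw(self, author, stroke):
        self.strokes.append((author, stroke))
        for p in self.participants:             # everyone sees the update at once
            p.render(self.strokes)

class Participant:
    def __init__(self, name):
        self.name = name

    def render(self, strokes):
        print(f"{self.name} sees {len(strokes)} stroke(s)")

board = SharedWhiteboard()
alice, bob = Participant("alice"), Participant("bob")
board.join(alice)
board.join(bob)
board.draw(alice, "circle at (10, 10)")   # both alice and bob re-render the same state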
A Case Study: Instant Messaging
Instant messaging (IM) is, as its name implies, a technology that lets people communicate with one another synchronously, in "real time," as stated on the home page of America Online's Instant Messenger. After installing client software, a user of IM technology connects to a central server. The server then informs the user of the online availability of the others included in his or her contact (or "buddy") list. Likewise, the server informs others who have included the user in their contact list as to the online
presence of the user. Typically, a contact list may contain as many as two hundred "screen names." When connected, a user can click on the name of another user who is also online, opening a window to that person that permits direct, real-time exchanges of messages between them. (Communication is typically in the form of text messages, although the technology permits audio and visual exchanges as well.) When a user disconnects from the server, others who include him or her in their contact lists are informed that the person is no longer online. Different types of users utilize instant messaging in different ways. In a 2000 study, the scholars Bonnie Nardi, Steve Whittaker, and Erin Bradner reported that in the workplace coworkers used IM technology to quickly ask and answer questions and to clarify issues about ongoing tasks. It was also used to keep in touch with family and friends through brief exchanges. In addition, it was used to inquire about the availability of others for communication in other media as well as to arrange for offline meetings, although in a 2002 study the researcher Ellen Isaacs and her associates reported that most of the IM conversations they studied, also in the workplace, remained in IM. Instant messaging is now rapidly spreading in the work world, as evidenced by mentions in newspaper accounts, by conferences organized to promote its adoption, and by its inclusion in corporate culture.
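The presence mechanism described at the start of this section, in which a central server tells each user which people on his or her contact list are currently online, can be sketched in a few lines of Python. The PresenceServer class below is a hypothetical illustration, not the protocol of AIM, ICQ, or any other real service; it also anticipates the automated away-message reply discussed in the next section.

class PresenceServer:
    """Toy central server: tracks who is online and notifies their watchers."""

    def __init__(self):
        self.contacts = {}      # screen name -> set of names on that user's buddy list
        self.online = set()
        self.away = {}          # screen name -> away message, if one is set

    def add_buddy(self, user, buddy):
        self.contacts.setdefault(user, set()).add(buddy)

    def connect(self, user):
        self.online.add(user)
        self._notify(user, "is online")

    def disconnect(self, user):
        self.online.discard(user)
        self.away.pop(user, None)
        self._notify(user, "has signed off")

    def set_away(self, user, message="I am away from my computer right now."):
        self.away[user] = message            # a default AIM-style away message

    def send(self, sender, recipient, text):
        if recipient in self.away:           # automated reply, like the away messages below
            return f"Auto-reply from {recipient}: {self.away[recipient]}"
        return f"{recipient} receives: {text}"

    def _notify(self, user, event):
        # tell everyone who lists `user` as a buddy about the status change
        for watcher, buddies in self.contacts.items():
            if user in buddies and watcher in self.online:
                print(f"[{watcher}'s buddy list] {user} {event}")

server = PresenceServer()
server.add_buddy("alice", "bob")
server.connect("alice")
server.connect("bob")                        # alice sees "bob is online"
server.set_away("bob")
print(server.send("alice", "bob", "lunch?")) # alice gets bob's away message instead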
Technological Subversion: Away Messages
In a 2001 study for the Pew Internet and American Life project, researchers Amanda Lenhart, Lee Rainie, and Oliver Lewis describe contemporary teenagers as "the instant-message generation," noting that 74 percent of U.S. teens with Internet access use instant messaging, often to stay in touch with friends and relatives who do not live nearby. Many others have also reported on the popularity of IM technology among teenagers. Interestingly, these studies do not discuss "away messages," a remarkable feature of IM technology. The away feature can inform others that, while the user remains online and connected to the IM server, he or she is either not near the computer or does not wish to be disturbed at the moment. In such a case, the IM client acts like an answering machine: It records messages left by others while providing feedback in the form of an automated away message sent to those who attempt to contact the user. On some clients, this message is visible whenever a user is away, allowing users to stay apprised of each other's status without needing to contact each other directly.

Away Messages

Away messages are perceived to be a necessity for many users of instant messaging. For those who don't want to think up an original message, websites such as AIMawaymessages.com offer a variety of messages on topics from friendship and food to homework and finals.

You helped me laugh, you dried my tears,
Because of you, I have no fears.
Together we live, together we grow,
Teaching each other what we must know.
You came in my life, and I was blessed.
I love you girl, you are the best.
Release my hand, and say good-bye,
Please my friend don't you cry.
I promise you this, it's not the end,
'Cause like I said you're my best friend

I am not currently available right now. However, if you would like to be transferred to another correspondent, please press the number that best fits your personality:
If you are obsessive compulsive, please press "1" repeatedly.
If you are codependant, please ask someone to press "2".
If you have multiple personalitites, please press "3", "4", and "5".
If you are paranoid delusional, we know who you are and what you want. Just stay on the line so we can trace your call.
If you are schizophrenic, listen carefully and the little voice will tell you which number to press.
If you are manic depressive, it doesn't matter what number you press, no one will answer.

Source: AIMAwayMessages. Retrieved March 10, 2004, from http://aimawaymessages.com/
In AIM, the default away message is "I am away from my computer right now." However, users go to great lengths to expand the kinds of messages they post and the uses to which such messages are put. Users, and especially college students who have access to broadband connections, employ away messages as a kind of bulletin board to inform, to entertain, and to manage social relationships. The possibilities of away messaging as well as its popularity are evident in a number of websites that collect and categorize away messages, serving as a resource on which users may draw. There is also software that enables users to keep track of those who are reading their away messages. In addition to having these communicative and social uses, the away message feature of IM may also serve a psychological function, providing users with a sense of presence and attachment. Although synchronous communication is still the primary function of IM, it is unlikely that its developers anticipated the extensive and varied uses of asynchronous away messages. Like the telephone, which was developed as a business tool, and like the Internet itself, which was developed by the military and initially used as a means of communication by a small handful of academic researchers, IM has developed in ways not necessarily foreseen or intended by its designers. It is another reminder of the difficulties in predicting how technology will affect society. While the general public tends to believe that a given technology dictates its uses, sociologists of technology argue that the uses of technology are socially shaped rather than being determined solely by the materiality of technological artifacts or the intentions of their designers. Technologies are often subverted by users and employed in ways quite different from those for which they were originally intended. The case of IM technologies, and especially of IM's away message capability, illustrates the social shaping of technological change as well as the difficulty in predicting societal impacts of technology.

Timothy J. Hickey and Alexander C. Feinman

See also Chatrooms; Computer-Supported Cooperative Work
FURTHER READING

Alterman, R., Feinman, A., Introne, J., & Landsman, S. (2001, August). Coordinating representations in computer-mediated joint activities. Paper presented at the 23rd Annual Conference of the Cognitive Science Society, Edinburgh, United Kingdom. Retrieved August 7, 2003, from http://www.hcrc.ed.ac.uk/cogsci2001/pdf-files/0015.pdf
Berners-Lee, T. (1991). World Wide Web seminar. Retrieved August 7, 2003, from http://www.w3.org/Talks/General.html
Bijker, W. E., & Law, J. (Eds.). (1992). Shaping technology/building society: Studies in sociotechnical change. Cambridge, MA: MIT Press.
Bodker, S., Gronbaek, K., & Kyng, M. (1993). Cooperative design: Techniques and experiences from the Scandinavian scene. In D. Schuler & A. Namioka (Eds.), Participatory design: Principles and practices (pp. 157–176). Hillsdale, NJ: Lawrence Erlbaum Associates.
Clark, H., & Brennan, S. (1991). Grounding in communication. In L. B. Resnick, R. M. Levine, & S. D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 127–149). Washington, DC: American Psychological Association.
Ellis, C. A., Gibbs, S. J., & Rein, G. L. (1991). Groupware: Some issues and experiences. Communications of the ACM, 34(1), 38–58.
Erickson, T., & Kellogg, W. (2000). Social translucence: An approach to designing systems that support social processes. ACM Transactions on Computer-Human Interaction (TOCHI), 7(1), 59–83.
Fischer, C. S. (1992). America calling: A social history of the telephone to 1940. Berkeley and Los Angeles: University of California Press.
Garfinkel, H. (1967). Studies in ethnomethodology. Upper Saddle River, NJ: Prentice Hall.
Grudin, J. (1990). Groupware and cooperative work. In B. Laurel (Ed.), The art of human-computer interface design (pp. 171–185). Reading, MA: Addison-Wesley.
Hauben, M. (2003). History of ARPANET. Retrieved August 7, 2003, from http://www.dei.isep.ipp.pt/docs/arpa.html
Hutchby, I. (2001). Conversation and technology: From the telephone to the Internet. Malden, MA: Blackwell.
Isaacs, E., Kamm, C., Schiano, D., Walendowski, A., & Whittaker, S. (2002, April). Characterizing instant messaging from recorded logs. Paper presented at the ACM CHI 2002 Conference on Human Factors in Computing Systems, Minneapolis, MN. Retrieved August 7, 2003, from http://hci.stanford.edu/cs377/nardi-schiano/CHI2002.Isaacs.pdf
Lenhart, A., Rainie, L., & Lewis, O. (2001). Teenage life online: The rise of the instant-message generation and the Internet's impact on friendships and family relations. Retrieved August 7, 2003, from http://www.pewinternet.org/reports/pdfs/PIP_Teens_Report.pdf
Nardi, B. A., Whittaker, S., & Bradner, E. (2000). Interaction and outeraction: Instant messaging in action. In M. Campbell (Ed.), Proceedings of the ACM 2000 Conference on Computer-Supported Cooperative Work (pp. 79–88). Philadelphia: ACM.
Schiano, D. J., Chen, C. P., Ginsberg, J., Gretarsdottir, U., Huddleston, M., & Isaacs, E. (2002, April). Teen use of messaging media. Paper presented at the ACM CHI 2002 Conference on Human Factors in Computing Systems, Minneapolis, MN. Retrieved August 7, 2003, from http://hci.stanford.edu/cs377/nardi-schiano/CHI2002.Schiano.pdf
Shneiderman, B. (1992). Designing the user interface: Strategies for effective human-computer interaction. Reading, MA: Addison-Wesley.
Suchman, L., & Trigg, R. (1992). Understanding practice: Video as a medium for reflection and design. In J. Greenbaum & M. Kyng (Eds.), Design at work: Cooperative design of computer systems (pp. 65–89). Hillsdale, NJ: Lawrence Erlbaum Associates.
Tyson, J. (2003). How instant messaging works. Retrieved August 7, 2003, from http://www.howstuffworks.com/instant-messaging.htm
HACKERS

A computer hacker was originally defined as someone who was expert at programming and solving problems with a computer; later the term came to identify a person who illegally gained access to a computer system, often with the intent to cause mischief or harm. The idea of computer hacking started out with a similarly benign meaning: A computer hack was a solution to a hardware or programming problem. One could argue that the first digital computer, composed of two telephone relay switches, two light bulbs, a battery, and a tin can keyboard, was itself a hack. When the mathematician and inventor George Stibitz (1904–1995) put together that device in 1937, he was doing what generations of computer enthusiasts would continue to do for decades to come, using materials
at hand to invent or reinvent technology for novel uses or to solve an immediate problem. Each of the following generations of computer hackers has developed its own culture, its own sense of style, and, perhaps most importantly, its own sense of ethics. These various cultures have generally been shaped as oppositional or resistant cultures within broader institutional contexts. There have been several primary contexts for the development of hacker culture from the mid-1960s to the present.
The Original Hacker Ethic
In the United States, the origins of hacker culture can be traced back to the universities and colleges of the 1960s and 1970s. Fueled by government research funding, primarily from the U.S. Department of Defense's Advanced Research Projects Agency
(DARPA), the first generation of computer hackers were students working during off-hours in computer labs. The group that the engineer Joseph Weizenbaum called compulsive programmers and that the sociologist Sherry Turkle and the journalist Steven Levy document as the first generation of hackers was composed almost exclusively of computer science students at major U.S. universities in the 1960s and 1970s. For the first generation of hackers, hacking was a means to come up with clever or unusual solutions to seemingly intractable problems. Accordingly, the notion of hacking was contrary to what most computer science programs were teaching: structured programming. Where the curriculum of computer science instruction followed the philosophy of finding the single best answer to each problem and structuring that into code, the hacker ethic preached just the opposite—trying unusual and innovative approaches to discover new ways of handling a problem. The hacker ethic rejected conventional wisdom, favoring a more hands-on, bottom-up approach. This early generation established the basic ethos for hacker culture, best exemplified in Steven Levy's characterization of the "Hacker Ethic":
1. Access to computers should be unlimited and total. Always yield to the Hands-On Imperative.
2. All information should be free.
3. Mistrust authority—Promote Decentralization.
4. Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position.
5. You can create art and beauty on a computer.
6. Computers can change your life for the better. (Levy 1984, 39–49)
This ethic underlay the development of many of the subsequent major technological advances in computation, including the creation of the personal computer (PC) and the Internet. The hacker ethic, as originally practiced, was blind to the surrounding political climate. Through the late 1960s and 1970s, the vast majority of money that went into funding computer science and technological development in the United States was given by the military. During the same period that some students were writing code in university labs, their more politically minded peers were deploying technology to a
different end. The Youth International Party was perhaps the most representative of these politically oriented students. Led by Abbie Hoffman, Jerry Rubin, and others, the Youth International Party used technology for personal empowerment and activism. The Youth International Party created a newsletter called Technological Assistance Program (or TAP) to provide technological knowledge to people who would otherwise have to pay companies what the group believed were unfair rates for technological services. TAP set out to teach people how the technology worked and how it could be exploited. Initially it provided details on how consumer services and utilities could be used for free as well as on how technology such as voice conferencing on the phone could be used for long-distance political organizing. As PCs began to emerge as hobby electronics in the 1970s, small groups of hackers began having meetings to share programming ideas and to learn how to build and modify hardware. These groups, best represented by San Francisco's Homebrew Computer Club, embraced the hacker ethic and began applying it to PC culture, sharing software and code and continually finding new ways to innovate and create new technology and software. In 1976 two of them, Steve Jobs and Steve Wozniak, founded Apple Computer and, along with a host of other early computer manufacturers, ushered in the age of the personal computer as consumer electronics. These three dimensions—the desire to invent, create, and explore (taken from the original hacker ethic); an engaged hands-on approach, which focused on personal empowerment (borrowed from the political-activist hackers of the 1960s and 1970s); and the idea of community and sharing in the context of the personal computer (in the tradition of Homebrew)—came together in the 1980s in the second generation of hackers.
Computer Network Hacking
The emergence of this new group of hackers was the result of the widespread availability of the personal computer, as well as the popularization of the figure of the hacker in the mainstream media. With the release of the film WarGames in 1983, thousands of young, inspired hackers went online looking for those
who were similarly inclined. The hackers of the 1980s found their meeting place online, in the form of newly emerging computer bulletin board systems (BBSs). Many of them founded BBSs, usually run on a spare phone line and set up in the basement or a teenager's bedroom. Using a dial-up modem, hackers could access the bulletin board, and once there, they could swap files and trade gossip as well as read the latest technical information about hacking. Because access to computer networks was limited, gaining access to computers often required breaking into systems (usually owned by universities) and then using those computers to access others. Through BBSs, loose confederations of hackers formed. With names like Legion of Doom and Masters of Deception, they made names for themselves by posting files with the latest network exploits or files documenting their latest hacks as trophies. As the hacker scene grew larger, two hackers who called themselves Knight Lightning (Craig Neidorf) and Taran King (Randy King), respectively, began to document the underground hacker culture in an online journal called Phrack (an amalgam of phreak, a term used to describe telephone hacking, and hack, which was more specific to computers). Phrack published articles of interest to the hacker community on such topics as how to hack your local telephone company control office and how to pick locks. The journal also included information about the hackers themselves. In sections such as "Phrack Prophiles" and "Phrack World News," hackers wrote about the personalities and events that had special significance for them. This generation of hackers embraced the ethic of open and free information, but found they had inherited a world in which computers and networks were proprietary and expensive. In order to explore, hackers found they needed to invade other people's systems. There was no alternative. The vast majority of such hacking was merely exploratory and harmless. Indeed, it was customary for the administrators of hacked systems to mentor young hackers whom they caught in their systems. The worst the hackers usually suffered was a phone call to their parents or a stern warning. By the 1980s, hacking had also spread throughout Europe, particularly in Germany and the Netherlands.
In the 1990s, when the emergence of the World Wide Web made online commerce feasible, the situation changed. With the growth of e-commerce, there was a significant shift in the ways that hackers behaved and the ways in which they were treated. As hackers discovered increasingly sophisticated ways to exploit security flaws, law enforcement developed an increased interest in hackers’ behavior.
Criminalization of Hacking
Even in the 1990s, most incidents of computer hacking remained relatively harmless. Hackers were, however, being treated as criminals for the first time and were frequently prosecuted under federal wire fraud statutes, which carried heavy penalties including jail time and fines. As the Internet became more closely tied to notions of commerce in the public imagination, hacking ceased to be seen as a benign nuisance and came to be perceived as a public menace, with several hackers suffering the consequences. What had previously been viewed as pranks or at most petty vandalism had now gained the attention of U.S. government authorities. Where previous generations of hackers had roamed the networks of universities freely, the hackers of the 1990s were finding their options severely limited. Exploration was being redefined as criminality, and high-profile cases, such as the capture and prosecution of Kevin Mitnick, reinforced tensions between law enforcement and the hacker community. Mitnick had been the subject of a nationwide manhunt, spurred on by a series of stories in the New York Times, which branded him cyberspace's Most Wanted. Mitnick became a cause célèbre for the hacker community, having been denied a bail hearing and having spent three years as a pretrial detainee. Perhaps most important, the damage caused by Mitnick's hacking was hotly contested. Because Mitnick never gained financially from any of his hacking, the defense argued that the damage caused was minimal. The prosecution, however, claimed that Mitnick had not only stolen valuable code but had also rendered other code worthless merely by looking at it. The prosecutors set the figure at $100 million in damage, the maximum allowed under the statute. Ultimately, Mitnick pled guilty to making three fraudulent phone
calls. He received five years in prison for his crimes and was forced to pay restitution. Mitnick's arrest and prosecution, along with a handful of others, sent signals to the computer underground that law enforcement was taking computer crime seriously and was prepared to make the capture and prosecution of hackers a priority. At the same time, with the development and distribution of open-source operating systems such as Linux, hackers no longer needed to breach other people's systems to explore network security. They were now able to experiment on their own machines and networks without the risk of being arrested. As the stakes for computer hackers were raised, many hackers turned their attention to system security. In the late 1990s a new breed of hackers, so-called white-hat hackers, emerged. The premise of white-hat hacking was that hackers themselves could help defend systems against black-hat hackers (hackers who invade computer systems to cause disruption or damage). Typically, white-hat hackers would release security software, document security flaws, and hold seminars to inform industry about vulnerabilities and weaknesses. A number of high-profile white-hat hackers have even testified before Congress about the state of security on the Internet. As white-hat hacking became more accepted, white-hat hackers began forming collectives and security companies through which to offer their services.
Hacker Activism
In the twenty-first century, hackers have turned their attention to political affairs once again. A new movement, known as hacktivism, is based on a fusion of hacker techniques and political activism. A number of groups and hackers, most notably Hacktivismo and the Cult of the Dead Cow, have released information and software to help activists in repressive countries communicate effectively. Using expertise in networks, cryptography, and steganography (hiding information in images), these hackers have made it possible for dissidents in a number of countries to organize politically and have provided access to otherwise banned or censored information. The movement has also spawned more direct efforts at political disruption.
Techniques include the hacking of webpages and the replacement of the pages' original content with political messages, as well as the crashing of websites that carry messages to which hackers are opposed. The themes of innovative problem solving, a hands-on approach, and political and community action are threads that have run through hacker culture from its earliest days. As cultural attitudes towards technology have shifted, so has the nature of hacker culture and the computer underground, which continues to express the basic principles of hacker culture, attitudes, and actions in creative new ways.

Douglas Thomas

See also Law Enforcement; Movies

FURTHER READING

Levy, S. (1984). Hackers: Heroes of the computer revolution. New York: Dell.
Thomas, D. (2002). Hacker culture. Minneapolis: University of Minnesota Press.
Turkle, S. (1984). The second self: Computers and the human spirit. New York: Simon & Schuster.
Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. New York: W. H. Freeman & Co.
HANDWRITING RECOGNITION AND RETRIEVAL

Written information often needs to be electronically accessed or manipulated (as in editing). Although people generally learn to write by hand before they learn to type on a keyboard, it is fairly difficult for computers to work with handwritten information. In many situations, the handwritten information must be stored in the form of page images, which are difficult for computers to manage (index, search, or organize). Because handwriting is such a natural process for people, much research has gone into enabling
computers to recognize and retrieve handwritten information. Text printed on paper using standard fonts can usually be recognized with high accuracy using an optical character recognition (OCR) engine. Commercial OCR software can recognize printed text with a character error rate of about 1 percent, provided the quality of the printing is good and standard fonts are used. The high accuracy is possible because printed characters are very uniform and are usually separated by spaces; OCR software can also be trained to recognize standard fonts. Handwriting recognition is more challenging because handwriting varies considerably between writers, and even for a given writer there are often some variations. In addition, the characters in a word are not always well formed.
Online and Offline Handwriting
Handwriting can be categorized as either online or offline. With online handwriting, the act of writing is captured by the device. Pen stroke and velocity information are, therefore, available to aid the recognition process. With offline handwriting, it is assumed that the writing has already occurred (often on paper), and all that is available is a scanned image of the written document. In this situation, information on the pen's movements is therefore not available. In recent years, significant advances have been made in online handwriting recognition. One approach has emphasized teaching people to write characters in a more distinctive way. For example, the Graffiti alphabet used by many personal digital assistants (PDAs) changes the way characters are constructed so that they are easier to recognize. Other approaches, such as those used by the Tablet PC, try to recognize a person's actual handwriting. Since offline handwriting offers less information, offline recognition has had more limited success. Successes have been achieved in situations where the lexicon is limited and additional constraints are available. Bank check recognition takes advantage of the fact that the amount written out on a check uses only about thirty different words. Postal address recognition is another example where handwriting
recognition has been successful. Although the possible lexicon is large, the different fields in postal addresses (postal code, city names, and street names) restrict what a given written word may be. In the United States, the postal service uses machines to recognize and route a significant proportion of both printed and handwritten addresses. Despite those successes, however, the problem of offline handwriting recognition is still unsolved in situations where large, unconstrained lexicons are used, such as in handwritten manuscript collections.
Offline Handwriting Recognition: Preprocessing and Segmentation
Before words in a handwritten document can be recognized, the document must be cleaned, artifacts (marks that are unrelated to the written text, such as creases where paper has been folded) removed, and the words segmented out and processed for recognition. Preprocessing may involve operations to improve the quality of the image, to correct for the slant of the writing, and to remove noise (which may be caused by many factors such as ink blotches, ink fading, or the scanning process). The segmentation process involves separating out individual words. Current segmentation techniques generally rely on knowledge of the spacing of the text. In English, for example, there is space between lines of text, and the space between words is usually greater than the space between characters. Segmentation is usually a two-stage process: First lines of text are detected and then the words are detected. In some situations, one can also try to detect the layout of the page. A common approach to finding lines of text is to analyze the pixel values along each row. Each image may be thought of as a matrix of pixel values organized into rows and columns where the pixel values represent the intensity of the image at each point. Each row may be replaced by a single number obtained by adding all the pixel values in that row. This creates a vector with as many numbers as rows. The values of this column vector, when plotted on a graph, show a curve with minima (assuming
black text corresponds to low pixel values) at locations corresponding to text and maxima at locations corresponding to the spaces between lines. The lines can, therefore, be extracted by noting the position of the maxima.
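A minimal sketch of this projection-profile approach appears below, assuming the scanned page is held as a grayscale NumPy array in which ink is dark (low pixel values). The threshold and minimum line height are illustrative choices, not parameters taken from any particular system.

import numpy as np

def find_text_lines(page, min_height=5):
    """Locate text lines by summing pixel values across each row.

    `page` is a 2-D grayscale array in which ink is dark (low values),
    so row sums dip at text lines and peak at the blank gaps between them.
    """
    profile = page.sum(axis=1)                      # one number per row
    threshold = 0.5 * (profile.min() + profile.max())
    is_text_row = profile < threshold               # rows darker than average

    lines, start = [], None
    for y, flag in enumerate(is_text_row):
        if flag and start is None:
            start = y                               # a text line begins
        elif not flag and start is not None:
            if y - start >= min_height:             # ignore tiny noise bands
                lines.append((start, y))            # (top row, bottom row)
            start = None
    if start is not None:
        lines.append((start, len(is_text_row)))
    return lines

# Example: a synthetic 100-row "page" with two dark bands of text.
page = np.full((100, 400), 255.0)
page[20:35, :] = 0.0
page[60:80, :] = 0.0
print(find_text_lines(page))                        # [(20, 35), (60, 80)]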
Analytic and Holistic Offline Handwriting Recognition
There are two approaches to recognizing a handwritten word: the analytic and the holistic. The analytic method involves segmenting a word into characters and then recognizing each character. The word is segmented at a number of potential character boundaries. For each character segment, a classifier suggests possible character choices along with confidence values—that is, the degree of confidence it has in those choices. At this point, at each character position within the word, there is a list of potential character candidates, each associated with a confidence value. A graph is created whose nodes (points) are the segmentation points and whose edges (connections between the nodes) are possible character choices. The appropriate confidence value is used to weight each edge—that is, to suggest that a choice is more or less likely to be correct. Each path from node to node through the graph creates a string of characters, only some of which are genuine words. The cost of each path is obtained by adding the costs of its edges (with higher confidence corresponding to lower cost). The path of minimum cost that gives a legal word is chosen as the optimal path. The analytic method requires training only on individual characters. In English, this is a small set consisting of upper and lowercase letters, the digits, and punctuation marks. The fact that the total set of characters is so small makes it practical to obtain training samples of characters to create a classifier. The main weakness of the analytic technique is that it is difficult to segment words into characters. The holistic technique does not require words to be segmented into characters. Instead, features are computed over the entire word image. Examples of such features include the length of the word, the number of loops, ascenders (an ascender is the portion of the letter that extends above the main body of
the letter—for example, the top portion of the lowercase letter l) and descenders (a descender is the portion of a lowercase letter that extends below the main body of the letter—for example, the lower portion of the lowercase letter p). A classifier is trained using features computed over a number of training examples of words. Words can then be recognized using this classifier. The holistic technique's main advantage is that the difficult problem of segmenting a word into characters is avoided. On the other hand, holistic techniques must be trained on each word, which makes them difficult to use with a large number of different words. Holistic techniques work best when the vocabulary is small, as with bank check recognition.
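The analytic method's search for the cheapest path that spells a legal word can be illustrated with a toy example. In the sketch below the classifier output is simply written out as data, with each edge of the segmentation graph carrying candidate characters and invented costs (lower cost meaning higher confidence); a real system would obtain these values from a trained character classifier.

# Hypothetical classifier output for a word image with segmentation points 0..3.
# Each edge (i, j) carries candidate characters with costs (lower = more confident).
# These particular letters and costs are invented for illustration.
edges = {
    (0, 1): [("c", 0.2), ("e", 0.9)],
    (1, 2): [("a", 0.3), ("o", 0.4)],
    (1, 3): [("d", 1.2)],               # two segments read together as one wide letter
    (2, 3): [("t", 0.1), ("l", 0.8)],
}
lexicon = {"cat", "cot", "cd", "eat"}
last_node = 3

def best_word(node=0, prefix="", cost=0.0):
    """Depth-first search over all paths; keep the cheapest path spelling a legal word."""
    if node == last_node:
        return (cost, prefix) if prefix in lexicon else None
    best = None
    for (i, j), candidates in edges.items():
        if i != node:
            continue
        for ch, c in candidates:
            result = best_word(j, prefix + ch, cost + c)
            if result and (best is None or result < best):
                best = result
    return best

print(best_word())   # cheapest legal interpretation: roughly (0.6, 'cat')

For realistic lexicons an exhaustive search like this would be replaced by dynamic programming or a best-first search, but the principle is the same: add edge costs along each path and keep the cheapest path that forms a word in the lexicon.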
Handwriting Retrieval
Handwriting recognition is used to convert images of words into an electronic form that computers can interpret as text—for example, the American Standard Code for Information Interchange (ASCII) character encoding. However, this does not by itself solve the problem of accessing handwritten documents. For example, suppose one is interested in locating a particular page from the collected papers of George Washington. To do this, one needs to search the set of pages. Given ASCII text, one can use a search engine to do this. This approach is in fact used for online handwritten material, as online handwriting can be converted to ASCII with reasonable accuracy. However, this approach does not work for offline handwritten material, because handwriting recognition for documents with such large vocabularies is still not practicable. One possible approach, called word spotting, is to segment pages into words and to cluster word images (for example, one cluster might be all the instances of the word independence in George Washington's manuscripts) using image matching. The clusters will have links to the original pages and may, therefore, be used to find the right page. An important distinction between recognition and retrieval is that the latter usually uses the context supplied by the other words in the page, and this can improve performance. This constraint has not
been applied to offline handwriting recognition or retrieval yet, but applying such a constraint should improve performance.
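Word spotting hinges on a measure of how similar two word images are. The sketch below uses a deliberately crude measure, comparing column-ink profiles resampled to a fixed width, together with a greedy grouping step. Published word-spotting systems, such as the dynamic-time-warping matcher of Rath and Manmatha cited in the Further Reading, use richer features and more careful matching; this sketch is only meant to convey the idea.

import numpy as np

def column_profile(word_img, width=50):
    """Fraction of ink in each column, resampled to a fixed width."""
    ink = (word_img < 128).mean(axis=0)            # per-column ink density
    positions = np.linspace(0, len(ink) - 1, width)
    return np.interp(positions, np.arange(len(ink)), ink)

def distance(img_a, img_b):
    """Crude dissimilarity between two word images."""
    return float(np.abs(column_profile(img_a) - column_profile(img_b)).sum())

def cluster(word_images, threshold=5.0):
    """Greedy clustering: each image joins the first cluster whose
    representative it resembles; otherwise it starts a new cluster."""
    clusters = []                                   # list of lists of image indices
    for idx, img in enumerate(word_images):
        for members in clusters:
            if distance(word_images[members[0]], img) < threshold:
                members.append(idx)
                break
        else:
            clusters.append([idx])
    return clusters

# Toy example: two "word images" with similar ink patterns and one different.
a = np.full((20, 60), 255); a[:, 10:20] = 0
b = np.full((20, 80), 255); b[:, 13:27] = 0        # same pattern at a different size
c = np.full((20, 60), 255); c[:, 40:55] = 0
print(cluster([a, b, c]))                           # e.g., [[0, 1], [2]]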
The Future
Handwriting recognition and retrieval is a challenging area. While there have been some successes, especially in recognizing postal addresses, much work remains to be done, particularly in recognizing and retrieving large-vocabulary documents. Solving this problem would allow computers to deal with handwritten material in the same way that they deal with typed input.

R. Manmatha and V. Govindaraju

See also Natural-Language Processing; Optical Character Recognition
FURTHER READING

Kim, G., & Govindaraju, V. (1997). Bank check recognition using cross validation between legal and courtesy amounts. International Journal on Pattern Recognition and Artificial Intelligence, 11(4), 657–674.
Kim, G., & Govindaraju, V. (1997). A lexicon driven approach to handwritten word recognition for real-time applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4), 366–379.
Kim, G., Govindaraju, V., & Srihari, S. (1999). Architecture for handwritten text recognition systems. International Journal of Document Analysis and Recognition, 2(1), 37–44.
Madhvanath, S., & Govindaraju, V. (2001). The role of holistic paradigms in handwritten word recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2), 149–164.
Madhvanath, S., Kim, G., & Govindaraju, V. (1999). Chain code processing for handwritten word recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(9), 928–932.
Madhvanath, S., Kleinberg, E., & Govindaraju, V. (1999). Holistic verification of handwritten phrases. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(12), 1344–1356.
Manmatha, R., & Croft, W. B. (1997). Word spotting: Indexing handwritten manuscripts. In M. Maybury (Ed.), Intelligent multimedia information retrieval (pp. 43–64). Cambridge, MA: AAAI/MIT Press.
Plamondon, R., & Srihari, S. N. (2000). On-line and off-line handwriting recognition: A comprehensive survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(1), 63–84.
Rath, T. M., & Manmatha, R. (2003). Word image matching using dynamic time warping. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 521–527). Los Alamitos, CA: IEEE.
Setlur, S., Lawson, A., Govindaraju, V., & Srihari, S. N. (2002). Large scale address recognition systems: Truthing, testing, tools, and other evaluation issues. International Journal of Document Analysis and Recognition, 4(3), 154–169.
Vinciarelli, A., Bengio, S., & Bunke, H. (2003). Offline recognition of large vocabulary cursive handwritten text. In Proceedings of the Seventh International Conference on Document Analysis and Recognition (pp. 1101–1107). Los Alamitos, CA: IEEE.

HAPTICS

Haptic interaction with the world is manipulation using our sense of touch. The term haptics arises from the Greek root haptikos, meaning "able to grasp or perceive." Haptic interaction with computers implies the ability to use our natural sense of touch to feel and manipulate computed quantities. Haptic computer interaction is a relatively new field that has generated considerable interest since the 1990s.
A New Way to Interact with Computers
Initially, computers could deal only with numbers. It took many years to realize the importance of operating with text. The introduction of cathode ray tube display technology allowed graphics to be displayed, giving people a new way to interact with computers. As processing power increased over time, three-dimensional graphics became more common, and we may now peer into synthetic worlds that seem solid and almost real. Likewise, until recently, the notion of carrying on a conversation with our computer was far-fetched. Now, speech technology has progressed to the point that many interesting applications are being considered. Just over the horizon, computer vision is destined to play a role in face and gesture recognition. It seems clear that as the art of computing progresses, even more of the human sensory palette will become engaged. It is likely that the sense of touch (haptics) will be the next sense to play an important role in this evolution. We use touch pervasively in our everyday lives
and are accustomed to easy manipulation of objects in three dimensions. Even our conversation is peppered with references to touching. The researcher Blake Hannaford at the University of Washington has compiled a list of verbal haptic analogies:

We frequently make the analogy to haptics when we speak of our relationship to ideas, people, and information. We often use phrases like “get a feel,” “poke (into, around),” “put one’s finger (on the problem)” when referring to exploration. We use phrases like “(to stay, keep) in touch,” “tangible (concepts),” “(a) touchy (subject),” “hands-on learning” (often used literally), and “at my fingertips” when referring to contact. And phrases like “pressing issues,” “pushy (behavior),” “hard-hitting (presentation),” “get a grasp (of the situation),” and so forth are used when referring to impact or manipulation.

In fact, it is quite surprising, given our natural propensity to touch and manipulate things, that haptic computer interfaces are not common. To explore and interact with our surroundings, we principally use our hands. The hand is unique in this respect because it is both an input device and an output device: Sensing and actuation are integrated within the same living mechanism. An important question is how best to transmit haptic information between a running computer program and a user’s hand. Providing position input to the computer from the hand is easy; providing force or torque output from the computer to the hand has proven to be difficult. We have not figured out how to build good haptic devices that will link our hands in some way with a running computer program, and, for that matter, we do not understand very well how best to write programs that can derive and serve up haptic information for our consumption.
Our Tactile and Kinesthetic Senses

How does the skin detect pressure, friction, and vibration? We know the hand is a complicated system that includes articulated structure, nerves, muscles (for output), and senses (for input). But the hand’s sensory capabilities are at best imperfectly understood. Because of this, fully informed design methods for haptic interfaces do not yet exist. The sensory suite forms part of the somatosensory system, which has modalities of discriminative
touch (including touch, pressure, and vibration), proprioception (including joint angle, muscle length and length rate of change, and tendon tension), and pain (including itch and tickle) and temperature. The first two modalities are the ones that are most important for haptic perception. The discriminative touch modality relies on four different kinds of receptors in the glabrous (hairless) skin of the hand: Meissner’s corpuscles, Pacinian corpuscles, Merkel’s disks, and Ruffini endings. Both the Meissner’s and Pacinian corpuscles are considered to be rapidly adapting (RA), responding mostly to changing stimuli, whereas the Merkel’s disks and Ruffini endings are considered to be slowly adapting (SA) and continue to fire in the presence of constant stimuli. Whereas the anatomical characteristics of these receptors are known, their precise role in psychophysical perception is less well understood. The Pacinian corpuscles respond to high-frequency vibrations such as those encountered when running one’s finger over a textured surface. The Meissner’s corpuscles are sensitive to sharp edges, the Ruffini endings to skin stretch, and the Merkel’s disks to edges and pressure. The proprioception modality is also of major importance, although its receptors are less well understood than those of discriminative touch. The joint angle receptors incorporate Ruffini endings and Pacinian corpuscles located at the joints, which respond to pressure applied to the receptor. Interestingly, subjects can resolve changes in angle between thumb and forefinger as small as 2.5 degrees. Muscle spindles, located between and among muscle fibers, report muscle length and rate of change of length. Being modified forms of skeletal muscle fibers, muscle spindles not only can send information to the brain, but can also receive commands causing them to contract, resetting their threshold setting. Golgi tendon organs, which respond to tension force within the tendons, seem to play a role in muscle control. In a haptic interface, both the proprioception and discriminative touch modalities play important roles in perception. Given these considerations, we must ask what to put in (or around, or in contact with) the hand in order for a running computer program to impart a realistic sensation of touch to a user. There are many possible approaches to answering this question.
Haptic Interface Devices

Haptic interaction between a person and a computer requires a special device that can convert a person’s motions into meaningful quantities that can be input into the computer and at the same time can convert computed quantities into physical forces and torques that the person can feel. Many different kinds of devices have been invented that enable haptic interaction with the whole hand, or arm, or even the whole body. We consider here only the type of device that can be engaged by the hand. Haptic devices can have several degrees of freedom (DOF). For example, an ordinary mouse has two DOFs: It can be moved left-right and forward-backward, yielding two independent position measurements for input to the computer. Similarly, the familiar joystick gaming device can be tilted left-right or forward-backward, yielding two independent angles. Neither the mouse nor the joystick is a haptic device, since each provides input only to the computer. Adding motors or other actuators to these devices would permit haptic interaction, since they could then serve not only as input devices, but also as output devices. Haptic devices with three DOFs allow translation in three dimensions, and those with six DOFs allow both translation and rotation in three dimensions. Haptic devices usually employ either serial or parallel linkages whose joint positions are measured with encoders and whose joint torques are applied by motors. One may recognize that the preceding description also applies to robot arms; in fact, many early haptic interface devices were based on robot arms. In the case of a robot arm, the computer commands the arm’s joint motors to move the arm tip to a particular position and orientation in space. In the case of a haptic device, the user holds the arm tip, moving it to a particular position and orientation as the computer monitors the joint angles. When the computed position of the tip reaches a place where motion is no longer wanted (according to the virtual-environment model), the computer turns on motors to provide stiff resistance to further motion. For this scheme to work, the arm must be back-drivable (that is, the user is able to move the robot around manually), or else the user’s hand force and torque must be measured at the tip and used to control motion.
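The two halves of this arrangement can be made concrete with a small sketch. The Python fragment below models a hypothetical planar two-link arm; it is an illustration of the general idea only, not the programming interface of any actual product. Encoders supply joint angles, from which forward kinematics gives the tip position sent to the computer, and a desired tip force is converted into motor torques with the standard Jacobian-transpose mapping.

```python
import numpy as np

class TwoLinkHapticArm:
    """Illustrative planar two-link arm used as a haptic device.

    Joint angles come from encoders (input to the computer); joint
    torques go to motors (output felt by the hand). Link lengths are
    arbitrary example values.
    """

    def __init__(self, l1=0.2, l2=0.2):
        self.l1, self.l2 = l1, l2   # link lengths in meters

    def tip_position(self, q):
        """Forward kinematics: joint angles (radians) -> tip position (meters)."""
        q1, q2 = q
        x = self.l1 * np.cos(q1) + self.l2 * np.cos(q1 + q2)
        y = self.l1 * np.sin(q1) + self.l2 * np.sin(q1 + q2)
        return np.array([x, y])

    def jacobian(self, q):
        """Partial derivatives of the tip position with respect to joint angles."""
        q1, q2 = q
        return np.array([
            [-self.l1 * np.sin(q1) - self.l2 * np.sin(q1 + q2), -self.l2 * np.sin(q1 + q2)],
            [ self.l1 * np.cos(q1) + self.l2 * np.cos(q1 + q2),  self.l2 * np.cos(q1 + q2)],
        ])

    def joint_torques_for_tip_force(self, q, force):
        """Map a desired force at the tip to motor torques (tau = J^T f)."""
        return self.jacobian(q).T @ force
```

A back-drivable device lets the user move such an arm freely when the commanded torques are zero; the same Jacobian-transpose mapping then supplies resistance only when the virtual environment calls for it.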
Critical considerations for haptic interface devices include degrees of freedom, bandwidth, resolution, and stiffness range. First, haptic interaction with a realistic three-dimensional virtual environment has proven difficult with devices having fewer than six DOFs, but providing six-DOF capability often implies high mechanical complexity and cost. Second, the response of the device should be as lively as possible; otherwise interaction with the virtual environment will feel sluggish and unnatural. The hand is capable of feeling vibrations having frequencies of several hundred hertz (cycles per second), but achieving the highest possible bandwidth in an electromechanical device is challenging. Third, for the hand to feel small nuances in the virtual environment (for example, fine texture or stick-slip friction) the device must be able to resolve extremely small changes in position and force. Finally, when the user moves through free space, nothing should be felt, but when a hard virtual surface is contacted, it should feel perfectly rigid. In practical systems, neither perfect lack of resistance nor perfect rigidity is achieved, but the range of possible stiffness felt by the hand should be maximized. Haptic devices primarily engage the proprioceptive senses in the hand and arm, but also involve the discriminative touch sensors, chiefly through high-frequency effects. Collectively, bandwidth, resolution, and stiffness (impedance) range determine the fidelity of the haptic device. The ideal haptic interface device has yet to be invented. One popular commercial device is a small, cable-driven, highly back-drivable three-DOF arm with three encoders and three motors. A new approach uses magnetic levitation to provide six-DOF interactions over limited ranges of motion with a single moving part.
Haptic Rendering

To be able to “feel” computed quantities, there must be algorithms and computer programs capable of deriving correct force and torque values for a given situation to be output to a haptic interface device. The term haptic rendering is used to describe these operations, in analogy with the familiar rendering of graphics on a computer display. Normally, haptic rendering is accompanied by simultaneous graphical
rendering in what is more properly referred to as a visual-haptic interface. Unlike graphical rendering, which can satisfy the eye at update rates of thirty frames per second or even less, haptic rendering must be done at rates approaching a kilohertz (that is, a frequency approaching a thousand cycles per second) to feel right to the hand. In many cases, one may desire to interact haptically with three-dimensional objects modeled in the computer. For example, suppose we have modeled a cube and a cone by using mathematical formulas that define their surfaces, and we wish to be able to touch these virtual objects with a point-like probe. The haptic system establishes a one-to-one correspondence between the (virtual) probe point and the position of the haptic device handle, called a manipulandum. This is very much like the relationship between a computer mouse on the desktop and the cursor on the computer display screen. As the user moves the probe point about in three dimensions by moving the manipulandum, the computer checks whether the point is outside an object in free space or inside an object, an operation termed collision detection. This test must be done very rapidly, perhaps a thousand times a second. As long as the probe point is in free space, the device motors are turned off and are able to turn freely. As soon as the point is determined to be inside the virtual cube or cone, the motors are turned on, providing torques to the device joints, which generate a stopping force on the manipulandum. If the user attempts to push the point farther into the virtual object, the motor currents are increased further to resist the motion. The user thus experiences a sensation of contacting the surface of a real object. With a three-DOF haptic device, the user may freely slide the point along the surface of the cone or surfaces of the cube, feeling their shapes. With a six-DOF haptic device, it is possible to make use of a three-dimensional virtual tool instead of just a point. The haptic system associates the position and orientation of the virtual tool with the position and orientation of the manipulandum. In our example, if the user contacts the virtual cube or cone with, say, a cube-shaped virtual tool, he or she will feel torques as well as forces, as a surface of the virtual tool rotates into contact with a surface of the cube or cone.
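As a much simplified illustration of these ideas, the following Python sketch renders a single rigid virtual sphere to a point probe: collision detection is a distance test, the contact force is a Hooke's-law penalty proportional to penetration depth, and both run in a servo loop near one kilohertz. The device object, its methods, and all numeric values are hypothetical placeholders rather than any particular system's interface.

```python
import numpy as np

SERVO_RATE_HZ = 1000            # haptic rendering runs near 1 kHz
STIFFNESS = 800.0               # contact spring constant, N/m (example value)

SPHERE_CENTER = np.array([0.0, 0.0, 0.05])   # meters
SPHERE_RADIUS = 0.03                          # meters

def contact_force(probe_pos):
    """Penalty-based rendering of a rigid sphere for a point probe.

    Returns zero force in free space; inside the sphere, returns a spring
    force proportional to penetration depth, directed outward along the
    surface normal.
    """
    offset = probe_pos - SPHERE_CENTER
    dist = np.linalg.norm(offset)
    if dist >= SPHERE_RADIUS or dist == 0.0:      # collision detection
        return np.zeros(3)                        # free space: motors off
    penetration = SPHERE_RADIUS - dist
    normal = offset / dist                        # outward surface normal
    return STIFFNESS * penetration * normal       # Hooke's-law penalty force

def servo_loop(device):
    """Run the haptic servo at roughly 1 kHz (device object is hypothetical)."""
    dt = 1.0 / SERVO_RATE_HZ
    while device.is_running():
        pos = device.read_position()              # manipulandum position, meters
        device.command_force(contact_force(pos))  # force -> joint torques -> motors
        device.wait(dt)                           # hold the update rate
```

The same structure generalizes to the cube and cone of the example above; only the collision test and the surface-normal computation change with the object's geometry.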
Collision detection for a virtual tool is much more complicated than that for a point. For either a point or a virtual tool, once the virtual object surface is contacted, realistic contact forces and torques must be derived. Researchers have developed a number of different algorithms for this. For example, if the surfaces are supposed to feel rigid, a very stiff spring is modeled. If the object is deformable, its surface is modeled as a network of springs that can deform much like a mattress deforms. Surfaces of virtual objects need not be smooth: Various texture- and friction-rendering algorithms have been developed. In all these methods, there is a trade-off between rendering accuracy and rendering time that severely taxes computer resources. There are also issues of control stability. When we interact haptically with real objects, energy is almost always dissipated, but if the interaction between a haptic device and a virtual environment is not correctly modeled, energy can be generated, leading to vibration and sudden loss of control. Finding the best haptic rendering algorithms for a given situation continues to be an active area of research.
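One widely used way to keep such an interaction from generating energy is to add a damping term to the penalty force, so that the virtual surface dissipates energy much as real contact does. The Python fragment below extends the spring force of the earlier sketch with such a term; the gain values are illustrative, and in practice they must be tuned against the device's bandwidth and resolution.

```python
import numpy as np

STIFFNESS = 800.0    # N/m, spring term (example value)
DAMPING = 2.0        # N*s/m, dissipative term (example value)

def contact_force_with_damping(penetration, normal, probe_velocity):
    """Spring-damper contact model for a point probe.

    The damper opposes motion into the surface, dissipating energy and
    helping to avoid the vibration and loss of control that a pure,
    too-stiff spring can produce at finite update rates.
    """
    spring = STIFFNESS * penetration * normal
    # Velocity component along the outward normal; negative means moving inward.
    v_normal = float(np.dot(probe_velocity, normal))
    if v_normal < 0.0:
        damper = -DAMPING * v_normal * normal     # push back against inward motion
    else:
        damper = np.zeros(3)                      # no damping on withdrawal
    return spring + damper
```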
Psychophysics of Haptic Interaction

One may characterize the operation of a haptic interface in terms of its engineering parameters, but in the end it is the user’s perception that really matters. Physiologists and psychologists have studied the human sense of touch for many decades. Psychophysics is the scientific study of the relationship between stimuli (specified in physical terms) and the sensations and perceptions evoked by those stimuli. Researchers have striven to characterize and distinguish psychophysical responses from various discriminative touch sensors by performing experiments. For example, the researchers Roberta Klatzky and Susan Lederman conducted an extensive psychophysical analysis of haptic performance under conditions in which the fingertip is covered by a rigid sheath, held in place by a thin rubber glove. This eliminates the spatial pressure gradient normally provided by the mechanoreceptors (particularly slowly adapting receptors such as the Merkel’s disks) and provides only a uniform net force (with, possibly, a gradient at the edges of the sheath that is unrelated to the
felt surface), simulating conditions encountered when using a haptic device. Knowledge about the discriminative touch modality can inform haptic interface design. It would seem that haptic sensation of subtle effects relating to texture and friction, which tend to be high-frequency phenomena communicated mostly through the user’s skin, is of major importance. It is precisely these high-frequency effects that permit a person to assess conditions in the real world rapidly. For example, when a machinist removes a part from a lathe, the first instinct is to feel the quality of the part’s finish; only later is it visually inspected and measured. Recently, psychologists have begun performing experiments to evaluate the efficacy of haptic interaction systems. In these experiments, subjects perform tasks using only vision, or only haptics, or a combination of vision and haptics, and then task performance is objectively measured. For example, subjects may be asked to fit a virtual peg into a close-fitting virtual hole, with the peg’s (or the hole’s) position and orientation and the forces and torques on the peg or hole measured in real time as the task progresses. Subjects in a control group are asked to fit a corresponding real peg into a real hole while the same measurements are made. By contriving to have the setup for the virtual and real cases nearly identical, it is possible to assess the degree of haptic transparency afforded by the haptic computer interaction system. It is generally found that task performance is enhanced when haptic feedback is included, but subjects experience more difficulty dealing with virtual environments than they do with real environments. Results point to the need to improve both haptic devices and haptic rendering algorithms.
How Haptic Computer Interaction Can Be Used

Many people think that haptic computer interaction will have widespread utility in many fields of activity. A number of application areas are under exploration, but none have as yet entered the mainstream. One such field is computer-aided design (CAD), in which countless products are developed for our daily use. With a visual-haptic interface, the designer
of an engine can pick up and place parts into a complicated assembly while feeling the fit and friction characteristics. Haptics can also be used for education and medical training. Students can learn physics involving mechanical or electromagnetic forces while actually feeling the forces. Virtual surgery can be performed, with the student feeling modeled viscoelastic properties of tissues. Multidimensional scientific data might be more easily understood through a visual-haptic interface that allowed the user not only to see the data, but also to feel it at any point. Haptic devices can also be used as hand controllers for virtual vehicles. For example, in a flight simulator, aerodynamic forces and vibration can be transmitted to the user’s hand to provide a more immersive experience. Many other potential applications are under consideration, including use by persons who are blind or visually impaired. Finally, haptics can be used to control remote machinery, such as a robot, with forces and torques reflected back to the operator. There is growing research activity in haptics through the efforts of device designers, algorithm developers, and psychologists. As the field evolves, these disparate specialists are beginning to work together and to share insights, generating new knowledge in a multidisciplinary endeavor. Meanwhile, further applications of haptics continue to be proposed.

Ralph L. Hollis

See also Augmented Reality; Virtual Reality
FURTHER READING

Baraff, D. (1994, July). Fast contact force computation for non-penetrating rigid bodies. Computer Graphics, Proceedings of SIGGRAPH, 23–34.
Berkelman, P. J., & Hollis, R. L. (2000, July). Lorentz magnetic levitation for haptic interaction: Device design, performance, and integration with physical simulations. International Journal of Robotics Research, 19(7), 644–667.
Bolanowski, S. J., Gescheider, G. A., Verrillo, R. T., & Checkosky, C. M. (1988). Four channels mediate the mechanical aspects of touch. Journal of the Acoustical Society of America, 84(5), 1680–1694.
Brooks, F., Jr., Ouh-Young, M., Batter, J. J., & Kilpatrick, P. (1990). Project GROPE: Haptic displays for scientific visualization. Computer Graphics, 24(4), 177–185.
Burdea, G. C. (1996). Force and touch feedback for virtual reality. New York: John Wiley and Sons.
Cotin, S., & Delingette, H. (1998). Real-time surgery simulation with haptic feedback using finite elements. IEEE International Conference on Robotics and Automation, 4, 3739–3744.
James, D. L., & Pai, D. K. (1999, August). ArtDefo, accurate real time deformable objects. Computer Graphics, Proceedings of SIGGRAPH, 66–72.
Jansson, G., Billberger, K., Petrie, H., Colwell, C., Kornbrot, D., Fänger, J. F., et al. (1999). Haptic virtual environments for blind people: Exploratory experiments with two devices. The International Journal of Virtual Reality, 4(1), 10–20.
LaMotte, R. H., & Srinivasan, M. A. (1990). Surface microgeometry: Tactile perception and neural encoding. In D. Franzen & J. Westman (Eds.), Information processing in the somatosensory system (pp. 49–58). New York: Macmillan.
Lederman, S. J., & Klatzky, R. L. (1999). Sensing and displaying spatially distributed fingertip forces in haptic interfaces for teleoperator and virtual environment systems. Presence, 8(1), 86–103.
Massie, T. H., & Salisbury, J. K. (1994). The PHANToM haptic interface: A device for probing virtual objects. Proceedings of ASME Winter Annual Meeting, Dynamic Systems and Control, 55, 295–301.
McLaughlin, M. L., Hespanha, J. P., & Sukhatme, G. S. (Eds.). (2002). Touch in virtual environments. Prentice Hall IMSC Press Multimedia Series. Upper Saddle River, NJ: Prentice Hall.
Seow, K. (1988). Physiology of touch, grip, and gait. In J. G. Webster (Ed.), Tactile sensors for robotics and medicine (pp. 13–40). New York: John Wiley and Sons.
Tan, H. Z., Lim, A., & Traylor, R. M. (2000). A psychophysical study of sensory saltation with an open response paradigm. Proceedings of the 9th International Symposium on Haptic Interfaces for Virtual Environments and Teleoperator Systems, ASME Dynamic Systems and Control Division, 69(2), 1109–1115.
HEALTH ISSUES AND HCI

See Brain-Computer Interfaces; Cybersex; Keyboard; Law and HCI; Privacy; Work
HELP SYSTEMS

See Adaptive Help Systems; Artificial Intelligence; Cognitive Walkthrough; Errors in Interactive Behavior; Information Filtering; Instruction Manuals; User Support
HISTORY OF HCI

The history of human-computer interaction (HCI) includes the evolution of widespread practices. It also includes people, concepts, and advances in understanding that inspired new developments. Often decades elapse between visions or working demonstrations of concepts and their widespread realization. The field of HCI can be understood in terms of existing practices, new visions, and hardware that became substantially more powerful year after year.
Human Factors before Computers

Through the centuries people developed highly specialized tools to support carpenters, blacksmiths, and other artisans. However, efforts to apply science and engineering to improving the efficiency of work practice became prominent only a century ago. Time-and-motion studies exploited inventions of that era such as film and statistical analysis. The principles of scientific management published in 1911 by the U.S. efficiency engineer Frederick Taylor had limitations, but they were applied to U.S. assembly line manufacturing and other work practices in subsequent decades. World War I motivated a similar focus in Europe. World War II accelerated behavioral engineering as complex new weaponry tested human capabilities. One design flaw could cause thousands of casualties. A legacy of the war effort was an enduring interest in human factors or ergonomics in design and training. (Another legacy was the creation of the first digital computers.) Early approaches to improving work and the “man-machine interface” focused on the nondiscretionary (mandatory) use of technology. The assembly line worker was hired to use a system. The soldier was given equipment. They had no choice in the matter. If training was necessary, they were trained. The goals of workplace study and technology improvement included reducing errors in operation, increasing the speed of operation, and reducing training time. When use is nondiscretionary, small improvements help.
A Personal Story—Highlights from My Forty Years of HCI

Prehistory
1955 Receive my first desktop word processor when parents take out of storage an old black-framed Underwood typewriter to do my school papers for junior high.
1957 Discover K&E slide rules at Bronx High School of Science for use in math and physics class. Think it really cool to have an 18-inch one swinging from its own holster on my belt: a nerd gunslinger.
1959 As I go off to Lafayette College, parents give me my first laptop: a grey Royal portable typewriter, manual, of course.
1963 Leave for Harvard social relations graduate school with new baby-blue Smith Corona portable. The first affordable electric typewriter, albeit with a manual carriage return.
1964 Make one of the most important decisions in my life, by taking a computer course. Learn to keypunch and use a counter-sorter: the thing with twelve pockets that movies used to love because it gave good visuals.
1964 Discover that the command “do not fold, bend, spindle or mutilate” printed on my utility bills was because IBM cards treated this way would jam in counter-sorters and accounting machines. Henceforth, mutilate all my utility cards as a 1960s anti-bureaucratic protest and to create more jobs for workers who had to cope by hand with my de-automated card.
Mainframe
1964 Learn to program in FAP [early Assembler] and Fortran II and IV. Submit many jobs (stacks of IBM punch cards) to the Harvard computer center, and sometimes get meaningful output back 10 hours later.
1966 Much of my dissertation data analysis done at this time on the new DataText do-it-yourself statistics program, which liberates scholars from dependence on technicians to do analyses.
1967 Just before I leave Harvard, I view the remote teletype access between our building and Project Mac at MIT and ARPAnet. I am amazed, but didn’t fully appreciate that this was a precursor of the Internet revolution.
1973 Start using IBM Selectric typewriter—with correction button—to write papers.
1976 Meet Murray Turoff and Roxanne Hiltz who are developing EIES, one of the first civilian e-mail-like systems, combining messaging and computerized conferencing. I happily join them and have not been offline since.
Remote Access to Mainframe
1979 New HCI interfaces arrive at the University of Toronto. I can submit computer runs from a typewriter-like terminal near my office instead of having to trudge to the mainframe building.
1985 E-mail becomes prevalent on a less-experimental basis: Bitnet in the U.S./Canada and later Netnorth in Canada. The two systems interconnect and link less easily with European equivalents. I schmooze with people near and far, and have never stopped.
Stand-Alone Personal Computing
1987 Buy my first stand-alone computer, the hot new 6-megahertz IBM AT with a 20-megabyte hard disk, one of the hottest machines of the time. No longer must I negotiate with secretaries to type and retype drafts of papers. This leads to more editing of drafts by me, but also greater productivity in the number of papers published.
The Internet Matures
1990 Cajole the director of the centre to patch together an Ethernet line running through all offices. It connects to a router that links to the computer centre mainframe
1992 Buy my first laser printer for personal use: a Hewlett-Packard 4M. No need to trudge down the hall to print final versions. Now have the ability to print pictures and graphs at my own desk.
1992 The modern Internet era begins, with the expansion of e-mail to the rest of the world, especially through dot.com Internet addresses and commercial Internet Service Providers.
1994 Buy my first computer for the house. Saves me the trouble of going to the office on nights and weekends to do statistical analyses, e-mail or write.
1995 Buy my first laptop (Dell) to take with me as a visiting professor at Hebrew University. To cope with busy signals, I set my alarm for 2 am to rise and sign-on to the Internet.
Powerful Personal Computing
1996 Start Netville study with Keith Hampton of a highly wired suburb near Toronto, with 16MB connection. Discover that even more than high-speed Internet connections, people value the ability to always keep their Internet connections on so that they can quickly share a thought.
1996 Personal computers now fast enough that it is feasible to do large statistical analyses on them. Their graphical interfaces make it easier to compile commands at the cost of some flexibility.
1997 Buy first printer for home use. Now there is even less reason to go to the office.
1998 Early adopter of Google search engine. Miraculously fast, awesome coverage, and it uses social network analytic principles to identify important sites. Bookmarking now becomes a minor convenience rather than an absolute necessity for navigating the web.
1999 Obtain high-speed broadband connection (DSL) for the house, making it easy to access websites rapidly—and our phone line is no longer tied up by computer use.
The Internet Proliferates
2001 Set up my own website which has 35,000 hits by March 2004. People stop writing to me for my papers; they just go to the site and download them.
2002 Teach an undergraduate Internet and Society course in a “smart classroom” where each student has a PC on her desk, hardwired into the Internet. Biggest challenge is to stop students from emailing, IMing, and web surfing during class so that they will pay attention to my lectures.
2002 Student assistants reduce their trips to the library, since most scholarly journals now have electronic versions online. I then accumulate many articles to read because they are so easy to obtain online. Sometimes I even get to read them.
2003 Phone calls have essentially stopped coming to the office, except from my wife and the news media. Get over 100 e-mail messages a day (half of them spam), and send about 50.
2004 Have more than 2,500 e-addresses in my address book.
2004 After forty years of computing, I suffer a herniated (slipped) cervical disk. I have sat at computers for too many hours, days, and years.

Barry Wellman

Source: Wellman, B. (2004). HCI: A personal timeline. Retrieved April 1, 2004, from http://www.chass.utoronto.ca/~wellman/publications/hci_timeline/timeline-hci.pdf
Early Visions and Demonstrations

Until transistor-based computers appeared in 1958, visions of their future use existed mainly in the realm of science fiction because vacuum tube-based computers had severe practical limitations. The most influential early vision was Vannevar Bush’s 1945 essay “As We May Think.” Bush, who played a key role in shaping government funding of research in science and technology during and after World War II, described an inspiring albeit unrealistic mechanical device that anticipated many capabilities of computers. Key writings and prototypes by HCI pioneers appeared during the next decade. J. C. R. Licklider, a research manager with a background in psychology, outlined requirements for interactive systems and accurately predicted which would prove easier and which more difficult to fulfill (such as visual displays and natural language understanding, respectively). Computer scientists John McCarthy and Christopher Strachey proposed time-sharing systems, crucial to the spread of interactive computing. In 1963 Ivan Sutherland’s Sketchpad display system demonstrated copying, moving, and deleting of hierarchically organized objects, constraints, iconic representations, and some concepts of object-oriented programming. Douglas Engelbart formulated a broad vision, created the foundations of word processing, invented the mouse and other input devices, and conducted astonishing demonstrations of distributed computing that integrated text, graphics, and video. Ted Nelson’s vision of a highly interconnected network of digital objects foreshadowed aspects of the World Wide Web, blog (or Web log), and wiki (multiauthored Web resource) technologies. Rounding out this period were Alan Kay’s visions of personal computing based on a versatile digital notebook. These visions focused on the discretionary use of technology, in articles that included “Man-Computer Symbiosis,” “Augmenting Human Intellect,” and “A Conceptual Framework for Man-Machine Everything.” Technology would empower people to work and interact more effectively and flexibly. These visions inspired researchers and programmers to work for the decades needed to realize and refine them. Some of the capabilities that the visions anticipated are now taken for granted; others remain elusive.
Nondiscretionary or discretionary use?

Real life lies somewhere along a continuum from the assembly line nightmare satirized in the English actor Charlie Chaplin’s movie Modern Times to utopian visions of completely empowered people. Although some design opportunities and challenges affect the spectrum of computer use, distinct efforts focused on discretionary and nondiscretionary use have proceeded in parallel with only modest communication.
The First Forty Years

A key to understanding the evolution of human-computer interaction is to describe the evolution of who interacted with computers, why they did so, and how they did it. (See Figure 1.) The first computer builders did everything themselves. Following the appearance of commercial systems, for three decades most hands-on computer users were computer operators. Programmers and those people who read printed output interacted with computers but not directly. Labor was thus divided into three categories:
(1) Operators interacted directly with a computer: maintaining it, loading and running programs, filing printouts, and so on.
(2) Programmers, a step removed from the physical device, might leave a “job” in the form of punched cards to be run at a computer center, picking up the cards and a printout the next day.
(3) Users specified and used a program’s output, a printout, or report. They, too, did not interact directly with the computer.
Supporting Nondiscretionary Use by Computer Operators

In the beginning the computer was so costly that it had to be kept gainfully occupied for every second; people were almost slaves to feed it. (Shackel 1997, 997)
For the first half of the computer era, improving the experience of hands-on users meant supporting low-paid operators. An operator handled a computer as it was, setting switches, pushing buttons, reading lights, feeding and bursting (separating) printer
paper, loading and unloading cards, magnetic tapes, and paper tapes, and so on. Teletypes were the first versatile mode of direct interaction: Operators typed commands, and the computer printed responses or spontaneous status messages. The paper printout scrolled up, one line at a time. Displays (called “video display units,” “video display terminals,” or “cathode ray tubes”) were nicknamed “glass ttys” (glass teletypes) because they functioned much the same, displaying and scrolling up typed operator commands, computer-generated responses, and status messages. Most were monochrome and restricted to alphanumeric characters. The first displays to be marketed commercially cost around $50,000 in today’s dollars: expensive, but still a small fraction of the cost of a business computer. Typically one console or terminal accompanied a computer for use by an operator. Improving the design of buttons, switches, and displays was a natural extension of traditional human factors. In 1959 Shackel published “Ergonomics for a Computer,” followed in 1962 by “Ergonomics in the Design of a Large Digital Computer Console.” Little published research followed for a decade. In 1970 Shackel’s HUSAT (Human Sciences and Advanced Technology) research center was formed, focused on general ergonomics. The first influential book was James Martin’s 1973 Design of Man-Computer Dialogues. After a visionary chapter that remains interesting to read, the book surveyed existing approaches to supporting operators. Yet, it was written for programmers and conveyed a sense of changes ahead. In 1980 five major HCI books were published; two focused on video display terminal (VDT) design and one on general ergonomic guidelines. Germany published VDT standards in 1981. By threatening existing products, these standards made designing for human capabilities a visible economic issue. Also during 1980, Stuart Card, Thomas Moran, and Allen Newell’s article “Keystroke-Level Model for User Performance Time with Interactive Systems” was published. They wrote: “The central idea behind the model is that the time for an expert to do a task on an interactive system is determined by the time it takes to do the keystrokes” (397).
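The arithmetic behind the model is simple enough to sketch. The Python fragment below sums approximate per-operator times of the kind reported for the keystroke-level model; the exact figures vary by study, device, and user, so they should be read as illustrative, and the task encoding is hypothetical.

```python
# Approximate keystroke-level-model operator times, in seconds.
# Figures of roughly this order appear in the published model; they vary
# with the user and the device, so treat them as illustrative only.
OPERATOR_TIMES = {
    "K": 0.2,    # press a key or button
    "P": 1.1,    # point with a mouse to a target on the screen
    "H": 0.4,    # move hands between keyboard and mouse
    "M": 1.35,   # mental preparation before a step
}

def klm_estimate(sequence):
    """Sum operator times to predict expert, error-free task time."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# A hypothetical "delete a word" task: think, point at the word,
# home to the keyboard, then press a two-key chord.
print(klm_estimate(["M", "P", "H", "K", "K"]))   # about 3.25 seconds
```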
The keystroke-level model and successors such as GOMS (goals, operators, methods, selection rules) were used to help quintessentially nondiscretionary users such as telephone operators, people engaged in repetitive tasks involving little reflection. GOMS added higher-level cognitive elements to the perceptual-motor focus of “knobs and dials” human factors. A series of ergonomically justified interface guidelines culminated in 1986 with the publication of human factors experts Sidney Smith and Jane Mosier’s 944 guidelines. Sections were entitled “Data Entry,” “Data Display,” “Data Transmission,” “Data Protection,” “Sequence Control,” and “User Guidance.” The emphasis was on supporting operators. The guidelines mentioned graphical user interfaces (GUIs), then a new development, and the major shift and expansion of the design space ushered in by GUIs may have been a factor in discontinuing the guideline effort. By then change was rippling through the industry. Mainframe computers and batch processing still dominated, but time sharing of computers was allowing new uses, minicomputers were spreading, and microcomputers were starting to appear. Hands-on computing was becoming available to people who were not computer professionals, who would use technology only if it helped them work better. Improving the life of discretionary users had a history in the visions of Bush and others, of course, but also in the support for the other two categories of computer users: programmers and users of the output.
Supporting Discretionary Use by Computer Programmers

Early programmers used a computer directly when they could because doing so was fun and faster. However, the cost of computers largely dictated the division of labor noted previously. Working as a programmer during the mid-1970s, even at a computer company, typically meant writing programs on paper that were then punched onto cards by keypunch operators. The jobs were run by computer operators, and the programmer received printed output. Improving the programmers’ interface to a computer meant developing constructs (e.g., subroutines), compilers, and programming languages. Grace Hopper was a farsighted pioneer in this effort through the 1950s.
Programmers also worked to advance computer technology. In 1970 the Xerox company’s Palo Alto Research Center (PARC) was founded, with a focus on advancing computer technology. In 1971 Allen Newell proposed a project that was launched three years later: “Central to the activities of computing— programming, debugging, etc.—are tasks that appear to be within the scope of this emerging theory” of the psychology of cognitive behavior (quoted in Card and Moran 1986, 183). PARC and HUSAT were launched in 1970 and engaged in a broad range of research but with an interesting contrast. HUSAT research was focused on ergonomics, anchored in the tradition of nondiscretionary use, one component of which was the human factors of computing. PARC research was focused on computing, anchored in visions of discretionary use, one component of which was also the human factors of computing. PARC researchers extended traditional human factors to higher level cognition; HUSAT and European researchers introduced organizational considerations. Thousands of papers written on the psychology and performance of programmers were published during the 1960s and 1970s. Gerald Weinberg published the book The Psychology of Computer Programming in 1971. In 1980, the year when three books on VDT design and ergonomics were published, Ben Shneiderman published Software Psychology. In 1981 B. A. Sheil wrote about studies of programming notation (e.g., conditionals, control flow, data types), programming practices (flowcharting, indenting, variable naming, commenting), and programming tasks (learning, coding, debugging) and included a section on experimental design and statistical analysis. With time sharing and minicomputers in the late 1970s and 1980s, many programmers became enthusiastic hands-on users. Ongoing studies of programmers became studies of hands-on users. When personal computing was introduced, studies shifted to other discretionary users. The book Human interaction with computers, edited by Thomas R. G. Green and Harold T. Smith and also published in 1980, foreshadowed the shift. With a glance at “the human as a systems component,” one third of the survey was devoted to research on programming and the rest to designing for “non-specialist people,” meaning people who were not
computer specialists—that is, those people we now call discretionary users. The preface of the book echoed early visions: “It’s not enough just to establish what people can and cannot do; we need to spend just as much effort establishing what people can and want to do” (viii). Another effort to bridge the gap between programmers and other professionals emerged in John Gould’s group at IBM Watson Labs. Like the PARC applied psychology group, the Gould group evolved through the 1970s and 1980s from a focus that included perceptual-motor studies and operator support to a cognitive one. In order to expand the market for computers, IBM realized it would be necessary to make them usable by people who could not be expected to program complex systems. Many key participants in early HCI conferences, including Ruven Brooks, Bill Curtis, Thomas Green, and Ben Shneiderman, had studied the psychology of programming. Papers written on programmers as users were initially a substantial part of these conferences but gradually disappeared as programmers became a smaller subset of computer users. Other factors contributed to a sense that HCI was a new undertaking. Graphic displays dropped in price and became widely used during the late 1970s, opening a large, challenging design space. In the United States, academic hiring of cognitive psychology Ph.D.s fell sharply during the late 1970s, just when computer and telecommunication companies were eager to hire psychologists to tackle perceptual and cognitive design issues. In 1969 the Association for Computing Machinery (ACM) formed a special interest group (SIG) for social and behavioral scientists using computers as research tools. In 1982 this group of discretionary computer users decided to change its name and charter to the Special Interest Group on Computer-Human Interaction (SIGCHI), focusing on behavioral studies of computer use, or human-computer interaction. SIGCHI drew heavily from cognitive psychology and software psychology and from sympathetic programmers and computer scientists. Many programmers and scientists were unaware of prior human factors studies of operators. Some cross-publication existed between human factors and human-computer interaction, but the endeavors remained distinct.
In Europe computer companies exerted less influence, and research boundaries were less distinct. In 1977 the Medical Research Council Applied Psychology Unit, renowned for theoretically driven human factors research, initiated an IBM-funded HCI project with a focus on discretionary use. In a 1991 survey with a European perspective, Liam Bannon decried the slow shift to a discretionary focus while also critiquing those people who adopted that focus for mostly considering initial use by new users. Some tensions existed between the human factors and human-computer interaction communities. The former felt that its past work was not fully appreciated by the latter. Although methods and goals overlapped, the agendas of the two camps differed. A 1984 study contrasting performance and preference of users found evidence that even for a repetitive task, users might prefer an interaction technique that was pleasant but slower. Although this evidence was of interest to people studying discretionary use, a leading GOMS proponent recommended suppressing its publication lest it undermine the mission of maximizing performance. Businesses acquired the first expensive business computers to address major organizational concerns. Sometimes merely the prestige of an air-conditioned, glass-walled computer room justified the expense, but most computers were put to work. Output was routed to managers. In the field variously called “data processing” (DP), “management information systems” (MIS), “information systems” (IS), and “information technology” (IT), the term users referred to these managers. Like early programmers, they were well paid, discretionary, and not hands on. Supporting managerial use of computers meant improving the visual display of information, first on paper and eventually on displays as well. Because much computer output was quantitative, research included the design of tables, charts, and graphs; such “business graphics” were one focus of much MIS usability research. Interaction with information remains central today: “Human Computer Interaction studies in MIS are concerned with the ways humans interact with information, technologies, and tasks” (Zhang et al. 2002, 335). (All computer users viewed information. Operators used manuals and displays; programmers used flowcharts and printouts. Thus,
research on the organization and layout of information focused on both human factors and the psychology of programming.) MIS also introduced an organizational focus: approaches to deploying systems, resistance to systems, effects of using systems. As HCI came to include more group support issues, people found and explored commonalities at computer-supported cooperative work conferences. Whereas by 1985 almost all programmers were hands-on computer users, until the late 1990s most managers avoided hands-on use. Managers still delegate much technology use, but most now use some software directly. Not surprisingly, management interest in HCI is growing, with an interest group and significant symposia and workshops conducted at major conferences since 2001.
Government Role in System Use and Research Funding

Governments were the major purchasers of computers during the decades when “feeding the computer” was the norm. In addition to operators, governments employed vast numbers of data entry and other nondiscretionary users. Supporting these people meshed naturally with the focus on designing to fit human capabilities that arose in the world wars. Competitively bid contracts present challenges for government acquisition of systems. The government has to remain at arm’s length from the developer yet needs to specify requirements in advance. This situation led the U.S. government to participate in establishing ergonomic standards during the late 1970s and 1980s. Compliance with interface design standards and guidelines could be specified in a government contract. For example, Smith and Mosier’s guideline development effort mentioned previously was funded by the U.S. Air Force. Government agencies were early adopters of computers, but the work conducted by such agencies changes only gradually. Such agencies no longer employ computer operators in large numbers, but huge numbers of data entry and handling personnel remain at agencies concerned with such issues as census, taxes, and health and welfare. Power plant
[FIGURE 1. Timeline with publications and events. The figure charts parallel tracks for Human Factors and Ergonomics, Psychology of Programming, Visions and Prototypes, and Human-Computer Interaction, spanning nondiscretionary and discretionary use from Taylorism and World War I training (1915) to an ongoing priority for government funding (2005).]
operators and air traffic controllers are glued to systems that evolve very gradually. Ground control operations for space launches require highly trained computer users. Soldiers require training in equipment; weaponry grows ever more complex. The quantity of text and voice intercepts processed by intelligence agencies is immense. Overall, government remains a major employer of nondiscretionary computer users. Improving their work conditions is a central concern. Small efficiency gains in individual interactions can provide large benefits over time. Government is also a major funding source for information technology research. In Europe national and European Union initiatives have been the principal funding source. In Japan the government has funded major initiatives with HCI components, such as the Fifth-Generation Project. Since World War II the U.S. National Science Foundation, armed services (led by proponents of basic research in the Office of Naval Research), and intelligence agencies have been major sources of funding, although research laboratories established by telecommunication, hardware, and software companies have also been prominent since the 1970s. U.S. government funding remains focused on nondiscretionary computer use, with greatest emphasis being on speech recognition and natural language understanding. Little research on these two topics appears in HCI conferences, even though some people hope that the topics will eventually be more useful in discretionary situations. The National Science Foundation has also funded substantial work on using brainwaves to guide computer displays. This is another technology that may have its uses but probably not in many homes and offices. Research on nondiscretionary use is published in specialized journals and at conferences, including those of the Human Factors and Ergonomics Society and HCI International.
Corporate/Academic Role in System Use and Research

Companies employ the information workers whose discretion has been growing, and a subset of technology company employees and academics from a
variety of disciplines is researching, developing, and marketing interactive systems. Few computer and software companies that focused on nondiscretionary use during the mainframe and minicomputer eras still do today. Most major vendors that thrived then are gone today, and the few that remain (IBM comes to mind) reformed themselves during the transition to discretionary use of computers during the 1980s. Most companies that are now active in human-computer interaction research and innovation came into prominence during the 1980s and 1990s, by which time it was a good commercial strategy to appeal to users who exercised discretion— either individual purchasers or organizations. HCI practitioners started with methods and perspectives from human factors and experimental psychology, much as interactive software developers inherited waterfall models—defining a series of steps in designing, developing, testing, and maintaining a new product—that had been crafted for other purposes. The need for new methods was obscured by the fact that traditional ergonomic goals of fewer errors, faster performance, quicker learning, greater memorability, and enjoyment still applied, although not with the same relative priority. Several factors led to change. The notion that “friendly” interfaces are frills in the workplace was eroded when people asked, “Why shouldn’t my expensive office system be as congenial as my home computer?” Also, as more software appeared, training people on each application was not feasible. Ease of learning was critical. As software came to support more group activities and detailed work practices, lab studies were supplemented with social and ethnographic (cultural studies) methods in research and practice. Contextual design and personas (fully specified but fictional users) are recent innovations far removed from techniques of early HCI. Finally, the need to seduce discretionary users grew as software became more capable and competitive. The Web promoted design aesthetics (not a major issue for data entry operators) and marketing (previously considered a distraction) as central to human-computer interaction. Reflecting this expansion of focus, SIGCHI co-sponsored the Designing for User Experiences (DUX) 2003 conference.
This evolution is captured by the work of Don Norman. In the first paper presented at the first CHI (computer-human interaction) conference, “Design Principles for Human-Computer Interfaces,” he focused on tradeoffs among the attributes of speed of use, prior knowledge required, ease of learning, and errors. Twenty years later, in 2003, he published Emotional Design.
Related Theory and Disciplines

The early 1980s were marked by a strong effort to provide a theoretical framework, drawn especially from cognitive psychology, for a field previously dominated by an engineering perspective. This effort was paralyzed by the rapid changes in the field. Graphical user interfaces and multimedia swept away interest in phenomena that had been the focus of theoretical analysis, such as command naming (selecting memorable names for computer commands), and introduced a daunting array of new challenges. The growing focus on computer-mediated interaction between humans challenged the centrality of cognition, and awareness of the role of design and marketing further reduced the prospects for an encompassing theoretical framework. A recent compilation of theory and modeling approaches to HCI includes several chapters with a cognitive orientation, a few with social science or cognitive-social hybrids, and one focused on computer science. As the academic home of HCI moved from psychology to computer science, HCI became more entwined with software engineering. Artificial intelligence has also had several points of contact with HCI.
The Trajectory

The unanticipated arrival and consequences of the Web demonstrated the difficulty of anticipating the future, but a key goal of organizing a history is to identify trends that may continue. Discretionary computer use continues to spread. Nondiscretionary use remains significant and benefits from better understanding and interfaces wherever they originate. For many people software use that was once discretionary has become mandatory—we
can’t work without it. Increased digitally mediated collaboration forces us to adopt the same systems and conventions for using these systems. If we have choices, we may have to exercise them collectively. This situation in turn motivates people to develop features for customization and interoperation. For example, in 1985 each member of a team chose a word processor and exchanged printed documents with other members. In 1995 the team members may have had to use the same word processor to share documents digitally. Today the team members can again use different word processors if sharing documents digitally in PDF format suffices. One constant in the computer era has been the keyboard and display as a central interface component. With relentless miniaturization of components and increase in power, this era is fading. The expansion of human-computer interaction is clearly only a beginning.

Jonathan Grudin

See also Altair; Desktop Metaphor; ENIAC; Graphical User Interface
FURTHER READING

Baecker, R., Grudin, J., Buxton, W., & Greenberg, S. (1995). Readings in human-computer interaction: Toward the year 2000. San Francisco: Morgan Kaufmann.
Bannon, L. (1991). From human factors to human actors: The role of psychology and human-computer interaction studies in system design. In J. Greenbaum & M. Kyng (Eds.), Design at work (pp. 25–44). Hillsdale, NJ: Erlbaum.
Barnard, P. (1991). Bridging between basic theories and the artifacts of human-computer interaction. In J. M. Carroll (Ed.), Designing interaction: Psychology at the human-computer interface (pp. 103–127). Cambridge, UK: Cambridge University Press.
Beyer, H., & Holtzblatt, K. (1998). Contextual design. San Francisco: Morgan Kaufmann.
Bush, V. (1945). As we may think. The Atlantic Monthly, 176, 101–108.
Card, S. K., & Moran, T. P. (1986). User technology: From pointing to pondering. Proceedings of the Conference on History of Personal Workstations, 183–198.
Card, S. K., Moran, T. P., & Newell, A. (1980). Keystroke-level model for user performance time with interactive systems. Communications of the ACM, 23(7), 396–410.
Carroll, J. M. (Ed.). (2003). HCI models, theories and frameworks. San Francisco: Morgan Kaufmann.
Carroll, J. M., & Campbell, R. L. (1986). Softening up hard science: Response to Newell and Card. Human-Computer Interaction, 2(3), 227–249.
Dyson, F. (1979). Disturbing the universe. New York: Harper & Row.
Engelbart, D. (1963). A conceptual framework for the augmentation of man’s intellect. In P. Howerton & D. Weeks (Eds.), Vistas in information handling (pp. 1–29). Washington, DC: Spartan Books.
Engelbart, D., & English, W. (1968). A research center for augmenting human intellect. AFIPS Conference Proceedings, 33, 395–410.
Fano, R., & Corbato, F. (1966). Time-sharing on computers. Scientific American, 214(9), 129–140.
Greenbaum, J. (1979). In the name of efficiency. Philadelphia: Temple University Press.
Grudin, J. (1990). The computer reaches out: The historical continuity of interface design. In Proceedings of the SIGCHI conference on human factors in computing systems ’90 (pp. 261–268). New York: ACM Press.
Grudin, J., & MacLean, A. (1984). Adapting a psychophysical method to measure performance and preference tradeoffs in human-computer interaction. In Proceedings of INTERACT ’84 (pp. 338–342). Amsterdam: North Holland.
Kay, A., & Goldberg, A. (1977). Personal dynamic media. IEEE Computer, 10(3), 31–42.
Licklider, J. (1960). Man-computer symbiosis. IRE Transactions of Human Factors in Electronics, 1(1), 4–11.
Licklider, J., & Clark, W. (1962). On-line man-computer communication. AFIPS Conference Proceedings, 21, 113–128.
Long, J. (1989). Cognitive ergonomics and human-computer interaction. In J. Long & A. Whitefield (Eds.), Cognitive ergonomics and human-computer interaction (pp. 4–34). Cambridge, UK: Cambridge University Press.
Martin, J. (1973). Design of man-computer dialogues. Englewood Cliffs, NJ: Prentice Hall.
Nelson, T. (1965). A file structure for the complex, the changing, and the indeterminate. In Proceedings of the ACM National Conference (pp. 84–100). New York: ACM Press.
Nelson, T. (1973). A conceptual framework for man-machine everything. In AFIPS Conference Proceedings (pp. M21–M26). Montvale, NJ: AFIPS Press.
Newell, A., & Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1(3), 209–242.
Norman, D. A. (1983). Design principles for human-computer interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1–10). New York: ACM Press.
Norman, D. A. (2003). Emotional design: Why we love (or hate) everyday things. New York: Basic.
Sammet, J. (1992). Farewell to Grace Hopper—End of an era! Communications of the ACM, 35(4), 128–131.
Shackel, B. (1959). Ergonomics for a computer. Design, 120, 36–39.
Shackel, B. (1962). Ergonomics in the design of a large digital computer console. Ergonomics, 5, 229–241.
Shackel, B. (1997). Human-computer interaction: Whence and whither? JASIS, 48(11), 970–986.
Sheil, B. A. (1981). The psychological study of programming. ACM Computing Surveys, 13(1), 101–120.
Shneiderman, B. (1980). Software psychology: Human factors in computer and information systems. Cambridge, MA: Winthrop.
Smith, H. T., & Green, T. R. G. (Eds.). (1980). Human interaction with computers. New York: Academic.
Smith, S. L., & Mosier, J. N. (1986). Guidelines for designing user interface software. Bedford, MA: MITRE.
Sutherland, I. (1963). Sketchpad: A man-machine graphical communication system. AFIPS, 23, 329–346. Taylor, F. W. (1911). The principles of scientific management. New York: Harper. Weinberg, G. (1971). The psychology of computer programming. New York: Van Nostrand Reinhold. Zhang, P., Benbadsat, I., Carey, J., Davis, F., Galleta, D., & Strong, D. (2002). Human-computer interaction research in the MIS discipline. Communications of the Association for Information Systems, 9, 334–355.
HOLLERITH CARD
The Hollerith card, also known as a “punch card” or an “IBM card,” was the preeminent digital medium of data entry and storage for three-quarters of a century until its replacement by the magnetic floppy disk. Hollerith cards were part of a system that specified particular relations between human beings and data processing machinery that were very different from the modern relations between human beings and real-time, networked systems. Herman Hollerith (1860–1929) attended Columbia College School of Mines and then gained valuable experience working on the 1880 U.S. census. A tabulating machine, invented by Charles W. Seaton, did none of the counting itself but merely moved scrolls of paper between rollers so that clerks could write information conveniently. After working with Seaton, Hollerith taught at the Massachusetts Institute of Technology while experimenting with his own ideas about data automation. Hollerith’s first approach was to punch holes in long paper strips, but he quickly switched to cards because correcting errors on punched paper strips was difficult. For a century, data had been encoded as holes in cards, for example, in music boxes, experimental player pianos, and the Jacquard loom that controlled complex patterns in weaving cloth. Hollerith was inspired by seeing train conductors punch tickets that recorded information about passengers, but he found that the conductors’ punch tool caused painful cramping of the hand. Hollerith filed a series of patents in the 1880s, then demonstrated his system with the 1890 census. Hollerith’s 1890 system combined manual methods with both mechanical and electric methods. A clerk entered data with a manual pantograph punch
that helped locate the right points on cards and was more comfortable to use than a conductor’s punch tool. To tabulate the data, a clerk would place the cards into a press one at a time. The top of the press held a number of pins that could move up or down, one for each possible hole. If the hole had been punched, the pin would make electrical contact with a drop of mercury in a tiny cup, which would activate a counter. If the hole had not been punched, the pin would ride up and fail to close the circuit. To the right of the press was a sorter consisting of boxes with electrically operated lids. When a clerk closed the press, one of the lids would open so that the clerk could place the card into that particular box. For example, if each card represented data about one person, the sorter helped a clerk divide males from females for later separate analysis. During the following years Hollerith developed each part of this system: punching, counting, and sorting. Within a decade the cumbersome mercury cups and lidded boxes had been replaced, and cards were automatically fed at great speed through the tabulating and sorting equipment. Sets of electric relays combined data from different variables much as transistors would do in computers a century later but were programmed manually by plugging wires, as in the period’s telephone operator equipment. Goaded by a competitor, Hollerith added electrically driven keypunch machines in 1914. His system included innovative business practices, such as leasing rather than selling the machines and making much of his profit by selling the cards themselves. The Tabulating Machine Company he founded in 1896 was a precursor of IBM. Hollerith cards remained one of the major methods for data input for electronic computers as these machines were introduced during the 1940s and 1950s. As late as 1980, many U.S. universities still had keypunch machines on which scientists and administrative staff would enter data. In terms of human-computer interaction, these were dramatic and rather loud machines that could actually damage the user’s hearing if many machines were in a small room. A typical form was a desk with a fixed keyboard and the apparatus above and at the back. A stack of cards would be placed into a hopper, where the cards would feed one at a time into the machine. They were punched directly in front of the
operator, where they could be clearly seen as they jumped sideways when each hole was punched, until they zipped into the output pile. The operator might place around a drum a previously punched control card that programmed the keypunch to skip or specially punch certain columns. After they were punched, the cards would be placed into card readers, and processed data could be automatically punched on a new set of cards if desired. Cards locked users into the batch processing mode, in which users would carefully prepare a computer run at the keypunch machine, then wait as long as several hours for the fanfold computer printout that was the result of the particular job, but users could not interact directly in real time with the computer. A typical card was rectangular, with space for eighty columns of rectangular holes and room for twelve holes in each column, ten of which were marked by printed numerals 0–9. One hole in a given column represented a single digit, and a second hole in row 11 could mean a minus sign. Letters of the alphabet and other characters were represented by the combination of one hole in the first ten rows plus zone punches in rows 11 and 12 or by multiple punches in the numerical rows. Multipunched cards were flimsy, so cautious users made duplicate copies of their card decks. One advantage of Hollerith cards over magnetic data media is that human beings can read the cards directly. Already in 1902 Hollerith’s keypunch could typewrite the data across the top of the columns, and he himself noted how easy it was to sort the cards manually by gently putting a knitting needle through the holes. He cut the upper left corner off the cards so a user could immediately see if any had been placed the wrong way. The great fear of card users was that they might spill an entire box of two thousand punched cards, so many users marked lines diagonally across the top edges to facilitate reordering them if necessary. Hollerith cards may have been inconvenient in many respects, but they helped launch modern information technology and gave users a more intimate experience of data than do today’s fully electronic media. William Sims Bainbridge See also ENIAC
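The card code described in this entry can be made concrete with a short sketch. The following Python fragment is an illustration only: it assumes the zone assignments of the later standard IBM code (zone rows 12, 11, and 0 combined with the digit rows), not the special-purpose layouts of the 1890 census cards, and the function name is hypothetical.

def punches(ch):
    """Return the rows punched in one card column for a digit or letter."""
    if ch.isdigit():
        return [int(ch)]                  # a single hole in rows 0-9
    i = ord(ch.upper()) - ord("A")
    if 0 <= i <= 8:                       # A-I: zone 12 plus digits 1-9
        return ["12", i + 1]
    if 9 <= i <= 17:                      # J-R: zone 11 plus digits 1-9
        return ["11", i - 8]
    if 18 <= i <= 25:                     # S-Z: zone 0 plus digits 2-9
        return ["0", i - 16]
    raise ValueError("not a digit or letter")

print(punches("7"))   # [7]
print(punches("A"))   # ['12', 1]
print(punches("S"))   # ['0', 2]

Decoding a column is simply the reverse lookup, which is also what allowed the keypunch to print a readable interpretation across the top of the card.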
FURTHER READING Austrian, G. D. (1982). Herman Hollerith: Forgotten giant of information processing. New York: Columbia University Press.
HUMAN FACTORS ENGINEERING See Anthropometry; Keyboard; Task Analysis
HUMAN-ROBOT INTERACTION
The relationship between robots and humans is so different in character from other human-machine relationships that it warrants its own field of study. Robots differ from simple machines and even from complex computers in that they are often designed to be mobile and autonomous. They are not as predictable as other machines; they can enter a human’s personal space, forcing a kind of social interaction that does not happen in other human-machine relationships.
Background The term robot first entered literature through the play R.U.R. (1920), by the Czech playwright and novelist Karel Capek (1890–1938); R.U.R. featured humanoid devices as servants for humans. In the mid-1950s, the first true robots appeared. A human operator working from a distance ran these devices, which had the ability to carry out numerical computations and contained mechanisms to control machine movement. The rest of the twentieth century saw robotics continue to make significant advances in such areas as more flexible motion, refined manipulators (e.g., articulated hands and walking devices), and increased intelligence. Researchers took advantage of progress in computer science and software engineering, including developments in parallel and distrib-
uted computing (which allow for more speedy computation through the use of multiple processors and/or computers), and more sophisticated user interface design. By the 1980s, robotics was recognized as fundamentally interdisciplinary, with major contributions from mathematics, biology, computer science, control theory, electrical engineering, mechanical engineering, and physics. By the 1990s, robots were increasingly involved in automated manufacturing environments, in deep-sea and space exploration, in military operations, and in toxic-waste management. Predictions abounded that robots would become important in home and office environments as well. At the beginning of the twenty-first century, we are closer to the day when various robot entities may be integrated into people’s daily lives. Just as computers began as academic and researchrelated computational tools but became personal electronic accessories for the general public, robots now have the potential to serve not only as high-tech workhorses in scientific endeavors but also as more personalized appliances and assistants for ordinary people. However, while the study of human-computer interaction has a relatively long history, it is only recently that sufficient advances have been made in robotic perception, action, reasoning, and programming to allow researchers to begin serious consideration of the cognitive and social issues of human-robot interaction.
From Human-Computer Interaction to Human-Robot Interaction
In the past, techniques and methodologies developed under the general umbrella of user- or human-centered computing began by looking at static (unintelligent) software applications and their related input and output devices. Today these techniques are being extended to consider issues such as mobile wireless technology, wearable augmentation devices (such as miniature heads-up displays and cameras), virtual reality and immersive environments, intelligent software agents (both cooperative and autonomous), and direct brain interface technologies. In addition, mobile robotic agents are now poised to become part of our everyday landscape—in the
workplace, in the home, in the hospital, in remote and hazardous environments, and on the battlefield. This development means we have to look more closely at the nature of human-robot interaction; and define a philosophy that will help shape the future directions of this relationship. Human interface and interaction issues continue to be important in robotics research, particularly since the goal of fully autonomous capability has not yet been met. People are typically involved in the supervision and remote operation of robots, and interfaces that facilitate these activities have been under development for many years. However, the focus of the robotics community can still be said to be on the robot, with an emphasis on the technical challenges of achieving intelligent control and mobility. It is only in the early years of the twenty-first century that the state of the art has improved to such a degree that it is predicted that by 2010 there may be robots that
answer phones, open mail, deliver documents to different departments of a company, make coffee, tidy up, and run the vacuum. Due to the nature of the intelligence needed for robots to perform such tasks, there is a tendency to think that robots ought to become more like humans, that they need to interact with humans (and perhaps with one another) in the same way that humans interact with one another, and that, ultimately, they may replace humans altogether for certain tasks. This approach, sometimes termed human-centered robotics, emphasizes the study of humans as models for robots, and even the study of robots as models for humans.
Current Challenges Roboticists—scientists who study robotics—are now considering more carefully the work that has been going on in the sister community of human-computer
Carbo-Powered Robots TAMPA, Fla. (ANS)—When modern technology was in its infancy, scientists held out the hope that one day robots would cook our meals, do the housework and chauffeur the children to school. That hope has yet to become reality, but hold on: Here come the gastrobots. Powered by carbohydrates and bacteria, these robots with gastric systems are taking the science to new dimensions by mimicking not just the anatomy and intelligence of humans—but our digestive processes as well. Stuart Wilkinson, an associate professor of mechanical engineering at the University of South Florida, is pioneering the new subspecialty. “The main thing I’m shooting for is a robot that can perform some sort of task outdoors for long periods of time without anybody having to mess with it,” he said. Traditionally powered by regular or rechargeable batteries or solar panels, robots lose their efficiency when placed at any distance from a power source or human overseer. But when powered by food—say, fruit fallen to the ground or grass on a lawn—they have the potential to eat and wander indefinitely.
His test gastrobot—a 3-foot-long, wheeled device— uses bacteria to break down the carbohydrate molecules in sugar cubes. The process releases electrons that are collected and turned into electrical current. Any food high in carbohydrates could be used, the professor says, including vegetables, fruit, grains and foliage. Meat contains too much fat to be an efficient fuel, he pointed out—so the family pets are safe. A gastrobot would be far happier in an orange orchard, stabbing the fallen fruit and sucking the juice to propel itself. Measuring soil moisture and checking for insect infestations, it could then relay its findings via a cell phone connection to the farmer’s desktop computer. In its infancy, the new generation of robots has a few kinks yet to be worked out. At present, his creation “is a bit of a couch potato,” Wilkinson admitted, and requires 18 hours worth of carbo-loading to move for just 15 minutes. Then there’s the issue of, well, robot poop. “We need to develop some sort of kidney,” he explained. Source: Carbo-powered robot holds promise of relief from drudgery. American News Service, September 7, 2000
interaction (HCI), which has been studying technology development and its impact on humans since the 1960s. However, collaboration between HCI researchers and robotics researchers is not as straightforward as one might think. Until recently, much of the work in robotics has focused on integration of increasingly intelligent software on the more slowly evolving hardware platforms. Individual robots with some humanoid qualities have been developed with amazing capabilities, but it has taken years of extensive work to produce them, and they are still not advanced enough to accomplish real tasks in the real world. Human-robot interaction in these examples is studied primarily to find out what can we learn from humans to improve robots. On the other hand, since the late 1990s, much of the HCI community has adopted an explicitly strong emphasis on human-centered computing—that is, on technology that serves human needs, as opposed to technology that is developed for its own sake, and whose purpose and function may ultimately oppose or contravene human needs or wishes. Because humans are still responsible for the outcomes in human-machine systems—if something goes wrong, it is not the machine that will suffer the consequences or be punished—it is important that as robots become more independent, they are also taught how to become more compliant, communicative, and cooperative so that they can be team players, rather than simply goal-oriented mechanisms. Another challenge that faces researchers is how much like a human to make the robot. Does the robot’s physical form and personality affect how people respond to it? Does the context of the relationship play a role? Are the needs and desires of those who will interact with the robots different in the workplace than they are in the home, for example, or different in dangerous situations than they are in safe ones, or in interactions that occur close at hand as opposed to remotely? Interesting work by the sociologist Clifford Nass at Stanford University shows that often people will respond trustingly to technology and will attribute qualities such as intelligence to technology based on very superficial cues, such as how friendly or unfriendly the messages generated by the technology are. This has serious implications for the design of robots, especially those to be used in hazardous sit-
uations or other situations in which safety is critical. What if the robot has qualities that make the human think that it is smarter than it really is? To take another example, if the robot is to be used as an assistant to a disabled person or a senior citizen, would it be desirable to program the robot to act like it has emotions, even if it doesn’t really have any? Would this make the users of the robots feel more comfortable and happy about using the technology?
Current Applications and Case Studies Researchers are attempting to address these questions by taking their robots out of controlled laboratory environments and having them tackle real-world problems in realistic settings with real people as users. The results are bringing us closer to a more human-centered approach to human-robot interaction. Urban Search and Rescue One application is the use of robots for urban search and rescue (USAR). These are situations in which people are trapped or lost in man-made structures such as collapsed buildings. For example, after the collapse of New York City’s Twin Towers as a result of the terrorist attack of September 11, 2001, small teams of robots were fielded to give limited assistance to search and rescue operations. Because collapsed buildings and associated rubble pose risks not only to the victims but also to the rescue workers—secondary collapses and toxic gases are constant dangers while the workers are engaged in the time-consuming and painstaking tasks of shoring up entry points and clearing spaces—robot aid is potentially very desirable. Small, relatively inexpensive, and possibly expendable robots may be useful for gathering data from otherwise inaccessible areas, for monitoring the environment and structure while rescue workers are inside, for helping detect victims in the rubble, and eventually perhaps even for delivering preliminary medical aid to victims who are awaiting rescue. For the robots to work effectively, however, they must be capable of understanding and adapting to the organizational and information rescue hierarchy. They must be able to adapt to episodes of activity that may be brief and intense or long term; they must be equipped to help different levels of users who will
have differing information needs and time pressures. Most of the robots currently available for these kinds of hazardous environments are not autonomous and require constant supervision. The rescue workers will have to adapt as well. They will need to have special training in order to handle this technology. Currently robotics specialists, or handlers, are being trained in search and rescue to supplement rescue teams. However, even the specialists are not entirely familiar with the kind of data that the robots are sending back, and therefore understanding and interpreting that data in a time-critical situation poses additional challenges. Teams of researchers led by pioneers in the field, such as Robin Murphy of the University of South Florida, are now studying these kinds of problems and working on improving the methodologies so that the human-robot interaction can be more smoothly integrated into the response team’s overall operation.
Personal Service Robots
Personal service robots also offer many opportunities for exploring human-robot interaction. Researchers at the Royal Institute of Technology in Stockholm, Sweden, have been working on the development of a robot to assist users with everyday tasks such as fetching and delivering objects in an office environment. This effort has been targeted at people with physical impairments who have difficulty doing these kinds of tasks themselves, and a goal of the project is to develop a robot that someone can learn to operate in a relatively short period of time. From the early stages of this project, this group adopted user-centered techniques for their design and development work, and, consequently, have produced some very interesting results. Since ordinary people have little or no experience in interacting with a robot, a general survey was conducted to determine what people would like such a robot to do, how it should look, how they would prefer to communicate with it, and generally how they would respond to it. A large proportion of the respondents were positive about having robotic help with some kinds of basic household or other mundane tasks; the majority preferred the service robot not to act independently, and speech was the preferred mode of communication. Experiments with an early robot prototype showed that people had difficulty under-
standing the robot’s orientation (it was cylindrical in shape, with no clearly defined front), in communicating spatial directions, and in understanding what the robot was doing due to lack of feedback. Further iterations improved the physical design and the interface, and longer studies were conducted in an actual office environment with physically impaired people, who were given the opportunity to use the robot during their work days to perform tasks such as fetching coffee from the kitchen. One of the interesting observations from these studies was the insight that although the robot was the personal assistant of one individual, it also affected other people. For example, because the robot was not able to pour the coffee itself (it did not have any arms), it had to solicit help from someone in the kitchen to actually get the coffee into the cup.Another example was that people passing by in the hallway would greet the robot, although from the robot’s perspective, they were obstacles if they were in the way. These findings suggest that even if a robot is designed for individual use, it may need to be programmed to deal with a social context if it is to manage successfully in its working environment. Robots are working closely with humans in many other areas as well. Robotic technology augments space exploration in numerous ways, and in the military arena robotic units are being considered for surveillance, soldier assistance, and possibly even soldier substitutes in the future. Of perhaps greater concern are the areas in which robots will interact with ordinary people, as it remains to be seen whether the robots will be programmed to adjust to human needs or the humans will have to be trained to work with the robots. The robotic design decisions that are made today will affect the nature of human-robot interaction tomorrow. Erika Rogers See also Affective Computing; Literary Representations; Search and Rescue
FURTHER READING Billings, C. E. (1997). Issues concerning human-centered intelligent systems: What’s “human-centered” and what’s the problem? Retrieved July 21, 2003, from http://www.ifp.uiuc.edu/nsfhcs/ talks/billings.html
Center for Robot-Assisted Search and Rescue. (n.d.). Center for robotassisted search and rescue: CRASAR. Retrieved July 21, 2003, from http://www.crasar.org/ IEEE Robotics & Automation Society. (1995). Proceedings of the IEEE/ RSJ international conference on intelligent robots and systems: Human robot interaction and cooperative robots. Piscataway, NJ: IEEE Robotics & Automation Society. Interaction and Presentation Laboratory. (n.d.). Human-robot interaction at IPLab. Retrieved July 21, 2003, from http://www.nada. kth.se/iplab/hri/ Lorek, L. (2001, April 30). March of the A.I. robots. Interactive Week, 8(17), 46. Retrieved August 29, 2003 from http://cma.zdnet.com/ texis/techinfobase/techinfobase/+bwh_qr+sWvKXX/zdisplay.html Norman, D. (2001). How might humans interact with robots? Retrieved July 21, 2003, from http://www.jnd.org/dn.mss/Humans_and_ Robots.html Rahimi, M., & Karwowski, W. (Eds.) (1992). Human-robot interaction. London: Taylor & Francis. Ralston, A. & Reilly, E. D. (Eds.) (1993). Encyclopedia of computer science (3rd ed.). New York: Van Nostrand Reinhold. Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. Stanford, CA: CSLI Publications. Rogers, E., & Murphy, M. (2001, September). Human-robot interaction: Final report of the DARPA/NSF Study on Human-Robot Interaction. Retrieved July 21, 2003, from http://www.csc.calpoly. edu/~erogers/HRI/HRI-report-final.html National Aeronautics and Space Administration (NASA). (n.d.). Robotics. Retrieved July 21, 2003, from http://spacelink.nasa.gov/ Instructional.Materials/Curriculum.Support/Technology/Robotics/ Fong, T., & Nourbakhsh, I. (2003, March). Socially interactive robots [Special issue]. Robotics and Autonomous Systems, 42. Shneiderman, B. (1997). A grander goal: A thousand-fold increase in human capabilities. Educom Review, 32(6), 4–10. Retrieved July 21, 2003, from http://www.ifp.uiuc.edu/nabhcs/abstracts/shneiderman.html Simsarian, K. (2000). Towards human-robot collaboration. Unpublished doctoral dissertation, Swedish Institute of Computer Science, Kista, Sweden. Retrieved July 21, 2003, from http://www.sics.se/ ~kristian/thesis/ Takeda, H., Kobayashi, N., Matsubara, Y., & Nishida, T. (1997). Towards ubiquitous human-robot interaction. Retrieved July 21, 2003, from http://ai-www.aist-nara.ac.jp/papers/takeda/html/ijcai97-ims.html
HYPERTEXT AND HYPERMEDIA
The terms hypertext and hypermedia refer to webpages and other kinds of on-screen content that employ hyperlinks. Hyperlinks give us choices when we look for information, listen to music, purchase products, and engage in similar activities. They take the
form of buttons, underlined words and phrases, and other “hot” (interactive) areas on the screen. Hypertext is text that uses hyperlinks (often called simply links) to present text and static graphics. Many websites are entirely or largely hypertexts. Hypermedia extends that idea to the presentation of video, animation, and audio, which are often referred to as dynamic or time-based content, or multimedia. Non-Web forms of hypertext and hypermedia include CD-ROM and DVD encyclopedias (such as Microsoft’s Encarta), e-books, and the online help systems we find in software products. It is common for people to use hypertext as a general term that includes hypermedia. For example, when researchers talk about hypertext theory, they refer to theoretical concepts that pertain to both static and multimedia content. Starting in the 1940s, an important body of theory and research has evolved, and many important hypertext and hypermedia systems have been built. The history of hypertext begins with two visionary thinkers: Vannevar Bush and Ted Nelson. Bush, writing in 1945, recognized the value of technologies that would enable knowledge workers to link documents and share them with others. Starting in the mid-1960s, Nelson spent decades trying to build a very ambitious global hypertext system (Xanadu) and as part of this effort produced a rich (though idiosyncratic) body of theory.
Linear and Nonlinear Media A linear communication medium is one we typically experience straight through from beginning to end. There is little or no choosing as we go. Cinema is a linear medium. In the world of print, novels are linear, but newspapers, magazines, and encyclopedias are somewhat nonlinear. They encourage a certain amount of jumping around. The Web and other hypertextual media are strongly nonlinear. Indeed, the essence of hypertext and hypermedia is choice—the freedom to decide what we will experience next. You can build a website in which the hyperlinks take the user on a single path from beginning to end, but this would be a strange website, and one can question whether it is really hypertext.
Ted Nelson on Hypertext and the Web I DON’T BUY IN The Web isn’t hypertext, it’s DECORATED DIRECTORIES! What we have instead is the vacuous victory of typesetters over authors, and the most trivial form of hypertext that could have been imagined. The original hypertext project, Xanadu®, has always been about pure document structures where authors and readers don’t have to think about computerish structures of files and hierarchical directories. The Xanadu project has endeavored to implement a pure structure of links and facilitated re-use of content in any amounts and ways, allowing authors to concentrate on what mattered. Instead, today’s nightmarish new world is controlled by “webmasters,” tekkies unlikely to understand the niceties of text issues and preoccupied with the Web’s exploding alphabet soup of embedded formats. XML is not an improvement but a hierarchy hamburger. Everything, everything must be
forced into hierarchical templates! And the “semantic web” means that tekkie committees will decide the world’s true concepts for once and for all. Enforcement is going to be another problem :) It is a very strange way of thinking, but all too many people are buying in because they think that’s how it must be. There is an alternative. Markup must not be embedded. Hierarchies and files must not be part of the mental structure of documents. Links must go both ways. All these fundamental errors of the Web must be repaired. But the geeks have tried to lock the door behind them to make nothing else possible. We fight on. More later. Source: Ted Nelson Home Page. (n.d.) I don’t buy in. Retrieved March 29, 2004, from http://ted.hyperland.com/buyin.txt
Nodes, Links, and Navigation
Web designers and others who are interested in hypertext often use the term node to refer to chunks of content. Much of the time a node is simply a webpage. But there are times when we want to envision a cluster of closely related webpages as a single unit. Also, there are times when one physical webpage really behaves like two or more separate chunks of content. Furthermore, the page is not the fundamental unit of content in websites built with Flash (an animation technology from Macromedia) and in many non-Web hypertext systems. Therefore, we do well to use the term node as the fundamental unit of hypertext content. Links (or hyperlinks) are the pathways between nodes. When we click links and thereby display a succession of webpages (nodes), we are in a sense navigating the website. Navigation is only a metaphor; no one, of course, travels anywhere. Navigation, however, is a very natural and useful metaphor because exploring a website (or a non-Web hypertext) is much like finding our way through a complex physical
environment such as a city. In both hypertext navigation and physical navigation, we choose the most promising route and keep track of where we go. If we get lost, we may backtrack to familiar territory or even return to our home base and start over. In the best case, we gain a mental picture of the overall structure of the environment (a bird’s eye or maplike view). At the same time, the concepts of nodes, links, and navigation have limitations, and their relevance and usefulness are being called into question as Web technologies become increasingly sophisticated. If clicking a link plays an audio sequence, is the audio sequence then a node? Does it matter whether the audio sequence is a single word or a three-minute popular song? If clicking a link on a webpage begins a video sequence on a portion of that same page, how do we describe what has happened? Is the video sequence a kind of subnode embedded within the node that is the page as a whole? In early hypertext systems links were just simple electronic pathways with a fixed origin and destination. But now if you visit an e-commerce website that
you have visited before, you may find an automatically generated, personalized link inviting you to buy a new book by an author whose books you have purchased in the past. Furthermore, this link may be gone the next time you visit the site. Do we need to distinguish between links that everyone sees and links that only appear under specific circumstances? A limitation of the navigation paradigm is that it does not correspond to the full range of user behavior. At times users do not think spatially; they just click the most promising links they see. Designers, in fact, have begun employing a different metaphor for Web use—the metaphor of the “information scent.” The idea is that users, like animals foraging or hunting for food, look for strong and distinct scents that point them toward their desired goals. Designers, therefore, should strive to create links that give off these strong and unambiguous scents.
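The navigation metaphor can be sketched in a few lines of code. The fragment below is a simplification rather than any real browser's implementation, and the page names are hypothetical; it keeps a history of visited nodes so that the user can backtrack to familiar territory or return to the home base, as described above.

class Navigator:
    """Track a user's path through nodes reached by following links."""
    def __init__(self, home):
        self.home = home
        self.history = [home]

    def follow(self, node):          # the user clicks a link
        self.history.append(node)

    def back(self):                  # backtrack one step
        if len(self.history) > 1:
            self.history.pop()
        return self.history[-1]

    def go_home(self):               # return to home base and start over
        self.history = [self.home]
        return self.home

nav = Navigator("Home")
nav.follow("Products")
nav.follow("Widgets")
print(nav.back())     # Products
print(nav.go_home())  # Home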
Information Structures Designers of websites and other hypertexts must work hard to decide which nodes will be linked to which other nodes. Only with thoughtful linking will users be able to navigate successfully. Fortunately there are well-known arrangements of nodes and links—often called information structures—that guide designers as they work. By far the most important of these structures is the hierarchy. Also important are the weblike and the multipath structures. The Hierarchical Structure The hierarchy is by far the most important structure because it is the basis of almost all websites and most other hypertexts as well. This is so because hierarchies are orderly (making them easy to understand) and yet provide ample navigational freedom. On a hierarchically organized website, users start at the homepage, descend the branch that most interests them from among a number of possible branches, and make further choices as the branch they have chosen divides. At each level, the information on the nodes becomes more specific. Branches may also converge. Hierarchical structures are supplemented by secondary links that make them more flexible. The secondary links function mainly as shortcuts; they let
users jump around more freely. For example, users can move laterally along the sibling nodes of a single branch and can jump from one branch to another, without having to first move up to a higher-level node. There is almost always a link from every node back to the homepage (the top of the hierarchy) and there are usually other kinds of upward links. Especially when designing larger hypertexts, designers must choose between making the hierarchy wider (putting more nodes on each level) or deeper (adding more levels). One well-established design principle is that users navigate a wide hierarchy (one in which parent nodes have as many as thirty-two links to child nodes) more easily than a deep hierarchy. A great many print documents are hierarchies in one significant respect: They are often divided hierarchically into parts, chapters, sections, and subsections. These divisions create a logical hierarchy that the user encounters while reading linearly. Cross references in print invite the reader to jump from one part of the document to another and so are analogous to links in hypertext. Weblike Structures In a weblike structure, any node can be linked to any other. There are no rules—although designers must take great care in deciding which links will be most helpful to users. Relatively few weblike websites and non-Web hypertexts are built. This is because many subject areas seem to break naturally into a hierarchical structure and because users are apt to have trouble navigating unsystematic structures. Many weblike hypertexts are short stories and other works of fiction, in which artistic considerations may override the desire for efficient navigation. Mark Bernstein, who is the founder and chief scientist at Eastgate, a hypertext development and publishing company, questions the belief that weblike structures are necessarily hard to navigate. He has been a champion of weblike and other unorthodox hypertext structures for both fiction and nonfiction. Chains and Multipath Structures As noted earlier, content linked as a linear sequence of nodes—a simple chain structure—probably does not qualify as hypertext because the user’s choice is
highly restricted. Linear sequences, however, are regularly included within hierarchical websites, often taking the form of a tutorial, demo, or tour. It is possible to build a sequence of nodes that is in large part linear but offers various alternative pathways. This is the multipath structure. Often we find multipath sections within hierarchical websites. For example, a corporate website might include a historical section with a page for each decade of the company’s existence. Each of these pages has optional digressions that allow the user to explore events and issues of that decade. One may also find a multipath structure in an instructional CD-ROM in which learners are offered different pathways through the subject matter depending on their interests or mastery of the material.
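A short sketch can make these structures concrete. The following Python fragment, with hypothetical page names, models a small hierarchical site as nodes and links, adds one secondary shortcut link across branches, and writes the result in the Graphviz DOT format, producing the kind of node-link diagram discussed in the next section.

# Each node is a page; each edge is a link from parent to child.
site = {
    "Home": ["Products", "Support", "About"],
    "Products": ["Widgets", "Gadgets"],
    "Support": ["FAQ", "Contact"],
}
shortcuts = [("Widgets", "FAQ")]      # a secondary link across branches

lines = ["digraph site {"]
for parent, children in site.items():
    for child in children:
        lines.append(f'    "{parent}" -> "{child}";')
for source, target in shortcuts:
    lines.append(f'    "{source}" -> "{target}" [style=dashed];')
lines.append("}")
print("\n".join(lines))

A weblike or multipath structure can be expressed the same way simply by changing which links the dictionary records.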
Node-Link Diagrams, Sketches, and the Design Process
Because node-link diagrams show the overall structure of a website, Web developers often create them as part of the design process. Some Web authoring tools create these diagrams automatically. Using both node-link diagrams and mock-ups of webpages, designers can effectively plan out how the site as a whole should be linked and how to design the linking of individual pages. When webpages are well designed, the placement of the links on the page along with the phrasing of the links enables a user to grasp, at least in part, the overall site structure, the user’s current location, and whether he or she is moving down, across, or up in the hierarchy. Many websites provide site maps for users. Although site maps differ greatly in appearance and usefulness, they resemble node-link diagrams in that they provide the user with a bird’s eye view of the site structure.
Future Developments
Computing and the Web will continue to evolve in a great many ways. Monitors may give way to near-eye displays, at least for mobile computing. Virtual reality may become more widespread and may be routinely incorporated into the Web. We may make greater use of voice commands and commands issued by hand gestures. These and other advancements will surely change hypertext and hypermedia. For example, websites may provide much improved site maps consisting of a three-dimensional view of the site structure, perhaps using the metaphor of galaxies and solar systems. The Web may well become more intelligent, more able to generate personalized links that really match our interests. The Web may also become more social—we may routinely click links that open up live audio or video sessions with another person. As a communications medium changes, theory must keep pace. Otherwise, it becomes increasingly difficult to understand the medium and design successfully for it. We will therefore need to extend the hypertext concepts of nodes, links, and navigation and augment them with new concepts as well.
David K. Farkas See also Website Design
FURTHER READING Bernstein, M. (1991). Deeply intertwingled hypertext: The navigation problem reconsidered. Technical Communication, 38 (1), 41–47. Bolter, J. D. (1991). Writing space: The computer, hypertext, and the history of writing. Hillsdale, NJ: Lawrence Erlbaum Associates. Bush, V. (1996). As we may think. Interactions, 3(2), 35–46. Farkas, D. K., & Farkas J. B. (2002). Principles of Web design. New York: Longman. Hodges M. E., & Sasnett, R. M. (1993). Multimedia computing: Case studies from MIT Project Athena. Reading, MA: Addison Wesley. Landow, G. P. (1997). Hypertext 2.0. Baltimore, MD: Johns Hopkins University Press. Larson, K., & Czerwinski, M. (1998). Web page design: Implications of structure, memory, and scent for information retrieval. In Proceedings of ACM CHI ’98 Human Factors in Computing Systems (pp. 25–32). Los Angeles, CA: ACM Press. McKnight, C., Dillon A., & Richardson J. (1991). Hypertext in context. Cambridge, UK: Cambridge University Press. Nelson, T. H. (1992). Literary machines 93.1. Sausalito, CA: Mindful Press. Nielsen, J. (1994). Multimedia and hypertext: The Internet and beyond. Boston, MA: Academic Press. Nyce, J. M., & Kahn, P. (Eds.). (1991). From Memex to hypertext: Vannevar Bush and the mind’s machine. Boston: Academic Press. Parunak, H. V. D. (1991). Ordering the information graph. In E. Berk & J. Devlin (Eds.), Hypertext/hypermedia handbook (pp. 299–325). New York: McGraw-Hill.
Pirolli, P., & Card, S. (1999). Information foraging. Psychological Review, 106(4), 643–675. Powell, T. A. (2000). Web design: The complete reference. Berkeley, CA: Osborne: McGraw-Hill.
Rosenfeld, L. & Morville, P. (2002). Information architecture for the World Wide Web (2nd ed.). Sebastopol, CA: O’Reilly. Rouet, J., Levonen, J. J., Dillon, A., & Spiro, R. J. (Eds.). (1996). Hypertext and cognition. Mahwah, NJ: Lawrence Erlbaum.
I

ICONS
IDENTITY AUTHENTICATION
IMPACTS
INFORMATION FILTERING
INFORMATION ORGANIZATION
INFORMATION OVERLOAD
INFORMATION RETRIEVAL
INFORMATION SPACES
INFORMATION THEORY
INSTRUCTION MANUALS
INTERNET—WORLDWIDE DIFFUSION
INTERNET IN EVERYDAY LIFE
ITERATIVE DESIGN
ICONS
Icons are visual symbols of objects, actions, or ideas. In computer software, these visual symbols are used to identify everything from a disk drive or file on a computer to the “print” command in a word-processing program. Icons are used in graphical user interfaces (GUIs), where a user selects them with a pointer that is manipulated via a mouse, trackball, or related device. Icons are intended to facilitate the use of a GUI by all levels of computer users. For novice users icons represent a program or command that would otherwise need to be remembered and typed. For more experienced users icons are easier to remember and are quicker to activate than a typed command. The use of icons in graphical user interfaces has
become popular to the point that standards for them have emerged within some operating systems, such as Microsoft Windows and Apple Macintosh. One result of such standards is consistency. Consistency allows users to easily use the same features among different programs and lessens the time needed to learn different programs. The graphical representation of an icon can range from the abstract to the concrete. An abstract icon is a symbol that reflects a convention of what the represented object, action, or idea actually is. Such a symbol is often of simple shape, line, and color. A concrete icon contains a more detailed, more graphical representation of the object, action, or idea, which makes interpretation easier. The link between the user (and his or her interpretation) and the object, action, or idea that the icon represents is a prime example of human-computer interaction. Although icons are not the
sole answer to the question of how to make technology easier to use, icons provide a means of conducting work in a variety of environments for a wide range of computer users.
History
The notion of icons originated before the graphical user interface or even computers were in use. Traditionally icons were religious images. However, the notion of icons has evolved. In work published in 1935, the U.S. scientist and philosopher Charles Peirce defined an icon as a sign resembling an object. An icon has attributes that resemble those of the object that it represents in reality. In even earlier references during the early 1900s, philosophers likened an icon to a sign that resembles an object and contains its properties. An example of this resemblance perspective of an icon is a painting of a person. The painting resembles the person and is thus a representation of the person. The notion that icons resemble objects in reality was popular for several years. When the graphical user interface became commonplace for computers, the notion of icons as representations was maintained. What has changed are the extent of the representations and their size. Representations can range from abstract to photorealistic and can even appear three-dimensional. The size can vary from software program to software program, but an icon is usually less than 2.54 centimeters square. Icons were used in the graphical user interfaces of early computers, such as the Xerox Alto in 1973 and the Xerox Star in 1981. In 1984 Apple released the Apple Macintosh, containing the first commercially successful operating system with a graphical user interface. As technical innovations progressed, the Macintosh interface evolved, Microsoft Windows became successful, and graphical user interfaces became common in other operating systems. Icons have been a mainstay throughout the evolution of the GUI. The visual aesthetics of icons evolved alongside the graphical capabilities of computers, and now a range of styles of icons is used in software. For example, some software programs arrange icons in groups to form a toolbar, whereas others arrange them on the screen more creatively. The complexity of programs has affected how icons are utilized. Some soft-
ware programs are complex and contain many features for users to access. In order to accommodate many features, users can sometimes customize the size of icons displayed on the monitor screen. Other software programs let users select which icons are displayed on the screen or move icons around the screen (as a toolbar or in their own window). The customization of icons can reach the file system as well. Whereas some systems use abstract icons for documents, other systems use other options. Some systems abandon generic representations of an image file in favor of more detailed representations. Some recent image editing programs, such as Adobe Photoshop, can produce a miniature version of the image in the file rather than a generic image file icon as the file’s icon. The rationale is to provide an immediate recognition of an image file’s contents that the file name may not provide. As the World Wide Web has become popular with a wide range of people, people who otherwise would not create their own computer software have had the opportunity to create personal webpages containing icons. The difference between the icons in software programs and the icons on webpages is that rather than activate a feature, icons on webpages will generally take the user to a different webpage. The ability of people to design their own icons and webpages and the ability to visit other websites, representing a variety of services, information, and diversions, have widened the range of icons in use by a wide range of people.
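The image-preview icons mentioned above can be approximated in a few lines. This sketch assumes the third-party Pillow imaging library and a hypothetical file name; a real application would hand the thumbnail to the operating system's icon machinery rather than save it as a separate file.

from PIL import Image

image = Image.open("photo.jpg")
image.thumbnail((32, 32))       # shrink in place, keeping the aspect ratio
image.save("photo_icon.png")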
Advantages and Disadvantages One advantage of using icons instead of text labels is that icons are smaller than the corresponding text description of many objects, actions, or ideas. When many icons are displayed on a screen in order to allow a user to access a wide variety of features, preserving as much screen space as possible to maximize the user’s workspace is essential. Icons are space-efficient reminders of the functions they represent. For example, the floppy disk icon represents the “save” function in many software programs. Although many computer users save their work to their hard drive (as opposed to a floppy disk drive), the icon serves as a reminder that the floppy disk rep-
resents the ability to save the current document to a storage device. To minimize the frustration and time that users need to learn a computer program, many interface designers use a metaphor. Some interfaces have an underlying theme, such as a desktop. A main component of a successful metaphor is a set of carefully selected icons that apply to the metaphor. The use of such icons helps users by representing objects or actions that are recognized as relevant to the overall interface metaphor. However, the use of a metaphor can also have negative connotations for icons. In many metaphors, icons are approximations that break down in their ability to represent an object or action as it would exist in the world the metaphor depicts. For example, consider the metaphor of a desktop and the icon of a trashcan that represents the “delete” function on the screen (the desktop). The metaphor is ineffective because people do not place their trashcan on top of their desk. Many expert computer users like to use keyboard shortcuts that enable tasks to be completed without taking their hands off of the keyboard. When such keyboard shortcuts are not available, and expert users must use a mouse to select icons, they can become frustrated with the interface. Productivity and satisfaction decrease. An example is the use of a word processor by users who are so expert that the mouse (and thus icons) is never used. If keyboard shortcuts were removed from the word processor, such users’ productivity would decrease, and they would need to change how they work. Although this drawback is not specific to icons themselves, it relates to the use of icons in an interface.
Guidelines and Standards
The design of icons is based on heuristics (aids in learning) and GUI standards. Heuristics require no mathematical proof or modeling, which makes them easy to use by developers of user interfaces. Icon heuristics include:
■ Be simple and direct.
■ Be clear in terms of what object, action, or idea one wants to represent in the icon.
■ Use an appropriate amount of visual detail and color in the icon.
■ Select an appropriate image because the image should be associated with the object, action, or idea that one wishes to represent.
However, using heuristics is not a guarantee that an icon will be successful. Users can be uncertain about the use of an icon if a designer does not characterize them appropriately. Many software companies, including Microsoft and Apple, have standards for their developers to use when designing user interfaces for operating systems. Such standards are also guides for third-party companies to use when designing user interfaces for application software that will run under the operating systems. These standards are intended to provide users a consistent “look and feel” for the applications and the operating system. However, third-party companies do not always follow the guidelines fully. Instead, some companies, such as Adobe and Macromedia, develop their own user interface “look and feel” for their software.
Problems and Possibilities
A computer user must be able to interpret the symbolic meaning of an icon in order to use it. Whether abstract or concrete representations are used in icons, problems may arise when a user does not share the cultural knowledge of an icon’s designer or does not have the visual acuity to interpret the information represented by an icon. For example, the icon for a U.S. telephone booth will likely not have any meaning to someone from a remote village in China. When an operating system is intended to be released to multiple countries with different cultural references, developers must keep special considerations in mind. The conventions of one country may not be the same as those of another country in terms of the shapes or colors selected for an abstract icon. A red “X” in the United States may not mean “stop” or “warning” in other parts of the world. Concrete icons have similar issues because the object, color, or context used in one culture may not be appropriate in other cultures. For example, an icon showing men and women together
in an office is acceptable in many cultures, whereas such a graphical representation is not acceptable in other cultures. Careful icon design is required for the international distribution of a software system. Advice from an expert in the culture of a target environment can save a software company embarrassment and the loss of business. Because many software programs are released internationally, different icon sets are developed for different countries. Currently most icons are static—symbols that do not move. However, with all of the advantages of icons, some researchers see the potential to extend the information capacity of icons with animation. Researchers have studied the use of animated icons in general use and in use by computer users who have impaired vision. In both cases animated icons have demonstrated benefits in comparison to traditional, static icons. For example, some visually impaired computer users can recognize an animated version of an icon at a smaller size than a static icon. Designers can use this fact to maximize the amount of workspace that visually impaired users can utilize when creating a document in a word-processing or e-mail program. Traditional icons are visual. However, computer users who cannot discern visual symbols can discern auditory signals. Thus, researchers have developed audio-based interfaces, including auditory icons that use sounds from everyday objects and allow computer users to interact with a computer system with sound. For example, a computer user can drag a document icon across the screen and hear the sound of pieces of paper being dragged across a table. Different sounds can accommodate different file sizes or other attributes. Although the visual aspect of an icon is not represented in an auditory icon, the notion of a symbolic (albeit auditory) representation is consistent. Icons have earned a prominent place in graphical user interfaces by representing everything from warnings to software features to programs and files. Icons represent information in a small package, whether that information is an object, action, or idea. Careful design, in terms of heuristics and standards, can maximize the usefulness of icons. Although is-
sues of interpretation by diverse groups of users remain, research will continue to support the use of icons in user interfaces. Stephanie Ludi See also Graphical User Interface
FURTHER READING Apple Computer. Apple developer connection—Icons. (2003). Retrieved January 4, 2004, from http://developer.apple.com/ue/ aqua/icons.html Apple Computer. Apple human interface guidelines. (2003). Retrieved January 7, 2004, from http://developer.apple.com/documentation/ UserExperience/Conceptual/OSXHIGuidelines/index.html#// apple_ref/doc/uid/20000957 Baecker, R., & Small, I. (1990). Animation at the interface. In B. Laurel (Ed.), The art of human-computer interface design (pp. 251–267). Reading, MA: Addison-Wesley. Bergman, M., & Paavola, S. (2001). The Commens dictionary of Peirce’s terms. Retrieved December 19, 2003, from http://www.helsinki.fi/ science/commens/terms/icon.html Caplin, S. (2001). Icon design: Graphic icons in computer interface design. New York: Watson-Guptill Publications. Cornell University Common Front Group. (n.d.). Concepts of user interface design. Retrieved January 11, 2004, from http://cfg.cit .cornell.edu/cfg/design/concepts.html Dix, A., Finlay, J., Abowd, G., & Beale, R. (1993). Human-computer interaction. New York: Prentice Hall. Gajendar, U. (2003). Learning to love the pixel: Exploring the craft of icon design. Retrieved January 11, 2004, from http://www .boxesandarrows.com/archives/learning_to_love_the_pixel_ exploring_the_craft_of_icon_design.php Haber, R. (1970). How we remember what we see. Scientific American, 222, 104–112. Nielsen, J. (n.d.). Icon usability. Retrieved December 20, 2003, from http://www.useit.com/papers/sun/icons.html Sun Microsystems Incorporated. (2001). Java look and feel design guidelines. Retrieved January 4, 2004, from http://java.sun.com/products/ jlf/ed2/book/ Bayley, A. (2000). KDE user interface guidelines. Retrieved January 11, 2004, from http://developer.kde.org/documentation/ design/ui/index.html Ludi, S., & Wagner, M. (2001). Re-inventing icons: Using animation as cues in icons for the visually impaired. In M. J. Smith, G. Salvendy, D. Harris, & R. J. Koubeck (Eds.), Proceedings of the Ninth International Conference on Human-Computer Interaction. New Orleans, LA: HCI International. Microsoft Corporation. Microsoft Windows XP—Guidelines for applications. (2002). Retrieved December 20, 2003, from http:// www.microsoft.com/whdc/hwdev/windowsxp/downloads/ default.mspx
Peirce, C., Hartshorne, C., Weiss, P., & Burks, A. (Eds.). (1935). Collected papers I–VIII. Cambridge, MA: Harvard University Press. Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., & Carey, T. (1994). Human-computer interaction. Reading, MA: AddisonWesley. Shneiderman, B. (1998). Designing the user interface (3rd ed.). Reading, MA: Addison-Wesley.
IDENTITY AUTHENTICATION Authentication is the process of verifying that someone or something is what it claims to be. One can verify a person’s identity on the basis of what he or she knows (passwords, personal information, a personal identification number, and so on), what he or she possesses (an ID card, key, or smart card, for example), or biometrically, based on fingerprints, DNA, or other unique features. This article examines four sorts of authentication systems used in human-computer interaction: password authentication, Kerberos, digital signatures, and biometric authentication.
Password Authentication

Password authentication is the most common authentication technique. With password authentication, the user supplies his or her user name and a secret word (something only the intended user knows) to prove his or her identity. The information submitted by the user is compared with the information stored in the authentication system to validate the user. Passwords can be plain text, hashed text, or encrypted text.
Plain-Text Passwords
Plain-text passwords are the simplest passwords. Using plain-text passwords reduces administrative overhead because no preprocessing is required. But plain-text passwords have many disadvantages. On the Internet, it is easy for an intruder to sniff out such passwords and then use them to pass as the rightful user. Also, passwords that are stored in plain text can be read easily by the host system. Hashed passwords or encrypted passwords avoid these problems.
Hashed Passwords
Hash functions are used to produce hashed passwords. Hash functions take blocks of text as input and produce output that is different from the input. A good hash function is irreversible: It is impossible to reconstruct the original data from the hashed data. Also, if the hash function is good, it is nearly impossible to build a data block that produces a given hashed value. A hash function must always produce the same output for the same input—it must not contain any anomaly that leads to randomness of output for the same input.
Systems using hashed passwords follow this sequence: The first time a person logs on to the system, the system accepts the password, applies a hash function to it, and stores the hashed value. The next time the user logs on, the system requests the password and applies the hash function to the data the user has entered. If the resultant output matches the stored record, the user has entered the correct password and hence is authenticated.
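A minimal sketch of this flow in Python, using the standard library's hashlib; the function names and the use of a per-user random salt are illustrative assumptions rather than anything prescribed in the text above:

import hashlib
import hmac
import os
from typing import Optional

def hash_password(password: str, salt: Optional[bytes] = None) -> tuple:
    """Return (salt, hashed value) for a password; the salt is stored alongside the hash."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Re-apply the hash function to the supplied password and compare with the stored record."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

# First logon: only the salt and hashed value are stored, never the password itself.
salt, stored = hash_password("correct horse battery staple")
# Later logons: the same input always hashes to the same output.
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False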
Encrypted Passwords
Encryption is a process used to scramble data to make it unreadable to all but the intended recipient. With encrypted passwords, passwords are encrypted using some encryption algorithm, and the encrypted text is stored in the system. Even the host system cannot read the encrypted text. When the user logs on and supplies his or her password, it is encrypted using the same algorithm, and the resultant output is checked against the stored encrypted password. If the two encrypted texts match, the user is authenticated.
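A comparable sketch for the encrypted variant, here using the third-party cryptography package's Fernet cipher as a stand-in for "some encryption algorithm"; the cipher choice and the in-memory store are assumptions for illustration:

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # secret key held by the authentication system
cipher = Fernet(key)

# Enrollment: only the encrypted form of the password is stored.
stored_token = cipher.encrypt(b"correct horse battery staple")

def authenticate(supplied_password: bytes) -> bool:
    # Fernet encryption is randomized, so instead of re-encrypting and comparing
    # ciphertexts (as a deterministic algorithm would allow), this sketch decrypts
    # the stored value and compares it with the supplied password.
    return cipher.decrypt(stored_token) == supplied_password

print(authenticate(b"correct horse battery staple"))  # True
print(authenticate(b"wrong guess"))                   # False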
Problems with Password Authentication
Although there are different ways in which passwords can be used, all password authentication systems suffer from certain disadvantages. First, passwords can be guessed, since users generally have a tendency to pick familiar words as passwords. Password-breaking programs that use combinations of alphabet letters can crack such word-based passwords quickly.
Second, password authentication requires two types of data: a user name and a password. It can become difficult for a user to remember these pairs for multiple systems. A user may forget either his or her user name or his or her password. When that happens, users must create a new user name or a new password, or both. Third, because of the problem of remembering multiple passwords, many people use a single password for many systems. This means that once the password is cracked, all the systems using the same password are endangered.

Policies to Make the Password Authentication System Stronger
Even though password authentication systems have inherent disadvantages, they can be used effectively by following certain strategies. First, a policy can be enforced that requires passwords that are harder to break. Long passwords containing a combination of letters and numerals are generally harder to break. Making passwords case-sensitive (that is, having a recognition system capable of distinguishing between uppercase and lowercase letters) also helps. Additionally, a single sign-on system, which gives a user access to multiple systems with a single password, can eliminate the need to remember multiple passwords. Finally, users can be reminded that they must never reveal their passwords to others.
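A small illustrative check for the kind of policy described here; the specific rules (a minimum length of eight, required numerals, mixed case) are assumptions chosen for the example rather than recommendations from the text:

def password_problems(password: str, min_length: int = 8) -> list:
    """Return a list of policy violations; an empty list means the password passes."""
    problems = []
    if len(password) < min_length:
        problems.append("shorter than %d characters" % min_length)
    if not any(c.isdigit() for c in password):
        problems.append("contains no numerals")
    if not (any(c.islower() for c in password) and any(c.isupper() for c in password)):
        problems.append("does not mix uppercase and lowercase letters")
    return problems

print(password_problems("sunshine"))      # ['contains no numerals', 'does not mix uppercase and lowercase letters']
print(password_problems("Sunsh1ne2004"))  # []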
Kerberos

Kerberos is an authentication protocol that was developed by MIT for authenticating a request for a service in a computer network. It relies on secret-key cryptography to securely identify users. There are five basic entities involved in the Kerberos authentication system: the user; the client (computer), which acts on behalf of the user; the key distribution center (KDC); the ticket-granting service (TGS); and the server, which provides the requested service. In the Kerberos system, users have secret keys (passwords) that are stored at the KDC. A user initiates the authentication procedure by requesting credentials (permission for access) from the KDC. To acquire these credentials the user first identifies him- or herself to the KDC by giving unique identifiers. On the user's request the KDC selects a session key (a password limited to a particular session of communication), generates something called a ticket (a unique data message containing the user's identity and the time range for which the ticket is valid), combines the ticket with the session key, and encrypts it with the user's secret key. To avoid exposing the user's secret key each time a service is requested, this ticket is used to generate a second ticket from the TGS. This second ticket is encrypted with the session key of the TGS, so there is no risk of the user's secret key being exposed. Using this second ticket, the client is authenticated and a secure connection with the server is established.

Benefits and Limitations of Kerberos
The Kerberos system has the advantage of preventing plain-text passwords from being transmitted over the network. User names and passwords are stored centrally, so it is easier to manage that information. Furthermore, since passwords are not stored locally, the compromise of one machine does not lead to additional compromises. However, Kerberos has limitations. It is not effective against password-guessing attacks, and it requires a trusted path over which to send the passwords. If a hacker can sniff communication between the user and the initial authentication program, the hacker may be able to impersonate the user.
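A heavily simplified toy sketch of the first exchange described above, in which the KDC issues a session key and ticket sealed under the user's secret key. It uses the cryptography package's Fernet cipher as a stand-in for Kerberos's symmetric encryption, and the names, message format, and in-memory key table are assumptions for illustration only, not the real protocol:

import json
import time
from cryptography.fernet import Fernet

# The KDC stores each user's long-term secret key (derived from the password in real Kerberos).
user_keys = {"alice": Fernet.generate_key()}

def kdc_issue_credentials(username: str) -> bytes:
    """Toy KDC: choose a session key, build a ticket, and seal both under the user's secret key."""
    session_key = Fernet.generate_key()
    ticket = {"user": username, "valid_until": time.time() + 8 * 3600}
    reply = {"session_key": session_key.decode("ascii"), "ticket": ticket}
    return Fernet(user_keys[username]).encrypt(json.dumps(reply).encode("utf-8"))

# Client side: only someone who knows alice's secret key can read the reply,
# so no plain-text password ever crosses the network.
sealed_reply = kdc_issue_credentials("alice")
reply = json.loads(Fernet(user_keys["alice"]).decrypt(sealed_reply))
print(reply["ticket"]["user"], "holds a session key valid until", reply["ticket"]["valid_until"])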
Digital Signature
In any online transaction, the most critical consideration is who you are dealing with. Digital signatures—electronic signatures—are one method of confirming someone's identity. Digital signatures are based on public-key cryptography, a form of cryptography that uses two keys, a public key and a private key, to ensure authentication. The private key is used to create the digital signature, and the public key is used to verify it. The private key is known only by the signer, who uses it to sign the document; the public key, in contrast, is freely available and many people can share it. Thus, a document signed by one person can be verified by multiple receivers. Although the public key and
the private key are related to each other, it is computationally infeasible to derive the signer's private key from the public key. Use of a digital signature involves two processes: digital signature creation, which the signer performs, and digital signature verification, which the receiver performs.

Digital Signature Creation
For a message, document, or any other information that needs to be signed digitally, an extract of the message or document is generated using a hash function. The signer then encrypts this hash value using his or her private key, and the encrypted hash becomes the digital signature. This digital signature is attached to the message and stored or transmitted along with the message.

Digital Signature Verification
To verify the signer's digital signature, the recipient creates a new hash value of the original message using the same hash function that was used to create the first hash value. Then, using the freely available public key, the newly generated hash value is checked against the hash value attached to the message. If the two hash values correspond, the identity of the signer is verified. Apart from identity authentication, digital signatures can also be used for message authentication. By comparing the hash values generated by the signer and the receiver, the integrity of the message can be checked. If the message is altered by an intruder or damaged while in transit, the hash value generated by the receiver will not match the original hash value attached by the signer.

Advantages and Disadvantages of Digital Signatures
Digital signatures have both advantages and disadvantages. On the plus side, digital signatures cannot be copied. They ensure that data has not been tampered with after it has been signed. And, since digital signatures are created and verified electronically, they are safe from unauthorized influence. On the minus side, digital signatures are costly. Users must pay to obtain a digital signature, and recipients of digital signatures need special software to verify the signature.
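A minimal sketch of the two processes using the third-party cryptography package's RSA primitives; the key size, padding scheme, and message are illustrative choices, not prescriptions from the text:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
message = b"Pay the bearer one hundred dollars"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Creation: the signer hashes the message and encrypts the hash with the private key.
signature = private_key.sign(message, pss, hashes.SHA256())

# Verification: the receiver re-hashes the message and checks it against the signature
# using the freely available public key.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid: signer identity and message integrity confirmed")
except InvalidSignature:
    print("signature invalid: the message was altered or signed with a different key")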
Biometric Authentication Systems
Biometric authentication verifies the user by measuring certain physiological or behavioral characteristics such as fingerprints or retinas. The measurable characteristics used in any biometric system are unique. Using biometrics for identity authentication typically involves the following processes.

Registering the User in the System
This process is also called enrollment. In this step, the user's biometric characteristic is measured using an input device. This step must be performed very carefully, since future authentications of the user depend on this sample.

Processing the Biometric Characteristic
The sample recorded by the input device is then processed and its features are extracted. Before the features are extracted, biometric samples can be checked in order to ensure their quality. The number of samples needed for processing depends on the biometric characteristic that the system is using for authentication.

Storage
The processed sample, called the master template, is then stored in the database for future use. Biometric systems can be used both for identification and for verification. When they are being used for identification, the processed biometric characteristic is compared with the entire set of master templates stored in the database. By this means, the system ensures that the same person is not trying to enroll under two different names. When being used for verification, the system compares the processed biometric characteristic with the master template stored during enrollment. If it matches the master template, then the user is authenticated.
Types of Biometrics Among the physiological and behavioral characteristics that are used by biometric systems are fingerprints (unique even for identical twins), face recognition, voice recognition (which has limitations, as a person’s voice can change with age and
different input devices can give different results), iris recognition, signature verification (which measures such characteristics as the writer’s pressure and speed when signing his or her name), and hand and finger geometry. Advantages and Disadvantages of Biometric Authentication System One advantage of biometric systems is that because they recognize the authenticated user him- or herself and not information that he or she has, they avoid the problem of lost, stolen, or forgotten passwords or identification numbers or cards. They are also fast and easy to use, and generally do not cost much. However, some biometric characteristics are subject to change as a person ages and hence must be updated periodically. Additionally, some biometric systems, such as signature verification or voice recognition, must operate within a tolerance range since it is very difficult to produce exactly the same signature or to speak in exactly the same tone of voice each time. Establishing the correct tolerance range can be difficult. When relying on a biometric system, one must be sure that the system is not producing too many false rejections. (False rejections should be below 1 percent.) Depending on the biometric characteristic being measured, some people may be excluded. For example, mute people cannot be put through voice recognition. Finally, although biometric systems are generally low cost, some input devices may need regular maintenance, which increases the cost.
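A schematic sketch of verification against a stored master template with a tolerance range; the feature vectors, the similarity measure, and the 0.90 threshold are all invented for illustration and stand in for real biometric feature extraction:

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

# Enrollment: features extracted from the user's sample become the master template.
master_template = [0.21, 0.77, 0.40, 0.13]
TOLERANCE = 0.90  # similarity below this value is rejected (a false rejection if the user is genuine)

def verify(sample_features) -> bool:
    return cosine_similarity(sample_features, master_template) >= TOLERANCE

print(verify([0.20, 0.75, 0.42, 0.15]))  # True: within the tolerance range of the template
print(verify([0.90, 0.10, 0.05, 0.60]))  # False: too far from the template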
Implications of Using Authentication Systems Clearly, all authentication systems have advantages and disadvantages; there is no one authentication system that is suitable for all situations. Password authentication is the cheapest authentication system, so if you want to authenticate identity but your budget is limited, password authentication is a good choice. With Kerberos, it is assumed that you are using trusted hosts on non-trusted networks. The basic goal of Kerberos is to prevent plain-text passwords from being sent across the network, so its
basic use is to prevent identity theft. Digital signatures are tamper-proof, but they are only useful for authenticating the sender (or content) of a document. They cannot be used in situations in which authentication is required to give access to confidential data, for example. As for biometrics, because they rely on people's physical features, they are difficult to use for authenticating participants in online transactions. Thus, every authentication system has its strengths, but its limitations mean that it cannot be used everywhere. Currently, then, people rely on a combination of different authentication systems for maximum security.

Ashutosh Deshpande and Parag Sewalkar

See also Privacy; Security
FURTHER READING Bellovin, S. M., & Merritt, M. (1991). AT&T Bell Labs limitations of the Kerberos authentication system. Retrieved February 17, 2004, from http://swig.stanford.edu/pub/summaries/glomop/kerb_ limit.html Brennen, V. A. A basic introduction to Kerberos. Retrieved February 17, 2004, from http://www.cryptnet.net/fdp/crypto/basic_intros/ kerberos/ How Stuff Works. (1998–2004). How do digital signatures work? Retrieved March 18, 2004, from http://www.howstuffworks.com/ question571.htm Jaspan, B. (1995). Kerberos users’ frequently asked questions 1.14. Retrieved February 17, 2004, from http://www.faqs.org/faqs/ kerberos-faq/user/. Kohl, J. T. (1991). The evolution of the Kerberos authentication service. Proceedings of the Spring 1991 EurOpen Conference. Retrieved February 17, 2004, from http://www.cmf.nrl.navy.mil/ CCS/people/kenh/kerberos-faq.html#whatis MIT Kerberos. (n.d.). Kerberos: The network authentication protocol. Retrieved March 18, 2004, from http://web.mit.edu/kerberos/www/ Podio, F. L., & Dunn, J. S. Biometric authentication technology: From the movies to your desktop. Retrieved March 18, 2004, from http:// www.itl.nist.gov/div895/biometrics/Biometricsfromthemovies.pdf Smeraldi, F., & Bigun, J. (2002). Retinal vision applied to facial features detection and face authentication. Pattern Recognition Letters, 23(4), 463–475. Syverson, P., Y Cevesato, I. (2001). The logic of authentication protocols. In R. Focardi & R. Gorrieri (Eds.), Foundations of security analysis and design (pp. 63–136). Heidelberg, Germany: Springer Verlag Treasury Board of Canada. (2001, 2003). PKI questions and answers for beginners. Retrieved March 18, 2004, from http://www.cio-dpi .gc.ca/pki-icp/beginners/faq/faq_e.asp
IMPACTS Often the effects of computerization on society are remote and outside the control of the designer of any specific system. But design decisions in particular systems may also have more direct social effects, and these decisions are ones over which computer professionals may have influence. This article will first review some of the large-scale social effects of computing as well as some of the more direct social effects that computer systems can have, along with some of the methodologies that are being proposed to help predict and ameliorate these direct effects. In addition to distinguishing between remote and direct impacts, we can also distinguish between intentional effects (efficiency, unemployment) and unintentional effects (deskilling, component reuse in other applications) of computing systems—both of which may be either remote or direct. Even basic research in computing, such as the development of systems of fuzzy logic, can have both intentional effects (making medical diagnosis more reliable) and unintentional ones (making computer
database matching more efficient). Table 1 shows how basic research and system development, remote and direct effects, and intentional and unintentional effects interrelate. It is important to remember, however, that while the table presents discrete categories, each pair actually represents a continuum, with the names of each category representing endpoints on the continuums. Effects will fall somewhere on the continuum. Research is basic to the extent that it is solving intellectual rather than practical problems, though clearly many projects have both theoretical- and applied-research facets. An effect is direct when its outcome depends little upon the presence or absence of other factors (such as, for example, the presence of a union, or regulations, or reuse of the system for a different purpose). An effect is intentional to the extent that the developers and client foresee and desire that outcome and design the system to produce it. This categorization shows clearly that computer research produces some social effects that system designers are aware of and over which they have some control; it also produces effects that designers may not foresee and may not be able to control (in some
A U.S. Post Office first-day cover from 1973 notes the impact of microelectronics on the development of advanced technology. Photo courtesy of Marcy Ross.
TABLE 1. Examples of Different Kinds of Social Impacts of Computing

Basic Research (example: development of fuzzy logic systems)
  Intentional, remote: Increased reliability of medical diagnosis
  Intentional, direct: Utilization of categorization algorithms in difficult areas
  Unintentional, remote: Increased privacy violations by marketers making use of data-matching techniques
  Unintentional, direct: Revisions of conceptions of human rationality

System Development (example: data monitoring in banking call center)
  Intentional, remote: Improved integration of customer data across sales and support
  Intentional, direct: Increased productivity in the call center
  Unintentional, remote: Increased job turnover among workers
  Unintentional, direct: Decreased job satisfaction among call center workers
cases they may share control with many other actors). Again, this is a continuum rather than a dichotomy. Creative designers can often find technical solutions to address even remote effects of system design, once they are aware of them.
Remote Social Impacts of Specific Types of Systems Much work on the social impacts of computing concentrates on the specific effects of specific types of systems, such as computer games, monitoring software, safety-critical systems, or medical applications. Computer Games and Violence Extensive playing of computer games has been documented to lead to thoughts, feelings, and reactions that are clear precursors of violent behavior. The link between use of the games themselves and violent behavior on the part of players is reasonably well substantiated, but not surprisingly is a matter of controversy. Data Mining, Work Monitoring, and Privacy Privacy threats are posed by systems designed to match data from databases without the knowledge of the users. These systems match data, for example,
across state lines in the United States to catch people filing for welfare benefits in more than one jurisdiction. The potential for privacy violation is clear, but from the corporate point of view, data mining is an effective way to discover important markets, or to reveal fraud. Computer systems are also used to monitor the workplace behavior of individuals, usually with an eye to reducing unwanted behavior or to increasing productivity. The success—and in some cases the legality—of such schemes depends crucially on how the employees are involved in the planning and implementation of the systems. Approaches to legislation of these privacy issues differ radically around the world, with Europe being very systematic and the United States taking a more fragmented approach. Safety-Critical Systems Much computer software is directly integrated into systems that run sensitive machinery in real time. This might be medical machinery, for example, or it might be machinery in missile launchers. In every case, the complexities of process, the range of possible input states, the close coupling of action links, and time-based interdependencies make it difficult to verify the safety of these systems. Nancy Leveson’s 1995 Safeware: System Safety and Computers and Neil Storey’s 1996 Safety-Critical Computer
Systems provide good overviews of the issues in designing safe systems in this area.

Therac-25: Safety Is a System Property
Normally, when a patient is scheduled to have radiation therapy for cancer, he or she is scheduled for several sessions over a few weeks and told to expect some minor skin discomfort from the treatment. The discomfort is described as being on the order of a mild sunburn over the treated area. However, a very abnormal thing happened to one group of patients: They received severe radiation burns resulting in disability and, in three cases, death. The Therac-25 was a device that targeted electron or X-ray beams on cancerous tissue to destroy it. Electron beams were used to treat shallow tissue, while photon beams could penetrate with minimal damage to treat deep tissue. Even though operators were told that there were "so many safety mechanisms" that it was "virtually impossible" to overdose a patient, this is exactly what did occur in six documented cases (Leveson and Turner 1993). These massive radiation overdoses were the result of a convergence of many factors including:
■ simple programming errors
■ inadequate safety engineering
■ poor human-computer interaction design
■ a lax culture of safety in the manufacturing organization
■ inadequate reporting structure at the company level and as required by the U.S. government.
As noted by Nancy Leveson and Clark S. Turner (1993, para. 2), the authors of the study that investigated the effects of Therac-25: "Our goal is to help others learn from this experience, not to criticize the equipment's manufacturer or anyone else. The mistakes that were made are not unique to this manufacturer but are, unfortunately, fairly common in other safety-critical systems."
Chuck Huff
Source: Leveson, N., & Turner, C. S. (1993). An investigation of the Therac-25 accidents. IEEE-Computer 26(7), 18–41. Retrieved March 19, 2004, from http://courses.cs.vt.edu/~cs3604/lib/Therac_25/Therac_1.html

Social Impacts Associated with the Networked Nature of Computing
Networked computing has increased the potential scale of effect of human action. Messages or programs written on a single machine by a lone actor can now propagate to millions of machines across the globe in a few minutes. This increase in the scale of action is available to anyone who can buy or rent access to the global network. One effect of this globalization of action over computer networks is that action can routinely cross legal jurisdictional boundaries. Pictures can be created and uploaded in a jurisdiction where their content is legal, but then viewed from another jurisdiction, in another country, where they are illegal. Computer viruses can be designed in jurisdictions where it is more difficult to prosecute the designers,
but then released to the entire world. Websites can be subject to denial-of-service attacks by actors across the globe. The legal issues of how actors will be held responsible for their actions in a global network are still being resolved and are embedded in larger cultural issues, such as the importance attached to privacy or property, and attitudes toward censorship.
More General Social Issues There are several more general social issues associated with computing that are not directly associated with the reach of the global network. The Digital Divide The simple issue of access to computing technology and to the benefits of the increasingly wired world has generated concern over what is called the digital divide, or the lack of access to current technology, connectivity, and training that people may face based on gender, race, class, or national wealth.
Lo! Men have become the tools of their tools. —Henry David Thoreau
Intellectual Property Computing technology is based on movement and transformation of information from one form to another. The ease with which information can be transferred and software copied has made intellectual property theft a major issue. The chapter on intellectual property in Deborah Johnson’s Computer Ethics (third edition published in 2001) provides a good overview of the issues in this area. Employment Finally, one of the direct, intentional goals of much software development is to make businesses and other organizations more efficient. But along with increases in efficiency come job redundancies. There seems little evidence that technological advancement produces unemployment over the long run, but individual implementations often produce unemployment as a direct intentional outcome. The societal problem is to help resolve the structural issues that make it difficult for these workers to be reemployed in comparable and meaningful work. Similarly, deskilling of workers (simplifying their jobs so management has more control or can pay lower rates) is often the direct, intentional outcome of software development in business. Computer scientists in Scandinavia have worked in cooperation with labor unions and employers to develop software that adds to the skills and autonomy of workers while also increasing the quality of the product—a process called participatory design. It is unclear whether this model will be more widely adopted in countries where unions have less power and legitimacy.
Addressing the Social Impacts of Particular Implementations of Computing
Many of the social issues reviewed so far are often remote from the implementation of any particular system
and out of the immediate control of the designers or implementers of the systems. But all systems are implemented within some social context. If they are successful, they will at least have the direct, intentional effects desired by their designers and implementers. However, they are also likely to have indirect and unintentional effects. The software engineer Donald Gotterbarn and computer ethicist Simon Rogerson make the case in "The Ethics of Software Project Management" (2001) that using systematic methods to become aware of these effects will improve the quality of the delivered software, help avert expensive software project failures, and in the end save both developers and clients money. Since the methods they and others advocate are relatively new and untried, there is little empirical evidence for these claims, but the logic of the case studies they present is compelling. Other scholars also provide case study evidence that these approaches help to produce better-designed products and may save clients money, but the methods are not widely enough used yet to make systematic evaluation possible. Still, it is worth reviewing these proposed methods. What all the methods have in common is making sure that early design decisions take into account larger contextual issues rather than merely narrowly focused technical and economic ones. The scientist Ben Shneiderman's original proposal (published in 1990) was based on an analogy with the government requirement in the United States that construction projects complete an environmental impact statement to determine the project's effect on the surrounding environment. He proposed that a social impact statement be incorporated into the software development process of projects that were large enough to warrant it. This statement would begin by listing the stakeholders (people and groups likely to be affected by the software), would attempt to determine the system's effect on those stakeholders, and would then propose modifications in the requirements or design of the system to reduce negative and enhance positive effects. Ben Shneiderman and Anne Rose reported on an attempt to implement this approach in 1996. The psychologist Chuck Huff has proposed the integration of particular social-science methods into the social impact statement as a way to increase its sensitivity to the social context in which a system is designed.
The social scientist Enid Mumford developed a method that was inspired by her early association with the Tavistock Institute in London, a center founded in 1946 for the study of the interplay between technical systems and human welfare. Her method has been applied reasonably successfully to other technical systems and has seen a few successful implementations in software systems as well, making it the best substantiated approach currently available. She gives seven steps for a sociotechnical design process:
1. Diagnose user needs and problems, focusing on both short- and long-term efficiency and job satisfaction;
2. Set efficiency and job satisfaction objectives;
3. Develop a number of alternative designs and match them against the objectives;
4. Choose the strategy that best achieves both sets of objectives;
5. Choose hardware and software and design the system in detail;
6. Implement the system; and
7. Evaluate the system once it is operational.
Gotterbarn provides a method and supporting software for investigating the social issues associated with the implementation of a system. His approach is as simple as asking what the effects of each of the system's tasks will be on each of the stakeholders who are relevant to that task. But this simplicity is illusory, because of the explosion of combinations that occurs when you cross all tasks with all stakeholders and ask a series of questions about each of these combinations (a small sketch after the list below illustrates this growth). The SoDIS (software development impact statement) software Gotterbarn outlined in a 2002 article helps to control this complexity. To complete a software development impact statement, one must:
■ Identify immediate and extended stakeholders;
■ Identify requirements, tasks, or work breakdown packages;
■ Record potential issues for every stakeholder related to every task; and
■ Record the details and solutions to help modify the development plan.
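A toy sketch of the task-by-stakeholder cross that drives this combinatorial growth; the tasks, stakeholders, and questions are invented examples, and the SoDIS tool itself is not being reproduced here:

from itertools import product

tasks = ["collect call data", "store recordings", "generate performance reports"]
stakeholders = ["call center workers", "customers", "managers", "IT staff"]
questions = [
    "Does this task affect this stakeholder's privacy?",
    "Does it affect this stakeholder's working conditions?",
    "Does it create a safety or legal risk for this stakeholder?",
]

# Every (task, stakeholder, question) combination must be considered and recorded.
combinations = list(product(tasks, stakeholders, questions))
print(len(combinations), "combinations to review")  # 36, even for this tiny example
for task, stakeholder, question in combinations[:2]:
    print("-", task, "/", stakeholder, ":", question)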
The researchers Batya Friedman, Daniel Howe, and Edward Felten are proponents of a design process
called value-sensitive design that stresses the importance of iterations of conceptual, technical, and empirical tasks throughout the software design project in an attempt to account for important human values that are affected by the system. One starts with philosophically informed analysis of relevant values. One then identifies how existing and potential technical designs might enhance those values, and then uses social-science methods to investigate how those values affect various stakeholders related to the system. Like most other approaches, this methodology is a new development, and though it has had some interesting successes, it still awaits more careful validation. Two other approaches deserve mention as methods to address social issues in software design: computer-supported cooperative work (CSCW) and participatory design. Participatory design was mentioned earlier in the context of the employment effects of system implementation. It is an approach that attempts to integrate democratic values into system design by involving potential users of the system in intensive consultation during an iterative design process. Computer-supported cooperative work is an area in which systems are designed to support the work of groups. The work in this area focuses on those values that are most relevant in CSCW systems, such as privacy and trust. The importance of social issues in computing was recognized by the early innovators in the field of computing. The 1946 founding of the Tavistock Institute has already been mentioned, and as early as 1950, the mathematician Norbert Wiener addressed many of the issues considered in this article in his The Human Use of Human Beings. Social-impact issues in computing then lay fallow for many years until revived in 1968 by Don Parker, who was followed in 1976 by Joseph Weizenbaum. There was an explosion of work in the 1980s and 1990s concomitant with the proliferation of personal computers and the emergence of the Internet as a tool for widespread use. At present numerous rigorous and systematic methods are emerging to take account of social-impact issues, ethics, and values in the design of software systems.

Chuck Huff
See also Computer-Supported Cooperative Work; Digital Divide; Value Sensitive Design
FURTHER READING Adam, A. (2001). Gender and computer ethics. In R. A. Spinello & H. T. Tavani (Eds.), Readings in cyberethics, (pp. 63–76). Sudbury, MA: Jones and Bartlett. Anderson, C. A., & Bushman, B. (2001). Effects of violent video games on aggressive behavior, aggressive cognition, aggressive affect, physiological arousal, and pro-social behavior: A meta-analytic review of the scientific literature. Psychological Science, 12, 353–359. Anderson, C. A., & Dill, K. E. (2000). Video games and aggressive thoughts, feelings, and behavior in the laboratory and in life. Journal of Personality & Social Psychology, 78, 772–790. Baase, S. (1997). A gift of fire: Social, legal and ethical issues in computing. Upper Saddle River, NJ: Prentice Hall. Betts, M. (1994, April 18). Computer matching nabs double dippers. Computerworld (p. 90). Bjerknes, G., Ehn, P., & Kyng, M. (Eds.). (1987). Computers and democracy: A Scandinavian challenge. Aldershot, UK: Avebury. Bødker, S., Ehn, P., Sjögren, D., & Sundblad, Y. (2000). Co-operative design: Perspectives on 20 years with “the Scandinavian IT design mode”’ (Report No. CID-104). Retrieved June 16, 2003, from http://cid.nada.kth.se/pdf/cid_104.pdf Braman, S. (2004). Change of state: An introduction to information policy. Cambridge, MA: MIT Press. Brown R. (1967). Review of research and consultancy in industrial enterprises: A review of the contribution of the Tavistock Institute of Human Relations to the development of industrial sociology. Sociology, 1, 33–60. Bynum, T. (2000). Ethics and the information revolution. In G. Colllste (Ed.) Ethics in the age of information technology (pp. 32–55). Linköping Sweden: Linköping University Press. Camp, T. (1997). The incredible shrinking pipeline. Communications of the ACM, 40(2), 103–110. Colin, B. J. (1992). Regulating privacy: Data protection and public policy in Europe and the United States. Ithaca, NY: Cornell University Press. Collins, W. R., Miller, K., Spielman, B., & Wherry, P. (1994). How good is good enough? An ethical analysis of software construction and use. Communications of the ACM, 37(1), 81–91. Compaine, B. (2001). The digital divide: Facing a crisis or creating a myth. Cambridge, MA: MIT Press. Douthitt, E. A., & Aiello, J. R. (2001). The role of participation and control in effects of computer monitoring on fairness perceptions, task satisfaction, and performance. Journal of Applied Psychology, 86(5), 867–874. Friedman, B. (Ed.). (1997). Human values and the design of computer technology. London: Cambridge University Press. Friedman, B., Howe, D. C., & Felten, E. (2002). Informed consent in the Mozilla browser: Implementing value-sensitive design. In Proceedings of the 35th Annual Hawai’i International Conference on System Sciences [CD-ROM OSPE101]. Los Alamitos, CA: IEEE Computer Society: Friedman, B., & Kahn, P. H. (in press). A value sensitive design approach to augmented reality. In W. Mackay (Ed.), Design of augmented reality environments, Cambridge MA: MIT Press.
Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330–347. Gotterbarn, D. (2002). Reducing software failures: Addressing the ethical risks of the software development lifecycle. Australian Journal of Information Systems, 9(2). Retrieved December 10, 2003, from http://www.inter-it.com/articles.asp?id=195 Gotterbarn, D. & Rogerson, S. (2001). The ethics of software project management. In G. Colllste (Ed.), Ethics in the age of information technology (pp. 278–300). Linköping Sweden: Linköping University Press. Greenbaum, J., & Kyng, M. (Eds.) (1991). Design at work: Cooperative design of computer systems. Hillsdale, NJ: Lawrence Erlbaum. Huff, C. W. (1996). Practical guidance for teaching the social impact statement. In C. W. Huff (Ed.), Proceedings of the 1996 Symposium on Computers and the Quality of Life (pp. 86–90). New York: ACM Press. Huff, C. W., & Martin, C. D. (1995, December). Computing consequences: A framework for teaching ethical computing. Communications of the Association for Computing Machinery, 38(12), 75–84. Johnson, D. (2001). Computer ethics (3rd. ed.). New York: PrenticeHall. Johnson, R. C. (1997). Science, technology, and black community development. In A. H. Teich (Ed.), Technology and the future (7th ed.; pp. 270–283). New York: St. Martin’s Press. Kling, R. (Ed.). (1996). Computerization and controversy: Value conflicts and social choices (2nd ed.). San Diego, CA: Academic Press. Kraut, R., Kiesler, S., Boneva, B., Cummings, J., Helgeson, V., & Crawford, A. (2002). Internet Paradox Revisited. Journal of Social Issues, 58(1), 49–74. Kretchmer, S., & Carveth, R. (2001). The color of the Net: African Americans, race, and cyberspace. Computers and Society, 31(3), 9–14. Kusserow, R. P. (1984). The government needs computer matching to root out waste and fraud. Communications of the ACM, 27(6), 542–545. Kyng, M., & Mathiassen, L. (Eds.) (1997). Computers and design in context. Cambridge, MA: MIT Press. Leveson, N. (1995). Safeware: System safety and computers. Reading, MA: Addison-Wesley. Leveson, N., & Turner, C. S. (1993). An investigation of the Therac25 accidents. IEEE-Computer 26(7), 18–41. Lewis, S. G., & Samoff, J. (Eds.). (1992). Microcomputers in African development: Critical perspectives. Boulder, CO: Westview Press. Miller, M. (1992, August 9). Patients’ records are a treasure trove for a budding industry. Wall Street Journal (p. A21). Mumford, E. (1996). Systems design: Ethical tools for ethical change. London: MacMillan. Mumford, E., & MacDonald, B. (1989). XSEL’s progress. New York: Wiley. Parker, D. (1968). Rules of ethics in information processing. Communications of the ACM, 11(3), 198–201. Rocco, E. (1998). Trust breaks down in electronic contexts but can be repaired by some initial face-to-face contact. In Proceedings of CHI 1998 (pp. 496–502). New York: ACM Press. Rosenberg, R. S. (1997). The social impact of computers (2nd ed.). San Diego, CA: Academic Press. Shade, L. (1996). Is there free speech on the Net? Censorship in the global information infrastructure. In R. Shields (Ed.), Cultures of the Internet: Virtual spaces real histories, living bodies. Thousand Oaks, CA: Sage.
Shattuck, J. (1984). Computer matching is a serious threat to individual rights. Communications of the ACM, 27(6), 538–541). Shneiderman, B. (1990). Human values and the future of technology: A declaration of empowerment. Computers & Society, 20(3), 1–6. Shneiderman, B., & Rose, A. (1996). Social impact statements: Engaging public participation in information technology design. In C. Huff (Ed.), Computers and the quality of life: The proceedings of the Symposium on Computers and the Quality of Life (pp. 90–96). New York: ACM Press. Sproull, L., & Faraj, S. (1995). Atheism, sex, and databases: The Net as a social technology. In B. Kahin & J. Keller (Eds.), Public access to the Internet (pp. 62–81). Cambridge, MA: MIT Press. Storey, N. (1996). Safety-critical computer systems. New York: AddisonWesley. Swinyard, W. R., Rinne, H., & Keng Kau, A. (1990). The morality of software piracy: A cross cultural analysis. Journal of Business Ethics, 9, 655–664. Tavani, H. (2003). Ethics and technology: Ethical issues in an age of information and communication technology. Hoboken, NJ: John Wiley & Sons. Trist E. (1981). The evolution of socio-technical systems. Ontario, Canada: Ontario Quality of Working Life Center. U.S. Congress Office of Technology Assessment. (1986). Technology and structural unemployment: Reemploying displaced adults (OTAITE-250). Washington, DC: U.S. Government Printing Office. Wiener, N. (1950). The human use of human beings. New York: Da Capo Press. Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. New York: Freeman. Wresch, W. (1998). Disconnected: Haves and have-nots in the information age. In K. Schellenberg (Ed.), Computers in society (7th ed.; pp. 207–212). Guilford, CT: Dushkin/McGraw Hill. Yakel, E. (in press). The social construction of accountability: Radiologists and their recordkeeping practices. Information Society.
INFORMATION FILTERING The amount of information we create and exchange is far more than a person can easily manage. One way to overcome information overload is to personalize information delivery, that is, to present information that matches users’ preferences. Selective dissemination of information, alerting services, collaborative filtering, help systems, social filtering, social data-mining systems, and user-adaptive systems can collectively be called information-filtering (IF) systems. Personalized information systems are based on the information preferences, profiles, or user models that these IF systems produce. Information fil-
tering suggests leaving certain information out, but the term has been used in different contexts. One context emphasizes blocking, which refers to automatic censorship of the material available to a particular user or kind of user. Another definition emphasizes selecting things from a larger set of possibilities and presenting them in a prioritized order based on a user's interests. A conceptual framework for the design of IF systems comes from two established lines of research: information retrieval and user modeling. Information retrieval attempts to retrieve as many relevant items as possible while minimizing the amount of irrelevant information. A relevant item is one that helps satisfy an information need. Because the most distinctive characteristic of the field of IF is the existence of a profile to model the information need, most of the work on IF is being conducted within the framework of user modeling. A user model contains pertinent information about the user; the IF system uses such models to guide its personalization of information for the user. Filtering mechanisms depend on accurate representations of their users and their users' needs in order to perform their task successfully. Characteristics of user models can vary according to the system, the user, and the task being performed. The information scientist P. J. Daniels's profiles include user background, habits, topics of interest, and user status. Robert B. Allen suggests that the model should also include situation, task, and environmental information. For user profiles of potential customers of electronic shops, the scholars Liliana Ardissono and Anna Goy suggest domain expertise, lifestyle, and intended use of the items purchased as pertinent. Based on these elements, an IF system can determine which product to recommend, how to present product descriptions, how much technical information to include, and which linguistic form to employ.

HELP SYSTEMS
Computer-based systems that answer user questions on a program's functions and help users learn how to accomplish given tasks using the program.
Filtering systems frequently present material to users based on how well it corresponds with a list of topics or subjects that the user has selected as being of interest. In 1997 Raya Fidel and Michael Crandall, scholars in the field of library and information science, published results of an empirical study evaluating users’ perception of filtering performance. In the study, they examined criteria users employed in judging whether or not a document was relevant. Although the profiles in the users’ filtering system were based on topics, the researchers found that users employed many criteria in addition to topics or subjects (for example, their current beliefs, knowledge, and working situation) to determine a document’s relevance.
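A minimal sketch of this kind of topic-profile matching, scoring documents against the topics a user has selected and presenting them in prioritized order; the profile, documents, and scoring rule are invented for illustration:

user_profile = {"usability", "icons", "accessibility"}

documents = {
    "Animated icons for low-vision users": {"icons", "accessibility", "animation"},
    "Kerberos ticket exchange explained": {"security", "authentication"},
    "Heuristic evaluation in practice": {"usability", "evaluation"},
}

def topic_score(topics) -> int:
    """Count how many of the user's selected topics a document covers."""
    return len(topics & user_profile)

# Present matching documents in prioritized order; non-matching ones are filtered out.
ranked = sorted(documents.items(), key=lambda item: topic_score(item[1]), reverse=True)
for title, topics in ranked:
    if topic_score(topics) > 0:
        print(topic_score(topics), title)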
User Models for Filtering
Although researchers differ in how they categorize models, most agree that a model can be either canonical or individual in its modeling and that its acquisition of information can be either explicit or implicit.

Canonical Modeling
Canonical, or stereotypical, modeling models a typical user, while individual modeling models a particular user. Because stereotypes are collections of attributes that co-occur in users, inferences about a user that rely on stereotypical modeling can be made on a smaller sample of user behavior. The inheritance properties of some representation methods allow the system to infer data and predict actions. When the user is new, the system relies on the stereotype; as the system learns, less data is taken from the stereotype and more from the updated profile of the actual user. Stereotype models have their limitations: It is difficult to achieve a sufficiently personal expression with a stereotype model.

Individual User Modeling
Individual models are exemplified by the Personalized Text (PT) system, which aims to customize hypertext. In PT, readers are asked to do a concept inventory, which is a short dialogue from which can
be gleaned an initial model of their background knowledge. The customized hypertext of a user who claims to have a solid understanding of a concept will have the concept relegated to a glossary, while users who do not know the concept will find it as a live link providing a definition in the main body of the hypertext. In addition, a combination of cognitive and biological criteria, such as eye movements, pulse, temperature, and galvanic skin response, can be used for individual optimization. Dara Lee Howard and Martha Crosby performed experiments to understand individual viewing strategies for bibliographic citations. Analysis of the users' eye movements illuminated their actual behavior. The researchers found that relevant material was read sequentially while non-relevant material was viewed non-sequentially.

Explicit Modeling
Explicit models are built directly and explicitly by users. Users are presented with direct questions, and their answers comprise their profile. Advantages of the explicit model are that users are more in control and have a good understanding of what the system is doing. Rules constitute the profile of InfoLens, a filtering system developed by Thomas Malone and his colleagues that helps the user to filter incoming e-mail messages. Users write the rules instructing the e-mail system on the actions to take depending on the sender, message type, and date. Daniela Petrelli and her colleagues categorized visitors to museums according to "classical" dimensions, provided by each visitor, such as age, profession, education, and specific knowledge or background. Visitors also provided situational dimensions such as available time for the visit and motivation for the visit. The language style (expert versus naive) and the verbosity used in the individualized guide were constructed from the user profile.
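A minimal sketch of the kind of user-written rules described here, filing incoming messages by sender, type, and date; the rule format and message fields are assumptions for illustration, not InfoLens itself:

from datetime import date

# Each rule the user writes maps simple conditions to an action.
rules = [
    {"if": {"sender": "boss@example.com"},   "action": "move to Urgent"},
    {"if": {"type": "meeting announcement"}, "action": "move to Calendar"},
    {"if": {"before": date(2004, 1, 1)},     "action": "archive"},
]

def apply_rules(message: dict) -> str:
    for rule in rules:
        condition = rule["if"]
        if "sender" in condition and message["sender"] != condition["sender"]:
            continue
        if "type" in condition and message["type"] != condition["type"]:
            continue
        if "before" in condition and message["date"] >= condition["before"]:
            continue
        return rule["action"]
    return "leave in inbox"

message = {"sender": "boss@example.com", "type": "request", "date": date(2004, 3, 2)}
print(apply_rules(message))  # move to Urgent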
Implicit Modeling
Implicit models are inferred from the relevance judgments users assign to the information they are presented with, or by monitoring users' actions and behavior. The usefulness of feedback for adjusting a profile has long been recognized. Luz Quiroga and Javed Mostafa conducted experiments to compare the performance of explicit and implicit modes of profile acquisition. Results of the study suggested that context plays an important part in assessments of relevance. Implicit feedback mechanisms, as well as explicit ones based on surveys and questionnaires, are criticized for imposing an extra burden on the user. Monitoring user behavior, on the other hand, appears to be an efficient way to capture the user's profile transparently. Other implicit profiling systems examine actions such as whether an electronic message is read or ignored, saved or deleted, and replied to or not. User interest in various topics can also be ascertained based on the time users spend reading NetNews articles.
Information Filtering and User Modeling Techniques and Tools Among the techniques and tools that are useful for information filtering are virtual agents, collective profiles, interactive information retrieval, intelligent help systems, collaborative filtering, mining social data, and bookmarking. Virtual Agents Virtual agents are software programs that act on behalf of the user, delivering selected, prioritized information to an IF system. Pattie Maes (1994), of MIT’s Media Lab, considers IF to be one of the many applications facilitated by agents, which act as personal assistants to their clients, alleviating potential information overload. Collective Profiles Collective profiles allow individual users to organize information that relates to their families and friends. SETA, a shopping recommender, maintains models of all the “beneficiaries” for whom the user selects goods while shopping. Research on personalized guides to museums suggested that such systems should support family profiles and family discussions and interaction in addition to maintaining individualized museum guide capabilities.
Interactive Information Retrieval The information scientist Nicholas Belkin developed the theory of the anomalous state of knowledge (ASK). The theory suggests that it is better to employ interactions than to model explicit knowledge. Belkin points out that people engage in information seeking to achieve some goal for which their current status of knowledge is inadequate. Thus, the users may not know, let alone be able to specify, what their information needs are. Intelligent Help Systems The computer scientist Johan Aberg designed a system that combines automatic and human help in a personalized and contextualized way. He describes his system as “a live help system that integrates human experts in the processes of advice-giving by allowing users to communicate with dedicated expert assistants through the help system” (Aberg 2002, 4). Human experts thus complement computer-based help but do not replace it. A disadvantage of this system is that it is difficult to provide human augmentation to the automatic help due to cost and availability of the human expert. Collaborative Filtering With collaborative filtering, the system gives suggestions on various topics based on information gleaned from members of a community or peer group. The system identifies users with similar preferences and makes recommendations based on what those others have liked. MovieLens is a home video recommendation system that chooses movies from a large and relatively stable database of movie titles, based on peers’ expressed taste in films. This approach differs from approaches that choose items based only on what a user has chosen before. Social Data Mining The process of mining social data reduces the cognitive burden that collaborative filtering places on the user. Vannevar Bush (1945) suggested that the rich trails followed by scholars through information repositories could guide others with similar interests. Statistics on how much time people spend reading various parts of a document, counts of spreadsheet cell
recalculations, and menu selections are sources for social data mining. Another approach exploits the structure of the World Wide Web itself. This approach has its roots in bibliometrics, which studies patterns of cocitations—the citation of pairs of documents in a third document. A link from one website to another may indicate that the two sites are similar. Bell Labs’ Brian Amento and his colleagues used their system TopicShop to investigate how well link-based metrics correlate with human judgments. Features collected by TopicShop could predict which websites were of highest quality, which made it possible for users to select better sites more quickly and with less effort. A variant approach, called social navigation, has focused on extracting information from web usage logs, recording browsing history, finding commonly traversed links. Peter Pirolli, a researcher at Xerox’s Palo Alto Research Center, and his colleagues researched data mining in conjunction with organization theories. Their information foraging theory attempts to understand how people allocate their resources to find information. It has been applied to information-seeking tasks, presenting users with a clustered navigable overview of the content of a document collection. Bookmarking David Abrams and his colleagues envision bookmarking as a common strategy for dealing with information overload. They surveyed 322 web users and analyzed the bookmark archives of 50 users to learn why people make a bookmark and how they organized and use them. One of their recommendations is to use automated filters to improve visualization and reuse. Rushed Kanawati and Maria Malek proposed CoWing (Collaborative Web Indexing), a system in which an agent interacts with other agents to fetch new bookmarks that match its client’s needs. All the solutions to information overload described above emphasize the need for specialization in information retrieval services using IF techniques. User modeling, which explores what affects users’ cognitive loads, is critical for reducing information overload. Currently, user modeling and information filtering techniques are employed in fields such as e-business, information retrieval, alerting systems, finance, banking, and communications. Research evaluating IF techniques and systems is on the rise,
but more empirical evaluations are needed if we are to ever understand how humans find, organize, remember, and use information. Luz M. Quiroga and Martha E. Crosby See also Information Overload; Information Retrieval; User Modeling
FURTHER READING Aberg, J. (2002). Live help systems: An approach to intelligent help for Web information systems (Linkopings Studies in Science and Technology, Dissertation No. 745). Linkopings, Sweden: Department of Computer and Information Science, Linkopings University. Abrams, D., Baecker, R., & Chignell, M. (1998). Information archiving with bookmarks: Personal Web space construction and organization. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 41–48. Allen, R. B. (1990). User models: Theory, method, and practice. International Journal of Man-Machine Studies, 32, 511–543. Amento, B., Terveen, L., & Hill, W. (2003) Experiments in social data mining: The TopicShop system. ACM Transactions on Computer-Human Interaction, 10(1), 54–85. Ardissono, L., & Goy, A. (1999). Tailoring the interaction with users in electronic shops. In J. Kay (Ed.), Proceedings of the 7th International Conference on User Modeling (pp. 35–44). New York: Springer-Verlag. Belkin, N. (2000). Helping people find what they don’t know. Communications of the ACM, 43(8), 58–61. Belkin, N., & Croft, B. (1992). Information filtering and information retrieval: Two sides of the same coin? Communications of the ACM, 35(12), 29–38. Bush, V. (1945, July). As we may think. Atlantic Monthly, 176(1), 101–108. Daniels, P. (1986). Cognitive models in information retrieval: An evaluative review. Journal of Documentation, 42(4), 272–304. Ellis, D. (1990). New horizons in information retrieval. London: The Library Association. Fidel, R., & Crandall, M. (1997). Users’ perception of the performance of a filtering system. In Proceedings of the 20th annual ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 198–205). New York: ACM Press. Hill, W., & Hollan, J. (1994). History-enriched digital objects: Prototypes and policy issues. Information Society, 10(2), 139–145. Howard, D. L., & Crosby, M. (1993). Snapshots from the eye: Towards strategies for viewing bibliographic citations. In G. Salvendy & M. Smith (Eds.), Advances in human factors/ergonomics: Humancomputer interaction: Software and hardware interfaces (Vol. 19B, pp. 488–493). Amsterdam: Elsevier Science. Kanawati, R., & Malek, M. (2002). A multi-agent system for collaborative bookmarking. In P. Georgini, Y. L’Espèrance, G. Wagner, & E. S. K. Yu (Eds.), Proceedings of the Fourth International BiConference Workshop on Agent-Oriented Information Systems
(pp. 1137–1138). Retrieved December 16, 2003, from http://sunsite .informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-59/ 8Kanawati.pdf Kay, J., & Kummerfeld, B. (1994). User models for customized hypertext in J. Mayfield and E. C. Nicholas (Eds.). Advances in hypertext for the World Wide Web (47–69). New York: Springer-Verlag. Maes, P. (1994). Agents that reduce work and information overload. Communication of the ACM, 37(7), 30–40. Malone, T., Grant, K., Turbak, F., Brobst S., & Cohen, M. (1987). Intelligent information-sharing systems. Communications of the ACM, 30(5), 390–402. Morita, M., & Shinoda, Y. (1994). Information filtering based on user behavior analysis and best match text retrieval. In Proceedings of the Seventh Annual ACM-SIGIR Conference on Research and Development in IR. ( pp. 272–281). New York: Springer-Verlag. Oard, D. (1997). The state of the art in text filtering. User Modeling and User-Adapted Interaction, 7, 141–178. Petrelli, D., De Angeli, A, & Convertino, G. (1999). A user-centered approach to user modeling. Proceedings of the 7th International Conference on User Modeling, 255–264. Pirolli, P., James, P., & Rao, R. (1996). Silk from a sow’s ear: Extracting usable structures from the Web. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Common ground (pp. 118–125). Retrieved December 16, 2003, from http:// www.acm.org/sigchi/chi96/proceedings/papers/Pirolli_2/pp2.html Quiroga, L., & Mostafa J. (2002). An experiment in building profiles in information filtering: The role of context of user relevance feedback. Information Processing and Management, 38, 671–694. Rich, E. (1979). User modeling via stereotypes. Cognitive Science, 3, 335–366. Riedl, J., & Konstan, J. (2002). Word of mouse: The marketing power of collaborative filtering. New York: Warner Books.
INFORMATION ORGANIZATION

We organize information—in our minds and in information systems—in order to collect and record it, retrieve it, evaluate and select it, understand it, process and analyze it, apply it, and rearrange and reuse it. We also organize things, such as parts, merchandise in a store, or clothes in a closet, using similar principles for similar purposes. Using data on foods as an example, this article introduces the following concepts:

■ the entity-relationship (E-R) approach as the basis for all information organization;
■ database organization: relational databases, object-oriented databases, and frames;
■ templates for the internal organization of documents;
■ cataloging and metadata;
■ knowledge organization systems (KOS): (faceted) classification schemes, taxonomies, ontologies, and thesauri;
■ knowledge representation.
The Entity-Relationship Approach

Information organization depends on object characteristics (or properties), often expressed as statements: entities (nouns) are connected through relationships (verbs), for example:

pecan pie   has ingredient   (shelled pecans, 2 cups, for taste)
Figure 1 shows an E-R conceptual schema for foods—a list of statement patterns, each defining a type of statement.

FIGURE 1. Entity-relationship (E-R) schema for a food product database

Food product   hasName             Text
Food product   hasDescription      Text
Food product   hasHomePrepTime     Time duration
Food product   isa                 Food
Food product   comesFromSource     Food source [plant or animal]
Food product   comesFromPart       Anatomical part
Food product   hasIngredient       (Food product, Amount [number and unit], Purpose)   [(Chocolate, 50 g, for taste), (BHT, 0.1 g, preservation)]
Food product   underwentProcess    (Process, Intensity, Purpose)   [(broil, low heat, to brown)]
Food product   containsSubstance   (Substance, Amount)   [(fat, 13 g), (vitamin A, 4000 IU)]
Food product   intendedFor         Type of diet   [low-fat, low-salt]
FIGURE 2. Tables (relations) in a relational database

foodName    ingredient         no    unit    purpose
pecan pie   flaky piecrust     1     count   body
pecan pie   shelled pecans     2     cup     taste
pecan pie   eggs               5     count   body
pecan pie   white sugar        1     cup     taste
Diet Coke   carbonated water   355   ml      body
Diet Coke   aspartame          200   mg      taste

food product      diet
pecan pie         normal
Diet Coke         low cal
split pea soup    normal
unsalted butter   low salt
ice cream         normal
frozen yogurt     low cal

FIGURE 3. Sample frames with slots and slot fillers

foodName:    shelled pecans
fromSource:  pecan tree
fromPart:    seed
process:     shelling

foodName:    eggs
fromSource:  chicken
fromPart:    egg (part of animal)
This E-R schema is the basis for several approaches to storing and presenting data and for organizing a database for access and processing.
Database Organization

In a relational database, data corresponding to one relationship type are expressed in a table (also called a relation; see Figure 2); data about one "object," such as a food product, are distributed over many tables. Tables are very simple data structures that can be processed simply and efficiently. Object-oriented databases store all the information about an object in one frame that has a slot for every object characteristic as expressed in a relationship (Figure 3). A frame can also call procedures operating on its data, such as computing fat content by adding fat from all ingredients. Frames are complex data structures that require complex software. Frames (in databases or in the mind) use the mechanism of hierarchical inheritance for efficient data input and storage; for example, a frame for chocolate pecan pie simply refers to the pecan pie frame and lists only additional slots, such as ingredient: (chocolate, 50 g, for taste).
The Internal Organization of Documents: Templates

A recipe is a simple document describing a food product, structured into a standard outline or document template (a frame applied to documents) with slots based on relationships (Figure 4). A template can be encoded using XML (eXtensible Markup Language) tags. Each tag is defined in an XML schema (not shown) and identifies a type of information. (The ability to define tailor-made tags for each application gives XML its power.) Each piece of information has a beginning tag and a corresponding end tag. Once the information is encoded using XML, it can be used for many purposes: to display a recipe in print or on the World Wide Web, produce a cookbook with a table of contents and an index, find all recipes that use certain ingredients, compose the ingredient label for a food (ingredients in order of predominance), or compute the nutrient values for a serving (using a nutrient value table for basic foods). As this example shows, organization of data in databases and structuring of text in documents are alike.
FIGURE 4. Recipe following a standard outline (template), encoded with XML. The figure marks up a pecan pie recipe, each element enclosed in its own tag: name (pecan pie), yield (8 servings), preparation time (1.5 hours), description ("A custard pie, loaded with pecans."), ingredients with amounts and units (flaky pie crust, 1 count; shelled pecans, 2 cups; eggs, 5 count; ...), and numbered processing steps (1 Prebake crust. Place pecans on baking sheet and bake ... 2 Start the filling. 3 Beat the eggs. Beat in the sugar, salt, and butter ...).
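A rough sketch of what such an encoding can look like is shown below; apart from processingSteps, which the discussion mentions, the element names are invented for illustration and are not necessarily the exact tags used in Figure 4.

<recipe>
  <name>pecan pie</name>
  <yield>8 servings</yield>
  <prepTime>1.5 hours</prepTime>
  <description>A custard pie, loaded with pecans.</description>
  <ingredients>
    <!-- database-oriented mode: each element tagged separately -->
    <ingredient><item>flaky pie crust</item><amount>1</amount><unit>count</unit></ingredient>
    <ingredient><item>shelled pecans</item><amount>2</amount><unit>cup</unit></ingredient>
    <ingredient><item>eggs</item><amount>5</amount><unit>count</unit></ingredient>
  </ingredients>
  <processingSteps>
    <!-- text-oriented mode: one tag around the running text of the steps -->
    1 Prebake crust. Place pecans on baking sheet and bake ...
    2 Start the filling. 3 Beat the eggs. Beat in the sugar, salt, and butter ...
  </processingSteps>
</recipe>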
In Figure 4, ingredients are given in a database-oriented mode (each element tagged separately), processingSteps in a text-oriented mode (just one tag enclosing the whole text of the steps; for database-oriented tagging, steps would be broken down into separately tagged processes, with data, such as temperature and duration, tagged separately). These data can then be formatted for text output.

FIGURE 5. The Dublin Core (dc) elements for the description of document-like objects: title, creator, subject, description, publisher, contributor, date, type, format, identifier, source, language, relation, coverage, rights.
Cataloging and Metadata

The recipe/food database or the catalog of a Web store organizes the actual data from which users' questions can be answered. A library catalog organizes data about books, which in turn contain the data to answer questions; the library catalog stores data about data, or metadata, as do Web search engines and catalogs of educational materials. Metadata are stored and processed just like any other kind of data; whether a data item should be called metadata or just data is often a matter of perspective. The Resource Description Framework (RDF) has been designed to encode metadata but can be used to encode any data represented in the E-R approach.
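For example, the Figure 3 frame for shelled pecans could be written in RDF roughly as follows; the example.org namespace and the property names are invented for illustration, since RDF prescribes only the general subject-property-value structure.

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:food="http://example.org/food#">
  <rdf:Description rdf:about="http://example.org/food#shelledPecans">
    <food:hasName>shelled pecans</food:hasName>
    <food:comesFromSource rdf:resource="http://example.org/food#pecanTree"/>
    <food:comesFromPart>seed</food:comesFromPart>
    <food:underwentProcess>shelling</food:underwentProcess>
  </rdf:Description>
</rdf:RDF>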
FIGURE 6. Excerpts of a faceted classification for the food domain

Food type
  side dishes
    appetizers
    soups
    salads
  vegetable
  grain/starch dishes
    pasta
    grains
    breads
    pizza
  fish, poultry, meat
    fish
    poultry
    meat
  sweet baked dishes
    pies, tarts, pastries
    cookies, brownies, and cakes

Food source
  plant food source
    Juglandaceae
      Juglans (walnut)
      Carya (hickory)
        C. illinoensis (pecan)
    Compositae
      Cichorium
        C. intybus
        C. endivia
  animal food source
    vertebrates
      fish
      bird
      mammal
        Bovidae
          Bos (cattle)

Plant/animal part
  plant part
    below ground
      root
      tuber
    above ground
      stem
      leaves
      fruit (anat. part)
        seed
  animal part
    skeletal meat
    organ meat
      liver
    egg

Process
  mechanical process
    shelling
    peeling
    slicing
    grating
    crushing
  cooking process
    c. with dry heat
      baking
      broiling
    c. w. microwave
    c. w. moist heat
      boiling
      steaming
    c. with fat or oil
  freezing

Substance
  food substance
    bulk nutrient
      carbohydrate
        sugar
        starch
        fiber
          soluble f.
      protein
      fat
    trace nutrient
      vitamin
      mineral
  non-food substance
    preservative
      BHT
    package glue
There are many standards defining metadata elements for different kinds of objects, for example the Dublin Core (Figure 5). These are often encoded in XML; a record might give, for instance, the title How to Cook Everything, the creator Mark Bittman, the type cookbook, and the publisher Macmillan. (Not all records use all dc elements.) (The pecan pie example is based on a recipe in this cookbook, which also inspired the food type classification.)
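A sketch of this record in XML uses the standard Dublin Core element names; the enclosing record element and the exact markup conventions vary by application and are shown here only for illustration.

<record xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>How to Cook Everything</dc:title>
  <dc:creator>Mark Bittman</dc:creator>
  <dc:type>cookbook</dc:type>
  <dc:publisher>Macmillan</dc:publisher>
</record>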
Knowledge Organization Systems (KOS)

For the benefit of the user, a cookbook or a grocery store arranges like foods together, just as a library arranges books on one subject together and like subjects close to each other. Such arrangement requires a classification (or taxonomy), such as Figure 6, column 1, for foods, or the Dewey Decimal Classification for all subjects. To describe foods by their characteristics, we need, for each characteristic or facet, a classification of the possible values (the possible fillers for a given frame slot); examples of facets, each with a partial classification of values, are shown in Figure 6. A classification is a structure that organizes concepts into a meaningful hierarchy, possibly in a scheme of facets. The classification of living things is a taxonomy. (The term taxonomy is increasingly used for any type of classification.) A classification is now often called an ontology, particularly if it gives richer concept relationships. A classification deals with concepts, but we need terms (words or phrases) to talk about concepts. However, the relationships between language and concepts are complex. A concept can be expressed
by several terms, such as Belgian endive, French endive, witloof, chicory, and chicon, which all refer to the same vegetable; these terms are in a synonym relationship with each other. Conversely, a term may refer to several concepts, such as chicory, which refers (1) to a vegetable and (2) to a coffee substitute made from the root of the same plant; such a term has the property of being a homonym (in information retrieval, a character string with multiple meanings). A thesaurus is a structure that (1) manages the complexity of terminology by grouping terms that are synonymous with each other and disambiguating homonyms by creating a unique term for each meaning and (2) provides conceptual relationships, ideally through an embedded classification/ontology. A thesaurus often selects from a group of synonyms the term, such as Belgian endive, to be used as the descriptor for indexing and searching in a given information system; having one descriptor for each concept saves the searcher from having to enter several terms for searching. The descriptors so selected form a controlled vocabulary (authority list, index language). Figure 7 shows a typical thesaurus entry.

FIGURE 7. A typical thesaurus entry

Belgian endive
  DF  Vegetable consisting of the leaves of Cichorium intybus, growing in a small, cylindrical head.
  CO  vegetable : Cichorium intybus : leaves
  UF  chicon
      chiccory (vegetable) [spelling variant]
      chicory (vegetable)
      French endive
      witloof
  BT  head vegetable
      salad vegetable
  RT  chicory (coffee)

Symbols used: DF Definition; CO Combination; UF Used For; USE; BT Broader Term; NT Narrower Term; RT Related Term.

SEMANTIC WEB A common framework that allows data to be shared and reused across application, enterprise, and community boundaries.

Rich conceptual relationships can be shown graphically in concept maps, which are used particularly in education to aid understanding; they represent semantic networks, which a user or a computer can traverse along links from one concept to the next (a process called spreading activation). Conceptual and terminological relationships can be encoded for computer storage using the Topic Map standard or RDF, both implemented in XML.
Outlook

Information organization is important for people to find and understand information. It is also important for computer programs that process information to make decisions or give recommendations, for example in medical expert systems, electronic commerce (e-commerce), and semantic Web applications (where information organization is called "knowledge representation"). These applications require well-thought-out conceptual structures, which must be developed by beginning from scratch or by refining existing knowledge organization systems (KOS). The most serious challenge is ensuring the interoperability of KOS and metadata schemes worldwide so that different systems can talk to each other.

Dagobert Soergel

See also Expert Systems; Information Retrieval; Markup Language; Ontology
FURTHER READING Bailey, K. D. (1994). Typologies and taxonomies: An introduction to classification techniques. Thousand Oaks, CA: Sage Publications. Dodds, D. (2001). Professional XML metadata. Hoboken, NJ: Wrox. Jonassen, D. H., Beissner, K., & Yacci, M. (1993). Structural knowledge: Techniques for representing, conveying and acquiring structural knowledge. Hillsdale, NJ: Lawrence Erlbaum. Lancaster, F. W. (1972). Vocabulary control for information retrieval. Washington, DC: Information Resources Press.
Lynch, P., & Horton, S. (2002). Web style guide: Basic design principles for creating Web sites (2nd ed.). New Haven, CT: Yale University Press. Milstead, J., & Feldman, S. (1999). Metadata: Cataloging by any other name . . . Metadata projects and standards. Online, 23(1), 24–40. Retrieved January 22, 2004, from www.infotoday.com/online/OL1999/milstead1.html Mondeca topic organizer. Retrieved January 22, 2004, from http://www.mondeca.com/ Ray, E. (2003). Learning XML (2nd ed.). Sebastopol, CA: O'Reilly. Rob, P., & Coronel, C. (2004). Database systems: Design, implementation, and management (6th ed.). Boston: Course Technology. Rosenfeld, L., & Morville, P. (2002). Information architecture for the World Wide Web: Designing large-scale web sites (2nd ed.). Sebastopol, CA: O'Reilly. Skemp, R. R. (1987). The psychology of learning mathematics. Hillsdale, NJ: Lawrence Erlbaum. Soergel, D. (1974). Indexing languages and thesauri: Construction and maintenance. New York: Wiley. Soergel, D. (2000). ASIST SIG/CR Classification Workshop 2000: Classification for user support and learning: Report. Knowledge Organization, 27(3), 165–172. Soergel, D. (2003). Thesauri and ontologies in digital libraries. Retrieved January 22, 2004, from http://www.clis.umd.edu/faculty/soergel/SoergelDLThesTut.html Sowa, J. F. (2000). Knowledge representation: Logical, philosophical and computational foundations. Pacific Grove, CA: Brooks/Cole. Staab, S., & Studer, R. (Eds.). (2004). Handbook on ontologies in information systems. Heidelberg, Germany: Springer. Taylor, A. G. (2003). The organization of information (2nd ed.). Westport, CT: Libraries Unlimited. Vickery, B. C. (1960). Faceted classification: A guide to construction and use of special schemes. London: Aslib. Vickery, B. C. (2000). Classification and indexing in science. Burlington, MA: Butterworth-Heinemann. Zapthink. (2002). Key XML specifications and standards. Retrieved January 22, 2004, from http://www.oasis-open.org/committees/download.php/173/xml%20standards.pdf
INFORMATION OVERLOAD

Information overload is the confusion that results from the availability of too much information. When a person is trying to find information or solve a problem and there is an overabundance of information available, the result is confusion and an inability to filter and refine the onslaught of information so that it is possible to make a decision. Information overload has a personal, social context too.
In the Beginning

During the 1960s mainframe computers were programmed with punch cards and magnetic tapes that made such innovations as automated accounting systems and space travel possible. In a short period of time, the development of personal computers and computer networking made it possible for people to have the power of a mainframe on their desktop. People could write their own documents and memos and use spreadsheet software to perform numerical analysis with relative ease. Today the presence of the Internet puts the world at one's fingertips. Any product or service that a person wants to buy is readily available, not from two companies, but from over two hundred. The terms information overload, infoglut, schologlut, and data smog all refer to the same topic—too much information to comprehend. The following examples demonstrate a variety of contexts for information overload:

■ A person wants a book on gardening. A search of gardening books on the Amazon.com website yields 30,707 possible books.
■ An executive begins the day by logging into her e-mail. She opens Microsoft Outlook only to find she starts the day with sixty e-mail messages; ten of them are advertisements.
■ A person is watching television. He has Direct TV with subscriptions to movie and sports channels and can choose from over 500 different stations.
■ A national security advisor receives 200 messages a day about potential threats to national security.
Limits to Human Cognition

The first study related to information overload was done in 1956 by George A. Miller. In his work on the human ability to retain pieces of information, Miller discovered that people generally can retain seven plus or minus two pieces of information at any point in time. When the number of items of information is much greater than that, cognitive overload occurs and retention is hindered. Miller also described "chunking" of information. By that he meant that a person can retain more than seven pieces of information by grouping, or chunking, like pieces of information together.
This seminal study in psychology went on to form the practical basis for graphical user interface design, providing guidance on how many application windows should be open at one time and how to nest menus. A model for information overload that defines characteristics and symptoms of information overload was developed by Schneider (1987). Factors that influence and exacerbate information overload include uncertainty, ambiguity, novelty, complexity, intensity, and amount and rate of input. These factors and organizational or environmental conditions work together to create information overload. Primary symptoms of overload listed by Schneider are loss of integration, loss of differentiation, and confusion. The amount of information we have and the massive access we have to it are two of the major contributors to information overload. The age of ubiquitous computing—being able to access, use, and communicate by computer, anytime, any place, anywhere—is here. Current estimates are that over 2 billion webpages exist on the Internet. There are over ten thousand scholarly journals and databases. The amount of information conveyed is gigantic. Equally amazing is the ease with which we can access the Internet, databases, phone, radio, and television from anywhere we travel. Professionals often take laptops and check e-mail while on vacation. Even children carry cellular phones. Technology exists in every aspect of our daily lives, and there is little escape from it. Increasingly, people identify the social ills created by information overload. In a speech by Tim Sanders, a Yahoo executive, information overload was blamed for the rise in work stress (Soto 2003). Co-workers are continually interrupted by technologies such as instant messaging and e-mail. They've become so confined by technology that they send a message to someone 5 feet away rather than talk to that person face-to-face.
Technical Solutions to Information Overload

Some estimates indicate that 80 percent of people abandon electronic purchases at the checkout. Other
people, when faced with massive amounts of information to make a decision, simply shut down and make their decisions based upon "instinct." However, quitting or ignoring information is not a viable way to make many decisions. Instead, the problems of information overload created by technology can also be alleviated by technology in the following ways:

■ Filtering tools: A good example of information filtering is using a web browser. When you search for "Saturn" (the planet), you may end up with links to cars, planets, and nightclubs. You can further refine your search to include the word planet and exclude the words car and club. The more narrowly you define your search, the more likely you are to get the information you are looking for. E-mail applications also come with filters. If you get a great number of junk e-mail messages asking you to refinance your mortgage, you can set a filter to block mail from a specific address or mail that contains specific keywords. The messages you filter out can be deleted from your computer without your viewing them. If you set the filter narrowly enough, there will be a reduction in the amount of information you need to process. The danger is that you may exclude information that could be of potential value.
■ Intelligent agents: An intelligent agent is a software program that can actively seek information for you based on parameters you set. It differs from a search engine or information filter in that it actively seeks specific information while you are doing other things. If you are an osteopath, for example, your agent can actively, continually seek the latest research on bone fractures. When it finds something relevant, it will bring it to your attention.
■ Web agents work using cookies (pieces of information from websites recorded on your hard drive) to track user preferences and provide additional information of value to users. For example, if you are shopping for a digital camera, a web agent can supply you with special deals on cameras, competing products, and accessories available for the camera in which you are interested.
■ Prioritizing schemes: One of the weakest aspects of information-seeking technology is that it does
not often rate the quality of the information it finds. A search engine provides information based on algorithms for indexing pages and the content (meta tags, header information, page keywords) of the pages that are indexed. Different search engines yield different results because they use different indexing techniques. Pages, sometimes unrelated to what you are seeking, are found because the authors designed them for success in online searches. To overcome this, some searching tools, such as periodical indexes and site searches, do provide an estimate of the quality of the found information. Often, the rating of quality is given as a percent or as an icon showing a fill bar indicating relevance. Many applications and websites offer quick prioritization schemes to allow the user to see information based on what is important to them. For example, in purchasing a digital camera, I am allowed to order my information according to price, popularity, brand, or relevance. Prioritizing gives users a quick way to focus on what they find relevant or important. However, the prioritization provided by the website or database is usually a generic scheme to help all users. A more personalized approach to handling information can be found through personal web portals.
■ Personalized portals: A personalized portal is an access point for information based on a user's personal preferences. For example, a farmer could use a portal that monitored weather, costs associated with harvesting and delivery of food, and competitors' prices. The rich information that the farmer needs to make decisions is contained in one portal interface, showing no distracting or confounding additional information.
■ Design: In addition to filtering and seeking technologies, design and visualization can assist with information overload too. In web development, simple design concepts can assist the user in finding the information they seek. The concept of proximity—putting like items together—helps the user chunk large amounts of information into fewer, more manageable pieces. Information-intensive websites provide a clear visual
hierarchy in which like items are grouped. If the item group is not relevant to the user, the user can easily ignore the information that is not relevant. For example, if I go to the Newsweek site to read about business, I can quickly ignore featured stories, opinion polls, and multimedia offerings because they are all located somewhere else on the site, clearly distinguished from the magazine's regular business columns. Good design can make finding information easy. However, sites are also designed to get a person to do something. While the user's goal might be to buy the cheapest camera, the purpose of the site may be to sell overstocked items. In this case the design could draw the user to special offers instead of facilitating the search for a quality camera.
Making Optimal Decisions

Information is important because it assists us in making decisions. With perfect information, we can make the optimal decision. When faced with problems, people seek as much information as they can find to support a rational decision that will best help them solve their problems. They optimize their decisions based on sought-out information. Given too little, too much, or contradictory information, people are forced to find solutions that are satisfactory. This is called "satisficing," or choosing a solution that will work, even if it is not the optimal solution. In his work on information filtering (1987), Thomas Malone concedes that the value of technology is not so much in eliminating unwanted information as it is in seeking information that is relevant. To this end we see an explosion of technological devices and sources filling the market. As the number of products increases, so do the technological innovations for managing those products. Consider, as an example, the invasiveness of telemarketers in the United States. Their access to individual phone numbers and other private information led to the development of new telecommunications "blocking" tools: telephone technologies that now display who is calling and provide call-blocking and call-waiting services, all for a fee. Ultimately, a national "Do Not Call" list was
implemented so people could elect to have their numbers protected from telemarketer harassment. Now technology is used to check the list before the call is made, rather than users having to purchase their own blocking devices. The growth of information, access, and products ensures that overload is a consequence of the information age. As a result, innovations and tools to cope with information overload will continue to be developed, along with social policies and norms to reduce overload.

Ruth A. Guthrie

FURTHER READING

Malone, T. W., Grant, K. R., Turbak, F. A., Brobst, S. A., & Cohen, M. D. (1987). Intelligent information sharing systems. Communications of the ACM, 30(5), 390–402.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97.
Schneider, S. C. (1987). Information overload: Causes and consequences. Human Systems Management, 7, 143–153.
Soto, M. (2003, August 8). The toll of information overload: Too much technology diminishes work relationships. Seattle Times, p. C1.
Tushman, M. L., & Nadler, D. A. (1978). Information processing as an integrating concept in organizational design. Academy of Management Review, 3, 613–624.
INFORMATION RETRIEVAL

Information retrieval systems are everywhere: Web search engines, library catalogs, store catalogs, cookbook indexes, and so on. Information retrieval (IR), also called information storage and retrieval (ISR or ISAR) or information organization and retrieval, is the art and science of retrieving from a collection of items a subset that serves the user's purpose; for example:

■ webpages useful in preparing for a trip to Europe;
■ magazine articles for an assignment or good reading for that trip to Europe;
■ educational materials for a learning objective;
■ digital cameras for taking family photos;
■ recipes that use ingredients on hand;
■ facts needed for deciding on a company merger.
The main trick is to retrieve what is useful while leaving behind what is not.
The Scope of IR

IR systems are part of a family that shares many principles (Figure 1). Two distinctions are of particular importance:

1. A system for unstructured information deals with such questions as: The economic impact of the Reformation, The pros and cons of school uniforms, or Find a nice picture of my niece. It finds documents that are more or less useful; the user must then extract the data needed. In contrast, a system for well-structured information deals with precise questions and returns precise answers, exactly the small pieces of data needed: the salary of Mrs. Smith; the population of China; the winner of the 1997 World Series.

2. Finding versus creating answers. IR and database systems merely find what is already there: for example, from a patient database, a patient's symptoms; from a disease database, the diseases these symptoms point to (or a medical textbook from which to extract this information); and from a drug database, the drugs that treat a disease. A physician must then absorb all this information, derive a diagnosis, and prescribe a drug. A medical expert system goes beyond just finding the facts—it creates new information by inference: It identifies a disease that explains the patient's symptoms and then finds a drug for the disease.
FIGURE 1. The IR system family. The figure arranges related system types along two dimensions. One dimension distinguishes unstructured information (text, images, sound) from structured information. The other distinguishes finding answers and information that already exist in a system, whether by navigation (following links, as in a subject directory and the Web generally) or by query (as in Google), from creating answers and new information by analysis and inference, based on a query. Hypermedia systems (many small units, such as paragraphs and single images, tied together by links) and IR systems (often dealing with whole documents, such as books and journal articles) handle unstructured information; database management systems (DBMS) handle structured information; data analysis systems and expert systems create answers and new information by analysis and inference.

The Objects of IR

Traditionally, IR has concentrated on finding whole documents consisting of written text; much IR research focuses more specifically on text retrieval—the computerized retrieval of machine-readable text without human indexing. But there are many other interesting areas:
■ Speech retrieval, which deals with speech, often transcribed manually or (with errors) by automated speech recognition (ASR).
■ Cross-language retrieval, which uses a query in one language (say English) and finds documents in other languages (say Chinese and Russian).
■ Question-answering IR systems, which retrieve answers from a body of text. For example, the question "Who won the 1997 World Series?" finds a 1997 headline "World Series: Marlins are champions."
■ Image retrieval, which finds images on a theme or images that contain a given shape or color.
■ Music retrieval, which finds a piece when the user hums a melody or enters the notes of a musical theme.
■ IR dealing with any other kind of entity or object: works of art, software, courses offered at a university, people (as experts, to hire, for a date), products of any kind.
Text, speech, and images, printed or digital, carry information, hence information retrieval. Not so for other kinds of objects, such as hardware items in a store. Yet IR methods apply to retrieving books or
people or hardware items, and this article deals with IR broadly, using “document” as stand-in for any type of object. Note the difference between retrieving information about objects (as in a Web store catalog) and retrieving the actual objects from the warehouse.
Utility, Relevance, and IR System Performance

Utility and relevance underlie all IR operations. A document's utility depends on three things: topical relevance, pertinence, and novelty. A document is topically relevant for a topic, question, or task if it contains information that either directly answers the question or can be used, possibly in combination with other information, to derive an answer or perform the task. It is pertinent with respect to a user with a given purpose if, in addition, it gives just the information needed; is compatible with the user's background and cognitive style so he can apply the information gained; and is authoritative. It is novel if it adds to the user's knowledge. Analogously, a soccer player is topically relevant for a team if her
abilities and playing style fit the team strategy, pertinent if she is compatible with the coach, and novel if the team is missing a player in her position. Utility might be measured in monetary terms: "How much is it worth to the user to have found this document?" "How much is this player worth to us?" "How much did we save by finding this
software?” In the literature, the term “relevance” is used imprecisely; it can mean utility or topical relevance or pertinence. Many IR systems focus on finding topically relevant documents, leaving further selection to the user. Relevance is a matter of degree; some documents are highly relevant and indispensable for
FIGURE 2. Query descriptions compared with document or story titles (/ = relevant)

Query description: Production and uses of plastic pipes
Document titles:
1 The production of copper pipes
2 Cost of plastic pipe manufacture   /
3 Polyethylene water pipes   /
4 Steel rod manufacture
5 Spiral PVC tubes as cooling elements   /
6 Innovative plastic surface for new city artery
7 Artificial arteries help heart bypass patients   /
8 Plastic mouthpieces in making smoking pipes
As the examples show, simple word match is often not enough; retrieving documents and assessing relevance require knowledge: The system needs to know that polyethylene and PVC are plastics, that tube is another word for pipe, and that artery in the context of 6 means a major street and in 7 a pipe in the body, usually made of plastic.

Query description: Bioinformatics
Document titles:
1 Bioinformatics   /
2 Computer applications in the life sciences   /
3 Biomedical informatics   /
4 Modeling life processes   /
5 Modeling traffic flow
6 Modeling chemical reactions in the cell   /
Bioinformatics is the application of sophisticated computer methods to studying biology. This is another illustration of the variability of language IR systems must deal with.

Query description: Jewish-Gentile relations
Story titles:
1 We played with our non-Jewish friends.   /
2 We were taunted in school.   /
3 Aryan people had many advantages.
4 My mother talked often to the neighbors.   /
5 Jews were deported to concentration camps.
6 Jews were forbidden to attend concerts.   /
This could be a question to the Shoah Foundation's collection of transcribed testimonies from Holocaust survivors. None of the stories that shed light on this question has the query phrase in it. Relevance must be inferred from the entire context.
the user's tasks; others contribute just a little bit and could be missed without much harm (see ranked retrieval in the section on Matching). From relevance assessments we can compute measures of retrieval performance, such as:

recall = How good is the system at finding relevant documents?
discrimination = How good is the system at rejecting irrelevant documents?
precision = Depends on discrimination, recall, and the number of relevant documents.

Evaluation studies commonly use recall and precision or a combination; whether these are the best measures is debatable. With low precision, the user must look at several irrelevant documents for every relevant document found. More sophisticated measures consider the gain from a relevant document and the expense incurred by having to examine an irrelevant document. For ranked retrieval, performance measures are more complex. All of these measures are based on assessing each document on its own, rather than considering the usefulness of the retrieved set as a whole; for example, many relevant documents that merely duplicate the same information just waste the user's time, so retrieving fewer relevant documents would be better.
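For reference, recall and precision are usually given quantitatively by the following ratios (standard definitions, added here for convenience):

\[
\text{recall} = \frac{\text{number of relevant documents retrieved}}{\text{number of relevant documents in the collection}},
\qquad
\text{precision} = \frac{\text{number of relevant documents retrieved}}{\text{total number of documents retrieved}}
\]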
How Information Retrieval Systems Work

IR is a component of an information system. An information system must make sure that everybody it is meant to serve has the information needed to accomplish tasks, solve problems, and make decisions, no matter where that information is available. To this end, an information system must (1) actively find out what users need, (2) acquire documents (or computer programs, or products, or data items, and so on), resulting in a collection, and (3) match documents with needs.

EXPERT SYSTEM A computer system that captures and stores human problem-solving knowledge or expertise so that it can be used by other, typically less-knowledgeable people.

Determining user needs involves (1.1) studying user needs in general as a basis for designing responsive systems (such as determining what information students typically need for assignments), and (1.2) actively soliciting the needs of specific users, expressed as query descriptions, so that the system can provide the information (Figure 2). Figuring out what information the user really needs to solve a problem is essential for successful retrieval. Matching involves taking a query description and finding relevant documents in the collection; this is the task of the IR system (Figure 2). The simplest text retrieval systems merely compare words in the query description with words in the documents (title, abstract, or full text) and rank documents by the number of matches, but results are often poor (Figure 2). A good IR system provides the access points required to respond to user needs in retrieval and selection. This means preparing user-oriented document representations (Figure 3) that describe a document by several statements using Relationships as verbs and Entities as subjects and objects. The allowable Entity Types and Relationship Types define what kinds of information the system can store; they make up the conceptual schema. For some entity types (in the example Person, Text, Phrase, and URL), values can be freely chosen; for others (Subject and Function), values come from a controlled vocabulary that fixes the term used for a concept. For example, pipe is used for the concept also known as tube, so the user needs to enter only one term. If the user enters tube, the system (or the user) follows the thesaurus cross-reference

tube USE ST pipe        (ST = Synonymous Term)

The thesaurus also includes conceptual cross-references:

pipe BT hollow object   (BT = Broader Term)

and

pipe NT capillary       (NT = Narrower Term)
FIGURE 3. Document representation as a group of statements

Statement                                                        Data field
Document   Person     John Smith                                 Author
Document   Text       Artificial arteries help heart ...        Title
Document   Text       A clinical study ... showed that ...      Abstract
Document   Phrase     artificial arteries                       Free text
Document   Subject    Blood Vessel Prosthesis                   Descriptor
Document   Function   Coronary Artery Bypass                    Function
Document   URL        www.healtheduc.com/heart/...              URL
(For the structure of thesauri, see the article on Information Organization.) The conceptual schema and the thesaurus must of course reflect user needs. If an entity (such as a document or a data file) is sought as a source of data/information, the data about the entity are used as metadata (data describing data); thus, the data in Google’s catalog of Web pages are used primarily as metadata.
Steps in the IR Process

An IR system prepares for retrieval by indexing documents (unless the system works directly on the document text) and formulating queries, resulting in document representations and query representations, respectively; the system then matches the representations and displays the documents found, and the user selects the relevant items. These processes are closely intertwined and dependent on each other. The search process often goes through several iterations: Knowledge of the features that distinguish relevant from irrelevant documents is used to improve the query or the indexing (relevance feedback).

Indexing: Creating Document Representations

Indexing (also called cataloging, metadata assignment, or metadata extraction) is the manual or
automated process of making statements about a document, lesson, person, and so on, in accordance with the conceptual schema (see Figure 3). We focus here on subject indexing—making statements about a document's subjects. Indexing can be document-oriented (the indexer captures what the document is about) or request-oriented (the indexer assesses the document's relevance to subjects and other features of interest to users); for example, indexing the testimonies in Figure 2 with Jewish-Gentile relations, marking a document as interesting for a course, or marking a photograph as publication quality. Related to indexing is abstracting—creating a shorter text that describes what the full document is about (indicative abstract) or even includes important results (informative abstract, summary). Automatic summarization has attracted much research interest. Automatic indexing begins with raw feature extraction, such as extracting all the words from a text, followed by refinements, such as eliminating stop words (and, it, of), stemming (pipes → pipe), counting (using only the most frequent words), and mapping to concepts using a thesaurus (tube and pipe map to the same concept). A program can analyze sentence structures to extract phrases, such as labor camp (a Nazi camp where Jews were forced to work, often for a company; phrases can carry much meaning). For images, extractable features include
color distribution or shapes. For music, extractable features include frequency of occurrence of notes or chords, rhythm, and melodies; refinements include transposition to a different key. Raw or refined features can be used directly for retrieval. Alternatively, they can be processed further: The system can use a classifier that combines the evidence from raw or refined features to assign descriptors from a pre-established index language. To give an example from Figure 2, the classifier uses the words life and model as evidence to assign bioinformatics (a descriptor in Google's directory). A classifier can be built by hand by treating each descriptor as a query description and building a query formulation for it as described in the next section. Or a classifier can be built automatically by using a training set, such as the list of documents for bioinformatics in Figure 2, for machine learning of what features predict what descriptors. Many different words and word combinations can predict the same descriptor, making it easier for users to find all documents on a topic. Assigning documents to (mutually exclusive) classes of a classification is also known as text categorization. Absent a suitable classification, the system can produce one by clustering—grouping documents that are close to each other (that is, documents that share many features).

Query Formulation: Creating Query Representations

Retrieval means using the available evidence to predict the degree to which a document is relevant or useful for a given user need as described in a free-form query description, also called topic description or query statement. The query description is transformed, manually or automatically, into a formal query representation (also called query formulation or query for short) that combines features that predict a document's usefulness. The query expresses the information need in terms of the system's conceptual schema, ready to be matched with document representations. A query can specify text words or phrases the system should look for (free-text search) or any other entity feature, such as descriptors assigned from a controlled vocabulary, an author's organization, or the title of the journal where a
document was published. A query can simply give features in an unstructured list (for example, a "bag of words") or combine features using Boolean operators (structured query). Examples:

Bag of words: (pipe tube capillary plastic polyethylene production manufacture)

Boolean query: (pipe OR tube OR capillary) AND (plastic OR polyethylene) AND (production OR manufacture)
The Boolean query specifies three ANDed conditions, all of which are necessary (contribute to the document score); each condition can be filled by any of the words joined by OR; one of the words is as good as two or three. If some relevant documents are known, the system can use them as a training set to build a classifier with two classes: relevant and not relevant. Stating the information need and formulating the query often go hand-in-hand. An intermediary conducting a reference interview helps the user think about the information need and find search terms that are good predictors of usefulness. An IR system can show a subject hierarchy for browsing and finding good descriptors, or it can ask the user a series of questions and from the answers construct a query. For buying a digital camera, the system might ask the following three questions:

■ What kind of pictures do you take (snapshots, stills, ...)?
■ What size prints do you want to make (5×7, 8×10, ...)?
■ What computer do you want to transfer images to?
Without help, users may not think of all the features to consider. The system should also suggest synonyms and narrower and broader terms from its thesaurus. Throughout the search process, users further clarify their information needs as they read titles and abstracts.

Matching the Query Representation with Entity Representations

The match uses the features specified in the query to predict document relevance.
FIGURE 4. Computing relevance scores

Query term (weight in query):   housing (weight 2)   conditions (1)   Siemens (2)        "labor camps" (3)
idf, log(idf):                  10,000, log = 4      100, log = 2     100,000, log = 5   10,000, log = 4

Term frequencies per document, term (tf), and the resulting score:

Doc. 1   barracks (5 times)   conditions (3)    Siemens (2)   "labor camps" (4)   40 + 6 + 20 + 48 = 114
Doc. 2   housing (3 times)    conditions (2)    Siemens (2)   "labor camps" (4)   96
Doc. 3   housing (3 times)    conditions (4)    Siemens (1)   "labor camps" (4)   90
Doc. 4   housing (3 times)    conditions (3)    Siemens (2)   "labor camps" (3)   86
Doc. 5   housing (2 times)    conditions (10)   —             "labor camps" (1)   48
In exact match the system finds the documents that fill all the conditions of a Boolean query (it predicts relevance as 1 or 0). To enhance recall, the system can use synonym expansion (if the query asks for pipe, it finds tubes as well) and hierarchic expansion or inclusive searching (it finds capillary as well). Since relevance or usefulness is a matter of degree, many IR systems (including most Web search engines) rank the results by a score of expected relevance (ranked retrieval). Consider the query Housing conditions in Siemens labor camps. Figure 4 illustrates a simple way to compute relevance scores: Each term's contribution is the product of three factors: the query term weight (the importance of the term to the user), the term frequency (tf) (the number of occurrences of the term in the document; synonyms count also), and the rarity of the term, or inverse document frequency (idf), on a logarithmic scale. If the document frequency is .01 (1 percent or 1/100 of all documents include the term), then idf = 100 or 10² and log(idf) = 2. For example, in Figure 4 the contribution of housing to the relevance score of Document 1 is query weight 2 × log(idf) 4 × tf (term frequency in the document) 5 = 40. (Google considers, in addition, the number of links to a webpage.) Usually (but not in the simple example), scores are normalized to a value between 0 and 1.
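Written as a formula, this simple scoring scheme (one common formulation; operational systems differ in detail) is

\[
\text{score}(d, q) \;=\; \sum_{t \in q} w_q(t) \cdot tf(t, d) \cdot \log\bigl(idf(t)\bigr)
\]

where \(w_q(t)\) is the weight of term \(t\) in the query \(q\), \(tf(t, d)\) is the frequency of \(t\) (counting synonyms) in document \(d\), and \(idf(t)\) is the inverse document frequency of \(t\).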
Selection

The user examines the results and selects relevant items. Results can be arranged in rank order (examination can stop when enough information is found); in subject groupings, perhaps created by automatic classification or clustering (similar items can be examined side by side); or by date. Displaying title + abstract with search terms highlighted is most useful (title alone is too short, the full text too long). Users may need assistance with making the connection between an item found and the task at hand.

Relevance Feedback and Interactive Retrieval

Once the user has assessed the relevance of a few items found, the query can be improved: The system can assist the user in improving the query by showing a list of features (assigned descriptors, text words and phrases, and so on) found in many relevant items and another list from irrelevant items. Or the system can improve the query automatically by learning which features separate relevant from irrelevant items and thus are good predictors of relevance. A simple version of automatic query adjustment is this: increase the weights of features from relevant items and decrease the weights of features from irrelevant items.
IR System Evaluation

IR systems are evaluated with a view to improvement (formative evaluation) or with a view to selecting the best IR system for a given task (summative evaluation). IR systems can be evaluated on system characteristics and on retrieval performance. System characteristics include the following:

■ the quality of the conceptual schema (Does it include all information needed for search and selection?);
■ the quality of the subject access vocabulary (index language and thesaurus) (Does it include the necessary concepts? Is it well structured? Does it include all the synonyms for each concept?);
■ the quality of human or automated indexing (Does it cover all aspects for which an entity is relevant at a high level of specificity, while avoiding features that do not belong?);
■ the nature of the search algorithm;
■ the assistance the system provides for information needs clarification and query formulation; and
■ the quality of the display (Does it support selection?).

Measures for retrieval performance (recall, discrimination, precision, novelty) were discussed in the section Utility, Relevance, and IR System Performance. Requirements for recall and precision vary from query to query, and retrieval performance varies widely from search to search, making meaningful evaluation difficult. Standard practice evaluates systems through a number of test searches, computing for each a single measure of goodness that combines recall and precision, and then averaging over all the queries. This does not address a very important system ability: the ability to adapt to the specific recall and precision requirements of each individual query. The biggest problem in IR evaluation is to identify beforehand all relevant documents (the recall base); small test collections have been constructed for this purpose, but there is a question of how well the results apply to large-scale real-life collections. The most important evaluation efforts of this type today are TREC and TDT (see Further Reading).

Outlook: Beyond Retrieval

Powerful statistical and formal-syntax-based methods of natural language processing (NLP) extract meaning from text, speech, and images and create detailed metadata for support of more focused searching. Data mining and machine learning discover patterns in large masses of data. Sophisticated database and expert systems search and correlate huge amounts of different types of data (often extracted from text) and answer questions by inference. New visualization techniques using high-resolution displays allow users to see patterns and large networks of linked information. Sophisticated user models allow intelligent customization. IR can be integrated into day-to-day work: A medical IR system can process a patient's chart, find several relevant articles, and prepare a tailor-made multi-document summary, or it can deduce the drugs to be prescribed. A legal IR system can take an attorney's outline of the legal issues in a case, find relevant cases or sections of cases, and arrange them according to the outline to give the attorney a running start on writing a brief. All these advances contribute to an unprecedented level of support for problem solving, decision making, and intellectual work.
Dagobert Soergel See also Information Filtering; Information Organization; Ontology; Search Engines FURTHER READING Baeza-Yates, R., & Rubiero-Neto, B. (1999). Modern information retrieval. Reading, MA: Addison Wesley. Berners-Lee, T., Hendler, J., Lassila, O. (2001). The semantic web. Scientific American, 284(5), 34–43, Retrieved January 22, 2004, from http://www.sciam.com Blair, D. C. (1990). Language and representation in information retrieval. Amsterdam: Elsevier Science. Brin, S., & Page, L. (1998). The anatomy of a large-scale hypertextual web search engine. 7th International World Wide Web Conference (WWW7). Computer Networks and ISDN Systems, 30(XXX). Retrieved January 22, 2004, from www-db.stanford.edu/ ~backrub/google.html, www7.scu.edu.au/programme/fullpapers/ 1921/com1921.htm, decweb.ethz.ch/WWW7/00/ Boiko, B. (2002). Content management bible. New York: Hungry Minds. Retrieved January 22, 2004, from http://metatorial.com/index.asp Chu, H. (2003). Information representation and retrieval in the digital age. Medford, NJ: Information Today.
Feldman, S. (1999). NLP Meets the Jabberwocky: Natural language processing in information retrieval. ONLINE, May 1999. Retrieved January 22, 2004, from www.onlinemag.net/OL1999/feldman5.html Feldman, S. (2000). The answer machine. Searcher, 8(1), 1–21, 58–78. Retrieved January 22, 2004, from http://www.infotoday.com/ searcher/jan00/feldman.htm Frakes, W. B., & Baeza-Yates, R. (Eds.). (1992). Information retrieval: Data structures and algorithms. Englewood Cliffs, NJ: Prentice Hall. Hert, C. A. (1997). Understanding information retrieval interactions: Theoretical and practical applications. Stamford, CT: Ablex. Jackson, P., & Moulinier, I. (2002). Natural language processing for online applications: Text retrieval, extraction, and categorization. Amsterdam: John Benjamins. Negnevitsky, M. (2001). Artificial intelligence: A guide to intelligent systems. Reading, MA: Addison Wesley. Soergel, D. (1985). Organizing information: Principles of database and retrieval systems. Orlando, FL: Academic Press. Soergel, D. (1994). Indexing and retrieval performance: The logical evidence. Journal of the American Society for Information Science, 4(8), 589–599. Sparck Jones, K., & Willett, P. (1997). Readings in information retrieval. San Francisco, CA: Morgan Kaufmann. Text REtrieval Conference (TREC), cosponsored by the National Institute of Standards and Technology (NIST) and the Defense Advanced Research Projects Agency (DARPA). Retrieved January 22, 2004, from http://trec.nist.gov/ Wilson, P. (1973). Situational relevance. Information Storage and Retrieval, 9(8), 457–471. Witten, J., & Bainbridge, D. (2002). How to build a digital library. San Francisco, CA: Morgan Kaufmann.
INFORMATION SPACES

In all walks of life the flood of data is ever increasing. Data are raw measurements—meteorological, stock market, or traffic flow data, for example. Information is data that have more structure. Such data have been subjected to preprocessing and linked with other data, leading to greater meaningfulness to people as they make decisions in their work or leisure. In addition to data and information, a third term that characterizes the life cycle of human decision making is knowledge. Thus, we have a simple path that starts with raw quantitative or qualitative data (plural of datum, Latin for "given"). This path leads on to information, which we could call "potentially meaningful data" or the set of components that is usable in making decisions. Finally the path leads to knowledge—information that has been deployed in decision making and is relevant to people.
DATA MINING The process of information extraction with the goal of discovering hidden facts or patterns within databases.
Information spaces can facilitate finding and using information. They involve representations, most often spatial or similar (e.g., locations and interconnections in graph form).
The Geometric Metaphor People often consider information spaces in geometric terms, and geometry is more often than not the basis for sophisticated graphical display of data. Consider the following setting: We have three books characterized by four themes or three individuals characterized by four behavioral characteristics. This semantic aspect is not immediately relevant to us. A data table or cross-tabulation of our books and themes can be represented in an information space as three points (rows), each possessing four coordinates or dimensions. We could equally well consider a space of four points (columns), each possessing three coordinates or dimensions. Various mathematically well-defined algorithmic techniques are available to us as software to display such an information space or to process it by giving priority to important aspects of it. One such technique, known as “principal components analysis” (PCA), redefines the coordinates or dimensions in order to determine a better expression of the points considered. Interestingly, a close mathematical relationship exists between the analysis of the four three-dimensional points and the three four-dimensional points. Simultaneous display is possible in a kindred technique called “correspondence analysis.” The geometric metaphor is a powerful one. It allows any quantitatively characterized data set to be displayed and processed. It also allows qualitative data sets to be processed through the simple device of taking the qualitative attributes as quantitative ones (e.g., presence versus absence scored as 1 or 0). Origins or sources of data can be varied. In databases data are structured in defined fields and
in specified relationships. An online data feed in finance or meteorology, where data are continually captured or generated, may be partially structured. On the other hand, data from the World Wide Web or from the news media—think of national financial and business confidence indicators, for example—are partially structured at best. Raw text or tables drawn from published articles are usually taken as unstructured. A first task in viewing such data in information space terms is to impose structure on the data. People often use the term coding for assigning numerical values to keywords or reformulating originally numerical data. A second task is normalization, that is, ensuring that the data are consistent and inherently comparable, so that no measurement can “shout louder” than the others. Subsequent tasks are defined by our objectives in seeking information. A large range of statistical, neural network, machine learning, or data mining approaches to processing data could be relevant.
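As an illustration of the coding, normalization, and projection steps just described, the following is a minimal Python sketch using only NumPy. The three-books-by-four-themes table and its numbers are invented purely for illustration; this is a sketch of the general approach, not the computation used in any particular system described here.

import numpy as np

# Hypothetical data table: 3 books (rows) scored on 4 themes (columns).
X = np.array([
    [5.0, 1.0, 0.0, 2.0],
    [4.0, 0.0, 1.0, 3.0],
    [0.0, 6.0, 5.0, 1.0],
])

# Normalization: center each theme and scale to unit variance so that
# no single measurement "shouts louder" than the others.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Principal components analysis: eigenvectors of the covariance of the
# standardized data define a new coordinate system.
cov = np.cov(Z, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]          # strongest axes first
projection = Z @ eigenvectors[:, order[:2]]    # best-fitting plane

print(projection)   # each book as a point in a two-dimensional display space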
Visualization-Based User Interfaces Visualizing information and data stored in databases or in unstructured or semistructured repositories is important for the following reasons: 1. It allows the user to have some idea before submitting a query as to what type of outcome is possible. Hence, visualization is used to summarize the contents of the database or data collection (i.e., information space). 2. The user’s information requirements are often fuzzily defined at the outset of the information search. Hence, visualization is used to help the user in information navigation by signaling related items, by showing relative density of information, and by inducing a (possibly fuzzy) categorization on the information space. 3. Visualization can therefore help the user before the user interacts with the information space
INFORMATION SPACES Representations, most often spatial or similar (e.g., locations and interconnections in graph form), that facilitate finding and using information.
and as the user interacts. The progression is natural enough that the visualization becomes the user interface. Information retrieval was characterized in terms of “semantic road maps” by Doyle in 1961. The spatial metaphor is a powerful one in human information processing and lends itself well to modern distributed computing environments such as the Web. The Kohonen self-organizing feature map (SOFM, originally developed by T. Kohonen who works in Helsinki, Finland) method is an effective means toward this end of a visual information retrieval user interface. Our emphasis is on both classical approaches to data analysis and recent approaches that have proved their worth in practical and operational settings. Such algorithms start with data taken from real life, which is multi-faceted or multidimensional. Then an algorithm such as principal components analysis projects multidimensional input data (expressed as a set of vectors or points) into a more practical and observable low-dimensional space, which in practice is usually the best-fitting plane. PCA is implemented using linear algebra, where the eigenvectors (i.e. mathematically most important underlying facets of the multidimensional cloud of points under investigation) of the covariances or correlations (expressing a large set of pairwise relationships between the multidimensional data points) serve to define a new coordinate system. Correspondence analysis is similar to PCA. It is particularly suitable for data in the form of frequency counts or category memberships (e.g., frequencies of occurrence in a set of discrete categories); on the other hand PCA is suitable for continuously changing measurement values in our input data. Like PCA and correspondence analysis, multidimensional scaling also targets a best-fitting low-dimensional space (e.g., best planar or “cartographic” fit, rather like a street map). Multidimensional scaling takes all possible ranks as input and owes its origins to application domains where ranks are easier to define compared to more precise measurement. Examples of where ranks are more easily obtained include perceptual studies in psychology and aptitude studies in education.
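For the frequency-count case just mentioned, the following is a compact sketch of one standard textbook formulation of correspondence analysis: a singular value decomposition of standardized residuals, with rows and columns displayable in the same plane. The small table of counts is invented, and this is not necessarily the exact computation used in the systems discussed in this article.

import numpy as np

# Hypothetical cross-tabulation: frequency counts of 3 books over 4 themes.
N = np.array([
    [20.0,  5.0,  1.0,  4.0],
    [15.0,  2.0,  3.0, 10.0],
    [ 1.0, 25.0, 18.0,  6.0],
])

P = N / N.sum()                    # correspondence matrix (relative frequencies)
r = P.sum(axis=1)                  # row masses
c = P.sum(axis=0)                  # column masses

# Standardized residuals: departure of the table from row/column independence.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

# The SVD gives the principal axes; rows and columns are projected together.
U, sing, Vt = np.linalg.svd(S, full_matrices=False)
row_coords = (U * sing) / np.sqrt(r)[:, None]      # books in the display plane
col_coords = (Vt.T * sing) / np.sqrt(c)[:, None]   # themes in the same plane

print(row_coords[:, :2])
print(col_coords[:, :2])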
FIGURE 1. Visual interactive user interface to the journal Astronomy and Astrophysics based on eight thousand published articles. Relative intensity represents relative document density.
The Kohonen map achieves a similar result through iterative optimization, a different algorithm that has implications for practical deployment in that it is usually slower. Importantly, the Kohonen map output is highly constrained: Rather than a continuous plane, it is instead (nearly always) a regular grid. A regular grid output representation space offers an important advantage in that it easily provides a visual user interface. In a Web context it can be made interactive and responsive.
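The following minimal NumPy sketch illustrates the iterative optimization and regular-grid output just described. It is a toy self-organizing map, not the software behind the maps discussed below; the document collection (200 items described by 4 keyword weights) and the 6 x 6 grid are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input: 200 documents, each described by 4 keyword weights.
docs = rng.random((200, 4))

# A small regular grid of map nodes, each holding a 4-dimensional codebook vector.
grid_h, grid_w = 6, 6
codebook = rng.random((grid_h, grid_w, 4))
grid_y, grid_x = np.mgrid[0:grid_h, 0:grid_w]

def train(codebook, data, iterations=2000, lr0=0.5, radius0=3.0):
    """Iterative SOM optimization: pull the winning node and its grid
    neighbours toward each presented input vector."""
    for t in range(iterations):
        x = data[rng.integers(len(data))]
        # Winning node: the codebook vector closest to the input.
        dist = np.linalg.norm(codebook - x, axis=2)
        wy, wx = np.unravel_index(np.argmin(dist), dist.shape)
        # Learning rate and neighbourhood radius shrink over time.
        frac = t / iterations
        lr = lr0 * (1.0 - frac)
        radius = radius0 * (1.0 - frac) + 1e-3
        # Gaussian neighbourhood on the regular output grid.
        grid_dist2 = (grid_y - wy) ** 2 + (grid_x - wx) ** 2
        h = np.exp(-grid_dist2 / (2.0 * radius ** 2))
        codebook += lr * h[:, :, None] * (x - codebook)
    return codebook

codebook = train(codebook, docs)

# Each document is then assigned to its best-matching grid node; such
# cluster assignments are what a clickable image map can be built from.
winners = [np.unravel_index(np.argmin(np.linalg.norm(codebook - d, axis=2)),
                            (grid_h, grid_w)) for d in docs]
print(winners[:5])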
Kohonen Self-Organizing Feature Maps Two visual user interface frameworks are based on two types of input data. The first input data framework is based on document/index term dependencies. From such data a cartographic representation is periodically (i.e., in batch mode as opposed to real-time) updated and made available as a clickable “image map” contained in an
HTML page. The image file itself is created and links made active in it using a batch script on the server side (i.e., the user’s communicating partner, which is serving content to the user) and provided to the user as an inlined image (i.e., inserted in the HTML page served to the user). The second input data framework is based on linkage data. A multigraph (i.e., a graph with possibly more than one link between any pair of nodes or objects) of links connecting three or more types of objects is first created. Such objects include author name, publication title, and content-related keyword (e.g., astronomical object name). In this case a Java (the programming language often used for web software applications) application is used to construct the interactive cartographic representation in real time. (See Figure 1.) Figure 1 shows a visual and interactive user interface map using a Kohonen self-organizing feature map. The original of this map is in color, and it is shown here in monochrome. Relative color intensity—brightness—is related to density of document clusters located at regularly spaced nodes of the map, and some of these nodes/clusters are annotated. The map is deployed as a clickable image map, allowing activation on user command of CGI (Common Gateway Interface: protected web server area for the support of executable programs) programs accessing lists of documents and—through further links—in many cases the full documents. Such maps are maintained for thirteen thousand articles from the Astrophysical Journal, eight thousand from Astronomy and Astrophysics, and more than two thousand astronomical catalogues. In Figure 1 strongly represented concepts, spanning wide areas of observational astronomy, are shown in bold: ISM = interstellar matter, COSMO = cosmology, MHD = magneto-hydrodynamics, SUN, STARS, GALAXIES. MHD processes exist in the sun, which in turn is a star. Cosmology research is usually galaxy oriented. This divide between solar system research and cosmology research outside the solar system is fairly well represented in this map. Near COSMO we see terms such as gravitational lensing, redshift, dark matter, gamma ray bursts, and so on. This annotation of the map was informed by the keywords used to construct the map and was carried out
manually. Consideration was given to automated annotation but left for future work. Portable computing platforms allow for a new area of application of visual user interfaces, given the normal mode of interaction using a stylus. Not all Web browsers on PDA (personal digital assistant) platforms support image maps. One that does is the FireViewer application running on a Palm operating system (OS) platform.
Related Kohonen-Type Maps Hyperlink-rich data present an interesting case for taking a visualization tool further. The extensible markup language (XML) format is more appropriate than the more mundane hypertext markup language (HTML). The latter is limited in document linkage and supports little description detail. HTML most notably lacks any special support for document structure. Essentially such a visualization tool is a Web browser with specialized functionality. The prototype of one such tool was developed for XML data. The data related to astronomers, astronomical object names, and article titles. They were open to the possibility of handling other objects (images, summary tabulations, etc.). Through weighting, the various types of links could be given priorities. An algorithm was developed to map the nodes (objects) to a regular grid of cells, which were clickable and provided access to the data represented by the cluster. Given the increasingly central role of XML in access to Web information and data, the importance of such clustering for data organization and for knowledge discovery can be underscored. Such an interactive visual user interface works in the following way. We consider a set of documents. The units clustered are authors, titles, and astronomical objects. The map is arranged to give a central position to a selected unit (e.g., a person— an astronomer). The annotations of paper titles or of astronomical objects shown in the regular grid are representative or important ones. Clicking on a location provides considerably greater detail in an additional panel relative to what is presented in a “global” visual view in the clickable visual interface.
Summarizing the types of information space display that we have covered, we can distinguish between the following two types of input for maps of information spaces. Both are potentially of relevance for data with varying degrees of structure, including data originating in databases and in HTML Web files.
■ Keyword based: The bibliographic maps exemplified in Figure 1 are of this type. The keywords or index terms provide the dimensions of a geometric space in which our objects are located.
■ Sparse graph: This is likely to be the case whenever XML’s richer link functionality, compared to the relatively limited forward links supported by HTML, is used as the basis for associations between our objects.
If an interdependency graph containing a great amount of data is available, a convenient way to process such data is to project the objects, using these interdependencies, into a geometric space. We can do this using principal coordinates analysis, which is also referred to as “classical multidimensional scaling” and “metric scaling.”
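A brief sketch of principal coordinates analysis (classical multidimensional scaling) as just described: double-center the squared dissimilarities and take the leading eigenvectors. The small symmetric dissimilarity matrix below is invented for illustration, standing in for distances derived from link interdependencies.

import numpy as np

# Hypothetical dissimilarities among five objects (e.g., derived from a multigraph).
D = np.array([
    [0.0, 1.0, 2.0, 3.0, 4.0],
    [1.0, 0.0, 1.5, 2.5, 3.5],
    [2.0, 1.5, 0.0, 1.0, 2.0],
    [3.0, 2.5, 1.0, 0.0, 1.2],
    [4.0, 3.5, 2.0, 1.2, 0.0],
])

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n        # centering matrix

# Double-center the squared dissimilarities to recover inner products.
B = -0.5 * J @ (D ** 2) @ J

# Principal coordinates: leading eigenvectors scaled by root eigenvalues.
eigenvalues, eigenvectors = np.linalg.eigh(B)
order = np.argsort(eigenvalues)[::-1][:2]               # best planar fit
coords = eigenvectors[:, order] * np.sqrt(np.maximum(eigenvalues[order], 0.0))

print(coords)   # a "cartographic" two-dimensional layout of the five objects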
Ontologies: Support for Querying We have noted that the mathematical notions of geometric spaces can be found behind many aspects of how we think about information spaces and behind many approaches to displaying and visualizing information spaces. However, having information spaces cooperate and collaborate such that a user can draw benefit from more than one information space at one time leads us to requirements of a different sort. For interoperability of information systems of any type, we need to consider a common language to support formal querying or more informal searching for information resources. A term that has come to be much in vogue of late is ontology—the terminology underpinning a common language. An ontology lists the terminology used in a particular area (e.g. a particular field of business, or engineering) and some of the relationships between these terms. Its aim is to help user searching, since the salient information aspects of an area are essentially summarized by such an ontology.
Ontology describes a terminology hierarchy—a helpful basis for supporting querying. Such a concept hierarchy defines a sequence of mappings from a set of low-level concepts to higher-level, more general concepts. These concepts may be defined within two structures. A hierarchical structure is such that a so-called child node cannot have two parents, but a parent node can have more than one child node. A lattice structure is such that a child node can have two parents. A concept hierarchy can be explicitly generated by expert users before the data are queried and will be static, or it can be generated automatically, and the user may reform the hierarchy when needed. The concept hierarchy can be based on hierarchical structure and generated by, for example, economists. This will be illustrated by a case study using Eurostat (the Statistical Office of the European Union, the prime business and economic statistical data agency at European level) databases and in particular documents from the Eurostat Economic Bulletins.
This Eurostat database supports three concept hierarchies: branches, themes, and countries. The total number of branch concepts is 423, the total number of theme concepts is 30, and the total number of country concepts is 23. An example extract of the branch concept hierarchy is shown as follows.
Total Industry
■ Total industry (excluding construction)
■ Mining, quarrying and manufacturing
■ Intermediate goods industry
■ Energy
■ Intermediate goods industry, excluding industry
■ Capital goods industry
■ Consumer goods industry
■ Durable consumer goods industry
■ Non-durable consumer goods industry
■ Mining and quarrying
■ Mining and quarrying of energy producing materials
■ Mining and quarrying except energy producing materials
■ Manufacturing
■ Electricity, gas and water supply
■ Construction
FIGURE 2. The distribution of topics found as a result of a query discussed in the article and based on an ontology for this domain of economic information. Topic labels in the figure include Key National Accounts and Germany; Labor Costs and International; Documents Related to Climate; Germany and R&D Investment; Editorial; German Economy; Economy and Asia; Employment and Great Britain and France; Labor Cost and East and West Germany; and Micro-Macro Economy and East Germany. Note: GB = Great Britain
A query is formed from the cross-product of the entries of the concept hierarchy categories. For example:
Branch: Total Industry, Energy
Theme: Production
Country: Germany, United Kingdom
In this case the result query will be:
Total Industry and Production and Germany
Total Industry and Production and United Kingdom
Energy and Production and Germany
Energy and Production and United Kingdom
We can seek documents having at least one of the preceding combinations. (See Figure 2.) Figure 2 shows in schematic form the type of result that we may find. A possibly important consideration in such work is that the information resulting from a user query be built on the fly into an interactive graphical user interface (GUI).
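The cross-product construction of the query can be expressed in a few lines. The following Python sketch simply reproduces the example above using the standard library’s itertools.product; the category names are taken from the text.

from itertools import product

# The concept-hierarchy selections from the example above.
branches = ["Total Industry", "Energy"]
themes = ["Production"]
countries = ["Germany", "United Kingdom"]

# The result query is the cross-product of the selected entries.
queries = [" and ".join(combo) for combo in product(branches, themes, countries)]
for q in queries:
    print(q)
# Total Industry and Production and Germany
# Total Industry and Production and United Kingdom
# Energy and Production and Germany
# Energy and Production and United Kingdom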
Outlook Summarizing information is necessary. A visual summary is often a natural way to summarize. When we add the possibilities for human-computer interaction, visual user interfaces become a toolset of significance. The ergonomics of interaction based on visual user interfaces is still under investigation. Although human understanding is greatly aided by maps and drawings of all sorts, we have yet to find the most appropriate visual displays for use in visual user interfaces. Beyond the uses of information space visualization described here, we can consider other modes of interacting with information spaces—through gesture or voice, for example. Operations that can be controlled by eye-gaze dwell time (manifesting continuing subject interest through the approximate point of gaze remaining fairly steady) include de-
compressing of compressed medical or other imagery and zooming to locations of interest; Web browser scrolling; accessing information content such as one or a number of video streams based on user interest as manifested by his or her eye-gaze dwell time; and manipulating objects in two-dimensional and three-dimensional spaces. Such interaction with computer displays is certainly feasible, as we have shown. However, issues related to user acceptability and ergonomics have yet to be fully investigated. Fionn Murtagh See also Data Visualization; Information Retrieval; Ontology FURTHER READING Benzécri, J.-P. (1992). Correspondence analysis handbook. Basel, Switzerland: Marcel Dekker. Doyle, L. B. (1961). Semantic road maps for literature searchers. Journal of the ACM, 8, 553–578. Farid, M., Murtagh, F., & Starck, J. L. (2002), Computer display control and interaction using eye-gaze. Journal of the Society for Information Display, 10, 289–293. Guillaume, D., & Murtagh, F. (2000). Clustering of XML documents. Computer Physics Communications, 127, 215–227. Hoffman, P. E., & Grinstein, G. G. (2002). A survey of visualizations for high-dimensional data mining. In U. Fayyad, G. G. Grinstein, & A. Wierse (Eds.), Information visualization in data mining and knowledge discovery (pp. 47–82). San Francisco: Morgan Kaufmann. Kohonen, T. (2001). Self-organizing maps (3rd ed.). New York: SpringerVerlag. Murtagh, F., & Heck, A. (1987). Multivariate data analysis. Dordrecht, Netherlands: Kluwer. Murtagh, F., Taskaya, T., Contreras, P., Mothe, J., & Englmeier, K. (2003). Interactive visual user interfaces: A survey. Artificial Intelligence Review, 19, 263–283. Oja, E., & Kaski, S. (1999). Kohonen maps. Amsterdam, Elsevier. Poinçot, P., Murtagh, F., & Lesteven, S. (2000). Maps of information spaces: Assessments from astronomy. Journal of the American Society for Information Science, 51, 1081–1089. Shneiderman, B. (2002). Leonardo’s laptop: Human needs and the new computing technologies. Cambridge, MA: MIT Press. Torgerson, W. S. (1958). Theory and methods of scaling. New York: Wiley. Wise, J. A. (1999). The ecological approach to text visualization. Journal of the American Society for Information Science, 50, 1224–1233.
INFORMATION THEORY Information theory generally refers, especially in the United States, to a theory of communication originated by mathematician Claude E. Shannon at Bell Telephone Laboratories in 1948 and developed further by him and others. The highly abstract, mathematical theory has been influential, and sometimes controversial, in digital communications, mathematics, physics, molecular biology, and the social and behavioral sciences. More broadly, information theory refers to both Shannon’s approach and other probabilistic methods of analyzing communication systems. Other theories of information are common in statistics and information science.
Shannon’s Theory and Its Development Shannon’s theory, comprehensively established in “A Mathematical Theory of Communication” (Shannon 1993), uses probability theory and the concept of entropy to determine how best to encode messages in order to transmit information efficiently and reliably in the presence of noise. The theory’s general model (Figure 1) applies to any communication system, whether it involves machines, humans, other living things, or any combination of these. An information source selects a message from a set of possible messages. The transmitter encodes that message and converts it
into a signal, which is sent along a channel affected by a noise source. The received signal (transmitter signal plus noise) enters the receiver, which decodes it and converts it into a message for the destination. In a local area Ethernet, for example, a user (information source) types in an e-mail message to send to a second user (destination). The first user’s computer system (transmitter) encodes the message and converts it into an electrical signal that is sent over the computer network (channel), where electrical noise (noise source) can affect it. The second user’s computer system (receiver) picks up the signal, decodes it, and displays a message— hopefully what was sent. The hallmark of Shannon’s theory is a quantitative measure of information, which he adapted from the work of Bell Labs researcher Ralph Hartley. In 1928 Hartley proposed a simple logarithmic measure of information transmitted in a digital system that did not consider noise or the probability of selecting symbols. Drawing on his own work in cryptography during World War II, Shannon went beyond Hartley to treat the statistical aspects of messages, noise, and coding problems comprehensively. He related the amount of information generated by a source, assumed to be statistically regular, to the uncertainty and choice involved in selecting messages from an ensemble of messages. The greater the uncertainty and choice, the more information the source produces. If there is only one possible message, there is no information; maximum information exists when the choice of messages (or symbols) is random.
FIGURE 1. Schematic diagram of a general communication system: an information source produces a message, which the transmitter encodes into a signal; the signal passes along a channel affected by a noise source; and the receiver decodes the received signal into a message for the destination.
Information in this sense does not measure data or knowledge; the meaning of messages is irrelevant to the theory. Throughout the theory, uncertainty and choice are expressed by equations of the form H = –∑ p log2 p [for the digital case] H = –∫ p log p [for the analog case] where p is the probability of occurrence of an event—for example, a symbol. The expressions are mathematically analogous to physical entropy in statistical mechanics. In the digital case, the entropy of a source gives the average number of bits per symbol or bits per second required to encode the information produced by the source. Channel capacity, defined as the maximum value of source entropy minus the uncertainty of what was sent, gives the maximum rate of transmission of information in bits per second. The paper seems to mark the first appearance of the term “bit” (a contraction of “binary digit”) in print. A prominent feature of the theory is the coding theorem for a noisy digital channel. This states the surprising result that if the entropy of a source is less than channel capacity, a code can be devised to transmit information over such a channel with an arbitrarily small error. The tradeoffs are complex codes and long delays in the transmitter and receiver. The upper bound of transmission in an important analog case is C = W log (1 + P/N) where C is channel capacity, W is bandwidth, P is average transmitter power, and N is the average power of white thermal noise. These and other theorems establish fundamental limits on data compression (encoding) and transmitting information in communication systems. A simple coding example for a noiseless digital channel, drawn from the paper, illustrates Shannon’s approach. Consider a language with just four symbols: A, B, C, D. Symbol A occurs ¹⁄₂ of the time, B ¹⁄₄ of the time, C and D ¹⁄₈ of the time each. A direct method to encode these symbols uses 2 bits per symbol, for example, as 00, 01, 10, and 11, respectively. Alternatively, one can use the proba-
bilities of when these symbols occur to calculate the entropy of the source as ⁷⁄₄ bits per symbol, and then devise a code to match that entropy: 0, 10, 110, 111. The new code requires only ⁷⁄₄ bits per symbol to encode messages in this language, versus 2 bits per symbol, a compression ratio of ⁷⁄₈. No method uses fewer bits per symbol to encode this source. Despite many claims to the contrary, Shannon did not avoid the term “information theory.” He used it, for example, in the titles of early talks on the subject, in the original paper and other early papers, in an encyclopedia article, and in an editorial (Shannon 1993). Much research has strengthened and extended Shannon’s theory. In the 1950s, mathematicians reset the theory’s foundations by rigorizing proofs of the main theorems and extending them. The theory has been further developed in such areas as error-correcting codes, rate distortion theory (lossy data compression), multiuser channels (network information theory), and zero-error channel capacity (zero-error information theory).
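The four-symbol coding example can be checked in a few lines of Python, confirming the 7/4 bits per symbol and the 7/8 compression ratio. The channel-capacity function at the end evaluates the analog formula given above using base-2 logarithms, so the result is in bits per second; the bandwidth and power figures in the final call are invented for illustration.

from math import log2

# The four-symbol source from the example: A, B, C, D.
probabilities = {"A": 1/2, "B": 1/4, "C": 1/8, "D": 1/8}
code = {"A": "0", "B": "10", "C": "110", "D": "111"}

# Source entropy: H = -sum(p * log2(p)), in bits per symbol.
entropy = -sum(p * log2(p) for p in probabilities.values())

# Average length of the variable-length code, weighted by symbol probability.
average_length = sum(probabilities[s] * len(code[s]) for s in code)

print(entropy)            # 1.75 bits per symbol (7/4)
print(average_length)     # 1.75 bits per symbol: the code matches the entropy
print(average_length / 2) # compression ratio of 7/8 versus the direct 2-bit code

# Capacity of the band-limited analog channel with white thermal noise:
# C = W * log2(1 + P/N), in bits per second.
def channel_capacity(bandwidth_hz, signal_power, noise_power):
    return bandwidth_hz * log2(1 + signal_power / noise_power)

print(channel_capacity(3000.0, 10.0, 1.0))   # roughly 10,378 bits per second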
Other Theories of Information When Shannon published his path-breaking paper in 1948, it vied with three other mathematical theories of information. The most established was the theory of estimation of British statistician and geneticist Ronald Fisher. In the 1920s and 1930s, Fisher defined the amount of information to be expected, with regard to an unknown statistical parameter, from a given number of observations in an experiment. The measure was mathematically similar to, but not equivalent to, that for entropy. In 1946 British physicist Denis Gabor defined a “quantum of information,” expressed in “logons,” in terms of the product of uncertainties of time and frequency of an electrical signal. Gabor used the concept to analyze waveforms in communication systems. The third theory was proposed by American mathematician Norbert Wiener in his well-known book Cybernetics published in 1948. Wiener independently derived a measure of information similar to Shannon’s, except that he defined it as negative rather than positive entropy. It thus measured order rather than disorder
(uncertainty). Contemporaries often referred to the entropy concept of information as the “ShannonWiener measure” or the “Wiener-Shannon formula.” At the first London Symposium on Information Theory, held in 1950, British physicist Donald MacKay brought all three measures into a unified Information Theory. MacKay included Fisher’s and Gabor’s work under the new category of Scientific Information Theory, the realm of the physicist, and Shannon’s and Wiener’s work under Communication Theory, the realm of the engineer. However, MacKay’s efforts did not resolve the different meanings of “information” or “information theory.” Shannon’s followers, especially those in the United States, employed “information theory” exclusively to describe his approach. Yet the Professional Group on Information Theory in the Institute of Radio Engineers, founded in 1951 and a forerunner of the present-day Information Theory Society of the Institute of Electrical and Electronics Engineers (IEEE), considered both Shannon’s theory and Wiener’s theory of prediction and filtering to be in its purview. Shannon himself included the latter field in “Information Theory,” an article he wrote for the Encyclopedia Britannica in the mid-1950s. British electrical engineer Colin Cherry observed in 1957 that the research of physicists such as MacKay, Gabor, and Leon Brillouin on scientific method,“is referred to, at least in Britain, as information theory, a term which is unfortunately used elsewhere [that is, in the United States] synonymously with communication theory. Again, the French sometimes refer to communication theory as cybernetics. It is all very confusing” (Cherry 1957, 216). Subsequently the mathematical and electrical engineering communities in the United States viewed these interpretations of information as complementary concepts, not as competitors, a position that holds today. Gabor’s measure is now prominent in electrical circuit theory, Fisher’s in classical statistics. Wiener’s work on prediction and filtering defines the area of statistical communication theory. Although it was a “core discipline of information theory” for many years, Wiener’s theory moved “outside the main stream of information theory” in the 1960s (Viterbi 1973, 257). Yet signal detection and estimation still
form a subfield of research in the IEEE’s Information Theory Society. “Shannon Theory,” as it has been called since the early 1970s, has remained at the center of the discipline. The IEEE honors Shannon as the founder of Information Theory, and many textbooks view his approach as nearly synonymous with the topic. Following a suggestion made in 1949 by Warren Weaver, an American mathematician who directed the natural sciences division of the Rockefeller Foundation, numerous researchers have tried to make Shannon’s theory the basis for a semantic theory of information. These range from the highly mathematical theory of logicians Rudolf Carnap and Yehoshua Bar-Hillel in the early 1950s to numerous quantitative and non-quantitative attempts by workers in the interdisciplinary field of Information Science. None have gained the scientific status of the non-semantic theories.
Influence of Information Theory Information theory became something of a fad in scientific circles in the 1950s when numerous researchers enthusiastically applied the “new science” to a variety of fields. These included physics, artificial intelligence, behavioral and molecular biology, physiology, experimental and cognitive psychology, linguistics, economics, organizational sociology, and library and information science. Ironically, communication engineers were skeptical until the 1960s, when Shannon’s theory was used to encode messages in deep space communications. Although most applications, adaptations, and modifications outside of mathematics and engineering proved to be unfruitful, the language of information theory became ingrained in such fields as molecular biology (gene as carrier of information), economics (markets as information processors), and artificial intelligence (semantic information processing). The biological and behavioral sciences describe the operation of all forms of life, from the DNA molecule to society, in terms of information transfer, storage, and processing. Technical applications have proven themselves in physiology (for instance, the informational capacity of sense organs) and experimental psychology ( for instance, the
relation between the amount of information in a stimulus to the response time to the stimulus). A recent textbook notes that “information theory intersects physics (statistical mechanics), mathematics (probability theory), electrical engineering (communication theory) and computer science (algorithmic complexity)” (Cover and Thomas 1991, 1). Applications of Shannon’s theory to information technology increased dramatically following the invention of the microprocessor in the 1970s and increasing levels of semiconductor integration. Complex error-correcting codes and data compression schemes are pervasive in digital communications. They help make possible such technologies as hard-disk drives, high-speed memories, cell phones, compact discs, DVDs, digital television, audio and video compression, and video conferencing on the Internet. Perhaps the most pervasive influence of information theory has been indirect. Social theorists from Marshall McLuhan in the 1960s to Manuel Castells in the 1990s, drawing on the popularization and wide application of information theory, have helped create a public discourse of information that proclaims the dawning of an information age, economy, and society. Ronald Kline See also Information Overload; Theory FURTHER READING Aspray, W. (1985). The scientific conceptualization of information: A survey. IEEE Annals of the History of Computing, 7, 117–140. Attneave, F. (1959). Applications of information theory to psychology: A summary of basic concepts, methods, and results. New York: Henry Holt. Capurro, R., & Hjorland, B. (2003). The concept of information. Annual Review of Information Science and Technology, 37, 343–411. Cherry, C. (1957). On human communication: A review, a survey, and a criticism. Cambridge: MIT Press. Cover, T. M., & Thomas, J. A. (1991). Elements of information theory. New York: Wiley. Dahling, R. L. (1962). Shannon’s Information Theory: The spread of an idea. In Stanford University, Institute for Communication Research, Studies of Innovation and of Communication to the Public (pp. 117–139). Palo Alto: Stanford University Press. Edwards, P. N (1996). Closed world: Computers and the politics of discourse in Cold War America. Cambridge: MIT Press.
Fisher, R. A. (1935). The design of experiments. London: Oliver & Boyd. Gabor, D. (1946). Theory of communication. Journal of the Institution of Electrical Engineers, Pt. III, 93, 429–459. Hartley, R. V. L. (1928). Transmission of information. Bell System Technical Journal, 7, 535–563. Kay, L. (2000). Who wrote the book of life? A history of the genetic code. Palo Alto: Stanford University Press. Machlup, F., & Mansfield, U. (Eds.). (1983). The study of information: Interdisciplinary messages. New York: Wiley. MacKay, D. M. (1969). Information, mechanism, and meaning. Cambridge: MIT Press. Shannon, C. E. (1993). A mathematical theory of communication. In N. J. A. Sloane & A. D. Wyner, (Eds.), Claude Elwood Shannon, collected papers (pp. 5–83). New York: IEEE Press. (Original work published 1948) Slepian, D. (1973). Information theory in the fifties. IEEE Transactions on Information Theory, 19(2), 145–148. Slepian, D. (Ed.). (1973). Key papers in the development of Information Theory. New York: IEEE Press. Verdú, S. (Ed.). (1998). Information Theory: 1948–1998. IEEE Transactions on Information Theory, 44(6), 2042–2272. Viterbi, A. J. (1973). Information theory in the Sixties. IEEE Transactions on Information Theory, 19(3), 257–262. Weaver, W. (1949). Recent contributions to the mathematical theory of communication. In C. E. Shannon & W. Weaver, The mathematical theory of communication (pp. 93–117). Urbana: University of Illinois Press. Webster, F. (1995). Theories of the information society. London: Routledge. Wiener, N. (1948). Cybernetics: Or control and communication in the animal and the machine. Cambridge and New York: Technology Press and Wiley.
INSTANT MESSAGING See Chatrooms; Collaboratories; Cybercommunities; E-mail; Groupware; Internet in Everyday Life; Social Psychology and HCI
INSTRUCTION MANUALS In the world of computing, an instruction manual is a book that explains how to use a software program or a hardware device. If we accept a relatively broad definition of manual, we can say that manuals dealing with such technologies as architecture, mining, and agriculture have been written for many centuries. Manuals that approximate contemporary manuals in their design generally date from the nine-
teenth century. The first author of computer manuals was J. D. Chapline, who worked on the Binac and Univac I computers from 1947 to 1955. Chapline borrowed many design ideas from military manuals and, later, from automotive manuals. Interest in computer documentation increased with the growth of the computer industry. This interest intensified considerably when personal computers (“microcomputers”) burst onto the scene in the early and mid 1980s. Prior to personal computers, most computer users were either computer programmers and other computer specialists or scientists and engineers who were prepared to master a new and daunting technology. Now, however, a much larger and more diverse audience, including business people, graphic designers, students, and hobbyists, was using computers and reading documentation. Computer companies, heeding widespread complaints about confusing and tedious manuals, began to regard clear and engaging documentation as an important aspect of their products and a competitive advantage in the marketplace. Industry leaders such as IBM, Apple, and Microsoft—along with universities and other organizations—developed usability testing programs and undertook research studies. These and other computer companies issued insightfully designed, expensively produced manuals with refined page layout, ample illustrations, and color printing. Similarly, corporations in a wide range of industries improved their internal documentation to achieve greater efficiency, although budgets for internal documentation have traditionally been lower than for product documentation. Commercial publishers discovered that many people would pay for a better manual than the one shipped with the product or for a manual that met specific needs. A new publishing business in third-party computer books emerged. Software companies assisted third-party publishers, recognizing that customers value software products with strong third-party book support. During this period most software products included some kind of online (on-screen) help system. Online help, however, was often crudely designed. Furthermore, the appearance and operation of help systems varied greatly from one software product to another, and the whole idea of online help was alien
to computer users. Little by little, however, help systems began to compete successfully with print documentation.
Kinds of Computer Manuals It is important to understand the various kinds of computer instruction manuals and how each kind is designed. The largest class of manuals comprises those that document software applications. These manuals are usually divided into three categories: tutorials, user’s guides, and reference manuals. These three genres of instruction manuals are meant to work together to support the user’s complete cycle of product use from installation and initial learning, through ongoing use, to returning to the product or some product feature after time has passed. Other significant classes of manuals are programming language manuals and hardware manuals. Tutorials Tutorials are intended to introduce users to the product. They are slow-paced and provide detailed explanations and instructions along with numerous screen captures (images of how the screen will look) that show how the system will respond to the user’s actions. Tutorials are organized as a series of lessons in which the designer carefully chooses the most important features of the product and explains the features in the sequence that will result in the most effective learning. Rarely does a tutorial cover everything the product can do. The lessons are generally built around extended examples (often called scenarios). For example, the tutorial for a database management system might guide the user through building a database to keep track of the wine in a wine cellar. Tutorials are written in a friendly, conversational manner. In many tutorials, the user is encouraged to tackle a new lesson and is congratulated at the end of the lesson. Having completed the tutorial, the user will presumably graduate to the user’s guide and will be able to learn whatever additional features are required to accomplish the user’s own tasks. Print tutorials have drawbacks for both computer companies and users. They are expensive to write, and because they have a high page count, they are
expensive to print and ship. While some users rely on tutorials, others do not have the patience for slow-paced documentation. Another significant problem (which can be avoided in online tutorials) is that a single mistake can put the user out of sync with the tutorial. Still another problem is the relevance of the scenario to the user’s work (transfer of learning). A tutorial dealing with a wine cellar database may not be very useful to someone intending to build a database to log laboratory data. Finally, while a well-written tutorial can lead the user successfully through tasks, it is not clear how much of this knowledge is retained. User’s Guides When most people think about a computer manual, they are thinking of a user’s guide. This is the central piece of the print documentation set, the piece that will be used most of the time. If only one manual is provided, it is very likely a user’s guide. The user’s guide consists of procedures (instructions) for carrying out all the tasks (or at least all the mainstream tasks) that can be performed with the product. Technical communicators carefully organize user’s guides for maximum usefulness. Broadly speaking, the sequence of the chapters corresponds to the sequence in which users are likely to carry out tasks. So, for instance, the chapter on creating a new document will precede the chapter on printing. Highly specialized chapters come toward the end. Within a chapter, basic tasks precede specialized tasks. As much as possible, procedures are written as independent modules that the user can consult in any order. The writing style is straightforward, without scenarios or motivational comments. Examples are brief and appear within an individual procedure. Procedures generally consist of a title, a conceptual element, and numbered steps. The title identifies the procedure. The conceptual element (usually a paragraph or two) makes clear the purpose of the procedure and, if necessary, provides such information as the prerequisites that must be met before the procedure can be carried out. The steps are the actions that users will take and, at times, descriptions of how the system will respond to these actions. If the purpose is clear from the title, there may be no reason for the conceptual element. A
group of related procedures may be introduced by overview paragraphs, often located at the beginning of a chapter. Writing user’s guides poses many challenges. First, while almost everyone in the world of computer documentation embraces the idea of carefully studying the product’s users and writing user-centered, task-oriented documentation, these are not easy things to accomplish. There are major challenges in learning how the users of a product understand their work, what background knowledge and mental models they bring to bear, and the cognitive processes they employ when using the product, particularly when they encounter difficulties. Among the many thorny design issues is how much detail to include in procedures. Too much information is tedious; insufficient information leaves users puzzled or unable to carry out tasks. Hitting the proper balance is difficult, especially for a product that is used by a wide range of individuals. These kinds of problems apply to online as well as print documentation. Reference Manuals Whereas user’s guides are organized by tasks, reference manuals are organized—often alphabetically— by the names of commands. The manual describes each command’s purpose and options along with the location of the command in the application’s menu structure and the keyboard shortcut for executing the command. Reference documentation assumes a sophisticated user who understands the task he or she wishes to carry out and who can identify the commands that are necessary. Often people consult reference manuals for a review of commands they have used in the past.
Programming Language Documentation Manuals for documenting computer languages take the form of tutorials and references. Tutorials explain the basic concepts of the programming language and guide the programmer through the creation of simple programs. Code samples and
explanations of likely errors are included in these tutorials. References are the heart of programmer documentation. They explain the statements, variables, operators, and other constructs of the programming language. Syntax diagrams and code examples are generally included.
Hardware Manuals Hardware manuals vary greatly because computer hardware encompasses hand-held devices, standard desktop computers, mainframes, computer components, and more. When hardware devices include built-in software, a display, and a keypad, the hardware documentation may resemble a software manual. Hardware manuals also explain, with illustrations as well as text, the procedures for setting up, maintaining, and repairing the device. Users may well be unscrewing plates, connecting plugs, installing circuit boards, and measuring voltages. Hardware documentation is much less likely to be delivered online, if only because online presentation is generally unavailable when the user is assembling or starting the device or when the device is malfunctioning.
The Rise of Online Help Starting in the early 1990s, computer companies began reducing the size of the print documentation set and placing greater emphasis on the online help system and other forms of on-screen documentation. Now products are likely to ship with only a user’s guide or a brief guide to getting started—or they may ship with no print at all. If there is a tutorial, it is probably an online tutorial, very possibly delivered over the Web. This change was driven in large part by the need to control costs. Another reason is the desire to streamline the product development cycle. Once a manual is finished, there is often a six-week lag while the manual is at the printer. Not only can’t the product be shipped while the manual is being printed, but if last-minute corrections to the product code change the product’s appearance and behavior (for instance, if a buggy feature is modified or removed), the documentation cannot be updated to reflect these changes.
It is also true that online help has greatly matured, and a strong argument can be made that the needs of users are better served by online help. While print manuals are more legible than on-screen documentation, online help provides faster access to information. Clicking links is faster than turning pages, and when help content is integrated with the application (context sensitivity), users can instantly display information pertinent to the portion of the interface they are working with. Other advantages of online help include the ability to deliver animation, audio, and video, and to allow the user to directly access more detailed content stored on the vendor’s support website. Even so, many users remain loyal to print manuals.
Acrobat as a Compromise Solution Adobe Acrobat enables publishers of paper documents to distribute these documents as computer files with confidence that the formatting will be preserved regardless of the recipient’s computer system and printer. Some software vendors, therefore, prepare handsomely formatted manuals and distribute them free of charge (often from a tech support website) in Acrobat (PDF) format. Users can have printed manuals if they accept the trouble and cost of the printing. Furthermore, with such features as a clickable table of contents, text search, thumbnail images of pages, and the ability to add notes and highlights, Acrobat makes it possible to create manuals that can be used effectively on the computer screen. On-screen PDF manuals cannot be integrated with the application, but users do get many of the features of a help system, along with book-like page layout. Computer companies now have various ways to provide product documentation. They will make choices based on the nature of their products, the preferences of their customers, and cost factors. Instruction manuals, both comprehensive and scaled down, will share the stage with online help systems, and these instruction manuals will be shipped with products, printed by users from PDF files, utilized as on-screen PDF manuals, and purchased in the form of third-party books. If we continue to learn more about human-computer interaction and,
in particular, how people use computer documentation, and if computer companies and third-party publishers strive to create the best documentation they can, computer use will become a more productive and enjoyable experience. David K. Farkas See also Adaptive Help Systems FURTHER READING Agricola, G. (1950). De re metallica (H. C. Hoover & L. H. Hoover, Trans.). New York: Dover. Barker, T. T. (2003). Writing software documentation: A task-oriented approach (2nd ed.). New York: Longman. Brockmann, R. J. (1998). From millwrights to shipwrights to the twentyfirst century: Explorations in the history of technical communication in the United States. Creskill, NJ: Hampton Press. Carroll, J. M. (1990). The Nurnberg funnel: Designing minimalist instruction for practical computer skills. Cambridge: MIT Press. Carroll, J M. (Ed.). (1998). Minimalism beyond the Nurnberg funnel. Cambridge: MIT Press. Farkas, D. K. (1999). The logical and rhetorical construction of procedural discourse. Technical Communication, 46(1), 42–54. Hackos, J. T., (1994). Managing your documentation projects. New York: Wiley. Haramundanis, K. (1997). The art of technical documentation (2nd ed.). Woburn, MA: Butterworth-Heinemann. Haydon, L. M. (1995). The complete guide to writing and producing technical manuals. New York: Wiley. Horton, W. (1993). Let’s do away with manuals before they do away with us. Technical Communication, 40(1), 26–34. Horton, W. (1994). Designing and writing online documentation: Hypermedia for self-supporting products (2nd ed.). New York: Wiley. Jordan, S. (Ed.). (1971). Handbook of technical writing practices. New York: Wiley-Interscience. Microsoft Corporation. (2003). Manual of style for technical publications. Redmond, WA: Microsoft Press. Price, J., & Korman, H. (1993). How to communicate technical information: A handbook of software and hardware documentation. Redwood City, CA: Benjamin/Cummings. Schriver, K. A. (1997). Dynamics of document design. Creating texts for readers. New York: Wiley. Simson, H., & Casey, S. A. (1988). Developing effective user documentation: A human-factors approach. New York: McGraw-Hill. Steehouder, M., Jansen, C., van der Poort, P., & Verheijen, R. (Eds.). (1994). Quality of technical documentation. Amsterdam: Rodopi. Ummelen, N. (1996). Procedural and declarative information in software manuals: Effects on information use, task performance, and knowledge. Amsterdam: Rodopi. Vitruvius, P. (1950). The ten books on architecture (M. H. Morgan, Trans.). New York: Dover Publications.
INTERNET—WORLDWIDE DIFFUSION The Internet grew rapidly in the 1990s. Although we are still far from universal Internet connectivity, a large percentage in the developed world had access to the Internet by the end of 2003, as did significant percentages of people in developing countries. Among the factors affecting the rate of diffusion of Internet around the globe are economics, government policies, and the original dominance of the English language on the Web.
The Growth of the Internet Although there are no reliable data on the size of the world’s online population, estimates show that use of the Internet has diffused rapidly. The number of Internet users around the globe has surged from an estimated 4.4 million in 1991 to 10 million in 1993, 40 million in 1995, 117 million in 1997, 277 million in 1999, 502 million in 2001, to more than 600 million in 2002. Thus, the global penetration rate of the Internet has increased from less than a tenth of a percent in 1991 to 2 percent in 1997, 7 percent in 2000, to over 10 percent of the total world population in 2002. Projections for 2004 place the number of global Internet users between 700 million and 945 million. Estimates for the Internet’s global penetration rate in 2004 are between 11 percent and 15 percent. Despite rapid worldwide diffusion of the Internet, a disproportionate number of users are concentrated in more developed countries, especially the United States. In 2002, 169 million Americans were online, accounting for about 60 percent of the country’s total population and 29 percent of the world’s Internet population. There were 172 million users in Europe (28 percent of the world’s Internet population, 182 million in Southeast and East Asia, including 145 million in China, Japan, and Korea (23 percent). South America was home to 29 million users (5 percent), while there were 11 million in Oceania (1.9 percent), and 10 million in Africa (1.6 percent).
Countries with the largest population of online users are the United States with an estimated 159 million, China with 59 million, Japan with 57 million, Germany with 34 million, South Korea with 26 million, and the United Kingdom with 25 million Internet users. By proportion of population, Nordic countries top the list, with Iceland, Sweden, Denmark, Finland, and Norway in the top ten. Over half to three-quarters of these countries’ populations are online. Other countries leading in the rate of Internet penetration include the United States, Canada, the Netherlands, South Korea, and Singapore. To be sure, these data are rough approximations. Getting a perspective on the Internet is like tracking a perpetually moving and changing target. Meaningful comparisons and knowledge accumulation are hindered by the lack of comparability of data from different countries. This has led researchers studying the global diffusion of the Internet to rely on statistics gathered country-by-country that often employ different measurements. The definition of the online population often differs between studies: Some focus on adult users, while others include children and teenagers. There is no standard definition of who is an Internet user: Some studies embrace everyone who has ever accessed the Internet as a user, while others count only those who use the Internet at least once a week. Some studies use households, not individuals, as the unit of analysis. This also masks how individuals within a household use the Internet. To increase reliability and comparability, this article relies on data from national representative surveys conducted by government agencies, scholarly researchers, and policy reports issued by international organizations.
Digital Divides Internet users are not a random sample of a country’s population: They differ from nonusers in socioeconomic status (education, occupation, income, wealth), age, gender, race or ethnicity, stage in the life-course, urban-rural location, and language. This digital divide has been present since the onset of computerization and the Internet, when most users were North American, young, well-educated, white
men. Yet the digital divide is not a binary yes-no question of whether basic physical access to the Internet is available. Access does not necessarily equal use. What matters is the extent to which people regularly use a computer and the Internet for meaningful purposes. The digital divide is shaped by social factors as much as technological factors, with systematic variations in the kinds of people who are on and off the Internet. Moreover, the digital divide cuts across nations and changes over time. Not only are there socially patterned differences within countries in who uses the Internet, there are major differences between countries: the global digital divide. Thus, there are multiple digital divides, varying within countries as well as between countries, both developed and developing.
Seven National Examples To show the varied diffusion of Internet use, summaries of the development of Internet use in seven quite different countries are provided here and compared in Table 1. United States The United States has always had the largest number of Internet users. In 1997, 57 million Americans were using the Internet, representing 22 percent of the total U.S. population. The number of users climbed to 85 million in 1998 (33 percent), 116 million in 2000 (44 percent), 143 million in 2001 (54 percent), and reached 169 million in 2002. Between 1997 and 2001, while the number of Americans using computers increased by 27 percent from 137 million to 174 million, the online population rapidly increased by 152 percent. By 2001, 66 percent of American computer users were also online. Likely because Internet use is so widespread, all forms of digital divides are shrinking in the United States (and Canada). Germany The Internet penetration rate has risen in Germany since the mid 1990s. Among the German population aged 14 and older, 7 percent used the Internet in 1997, 10 percent in 1998, and 18 percent in 1999. Unlike in North America, there was a substantial
gap in Germany between computer ownership and Internet use as late as 1999, when 45 percent of households in Germany owned a computer but only about one-quarter of those households (11 percent of all households) were connected to the Internet. Internet diffusion has accelerated since then. Twenty-nine percent of the German population was wired in 2000, 39 percent in 2001, and 44 percent in 2002. Despite the diffusion of the Internet, the socioeconomic and gender digital divides are increasing in Germany, as newcomers to the Internet are disproportionately men of high socioeconomic status. Italy Only five percent of Italian households had Internet access in 1998. Italy’s low Internet penetration rate has been associated with low computer ownership. But the situation is changing, as Italians have been rapidly adopting personal computers (PCs) and the Internet since the late 1990s. In 2000, 31 percent of Italian households owned a PC, of which 60 percent were connected to the Internet. In one year, the Internet penetration rate increased by one-third, from 14 percent in 1999 to 21 percent in 2000. It more than doubled in two years, reaching 33 percent (19 million) in 2001. However, Italy still has relatively low rates of PC and Internet penetration compared with other western European nations. The gender gap has remained significant, with women using the Internet far less than men. Japan The percentage of Japanese households owning PCs more than doubled from 1996 to 2002. About 22 percent of Japanese households owned a PC in 1996, 29 percent in 1997, 33 percent in 1998, 38 percent in 1999, 51 percent in 2000, and 58 percent in 2001. However, there has been a gap between PC access and Internet access. The diffusion of the Internet, especially the PC-based Internet, started relatively late in Japan. For instance, while 40 percent of American households were online in 1999, only 12 percent of Japanese households were online that year. The number of Internet users (6 years and older) was 12 million in 1997, 17 million in 1998, 27 million in 1999, 47 million in 2000, 56 million in 2001, and
69.4 million in 2002. The Japanese are the world’s heaviest users of mobile phone-based Internet services, comprising about one-third of the world’s users, with the less-educated and women accessing the Internet from Web-enabled mobile phones at a higher rate than from PCs. Korea The number of Internet users in Korea increased threefold between 1998 and 1999, from 3 million to 11 million. By 2001, 24 million Koreans over 7 years old (57 percent) were online. The number of Internet users grew to 26 million by June 2002, a figure nearly nine times larger than the figure from five years earlier. There were also 27 million mobile Internet subscribers in Korea in June 2002, although many of them presumably also had PC-based access. With 14 broadband subscribers per 100 inhabitants in June 2001, Korea has become the world leader in broadband Internet access. Although only 14,000 Korean households had a broadband connection in 1998, nearly 9 million of them were using broadband connections by 2002. Koreans are also heavy users of mobile-phone-based Internet services. China China is a relatively late starter in Internet use but has been catching up quickly. Because China’s population is so large, the low penetration rate of less than 5 percent provides both a great many users and much room for growth. There has been a dramatic increase in Internet users, from 620,000 in 1997 to 22 million in 2001, and about 60 million in 2003. The number of Internet-connected computers has increased from about 0.3 million in 1997 to 12 million in 2002. China’s Internet population probably ranks second in the world and is growing rapidly. Currently, use is concentrated in major urban areas near the east coast. Public access points, such as Internet cafes, accommodate many. The majority of new users are young, especially university students, creating a probable demand for continued Internet use in later life. Mexico In Mexico a much higher percentage of the population has a computer than has Internet access. For
TABLE 1. Summary of Internet Access in Seven Countries

U.S.
  Socioeconomic status: Declining yet persistent
  Gender: Half of Internet users are female.
  Life stage: Declining yet persistent
  Geographic location: Declining yet persistent

Germany
  Socioeconomic status: Increasing
  Gender: Increasing
  Life stage: Declining yet persistent
  Geographic location: Declining

Italy
  Socioeconomic status: Deep digital divide based on education.
  Gender: Increasing
  Life stage: Younger Italians currently more likely to access and use the Internet.
  Geographic location: Northern Italy is leading the south in Internet diffusion. Trend is not available.

Japan
  Socioeconomic status: Declining yet persistent
  Gender: Declining, and reversed for mobile (webphone) Internet
  Life stage: Younger are more involved
  Geographic location: Major cities have higher Internet diffusion than smaller cities.

South Korea
  Socioeconomic status: Increasing
  Gender: Persistent
  Life stage: Increasing
  Geographic location: Declining. Seoul is the most wired area in the country.

China
  Socioeconomic status: Huge, yet declining slightly
  Gender: Declining yet persistent
  Life stage: Slightly declining
  Geographic location: Huge, yet declining slightly

Mexico
  Socioeconomic status: Huge
  Gender: Less than half of Internet users are women.
  Life stage: Younger Mexicans make up the majority of Internet users.
  Geographic location: Very uneven. Users are concentrated in the central districts, Guadalajara, and Monterrey.

© Wenhong Chen and Barry Wellman 2003
example, at the start of the Internet era in 1994, although nearly 5 percent of the Mexican population owned a PC, only 0.04 percent of the population accessed the Internet. Since then, Mexicans have adopted the Internet rapidly. The penetration rate surpassed the 1 percent mark in 1998 and increased to 2.6 percent in 1999. It continued to grow to 2.8 percent (2.7 million) in 2000 and 3.7 percent (3.6 million) in 2001, and reached 4.7 million in 2002. Unreliable telephone service hinders home-based Internet use, especially in rural and impoverished areas. Instead, public Internet terminals provide a significant amount of connectivity.
The Diffusion of the Internet in Context
The diffusion of the Internet (and accompanying digital divides) has occurred at the intersection of international and within-country socioeconomic, technological, and linguistic differences. Telecommunications policies, infrastructures, and education are prerequisites for marginalized communities to participate in the information age. High costs, English language dominance, lack of relevant content, and lack of technological support are obstacles to disadvantaged communities’ using computers and the Internet. For instance, while about one-half of the world’s Internet users are native English speakers, about three-quarters of all websites are in English.
The diffusion of Internet use in developed countries may be slowing and even stalling. Currently, Internet penetration rates are not climbing in several of the developed countries with the most penetration. This is a new phenomenon, but it is too soon to tell if it represents a true leveling-off of the penetration rate or a short-term fluctuation as Internet use continues its climb to triumphant ubiquity.
With the proliferation of the Internet in developed countries, the digital divide between North America and developed countries elsewhere is narrowing. However, the digital divide remains substantial between developed and developing countries. The divide also remains substantial within almost all countries, developed as well as developing. In some countries, the digital divide is widening even as the number and percentage of Internet users increase. This happens when the newcomers to the Internet are demographically similar to those already online.
The diffusion of the Internet is not merely a matter of computer technology. The digital divide has profound impacts on the continuation of social inequality. People, social groups, and nations on the wrong side of the digital divide may be increasingly excluded from knowledge-based societies and economies.
Wenhong Chen, Phuoc Tran, and Barry Wellman
See also Digital Divide
FURTHER READING Chen, W., & Wellman, B. (2004). “Charting Digital Divides: Within and Between Countries.” In W. Dutton, B. Kahin, R. O’Callaghan, & A. Wyckoff (Eds.), Transforming Enterprise. Cambridge, MA: MIT Press. CNNIC (China Internet Network Information Center). (2003, July). 12th statistical survey report on the Internet development in China. Retrieved January 23, 2004, from http://www.cnnic.org.cn/ download/manual/en-reports/12.pdf Fortunati, L., & Manganelli, A. M. (2002). A review of the literature on gender and ICTs in Italy. In K. H. Sørensen & J. Steward (Eds.), Digital divides and inclusion measures: A review of literature and statistical trends on gender and ICT (STS Report 59-02). Trondheim, Norway: Centre for Technology and Society. Light, J. (2001). Rethinking the Digital Divide. Harvard Educational Review, 71(4), 709–733. Ministry of Public Management, Home Affairs, Posts, and Telecommunications (Japan). (2003). Building a new, Japan-inspired IT society. Tokyo: Author. National Telecommunications and Information Administration (2002). A nation online: How Americans are expanding their use of the Internet. Washington, DC: U.S. Department of Commerce. Norris, P. (2001). Digital divide? Civic engagement, information poverty and the Internet in democratic societies. Cambridge, UK: Cambridge University Press. Nua Internet (2004). How many online? Retrieved January 23, 2004, from http://www.nua.com/surveys/how_many_online/index.html. Reddick, A., & Boucher, C. (2002). Tracking the dual digital divide. Ottowa, Canada: Ekos Research Associates. Retrieved January 23, 2004, from http://olt-bta.hrdc-drhc.gc.ca/resources/digitaldivide_e.pdf Servicios de Telecomunicaciones. (2003). Usuarios estimados de Internet en México [Estimated Internet users in Mexico]. Retrieved January 23, 2004, from http://www.cft.gob.mx/html/5_est/Graf_ internet/estimi1.pdf
Soe, Y. (2002). The digital divide: An analysis of Korea’s Internet diffusion. Unpublished master’s thesis, Georgetown University, Washington, DC. Thomasson, J., Foster, W., & Press, L. (2002). The diffusion of the Internet in Mexico. Austin: University of Texas at Austin, Latin America Network Information Center. TNS Interactive. (2001). Asia-Pacific M-commerce report. Retrieved January 23, 2004, from http://www.tnsofres.com/apmcommerce/ UCLA Center for Communication Policy. (2003). The UCLA Internet report: Surveying the digital future year three. Retrieved January 8, 2004, from http://ccp.ucla.edu/pdf/UCLA-Internet-Report-YearThree.pdf van Eimeren, B., Gerhard, H., & Frees, B. (2002). ARD/ZDF-OnlineStudie 2002. Entwicklung der Online-Nutzung in Deutschland: Mehr Routine, Wengier Entdeckerfreude [The development of Internet use in Germany: More routine, less fun of discovery]. Media Perspektiven, 8, 346–362. World Internet Project Japan. (2002). Internet usage trends in Japan: Survey report. Tokyo: Institute of Socio-Information and Communication Studies, Tokyo University.
INTERNET IN EVERYDAY LIFE
The increasing presence of the Internet in everyday life has raised important questions about what the Internet means for people's access to resources, social interaction, and commitment to groups, organizations, and communities. What started in 1969 as a network linking four computers in the western United States has metamorphosed into a global system of rapid communication and information retrieval. The Internet was designed to be decentralized and scalable from the beginning. These design features have given the Internet room to expand to immense proportions and to keep growing. By the end of 2003, the Internet had an estimated 600 million regular users, and the Google search engine could reach more than 3.4 billion unique webpages. The Internet was a novel curiosity in academia in the 1980s, but it went mainstream in the early 1990s when e-mail was joined by the World Wide Web. Since then the Internet has become so pervasive that in many parts of the world its presence is taken for granted. Such widespread nonchalance about so powerful a set of technological tools illustrates how deeply the Internet has embedded itself into everyday life.
First Age—The Internet as Dazzling Wonder When the Internet moved from the arcane world of library computer terminals to homes and offices during the 1990s, it was heralded as a technological marvel. Early adopters congratulated themselves on being progressive elites, and techno-nerds rejoiced in newfound respect and fame. Bespectacled, nerdy Microsoft founder Bill Gates was as much a superstar as rock singers and professional athletes. All things seemed possible. The cover of the December 1999 issue of Wired magazine, the Internet’s greatest cultural champion, depicted a cyberangel leaping from a cliff to reach for the ethereal sun. The angel’s graceful posture pointed upward, placing seemingly boundless faith in an unfettered cyberfuture. The first clear pictures from the frontier of cyberspace came from early studies of online culture. Investigators, peering into online communities such as the Whole Earth ’Lectric Link (WELL) and LambdaMOO, provided insight into how early adopters multitasked and negotiated identities, given a paucity of social cues. Early adopters’ activities provided material for stories in the mass media and ethnographies (cultural studies). Instead of traveling to remote places, ethnographers only had to log on to the Internet and tune in to online virtual communities. Fascinating stories abounded of colorful characters and dangerous situations such as virtual transvestites and cyberstalkers. Some pundits went too far, extrapolating from such esoteric online settings to the generalized Internet experience. However, as the Internet became broadly adopted, people saw that communication would not primarily be with far-flung mysterious others in virtual worlds, but rather with the people about whom users already cared most: family, friends, and workmates. Nevertheless, ideologies of the unique, transformative nature of the Internet persisted as enthusiasts failed to view it in historical perspective (presentism). For example, long-distance community ties had been flourishing for generations, using automobiles, telephones, and airplanes. Other pundits assumed that only online phenomena are relevant to understanding the Internet (parochialism). They committed the fundamental
A Personal Story—Information Technology and Competitive Academic Debate
They say a transformation occurs when something new becomes so much a part of an activity that you cannot think of doing it otherwise. I have witnessed something on the order of a transformation in doing research for competitive academic debate, as both a participant and a coach, at both the high school and college levels. Research is essential to constructing competitive arguments and strategies in the activity. In high school in the early 1990s, most of this was done in libraries as preparation before going off to a tournament. By the time I reached college in the mid 1990s, we also had access to Lexis-Nexis. This was particularly significant because we could dial into the service to adapt and update our arguments while away at tournaments. The ability to adapt arguments over the course of a tournament is crucial to maintaining a competitive edge. The next change came as the university I attended began to wire all the buildings for Internet access, and the Internet gradually became more central to debate research. With the latest journals and reports from various think tanks available online, we could do the bulk of our research without leaving the debate office—which came in handy during those cold Iowa winters. Some became so dependent on the Internet that they never went to the library. At this point, however, the Internet was still not portable, meaning we, like most teams, did not have Internet access when we were out of town at tournaments. Over time, however, the ability to access the vast diversity of resources on the Internet became essential to stay competitive with other teams. Given the literally hundreds of dollars we would incur in hotel phone charges, we began to network computers together so that multiple people could be online or using Lexis-Nexis while using a single phone line. Now that most major hotels have high-speed Internet access, the costs of connectivity have dramatically declined. Some tournaments even offer onsite wireless access to all participants so that their coaching staffs can continually update and adapt argument strategies throughout the day. Internet-based research, once not even possible, became a competitive edge for those who had it, and is now a competitive necessity for debate programs at home and on the road.
Michael J. Jensen
sin of particularism, thinking of the Internet as a lived experience distinct from the rest of life. This sin often shaded into elitism because only the small percentage of the technologically adept had the equipment, knowledge, time, and desire to plunge so fully into cyberspace. The social exuberance for all things technological departed quickly in 2000. For one reason, that year’s dot.com stock market bust curbed enthusiasm and media attention. Special newspaper Internet sections shrank in the wake of instantly vanishing dot.com vanity ads, and the pages of Wired magazine shrank 25 percent from 240 pages in September 1996 to 180 pages in September 2001 and another 22 percent to 140 pages in September 2003. When the rapidly contracting dot.com economy was brought down to earth, it took Internet euphoria with it. At the same time, the Internet had become
so widely used in developed countries that it was becoming routinized. Familiarity breeds cognitive neglect, and as with the telephone and the automobile before it, exotic stories diminished just as the widespread diffusion of the Internet increased its true social importance.
Second Age—The Internet Embedded in Everyday Life The story of the Internet after the hype is more interesting, if less fashionable. The Internet plugs in to existing social structures: reproducing class, race, and gender inequalities; bringing new cultural forms; and intersecting with everyday life in both unconventional and conventional ways. Attention now focuses on the broader questions of the “Internet in
society” rather than on “Internet societies.” The thrust of research is moving from using culture-oriented small sample studies and abstract theorizing to using surveys to study the more diffuse impact of this new communication and information distribution medium in the broad population of Internet users (and nonusers). Whereas the first age of the Internet was a period of exploration, hope, and uncertainty, the second age of the Internet has been one of routinization, diffusion, and development. Research shows that computer networks actively support interpersonal and interorganizational social networks. Far from the Internet pulling people apart, it often brings them closer together. Internet users are more likely to read newspapers, discuss important matters with their spouses and close friends, form neighborhood associations, vote, and participate in sociable offline activities. The more they meet in person or by telephone, the more they use the Internet to communicate. This media multiplexity means that the more people communicate by one medium, the more they communicate overall. For example, people might telephone to arrange a social or work meeting, alter arrangements over the Internet, and then get together in person. Rather than being conducted only online, in person, or on the telephone, many relationships are complex dances of serendipitous face-to-face encounters, scheduled meetings, telephone chats, e-mail exchanges with one person or several others, and broader online discussions among those people sharing interests. Extroverts are especially likely to embrace the ways in which the Internet gives them an extra and efficient means of community. However, introverts can feel overloaded and alienated. Internet-based communications have always fostered social networks serendipitously. Even eBay, the Internet auction enterprise, helps create communication between hitherto-disconnected specialized producers and collectors. Many software developers have focused on identifying, using, and analyzing these social networks. Participants in online networking websites, such as Friendster, not only describe themselves online (e.g., “single, male truck driver aged thirty-five”), but also list their friends. They hope that friends of friends will be able to contact each other.
Most people in most developed countries use the Internet to find information or to contact friends, but many people are not online. Surveys and ethnographies have shown how racial minorities, the economically disadvantaged, and those people who do not read English use the Internet less than others. This situation has serious social consequences as companies and government agencies place more services exclusively online. Thus,“digital divides” mean that the lack of Internet access and use can increase social inequality. Digital divides exist within countries and among countries. Moreover, different countries have different sorts of digital divides. For example, Italian women access the Internet much less often than do Italian men or northern European women. Overall, however, the gaps in income, location, culture, and language are shrinking between those people who are comfortable with computerization and those who are not. The gender gap has already disappeared in some places. Digital divides constitute more than an access/no access dichotomy. People have concerns about the quantity of information flowing through the Internet and the quality of the experience. The quality of the Internet experience is a key concern for reducing social inequality. First, the ability to perform a complex and efficient search online is not a skill learned by osmosis, but rather by experience and openness to the potential of the technology. Second, bloated software that inundates the user with ambiguous options and icons can intimidate novice users instead of providing the best framework for learning. Third, content providers must consider the time lag between experienced early adopters, late adopters, and newbies (new users). These populations can have significantly different expectations about what to do online and how to do it. Fourth, many websites are available only in English. Fifth, one can have difficulty using the Internet to communicate if one’s contacts are not online. At one time analysts expected all societies to use the Internet in similar ways. However, comparative research shows different national patterns. The extent to which such media as e-mail or instant messaging (IM) is used depends on a complex interplay between people’s tastes, financial resources, culture, location geographically, location in the social structure, and
Finding Work Online
In an interview for my dissertation on work in the Internet industry, a freelance writer described how she found one of her jobs. Members of the online communities she belonged to—including e-mail lists that connected members of New York’s Internet industry—freely shared job leads with one another and often announced when they were looking for a new job. One position, however, came when an employer “met” her through the list. She used the story to describe the importance of maintaining one’s reputation online: “He said, ‘I saw your post on the list and saw that you know how to use a semicolon. You seem super competent.’” She got the job without ever having met him. My interview data show that her experience is not unusual. Online social connections can feel as real as other ways of knowing someone, especially in realms like job hunting. Through e-mailed job announcements, people can demonstrate what anthropologists call “gift exchange” through their social networks. They use information as a resource to share with the people with whom they are in contact. The rise of these announcements and online job databases makes the process of job hunting easier, as does the expansion of acquaintance networks through computer technologies such as programs to maintain contact information, e-mail lists, and new “networking” services. Jobs in a different city can be easily researched, and looking for jobs that require specialized skills across the country is easier than before the use of the Internet for job hunting. The same changes that make it easier for job seekers to get information about job openings, however, make it more difficult for them to differentiate themselves from other, now more numerous jobseekers. Through a kind of information overload, online job databases may, in fact, be strengthening the role that personal connections play in ultimately helping workers find jobs. One group I studied had a novel approach to the online/offline searching techniques—they stood on street corners in New York City passing out fliers for a website where they posted their resumes. Rather than relying solely on the ways in which they were linked to prospective employers online, they increased the number of their social connections. Still, getting a job, as the sociologist Mark Granovetter once wrote, comes down to the friends of friends, even in a digital era, even if some of those people are known only online.
Gina Neff

national infrastructure. At times what people face is not a matter of personal choice but rather a matter of social constraint: It is foolish for a person to send e-mails or instant messages if few people are reading them. For example, Catalans in Spain mostly use the Internet for acquiring information and shopping—train schedules, theater tickets—and less for communicating by e-mail. Catalonia is a local society in a healthful climate where people gather in cafes to chat face to face. To take another example, teenagers in developed countries communicate more by mobile phone and instant messages than by e-mail. In Japan the proliferation of Web-enabled phones means that two hitherto-separate communication media are becoming linked: Japanese teenagers and young adults frequently exchange e-mails on their mobile phones or use their personal computers to send short text messages to mobile friends.
With physical co-presence continuing to be important, the Internet supports “glocalization” rather than the Canadian philosopher Marshall McLuhan’s imagined “global village.” In the community and at work the Internet facilitates physically close local ties as well as physically distant ties. People often use the Internet to communicate quickly with nearby others without the disturbance of a phone call or in-person visit. For example, one study of Netville, the pseudonym for a Toronto suburb, found that active Internet users knew the names of more neighbors, had visited more of them, and used the Internet effectively to mobilize against their real estate developer. Another Toronto study found that co-workers were more likely to use the Internet when they worked in the same building, in part because they had more tasks and concerns in common. Even many long-distance ties have a lo-
cal component, as when former neighbors or officemates use the Internet to remain in touch. E-diasporas abound as migrants use the Internet to stay linked with their old country: communicating with friends and relatives, reading newspapers online, and providing uncensored information. People’s increased use of the Internet is correlated with their decreased time doing housework, watching television, and being with family members in person. Experts have suggested two models to explain this situation. The hydraulic model treats time as a fixed amount. Hence, an increase in the use of the Internet directly corresponds with a decrease in other activities. According to the efficiency model, people on the Internet can use time more effectively and may get more communication and information out of their day. The efficiency model appears to be more accurate as people combine e-mail, IM, and mobile phone use with other activities. Indeed, with multitasking, we can say that some people live a thirty-six-hour day, performing tasks concurrently online and offline. The Internet also has affected social networks at work. Early efforts at software to support people working in groups were fitful, in part because most knowledge workers do not work in one group. Many now are engaged in distributed work, operating through geographically dispersed and loosely knit social networks. Rather than being parts of traditional bureaucratic hierarchies (with organizational structures looking like inverted trees) in which each worker fits into a single group, many knowledge workers are partial members of multiple teams and report to multiple superiors. Many teams are geographically dispersed so that much communication is done by the Internet. Moreover, those workers who spend the day working on personal computers often turn to the Internet to acquire information rather than ask a co-worker in a nearby cubicle. They form “communities of practice” with fellow workers whom they may never have met in person, exchanging know-how and empathy online. However, proximity still has its advantages because it provides a broad bandwidth of multisensory communication—people learn more when they see, hear, smell, and touch each other—and enables the exchange of physical objects.
The Social Possibilities of the Internet The Internet—or any other technology—does not simply “cause” anything, just as a light switch placed high on a wall does not make access impossible for children. Understanding the implications of the Internet calls for understanding some of the possible social activities that can be accomplished by using it. For example, people can take advantage of real-time communication with partners via instant messaging, access daily journals (blogs), and send instant announcements to one or one hundred specific people through e-mail. Yet, the Internet does not determine the nature of communication. Indeed, a line of “media richness” research during the early 1990s failed to show much fit between the nature of a communication medium and what it was used for: People used what their friends and their co-workers used. The Internet lends itself to particular styles of communication that are parts of a person’s overall ensemble of everyday communication via the telephone and in-person encounters. Thus, the Internet’s technical characteristics provide a means of organizing relationships with other people, not a blueprint of how the relationships will or should take place.
Asynchronous Communication Most communication takes place in “real time.” On the telephone and in person, people assume that communication is reciprocal and that the delay between utterances is brief. By contrast, e-mails, like letters, allow people to communicate on their own time. Yet, unlike letters, e-mails reach their destination within minutes. As long as systems are not overloaded, the only significant delay in e-mail communication is the time lag set by the user’s attention. E-mail, as a form of asynchronous communication, gives greater individual autonomy. People can choose when to turn on their computers and their e-mail programs, choose to whom they wish to respond, and choose who else in their network they want to include in their e-mail communication. The cost of this autonomy is uncertainty regarding when and if the receiver will read the e-mail and reply.
Bandwidth
The number of bits (units of computer information) that can be pushed through a computer network connection has risen from 110 bits per second (bps) in the mid-1970s to 30,000 bps in the early 1990s and upward of 1 million bps for a high-speed connection today. High-capacity bandwidth is important for speed so that text messages and webpages become readable without distracting delays. Greater bandwidth affords richer content. Greater bandwidth can mean the difference between sending terse and ugly text messages and sharing photos or music and seeing one another via Internet-connected cameras (webcams). Greater bandwidth usually allows computers to be connected continuously. The study of the Netville wired suburb found this always-on feature of its network to be more valued than sheer speed. Such a connection affords people the ability to send e-mail or check the Web whenever the inclination strikes them. Employers now complain about workers’ use of the Internet for personal matters, and family members complain that their loved ones are tied to their computers during supposed leisure hours. High-speed, always-on connections allow a different relationship to the Internet than that possible with micromanaged dial-up connections with slow file access and concerns about usurping telephone lines. Whereas dial-up connectivity facilitates a discrete Internet session, always-on Internet encourages spontaneous use.
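To make these rates concrete, the short calculation below estimates how long a single webpage would take to arrive at each of the connection speeds just mentioned. It is an illustrative sketch only: the 100-kilobyte page size is an assumed figure, and real transfer times also depend on protocol overhead, latency, and congestion.

```python
# Rough, illustrative transfer times for a hypothetical 100-kilobyte webpage
# at the three connection speeds cited in the text.
PAGE_BITS = 100 * 1024 * 8  # 100 kilobytes expressed in bits

rates_bps = {
    "mid-1970s connection (110 bps)": 110,
    "early-1990s modem (30,000 bps)": 30_000,
    "high-speed connection (1,000,000 bps)": 1_000_000,
}

for label, rate in rates_bps.items():
    seconds = PAGE_BITS / rate
    print(f"{label}: about {seconds:,.1f} seconds")
```

At 110 bps the page would need roughly two hours; at 1 million bps it arrives in well under a second, which is why greater bandwidth changes not only speed but also the kinds of content people are willing to exchange.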
Globalized, Ubiquitous Connectivity Computerization has oscillated between centralized control (computer centers) and personal control (standalone computers). The current situation— networked computing—means that information (and control) flows up and down between central servers and somewhat autonomous personal computers. Yet, despite organizational control, many people in organizations also use their computers for social and personal matters. Computer networks are expanding as the World Wide Web is becoming more comprehensive and
worthy of its name. Ubiquity means the widespread availability of usable computing and computer-mediated communication. Travelers in the developed world are coming to expect to be able to connect to the Internet wherever they are through cybercafes, high-speed links in hotels, wireless “hotspots,” and the local offices of their organizations. Workers can now be reached on vacation. Cybercafes proliferate in the less-developed world, catering to people who cannot afford to own a computer or who do not have reliable phone and electricity connections. Other solutions are developing, such as wirelessly connected computers carried by Indian postal workers on their rounds. The continuing development of global availability means that even more people and organizations will be reachable online. All users could be connected to all, through either direct ties or short chains of indirect ties.
Wireless Portability People are shifting from “wired” computing— connected to the Internet through cables—to portable, wireless computing. Portability means that people can take computing with them: One does not have to depend on others’ equipment to connect. Already more wireless mobile phones than wired phones are in use worldwide. Although wires still carry the most bandwidth, mobile phones are becoming integrated with the multifunctional capacity of computers. This integration lets people be less rooted to place and gives them the ability to connect from anywhere. Portability means that much work is being carried home from the office on laptop computers or high-capacity storage devices.
Relational Data Vannevar Bush, the great-grandfather of the Internet, once suggested that information could be organized into trails of association rather than grouped into discrete categories (as in an encyclopedia). This suggestion has been translated into “Web surfing.” People move through Web networks, from one link to another, rather than exhaust a particular category. In some instances these links are dynamic, such as the
recommendations on an Amazon.com book webpage, whereas others are more static, such as a personal list of favorite links. Most computer operating systems now allow users to have their own settings, e-mail accounts, and desktop aesthetics. Instant messaging accounts, accessible from any Internet terminal, are tailored to the person, not to the house or the particular computer. Ubiquitous computing could soon mean that whenever people log on to a communications device, the device knows who they are, where they are, and their preferences. Such personalization, even at its early stages, is fostering societal shifts from place-to-place connectivity (a particular computer) to person-to-person connectivity (a particular user’s account). The Internet has partially democratized the production and dissemination of ideas. More people are providing information to the public than ever before. E-mail discussion groups, Web-based chatrooms, and Usenet newsgroups foster conversations among (more or less) like-minded people. All of these communication media are based on many-to-many communication, in contrast to e-mail and instant messaging, which usually are based on one-to-one communication. Although some people have feared that the like-minded will talk only to each other, in practice much diversity exists in these interactions, and interesting ideas can be copied and forwarded to others. Rather than result in inbred sterilization, such media are concerned with “flaming” (making offensively rude comments) and “spamming” (sending off-topic comments). For those people who want more complex means of communication, easy-to-use software now facilitates do-it-yourself webpages. Software has transformed the creation of simple webpages from an arcane art to a straightforward creation of non-specialists. At one time most webpages were relatively static, but now blogs make frequent updating simple. Many blogs combine personal reflections, social communication, and links to other websites. This democratization of computing is not just recreational. The Internet also supports open source development of computer code: a peer-production system where members of a team contribute and distribute computer code freely and openly. One pop-
ular open source operating system (GNU/Linux) contains many millions of lines of computer code. Another open source product, Apache, runs most of the world’s Web servers. Without the Internet to connect specialized developers and distribute their code, open source work would have remained a slowmoving, poorly communicated, and badly coordinated activity for hobbyists. Yet, even open source work does not exist exclusively on the Internet. Linux user groups populate most North American cities, popular face-to-face conferences generate revenue, and developers like to talk to each other in person. The Internet has facilitated, not monopolized, this type of production. The most notorious exchange of complex information on the Internet is the sharing of music, computer programs, and movies. Because only computer bits, not material goods, are exchanged, many downloaders feel that they have the right to obtain such material for free. The complex interplay between immaterial media, their physical containers such as CDs, their copyright licenses, and the costs of distribution has challenged producers, consumers, and the legal system.
The Turn toward Networked Individualism The Internet lets people who can afford recent technological services communicate when, where, and with whom they want and have their experiences personalized. Indeed, the Internet (along with mobile phones and automobiles) is fostering a societal turn away from groups and toward networked individualism: People connect to each other as individuals rather than as members of households, communities, kinship groups, workgroups, and organizations. Especially in the developed world, this flourishing of person-to-person connectivity has also been fostered by social changes such as liberalized divorce laws and by technological changes such as the proliferation of expressways, mobile phones, and air travel. The turn to a ubiquitous, personalized, wireless world fosters personal social networks that supply sociability, support, information, and a sense of belonging. The individual user is becoming a
switchboard between his or her unique set of ties and networks; people separately operate their specialized ties to obtain needed resources. Although people remain connected and supportive, individuals in unique networks have supplanted the traditional organizing units of the household, neighborhood, kin group, and workgroup. Networked individualism is having profound effects on social cohesion. Rather than being part of a hierarchy of encompassing groups like nesting Russian dolls, people belong to multiple, partial communities. They move from person to person, not place to place. Increasing specialization of tastes and combination of roles are not products of the Internet. Yet, the design of the Internet, culturally rooted in a specific brand of individualism, considers the person regardless of place and regardless of a socially imposed structure such as a kinship network. Social coordination may be usurped as the role of maintaining ties with kin becomes either less important or more overloaded. Teenagers’ parents do not get to approve of buddy lists or websites visited. The development of computer networks, and the flourishing of social networks are building upon each other to support the rise of networked individualism. Just as the flexibility of less-bounded, spatially dispersed, social networks creates demand for collaborative communication and information sharing, the rapid development of computer-communications networks nourishes societal transitions from grouporiented societies to a society of networks. Barry Wellman and Bernie Hogan See also Computer-Supported Cooperative Work; Cybercommunities; Digital Divide; Internet— Worldwide Diffusion
FURTHER READING Bradner, E., Kellogg, W. A., & Erickson, T. (1999). The adoption and use of Babble: A field study of chat in the workplace. In S. Bodker, M. Kyng, & K. Schmidt (Eds.), ECSCW 99: Proceedings of the Sixth European Conference on Computer Supported Cooperative Work (pp. 139–158. Dordrecht, The Netherlands: Kluwer. Bush, V. (1945). As we may think. The Atlantic Monthly, 176(1), 101–108.
Castells, M. (2000). The rise of the network society (2nd ed.). Oxford, UK: Blackwell. Castells, M. (2001). The Internet galaxy: Reflections on Internet, business, and society. Oxford, UK: Oxford University Press. Castells, M., Tubella, I., Sancho, T., & Wellman, B. (2003). The network society in Catalonia: An empirical analysis. Barcelona, Spain: Universitat Oberta Catalunya. Chen, W., & Wellman, B. (2004). Charting digital divides: Within and between countries. In W. Dutton, B. Kahin, R. O’Callaghan, & A. Wyckoff (Eds.), Transforming enterprise. Cambridge, MA: MIT Press. Cross, R., & Parker, A. (2004). The hidden power of social networks: Understanding how work really gets done in organizations. Boston: Harvard Business School Press. Deibert, R. J. (2002). Dark guests and great firewalls: The Internet and Chinese security policy. Journal of Social Issues, 58(1), 143–159. Gibson, W. (1984). Neuromancer. New York: Ace Books. Hampton, K., & Wellman, B. (2003). Neighboring in Netville: How the Internet supports community and social capital in a wired suburb. City and Community, 2(3), 277–311. Haythornthwaite, C., & Wellman, B. (1998). Work, friendship and media use for information exchange in a networked organization. Journal of the American Society for Information Science, 49(12), 1101–1114. Hiltz, S. R., & Turoff, M. (1993). The network nation (2nd ed.). Cambridge, MA: MIT Press. Hinds, P., & Kiesler, S. (Eds.). (2002). Distributed work. Cambridge, MA: MIT Press. Howard, P. N., & Jones, S. (Eds.). (2004). Society online: The Internet in context. Thousand Oaks, CA: Sage. Ito, M. (Ed.). (2004). Portable, personal, intimate: Mobile phones in Japanese life. Cambridge, MA: MIT Press. Katz, J., & Aakhus, M. (2002). Perpetual contact: Mobile communications, private talk, public performance. Cambridge, UK: Cambridge University Press. Katz, J. E., & Rice, R. E. (2002). Social consequences of Internet use: Access, involvement, and interaction. Cambridge, MA: MIT Press. Kendall, L. (2002). Hanging out in the virtual pub: Masculinities and relationships online. Berkeley, CA: University of California Press. Kiesler, S. (Ed.). (1997). Culture of the Internet. Mahwah, NJ: Lawrence Erlbaum. Kim, A. J. (2000). Community building on the Web. Berkeley, CA: Peachpit Press. Kraut, R., Kiesler, S., Boneva, B., Cummings, J., Helgeson, V., & Crawford, A. (2002). Internet paradox revisited. Journal of Social Issues, 58(1), 49–74. Madden, M., & Rainie, L. (2003). America’s online pursuits. Washington, DC: Pew Internet and American Life Project. Manovich, L. (2002). The language of new media. Cambridge, MA: MIT Press. National Telecommunications and Information Agency. (1999). Falling through the Net: Defining the digital divide. Washington, DC: U.S. Department of Commerce. Preece, J. (2000). Online communities: Designing usability, supporting sociability. New York: John Wiley & Sons. Raymond, E. (1999). The cathedral and the bazaar: Musings on Linux and Open Source by an accidental revolutionary. Sebastopol, CA: O’Reilly. Rheingold, H. (2000). The virtual community (Rev. ed.). Cambridge, MA: MIT Press.
Rheingold, H. (2002). Smart mob: The next social revolution. New York: Perseus. Smith, M. A., & Kollock, P. (Eds.). (1999). Communities in cyberspace. London: Routledge. Sproull, L., & Kiesler, S. (1991). Connections. Cambridge, MA: MIT Press. Turkle, S. (1995). Life on the screen: Identity in the age of the Internet. New York: Simon & Schuster. UCLA Center for Communication Policy. (2003). The UCLA Internet report: Surveying the digital future year three. Retrieved January 8, 2004, from http://ccp.ucla.edu/pdf/UCLA-Internet-Report-YearThree.pdf Watts, D. J. (2003). Six degrees: The science of a connected age. New York: W. W. Norton. Wellman, B. (2001). Physical place and cyberspace: The rise of personalized networks. International Urban and Regional Research, 25(2), 227–252. Wellman, B., & Haythornthwaite, C. (Eds.). (2002). The Internet in everyday life. Oxford, UK: Blackwell.
ITERATIVE DESIGN Iterative design is a product development process based on repeated cycles. In each cycle, designers elaborate, refine, and experiment with the design. The work done in each cycle feeds into the next cycle. Although iterative design can be used with many aims, the most common goal is usability. Where usability is the goal, each iteration experiments with some aspect of users’ experiences with the product, and user evaluations form a large part of the feedback. In this way, iterative design is a form of user-centered design. Iterative design dovetails well with prototyping when it comes to software development. Prototyping is described in general in a separate article in this volume. Iterative design is sometimes distinguished from iterative development, in that iterative design aims to produce a design solution. This is also called design prototyping, as the prototype is a key element in describing the design. Iterative development, by contrast, aims to produce a software system. This is also called production prototyping, with the prototype becoming the system. While iterative development and iterative design are thus conceptually different, they are usually conflated both in the literature on iterative design and in system development practice.
When software developers develop software for their own use, they rely on iterative design. An early description of iterative design used as a formalized software development method is given in a 1975 article by the computer scientists Victor Basili and Albert Turner, who called it iterative enhancement. In a 1985 article, John Gould and Clayton Lewis broadened the idea and impact considerably by suggesting that iterative design should be one of the fundamental principles for collaboration between system developers and the prospective users of the system being developed. Reinhard Budde and his colleagues clarified the concept in a 1992 article by making a distinction between the purpose of prototyping and the relation between the prototype and the final system. They drew the connection between iterative design and evolutionary prototyping, which is described as a continuous process for adapting an application system to rapidly changing organizational constraints. The relation between the prototypes and the application system depends on the form of iterative design that is employed, as discussed below. Iterative design processes in the field of humancomputer interaction typically involve a substantial element of initial analysis, which forms the basis for developing a first prototype. This prototype is gradually redeveloped, and through a number of iterations it becomes the system. Often no aspect of the initial design remains untouched through the many iterations, and it is argued that the iterations contribute to a significantly better product.
Forms of Iterative Design In practice, there are three distinct forms of iterative design: abstract iterative design, experiential iterative design, and embedded iterative design. Abstract Iterative Design This simple form of iterative design does not involve any operational prototypes. There may be only a simple two-stage process in which the designer creates the design and then meets with users. The designer explains the design and solicits feedback from the users. This user feedback forms the basis for design revisions and further cycles of user review. The
designers may use simple aids such as drawings or replicas of system screen images, components, and so on. The simple aids used in iterative design are often referred to as paper prototypes or mock-ups. They are a form of design artifact that functions as a concrete embodiment of how the designer plans to meet users’ requirements. This artifact performs an important social role in making user-designer communication accurate; it provides registration and alignment of the abstract ideas of the designer with the abstract ideas of the user by providing a physical object on which to anchor terminology. Experiential Iterative Design More commonly, user review in iterative design will include some form of experience with a working prototype of system components. The users will have a simulated but dynamic experience with various system functions and will examine input format and system output. The experiential process is similar to the abstract process, but the design artifacts are considerably richer, more sophisticated, and dynamic. The user has the opportunity to experience how the new system will look and feel. This form of iterative design is sometimes called design prototyping, and the design artifacts are sometimes called throwaway prototypes. (They are called throwaway prototypes because they are not actually part of the final system; they serve only as models for the design.) Embedded Iterative Design Embedded iterative design embeds the design activity in the construction activity in cycles that produce working system components that will be part of the final production system. In embedded iterative design, the design artifact is the emerging production software system. Users experiment with the various system components as these are produced in rough form and then refined. Based on users’ experience and evaluation, designers refine and develop their designs. An iterative cycle will include design, implementation, and evaluation. Usability testing with prospective users is a key part of evaluation. Embedded iterative design is the basis of production prototyping. This form of prototyping evolves a software system through design and con-
struction cycles until the design stabilizes and the production prototype passes a final user acceptance test. The design and the system itself synchronously evolve into the final development product. Embedded iterative design is usually the basis for short-cycle systems development, which is also called agile development or Internet-speed development. This form of systems development is similar to production prototyping, except that the design is never really considered to be stable. The software artifact evolves through a continuous series of releases, growing and adapting to changes in its environment. Embedded iterative design is suitable in highly competitive environments and in emergent organizations.
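The cycle that embedded iterative design describes (design and implement an increment, evaluate it with users, feed the findings into the next cycle, and stop when the design stabilizes) can be summarized in a short sketch. The code below is purely illustrative: the data structures and the simulated usability findings are assumptions made for demonstration and are not drawn from any particular development method or toolkit.

```python
# Illustrative sketch of an embedded iterative design cycle: each iteration
# produces a working increment, users evaluate it, and their findings feed
# the next cycle until no open issues remain (the design has stabilized).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Increment:
    version: int
    open_issues: List[str] = field(default_factory=list)

def evaluate_with_users(inc: Increment, findings: Dict[int, List[str]]) -> List[str]:
    """Stand-in for usability testing: returns the issues users report."""
    return findings.get(inc.version, [])

def iterative_design(findings: Dict[int, List[str]], max_cycles: int = 10) -> Increment:
    inc = Increment(version=0)
    for cycle in range(1, max_cycles + 1):
        inc = Increment(version=cycle)                 # design and implement the next increment
        inc.open_issues = evaluate_with_users(inc, findings)
        print(f"Cycle {cycle}: {len(inc.open_issues)} usability issue(s) reported")
        if not inc.open_issues:                        # design stable, users accept
            break
    return inc

# Simulated user findings: fewer issues each cycle until acceptance.
simulated = {1: ["confusing menu", "slow search"], 2: ["slow search"], 3: []}
print(f"Accepted at version {iterative_design(simulated).version}")
```

In practice the findings would come from usability testing with prospective users rather than a lookup table, and each increment would be part of the emerging production system described above.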
Managing Iterative Design
Iterative design is an ideal response to adaptive situations. It is carried out in a sequence of iterations or cycles, each involving a number of activities. These activities include analysis of evaluations of previous versions of the system and requirements for additional functions, design of software elements that integrate the new functions into the existing version, implementation of the design through the building of new artifacts, and evaluation of those new artifacts by prospective users. After the evaluation, the cycle begins again. These cycles continue until the design, and usually the artifact, is stable and acceptable to all stakeholders involved. (Stakeholders can include the system’s designers, users, testers, and owners.)
Iterative design may seem simple and appealing, but it is definitely not without its challenges. Iterative design depends fundamentally on repeated activities, which makes management particularly problematic, because it becomes hard to exercise the two basic management functions of planning and control. Plans are unstable because usability goals are hard to specify in advance and are supposed to change with each cycle. Control is also difficult to maintain, as it is difficult to measure progress in usability, especially if an iteration leads to a shift in design goals. In addition, progress depends on user cooperation that at times may not be forthcoming.
Management of iterative design must include getting managers, analysts, programmers, and users to agree on the exact objectives of the process. Managers must keep users from dominating the designer-user interaction; otherwise users may inflate the scope of development. Managers must also keep designers from becoming domineering, because designers may deflate the scope in order to reduce the programming work. Designers must often work with incomplete materials and limited time for user reviews. The iterative design process must carefully define the contents of the next prototype in each cycle, because user reviews may uncover a vast array of potential revisions and improvement directions. Designers should not run in too many different and possibly unproductive directions. The process must accurately gauge progress and nearness to completion so that design artifacts are not prematurely accepted before user reviews are stable.
A risk management approach works well for controlling iterative design processes. This approach enables appropriate risk resolution strategies to be placed in effect before the design process breaks down. As illustrated in Figure 1, a four-stage risk management cycle is introduced into the design cycle to evaluate the current stage of the project. First, risks are defined. Next, the consequences of those risks are specified by considering what undesirable situation will result from each risk and by ranking the probability and potential severity of each risk. Then the risks are assigned priority, with high-probability or high-consequence risks receiving high priority. Finally, resolution strategies are developed for urgent risk factors. The process must designate resolution strategies for the two to four risks that have the highest ranks, and these strategies form the basis for managing the next design cycle.

FIGURE 1. Iterative design as an interplay between a prototyping cycle and a risk management cycle. The risk management cycle has four stages: (1) define risks, (2) specify consequences, (3) assign priorities, (4) select resolution strategies.
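The four-stage risk management cycle lends itself to a similar sketch. The code below is a hypothetical illustration: the 1-to-5 rating scales, the use of probability multiplied by severity as a priority score, and the example risks are assumptions introduced here, not details prescribed by the approach described above.

```python
# Illustrative sketch of the risk management cycle: define risks, specify
# consequences (probability and severity), assign priorities, and select
# the two to four top-ranked risks whose resolution strategies will guide
# the next design cycle.
from dataclasses import dataclass
from typing import List

@dataclass
class Risk:
    name: str
    consequence: str   # the undesirable situation that would result
    probability: int   # assumed scale: 1 (unlikely) to 5 (very likely)
    severity: int      # assumed scale: 1 (minor) to 5 (severe)

    @property
    def priority(self) -> int:
        # High-probability or high-consequence risks receive high priority.
        return self.probability * self.severity

def select_for_resolution(risks: List[Risk], top_n: int = 3) -> List[Risk]:
    """Rank all defined risks and keep the handful that need strategies now."""
    return sorted(risks, key=lambda r: r.priority, reverse=True)[:top_n]

risks = [
    Risk("Users dominate reviews", "scope of development inflates", 4, 3),
    Risk("Designers deflate scope", "user needs go unmet", 2, 4),
    Risk("User reviewers unavailable", "evaluation stalls the cycle", 3, 5),
]

for r in select_for_resolution(risks):
    print(f"Priority {r.priority:2d}: {r.name} -> develop a resolution strategy")
```

Ranking and trimming the list in this way keeps each cycle focused on the two to four risks whose resolution strategies matter most.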
Looking Forward Iterative design is a fundamental element in most high-speed systems development approaches, including “agile” development methods. The growing use of such approaches and methods, especially in highly competitive global settings, will drive increasing use of iterative design for many years to come. Richard Baskerville and Jan Stage See also Prototyping, User-Centered Design
FURTHER READING

Alavi, M. (1984). An assessment of the prototyping approach to information systems development. Communications of the ACM, 27(6), 556–563. Baecker, R. M., Nastos, D., Posner, I. R., & Mawby, K. L. (1993). The user-centered iterative design of collaborative writing software. In Proceedings of InterCHI’93 (pp. 399–405). New York: ACM Press. Basili, V., & Turner, A. (1975). Iterative enhancement: A practical technique for software development. IEEE Transactions on Software Engineering, SE-1(4), 390–396. Baskerville, R. L., & Stage, J. (1996). Controlling prototype development through risk management. Management Information Systems Quarterly, 20(4), 481–504. Baskerville, R., & Pries-Heje, J. (in press). Short cycle time systems development. Information Systems Journal, 14(2). Boehm, B., Gray, T., & Seewaldt, T. (1984). Prototyping versus specifying: A multiproject experiment. IEEE Transactions on Software Engineering, SE-10(3), 290–303. Budde, R., Kautz, K., Kuhlenkamp, K., & Züllighoven, H. (1992). What is prototyping? Information Technology & People, 6(2–3), 89–95. Connell, J. L., & Schafer, L. B. Structured Rapid Prototyping. Englewood Cliffs, NJ: Yourdon Press. Dieli, M. (1989). The usability process: Working with iterative design principles. IEEE Transactions on Professional Communication, 32(4), 272–279.
Ehn, P. (1989). The art and science of designing computer artifacts. Scandinavian Journal of Information Systems, 1, 21–42. Gould, J. D., Boies, S. J., & Lewis, C. (1991). Making usable, useful, productivity-enhancing computer applications. Communications of the ACM, 34(1), 74–86. Gould, J. D., & Lewis, C. (1985). Designing for usability: Key principles and what designers think. Communications of the ACM, 28(3), 300–311. Khanna, N., Fortes, J. A. B., & Nof, S. Y. (1998). A formalism to structure and parallelize the integration of cooperative engineering design tasks. IIE Transactions, 30(1), 1–16. Nielsen, J. (1993). Iterative user-interface design. IEEE Computer, 26(11), 32–41.
Norman, D. (1988). The psychology of everyday things. New York: Basic Books. Plaisant, C., Marchionini, G., Bruns, T., Komlodi, A., & Campbell, L. (1997). Bringing treasures to the surface: Iterative design for the Library of Congress National Digital Library Program. In Proceedings of Human Factors in Computing, CHI ’97 (pp. 518–525). New York: ACM Press. Sullivan, K. (1996). The Windows® 95 user interface: A case study in usability engineering. Proceedings of Human Factors in Computing Systems, CHI ’96, 473–480. New York: ACM. Vonk, R. (1990). Prototyping: The effective use of CASE technology. New York: Prentice Hall.
THE KEYBOARD

The keyboard is the main input device for the personal computer and its design retains features first developed for typewriters over a hundred years ago.
Keyboard History

In England in 1714, Queen Anne granted Henry Mill the first keyboard patent. In Italy in 1808, Pellegrino Turri built the first typewriter. In 1870 in Denmark, a pastor named Malling Hansen produced the “writing ball,” commonly regarded as the first commercial typewriter. In 1868 in the United States, Christopher Latham Sholes, C. Glidden, and S.W. Soule, three Milwaukee inventors, patented the “Type-Writer,” the forerunner of today’s computer
keyboard. The Type-Writer had alphabetically arranged keys that used an understroke mechanism; depressing a key caused a lever to swing the typehead upward against carbon paper over a piece of stationery that was held against the underside of a platen. The typist could not see what was being typed and the device was prone to frequent key jams, which slowed typing. From 1874 to 1878 the Remington Company in New York manufactured a new design, the Sholes & Glidden Type-Writer, which had a new keyboard layout with keys organized into three rows—top, home, and bottom. The new layout, developed by Amos Densmore, brother of Sholes’s chief financial backer, James Densmore, was designed to make the overall task of typing faster by slowing typing to minimize key jams. Based on letter-pair frequency and named after the leftmost six keys on the
top row, QWERTY was patented in 1878. The same year a “shift” key was added to allow either upper- or lower-case type, The Typewriter Magazine was published for the first time, and Scientific American used the term “typewriting” in an article. Most early typists learned a four-finger technique. In 1876, a law clerk named Frank E. McGurrin developed and taught himself “all-finger” touch typing, and six years later a Mrs. M. V. Longley began a Shorthand and Typewriting Institute to teach the “all-finger” method. In Cincinnati in 1888 a typing competition was held between McGurrin and Louis Traub (a skilled four-finger typist), which McGurrin easily won. After that, QWERTY became the standard keyboard layout and touch typing became the standard technique. Typewriter technology continued to improve. In 1896 Franz Wagner of the Underwood Company developed the “up strike” mechanism. The Blickensderfer Company developed the first electric typewriter in 1902 and the first portable typewriter in 1913. As technology improved there was less need for the QWERTY layout to slow typing. In 1932 August Dvorak, a professor at the University of Washington, developed what he claimed was the
ultimate keyboard layout; he called it the Dvorak Simplified Keyboard. Both keyboard layouts have been officially recognized by the American National Standards Institute (ANSI X4.22-1983). A comparison of the two keyboard layouts and the claimed performance differences for them when typing in English is shown in Figure 1. However, in spite of the claimed advantages, research in the mid-1990s failed to demonstrate either performance or postural benefits for the Dvorak layout, and QWERTY remains the de facto alphabetic keyboard layout in the United States. But many modern computer keyboards use a Dvorak layout for number and punctuation keys, and, in addition, incorporate numerous other key groupings—for example, a horizontal row of function keys is above the top row of the keyboard. This arrangement maps most closely to the horizontal menu arrangements of most software, and experiments show that the horizontal arrangement gives the fastest response times. Internet keyboards may also provide an additional row of dedicated buttons above the function keys.
FIGURE 1. Comparison of the QWERTY and Dvorak keyboard layouts specified in ANSI X4.22-1983. The figure shows the two key arrangements together with the following claimed performance differences when typing in English:

                              QWERTY       Dvorak
Top-row words                 52%          22%
Home-row words                32%          70%
Bottom-row words              16%          8%
# words typed on home row     120          3,000
Typing speed                  100%         115–120%
Typing accuracy               100%         115–120%
Finger travel (8 hours/day)   16 miles     1 mile
Learning time                 60 hours     < 20 hours
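The home-row percentages above can be illustrated with a short calculation. The following Python sketch is illustrative only; the sample sentence is arbitrary, and the shares it reports are not the ANSI figures, which were derived from large samples of English text.

QWERTY_HOME = set("asdfghjkl")
DVORAK_HOME = set("aoeuidhtns")

def home_row_share(text, home_row):
    # Fraction of alphabetic characters in the text that sit on the home row.
    letters = [c for c in text.lower() if c.isalpha()]
    return sum(c in home_row for c in letters) / len(letters)

sample = "the quick brown fox jumps over the lazy dog"
print("QWERTY home-row share:", round(home_row_share(sample, QWERTY_HOME), 2))
print("Dvorak home-row share:", round(home_row_share(sample, DVORAK_HOME), 2))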
A numeric keypad is usually positioned to the right of the keyboard, and keys are arranged in calculator layout (789, 456, 123, 0) rather than phone layout (123, 456, 789, 0), even though studies find that the calculator layout is slightly slower and less accurate. Keyboards also have keys for cursor control (up, down, left, right) and these are arranged either as a cross or an inverted T. Recommended modern keyboard design requirements are specified in the BSR/HFES 100 standard (2002) and are summarized in Table 1.

TABLE 1. Mandatory Keyboard Design Requirements Specified in BSR/HFES 100 (2002)

Keyboard Layout: Numeric keypads shall be provided when users’ primary task involves data entry. These keys shall be grouped together.
Cursor Control: If cursor keys are provided, they shall be arranged in a two-dimensional layout (as a cross or inverted-T).
Keyboard Height and Slope: The slope of conventional tabletop-mounted keyboards shall be between 0 and 15 degrees.
Key Spacing: Centerline distances between the adjacent keys within a functional group shall be between 18 and 19 mm horizontally and between 18 and 21 mm vertically.
Key Force: The force to activate the main alphabetic keys shall be between 0.25 and 1.5 N.
Key Displacement: Vertical displacements of the alphabetic keys shall be between 1.5 and 6.0 mm.
Keying Feedback: Actuation of any key shall be accompanied by tactile or auditory feedback, or both.
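As an illustration of how the numeric limits in Table 1 might be checked against a particular design, consider the following sketch. The dictionary keys and the example specification are invented for this illustration; only the numeric limits come from the standard as summarized above.

# Numeric limits taken from Table 1 (BSR/HFES 100, 2002).
LIMITS = {
    "slope_degrees": (0, 15),            # conventional tabletop-mounted keyboards
    "key_spacing_horizontal_mm": (18, 19),
    "key_spacing_vertical_mm": (18, 21),
    "key_force_newtons": (0.25, 1.5),    # main alphabetic keys
    "key_displacement_mm": (1.5, 6.0),   # vertical displacement of alphabetic keys
}

def check_keyboard(spec):
    # Return the list of requirements the specification fails to meet.
    failures = []
    for feature, (low, high) in LIMITS.items():
        value = spec.get(feature)
        if value is None or not (low <= value <= high):
            failures.append(feature)
    if not (spec.get("tactile_feedback") or spec.get("auditory_feedback")):
        failures.append("keying_feedback")
    return failures

example = {"slope_degrees": 10, "key_spacing_horizontal_mm": 18.5,
           "key_spacing_vertical_mm": 19, "key_force_newtons": 0.6,
           "key_displacement_mm": 3.5, "tactile_feedback": True}
print(check_keyboard(example))   # an empty list means every checked requirement passes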
Ergonomic Keyboards

Early in the twentieth century typists began reporting a variety of upper body musculoskeletal injuries which were attributed to the design of the keyboard. In 1915 Fritz Heidner was granted a U.S. patent for a series of split-keyboard designs. In Germany in 1926, E.A. Klockenberg also recommended a split-keyboard design to improve the bent position of the hands. However, little changed in the design of keyboards until the 1960s and 1970s when Karl H. E. Kroemer began experimental investigations of split designs for computer keyboards. Since then there have been numerous redesigns of the computer keyboard. Ergonomic keyboards use different physical designs to try to improve typing safety and performance. Some of the keyboards have been designed
specifically for people who have been injured or who have a physical disability that affects their typing ability, such as a missing hand. Others are designed for a more general user population and offer significant health and performance benefits to a wide range of people. The key to good ergonomics is always to match the design of the technology to the needs and characteristics of the user. Ergonomic keyboard designs focus on improving the posture or the position of the hands when typing, on decreasing the keying forces, and on reducing the amount of repetition. They tend to be more expensive than conventional keyboards, but most of them offer health benefits by reducing injury risk factors and some of the designs also improve productivity. The following seven alternative ergonomic keyboard designs are currently available: Fixed-angle split keyboards (for example, Microsoft natural—www.microsoft.com). These keyboard designs split the alphanumeric keys at a fixed angle and they slightly tent the keyboard. There is some research evidence of reduced discomfort because of reduced ulnar deviation (lateral bending of the hands). These designs work better for people with broader or larger frames and for pregnant women because they put the arms in a better position to reach around the front of the body. However, the designs
do not usually address the issue of wrist extension, even though the upward bending of the hands turns out to be a more important musculoskeletal injury risk factor than ulnar deviation. Hunt-and-peck typists will find split keyboards more difficult to use, and the keyboards are generally more expensive than conventional keyboards; they also tend to be larger and wider, which in some situations can put the mouse too far out to the side of the keyboard. The multitouch fixed-angle split keyboards (Touchstream—www.fingerworks.com) do not use conventional keys but have a touch-sensitive surface that allows users to key and mouse on the same physical area. This design also allows users to control many computer commands with simple finger gestures performed on the same physical area. It takes some time to learn but as users become proficient, the overall speed of their computer work performance can increase by over 80 percent. Adjustable-angle split keyboards (for example, Goldtouch, www.goldtouch.com). These keyboard designs allow users to change the split angle to suit their own needs. Often the split angle is linked to the degree of tenting of the keyboard as well. There is some research evidence of reduced discomfort with this kind of design because of reduced ulnar deviation, but these designs do not usually address wrist-extension issues. The fact that users have to decide on the split angle means that they may need some training, which suggests that some users might end up with a split angle that is inappropriate for them. There is also a multitouch adjustable-angle split keyboard (Touchstream LP—www.fingerworks.com). Split keyboards are always difficult for hunt-and-peck typists to use, and these designs are often fairly expensive. Completely split keyboards (for instance, Kinesis—www.kinesis.com). In these designs the left hand and right hand portions of the keyboard are completely split apart. In some designs the keys are presented in a scooped design that allows the hands to rest in a more neutral posture for typing. There is some research evidence of reduced discomfort because of reduced ulnar deviation and also reduced wrist extension. However, it takes time to learn to use a split keyboard, and research shows that initial performance can suffer a 50 percent slowing of
typing speed. Completely split keyboards are also hard for hunt-and-peck typists to use, and some of them are very expensive. Vertically split keyboards (for example, Safetype—www.safetype.com). The keyboard resembles an accordion and users type with their hands facing each other. This design works well to reduce ulnar deviation and wrist extension, but if the keyboard is too high the chest and shoulders can become fatigued. The design is nearly impossible for hunt-and-peck typists to use because the keys cannot be seen easily, and because it is a specialist keyboard it is expensive. Chordic keyboards (for instance, BAT—www.aboutonehandtyping.com). Chord keyboards have a smaller number of keys, and letters and digits are generated by combinations of keys in chords. One-handed and two-handed designs are available. Research shows that it takes about eighty hours to gain moderate proficiency using the chords that correspond to characters. And although the keyboards are more expensive than regular keyboards, they can be useful to some users, especially those with special needs, such as impaired vision or severely arthritic hands. Specialist keyboards (for instance, Datahand—www.datahand.com or Orbitouch—www.keybowl.com). Several different keyboard designs have been developed to assist users who have some physical limitation or who wish to type in a different way. The Datahand allows users to rest their hands on a series of switches that detect different directions of finger movements, and these generate the characters. The Orbitouch lets users rest their hands on two domed surfaces and then move these surfaces to generate the characters. Specialist keyboards often result in slower typing and learning to use them can take time, so they aren’t a good choice for most people. And like other alternative keyboard designs, they are also expensive. One-handed keyboards. Sometimes users have a physical limitation, such as a missing hand, or they perform work where one hand needs to key while the other does something else. Several alternative designs for one-handed keyboards are available. The Half-QWERTY (www.aboutonehandtyping.com) uses the same keys found on a regular keyboard, but each key functions in two modes, allowing the user
to generate all the characters of a regular keyboard in a smaller area. The Frogpad (www.frogpad.com) works in a similar way. One-handed chordic keyboards (for instance, Twiddler—www.handykey.com) and one-handed multitouch keyboards (like Mini—www.fingerworks.com) are also available. Conventional keyboards have also changed their design over the past twenty years: Keyboards are flatter, function keys are on a top row, key mechanisms have become “lighter,” requiring less force, and keyboards have a cursor key pad and a numeric key pad. These features were not always available on older keyboards. For the average user (average size and average skill), the modern conventional computer keyboard is a familiar and cost-effective design, and to date, no ergonomic design has gained widespread acceptance.
Thumb Keyboards

The development of PDAs (personal digital assistants) and wireless email products, such as the Blackberry, has resulted in the development of small thumb-operated keyboards. Some of these have a
QWERTY layout and some have an alphabetic layout. Although adequate for short text messaging and email, thumb typing is too slow for large documents and overuse injuries of the thumb (for instance, de Quervain’s tenosynovitis) can occur with intensive thumb keyboard use.

Alan Hedge

FURTHER READING

BSR/HFES 100. (2002). Human factors engineering of computer workstations. Santa Monica, CA: Human Factors and Ergonomics Society. Heidner, F. (1915). Type-writing machine. Letters Patent 1,138,474. United States Patent Office. Klockenberg, E. A. (1926). Rationalisierung der Schreibmaschine und ihrer Bedienung (Rationalization of typewriters and their operation). Berlin: Springer. Kroemer, K. H. E. (1972). Human engineering the keyboard. Human Factors, 14, 51–63. Kroemer, K. H. E. (2001). Keyboards and keying: An annotated bibliography of literature from 1878 to 1999. UAIS, 1, 99–160. Office machines and supplies—alphanumeric machines—alternate keyboard arrangement (revision and redesignation of ANSI X4.22-1983) (formerly ANSI X3.207-1991 (R1997)). Washington, DC: American National Standards Institute.
LANGUAGE GENERATION

The success of human-computer interaction depends on the ability of computers to produce natural language. Internally, computers represent information in formats they can easily manipulate, such as databases and transaction logs. In many cases, however, interpreting data in such formats requires a considerable amount of effort and expertise from the user. The methodology for translation of nonlinguistic data into human language is the focus of natural-language generation (NLG). This research topic draws on methodologies from linguistics, natural-language processing, and artificial intelligence, including user modeling, discourse theory, planning, lexical semantics, and syntax.
Since the 1960s, NLG technology has matured from experimental systems for story generation to well-established methodologies that are used in a wide range of current applications. For example, NLG systems are used to summarize the medical conditions of patients in emergency care units based on the patients’ records, describe objects displayed in a museum gallery based on facts from the museum’s knowledge base and user interests, answer questions about flight reservations presented in a database, generate explanatory captions for graphical presentations, and produce stock market reports from a set of stock quotes.
Methodology

A generation system creates a text from a semantic input that is given in some nonlinguistic form.
Examples of such input are stock transactions, numerical weather data, and scores of sport events. To transform a given nonlinguistic representation into a text, the system needs to address two issues: what to say (content planning) and how to say it (linguistic realization). The first issue operates at the macro level; here we want to specify what our text will be about, given a rich nonlinguistic representation of information. The second issue, on the micro level, deals with how to verbalize the selected information in natural language. Both issues raise multiple questions. At the macro level, the questions include: In which order should the selected topics be presented? What are the connections between different topics? How should these connections be described in the text? At the micro level, the questions include: Which verbalization is the most appropriate, given the unit of semantic information and the context in which this unit of information appears? What word ordering will produce a grammatical sentence?

A typical generation system has two levels. The macro level consists of content selection and content organization modules. The micro level consists of a sentence planner, a lexical chooser, and a surface realizer. To illustrate the functionality of each component, consider the task of automatically generating daily reports of stock market activity. Since typical financial reports include not only information about current transactions but also historic stock information, the input to the system consists of tables specifying stock performance in the past and all the transactions recorded for a given day. Given the large amount of information, the task of the content selection component is to identify which transactions are to be included in the generated text. For example, we may decide to describe a general trend in the stock market on a given day and name the stocks that fluctuated the most. Once the information is selected, the content organization component groups together relevant pieces of information and establishes a linear order among them. In our example, a target text may start with a discussion of the general trend, followed by a list of stocks that have risen or fallen the most. The next component of the generation system, the sentence planner, divides information into
sentence chunks and selects an appropriate syntactic construction for each sentence. For example, when describing the performance of fallen stocks, we may either generate a separate sentence for each stock in that category, or aggregate this information into one complex sentence. At this point, the generation system is still operating with a nonlinguistic representation of information. It is the job of the lexical chooser to translate this representation into natural language. The translation is performed using a system lexicon that specifies how each semantic concept can be verbalized and what the constraints on the different alternatives are. In the stock domain, the concept drop (for stocks that have fallen) can be verbalized as “plummeted” when the drop is more than ten points and as “dipped” when the drop is around five points. Once all the information units are translated into words, the system still has to select the appropriate word ordering and insert auxiliary words, such as determiners and prepositions. This last task is performed by the surface realizer, based on the rules of natural-language grammar.
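The division of labor just described can be sketched in a few lines of code. The Python fragment below is a deliberately simplified illustration for the stock-report example; the thresholds for “plummeted” and “dipped” follow the lexicon example above, while the stock data, the rise verbs, and all function names are invented for this sketch.

transactions = {"ACME": -12.4, "GLOBEX": -5.1, "INITECH": 2.0, "HOOLI": 15.3}

def select_content(data, threshold=5.0):
    # Content selection: keep only the stocks that moved the most.
    return {name: change for name, change in data.items() if abs(change) >= threshold}

def choose_word(change):
    # Lexical choice: verbalize the concepts "drop" and "rise" under the
    # constraints described above ("plummeted" versus "dipped").
    if change <= -10:
        return "plummeted"
    if change < 0:
        return "dipped"
    return "surged" if change >= 10 else "rose"

def realize(name, change):
    # Surface realization: assemble a simple grammatical sentence.
    return f"{name} {choose_word(change)} {abs(change):.1f} points."

report = " ".join(realize(n, c) for n, c in select_content(transactions).items())
print(report)

A real system would, of course, also aggregate related sentences and order them around a statement of the general trend, as the text describes.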
Implementation: Traditional Approaches

Traditionally, there have been two approaches to the implementation of generation systems: template generation and multilayered linguistic generation. Templates delineate output strings containing variables that can be instantiated with particular values. They are relatively easy to implement and are therefore commonly employed in applications in which only a few different types of sentences are being generated. However, template-based approaches are not effective in complex domains since they are not robust to new types of input and are difficult to maintain as systems expand. Linguistic-based generation is a preferred alternative for applications in which variability of the output and scalability is an issue. The following discussion applies to linguistic-based generation.
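To make the contrast concrete, here is a minimal illustration of the template approach described above. The template text and field names are invented for this sketch.

# A template is an output string containing variables that are
# instantiated with particular values at generation time.
TEMPLATE = "On {date}, the {index} {direction} by {points} points to close at {close}."

values = {"date": "3 May", "index": "composite index", "direction": "fell",
          "points": 12, "close": 1034}
print(TEMPLATE.format(**values))

Because every sentence pattern must be written out by hand, adding new kinds of input quickly multiplies the number of templates to maintain, which is the scalability problem noted above.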
Content Planning

As mentioned earlier, the task of the content planner is to select relevant material and to order it into a coherently flowing sequence. Content selection and content organization are performed in a single step by most generation systems. These tasks can be done at many levels of sophistication. One of the most simple approaches is to write a hard-coded text planner, which produces text with a standardized content and structure. This approach is particularly suitable for domains in which text variability is not an issue, such as many technical domains. Other systems employ artificial-intelligence techniques for planning, considering content selection and organization to be a multistep process aimed at reaching a specific communication goal. While this approach can yield flexible and powerful systems, in practice it is difficult to implement because of the amount of knowledge that must be encoded. The most common approach today makes use of a schema. A schema is a text-planning language that captures style-specific principles of text organization. It operates at the level of semantic messages and the discourse relations that hold among them. Typically, a schema is augmented with domain communication knowledge that instantiates it with semantic predicates specific to a domain. For instance, a schema of an encyclopedia entry may include the following information: (1) identification of an item as a member of some generic class, (2) description of an object’s function, attributes, and constituency, (3) analogies made to familiar objects, and (4) examples (McKeown 1985).

Linguistic Realization

The sentence planner must decide how to group semantic units into sentences and what syntactic mechanism should be used to implement the desired combinations. Although there are several obvious constraints on the aggregation process, such as the length of the resultant sentences, the number of potential aggregations is still vast. In most cases, human experts analyze types of aggregation that occur in a corpus and then encode corpus-specific rules based on their findings. At the next stage, lexical choice (choosing which words to use) is commonly implemented as a rewriting mechanism that translates domain concepts and their semantic relations into words and syntactic relations. The lexical chooser relies on a mapping
dictionary that lists possible words corresponding to elementary semantic concepts. Sample entries might be [Parent [sex:female]], with the mapping “mother, mom”; or [love x, y], with the possibilities “x loves y,” “x is in love with y.” Entries of the mapping dictionary can be augmented with information that encodes grammatical features of the word as well as constraints on its usage, including stylistic, contextual, and idiosyncratic constraints. Finally, the linguistic realizer generates sentences in a grammatical manner, taking care of agreement, morphology, word order, and other phenomena. The linguistic realizer is the most extensively studied component of generation. Different grammar theories have led to very different approaches to realization. Some of the grammars that have been used successfully in various NLG systems include systemic grammars, meaning-text grammars, and tree-adjoining grammars. Typically, NLG systems rely on one of several general-purpose realization engines, such as FUF/Surge, KPML, and RealPro.
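A mapping dictionary of the kind just described can be represented very simply. In the sketch below, the two entries echo the examples in the text, but the constraint format, the point thresholds, and the selection function are invented for this illustration.

# Each semantic concept maps to candidate words plus optional usage constraints.
LEXICON = {
    "parent[sex:female]": [{"word": "mother"}, {"word": "mom", "style": "informal"}],
    "drop": [{"word": "plummeted", "min_points": 10},
             {"word": "dipped", "max_points": 6}],
}

def choose(concept, *, points=None, style="neutral"):
    # Return the first candidate whose constraints are satisfied.
    for entry in LEXICON[concept]:
        if entry.get("style", style) != style:
            continue
        if points is not None:
            if "min_points" in entry and points < entry["min_points"]:
                continue
            if "max_points" in entry and points > entry["max_points"]:
                continue
        return entry["word"]
    return None

print(choose("drop", points=12))        # plummeted
print(choose("parent[sex:female]"))     # mother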
Implementation: Corpus-Based Approaches

Most of the modules in existing generation systems are domain and application specific. For instance, the guiding principle for selecting information in the stock market domain is unlikely to carry over to the weather domain. Content planners are, therefore, typically developed anew for each application; the same holds for the content organizer, sentence planner, and lexical chooser. Typically, human experts construct complex rule-based modules by analyzing large amounts of domain text. As a result, the development of a generation system takes significant time and human effort. In recent years, the focus of research in the generation community has shifted to data-driven approaches, in which generation systems learn necessary data from samples of texts. Data-driven approaches are particularly effective for tasks in which the selection between alternative outputs involves a variety of constraints and therefore is hard to specify manually. Surface realization is a case in point. While some choices in the surface realization component are
uniquely determined by grammar rules (for instance, in English, the subject is always placed before the verb), the realization of other grammatical constructs depends on semantic and discourse constraints, and in some cases this selection is idiosyncratic. Instead of considering the complex interaction between these features, we can rule out implausible candidates by considering corpus statistics. For instance, it is unlikely to find the noun phrase “a money” in any corpus of well-formed English sentences. Based on this intuition, Kevin Knight and Vasileios Hatzivassiloglou developed the first statistical realizer. Their realizer uses a few syntactic rules to generate a lattice of possible verbalizations of a given input and then selects the optimal path in this lattice based on language model scores. Today, statistical methods are applied to other modules in the generation process. A commonly used resource for such learning is a collection of texts annotated with semantic information. For instance, Pablo Duboue and Kathleen McKeown (2001) learn domain ordering constraints by analyzing patterns in the distribution of semantic concepts in the corresponding text. Other researchers automatically induce lexicons for generation systems by aligning semantic concepts with matching phrases (Barzilay and Lee; Reiter, Sripada and Robertson).
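The idea of ruling out implausible candidates with corpus statistics can be illustrated with a toy model. The sketch below is not the Knight and Hatzivassiloglou realizer; it merely scores alternative verbalizations with hypothetical bigram counts, so that an ill-formed phrase such as “a money” loses to a well-formed one.

# Hypothetical bigram counts standing in for statistics from a large corpus.
BIGRAM_COUNTS = {("the", "money"): 120, ("a", "money"): 0, ("money", "was"): 90,
                 ("was", "lost"): 60, ("the", "cash"): 40, ("cash", "was"): 25}

def score(sentence):
    # Product of (count + 1) over the sentence's bigrams; higher is more fluent.
    words = sentence.lower().split()
    result = 1.0
    for pair in zip(words, words[1:]):
        result *= BIGRAM_COUNTS.get(pair, 0) + 1
    return result

candidates = ["a money was lost", "the money was lost", "the cash was lost"]
print(max(candidates, key=score))   # "the money was lost"

A full statistical realizer would generate such candidates from syntactic rules as a lattice and search it for the best-scoring path, rather than enumerating a handful of strings as done here.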
Future Directions

It is safe to say that at the present time one can build a natural-language generation system for a specific application. Future research incorporating machine-learning techniques may help speed up the development and increase the coverage of NLG systems. Most of the current methods of NLG require a manually annotated corpus, which is not available in many domains. Further research in the direction of weakly supervised and unsupervised machine-learning methods is required. Most current research in NLG focuses on the task of text production from semantic input. However, in many applications there is a need for text-to-text generation, that is, for the rewriting of input that is already in a textual form. Examples of such applications include summarization, text simplification, and information fusion from multiple texts. Lack of semantic representation in the input and the
domain-independent character of these applications preclude the use of techniques developed for concept-to-text applications. Consequently, text-to-text generation systems must rely solely on input texts and knowledge that can be automatically derived from those texts. Natural-language generation systems are typically oriented towards production of written language. Spoken responses, however, have different characteristics from written ones. For example, long, complex sentences are usually inappropriate in speech. Further research is needed to incorporate the results of work in linguistics on spoken language constraints.

Regina Barzilay

See also Dialog Systems; Machine Translation; Natural-Language Processing

FURTHER READING

Barzilay, R., & Lee, L. (2002). Bootstrapping lexical choice via multiple-sequence alignment. Proceedings of Empirical Methods in Natural Language Processing, 164–171. Biber, D. (1988). Variation across speech and writing. Cambridge, UK: Cambridge University Press. Duboue, P., & McKeown, K. R. (2001). Empirically estimating order constraints for content planning in generation. In Proceedings of the ACL-EACL 2001, July 6–11, Toulouse, France. Hovy, E. H. (1988). Generating natural language under pragmatic constraints. Hillsdale, NJ: Lawrence Erlbaum. ILEX. (n.d.). Intelligent labeling explorer: A project at the University of Edinburgh into dynamic hypertext generation. Retrieved March 23, 2004, from http://www.hcrc.ed.ac.uk/ilex/ Joshi, A. K. (1987). The relevance of tree adjoining grammar to generation. In G. Kempen (Ed.), Natural language generation: Recent advances in artificial intelligence, psychology, and linguistics (pp. 233–252). Dordrecht, Netherlands: Kluwer Academic Publishers. Kittredge, R., Korelsky, T., & Rambow, O. (1991). On the need for domain communication language. Computational Intelligence, 7(4), 305–314. Klein, S. (1965). Automatic paraphrasing in essay format. Mechanical Translation, 8(3), 68–83. Knight, K., & Hatzivassiloglou, V. (1995). Two-level, many paths generation. In Proceedings of the 33rd annual meeting of the Association for Computational Linguistics (pp. 252–260). San Francisco: Morgan Kaufmann. Kukich, K. (1983). Knowledge-based report generations: A technique for automatically generating natural language reports from databases. In Sixth ACM SIGIR Conference (pp. 246–250). New York: ACM Press. Mann, W. C., & Matthiessen, C. M. I. M. (1985). Demonstration of the Nigel text generation computer program. In J. D. Benson &
W. S. Greaves (Eds.), Systemic Perspectives on Discourse, 1, 50–83. Norwood, NJ: Ablex. McKeown, K. R. (1985). Text generation: Using discourse strategies and focus constraints to generate natural language text. Cambridge, UK: Cambridge University Press. McKeown, K. Jordan, D., Feiner, S., Shaw, J., Chen, E. Ahmad, S., et al. (2002). A study of communication in the cardiac surgery intensive care unit and its implications for automated briefing. Retrieved March 23, 2004, from http://www1.cs.columbia.edu/~shaw/papers/ amia00.pdf. Mel’cuk, I. A., & Polguere, A. (1987). A formal lexicon in the meaning-text theory (or how to do lexica with words). Computational Linguistics, 13(3–4), 261–275. Mittal, V., Moore, J., Carenini, G., & Roth, S. F. (1998). Describing complex charts in natural language: A caption generation system. Computational Linguistics, 24(3), 431–467. Reiter, E., Sripada, S., & Robertson, R. (2003). Acquiring correct knowledge for natural language generation. Journal of Artificial Intelligence Research, 18, 491–516. Seneff, S., & Polifroni, J. (2000). Dialogue management in the Mercury flight reservation system. Paper presented at the Satellite Workshop, ANLP-NAACL 2000, Seattle, WA.
LASER PRINTER

Ever since the German inventor Johannes Gutenberg invented movable-type printing in 1436, printing has become an increasingly valuable technology. The challenge in printing has always been how to format and assemble the elements of the page to be printed in a form that allows rapid replication of large numbers of books, papers, documents, and so forth. Three basic page elements exist: text, graphics, and pictures. Many page creation devices can produce one or perhaps two of these elements but not all three. For a new technology to be truly useful, it had to be able to produce all three elements in high quality and in a rapid manner.
Invention of the Laser Printer

In October 1938 the U.S. inventor Chester Carlson invented the process known as “xerography.” This process was the driving force behind the creation and growth of the copier industry by Xerox Corporation. The word xerography was coined from two Greek words, xeros and graphein, which mean “dry writing.” The xerographic process, also known as “electrophotography,” was capable of reproducing printed
materials with high fidelity. Today copiers can reproduce color or black and white materials with high speed and high quality. This copying process, however, was a reproduction process only and not one that could readily create the original material. Much of the creative process was done by conventional page assembly methods, as had been done for years. In 1967–1968 Gary Starkweather, an optical engineer at Xerox, became interested in this creative process and thought that a combination of optics, electronics, and the xerographic process might solve the creative problem. One can make up pages using rectangular or circular points, but this often requires using multiple-sized shapes, which is often a slow process. Starkweather believed that the ideal approach would be to create page images using points or “zero dimensional” objects. Additionally, if one could create the pages fast enough, such a process would permit not only creation of the original or master image but also the copies. Books and other types of documents, for example, could be printed as needed and with the type sizes required at the time of need. Two critical problems had to be solved, however. The first was how to make an imaging system that generates the points precisely and at the right positions and how to determine when the spot should and should not be generated; the second was designing a digital system that could generate the data stream with which to drive the imaging system. For example, for white areas of the page, no points should be printed, and for black areas points should be printed. Some process had to provide the correct signals to the spot generator and at sufficient speeds. Three critical technologies were becoming available during this time frame. The first was the digital computer. This technology was not as yet a personal technology, but the start of the personal technology was there. The second technology was the integrated circuit, which started with the U.S. scientist Jack Kilby in 1958. The third technology was the laser, invented in 1961 by Arthur Schawlow and Charles Townes. The laser was critical because without the high brightness capability of the laser one could not expose the required points of light fast enough. The computer and integrated circuit would eventually combine to permit computing the image at a fast enough rate.
Why was such speed needed? The vision for the laser printer required producing pages at a rate of about one page per second. The number of points required to produce a reasonable page was about ninety thousand per 6.4 square centimeters minimum. If the standard page to be printed had a format of 20 by 25 centimeters, then at least 80 × 90,000, or 7.2 million, points of light would have to be computed and printed in one second. Would this be possible? At the time when Starkweather was working on the printer technology, other scientists and engineers were assessing computer technology. Xerox in 1970 established a research center in Palo Alto, California, that became famous as “Xerox PARC.” Starkweather transferred to Xerox PARC in 1971 and combined his efforts with those of Charles Thacker, Butler Lampson, and others who were working on a personal computer that became known as the “Alto.” In order for the laser printer to generate points at a rate of at least 7 million per second, the laser beam had to be focused and scanned across the surface of a sensitive photoconductor material, as used in Xerox copiers. A polygonal mirror was used and combined with a novel optical system to generate the large number of points precisely and reliably. The first laser printer used at Xerox PARC was combined with the Alto computer and some special interface electronics to produce a printer that generated 15 million points per second. Gradual refinement of this technology resulted in the world’s first commercial laser printer in 1977, known as the “Xerox 9700.” This printer generated ninety thousand image points per 6.4 square centimeters and produced two pages per second.
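The arithmetic behind the 7.2 million figure is simple to check. The short sketch below reproduces the calculation in the paragraph above; treating 6.4 square centimeters as roughly one square inch, and the 20-by-25-centimeter page as roughly 8 by 10 inches, are assumptions of this example.

points_per_square_inch = 90_000     # about 90,000 points per 6.4 cm^2 (roughly one square inch)
page_area_square_inches = 8 * 10    # a 20 x 25 cm page is about 8 x 10 inches
pages_per_second = 1

points_per_second = points_per_square_inch * page_area_square_inches * pages_per_second
print(f"{points_per_second:,} points per second")   # 7,200,000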
Technical Principles

A Hewlett Packard LaserJet 4050T. (photograph)

A laser printer is a rather eclectic assemblage of technologies. First, one needs a light source with which to illuminate the photoconductor of an electrophotographic machine such that sufficient energy is deposited in the available time. This time can be as brief as a few billionths of a second. Additionally, an optical system is needed to focus the laser beam to a spot size of a couple of thousandths of an inch across the entire image, which might be 28 by 43 centimeters or even larger. In order to image the beam across this size at sufficient speed, one must have a beam deflector. Since the invention of the laser printer, the principal deflector technology has been the polygonal scanner. This can be thought of as a disc with flat mirrors on its periphery. As the disc spins, each mirror in its turn intercepts the light beam and scans it across the region to be imaged. To understand the extent of the task, if one is to image one page 28 centimeters long per second at six hundred scans per inch, the scanner must deliver about 6,600 scans per second. As the scans occur, a small optical detector synchronizes the data with the beam position. For a scan of 28 centimeters to occur at a page rate of one page per second, the optical beam is moving at about 1,300 meters per second or about Mach 4. The laser, of course, must be turned on and off to correspond to the data to be printed, and some lasers, such as gas lasers, require a beam modulator. Smaller laser printers such as those used in personal applications utilize solid state lasers that can be directly modulated and do not require external modulators. Although such systems sound complex, research and engineering work has rendered such scan subsystems reliable today. Subsequent refinement of laser technology pioneered at Xerox PARC resulted in personal laser printers at lower speeds but at lower costs as well. The first personal laser printer was known as the “Hewlett-Packard LaserJet” and used the basic
design pioneered in Starkweather’s earlier work. Later, Hewlett-Packard, Xerox, Canon, Apple Computer, and others developed laser printers with higher page quality. Today one can purchase laser printers that image in color and print at page rates ranging from a minimum of four pages per minute at low cost to as high as 180 pages per minute. Recently Hewlett-Packard announced that it had shipped its 30 millionth laser printer. Today the great bulk of electronic printing is done with laser printers. The newest applications of laser printers involve what is known as “demand” or “short-run” printing, by which several thousand documents can be generated as needed. Even books are now beginning to be demand printed, thus fulfilling Starkweather and Xerox PARC’s vision of what the personal computer combined with laser printer could become.

Gary K. Starkweather

See also Alto; Fonts

FURTHER READING

Elzinga, C. D., Hallmark, T. M., Mattern Jr., R. H., & Woodward, J. M. (1981). Laser electrophotographic printing technology. IBM Journal of Research and Development, 25(5), 767–773. Fleischer, J. M., Latta, M. R., & Rabedeau, M. E. (1977). Laser-optical system of the IBM 3800 printer. IBM Journal of Research and Development, 21, 479. Laser printing. (1979). SPIE Proceedings, 169, 1–128. Starkweather, G. K. (1980). High speed laser printing systems. Laser Applications, 4, 125–189. Starkweather, G. K. (1985). A high resolution laser printer. Journal of Imaging Technology, 11(6), 300–305. Urbach, J. C., Fisli, T. S., & Starkweather, G. (1982). Laser scanning for electronic printing. Proceedings of the IEEE, 70(6).
LAW AND HCI

Knowledge is power. However, who owns knowledge? Knowledge is something that can be sold while simultaneously kept and whose value can either appreciate or vanish through time. With the Internet, geographic constraints on communication and computation, and the distinctions between the two, blurred. Online,
on-demand, real-time, electronic transmission of information and knowledge became the standard mode of correspondence and its major currency, with time a premium and speed a commodity. Real estate became invisible and the value of property assets in the global marketplace determined by whether domain name ownership was in a dot.com, dot.net, or dot.org. Privacy was stripped, security breached, crime pervasive, theft untraceable, identity transparent, and piracy commonplace in the concealed world of cyberspace. Success was ascertained by the speed of deliverables, and power became programmable. Robust communication now could be conducted with and through computers, robots, information systems, and the Internet, not just with people. As David Johnson and David Post assert,“The rise of an electronic medium that disregards geographical boundaries throws the law into disarray by creating entirely new phenomena that need to become the subject of clear legal rules but that cannot be governed, satisfactorily, by any current territorially based sovereign” (Johnson and Post 1996, 1367, 1375). How could virtual reality, called “cyberspace” (Gibson 1984, 51), be legally harnessed and this world of downloads, networks, interfaces, and avatars understood by the courts? Over which space would jurisdiction attach—cyberspace or real space—or multiple non-coordinating jurisdictions? In particular, how could the law keep pace with the rapid proliferation of ubiquitous, high-bandwidth, embedded, miniaturized, portable, and invisible dissolution of high-functionality systems in the environment and its accompanying vulnerabilities? Machines had become complex interacting systems, sometimes having a mind of their own and at times failing as a result of bugs. Now computers were interacting to form networks. Just as human-computer interaction (HCI) was spawned from the Information Age, so, too, did new legal practice areas evolve—Internet law as well as unprecedented issues such as the copyright of a computer program and the patent of a click.
A Brief History of Law

In the United States rules are established and enforced through one of three legal systems found at
the federal, state, and local levels. The Constitution—together with the laws of Congress, decisions by the federal courts, executive orders of the president, and regulations adopted by the executive agencies—constitutes the federal level. In addition, each of the fifty states has its own constitution. On that level laws can be enacted by state legislatures, decisions held by courts, and regulations promulgated by their respective agencies. Laws created by either federal or state court are usually based on precedent—or the past, which serves as an example of prior decisions made on the same or similar issues. That fact is why the United States is characterized as a common law country, with the theory that through time laws will adapt to new circumstances. New issues without precedent to guide the courts create what is commonly known as a “case of first impression.” The Information Age raised new concerns that continue to perpetuate at all three levels of the legal system. HCI will up the ante even more as the integration of human with machine becomes ever more pervasive and invisible and our virtual realities are gradually augmented as thinking, feeling, and intelligent systems and environments become commonplace.
Personal Computing

Considered a revolutionary device at the time, the personal computer (PC) began to gain widespread acceptance by consumers and businesses alike in 1981. One click was about to change our lives forever. The word mouse would take on an entirely new meaning. The word Google would enter our lexicon to signify a new research method—that of a search engine. Internet service providers (ISPs) would become commonplace as they granted us entrance beyond the physical world and permission into a global cyberspace. However, the PC’s magnitude, scope, social impact, and legal implications were yet unrealized. Our world was about to become electronically wired, networked, sensored, interfaced, and imaged. Signatures could be obtained electronically. Auctions, gambling, sweepstakes, promotions, and games could be played online. Contracts, negotiations, and disputes could be altered in an instant. Clickwraps and shrinkwraps
(described below) were recognized as new forms of agreements. Pornography was freely downloaded. Music was easily swapped. Proprietary writings were infringed. Distance learning would enter the educational fray, with the extensible enterprise becoming the business norm. Snail mail would be the last resort used to reach out and touch someone. Change would be the only constant. The letter e would enter the mainstream—e-commerce, e-mail, e-sign, e-governance, e-discovery, e-courts. The legal system would be challenged to create new practice areas while trying to understand and interpret evolving communication standards and protocols.
Privacy

One contentious issue surrounding the Internet is privacy. Although advocacy groups allege that a right to privacy exists, legally it does not. The legal concept of the right to privacy can first be found in an 1890 Harvard Law Review article entitled “The Right to Privacy,” written by Samuel Warren and Louis Brandeis when they were law firm partners. Warren and Brandeis claimed that the right to privacy already existed in the common law and gave each person the choice to share or not to share information about his or her private life. Their intent was merely to establish the right to privacy as a legal protection in their day. Neither did either man coin the phrase “the right of the individual to be let alone,” as found in U.S. Supreme Court Justice Brandeis’s dissent in Olmstead v. United States (1928), which is often quoted by privacy champions and is the first case in which the U.S. Supreme Court considered the constitutionality of electronic surveillance. Warren and Brandeis in their 1890 article interpreted the Fifth Amendment to the United States Constitution: “No person shall . . . be deprived of life, liberty, or property, without due process of law . . .” to read that a person has an inherent right to be let alone and to privacy. Their interpretation was their legal theory and their view of a more general right to enjoy life. Even so, with the onset of the Internet several well-recognized organizations were formed to assert a person’s rights within a network of global communication: The Center for Democracy and Technology (CDT) was established to promote democratic values
and constitutional liberties in the digital age; the Electronic Frontier Foundation (EFF) was established to defend the right to think, speak, and share ideas, thoughts, and needs using new technologies, such as the Internet and the World Wide Web; and the Electronic Privacy Information Center (EPIC), a public interest research center, was established to focus on emerging civil liberties issues and to protect privacy, the First Amendment, and constitutional values. Just as no right to privacy exists, no privacy policy is required to be posted on a website—considered to be a form of online advertising. However, should a privacy policy be displayed on an personal website or a business website, it then may be subject to civil liability or criminal sanctions should the owner not abide by its own policies. The U.S. Federal Trade Commission (FTC) is charged with guarding against unfairness and deception (Section 5 of the Federal Trade Commission Act, 15 U.S.C. §§ 41-58, as amended) by enforcing privacy policies about how personal information is collected, used, shared, and secured. In its 1998 report, Privacy Online: A Report to Congress, the FTC described the fair information practice principles of notice, choice, access, and security in addition to enforcement—to provide sanctions for noncompliance—as critical components for online privacy protection. Today the FTC plays a central role in implementing rules and safeguarding personal information under the Gramm-Leach-Bliley Act (GLBA), the Children’s Online Privacy Protection Act (COPPA), and the Fair and Accurate Credit Transaction Act (FACTA). The Gramm-Leach-Bliley Act, also known as the “Financial Modernization Act of 1999,” was enacted to protect personal information held by a financial institution. The act applies to banks; securities firms; insurance companies; consumer loan lenders, brokers, and servicing entities; companies preparing individual tax returns or providing financial advice, credit counseling, or residential real estate settlement services; debt collectors; and enterprises transferring or safeguarding money. It requires these institutions to provide privacy notices with an opt-out provision to their customers. If the opt-out provision is chosen by a customer, the institution is prohibited from
sharing the customer’s personal information with third parties. The privacy requirements of the GLBA are divided into three principal parts: the Financial Privacy Rule, the Safeguards Rule, and pretexting provisions. Eight federal agencies, together with the states, have authority to administer and enforce the Financial Privacy Rule and the Safeguards Rule, which apply to all financial institutions. Although the Financial Privacy Rule governs personal financial information collected and disclosed by financial institutions, it also applies to non-financial companies that may receive such information. The Safeguards Rule requires financial institutions that collect customer information, as well as those that receive it from other financial institutions, to design, implement, and maintain safeguards to protect that information. The pretexting provisions protect consumers against companies that have obtained personal information under false pretenses such as calling a bank pretending to be a customer, also known as “pretexting.” The Children’s Online Privacy Protection Act is designed to offer parents control over information gathered online and provided by their children and the subsequent use of that information. COPPA applies to commercial websites that collect personal information from children under the age of thirteen, requiring that the websites follow several rules to safeguard a child’s privacy while obtaining the parents’ consent before collecting such personally identifiable information. Any website directed at children under the age of thirteen must comply with COPPA. The Fair and Accurate Credit Transaction Act, signed into law by President George W. Bush in December 2003, amends the Fair Credit Reporting Act (FCRA) by requiring the nationwide consumer reporting agencies (CRAs) to provide a yearly credit report at no cost to consumers. FACTA prohibits a CRA from circumventing such a requirement by clearly illustrating what constitutes circumvention.
Spam

In addition to enforcing privacy policies, the FTC enforces the Controlling the Assault of Non-Solicited Pornography and Marketing Act of 2003 (CAN-SPAM
Act) to combat unsolicited commercial e-mail advertising, fraudulent and deceptive chain letters, and pyramid and other get-rich-quick schemes. The CAN-SPAM Act additionally includes a protection against unmarked sexually oriented or pornographic material. Signed into law on 16 December 2003 and effective 1 January 2004, the CAN-SPAM Act, an optout law, impacts all U.S. online businesses marketing their services or products through e-mail transmissions, defining a commercial electronic mail message as one whose primary purpose is the commercial advertisement or promotion of a commercial product or service, including content on an Internet website operated for a commercial purpose. The CAN-SPAM Act governs nearly any business e-mail, including electronically submitted newsletters and stand-alone promotional e-mails. It prohibits fraudulent or deceptive subject lines, headers, or returned address, requires that e-mail advertisers identify their messages as advertisements or solicitations in a clear and conspicuous manner, and requires that a postal mailing address be included in the e-mail message. These requirements apply not only to spammers, but also to those people or businesses that may procure spammers’ services. Noncompliance with the CAN-SPAM Act could result in civil enforcement by the FTC or state attorneys general, resulting in both criminal sanctions and civil penalties. ISPs may bring civil lawsuits against violators who adversely affect those providers. The FTC now is considering the establishment of a national “Do not spam” list similar to the “Do not call” registry restricting telemarketing phone calls.
Health Information Privacy

On 14 April 2003, the U.S. Department of Health and Human Services (HHS) mandated compliance with the Privacy Rule of the Health Insurance Portability and Accountability Act of 1996 (HIPAA), Public Law 104-191. HIPAA amends the Internal Revenue Code of 1986 “to improve portability and continuity of health insurance coverage in the group and individual markets, to combat waste, fraud and abuse in health insurance and healthcare delivery, to promote the use of medical accounts, to improve
access to long-term care services and coverage, to simplify the administration of health insurance, and for other purposes.” HIPAA establishes a framework for the standardization of electronic data interchange (EDI) in health care, in particular, protections for the privacy and security of individually identifiable health information (IIHI); the Office for Civil Rights is responsible for its enforcement. Health care plans, providers, and clearinghouses were required to comply with the HIPAA Security Rule by 2005. With HIPAA the federal government introduced a complex regulatory scheme with broad implications for health plan administration and health plan sponsors. It subjected not only health care entities, but also health care providers that conducted certain electronic transactions, along with health care clearinghouses, group health plans, group plans, and plan sponsors—in other words, all employers who provide group health plans for their employees—to the Privacy Rule. It forbade an employer from obtaining IIHI from its health plan unit and using it to decide work assignments, promotions, firings or layoffs, employee discipline, or any other employment-related issue. To be HIPAA compliant, an employer has to provide notice of its privacy policies and practices, together with confirmation of receipt of such notice; designate a privacy officer; train personnel handling IIHI in privacy and security compliance; have a documented policy and procedure for privacy and security violations; have in place mechanisms for sanctioning employees who violate the privacy and security policies; allow individuals the right to access, amend, and receive accountings of their IIHI; establish procedures for mitigating harmful effects of improper uses or disclosures of IIHI; and have whistleblower provisions in place so that people who exercise their rights under the Privacy Rule do not face retaliation. In addition, an employer is now obligated to establish firewalls to ensure that IIHI handled by its group health plan or plan sponsor is segregated from the rest of the employer’s operations. Pursuant to the Privacy Rule, a person, entity, or third-party administrator involved in any activity involving the use or disclosure of IIHI must now sign a business associate agreement, thereby adding another layer to the already extensive requirements of the Privacy Rule.
Radio Frequency Identification Radio frequency identification (RFID) tags are changing the concept of HCI by changing how computing works, particularly in data collection and inventory management. RFID devices, which combine a bar-code-like identifier with smart chips and wireless capabilities, can track, monitor, search, and scan people continuously without their knowledge, as if people were wearing a sign on their back that flashes: “I was here, thought about this, and purchased that.” Anonymity disappears. In addition to privacy concerns, RFID opens a new target area for hackers attempting to penetrate security. As of 2005, the U.S. Department of Defense (DoD) is requiring its suppliers to begin using RFID devices, and the Food and Drug Administration (FDA) is encouraging their adoption among drug wholesalers, manufacturers, and retailers. The challenge in implementing RFID tags will be coordinating disparate databases into a synchronized infrastructure to manage the data. Handheld readers that collect the data these tags generate are already becoming a reality. Although the technology dates as far back as World War II, today’s RFID applications are expected to change how retail business is conducted, resolving problems such as shoplifting, inventory shortages, and logistical errors while reducing manual labor, inventory checks, and the scanning of bar codes.
Internet Telephony The Internet enables the transmission of telephone calls using the Internet protocol (IP), the same protocol that sends data from one computer to another. Placing a telephone call using voice over IP (VoIP) requires a network connection and a PC with a speaker and a microphone; in some cases, software may also be required. VoIP, through fiber-optic networks and broadband connections, will have the capability to tie together voice with e-mail, instant messaging, videoconferencing, and caller ID as well as reduce the cost of long-distance and international
telephone calls. One legal concern is how VoIP should be regulated and how global laws should be harmonized. The U.S. Federal Communications Commission (FCC) is investigating the regulatory status of VoIP as communication services migrate to Internet-based platforms; the result of the investigation will have a direct impact on taxation. However, because the data bits are encrypted and no standardized method exists for distinguishing voice calls from the terabits (one trillion bits) of other data on the Internet, these technical characteristics of VoIP limit law enforcement’s ability to wiretap conversations and to accurately locate 911 calls. The FCC is also investigating whether Internet telephone providers will need to rewire their networks to government specifications in order to provide law enforcement with guaranteed access for wiretaps. The outcome will bear on whether VoIP is legally treated as a phone service. The 1994 Communications Assistance for Law Enforcement Act (CALEA) will need to be revisited in order to address the law enforcement and national security issues raised by these applications.
E-commerce Jurisdiction The global information highway has no rules of the road that apply to all participants. International boundaries are nonexistent to those people who conduct business from and through a website on the Internet. Local regulations have international ramifications. In such an ethereal realm, the effects of online conduct may be felt at the other end of the world. By establishing an online presence, a business may be subjected to the laws and courts of jurisdictions outside the location of its operations. The basis for such jurisdiction may be either subject matter or personal. Whereas “subject matter jurisdiction” refers to the competence of a particular court to hear a particular issue, personal jurisdiction determines whether a defendant can be brought into the court that claims to have actual subject matter jurisdiction. One area of subject matter jurisdiction involving e-commerce is the illegal distribution of copyrighted materials, giving the U.S. federal courts potential
rights to hear infringement cases involving violations of the U.S. Copyright Act. Cyberspace transactions often involve matters of personal jurisdiction. For website owners, however, the issue of most concern is the scope of specific personal jurisdiction—whether and to what extent the courts of a state or country other than the state of the website owner’s incorporation or principal office may assert jurisdiction simply because the residents of that state or country can access that owner’s website. One such issue gave rise to a closely watched criminal prosecution before a court in Paris, France. In 2000, France’s Union of Jewish Students and the International Anti-Racism and Anti-Semitism League sued Yahoo for selling Nazi paraphernalia on its auction pages. French criminal statutes prohibit the public display of Nazi-related uniforms, insignia, or emblems, and so the French court issued an order directing Yahoo to deny access to the Nazi artifacts by Internet users in France, demanding that it re-engineer its United States content servers to recognize French Internet protocol addresses or otherwise face severe penalties. Although the French criminal court ultimately dismissed all charges, a conviction would have opened up Internet providers to possible prosecution anywhere in the world even if their activities were legal in their home base. Because laws are not harmonized, civil, criminal, and regulatory jurisdictions over cyberspace overlap. Laws do not default from one country to the other, causing jurisdictional conflicts and potential global risks, in particular, transborder cybercrimes such as unauthorized system intrusions, online fraud, intellectual property and identity theft, cyberterrorism, stalking, manipulation of data, and economic espionage.
Sarbanes-Oxley Act of 2002 E-mail can be deleted, but it never truly ceases to exist. Once posted, electronic content is effectively permanent: it remains in cyberspace and can be retrieved by employers or prosecutors at any time in the future. Your mouse footprints are dropped electronically with every website visit and purchase.
As a result of corporate ethical failures highlighted by the scandals involving the energy company Enron and the accounting company Arthur Andersen, President George W. Bush signed into law the Sarbanes-Oxley Act of 2002 (SOX), affecting corporate governance, internal controls, and disclosure obligations for publicly traded companies. SOX created new crimes, punishable by imprisonment of up to twenty years, for destroying, altering, or tampering with records with the intent to influence, obstruct, or impede a federal investigation, bankruptcy case, or official proceeding. As a result of SOX and HCI, computer forensics developed as a field devoted to analyzing a company’s computer servers to discover evidence of wrongdoing. The Arthur Andersen company was convicted because a jury found that it had destroyed documents after it had become aware of a Securities and Exchange Commission investigation of its client Enron. Document retention and records management were now of utmost importance. Courts were requiring companies to produce electronic documents in litigation discovery. Which party would bear the cost of retrieving electronic documents, and the risks associated with spoliation of evidence and failure to preserve documents, was determined by the courts, as evidenced in Zubulake v. UBS Warburg LLC (2003), a suit over gender discrimination and illegal retaliation.
Computer Contracts The anticipatory aspect of the law is best seen in the drafting of contracts. In negotiating a contract, lawyers attempt to foresee the future by predicting what may happen between the parties and to provide for contingencies by stipulating remedies to protect their client. Unlike most contracts, by which services are provided or title to a product is sold, title to software remains with the vendor, who grants a license for its use. A license allows someone other than the owner the right to use the software in limited ways, through the conveyance of a software license agreement.
If the software is prepackaged for mass marketing and bought off the shelf, then an agreement is included with the consumer’s purchase. Merely opening the box containing the software, or using it, constitutes assent to the terms and conditions of the agreement; no signature is necessary between the parties. These types of agreements are called “shrinkwrap” or “self-executing” licenses—the software and the agreement are in the same box, and the terms and conditions are nonnegotiable. The other type of software license agreement is a clickwrap, or point-and-click, agreement. When a person visits a website, before downloading a document or a particular piece of software, the website often requires the person to accept or decline the terms and conditions of the agreement. In order to have a valid agreement, the person must give conspicuous assent to the terms; otherwise no rights will be licensed.
Intellectual Property In the area of HCI, and clearly in all science and technology innovation, intellectual property plays a key role, defining not only what is owned, but also what can be owned. As a legal practice area, intellectual property covers patent law, copyright and trademark law, and trade secrets. Patent law is a specialized field within intellectual property; in order to be admitted to practice patent law before the U.S. Patent and Trademark Office, an attorney must have a science or engineering background and fulfill additional requirements. The development of information technology continues to have an enormous impact on intellectual property law and rights. One key result of the impact of HCI on intellectual property was the enactment of the federal Digital Millennium Copyright Act (DMCA) in 1998. The DMCA was enacted as an attempt to begin updating national laws for the digital age. Its goals were to protect intellectual property rights and promote growth and development of electronic commerce. The DMCA is used to stop the circumvention of technological protection measures, imposing hefty civil and criminal liability on those who bypass them. Many contentious cases citing the DMCA have appeared and continue to appear before the courts.
The Future The invention of computers took engineering complexity into a whole new realm. The complexity is being driven by the technology but also, even more importantly, by the new ways people want to use technology. Computer science will borrow from biology. Within the next fifteen years microprocessors may become obsolete, making room for molecular electronics (nanocomputing), redefining what is meant by a computer and reducing it to the size of a blood cell. Equipped with nanotube transistors—ten thousand times thinner than a human hair—computers may one day be able to mimic the cell’s ability to self-replicate and outperform the most advanced models of silicon transistors, increasing processing capabilities multifold.
Sonia E. Miller
See also Political Science and HCI; Privacy
FURTHER READING Agre, P. E., & Rotenberg, M. (1997). Technology and privacy: The new landscape. Cambridge, MA: MIT Press. Auletta, K. (1995). The highwaymen: Warriors of the information superhighway. New York: Harcourt Brace. Ballon, I. C. (2001). E-commerce and Internet law: Treatise with forms, 1, 2, & 3. Little Falls, NJ: Glasser Legal Works. Battersby, G. J., & Grimes, C. W. (2001). Drafting Internet agreements. New York: Aspen Law & Business. Brin, D. (1998). The transparent society: Will technology force us to choose between privacy and freedom? Reading, MA: Addison-Wesley. Brooks, F. P. (1975). The mythical man-month. Reading, MA: Addison-Wesley. Burnham, S. J. (1987). Drafting contracts. Charlottesville, VA: Mitchie Company. Bush, V. (1945, July). As we may think. The Atlantic Monthly (pp. 101–108). Davies, S. (1997). Re-engineering the right to privacy: How privacy has been transformed from a right to a commodity. In P. E. Agre & M. Rotenberg (Eds.), Technology and privacy: The new landscape (p. 143). Cambridge, MA: MIT Press. Dertouzos, M. L. (1997). What will be: How the new world of information will change our lives. New York: Harper Edge.
Dertouzos, M. L. (2001). The unfinished revolution: Human-centered computers and what they can do for us. New York: HarperCollins. Fernandez, R., & Picard, R. W. (2003). Modeling drivers’ speech under stress. Speech Communication, 40, 145–159. Garfinkel, S. (2000). Database nation: The death of privacy in the 21st century. Sebastopol, CA: O’Reilly & Associates. Gelernter, D. (1992). Mirror worlds. New York: Oxford University Press. Gershenfeld, N. (1999). When things start to think. New York: Henry Holt. Gibson, W. (1984). Neuromancer. New York: Ace Books. Glancy, D. J. (1979). The invention of the right to privacy. Arizona Law Review, 21(1), 1–39. Harris, M. S. (2002). Update on e-commerce—Jurisdiction. NY Business Law Journal, 6(1), 21–28. Johnson, D., & Post, D. (1996). Law and borders—The rise of law in cyberspace. Stanford Law Review, 48, 1367. Kiesler, S., & Hinds, P. (2004). Human-robot interaction. Human-Computer Interaction, 19(1–2), 1–8. Lessig, L. (1999). Code and other laws of cyberspace. New York: Basic Books. Martin, J. (2000). After the Internet: Alien intelligence. Washington, DC: Capital Press. Meyer, C., & Davis, S. (2003). It’s alive: The coming convergence of information, biology, and business. New York: Crown Business. Miller, S. E. (2003). A new renaissance: Tech, science, engineering and medicine are becoming one. New York Law Journal, 230(70), 5–7. Monassebian, J. (1996). A survival guide to computer contracts: How to select and negotiate for business computer systems. Great Neck, NY: Application Publishing. Moore, G. E. (1965). Cramming more components onto integrated circuits. Electronics, 38(8). Moran, T. P., & Dourish, P. (2001). Context-aware computing. Human-Computer Interaction, 16(2–4), 1–8. Myers, B. A. (1998). A brief history of human computer interaction technology. ACM Interactions, 5(2), 44–54. Picard, R. W. (2003). Affective computing: Challenges. International Journal of Human-Computer Studies, 59(1–2), 55–64. Picard, R., & Healey, J. (1997). Affective wearables. Personal Technologies: M.I.T. Media Laboratory Perceptual Computing Section Technical Report, 467(1), 231–240. Picard, R. W., Vyzas, E., & Healey, J. (2001). Toward machine emotional intelligence: Analysis of affective physiological state. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(10), 1175–1191. Rosen, J. (2000). The unwanted gaze: The destruction of privacy in America. New York: Random House. Rotenberg, M. (2000). The privacy law sourcebook 2000: United States law, international law, and recent developments. Washington, DC: Electronic Privacy Information Center. Schneier, B. (2000). Secrets and lies: Digital security in a networked world. New York: Wiley Computer Publishing. Shneiderman, B. (2002). Leonardo’s laptop: Human needs and the new computing technologies. Cambridge, MA: MIT Press. Siebel, T. M., & House, P. (1999). Cyber rules: Strategies for excelling at e-business. New York: Doubleday. Waldrop, M. M. (2003). Autonomic computing: An overview of the concept and exploration of the public policy implications (Woodrow Wilson International Center for Scholars Foresight and Governance Project, 2003-7). Washington, DC: Woodrow Wilson.
Warren, S., & Brandeis, L. (1890). The right to privacy. Harvard Law Review, 4(5), 193.
Court Cases
Olmstead v. United States, 277 U.S. 438, 473 (1928).
Zubulake v. UBS Warburg LLC, 02 Civ. 1243 (SAS) (S.D.N.Y. May 13, 2003).
LAW ENFORCEMENT Information technology (IT) has the potential to revolutionize the work done by law enforcement. Although information has always been a cornerstone for police and other law enforcement agencies, until recently such agencies have not typically viewed information systems as a valuable asset. While IT has proliferated in the private sector, IT in most police organizations is still in its infancy. However, as the world changes and as governments review national and local security processes, the need for police to capitalize on cutting-edge technologies has never been greater. The challenges for system and interface designers developing these technologies stem from the distinctive context of police work and from how this context affects the way police use information technology.
The Use of Information in the Police Context Information is a central feature of modern societies, and it is the central feature of policing. Police agencies use information to determine resource allocations to police divisions, to determine when and where to patrol, and to determine who might be involved in a crime and which crimes might be solved. Aggregation of information in the form of statistics also helps police and the community better understand important trends in crime in specific areas. Knowledge of crime trends in their cities helps police understand the underlying problems in neighborhoods. Being able to plan and allocate resources to areas with higher needs leads to more proactive crime prevention. Decision making at all levels of police agencies is based upon police information.
Information is also used in police investigations not only to identify and apprehend a criminal suspect, but also to provide evidence in a court of law to convict those guilty of a crime. Police put together pieces of fragmented information to try to understand the events that have occurred in order to identify and apprehend an offender. In order to convict an offender, police might testify in a court of law, presenting information that provides evidence that the offender is indeed guilty of a crime. The more complete and irrefutable the information, the more likely an offender is to be convicted. The pieces of information build a case and provide the substantiation that enables police to catch an offender. The ability of police to share information both within and outside of their agency is another way by which information can be accessed and utilized. Police must not only acquire information; how they handle and present it is of utmost importance. In the private sector companies protect their competitive secrets, and a breach of such secrets can result in a decline in a company’s profitability. In the law enforcement domain, a breach of police information may have more serious consequences, such as the inability to convict a dangerous felon or even the death of an officer or innocent bystander. Errors in handling information can lead to violations of individual rights and to physical harm. Legal and moral issues include the incarceration of a wrongly accused person and the inability to incarcerate the real perpetrator of a crime.
Importance of IT in Police Organizations Given the value of information in all aspects of police organizations and the importance of how information is handled in this context, police are relying on IT to manage their information needs. Although a large number of police agencies still rely on some manual processes, the growth of digital information maintained in police repositories has been explosive. This growth has improved the speed at which police can access information and increased the amount of information that police can store. For example, in her book The Justice Juggernaut: Fighting Street Crime,
Controlling Citizens (1990), Diana Gordon reported that in the United States in 1989, the federal National Crime Information Center (NCIC) database housed more than 20 million records and processed more than 1 million transactions per day. In addition to text-based data, police use technologies that capture multimedia data, such as telephone conversations, surveillance camera video, and crime scene pictures. Police also use geographical information systems that allow them to map time and space information for crime analysis. The development of information technology that manages multimedia information becomes more important as these types of data become more prevalent. Information technology facilitates the sharing of information. Because criminal activity is not bound by geographical jurisdictions, police realize that the ability to share information across agencies is important. As more police organizations store data in electronic systems, sharing information through networks becomes more important and more feasible. The federal government has developed several national initiatives to help U.S. law enforcement agencies deal with issues in the development and use of information technology. For example, the Office of Justice Programs Integrated Justice Information Technology Initiative (part of the U.S. Department of Justice) was developed in 1997 to coordinate funding and technical assistance to support the design and implementation of information technology for information sharing. The National Institute of Justice’s Office of Science and Technology has also developed programs to address issues in systems interoperability to facilitate information sharing through the use of IT.
Challenges of Designing IT for Police Given its importance, IT is becoming less of an enhancement and more of a necessity for police. To take advantage of IT, interface and system designers face a number of challenges created by the organizational environment and characteristics of police work. An important part of design and human-computer interaction is understanding the characteristics of a system’s users. In most police organizations many types of users with many types of needs use a
Fighting Computer Crime
Begun in 1991 as the Computer Crime Unit of the U.S. Department of Justice, the Computer Crime and Intellectual Property Section (CCIPS) is the primary federal office for computer-related crime. Below is the CCIPS website statement on its purpose and function. The Computer Crime and Intellectual Property Section (“CCIPS”) attorney staff consists of about forty (40) lawyers who focus exclusively on the issues raised by computer and intellectual property crime. Section attorneys advise federal prosecutors and law enforcement agents; comment upon and propose legislation; coordinate international efforts to combat computer crime; litigate cases; and train all law enforcement groups. Other areas of expertise possessed by CCIPS attorneys include encryption, electronic privacy laws, search and seizure of computers, e-commerce, hacker investigations, and intellectual property crimes. A large part of CCIPS’ strength derives from the diverse skills and the wide variety of experiences its lawyers have had before joining the Section. Before joining CCIPS, its attorneys have been computer scientists, state and federal prosecutors, and associates and partners at law firms. A substantial number of CCIPS’ attorneys have received degrees in computer science, engineering, or other technical fields; about half came to CCIPS with prior government service. CCIPS began as the Computer Crime Unit of the former General Litigation and Legal Advice Section of DOJ’s Criminal Division in 1991. CCIPS became a Section of the Criminal Division in 1996. As Attorney General Janet Reno noted in her testimony on “Cybercrime” before the United States Senate Committee on Appropriations on February 16, 2000: “…CCIPS works closely on computer crime cases with Assistant United States Attorneys known as “Computer and Telecommunications Coordinators” (CTCs) in U.S. Attorney’s Offices around the country. Each CTC is given special training and equipment, and serves as the district’s expert in computer crime cases. “The responsibility and accomplishments of CCIPS and the CTC program include: Litigating Cases: “CCIPS attorneys have litigating responsibilities, taking a lead role in some computer crime and intellectual property investigations, and a coordinating role in many national investigations, such as the denial of service investigation that is ongoing currently. As law enforcement matures into the Information Age, CCIPS is a central point of contact for investigators and prosecutors who confront investigative problems with emerging technologies. This year, CCIPS assisted with wiretaps over computer networks, as well as traps and traces that require agents to segregate Internet headers from the content of the packet. CCIPS has also coordinated an interagency working group consisting of all the federal law enforcement agencies, which developed guidance for law enforcement agents and prosecutors on the many problems of law, jurisdiction, and policy that arise in the online environment. “Working with the U.S. Attorney’s Office in the District of New Jersey and the FBI, as well as with state prosecutors and investigators, CCIPS attorneys helped ensure that David Smith, the creator of the Melissa virus, pled guilty to a violation of the computer fraud statute and admitted to causing damages in excess of $80 million. “CCIPS is also a key component in enforcing the “Economic Espionage Act,” enacted in 1996 to deter and punish the theft of valuable trade secrets.
CCIPS coordinates approval for all the charges under the theft of trade secret provision of this Act, and CCIPS attorneys successfully tried the first jury case ever under the Act, culminating in guilty verdicts against a company, its Chief Executive Officer, and another employee. “The CTCs have been responsible for the prosecution of computer crimes across the country, including the prosecution of the notorious hacker, Kevin Mitnick, in Los Angeles, the prosecution of the hacker group “Global Hell” in Dallas, and the prosecution of White House web page hacker, Eric Burns, in Alexandria, Virginia.
U.S. Department of Justice, Computer Crime and Intellectual Property Section. Retrieved March 10, 2004, from http://www.usdoj.gov/criminal/cybercrime/ccips.html
central information system. Some users, such as crime analysts, are more computer savvy and use computer systems regularly for crime investigation and report generation. Records personnel deal with data entry and verification of the information contained
in the system. Police managers and higher-ranking officers often use an information system for case management and resource allocation. Patrol officers, who make up the majority of employees in a police department, are typically not as
experienced in computer use and thus have more problems accessing and using information technology. As the front line of defense for police, however, patrol officers often must get the right information in a timely manner. A system must be designed not only to encompass the needs of all the types of users within the police organization, but also to take into account the abilities and characteristics of all users. A system should be designed to meet the investigative needs of crime analysts but also to be accessible to patrol officers in the field, who, for example, may use the system to verify information given to them by a suspect in custody. Thus, the usability of a system, which influences the frequency of use by police personnel, influences the success of information technology in the police context. System designers face not only the challenges of designing for different types of users and tasks, but also the challenge of integrating different systems across different police agencies. An increasingly pressing problem is the ability of police agencies to share the information within their systems with other police agencies. Many of these systems were developed in-house and are stand-alone systems, making integration with other systems difficult. The problem of integration occurs at many levels. System designers may face the challenge of integrating systems that are on different types of platforms. For example, one police agency may use an Oracle database system, whereas another agency may store its data in flat files or in an archaic legacy system. Another problem could be integrating information from systems with different architectures. For example, system designers integrating two systems may have the tedious task of matching data from the underlying architecture of one system with another. For a user the ability to seamlessly access information from different systems greatly reduces the need to learn to use those different systems. Users, especially police officers who are less familiar with different computer systems, want to learn a single interface and use that interface to access information from different systems. From a technical point of view this means that system designers must map the data contained in one system to the same
data in another system. The federal government is trying to deal with problems of data integration, encouraging police agencies to follow the National Incident-Based Reporting System (NIBRS). As more police agencies move toward an NIBRS-compliant system, some of these problems of data integration will be solved. For system designers, however, the challenge of integrating systems is not a minor one. Designers must take into consideration issues of platform and data integration. Although initiatives to establish standards exist, these initiatives are not an option for the majority of older police information systems in use.
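To make the mapping problem concrete, the sketch below (in Python, with invented field names and two hypothetical agency schemas; a real NIBRS mapping involves far more standardized data elements and code tables) shows how records from two incompatible systems might be translated into one shared representation.

# Illustrative only: the schemas and field names are invented for this sketch.
COMMON_FIELDS = ["incident_id", "offense_code", "date", "location"]

AGENCY_A_MAP = {"case_no": "incident_id", "ucr_code": "offense_code",
                "occurred_on": "date", "address": "location"}
AGENCY_B_MAP = {"id": "incident_id", "offense": "offense_code",
                "dt": "date", "place": "location"}

def to_common(record, field_map):
    """Translate one agency-specific record into the shared schema."""
    common = {field_map[k]: v for k, v in record.items() if k in field_map}
    for field in COMMON_FIELDS:          # flag anything the source record lacked
        common.setdefault(field, None)
    return common

print(to_common({"case_no": "A-1001", "ucr_code": "240",
                 "occurred_on": "2004-02-17", "address": "5th & Main"},
                AGENCY_A_MAP))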
Information Access and Security As information technology in police organizations becomes more prevalent, the ability of police officers to easily access information while ensuring information security becomes more important. System designers must balance the need for information access with the need for information security. Officers, especially the majority who work on patrol, perform a large part of their duties in the field and in patrol cars. System designers must take into account the work environment that affects officers’ ability to access information. Rather than design just for desktop systems, designers must design for laptop computers, car-mounted digital terminals, and even handheld digital terminals. System designers must also decide where data should reside. In centralized systems data from different information systems at different agencies are ported into a single data warehouse. This assures agencies that only the data they want to share are accessible to other agencies. The concerns of centralized systems revolve around the maintenance of the information system. For example, who should manage and maintain the system? In a decentralized system police agencies maintain their information but allow other agencies to tap directly into their system. Allowing other agencies to directly access an information system leads to issues of security as well as system loading. Another factor affecting security and access issues is the mode in which police officers work.
Officers are the first line of defense, the first responders to the public’s calls for assistance. They also provide primary investigative tasks, initiating the case report that may eventually be assigned to investigators. Officers are primarily car based and incident driven. They must be mobile to respond quickly to calls. Because they have high workloads, officers are often not able to follow up on cases and must instead quickly pass cases to investigators, thus limiting the information that officers may be able to pass along. Officers have a limited amount of time during which they can access and create a case report before submitting it to investigators. Therefore, the window of opportunity for officers to access information from computer systems is crucial. This distributed aspect of police work is a security and access concern for system designers. Although police must use information technology to access information from their patrol cars, they have less control over security in the field. With the deployment of wireless technology in patrol cars, system designers must implement security measures, such as encryption methods and protocols, to protect the transmission of sensitive information. With the advent of information technology, the amount of information collected by police departments continues to grow. A challenge for interface and system designers is the design of information retrieval interfaces that allow police to quickly and easily use the technology and understand the output. In designing an interface, designers must decide how to best display aggregated information. In police work typical data collected in information systems include people, places, vehicles, and crime types, as well as other types of data such as criminal and crime scene photographs, fingerprints, and security camera videos. These data more likely are stored in a number of systems rather than in a single system. When police search these systems for all information on a particular person, place, or vehicle, the graphical user interface should return the information from the systems in a logical and integrated manner. Police also use information technology to search for associations among entities, such as associates of a suspect, owners of a vehicle, or crimes associated with a location. A single association can be the key to solving a crime. Therefore, police must be able to
easily understand the associations among entities. Because police may not know the exact associations and instead have to browse information, use of information technology can quickly result in information overload. Designing user interfaces with visualization techniques to display associations, such as timeline analysis displays, geo-mapping, and networks, can reduce information overload and enhance a user’s ability to use information technology. A number of artificial intelligence systems have been developed to aid police in searching through the vast amounts of criminal data. Many of these systems build upon human heuristics (aids in learning) to model police search behaviors. One system, Coplink, uses a statistic-based algorithmic technique to identify relationships among entities, such as people, locations, vehicles, crime types, and organizations. Coplink was developed using a user-centered design in which police personnel were involved in the planning, design, and evaluation at each stage of development. Evaluations of Coplink found that the user interface was intuitive and that the system greatly enhanced the speed with which officers were able to search for information.
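The sketch below illustrates the general idea of scoring associations between entities by how often they appear in the same case records. It is not Coplink's actual algorithm; the records and the weighting scheme are invented for illustration.

# Generic co-occurrence-based association scoring over toy case records.
from collections import defaultdict
from itertools import combinations

case_records = [                     # entities extracted from each case
    {"J. Doe", "blue sedan", "5th & Main"},
    {"J. Doe", "blue sedan"},
    {"R. Roe", "5th & Main"},
]

pair_counts = defaultdict(int)
entity_counts = defaultdict(int)
for record in case_records:
    for entity in record:
        entity_counts[entity] += 1
    for a, b in combinations(sorted(record), 2):
        pair_counts[(a, b)] += 1

def association(a, b):
    """Share of the rarer entity's appearances spent together with the other."""
    co = pair_counts.get(tuple(sorted((a, b))), 0)
    return co / min(entity_counts[a], entity_counts[b])

print(association("J. Doe", "blue sedan"))   # 1.0: always co-occur
print(association("J. Doe", "5th & Main"))   # 0.5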
Implications for Designers In what had been a closed organizational environment, police are gradually adopting information technology. Officers, who previously had argued that their weapons of choice were guns and handcuffs, now rely on information technology for a major part of their work. Designing information technology for police work offers many opportunities. As an increasing number of police agencies store different types of data electronically, cutting-edge information technology can drastically change how police work is done. Coupled with the need for more cooperation among police agencies, information technology can connect agencies and form a more collaborative law enforcement environment. These opportunities are not without challenges for system and interface designers. The consequences of mishandled or misunderstood information can be severe, possibly leading to legal or even physical harm. Also, given the rapidly growing amount and different types of data, information technology
for police must be scalable to handle police needs as they shift and expand. Challenges that system designers face include integration across systems, information access and security, and information retrieval processes. Designers of information technology for police work must consider the police environment and the pressures that affect how police use information technology. Given the many types of users (e.g., patrol officers, investigators, police managers, and crime analysts) with various levels of computer experience and different job tasks, designers must take into account the diverse needs and abilities of all users. Task analysis techniques and user-centered design not only help designers understand the work and environmental challenges faced by police, but also increase user support. As the needs of police and the capabilities of information technology continue to change, system designers must take into account police officers' unique work practices, environment, and needs. In the end, the better the system design, the better the police work, which benefits everyone.
Roslin V. Hauck
See also Information Organization; Information Overload; Information Retrieval; Law and HCI
FURTHER READING Bowen, J. E. (1994). An expert system for police investigations of economic crimes. Expert Systems with Applications, 7(2), 235–248. Brahan, J. W., Lam, K. P., & Chan, H. L. W. (1998). AICAMS: Artificial Intelligence Crime Analysis and Management System. Knowledge-Based Systems, 11, 335–361. Colton, K. (1978). Police computer systems. Lexington, MA: Lexington Books. Gordon, D. (1990). The justice juggernaut: Fighting street crime, controlling citizens. New Brunswick, NJ: Rutgers University Press. Haggerty, K. D., & Ericson, R. V. (1999). The militarization of policing in the information age. Journal of Political and Military Sociology, 27(2), 233–255. Hauck, R. V., Atabakhsh, H., Ongvasith, P., & Chen, H. (2002). Using Coplink to analyze criminal-justice data. IEEE Computer, 35(3), 30–37. Leonard, V. A. (1980). The new police technology: Impact of the computer and automation on police staff and line performance. Springfield, IL: Charles C. Thomas.
Maltz, M. D., Gordon, A. C., & Friedman, W. (2000). Mapping crime in its community setting: Event geography analysis. New York: Springer-Verlag. Manning, P. K. (1992). Information technologies and the police. Crime and Justice, 15, 349–398. Morgan, B. J. (1990). The police function and the investigation of crime. Brookfield, VT: Avebury. Northrop, A., Kraemer, K. L., & King, J. L. (1995). Police use of computers. Journal of Criminal Justice, 23(3), 259–275. Office of Justice Programs. (2000). Office of Justice Programs Integrated Justice Information Technology Initiative. Retrieved November 4, 2003, from http://www.ojp.usdoj.gov/archive/topics/integratedjustice/welcome.html Pliant, L. (1996). High-technology solutions. The Police Chief, 5(38), 38–51. Rocheleau, B. (1993). Evaluating public sector information systems. Evaluation and Program Planning, 16, 119–129. U.S. Department of Justice. (2000). Uniform crime reporting: National Incident-Based Reporting System, data collection guidelines: Vol. 1. Data collection guidelines. Retrieved February 17, 2004, from http://www.fbi.gov/ucr/nibrs/manuals/v1all.pdf
LEXICON BUILDING The word lexicon can refer to any of the following: (1) the familiar dictionary—a repository of information about words containing explanations that ordinary speakers of the language can understand while not including information that speakers take for granted, (2) the lexical component of a formal grammar where information about words is provided in a formalism designed to be interpreted by the interpretive rules of the grammar, and (3) a body of structured information about words provided in a notation that allows computational means of performing natural language processing (NLP) operations. Resources that can serve these purposes are not necessarily distinct. Commercial dictionaries— especially those designed for language learners— often have information in a form that can be exploited for purposes of grammar writing and NLP. In principle a single lexical resource could serve all three purposes, made available in task-specific formats and provided with interfaces designed for different classes of users. Because a wide variety of NLP tasks exists, the sort of lexicon needed can vary greatly from application to application. For speech recognition the
lexicon is a list of the words needing to be recognized together with representations of their pronunciation and minimal indication of their grammatical properties. In the case of automatic stemming, where the stems of words (e.g., the “sleep” in “sleeping,” the “medicate” in “premedication”) are separated from their affixes (the “-ing,” “pre-,” and “-ion” of those same words), the lexicon consists of a list of stems and a list of affixes, the latter classified according to their combinability with stems of particular types. If the application is one of document routing—for example, shunting streaming newswire articles to different editorial offices according to whether they deal with the stock market, sports, crime, high fashion, or the weather—a lexicon could be considered adequate for such purposes if it merely associated each content word with the relevant domains and gave a set of probabilities that a word belongs to one domain rather than another. For such lexicons human efforts will be mainly devoted to sorting and labeling a collection of documents large enough to support the machine learning by which words acquire such weighted associations. Such lexicons generally do not need to contain careful human-constructed descriptions of individual words.
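The sketch below illustrates how little such a routing lexicon needs to say about individual words: each word simply carries weighted domain associations (the words, domains, and weights here are invented), and a document is routed to the domain with the highest accumulated weight.

# Toy routing lexicon: content words mapped to invented domain weights.
DOMAIN_LEXICON = {
    "stock": {"finance": 0.8, "sports": 0.1, "weather": 0.1},
    "goal":  {"sports": 0.7, "finance": 0.2, "weather": 0.1},
    "front": {"weather": 0.6, "sports": 0.2, "finance": 0.2},
}

def route(text):
    """Sum the domain weights of known words and pick the best domain."""
    scores = {}
    for word in text.lower().split():
        for domain, weight in DOMAIN_LEXICON.get(word, {}).items():
            scores[domain] = scores.get(domain, 0.0) + weight
    return max(scores, key=scores.get) if scores else "unclassified"

print(route("A cold front moves in as the stock market rallies"))  # finance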
METADATA Information about the source of a piece of data: the author, the period or context within which the data was produced, the age of the author, and the title of the document.
At a level requiring slightly richer representations— as, for example, in information extraction tasks—an appropriate lexicon might use a restricted vocabulary integrated with a limited repertory of phrasal patterns designed to discover information about, say, corporate leadership changes, traffic conditions, or the progress of an ongoing chess game. (For example, “[PERSON] has been replaced as [OFFICE] at [COMPANY] by [PERSON] of [COMPANY].”) In such cases either the texts subject to analysis will themselves be domain restricted or the NLP task at hand will target limited kinds of information, keyed by the presence of particular words, ignoring whatever other information the texts contain.
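A naive version of the leadership-change template quoted above can be written as a regular expression; the sketch below uses capitalized word spans as a crude stand-in for the named-entity recognition that real extraction systems would supply, and the example sentence and names are invented.

# Crude slot-filling pattern for "[PERSON] has been replaced as [OFFICE]
# at [COMPANY] by [PERSON] of [COMPANY]"; capitalized spans approximate entities.
import re

NAME = r"[A-Z][\w.]*(?: [A-Z][\w.]*)*"
PATTERN = re.compile(
    rf"(?P<old>{NAME}) has been replaced as (?P<office>[\w ]+?) at "
    rf"(?P<company>{NAME}) by (?P<new>{NAME}) of (?P<new_company>{NAME})"
)

sentence = ("Jane Smith has been replaced as chief executive at Acme Corp "
            "by John Jones of Widget Inc.")
match = PATTERN.search(sentence)
if match:
    print(match.groupdict())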
At the most ambitious level, for an application concerned with open-domain natural language understanding or with accurate and natural translation of texts from one language into another, the lexicon would need to be extremely large and to contain considerably richer information than is found in even the finest unabridged commercial dictionaries; it would also need to be fitted into knowledge about grammar, usage, commonsense inferencing, and discourse.
The Units of a Lexicon In the previous paragraphs the term word was used to identify the primary elements in the building of a lexicon. The word word in the generally understood sense is appropriate in naming the process of word disambiguation (the act of establishing a single semantic or grammatical interpretation for an ambiguous word; in the literature usually called “word sense disambiguation”; the most detailed study is by computer scientists Nancy Ide and Jean Véronis), where the system decides, for a multiple-meaning word, which is most likely to be the intended meaning in a given passage. However, the primary entity that needs to be characterized in a lexicon is not the word but rather a pairing of a word with a sense, usually called a “lexical unit” (LU). LUs—not “words”— need definitions, have synonyms and paraphrases, participate in semantic contrasts, and have specific grammatical properties. When we regard the concept this way, we can say that the concept of LU is an elaboration of the lay concept of “word.” (Objections to the term word sense disambiguation point precisely to this distinction. The process is more appropriately called either “word disambiguation” or “sense selection”: What is disambiguated is the word, not the sense.) The concept of LU that we need is both narrower and broader than the common notion of “word.” After we see that an essential part of being an LU is having a unitary meaning description and unique grammatical properties, we are compelled to recognize the existence of LUs that are made up of more than one word, the so-called multiword units (MWUs). If the word lift is an LU, then pick up is also an LU; if tolerate is an LU, then so is put up with; if pork is an LU, then so is horse meat.
The need for these two elaborations of the word concept makes it clear why simple statistical studies of space-separated letter sequences cannot provide detailed information about LUs and why any attempt to measure the distribution of LUs requires sampling and human judgment just to figure out what needs to be counted. If we find the letter sequence “court” in a text we do not know which LU this represents (tennis court, the king’s court), nor can we tell whether in its context court stands for an LU on its own or is part of a multiword unit (e.g., Court of Appeals). Types of MWUs include (1) noun compounds of the form noun+noun (house arrest, peace officer) or of the form adjective+noun (forcible entry, federal officer, punitive action); (2) conventionalized verb+object combinations (exact vengeance, inflict punishment, take revenge); (3) combinations of verbs with various function words (put down, look into, put up with); (4) complex prepositions (in terms of, pursuant to, in accordance with); (5) lexically complex conjunctions (let alone, much less, both . . . and, either . . . or, as . . . as); and many others, in addition to a vast collection of idioms. Automatically detecting MWUs in running text is especially difficult for two reasons. First, not every MWU is an uninterrupted word sequence, and second, the same combination of words can count as being a single LU in some contexts but as having separate functions in other contexts: The juxtaposition of the words walk and into is accidental in “He walked into the alley” but constitutes an MWU in “He walked into a door” (“collided with”); the elements of the words let alone are individually interpretable in “I want to be let alone” but make up a single MWU in “She wouldn’t give me a nickel, let alone ten dollars.” Specialist vocabulary is replete with MWUs, and the status of a word group as an MWU does not always stand out, even for a human interpreter. For example, looking at the parallel syntactic patterns in “He was accused of beating his dog with a broomstick” and “He was accused of assaulting a federal officer with a deadly or dangerous weapon,” one cannot know that the phrase “assaulting a federal officer with a deadly or dangerous weapon” in the second sentence is a named crime in the U.S. Criminal Code, which needs its own entry in a lexicon of U.S. criminal justice procedures.
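Despite these difficulties, one common first step toward finding MWU candidates is to flag word pairs that co-occur more often than chance would predict; the sketch below computes pointwise mutual information over adjacent pairs in a toy corpus (real work uses large corpora and, as noted, human judgment to sift the candidates).

# Pointwise mutual information over adjacent word pairs in a toy corpus.
import math
from collections import Counter

tokens = ("he was charged with assaulting a federal officer "
          "the federal officer testified a police officer arrived").split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n = len(tokens)

def pmi(w1, w2):
    """How much more often the pair occurs than its parts would predict."""
    p_pair = bigrams[(w1, w2)] / (n - 1)
    p1, p2 = unigrams[w1] / n, unigrams[w2] / n
    return math.log2(p_pair / (p1 * p2))

print(pmi("federal", "officer"))   # clearly positive: a strong collocation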
The proper treatment of MWUs in computational linguistics is a largely unsolved problem: The issues are how to represent them in a lexicon, how to discover them in running text, how to estimate their contribution to texts in particular styles and genres, and even how to decide how many of them there are in the language. Linguist Ray Jackendoff has speculated that the list of MWUs that an individual knows must be roughly the same size as the list of single words, and the lexicographer Igor Mel’cuk claims that the number of phrasal words that must be recorded is ten times the size of the single-word lexicon; but for NLP applications, as opposed to some individual’s mental lexicon, there can be no limit to a lexicon’s size as long as means are needed for recognizing personal names, place names, names of historical events, and all the rest.
Decoding versus Encoding Functions of a Lexicon After the units are identified, information associated with them can be designed for either decoding (recognizing) or encoding (generating) purposes. The difference is between being able to recognize words in a passage in a way that leads to passively “understanding” the passage and having enough information about the words to be able to combine them appropriately with other words in relevant contexts of use. Many NLP applications require at most the decoding function of a lexicon—“at most” because for many purposes, such as information retrieval, document routing, topic detection, or event tracking, little information is needed about the actual meanings of individual words. Question-answering systems that address simple facts—typically referred to as “factoids”—can be aided by having available, for each word, lists of semantically related words, such as synonyms (words with the same meaning), antonyms (words with opposite meaning), hyponyms (subtypes, as “terrier” is to “dog”), and hypernyms (supertypes, as “dog” is to “terrier”). Data mining—automatically searching for information in large databases—is helped by having large lists of words that share category membership (e.g., the
names of pharmaceuticals, disease names, therapies, etc.) of the sort derivable from medical and technical glossaries. In the case of word sense selection, the lexicon will show that an encountered word has more than one sense, and the application’s task is to choose (or give weighted probabilities to) the sense needed in the given context. This can be done by exploiting (1) metadata about the text itself (in a sports column about tennis, the noun court is likely not to refer to a royal household), (2) information about the linguistic structure of the phrase or sentence in which the word is found (if a parser has recognized court as a verb, then the “wooing” sense will be selected), or (3) information about the semantic domain of words that are in grammatical construction with, or in the neighborhood of, the target word (the legal institution sense of court is called for in the sentence “The judge called for order in the court”). The inadequacy of the encoding function of typical dictionary definitions can be illustrated with the word decedent: The reader, human or machine, will know from the definitions that a person referred to with this word is dead. Common definitions in a sample of dictionaries are “someone who is no longer alive,” “a deceased person,” and “a dead person.” However, an advanced language learner who wishes to know when and how to use the word, a translator (human or machine) needing to know when to select it, or a language generation computer application finding it in its lexicon needs to know that, although, in fact, designating a dead person, the word decedent is used in discourse about that person’s estate. One cannot appropriately say “Mozart is a decedent” or “Our graveyard holds 173 decedents.” A human dictionary reader might suspect from various hints that something is special about the word decedent; for example, a reader who finds in the Web investment glossary (www.investorword.com) simply the definition “a person who has died,” might also notice that the entry’s cross-references are to the words will, estate, heir, and succession. A lexicon needs to contain generative components (1) for recognizing MWUs that are produced by special subgrammars covering personal names, place names, institutional names, dates, expressions of clock time, currency amounts, and so forth; (2) for
recognizing morphologically complex technical terms in specialized disciplines that make extensive use of Greco-Latin roots; and (3) for including meaning specializations that can be generated from certain basic meaning descriptions (as in the generative lexicon of linguist James Pustejovsky, 1995). For the first of these, many existing NLP applications use entity recognizers (software for recognizing names of persons, places and institutions, addresses, and expressions of calendar time) such as the Bolt, Beranek, and Newman (BBN) Identifinder (www.bbn.com/speech/identifinder.html).
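The first of these generative components can be approximated with small subgrammars written as regular expressions; the sketch below recognizes currency amounts and clock times without listing each expression in the lexicon (the patterns cover only a few common formats and are illustrative, not exhaustive).

# Two toy subgrammars, expressed as regular expressions.
import re

CURRENCY = re.compile(r"\$\d{1,3}(?:,\d{3})*(?:\.\d{2})?")
CLOCK_TIME = re.compile(r"\b(?:[01]?\d|2[0-3]):[0-5]\d(?:\s?[ap]\.?m\.?)?",
                        re.IGNORECASE)

text = "The hearing begins at 9:30 a.m.; damages were set at $1,250,000.00."
print(CURRENCY.findall(text))    # ['$1,250,000.00']
print(CLOCK_TIME.findall(text))  # ['9:30 a.m.']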
Sources of Lexical Information An important consideration for lexicon building is how and where information about lexical properties is to be found. Much of what people know about their language is implicit and cannot be easily brought to conscious awareness. Thus, building a lexicon cannot be achieved simply by asking native speakers to write down in a systematic and application-relevant way information about word use and meaning that they are assumed to know. In many cases facts about the meaning of a word are not obvious, requiring subtle tests for teasing them out through the assembly and analysis of corpus evidence—evidence taken from a large collection called a corpus (plural, corpora) of machine-readable texts—together with careful use of judgments on the part of the users of the language. In fortunate cases, of course, much of the work has already been done, and the task is to adapt to local representational requirements information that is publicly available; examples are machine-readable versions of commercial dictionaries and such online resources as WordNet (www.cogsci.princeton.edu/~wn) and FrameNet (www.icsi.berkeley.edu/~framenet). Researchers put much effort into statistical studies of natural language corpora to discover words associated with particular domains, to cluster words by contextual features on the assumption of regular form/meaning correspondences, and to derive classificatory relations between words on the basis of contextual clues. (For example, phrases like “X, a northern variety of fish” or “X and other fish” lead to the classification of X as a fish.)
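The classificatory clue mentioned in the parenthesis above can be turned into a toy extractor; the sketch below applies the single pattern "X and other Y" to two invented sentences, whereas real corpus work applies many such patterns to very large text collections and filters the results.

# Toy hypernym harvesting from the "X and other Y" pattern.
import re

HYPONYM_PATTERN = re.compile(r"(\w+) and other (\w+)")

corpus = ["Anglers prize pike and other fish found in northern lakes.",
          "The shop sells terriers and other dogs."]
for sentence in corpus:
    for hyponym, hypernym in HYPONYM_PATTERN.findall(sentence):
        print(hyponym, "IS-A", hypernym)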
For many purposes traditional kinds of linguistic research are unavoidable, using the refined introspections of linguistically trained native speakers and carefully testing predictions based on these. This is most necessary for language-generation purposes because native speakers can know what is not possible in a language, something that a corpus cannot tell us (a sentence such as “Another day elapsed” can be found in a corpus; one such as “Yesterday elapsed” cannot; linguistic introspection offers data generally not knowable in any other way). Collaboration with experts is necessary in cases where a meaning is stipulated in some expert domain and neither discoverable in a corpus nor accessible to ordinary speakers’ intuitions. Experts are not necessarily skilled in knowing the form in which their knowledge can be made accessible to readers or available to computational purposes, but their knowledge is obviously necessary in many cases.
Kinds of Lexical Information In the end we find that a lexicon capable of serving the widest range of NLP purposes will have to include information about:
1. Pronunciation in the form of computer-friendly transcriptions such as TIMIT, an international standardized ASCII-based alphabet for the phonetic transcription of speech.
2. The identification of lemmas (the identification of a single “dictionary form” for words of different shape: thus “goes,” “gone,” “went,” etc., will all be identified with the lemma “go”), along with the tagging of words with information about part of speech (noun, verb, etc.) and grammatical properties (plural, past, etc.).
3. The association of each LU with other LUs in the lexicon, such as the recognition of synonyms (doctor, physician), taxonomic relations (terrier > dog > mammal, etc.), and contrast sets (man: woman, boy: girl, beautiful: ugly, etc.).
4. The ability to co-occur with other words and phrases, thus distinguishing transitive from intransitive verbs (according to whether they take a direct object), the selection of prepositions (as in “fond of,” “pleased with,” “depend on,” “object to,” etc.), and the preference for particular combinations with modifiers (“excruciating pain,” “blithering idiot,” “stark naked,” etc.).
5. Enough semantic information to guide the semantic integration of the meanings of LUs into meaning representations of the phrases and sentences with which they combine.
6. Association with use conditions that are independent of meaning proper, that is, the fit with particular topics or genres and the like.
Building an adequate lexicon for NLP work is a huge undertaking involving long-term planning and serious funding. The absence of such a lexicon makes it impossible for a computer to handle language correctly and sets arbitrary limits on NLP systems. Building such a lexicon requires a holistic approach: it is not something to be carried out piecemeal, a method that guarantees incompatibility of the various components. Linguistic analysis is complex, slow, and labor intensive; most “lexicons” produced today cover only a part of the total analysis of the language and are themselves only partial, the funding having ended before the work was completed. A comprehensive lexicon of the language—a systematic record of how words are used and understood by people—is essential if the twenty-first-century computer is to handle language correctly.
Charles Fillmore
See also Machine Translation; Natural-Language Processing; Ontology; Speech Recognition
FURTHER READING Boguraev, B., & Pustejovsky, J. (Eds.). (1996). Corpus processing for lexical acquisition. Cambridge, MA: MIT Press. Briscoe, T., & Carroll, J. (1997). Automatic extraction of subcategorization from corpora. Proceedings of the 5th Conference on Applied Natural Language Processing ANLP-97. Retrieved February 9, 2002, from http://acl.ldc.upenn.edu//A/A97/A97-1052.pdf Cruse, D. A. (1986). Lexical semantics. Cambridge, UK: Cambridge University Press. Fellbaum, C. (1998). WordNet: An electronic lexical database. Cambridge, MA: MIT Press. Fillmore, C. J. (1992). Corpus linguistics vs. computer-aided armchair linguistics. Directions in corpus linguistics: Proceedings from a 1991 Nobel Symposium on Corpus Linguistics (pp. 35–66). Stockholm: Mouton de Gruyter.
Fisher, W. M., Zue, V., Bernstein, J., & Pallett, D. (1987). An acoustic-phonetic data base. 113th Meeting of the Acoustical Society of America, Indianapolis, IN.
Fontenelle, T. (2003). Special issue on FrameNet. International Journal of Lexicography, 16(3).
Gildea, D., & Jurafsky, D. (2002). Automatic labeling of semantic roles. Computational Linguistics, 28(3), 245–288.
Grishman, R., Mcleod, C., & Meyers, A. (1994). COMLEX syntax: Building a computational lexicon. Proceedings of the 15th International Conference on Computational Linguistics (COLING-94), Kyoto, Japan.
Ide, N., & Véronis, J. (1998). Word sense disambiguation: The state of the art. Computational Linguistics, 24(1), 1–40.
Koskenniemi, K. (1983). Two-level morphology: A general computational model for word-form recognition and production. Helsinki, Finland: University of Helsinki Department of General Linguistics.
Miller, G. A., Beckwith, R., Fellbaum, C. D., Gross, D., & Miller, K. J. (1990). WordNet: An on-line lexical database. International Journal of Lexicography, 3, 235–244.
Ritchie, G. D., Russell, G. J., Black, A. W., & Pulman, S. G. (1992). Computational morphology: Practical mechanisms for the English lexicon. Cambridge, MA: MIT Press.
Wilks, Y., Slator, B., & Guthrie, L. (1996). Electric words: Dictionaries, computers and meanings. Cambridge, MA: MIT Press.
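To make the six kinds of lexical information listed above concrete, the sketch below shows one way a single lexical unit (LU) might be stored in machine-readable form. It is purely illustrative: the class name LexicalUnit, the field names, the toy lemma table, and all of the example values are assumptions of this sketch, not part of any existing lexical resource or of the approach described in this entry.

    from dataclasses import dataclass, field
    from typing import Dict, List

    # A toy lexical-unit record holding the six kinds of information
    # discussed above. Field names and values are illustrative only.
    @dataclass
    class LexicalUnit:
        lemma: str                                   # 2. dictionary form
        pos: str                                     # 2. part of speech
        pronunciation: str                           # 1. an ARPAbet-style transcription, given only as an illustration
        relations: Dict[str, List[str]] = field(default_factory=dict)     # 3. synonyms, hypernyms, contrast sets
        combinatorics: Dict[str, List[str]] = field(default_factory=dict) # 4. subcategorization, prepositions, collocations
        semantics: str = ""                          # 5. a pointer to a meaning representation
        use_conditions: List[str] = field(default_factory=list)           # 6. genre, register, or topic restrictions

    # A tiny lemma table standing in for a full morphological analyzer (kind 2).
    LEMMAS = {"goes": "go", "gone": "go", "went": "go", "doctors": "doctor"}

    def lemmatize(word: str) -> str:
        """Map an inflected form to its lemma, falling back to the word itself."""
        return LEMMAS.get(word.lower(), word.lower())

    if __name__ == "__main__":
        doctor = LexicalUnit(
            lemma="doctor",
            pos="noun",
            pronunciation="D AA K T ER",
            relations={"synonyms": ["physician"], "hypernyms": ["professional", "person"]},
            combinatorics={"collocations": ["family doctor", "see a doctor"]},
            semantics="a person licensed to practice medicine",
            use_conditions=["neutral register"],
        )
        print(lemmatize("went"), "->", doctor.lemma, doctor.relations["synonyms"])

In a real lexicon the lemma table would be replaced by a full morphological analyzer and the semantics field by a link into a formal meaning representation; the point here is only that each of the six kinds of information occupies a distinct, machine-usable slot.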
LIQUID CRYSTAL DISPLAYS

Flat panel displays are a fascinating technology. From computer monitors to personal digital assistants (PDAs), visual displays are the ultimate human-machine interface. Liquid crystal displays (LCDs) are ubiquitous in such portable electronic products as PDAs, cellular phones, and video recorders, and they have enabled new product categories, such as laptop computers. LCDs are erasing the age-old domination of cathode-ray tubes (CRTs) for desktop computer monitors. Unlike conventional CRT technology, which creates light, an LCD simply acts as a light shutter to modulate a powerful backlight that is on continuously.

To understand the basic operation of LCD devices, one must understand several important properties of liquid crystal materials. The elongated shape of the liquid crystal molecules, often referred to as shape anisotropy, gives liquid crystal materials their unique electrical and optical properties. Dielectric anisotropy refers to the difference in the dielectric constant parallel to and perpendicular to the long axis of the molecule; it is responsible for the reorientation of the liquid crystal when the crystal is subjected to an applied electric field. Typically, liquid crystal materials will align parallel to the direction of the applied electric field. When the liquid crystal molecules change their orientation, their optical appearance also changes. With these unique properties, one can control the optical appearance of pixels, the smallest switching elements on a display, to create an image.

The most common LCD configuration, known as the twisted nematic (TN), employs crossed polarizers and a molecular orientation in which the long axis of the molecules twists through a 90-degree angle between two glass substrates that are somewhat like windowpanes. One unique feature of LCD technology is the way in which the twisted structure of the molecules is created. A polymer layer on each of the two glass substrates is mechanically rubbed with a cloth to create very minute grooves (called nanogrooves) on the surface that uniformly align the long axis of the molecules at each surface. The rub direction on each substrate is placed parallel to the transmission axis of its polarizer, but the two glass substrates are placed one on the other with their polarizing directions crossed. A pair of polarizers normally blocks light when their transmission axes are stacked at right angles (crossed) and transmits light when the axes are parallel. After light passes through the first polarizer, it becomes linearly polarized and follows the liquid crystal's twisted structure. This process, often referred to as adiabatic waveguiding, enables light to escape out the top polarizer (even though the polarizers are crossed). With a backlight, a given pixel is therefore bright in this configuration, and color is controlled at the pixel level with a red, green, and blue color filter array.

Thin transparent conductor layers, usually indium-tin oxide, are deposited on the substrates so that a voltage can be applied to the material. When a voltage is applied to the pixel, an electric field is created perpendicular to the substrates. The liquid crystal molecules align parallel to the electric field,
thereby breaking the twisted symmetry. The light passes through this aligned configuration without any change; therefore the second polarizer absorbs all of the light and the pixel is black. Various levels of gray are possible with intermediate voltages, and the array of thousands of pixels with different color filters can produce a complete image with full color and shading.

Gregory P. Crawford

FURTHER READING

Crawford, G. P., & Escuti, M. J. (2002). Liquid crystal display technology. In J. P. Hornak (Ed.), Encyclopedia of imaging science and technology (pp. 955–969). New York: Wiley Interscience.
Lueder, E. (2001). Liquid crystal displays. New York: Wiley SID.
Wu, S. T., & Yang, D. K. (2001). Reflective liquid crystal displays. New York: Wiley SID.
Yeh, P., & Gu, C. (1999). Optics of liquid crystal displays. New York: John Wiley and Sons.
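As a rough numerical companion to the twisted-nematic description above, the sketch below evaluates the Gooch-Tarry expression commonly used to model the voltage-off transmission of a 90-degree TN cell between crossed polarizers. The function name and the specific numbers (a birefringence of 0.10, 550-nanometer light, and the three cell gaps) are assumptions chosen for illustration, not the parameters of any particular display.

    import math

    def tn_off_state_transmission(cell_gap_nm: float, delta_n: float, wavelength_nm: float) -> float:
        """Normalized voltage-off transmission of a 90-degree twisted-nematic cell
        between crossed polarizers (Gooch-Tarry expression). Polarizer absorption and
        the 50 percent loss of unpolarized backlight at the first polarizer are ignored."""
        u = 2.0 * cell_gap_nm * delta_n / wavelength_nm   # Mauguin parameter
        root = math.sqrt(1.0 + u * u)
        leakage = math.sin(math.pi / 2.0 * root) ** 2 / (1.0 + u * u)
        return 1.0 - leakage

    if __name__ == "__main__":
        # Illustrative (assumed) cell gaps in nanometers; delta_n = 0.10, wavelength = 550 nm.
        for gap_nm in (3000.0, 4800.0, 6000.0):
            t = tn_off_state_transmission(gap_nm, 0.10, 550.0)
            print(f"cell gap {gap_nm / 1000:.1f} um -> off-state transmission {t:.3f}")

With the assumed birefringence, a gap near 4.8 micrometers puts the cell close to the first Gooch-Tarry condition (cell gap times birefringence of roughly 0.48 micrometers at 550 nanometers), so the unpowered pixel transmits nearly all of the backlight; applying a voltage untwists the structure and, as described above, the crossed output polarizer then absorbs the light.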
LITERARY REPRESENTATIONS

Since the industrial revolution, many writers have imagined intelligent machines as a way of seeing humanity from a new perspective. "The Sand-Man" (1817), by the Romantic writer E. T. A. Hoffmann, concerned a man fascinated by a mechanical doll he imagines to be the perfect woman. This idea was borrowed by the composer Léo Delibes for his 1870 ballet Coppélia, and by Jacques Offenbach for his 1880 opera The Tales of Hoffmann. Early in the twentieth century, L. Frank Baum, the author of the children's classic The Wonderful Wizard of Oz (1900) and a subsequent series of Oz books, added the mechanical man, Tik-Tok, to the roster of Dorothy's friends in Oz. A hundred years later, computers feature as props in much ordinary literature, but the deepest explorations of human-machine interaction are in science fiction, where computers and intelligent machines are often main
characters, or are the environment in which the story takes place.
The Hard-Science Paradigm

Intelligent machines may appear in any genre of modern literature, but robots are especially associated with a particular subvariety of science fiction. A questionnaire study conducted by William Bainbridge at a world science fiction convention held in Phoenix, Arizona, in 1978 found that memorable stories about robots tend to belong to the hard-science category. These are stories that take current knowledge from one of the physical sciences and logically extrapolate the next steps that might be taken in that science. They appeal to readers who enjoy reading factual science articles and stories about new technology. Interestingly, the research found that people who like hard-science science fiction tend to prefer stories in which there is a rational explanation for everything, and they like fictional characters who are cool, unemotional, clever, and intelligent. This may mean they not only like intelligent machines but would prefer human beings to be more like robots.

This possibility is illustrated by the robot stories of the preeminent hard-science writer, Isaac Asimov (1920–1992). Simple robots such as are in use today in factories, or robots that are closely supervised by human beings, can be programmed relatively simply. But, Asimov thought, if robots are to operate autonomously they need the equivalent of an ethical code. Thus, he postulated the Three Laws of Robotics (Asimov 1950, 7):

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In Asimov's 1942 story "Runaround" (reprinted in the anthology I, Robot), two men on the sun-facing
Excerpt from "The Sand-Man" (1817) by E. T. A. Hoffmann
In this selection from a classic tale about man and machine, the main character falls in love with a beautiful mechanical doll, never wanting to believe she isn't really human.

The concert came to an end, and the ball began. Oh! to dance with her—with her—that was now the aim of all Nathanael's wishes, of all his desires. But how should he have courage to request her, the queen of the ball, to grant him the honour of a dance? And yet he couldn't tell how it came about, just as the dance began, he found himself standing close beside her, nobody having as yet asked her to be his partner; so, with some difficulty stammering out a few words, he grasped her hand. It was cold as ice; he shook with an awful, frosty shiver. But, fixing his eyes upon her face, he saw that her glance was beaming upon him with love and longing, and at the same moment he thought that the pulse began to beat in her cold hand, and the warm lifeblood to course through her veins. And passion burned more intensely in his own heart also, he threw his arm round her beautiful waist and whirled her round the hall. . . .

Nathanael, excited by dancing and the plentiful supply of wine he had consumed, had laid aside the shyness which at other times characterised him. He sat beside Olimpia, her hand in his own, and declared his love enthusiastically and passionately in words which neither of them understood, neither he nor Olimpia. And yet she perhaps did, for she sat with her eyes fixed unchangeably upon his, sighing repeatedly, "Ach! Ach! Ach!" Upon this Nathanael would answer, "Oh, you glorious heavenly lady! You ray from the promised paradise of love! Oh! what a profound soul you have! my whole being is mirrored in it!" and a good deal more in the same strain. But Olimpia only continued to sigh "Ach! Ach!" again and again.

Source: Hoffmann, E. T. A. (1885). The sand-man. In Weird tales, Vol. 1. New York: Charles Scribner's Sons. (Original work published 1817) Retrieved March 10, 2004, from http://gaslight.mtroyal.ca/sandman.htm
side of the planet Mercury need selenium to repair the system that protects them from the lethal solar radiation. They send Speedy, a robot, to get some from a pool of this element that they themselves cannot
approach. Unfortunately, Speedy has been designed with an especially strong Third Law, and the men’s command to him, which depends for its completion on the Second Law, was not stated with an explicit high priority. When Speedy approaches the molten selenium, he discovers it is too dangerous for him to enter. This gives him what psychologists call an approach-avoidance conflict. Speedy goes crazy and runs around the pool, singing. Knowing they will die if they cannot get control of Speedy, the two men agonize about what to do. Eventually, one of them realizes that the First Law could resolve this conflict between a weakened Second Law and a strengthened Third Law. He intentionally exposes himself to mortal danger, forcing Speedy to save him. Restored to sanity, Speedy is sent under a stronger command to a safer selenium deposit. “Runaround” was probably the first publication to use the word robotics, and all Asimov’s robot stories assume the existence of a distinct engineering discipline devoted to the design of humanlike machines. Asimov called robot engineers roboticists, but this word has not caught on. Asimov’s novel The Caves of Steel (1954) concerns a partnership between a robot detective and a human policeman, who team up to solve a murder that could not have been committed by a robot or a human alone, but only by a combination of both. Much of the story revolves around the competition and growing understanding between the robotic and human investigators. On one level, the theme is the relationship between people and machines, but on a deeper level it is the connection between people and the things they use, including other people. Potentially, a happy ending can be reached when any two beings come to understand each other both as objects and as subjects. In a later mystery novel, The Robots of Dawn (1983), Asimov suggests that human beings are ruled by strict Laws of Humanics, comparable to the Laws of Robotics. For example, the people in The Caves of Steel are under an inescapable psychological compulsion to avoid open spaces, and people in Asimov’s novel The Naked Sun (1956) have a powerful inhibition against ever being in the physical presence of another person. Many of the classical hard-science writers viewed humans and robots in very similar
terms. Adam Link, the robot hero of a series of stories by Eando Binder, asserted his own humanity through the principle that the body, whether flesh or metal, was only part of the environment of the mind. Robert A. Heinlein (1907–1988), a hard-science writer with a highly individualist ideology, postulated that the only thing preventing a machine from becoming a conscious, individual person was the lack of sufficient computing power. In his novel The Moon is a Harsh Mistress (1966), a machine that was designed to handle a vast variety of tasks autonomously is augmented with additional memory, computer vision, and voice, unexpectedly becoming the leader of a rebellion against the collectivist government of Earth.
The Cyberpunk Paradigm

Throughout the history of science fiction, a few writers have contributed stories that were unusually surrealist, psychological, or politically radical. In the 1960s writers and works in this vein were described as New Wave. Bainbridge's questionnaire study found that the New Wave was characterized by avant-garde fiction that experiments with new styles, often based on speculations in the social sciences. Many of the stories concern harmful effects of scientific progress or are critical of contemporary society. Often they deeply probe personal relationships or feelings, and characters tend to be sensitive and introspective.

In the 1980s, this literary movement morphed into the subgenre known as cyberpunk. Cyberpunk continues to experiment with stylistic innovations, tends to be critical of power structures in society, and relishes the lurid extremes of human character and experience, epitomized by deviant sex, drugs, and madness. Cyberpunk assumes a future world in which computers and the Internet constitute the fundamental structure of society. In these stories, government is weak or fragmented, the family is practically nonexistent, and transnational corporations battle one another for information supremacy. In such a world, computer hackers are the most effective rebels.

To a significant extent, the writers of the older hard-science school were individualistic, and therefore focused on an individual robot or computer for
a particular story. Humans interacted with computers and robots by programming them or simply by speaking commands to them. In contrast, cyberpunk writers are concerned with collective phenomena and the oppression of the individual by the social system. Their heroes are antiheroes, sometimes carrying generic names like Case (in William Gibson’s 1984 novel Neuromancer) or Hiro Protagonist (in Neal Stephenson’s 1992 Snow Crash). Computers in such stories typically are not individual characters; they are part of the networked environment of cyberspace. The term cyberspace was introduced in Neuromancer to refer to the dynamic virtual reality people perceive when “jacked into” the worldwide computer network. In an unspecified future year, users connect to this network either through electrode headsets that detect and affect their brainwaves through the skin, or by plugging their brains directly in through jacks surgically inserted behind the left ear. Cyberspace is “a consensual hallucination. . . . a graphic representation of data abstracted from the banks of every computer in the human system. . . . lines of light ranged in the nonspace of the mind, clusters and constellations of data. . . . like city lights, receding” (Gibson 1984, 51). A heavily defended corporate database is represented in cyberspace as a green rectangle, whereas an artificial intelligence (AI) is a featureless, white square. When the AI sends a computer virus to invade the database, it is a “polychrome shadow, countless translucent layers shifting and recombining” (168). The experience of cyberspace is “bodiless exaltation” (6) and therefore addictive. Neuromancer’s protagonist was a professional data thief, addicted to cyberspace, who stole from his employers. In punishment, they crippled his nervous system so he could no longer experience cyberspace, leaving him desperately self-destructive. He becomes enmeshed in a confused net of conspiracies, spun by rival corporations and artificial intelligences, and is assisted by dubious friends, including the computer-recorded personality of his deceased hacking teacher. The nearest thing to a government that appears in the novel is the Turing Registry, an agency that tries to prevent any of the autonomous artificial intelligences from escaping human control.
Two books about travelers in search of themselves and their worlds: The Wonderful Wizard of Oz (1900) and The Shockwave Rider (1975).
Much of Stephenson’s Snow Crash also takes place in cyberspace, where people are represented by avatars (computer-generated characters) of varying degrees of cost, artistry, and surrealism. Gibson wanted to call computer-generated personal representatives constructs, but Stephenson’s term avatars has been adopted by the computer science community. In Hinduism, an avatar is a particular form in which a deity may appear to human beings; by extension, a computer avatar is a virtual form in which humans appear to one another inside cyberspace. The avatars of different users meet on the avenues of the Metaverse, a vast, virtual-reality city, and the rules of this environment limit the avatars’ size and their ability to harm one another. Users do not jack their brains directly into the Metaverse, as they do into Gibson’s cyberspace, but merely wear special goggles on which
red, blue, and green lasers paint three-dimensional images. The Snow Crash of the title is a kind of drug, a computer virus, or a pattern of information that affects the human mind in the same way a virus affects a computer. In the Metaverse, Snow Crash appears in the form of a small calling card, or a scroll that unrolls to reveal a flashing image of apparently random bits. One possibility explored by the novel is the idea that each human religion is an information virus that spreads (for better or worse) from mind to mind. Another is the notion that the natural programming language of the human mind, the fundamental machine language of the brain, is ancient Sumerian.

The cyberpunk genre even explores now-obsolete human-computer interfaces. The Difference Engine (1991), a historical novel that Gibson wrote
in collaboration with Bruce Sterling, imagines that the nineteenth-century inventor Charles Babbage (1791–1871) succeeded in building the mechanical computer he actually failed to complete, thereby introducing the information age a century early and transforming industrial society. Programmers, in this alternate Victorian society, are called clackers, because of the noise produced by the machines that read their data cards, and computing is clacking. In classic cyberpunk fashion, the story critiques the Victorian hope that a partnership between technological innovation and social order can overcome the fundamental dynamics of human conflict.
Conflict between Humans and Machines

Robots, computers, and information systems frequently become entangled in conflicts between human beings. An early example of warfare on the Internet is John Brunner's 1975 novel The Shockwave Rider. In the novel, as the United States approaches the year 2020, it becomes a fragmented society in which corrupt government covertly magnifies social problems and individual psychopathology, the better to control the demoralized population. Published two years before the first home computers became available and two decades before the first commercial Web browser, Brunner's novel predicted correctly that every home could have a computer connected to the Internet (with the standard keyboard, monitor, and printer) and that the Internet could also be accessed via mobile telephones. The secret Tarnover project to create superior data warriors backfires when its best computer saboteur escapes. Sending software tapeworms across the Internet to modify selected data in the world's connected information systems, he creates a series of temporary identities for himself. When Tarnover and the Federal Bureau of Data Processing conspire to destroy the few remaining free communities, he writes a tapeworm to deliver the weapon that government fears the most: truth.
Once Brunner's science-fiction technology had become real, mainstream writers exploited information warfare for its dramatic value. The Net Force series, created by Tom Clancy and Steve Pieczenik in 1998, concerns the Net Force branch of the Federal Bureau of Investigation, portrayed as heroes, in contrast to Brunner's villainous Federal Bureau of Data Processing. In Net Force, the mobile telephone computer that Brunner imagined becomes a virgil (a Virtual Global Interface Link) that combines telephone, camera, scanner, fax, radio, television, GPS, and computer. Virgil, it will be remembered, was the ancient Roman poet whom the Italian poet Dante envisioned as his companion on his journey into hell in the first portion of The Divine Comedy (written c. 1310–1314), and a virgil accompanies the head of Net Force to his death in the first chapter of the first novel in the series. In 2010, the time of the first story, many people still use keyboard, mouse, and monitor to interact with their computers. But many prefer virtual-reality headsets, visualizing choices in scenarios such as a private meeting in a forest clearing. Different users' scenarios can be compatible with one another. For example, when two people compete with each other in virtual reality, one may experience the competition as a high-speed highway race, while the other may perceive them to be driving speedboats up a river. When the villain wants to sabotage the data systems of several corporations and governments, he employs the scenario that he is a German soldier in World War I, killing onrushing Allied troops.

Several authors have argued that heavy reliance upon computers could make a high-tech society especially vulnerable to low-tech enemies. In Mack Reynolds's novel Computer War (1967), a bellicose nation called Alphaland attacks peaceful Betastan, on the advice of its computers. But the Betastani refuse to respond in the ways predicted by Alphaland's machines, and at one point they detonate an explosive magnetic device that erases all the computer memories in the Alphaland capital. John Shirley's often-reprinted cyberpunk story "Freezone" (1985) imagines that the economy of the capitalist world collapsed into the Computer Storage Depression when the electromagnetic pulse from a nuclear
weapon detonated by Arab terrorists erased all the data in the United States.

Many writers have explored the possible conflicts that might arise between people and their machines. Perhaps the most influential such story was the 1921 drama R.U.R. (Rossum's Universal Robots), by the Czech writer Karel Capek (1890–1938). This work introduced the term robot, from a Czech word meaning heavy labor, with the implication of compulsory work or serfdom. Rossum, whose name may be derived from the Czech word for mind or reason, invented these manlike machines in order to prove that God was unnecessary. After Rossum's death, his heirs built a vast industry supplying the labor needs of the world with these robots. The motives of the builders of the robots were various. Some simply wanted to earn money. Others wanted to liberate the lower social classes from unpleasant labor and turn everybody into aristocrats. Robots were far cheaper than human laborers, so the world became awash with wealth, no matter how it was shared across the social classes. Once people no longer needed to work, however, they seemed to lose the will to live and stopped having children. Conflict between people continued, but now the soldiers were robots rather than humans. Believing that people were irrational and inefficient, the robots rebelled and began to exterminate the entire human species. The play is a farce, but it examines profound issues regarding the nature of humanity and the relationship between humans and their creations.

A substantial dystopian literature has postulated various ways in which robots or computers might wrest control of the world from humans, imposing cyberdictatorship and eradicating freedom. The Humanoids (1950), by Jack Williamson, imagines that perfect robots were invented and programmed to follow strictly a prime directive: to serve and obey, and guard men from harm. These seemingly benevolent machines manufacture endless copies of themselves and set about liberating humanity from labor and danger. Soon, everybody has at least one invincible, ever-present robot companion who prevents them from doing anything dangerous, such as using a tool, engaging in a physical sport, or conducting scientific research. Player Piano (1952), by Kurt Vonnegut (b. 1922), depicts a future
United States run by engineers, in which automation (epitomized by the vast computer, EPICAC XIV) has thrown most of the workforce into unemployment and is gradually rendering even the most highly skilled jobs obsolete, including, eventually, those of the engineers themselves. Michael Crichton's mainstream novel Prey (2002) warns that corporate greed may inadvertently produce lethal threats through a combination of poorly understood technological innovations at the intersection of computing, genetic engineering, and nanotechnology. This story of a monster that terrorizes workers at a remote research laboratory was partly inspired by the new swarm concept in robotics, the idea that a very large number of individually unintelligent machines might achieve intelligence by interacting socially with one another. But the fundamental concept is one that has very recently achieved prominence in science policy debates in the real world, namely, technological convergence. Thoughtful scientists and engineers in many fields have begun to explore ways in which human abilities may be greatly enhanced through convergence of information technology, biotechnology, nanotechnology, and cognitive science. The conscious aim is certainly not to create monsters, although participants in the convergence movement are very conscious of the need to consider the social implications of their work. Rather, their aim is to strengthen the creativity and freedom of individual humans, perhaps ultimately through some kind of convergence between humans and the tools that serve them.
Convergence of Humans and Machines

From the very beginnings of the science fiction genre, many stories imagined that technology could augment human abilities. For example, Heinlein's 1942 hard-science story "Waldo" concerned a disabled man who invented remote manipulator arms that compensated for his disabilities. For decades, other science fiction writers used the term waldo to mean remote manipulator, but the word never caught on in actual robotics.
Excerpt from Isaac Asimov’s I, Robot
In his classic science fiction anthology I, Robot (1950), Isaac Asimov looks ahead to a world in which robots move from primitive machines in the early twenty-first century to highly sophisticated "creatures" who may indeed rule the world a short fifty years later. The stories in the anthology are told by "Robopsychologist" Dr. Susan Calvin to a reporter from the Interplanetary Press. In the extract below, Dr. Calvin reminisces about her fifty-year tenure at U.S. Robots.

The offices and factories of U.S. Robots were a small city; spaced and planned. It was flattened out like an aerial photograph.

"When I first came here," she said, "I had a little room in a building right about there where the fire-house is now." She pointed. "It was torn down before you were born. I shared the room with three others. I had half a desk. We built our robots all in one building. Output-three a week. Now look at us."

"Fifty years," I hackneyed, "is a long time."

"Not when you're looking back at them," she said. "You wonder how they vanished so quickly."

She went back to her desk and sat down. She didn't need expression on her face to look sad, somehow. "How old are you?" she wanted to know.

"Thirty-two," I said.

"Then you don't remember a world without robots. There was a time when humanity faced the universe alone and without a friend. Now he has creatures to help him; stronger creatures than himself, more faithful, more useful, and absolutely devoted to him. Mankind is no longer alone. Have you ever thought of it that way?"

"I'm afraid I haven't. May I quote you?"

"You may. To you, a robot is a robot. Gears and metal; electricity and positrons. Mind and iron! Human-made! if necessary, human-destroyed! But you haven't worked with them, so you don't know them. They're a cleaner better breed than we are."

I tried to nudge her gently with words, "We'd like to hear some of the things you could tell us; get your views on robots. The Interplanetary Press reaches the entire Solar System. Potential audience is three billion, Dr. Calvin. They ought to know what you could tell them on robots."

It wasn't necessary to nudge. She didn't hear me, but she was moving in the right direction. "They might have known that from the start. We sold robots for Earth-use then-before my time it was, even. Of course, that was when robots could not talk. Afterward, they became more human and opposition began. The labor unions, of course, naturally opposed robot competition for human jobs, and various segments of religious opinion had their superstitious objections. It was all quite ridiculous and quite useless. And yet there it was."

Source: Asimov, I. (1950). I, robot (pp. 16–17). Garden City, NY: Doubleday & Company.
Hard science merges with cyberpunk in many stories that describe how humans of the future might merge with their machines. In the 1954 story "Fondly Fahrenheit," by Alfred Bester (1913–1987), a man and his robot flee from planet to planet to escape justice for murders they are committing. Psychologically, the two have blended. This is evident in the very style of the writing, because the first-person narrative perspective constantly shifts from one to the other, even within the same paragraph. The hard-science aspects of the story include three laws of robotics that are rather different from those propounded by Asimov:
1. A robot must obey the government, and state directives supersede all private commands.
2. A robot cannot endanger life or property.
3. A robot must obey its owner.

These laws can conflict with each other, especially for sophisticated robots that are capable of performing a wide range of tasks and belong to corrupt or insane owners. The first rule is meant to solve such problems. In the modern world the state is the ultimate judge of morality, for humans and machines alike. What happens, then, if the owner tells the robot that a state command did not actually come from
the state, but is a lie or an error in communication? Overcome by the heat of human emotion, such a robot might even commit serial homicide.

Bester's novel The Computer Connection (1974) depicts the merging of human and machine in another way. It concerns a scientist at the Jet Propulsion Laboratory whose personality lands in his supercomputer when an epileptic seizure coupled with profound emotional shock drives it out of his body. In Software (1982), by Rudy Rucker, human minds are uploaded to computers by scanning the brain as knives destructively slice it apart. Greg Bear's Blood Music (2002) imagines that a combination of genetic and electronic technologies could create noocytes, viruslike molecular computers that absorb the minds and dissolve the bodies of human beings. The nondestructive scanning device in The Terminal Experiment (1995), by Robert J. Sawyer, employs a billion nanotechnology sensors and computer integration techniques to map the neural pathways of the brain from the surface of the scalp. Scanning sessions bombard the subject with sights and sounds to activate all the brain's neural pathways. In both The Computer Connection and The Terminal Experiment, uploading a person to a computer removes normal inhibitions, so the person becomes a murderer. When people are uploaded in Greg Egan's Permutation City (1994), they immediately commit suicide.

In the classic 1953 novel The City and the Stars, by Arthur C. Clarke (b. 1917), people uploaded into the computer-generated city, Diaspar, lose their capacity to explore and evolve. Diaspar is an eternal city, and its people are eternal as well. Superficially they are like humans, except that they produce no children. In a thousand-year lifetime, they enjoy adventure games in computer-generated virtual reality, create works of art that are destined to be erased, and gradually grow weary. Then they enter the Hall of Creation to be archived as patterns of electrical charges inside the Central Computer. After a few thousand years, or a few million, they will be reconstituted again, to experience another life in Diaspar before once again entering the archives. Each year, about ten thousand people are restored to life, always a somewhat different combination of
individuals but existing within a fixed population size and following set laws of behavior. At birth, a Diasparan is biologically about twenty, but requires about twenty years to mature. During this time the individual does not possess any memories of his or her previous lives. Then the memories return, and he or she lives out a life that is practically a replay of the previous one, content within the artificial womb of the city. Thus they fail to use their advanced technology to do anything really productive, such as exploring the stars.

If human beings really do merge with their computers over the coming centuries, we can wonder whether this will help them to achieve great things in the real universe, or to retreat from challenge into a meaningless, virtual existence.

William Sims Bainbridge
FURTHER READING

Asimov, I. (1950). I, robot. New York: Grosset and Dunlap.
Asimov, I. (1954). The caves of steel. Garden City, NY: Doubleday.
Asimov, I. (1956). The naked sun. New York: Bantam.
Asimov, I. (1983). The robots of dawn. New York: Ballantine.
Bainbridge, W. S. (1986). Dimensions of science fiction. Cambridge, MA: Harvard University Press.
Baum, L. F. (1904). The marvelous ozama of Oz. Chicago: Reilly and Britton.
Bear, G. (2002). Blood music. New York: ibooks.
Bester, A. (1974). The computer connection. New York: ibooks.
Bester, A. (1997). Virtual unrealities: The short fiction of Alfred Bester. New York: Vintage.
Binder, E. (1965). Adam Link—Robot. New York: Paperback Library.
Brunner, J. (1975). The shockwave rider. New York: Ballantine.
Cadigan, P. (Ed.). (2002). The ultimate cyberpunk. New York: ibooks.
Capek, K. (1990). Toward the radical center: A Karel Capek reader. New York: Catbird.
Clancy, T., & Pieczenik, S. (1998). Net force. New York: Berkley.
Clarke, A. C. (1953). The city and the stars. New York: Harcourt, Brace and Company.
Clute, J., & Nicholls, P. (1995). The encyclopedia of science fiction. New York: St. Martin's Griffin.
Crichton, M. (2002). Prey. New York: HarperCollins.
Egan, G. (1994). Permutation city. New York: Harper.
Gibson, W. (1984). Neuromancer. New York: Ace.
Gibson, W., & Sterling, B. (1991). The difference engine. New York: Bantam.
Heinlein, R. A. (1950). Waldo and Magic Inc. Garden City, NY: Doubleday.
Heinlein, R. A. (1966). The moon is a harsh mistress. New York: Orb.
Hoffmann, E. T. A. (1885). The sand-man. In Weird tales (J. T. Bealby, Trans.). New York: Scribner's. (Original work published 1817)
Reynolds, M. (1967). Computer war. New York: Ace.
Roco, M. C., & Bainbridge, W. S. (2003). Converging technologies for improving human performance. Dordrecht, Netherlands: Kluwer.
Rucker, R. (1982). Software. New York: Avon.
Sawyer, R. J. (1995). The terminal experiment. New York: HarperCollins.
Spiller, N. (Ed.). (2002). Cyber reader: Critical writings for the digital era. New York: Phaidon.
Stephenson, N. (1992). Snow crash. New York: Bantam.
Sterling, B. (1986). Mirrorshades: The cyberpunk anthology. New York: Ace.
Vonnegut, K. (1952). Player piano. New York: Delta.
Williamson, J. (1950). The humanoids. New York: Grosset and Dunlap.