UNIVERSITY of
GLASGOW
Computer Applications and Quantitative Methods in Archaeology 1994
Edited by
Jeremy Huggett & Nick Ryan with Ewan Campbell, Clive Orton and Stephen Shennan
BAR International Series 600 1995
Published in 2019 by BAR Publishing, Oxford

BAR International Series 600
Computer Applications and Quantitative Methods in Archaeology 1994

© The editors and contributors severally and the Publisher 1995

The authors' moral rights under the 1988 UK Copyright, Designs and Patents Act are hereby expressly asserted.

All rights reserved. No part of this work may be copied, reproduced, stored, sold, distributed, scanned, saved in any form of digital format or transmitted in any form digitally, without the written permission of the Publisher.

ISBN 9780860547778 paperback
ISBN 9781407349046 e-book
DOI https://doi.org/10.30861/9780860547778

A catalogue record for this book is available from the British Library.

This book is available at www.barpublishing.com

BAR Publishing is the trading name of British Archaeological Reports (Oxford) Ltd. British Archaeological Reports was first incorporated in 1974 to publish the BAR Series, International and British. In 1992 Hadrian Books Ltd became part of the BAR group. This volume was originally published by Tempvs Reparatvm in conjunction with British Archaeological Reports (Oxford) Ltd / Hadrian Books Ltd, the Series principal publisher, in 1995. This present volume is published by BAR Publishing, 2019.
BAR titles are available from:

BAR Publishing
122 Banbury Rd, Oxford, OX2 7BP, UK
Email: [email protected]
Phone: +44 (0)1865 310431
Fax: +44 (0)1865 316916
www.barpublishing.com
Contents

Preface. Jeremy Huggett and Nick Ryan ... v

Innovation, Confrontation, and Transformation
1. Has archaeology remained aloof from the information age? Ben Booth ... 1
2. Archaeological computing, archaeological theory, and moves towards contextualism. Gary Lock ... 13
3. The good, the bad, and the downright misleading: archaeological adoption of computer visualisation. Paul Miller and Julian Richards ... 19
4. Democracy, data and archaeological knowledge. Jeremy Huggett ... 23

IT in Education and Communication
5. The development and implementation of a computer-based learning package in archaeology. Roger Martlew and Paul Cheetham ... 27
6. Characterizing novice and expert knowledge: towards an intelligent tutoring system for archaeological science. Graham Tilbury, Ian Bailiff and Rosemary Stevenson ... 31
7. The ENVARCH project. Anja-Christina Wolle and Clive Gamble ... 35
8. Multimedia communication in archaeology - why and how? Kai Jakobs and Klaus Kleefeld ... 43
9. An electronic guide to the buildings of ancient Rome. Philip Perkins ... 47

Quantitative Applications and Methodologies
10. The incorporation of cluster analysis into multidimensional matrix analysis. John Wilcock ... 55
11. Graphical presentation of results from principal components analysis. M. J. Baxter and C. C. Beardah ... 63
12. Measuring biological affinity among populations: a case study of Romano-British and Anglo-Saxon populations. Jeff Lloyd-Jones ... 69
13. Spatial interrelationships analysis and its simple statistical tools. Germa Wunsch ... 75
14. Conservation condition surveys at the British Museum. M. N. Leese and S. M. Bradley ... 81
15. Flavian fort sites in South Wales: a spreadsheet analysis. J. W. M. Peterson ... 87
16. Identifying your local slag ... the use of quantitative methods and microstructure analysis in determining the provenance of British bloomery slags from the late Iron Age to the end of the Roman occupation. Stephen G. Bullas ... 95
17. Quantitative analysis of Etruscan cinerary urns. Paola Moscati ... 101
18. Multivariate methods for the classification of Lower and Middle Palaeolithic stone inventories. Thomas Weber ... 105
19. A method for the analysis of incomplete data and its application to monastic settlements in Italy (4th-6th century). Beatrice Caseau and Yves Caseau ... 113
20. Survey sampling, right or wrong? H. Kamermans ... 123
21. Global palaeoclimate modelling approaches: some considerations for archaeologists. Alicia L. Wise and Trisha Thorme ... 127

Survey and GIS Applications
22. ID-MARGARY - an Inference Database for the MApping, Recognition and Generation of Ancient Roads and trackwaYs. Stephen G. Bullas ... 133
23. An application of GIS to intra-site spatial analysis: the Iberian Iron Age cemetery of El Cigarralejo (Murcia, Spain). Fernando Quesada Sanz, Javier Baena Preysler and C. Blasco Bosqued ... 137
24. A GIS approach to the study of non-systematically collected data: a case study from the Mediterranean. Federica A. Massagrande ... 147
25. Detection of beacon networks between ancient hill-forts using a digital terrain model based GIS. Kazumasa Ozawa, Tsunekazu Kato and Hiroshi Tsude ... 157
26. Remote sensing, GIS and electronic surveying: reconstructing the city plan and landscape of Roman Corinth. David Gilman Romano and Osama Tolba ... 163
27. Image processing and interpretation of ground penetrating radar data. Vanessa S. Blake ... 175
28. Reconstructing a Bronze Age site with CAD. K. Kotsakis, S. Andreou, A. Vargas, and D. Papoudas ... 181

Regional and National Database Applications
29. Computer applications in the fields of archaeology and museology in Hungary. Attila Suhajda ... 189
30. Computerising the lists of historic buildings in England: a historical case study on initiating a national project. Nigel Clubb ... 193
31. Concepts of informational and statistical processing of archaeological data of the computer centre of the Institute of Archaeology and Ethnography in Novosibirsk. Anatoly P. Derevianko, Yury P. Khol'ushkin, Vasily T. Voronin, Dmitry V. Ekimov, Dmitry N. Goriachev, Vladimir V. Schipunov, and Helen V. Kopteva ... 203

Excavation and Post-Excavation Applications
32. Towards a computerised desktop: the Integrated Archaeological Database System. Michael J. Rains ... 207
33. The excavation archive as hyperdocument? Nick Ryan ... 211
34. The Bonestack: a stack for old bones. Annie Milles ... 221
35. INSITE: an interactive visualisation system for archaeological sites. Alan Chalmers, Simon Stoddart, John Tidmus and Roger Miles ... 225
36. SYSAND: a system for the archaeological excavations of Anderitum (Javols, Lozère, France). Andrea Maggiolo-Schettini, Paola Seccacini, Carmella D. Serratore, Raffaella Pierobon-Benoit, and Gianluca Soricelli ... 229
37. Computer-aided design techniques for the graphical modelling of data from the prehistoric site of Runnymede, Berkshire. P. L. Main, A. J. Spence and A. F. Higgins ... 235
38. The Archaeological Data Archive Project. Harrison Eiteljorg II ... 245

Textual Applications
39. A new method of off-line text recognition. Susan Laflin ... 249
40. The use of computers in the decipherment of the Hackness Cross cryptic inscriptions. Richard Sermon ... 253
Preface

Jeremy Huggett (1) and Nick Ryan (2)

(1) Department of Archaeology, University of Glasgow, Glasgow G12 8QQ, UK
(2) Computing Laboratory, University of Kent at Canterbury, Kent, CT2 7NF, UK
CAA94 was held at the University of Glasgow from 23rd March to 26th March 1994, and hosted by the Department of Archaeology. Apart from the conference sessions themselves, the two main events which are likely to remain in participants' memories are the wild night of dancing at the conference ceilidh, and the sounds of bagpipes and drums playing outside the lecture room during the plenary session.

The plenary session presented a challenge to participants: to look at the most fundamental issue at the heart of CAA - the relationship between us as archaeologists and computers. The idea was to focus on the present - what are we doing with computers today? - and look forward to the future - where do we see ourselves going in the next few years? To what extent has all this new technology really improved the way we do archaeology? Are we slaves to the machine or in control of our destiny? Do we have any clear idea why we do some of the things that we do? Are we justified in the claims we make and the beliefs we hold? Are the problems we face today any different to those of 10, 15 or 20 years ago?

These questions are far more fundamental than a desire to have the biggest and fastest computer on the block - they strike right at the heart of our relationship with and attitude to computers in archaeology. As archaeologists, we are not really strangers to this sort of questioning - after all, archaeology is a self-aware and self-assessing discipline, perhaps more so than most. But these questions are rarely asked in the context of computer applications. Furthermore, these questions lead us on to more important issues. How do we see our future as computing archaeologists? What do we want to achieve? What kind of agenda should we have? Where should we be in, say, five years' time? And how are we going to get there?
This was why the plenary session was entitled Innovation, Confrontation and Transformation: the role of computers could be at the forefront of new techniques, confronting the methodologies and theories that we cling to, and transforming the discipline itself. The conference plenary was in many respects successful at setting the tone for the conference as a whole and most of those plenary presentations are reproduced here (Booth, Lock, Huggett, and Miller and Richards). Several themes began to emerge during the plenary which were reinforced during the conference sessions: the use of the Internet (practically demonstrated by Sara Champion in a series of workshops); the use of IT in education (again, demonstrated extensively in the form of the UK-wide Teaching and Learning Technology Programme Archaeology Consortium products); and increasing discussion about the relationships of computers and archaeological theory, often evidenced through the use of Geographical Information Systems.

Acknowledgements
We would like to formally thank those who helped during the conference: Beth Bartley, Ewan Campbell, Stuart Halliday, Janet Hooper, William Kilbride, Olivia Lelong, Allan Rutherford, and Rob Squair. In particular, we would like to thank Jen Cochrane for all her work before, during and after the conference in looking after the financial affairs and accommodation. The editing and production of the proceedings would not have been possible without the assistance and advice of the specialist referees: Ewan Campbell, Clive Orton and Stephen Shennan. We would also like to thank Lorraine McEwan for her assistance with a number of figures in this publication; without her contribution we would have had difficulty in meeting the printer's deadline. For those interested in such things, the proceedings were prepared and edited in Glasgow using Word for Windows 6; papers were then emailed as uuencoded files to Canterbury for final editing, incorporation of illustrations and production of camera-ready copy.
Glasgow and Canterbury 28th February 1995
1 Has archaeology remained aloof from the information age?

Ben Booth
The Science Museum, Exhibition Road, London SW7 2DD, U.K.

1.1 Introduction
The 'Information Age' has been marked by substantial and continuing increases in computer processing power, by the convergence of computing and telecommunications, and latterly by convergence with mass entertainment. One of the major benefits of these developments has been the potential for the dissemination of information via the now widespread high speed networks and easily distributed media such as CD-ROM. The use of such techniques as multi-media, hyper-media and virtual reality has made this information more accessible to its users. A glance through the daily papers is sufficient to show that these developments are not merely of interest to computer specialists and business, but will affect all areas of society, both at work and at home (e.g. Freedland 1994). This paper reviews overall developments in information technology over the last ten years, and examines how such techniques have been used for the dissemination of archaeological information. Developments in the related areas of libraries and museums are also described. In conclusion it is argued that whilst these techniques are beginning to be used for the dissemination of archaeological information, the profession as a whole still aspires to conventional publication in a monograph or journal. It is suggested that this cannot be healthy for a discipline which depends so much on public support, and which has so much potential to enrich people's lives.
1.2 Information technology
The last decade has seen very substantial developments in computer hardware and software, with perhaps the most significant change being the increase in computing power available to users. In the early 1980s the maximum generally available configuration consisted of an S-100 bus based machine with a Z80 chip, 64K RAM, a 5Mb hard disk and a 100K floppy disk drive, using the CP/M operating system. Such a system would cost several thousand pounds. This standard was superseded by the IBM PC and its imitators - the Intel 8086 and 8088 based family of microcomputers, with the MS-DOS operating system. A typical configuration consisted of 640K RAM, a 20Mb hard disk and a 360K floppy disk drive, at a similar cost to the CP/M machines. The arrival of Alan Sugar's AMSTRAD range of IBM compatible computers in 1987 brought the cost of such a system down to under a thousand pounds, although the overall standard of the equipment in terms of screen, keyboard and durability was not considered by many to be suitable for business use. However other manufacturers quickly followed suit, and provided full specification microcomputers for a similar outlay. This order of cost (a thousand pounds) has subsequently remained the stable norm for the base configuration, despite inflation and very substantial increases in specification. A typical microcomputer now has an 80486 processor, 8Mb RAM, a 130Mb hard disk and a 1.4Mb floppy disk drive. Whilst MS-DOS has continued to evolve, the Windows operating system is now the standard. It is likely that a 600Mb CD-ROM drive will become usual in the near future as a means of importing software and data, and when writable CD-ROM drives reduce in cost sufficiently, these will become established as a slow but permanent backup and storage medium (Hyon 1994).

In parallel with developments in the Intel arena, Apple Computer have been providing systems based on the Motorola 68000 series of processors. After the demise of the Apple II, the short-lived Lisa was followed by the Macintosh family, characterised by their 'Windows, Icons, Mice and Pull-down menu' (WIMP) interface, developed from earlier work by Xerox. Apple microcomputers have a significant following in the graphics, publishing and educational sectors, but have not captured more than about ten percent of the overall market for personal computers. The Apple Power-Mac now provides compatibility with MS-DOS machines (Sheldon et al. 1992), but there is a penalty to be paid in terms of speed and the amount of memory which is required, and therefore the cost of an effective system is higher than for the equivalent IBM compatible PC.

Multiple access to computers, peripherals and data is provided either through multi-user access to a single system, or through the connection of multiple servers, peripherals and users via single or linked networks. Multi-user access to microcomputer facilities was provided in the early 1980s through the use of the MP/M operating system running on one of the Z80 family of microcomputers, and latterly with the introduction of 16-bit processors running Concurrent CP/M-86.
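As a back-of-envelope check on the scale of the storage change described above (taking the nominal 600Mb CD-ROM and 1.4Mb floppy capacities quoted in the text at face value):

```python
# Nominal capacities quoted above; how many floppies one CD-ROM replaces.
cd_rom_mb = 600
floppy_mb = 1.4
disks = int(cd_rom_mb / floppy_mb)
print(disks)  # 428: one CD-ROM holds over four hundred floppy disks' worth
```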
For computers in the mini-computer class there were a range of proprietary operating systems, with Unix beginning to be quite widely used on machines of this scale. For the 80x86 family of processors Xenix (a version of Unix specifically developed for this platform) was introduced as a multi-user alternative to the standard MS-DOS. As an alternative, OS/2 provided multi-tasking. Perhaps because of the significant memory and other resources required for its implementation, OS/2 was not initially widely used except for specialist applications, such as building management systems and network management, where its multi-tasking and graphical interface were especially appropriate. Despite the promise of OS/2, MS-DOS and latterly Windows have remained the de facto standard for
single user PCs, with Unix as the standard for multi-user platforms at the PC and mini-computer scale. Local area networks provide an alternative means of achieving shared access to central (and distributed) facilities. Up until the late 1980s there were a range of proprietary network systems, offering facilities from simple file and message transfer to access to central servers. Currently the most widely used network protocol is TCP/IP, a broadcast protocol using physical cabling conforming to a bus topology. A sizeable minority of networks use the Token Ring protocol, which as its name suggests employs token passing on a ring topology. Cabling technology has evolved from thick and thin ethernet to a structured approach, typically consisting of a mixture of unshielded twisted pair (UTP), and fibre-optic. UTP is now usually operated at a speed of 10 megabits (Mb) per second, but speeds of 100Mb are claimed by suppliers. Fibre-optic cable can transmit at speeds of 100Mb for FDDI (Fibre Distributed Data Interface), and higher for methods such as ATM (Asynchronous Transfer Mode). These higher speeds are particularly relevant where the large volumes of data required for high resolution images and moving images need to be moved over the network. For smaller networks, servers employing the Novell Netware operating system have become the norm. Typically these are based on an 80x86 platform, with most mainstream MS-DOS packages having Novell versions. Novell is an obvious choice for small networks, typically growing to a maximum of tens of users, although the ability to satisfactorily support larger numbers is claimed by the manufacturers, and Netware version 4.x addresses the requirements of multi-site installations. Several small Novell networks can be linked together. For larger networks, particularly spread across wide-area communications, Unix servers are more often used, with TCP/IP as the network protocol.
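A rough calculation shows why image work drives the demand for these higher link speeds. The 10Mb image size used here is an illustrative assumption, and protocol overhead and contention are ignored:

```python
def transfer_seconds(size_mbytes: float, link_mbits_per_sec: float) -> float:
    """Idealised time to move a file of size_mbytes over the given link."""
    return (size_mbytes * 8) / link_mbits_per_sec

image_mbytes = 10  # assumed size of one uncompressed high resolution image
print(transfer_seconds(image_mbytes, 10))   # 10Mb/s UTP: 8.0 seconds
print(transfer_seconds(image_mbytes, 100))  # 100Mb/s fibre: 0.8 seconds
```

On a shared 10Mb/s segment a single large image occupies the medium for seconds at a time, which is why moving images in particular push installations towards fibre.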
Typically Netware involves larger quantities of data being transferred over the network, whilst the Unix based networks more usually operate in a mode which imitates terminal access: the 'client-server' model, which optimises traffic on the network and utilises local processing power. It has been available for some time, although many applications developers remain shy of it. A networked version of Windows (Windows NT) is also available; it may become commonly used in the longer term (and is already presenting a challenge to Unix), but it has yet to be widely adopted (Yager & Smith 1992). Apple microcomputers can access both Novell and Unix servers via a range of protocols, and also have AppleTalk, their own integral but rather rudimentary networking capability. Over this period there have been considerable advances in wide area networks, both in terms of what is provided by utilities such as British Telecom and Mercury for public use, and what is available via the academic networks. In the early 1980s connections supplied by British Telecom included a range of leased lines, which were expensive in terms of the capacity provided, and therefore out of reach of most museums and archaeological bodies. Modem use
of dial-up lines (mostly analogue links throughout) was possible, but these were not very reliable. EPSS (the Experimental Packet Switching Service) allowed modem access to a high capacity link which became more reliable through time and evolved into PSS. Today a range of leased lines (at low and high speeds) are available, together with the dial-up PSS service. The ISDN2 (Integrated Services Digital Network Version 2) service from British Telecom provides dial-up access to lines of almost limitless capacity, thus enabling the bandwidth of the link to rise in order to adapt to the volume of data being carried. Because of the potential for unplanned costs, ISDN lines tend to be utilised for occasional use (particularly for high bandwidth purposes such as video conferencing), as backup to fixed links, or where the overall anticipated data volumes are low. However, lower costs may lead to more widespread use (Andrews 1994). Except for the final link from the network to the subscriber, these connections are now entirely provided through digital networks with a high degree of reliability. For the higher bandwidth services (typically from 64K upwards) the carrier's fibre-optic network is extended so as to connect directly to the customer's network. Connection costs to 64K plus networks are therefore high, but the absence of a copper based analogue component facilitates high data transmission rates. Up until recently JANET (the Joint Academic NETwork) provided a comprehensive but relatively low speed link between academic institutions in the United Kingdom, and via the Internet to the rest of the world. In the past archaeologists and museums were discouraged from accessing JANET, as it was felt that they would overload it with unscholarly data. The newly available SuperJANET offers a much higher bandwidth, enabling larger volumes of data to be transmitted. It is debatable for how long this very high capacity will remain unsaturated.
However in the short term its use by museums and others who are able to utilise it for image transmission and other high volume activities is being encouraged. The majority of the academic centres in the United Kingdom now have access to SuperJANET, and elsewhere in the world high bandwidth networks are being developed. The various forms of optical storage technology, including particularly CD-ROM, are an alternative to wide area communications for the distribution of large quantities of data, and are often used for encyclopedic works. This medium is particularly suitable for circumstances where a large volume of data which changes relatively slowly needs to be disseminated. It is a complementary method to network communications, as it allows desktop access to very large amounts of data (perhaps networked locally), but is not dependent on wide area connections. Technological improvements in processor speed, and in memory, storage and network capacities, have facilitated the widespread use of image based and graphical facilities which had previously been prohibitively expensive. A
wide range of means of display and access are facilitated by the ability to display images through an interactive interface which allows users to browse through data in a number of ways. The whole gamut of multimedia includes text, images (still and moving), and sound. In the user's imagination the term 'multimedia' usually conjures up all of these, but in practice a simple combination of images and text is more common. This is termed 'interactive' if the user has a choice of routes through the data. Hypertext permits a range of links to be made between related records, rather than the straightforward index or sequential access, leading to serial progress through the data, which has up until recently been the norm (Morrill Tazelar 1988). Much has been made of the potential of virtual reality, perhaps largely because the idea of becoming immersed and navigating through a computer generated environment appeals to the imagination. In practice the first applications were constrained by the very substantial computing power required to manipulate a three dimensional virtual world, and the limitations of sensors and display technology. Both technology and expectations have now matured. In addition to some quite sophisticated games consoles, virtual reality now has a range of practical applications including architectural simulations, military training, medicine, and telecommunications network management (Peltu 1994). It has much potential as a means of navigating through the electronic sources contained in the 'virtual library', which is already beginning to replace traditional paper resources. There has been a substantial growth in the availability of packaged software for both personal computers (IBM compatible and Macintosh), and larger systems (Picher et al. 1987).
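The contrast drawn above between serial access and hypertext can be sketched as a small graph of records and named links; the record names and link structure here are invented for illustration:

```python
# A toy hypertext: each record names the related records it links to.
# Record names and links are invented for illustration only.
links = {
    "site report": ["finds catalogue", "stratigraphy"],
    "finds catalogue": ["pottery", "site report"],
    "stratigraphy": ["site report"],
    "pottery": [],
}

def reachable(start):
    """Every record a reader can reach by following links from start."""
    seen, stack = set(), [start]
    while stack:
        record = stack.pop()
        if record not in seen:
            seen.add(record)
            stack.extend(links[record])
    return seen

print(sorted(reachable("stratigraphy")))
# ['finds catalogue', 'pottery', 'site report', 'stratigraphy']
```

Unlike a sequential index, the reader's path depends on which links are followed, which is exactly the 'choice of routes through the data' described above.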
Such products range from commonly used applications such as word processors, databases and spreadsheets, to a whole gamut of programs for particular industries, and a wide range of utilities. Typically a single suite of programs will provide the functionality of word processor, spreadsheet, database and diary, or these can be obtained through linked packages. Flexibility and integration are some of the advantages of these products, which in most cases have removed the need for special programs to be written either by or for the user. Although in many ways an extension of word processing, desktop publishing was hailed as a new and powerful technology (Seybold 1987), and has provided an effective means of producing copy, either for direct printing from a laser printer, or for reproduction by more conventional technology. An important development which was foreshadowed before the microcomputer revolution, but has nevertheless matured during this period, is the convergence of telecommunications and computing, leading to the present situation where almost all the telecommunications in the developed world use digital technology for data transmission, and computerised digital exchanges have replaced the electromechanical technology which had its roots in the 19th century. Following on from the
convergence of computing and telecommunications, there is now a convergence of telecommunications and entertainment technologies. In competition with terrestrial television, and the later satellite services, high speed digital networks are being installed to enable entertainment to be broadcast to domestic users. In the United Kingdom entertainment providers are installing their own networks, and in the United States there have been well publicised moves by the entertainment providers to purchase communications corporations. In the United Kingdom legislation prevents British Telecom from transmitting television and video over their network, but similar legislation in the United States is presently being relaxed, and trials to test the transmission of video over the present copper telephone network have proved encouraging. Standards have become important in facilitating the interconnection of equipment and applications, and in allowing users to select hardware and software from a range of suppliers. Throughout much of the 1980s, standards in the computing and telecommunications industries varied widely. In many areas there was little commonality between products, and some of the larger suppliers supported several incompatible standards within their own product range. This situation tended to work to suppliers' advantage, as once a user was committed to a particular range of equipment, it was necessary to continue purchasing both hardware and software from a limited source, and it was necessary to replace the whole installation if a change was required. One of the most significant developments in standards has been the establishment of the OSI (Open Systems Interconnection) standard, which specifies seven levels of standard for data communication. This de jure standard has been sponsored particularly by the United Kingdom and United States Governments.
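For reference, since the text above does not name them, the seven levels the OSI model specifies can be listed, from lowest to highest, as:

```python
# The seven layers of the OSI reference model, lowest first.
OSI_LAYERS = [
    "Physical",      # 1: cabling and signalling
    "Data Link",     # 2: framing and media access
    "Network",       # 3: routing and addressing
    "Transport",     # 4: end-to-end delivery
    "Session",       # 5: dialogue control
    "Presentation",  # 6: data representation
    "Application",   # 7: services to user applications
]
print(len(OSI_LAYERS))  # 7
```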
In practice the full OSI standard is not often implemented, but it has nevertheless had a big effect on the development of a suite of de facto standards, which include ethernet networking, and hardware and software using the Unix operating system. Pragmatically MS-DOS, Windows and Novell tend to be included within this environment. As a consequence for smaller multi-user computers, through the size range traditionally occupied by minicomputers, up to smaller mainframes, this combination of Unix and ethernet is widely used, permitting an extensive range of suppliers' equipment and software to be used together. At the smaller end of the scale the MS-DOS and latterly the Windows operating systems have become the standards, together with Novell for networking. Apple has (with the exception of unauthorised far-eastern clones) maintained control of its own standards, so this significant minority, constituting about ten percent of the installed base of personal computers, remains homogeneous in its conformance to the manufacturer's standards. A further range of standards has emerged for the specification, implementation and maintenance of computer systems. The United Kingdom government has been a major force behind this, through the CCTA
(formerly the Central Computer and Telecommunications Agency, now the Government Centre for Information Systems), which has encouraged the establishment of a range of methodologies such as SSADM (Structured Systems Analysis and Design Methodology), PRINCE (PRojects IN Controlled Environments), GOSIP (Government Open Systems Interconnection Profile), and CRAMM (Computer Risk Analysis and Management Methodology), to name some more prominent examples (CCTA 1989a, 1990a, 1990b, 1991a, 1991b). Many of these provide a codification of industry best practice, and together they provide an overall set of linked methodologies.
1.3 Libraries
Since the 1960s there has been an appreciation of the potential for computerised library catalogues. The initial model, consisting of a printed catalogue and index to holdings (perhaps on microfiche), is now being replaced by an on-line catalogue, accessed within the library, and perhaps accessible externally as well. Computers are also used for a range of management purposes in libraries, and in particular for stock control and circulation management. Recently the emphasis has switched away from merely providing an index to paper information, to making the sources themselves available electronically. A range of facilities for searching both locally held and remote data, and for electronic document delivery, are now routinely expected by users. This growth in the facilities provided by libraries has been matched by the production of electronic information sources, available on CD-ROM and via networks. A controversial development is the tendency towards the rapid dissemination of papers by network, thus avoiding the sometimes lengthy process of peer review. Copyright issues are also a concern. The scientific community has been in the forefront of such developments, but others are following. Developments in the nature of scientific communication are described by Meadow and Buckle (1992), who argue that the major change is in the increase of informal communication, much of it facilitated by network technology. Most of the larger science publishers are developing databases of electronic journals, together with electronic products in the serial and textbook areas.
In the United States, as Gelfand and Booth (forthcoming) have described, there have been several initiatives which have used these techniques, including CORE (Chemistry Online Retrieval Experiment), compiled by the American Chemical Society at Cornell University; the Red Sage Project at the University of California, San Francisco, sponsored by Springer-Verlag; TULIP, promoted by Elsevier, with 15 campus sites worldwide; and various joint ventures initiated by OCLC (the Online Computer Library Center) and the AAAS (American Association for the Advancement of Science), including the On-line Journal of Current Clinical Trials. A range of reports and initiatives have underlined the importance with which these developments are viewed in Britain. The Follett Report (Joint Funding Councils 1993)
has addressed the needs of academic libraries, and has made recommendations aimed at providing access to the facilities which library users will expect to be available, and at taking advantage of opportunities presented by new technologies. A study conducted in 1992-93 by the Royal Society, the British Library and the Association of Learned and Professional Society Publishers (Royal Society 1993) aimed to learn how the aims of publishers, information intermediaries and users are being met, and what gaps there are in meeting these actual and anticipated needs. The British Library has recently launched a range of initiatives in on-line data access and document delivery. That these concerns are not just a matter for information professionals and academics is shown by the recent white paper published by the United Kingdom Government on science and technology (UK 1993), and a further document published by the United Kingdom government on the information superhighway (CCTA 1994). In the United States the National Academy of Sciences has produced Science, Technology and the Federal Government: National Goals for a New Era (National Academy of Sciences 1993), and there are several other relevant reports (National Science Foundation 1990; Gould and Pearce 1991). New American federal legislation, such as the Boucher Amendment (USA 1993), is intended to encourage the development of a national data 'superhighway'.
1.4
Museums
Perhaps because of an inherent conservatism, or because of the dichotomy between their roles as 'cabinets of curiosities' and 'centres of information' (Stewart 1984), museums have in general lagged behind libraries in the application of new technology. There has nevertheless been a lengthy history of electronic cataloguing for museums (Lewis et al. 1967), dating back to the 1960s with the formation of IRGMA (the Information Retrieval Group of the Museums Association), the forerunner of the MDA (Museum Documentation Association). Initial work was aimed at assessing the practicality of using this new technology, at developing standards for the data to be stored, and at developing a range of cards and forms for data capture. The main purpose of such initiatives was seen as assisting in the production of a detailed catalogue of the museum's collections. This emphasis on cataloguing continued into the 1980s, when, partly in response to critical reports by the National Audit Office (NAO) and others (UK 1988, 1989), collections management, and in particular accountability, took on greater importance (Booth 1985; Roberts 1988). However, whilst effective management of collections and resources has remained a priority, the emphasis has now switched to public access and visitor satisfaction. Museums must continue to manage their collections in an accountable manner, but this is not an end in itself. The change in emphasis from collections management to public access was marked by the 1989 MDA conference on Sharing the Information Resources of Museums, and by the 1993 conference, which specifically addressed museums' use of interactive multimedia (Roberts 1992; Lees 1993). The
LASSI (Larger Scale Systems Initiative) project, a collaborative venture aimed at specifying and procuring a software package for museums, is illustrative of this trend. It sits astride the change in priorities, having been conceived as an aid to collections management, but with an increasingly important public access component. Whereas libraries, which are suppliers of information and have an increasing dependence on technology for information delivery, have seen information systems as strategic in terms of McFarland and McKenny's grid (McFarland & McKenny 1983), museums have tended to perceive information technology as merely providing support to their principal activities. The model which can be applied to museums is similar to that argued for the humanities in general. There is considerable potential for the use of information technology, but in contrast to other disciplines such as the sciences and social sciences, where this potential has already been recognised, it is yet to be realised (J. Paul Getty Trust 1993; British Library Board and British Academy 1993; Michelson & Rothenberg 1992). However, museums are now attaching increasing importance to this area. An example is the Science Museum in London, where automation originated in the Library, but where, with the administrative relocation of Information Systems to the Resource Management Division, which also contains finance, human resources and estates, the strategic importance of information systems has been recognised (Booth in press). The overall trend in information systems at large, which is being followed by museums, is one of decentralisation in terms of both systems and organisation, with priorities in information systems being determined by the users as they come more to control the facilities which they utilise.
Users in the broadest sense are now negotiating service level agreements with systems providers, and are aiming to obtain a consistent service according to agreed parameters (CCTA 1989b). The trend for decentralisation, coupled with greater user sophistication, is leading to a movement of staff away from the central information systems providers. Whilst one outcome of these changes is for service providers to be located nearer to the users they support, this arrangement can lead to difficulties in maintaining institution-wide data and technical standards, although one product of the introduction of such a network is likely to be a strengthening of standards. Paradoxically, because of the technical complexity of the infrastructure, the introduction of more sophisticated networks is likely to lead to an increase in the numbers of central support staff. With an appreciation of the strategic importance of information, the institution-wide network can acquire additional importance and resourcing. Several large museums have carried out a strategic review of their information systems needs. These studies have tended to stress the potential of networking for both internal and external communication, and the role to be played by such new technologies as multimedia (Smithsonian Institution 1992; Booth in press; Lees & Booth 1993).
Many museums now have the facilities to make available images as well as text describing the items in their collections. These include the National Museum of Denmark, where a pioneering project has stored images in analogue format on a video disk, and more recent initiatives employing digital storage at the Design Museum in London, where the entire collection is available through a hypermedia application (Rubinstein 1992), and the National Railway Museum, York, where the large holdings of glass photographic negatives are being digitised (Heap & Booth 1993). The Micro Gallery at the National Gallery in London has digital images of almost the entire collection of over 2,000 paintings. With the widespread use of video technology, and the increasingly common use of digital methods, the public now expect to see images as well as text. It is arguable that in a museum context a text-only OPAC application, such as that at the Department of American Art at the Metropolitan Museum of Art, New York (Hoover Voorsanger 1992), would not now be acceptable to the public. Kodak's Photo-CD has set the standard for digital images (Chen 1993), but still lacks the necessary database support for manipulating the very large numbers of images which many museums have. However, the range of formats which can be scanned continues to be enlarged, and work by those such as Luna Imaging Inc. (Dr Michael Ester pers. comm.) is developing links between Photo-CD and conventional text databases. Interactive exhibits allow the public to manipulate image, text and sound, and perhaps also to touch real objects. This 'touch and feel' experience became widespread in children's museums in the late 1970s, some of the ideas having been prototyped in the Children's Gallery at the Science Museum in London in the 1930s.
An example of such an interactive approach is the natural history discovery centre at the Liverpool Museum; a similar facility for science is planned for the new education centre at the Science Museum in London. The potential of multimedia has been explored in a research report for the British Library by Signe Hoffos (Hoffos 1992), although multimedia was not at that time felt to be a mature technology (Arts Council of Great Britain 1992). A project at Loughborough University is examining means of linking the traditional structured database most often used for museum documentation with an interactive hypermedia interface which would be suitable for public access (Poulter et al. 1994).
CD-ROM is being explored by museums as a means of distributing information, and Chadwyck-Healey Ltd have been particularly prominent in this area (Chadwyck-Healey 1992). CD-ROM provides a very significant advance on microfiche, and is likely to become a successful means of distributing museums' data to a wide public. Whilst there have been some successful implementations of centralised databanks (for instance the Canadian Heritage Information Network (CHIN), and the FENSCORE natural science collaborative database (McGee 1992; Pettit 1992)), it seems likely that network
access to databases located in their host museums will provide users with a 'virtual database', which has the characteristics of the union databases which have been sought for so long. Users will thus be able to access a much larger collection of information than is actually present at a single location. Several museums are now implementing networking strategies. The potential range of different approaches is illustrated by the South Kensington Museums in London. The Natural History Museum has recently installed a comprehensive network infrastructure, to which all of the staff in the museum may be connected. The first priority was to provide access by scientists to external databases and messaging facilities, but the museum is also providing internal and external users with access to its own databases. Use of the network for internal electronic mail grew surprisingly quickly in the few months after the network became available. The Science Museum installed a comprehensive network at its three main sites and storage facilities early in 1994. Initially the priority was to make major internal databases available to staff, and to foster synergy through internal communication; but access to external data sources is likely to become important, particularly as other museums' data resources become accessible via JANET and the Internet. It is also planned to make the museum's information resources available to external users. The Science Museum's Library holdings are already available to external users via the Libertas computer system operated by Imperial College. The Victoria and Albert Museum is pursuing a policy of incremental networking, which will provide shared access for workgroups and, via bridges, access to central facilities and external services.
Access to JANET is available to these museums in South Kensington via Imperial College, and there is a proposal to link the South Kensington Museums to SuperJANET via an optical fibre 'ring' connected to the Imperial College SuperJANET node. Imperial College was selected as one of five introductory test sites for installing SuperJANET. Whilst these three museums, with different collecting areas, are pursuing networking from different perspectives, the eventual result will be comprehensive internal networking, with access to and from the outside world. The links to SuperJANET make possible the transmission of large volumes of data, including images. Similar efforts are being made among the museums and libraries of the Smithsonian Institution in Washington DC (Smithsonian Institution 1992), and others (Wallace & Jones-Garmil 1994). There are a number of standards initiatives which have a bearing on the use museums make of information technology. Overall procedures for collections management are set out in the guidelines for the MGC registration scheme (Museums and Galleries Commission nd), and in guidelines for the care of particular collections (Museums and Galleries Commission 1992a, 1992b). In parallel to these initiatives the MDA has continued to develop its recording media and documentation standards (Holm 1991), and has published a revised version of the Data Standard (Museum Documentation Association
1991), which takes account of contemporary data modelling techniques. Initiatives in both procedures and documentation have been brought together with the publication of Spectrum, a comprehensive guide to museum documentation standards (Grant 1994). In the United States the CIMI (Computer Interchange of Museum Information) committee has also been investigating the requirements for museum data (Bearman 1990; Bearman & Perkins 1993).
1.5
Archaeology
As would be expected, much of the work in archaeological information systems has been directed towards the design of systems for excavation records. Papers by Chapman (1984), Booth (1984a), Cogbill (1985), Stead (1987) and Huggett (1989) outline possible data structures. Ten years of work in this field by the Central Excavation Unit of the Department of the Environment are reviewed in the volume on archaeological computing edited by Cooper and Richards (Hinchliffe & Jefferies 1985). The volume by Ross, Moffett and Henderson (1991) provides a useful overview. Powlesland (1991) describes the rationale behind the development of the Heslerton recording system, and its translation into a computerised database. Post-excavation practice is surveyed by Richards (1991), and Williams (1991) describes the systems in use by the Department of Urban Archaeology of the Museum of London. Archive requirements are described by Schofield and Tyers (1985). Specific areas of interest which have been described include stratigraphic sorting (Wilcock 1982; Haigh 1985; Ryan 1988; Desachy and Djindjian 1990; Huggett and Cooper 1990; Boast and Chapman 1990; Herzog and Scollar 1990), and the integration of graphics with the field records (Flude et al. 1982; Alvey & Moffett 1986; Huggett 1989). Apart from the paper by Chapman (1984) there has been little discussion of the process of database design, although this is remedied to a degree by Ross (1991). However, several papers address issues concerning the underlying database technology for archaeological records, including those by Grimley and Haigh (1982), Booth (1983a), Stallard et al. (1984), Moffett (1984a, 1985), Ryan (1991), and Cheetham and Haigh (1991). The overall impression is of a number of independent developments, which often take account of work by others but are closely linked to local requirements. There are no moves towards the adoption of a national system for the United Kingdom.
Developments in sites and monuments records, and the various records kept by the national heritage bodies, are well documented, including publications by Moffett (1984b), Evans (1984), Leech (1986), Cheetham (1985), and Grant (1985). The 1988, 1989 and 1990 volumes of CAA proceedings each contain a group of papers on this theme (Rahtz 1988; Rahtz & Richards 1989; Lockyear & Rahtz 1990), and there are further papers by Lang and Stead (1992), and Robinson (1993). The volume from the Danish National Museum describes a range of initiatives (Larsen 1992), with the paper by Andresen and Madsen (1992) looking at the implications of the relational
approach. Overall there is a pattern of consistency at the county level, encouraged by the national agencies and coordinated by bodies such as the Association of County Archaeological Officers (ACAO). There is also increasing co-operation between the national agencies. From 1986 geographical information systems (GIS) have become (as one would expect) of significant interest to archaeologists, and have been used both directly linked to local and national sites and monuments records and as research tools. They are discussed by Harris (1986, 1988), Clubb (1988), Wansleeben (1988), Lock and Harris (1990), Kvamme (1992), Castleford (1992), Ruggles (1992), Kvamme (1993), Middleton and Winstanley (1993), Chartrand et al. (1993), and Lang (1993). Whilst the paper by Stewart (1984) discusses the overall role of museums, there has been little discussion of the relationship between archaeological fieldwork and museum collections. Developments sponsored by the MDA have been reviewed by Stewart (1982a; 1982b) and Light (1984). The paper by Keene and Orton (1992) describes an approach to assessing the condition of objects in a museum's collections. The application of expert systems to archaeology first appears in the record in 1985, when Wilcock assessed their potential. Subsequent contributions include Baker (1988), Doran (1988), Stutt (1989), Vitali and Lagrange (1989), Lock (1989), and Vitali (1991). These authors seemed to be inconclusive about the overall value of this approach, and interest seems subsequently to have subsided, for the moment at least. Word processing, the use of CD-ROM and other forms of electronic publishing have been recurrent themes, including papers by Rahtz (1986), Wake (1986), Wilcock and Spicer (1986), Girdwood (1988), Martlew (1988a), and Jacobs and Klefeld (1990).
Education as a theme has been amplified in papers by Rahtz (1988), O'Flaherty (1988), Martlew (1988b), Ruggles (1988), Biek (1988), Orton and Grace (1989), Wheatley (1981), and Ruggles et al. (1991). These technologies are also very much to the fore in the volume edited by Martlew (1984), including papers by Flude, Oppenheim, Sutton, Powell, Clark and Hassan. Virtual reality is likely to become a powerful technology for visualisation and communication (Reilly 1991). The potential of SGML (Standard Generalized Markup Language) as a means of tagging archaeological data for publication was first recognised by Rahtz (1986), and these ideas have subsequently been developed by Smith (1992); with the adoption of SGML as a means of formatting data for Internet access, this is likely to be an important tool. For these authors the potential of information technology in archaeology lies as much in the dissemination of knowledge as in its recording and analysis. The early 1980s saw a number of papers examining the strategic approach to, and proper use of, computers in archaeology, including Flude (1983), Copeland (1983), Whinney (1984), McVicar (1985), McVicar and Stoddart (1986), Richards (1985), and Cooper (in Cooper &
Richards 1985). Richards reviews the case for standardisation in the same volume, and Reilly looks at computers in field archaeology as agents of change. Further aspects are reviewed by Clubb (1989, 1991), and the development of an IS strategy is described by Booth (in press). The majority of the above themes were further developed in the CAA conferences in 1993 (Wilcock & Lockyear 1995) and 1994 (this volume). Much of the impetus towards more rigorous field techniques and prompt publication has come from the Department of the Environment and latterly English Heritage. Building on the Frere and Cunliffe reports (Ancient Monuments Board 1975; Council for British Archaeology and Department of the Environment 1983) there have been the two papers on the management of archaeological projects (English Heritage 1989, 1991a), the Model Brief for Archaeological Evaluation (Perring 1992), and three archaeological strategy papers (English Heritage 1990, 1991b, 1992), with a further document on rescue archaeology funding (English Heritage 1991c). The Central Excavation Unit (now Central Archaeological Services) has developed a manual covering procedures, recording, processing and publication (English Heritage 1985). A document setting out the overall policies for archaeology and development is Planning Policy Guidance (PPG) 16 from the Department of the Environment (1990). The case for maritime archaeology to be given protection and resources similar to those on land is outlined in a publication from the Joint Nautical Archaeology Policy Committee (1989). It is noteworthy, however, that apart from the Central Excavation Unit's manual the English Heritage publications do not give specific instructions on recording for excavation and survey, and that whilst much thought is given to archive generation and storage, little attention is given to dissemination apart from conventional publication.
In contrast there is substantial guidance on the form of sites and monuments and national records, starting with the issue of a form for recording monuments, together with accompanying guidance (Form 107 and Advisory Note 32), by the Inspectorate of Ancient Monuments of the Department of the Environment (1981). The status of sites and monuments records, including some observations on record keeping and data processing, was surveyed by the Department of the Environment in the publication England's Archaeological Resource (Inspectorate of Ancient Monuments 1984). The Association of County Archaeological Officers has produced several publications in this area, including County Archaeological Records: Progress and Potential (1985), Sites and Monuments Records: Some Current Issues (1990), and Policies for Access and Charging (1991). As well as general guidelines there are several initiatives by the national bodies aimed at defining record types and terminology control (Royal Commission on the Historical Monuments of England 1986, 1987, 1992, 1993; Royal Commission on the Historical Monuments of England and English Heritage 1989). Together these provide comprehensive (and developing) standards for both local and national records.
1.6
Conclusion
This paper has attempted to show the potential that exists for the application of information technology to archaeology, and to describe developments in libraries and museums which are of relevance. There has already been a wide range of innovative uses of computing in archaeology, and there is no doubt that there are many exciting opportunities ahead. What is striking, however, is that the archaeological fieldworkers who have the responsibility to recover, analyse and disseminate the results of their work are for the most part not taking advantage of this technology, preferring instead to publish in the conventional manner. It falls to those record centres, museums and libraries that eventually receive archaeological data to make use of these new methods. In past generations archaeologists such as Wheeler sought contact with the public via the media of the day, and made great use of the press, radio and television to increase awareness of archaeology in the population at large. Today's archaeologists use computers for excavation recording, word processing and a range of supportive tasks, but do not appear to be taking advantage of the broadcast potential which has been enabled by the 'Information Age'.
References
ALVEY, B. & MOFFETT, J. 1986. 'Single context planning and the computer: the plan database', in S. Laflin (ed.) Computer Applications in Archaeology 1986, 59-72, University of Birmingham Centre for Computing and Computer Science, Birmingham.
ANCIENT MONUMENTS BOARD 1975. Principles of Publication in Rescue Archaeology, Report of the Working Party of the Ancient Monuments Board for England Committee for Rescue Archaeology, Department of the Environment, London.
ANDRESEN, J. & MADSEN, T. 1992. 'Data structures for excavation recording', in C. U. Larsen (ed.) Sites & Monuments. National Archaeological Records, 49-70, National Museum of Denmark, Copenhagen.
ANDREWS, D. 1994. 'Falling prices boost ISDN', Byte, 19 (1), 40.
ARTS COUNCIL OF GREAT BRITAIN 1992. Very Spaghetti. The Potential of Interactive Multimedia in Art Galleries, Arts Council of Great Britain, London.
ASSOCIATION OF COUNTY ARCHAEOLOGICAL OFFICERS 1985. County Archaeological Records. Progress and Potential, Association of County Archaeological Officers, London.
ASSOCIATION OF COUNTY ARCHAEOLOGICAL OFFICERS 1990. Sites and Monuments Records. Some Current Issues, Association of County Archaeological Officers, London.
ASSOCIATION OF COUNTY ARCHAEOLOGICAL OFFICERS 1991. Sites and Monuments Records. Access and Charging, Association of County Archaeological Officers, London.
BAKER, K. G. 1988. 'Towards an archaeological methodology for expert systems', in C. L. N. Ruggles & S. P. Q. Rahtz (eds.) Computer and Quantitative Methods in Archaeology 1987, 229-236, British Archaeological Reports, Oxford.
BEARMAN, D. 1990. Archives & Museum Data Models and Dictionaries, Technical Report 10, Archives and Museum Informatics, Pittsburgh.
BEARMAN, D. & PERKINS, J. 1993. CIMI (Computer Interchange of Museum Information Committee) Standards Framework for the Computer Interchange of Museum Information, Museum Computer Network, Silver Spring, Md.
BIEK, L. 1988. 'Is this a record? Judgment on Domesday: the first year in archaeo-archiving', in S. P. Q. Rahtz (ed.) Computer and Quantitative Methods in Archaeology 1988, 543-550, British Archaeological Reports, Oxford.
BOAST, R. & CHAPMAN, D. 1990. 'SQL and hypertext generation of stratigraphic adjacency matrixes', in K. Lockyear & S. P. Q. Rahtz (eds.) Computer Applications and Quantitative Methods in Archaeology 1990, 43-52, BAR Publishing, Oxford.
BOOTH, B. K. W. 1983. 'The changing requirements of an archaeological database', in J. G. B. Haigh (ed.) Computer Applications in Archaeology 1983, 23-30, University of Bradford, Bradford.
BOOTH, B. K. W. 1984. 'A documentation system for archaeology', in S. Laflin (ed.) Computer Applications in Archaeology 1984, 58, University of Birmingham Centre for Computing and Computer Science, Birmingham.
BOOTH, B. K. W. 1985. 'Information project group at the National Maritime Museum', in E. Webb (ed.) Computer Applications in Archaeology 1985, 36-40, University of London Institute of Archaeology, London.
BOOTH, B. K. W. 1988. 'The SAM record - past, present and future', in S. P. Q. Rahtz (ed.) Computer and Quantitative Methods in Archaeology 1988, 379-388, British Archaeological Reports, Oxford.
BOOTH, B. K. W. 1995. 'Developing an information systems strategy for the National Museum of Science and Industry', in J. Wilcock & K. Lockyear (eds.) Computer Applications and Quantitative Methods in Archaeology 1993, 95-99, BAR International Series 598, BAR Publishing, Oxford.
BOOTH, B. K. W., BROUGH, R. L. & PRYOR, F. M. M. 1984. 'The flexible storage of site data: a microcomputer application', Journal of Archaeological Science, 11, 81-89.
BRITISH LIBRARY BOARD AND BRITISH ACADEMY 1993. Information Technology in Humanities Scholarship, Office for Humanities Communication, Oxford.
CASTLEFORD, J. 1992. 'Archaeology, GIS and the time dimension: an overview', in G. Lock & J. Moffett (eds.) Computer Applications and Quantitative Methods in Archaeology 1991, 95-106, BAR Publishing, Oxford.
CCTA 1989a. CRAMM User Guide Version 1.3, Central Computer and Telecommunications Agency, London.
CCTA 1989b. Service Level Management, HMSO, London.
CCTA 1990a. SSADM Version 4 Reference Manual, NCC Blackwell Ltd, Oxford.
CCTA 1990b. PRINCE: Structured Project Management, NCC Blackwell Ltd, Oxford.
CCTA 1991a. GOSIP 4: Supplier Set, HMSO, Norwich.
CCTA 1991b. GOSIP 4: Purchaser Set, HMSO, Norwich.
CCTA 1994. Information Superhighways. Opportunities for Public Sector Applications in the UK, CCTA, London.
CHADWYCK-HEALEY, C. 1992. 'Marketing Museum Data', in D. A. Roberts (ed.) Sharing the Information Resources of Museums, 149-151, Museum Documentation Association, Cambridge.
CHAPMAN, J. 1984. 'Design of a database for archaeological site data', in S. Laflin (ed.) Computer Applications in Archaeology 1984, 119-128, University of Birmingham Centre for Computing and Computer Science, Birmingham.
CHARTRAND, J., RICHARDS, J. & VYNER, B. 1993. 'Bridging the gap: GIS and the York Environs Project', in J. Andresen, T. Madsen & I. Scollar (eds.) Computing the Past: Computer Applications and Quantitative Methods in Archaeology 1992, 159-166, Aarhus University Press, Aarhus.
HAS ARCHAEOLOGY REMAINED ALOOF FROM THE INFORMATION AGE? CHEETIIAM, P. N. 1985. 'The archaeological database applied: North Yorkshire Council Sites and Monuments Record at the University of Bradford', in E. Webb (ed.) Computer Applications in Archaeology 1985, 49-56, University of London Institute of Archaeology, London. CHEETHAM, P. N. & HAIGH, J. G. B. 1992. 'The archaeological database new relations', in G. Lock & J. Moffett (eds.) Computer
Applications and Quantitative Methods in Archaeology 1991, 7-14, BAR Publishing, Oxford.
CHEN, C. 1993. 'Photo-CD and other digital imaging technologies', Microcomputers for Information Management, 10 (1), 29-42. CLUBB, N. 1988. 'Computer mapping and the Scheduled Ancient Monument Record', in S. P. Q. Rahtz (ed.) Computer and Quantitative Methods in Archaeology, 399-408, British Archaeological Reports, BAR Publishing, Oxford.
FLUDE, K. 1983. 'Problems in archaeological computing', in J. G. B. Haigh (ed.) Computer Applications in Archaeology 1983, 7-14, University of Bradford, Bradford. FLUDE, K. GEORGE, S. & ROSKAMS, S. 1982. 'Uses of an archaeological database - with particular reference to computer graphics and the writing process', in I. Graham & E. Webb (eds.) Computer Applications in Archaeology 1981, 51-60, University of London Institute of Archaeology, London. FREEDLAND, J. 1994. 'A network heaven in your own front room', The Guardian, 30 May 1994, 23. GELFAND, J. & BOOTH, B. (forthcoming). 'Scholarly communication in the sciences: managing challenges for libraries and museums',
Proceedings of the 1993 Conference of the International Association of Technical University Libraries, Technical University of Hamburgh, Hamburgh.
CLUBB, N. 1989. 'Investment appraisal for Information Technology', in S. P. Q. Rahtz & J. D. Richards (eds.) Computer Applications and Quantitative Methods in Archaeology 1989, 1-8, British Archaeological Reports, Oxford.
CLUBB, N. 1991. 'Procuring medium-large systems', in K. Lockyear & S. P. Q. Rahtz (eds.) Computer Applications and Quantitative Methods in Archaeology 1990, 81-84, BAR Publishing, Oxford.
COGBILL, S. 1985. 'Information planning and British archaeology', in E. Webb (ed.) Computer Applications in Archaeology 1985, 152, University of London Institute of Archaeology, London.
COOPER, M. A. 1985. 'Computers in British archaeology: the need for a national strategy', in M. A. Cooper & J. D. Richards (eds.) Current Issues in Archaeological Computing, British Archaeological Reports, Oxford.
COPELAND, J. 1983. A Study of the Use of Archaeological Records and Related Material by Professional and Amateur Archaeologists (Report No. 2), Leeds Polytechnic School of Librarianship, Leeds.
COUNCIL FOR BRITISH ARCHAEOLOGY AND DEPARTMENT OF THE ENVIRONMENT 1983. The Publication of Archaeological Excavations, Report of a Joint Working Party of the Council for British Archaeology and Department of the Environment, Advisory Note 40, Department of the Environment, London.
DEPARTMENT OF THE ENVIRONMENT 1990. PPG 16. Planning Policy Guidance: Archaeology and Planning, HMSO, London.
DESACHY, B. & DJINDJIAN, F. 1990. 'Matrix processing of stratigraphic graphs: a new method', in K. Lockyear & S. P. Q. Rahtz (eds.) Computer Applications and Quantitative Methods in Archaeology 1990, 29-38, BAR Publishing, Oxford.
DORAN, J. 1988. 'Expert systems and archaeology: what lies ahead?', in C. L. N. Ruggles & S. P. Q. Rahtz (eds.) Computer and Quantitative Methods in Archaeology 1987, 237-242, British Archaeological Reports, Oxford.
ENGLISH HERITAGE 1985. Central Excavation Unit Manual, English Heritage, London.
ENGLISH HERITAGE 1989. The Management of Archaeology Projects, English Heritage, London.
ENGLISH HERITAGE 1990. Developing Frameworks: Policies for our Archaeological Past, English Heritage, London.
ENGLISH HERITAGE 1991a. Management of Archaeological Projects, English Heritage, London.
ENGLISH HERITAGE 1991b. Exploring our Past. Strategies for the Archaeology of England, English Heritage, London.
ENGLISH HERITAGE 1991c. Rescue Archaeology Funding, English Heritage, London.
ENGLISH HERITAGE 1992. Managing England's Heritage. Setting our Priorities for the 1990's, English Heritage, London.
EVANS, D. 1984. 'A national archaeological archive - computer database applications', in S. Laflin (ed.) Computer Applications in Archaeology 1984, 112-118, University of Birmingham Centre for Computing and Computer Science, Birmingham.
GIRDWOOD, A. 1988. 'Phototypesetting and desk-top publishing systems in archaeology', in C. L. N. Ruggles & S. P. Q. Rahtz (eds.) Computer and Quantitative Methods in Archaeology 1987, 295-299, British Archaeological Reports, Oxford.
GOULD, C. & PEARCE, K. 1991. Information Needs in the Sciences: an assessment, The Research Libraries Group, Mountain View, Ca.
GRANT, A. 1994. Spectrum: The UK Museum Documentation Standard, Museum Documentation Association, Cambridge.
GRANT, S. 1985. 'Computing the past and anticipating the future', in E. Webb (ed.) Computer Applications in Archaeology 1985, 152, University of London Institute of Archaeology, London.
GRIMLEY, B. J. & HAIGH, J. G. B. 1982. 'A general purpose data management system for archaeologists', in S. Laflin (ed.) Computer Applications in Archaeology 1982, 63-68, University of Birmingham Centre for Computing and Computer Science, Birmingham.
HAIGH, J. G. B. 1985. 'The Harris matrix as a partially ordered set', in E. Webb (ed.) Computer Applications in Archaeology 1985, 81-90, University of London Institute of Archaeology, London.
HARRIS, T.
1986. 'Geographic Information System design for archaeological site information retrieval', in S. Laflin (ed.) Computer Applications in Archaeology 1986, 148-161, Birmingham University Centre for Computing and Computer Science, Birmingham.
HARRIS, T. 1988. 'Digital terrain modelling in archaeology and regional planning', in C. L. N. Ruggles & S. P. Q. Rahtz (eds.) Computer and Quantitative Methods in Archaeology 1987, 161-172, British Archaeological Reports, Oxford.
HEAP, C. J. & BOOTH, B. K. W. 1993. 'Image storage at the National Railway Museum, York', Computers in Libraries International 93, 25-40, Meckler, London.
HERZOG, I. & SCOLLAR, I. 1990. 'A new graph theoretic oriented program for Harris matrix analysis', in K. Lockyear & S. P. Q. Rahtz (eds.) Computer Applications and Quantitative Methods in Archaeology 1990, 53-60, BAR Publishing, Oxford.
HINCHLIFFE, J. & JEFFERIES, J. 1985. 'Ten years of data-processing in the Central Excavation Unit', in M. A. Cooper & J. D. Richards (eds.) Current Issues in Archaeological Computing, 17-22, British Archaeological Reports, Oxford.
HOFFOS, S. 1992. Multimedia and the Interactive Display in Museums, Exhibitions and Libraries, Library and Information Research Report 87, British Library, London.
HOLM, S. 1991. Facts & Artifacts. How to Document a Museum Collection, Museum Documentation Association, Cambridge.
HOOVER VOORSANGER, C. 1992. 'An automated collections catalogue: the Department of American Art at the Metropolitan Museum of Art', in D. A. Roberts (ed.) Sharing the Information Resources of Museums, 64-70, Museum Documentation Association, Cambridge.
HUGGETT, J. 1989. 'The development of an integrated archaeological software system', in S. P. Q. Rahtz & J. D. Richards (eds.)
Computer Applications and Quantitative Methods in Archaeology 1989, 287-294, British Archaeological Reports, Oxford.
HUGGETT, J. & COOPER, M. 1990. 'The computer representation of space in urban archaeology', in K. Lockyear & S. P. Q. Rahtz (eds.) Computer Applications and Quantitative Methods in Archaeology 1990, 39-42, BAR Publishing, Oxford.
HYON, J. 1994. 'A standard for writing recordable CDs', Byte, 19 (1), 231-236.
LIGHT, R. B. 1984. 'Microcomputers in museums', in S. Laflin (ed.) Computer Applications in Archaeology 1984, 33-37, University of Birmingham Centre for Computing and Computer Science, Birmingham.
LOCK, G. (ed.) 1989. 'Expert systems and artefact classification', in S. P. Q. Rahtz & J. D. Richards (eds.) Computer Applications and Quantitative Methods in Archaeology 1989, 339-386, British Archaeological Reports, Oxford.
INSPECTORATE OF ANCIENT MONUMENTS 1981. Ancient Monuments Records Manual and County Sites and Monuments Records,
Advisory Note 32, Department of the Environment, London.
INSPECTORATE OF ANCIENT MONUMENTS 1984. England's Archaeological Resource. A Rapid Quantification of the National Archaeological Resource and a Comparison with the Schedule of Ancient Monuments, Department of the Environment, London.
J. PAUL GETTY TRUST 1993. Technology, Scholarship and the Humanities: The Implications of Electronic Information, J. Paul Getty Trust, Santa Monica, Ca.
JAKOBS, K. & KLEEFELD, K. D. 1991. 'Using public communications services for archaeological information', in K. Lockyear & S. P. Q. Rahtz (eds.) Computer Applications and Quantitative Methods in Archaeology 1990, 3-8, BAR Publishing, Oxford.
JOINT FUNDING COUNCILS 1993. Joint Funding Councils' Libraries Review Group: Report, Higher Education Funding Council for England, Bristol.
JOINT NAUTICAL ARCHAEOLOGY POLICY COMMITTEE 1989. Heritage at Sea. Proposals for the Better Protection of Archaeological Sites Underwater, The National Maritime Museum, Greenwich.
KEENE, S. & ORTON, C. 1992. 'Measuring the condition of museum collections', in G. Lock & J. Moffett (eds.) Computer Applications and Quantitative Methods in Archaeology 1991, 163-166, BAR Publishing, Oxford.
LOCK, G. & HARRIS, T. 1990. 'Integrating spatial information in computerised SMRs', in K. Lockyear & S. P. Q. Rahtz (eds.) Computer Applications and Quantitative Methods in Archaeology 1990, 165-174, BAR Publishing, Oxford.
LOCKYEAR, K. & RAHTZ, S. P. Q. (eds.) 1991. Computer Applications and Quantitative Methods in Archaeology 1990, BAR Publishing, Oxford.
MARTLEW, R. (ed.) 1984. Information Systems in Archaeology, Alan Sutton, Gloucester.
MARTLEW, R. 1988a. 'Optical disk storage: another can of worms?', in C. L. N. Ruggles & S. P. Q. Rahtz (eds.) Computer and Quantitative Methods in Archaeology 1987, 265-268, British Archaeological Reports, Oxford.
MARTLEW, R. 1988b. 'New technology in archaeological education and training', in S. P. Q. Rahtz (ed.) Computer and Quantitative Methods in Archaeology 1988, 499-504, British Archaeological Reports, Oxford.
MCFARLAN, F. W. & MCKENNEY, J. L. 1983. Corporate Information Systems Management: The Issues Facing Senior Executives, Dow Jones Irwin, New York.
MEADOWS, A. J. & BUCKLE, P. 1992. 'Changing communication activities in the British scientific community', Journal of Documentation, 48 (3), 280.
KVAMME, K. L. 1992. 'Terrain form analysis of archaeological location through Geographic Information Systems', in G. Lock & J. Moffett (eds.) Computer Applications and Quantitative Methods in Archaeology 1991, 127-136, BAR Publishing, Oxford. KVAMME, K. L. 1993. 'Spatial statistics and GIS: an integrated approach', in J. Andresen, T. Madsen & I. Scollar (eds.) Computing the
Past. Computer Applications and Quantitative Methods in Archaeology 1992, 91-104, Aarhus University Press, Aarhus.
LANG, N. A. R. 1993. 'From model to machine: procurement and implementation of Geographical Information Systems for County Sites and Monuments Records', in J. Andresen, T. Madsen, & I. Scollar (eds.) Computing the Past. Computer Applications and Quantitative Methods in Archaeology 1992,
167-176, Aarhus University Press, Aarhus.
LANG, N. & STEAD, S. 1992. 'Sites and Monuments Records in England: theory and practice', in G. Lock & J. Moffett (eds.) Computer Applications and Quantitative Methods in Archaeology 1991,
69-76, BAR Publishing, Oxford. LARSEN, C. U. (ed.) 1992. Sites & Monuments. National Archaeological Records, National Museum of Denmark, Copenhagen. LEECH, R. 1986. 'Computerisation of the National Archaeological Record', in S. Laflin (ed.) Computer Applications in Archaeology 1986, 29-37, University of Birmingham Centre for Computing and Computer Science, Birmingham. LEES, D. (ed.) 1993. Proceedings of the Sixth International Conference of
the MDA and Second International Conference on Hypermedia and Interactivity in Museums ICHIM '93,
Museum Documentation Association, Cambridge.
LEES, R. & BOOTH, B. 1993. IS Strategy Study Report, National Museum of Science and Industry, London. LEWIS, G. D. et al. 1967. 'Information Retrieval for Museums', Museums Journal, 67, 88-120.
MCGEE, E. C. 1992. 'Sharing information: a Canadian perspective', in D. A. Roberts (ed.) Sharing the Information Resources of Museums, 149-151, Museum Documentation Association, Cambridge. MCVICAR, J. 1985. 'Using microcomputers in archaeology: some comments and suggestions', in E. Webb (ed.) Computer Applications in Archaeology 1985, 102-108, University of London Institute of Archaeology, London. MCVICAR, J. & STODDART, S. 1986. 'Computerising an archaeological excavation: the human factors', in S. Laflin (ed.) Computer Applications in Archaeology 1986, 225-227, University of Birmingham Centre for Computing and Computer Science, Birmingham. MICHELSON, A. & ROTHENBERG, J. 1992. 'Scholarly communication and information retrieval: exploring the impact of changes in the research process on archives', American Archivist, 55, Spring 1992, 236-315. MIDDLETON, R. & WINSTANLEY, D. 1993. 'GIS in a landscape archaeology context', in J. Andresen, T. Madsen & I. Scollar (eds.) Computing the Past. Computer Applications and Quantitative Methods in Archaeology 1992, 151-158, Aarhus University
Press, Aarhus.
MOFFETT, J. C. 1984a. 'An archaeological network database management system', in S. Laflin (ed.) Computer Applications in Archaeology 1984, 93-100, University of Birmingham Centre for Computing and Computer Science, Birmingham. MOFFETT, J. C. 1984b. 'The Bedfordshire County Sites and Monuments Record database management system', in S. Laflin (ed.) Computer Applications in Archaeology 1984, 101-109, University of Birmingham Centre for Computing and Computer Science, Birmingham. MOFFETT, J. C. 1985. The Management of Archaeological Data on Microcomputers, Unpublished PhD Thesis, University of London Institute of Archaeology. MORRILL TAZELAAR, J. 1988. 'Hypertext', Byte, 13 (10), 234-268. MUSEUM DOCUMENTATION ASSOCIATION 1991. MDA Data Standard, Museum Documentation Association, Cambridge.
MUSEUMS AND GALLERIES COMMISSION (nd). Guidelines for Registration, Museums and Galleries Commission, London.
MUSEUMS AND GALLERIES COMMISSION 1992a. Standards in the Care of Archaeological Collections, Museums and Galleries Commission, London.
MUSEUMS AND GALLERIES COMMISSION 1992b. Standards in the Care of Biological Collections, Museums and Galleries Commission, London.
NATIONAL ACADEMY OF SCIENCES 1993. Science, Technology and the Federal Government: National Goals for a New Era, National Academy Press, Washington DC.
NATIONAL SCIENCE FOUNDATION 1990. Communication in Support of Science and Engineering. A Report to the National Science Foundation from the Council on Library Resources, National Science Foundation, Washington DC.
O'FLAHERTY, B. 1988. 'The Southampton-York archaeological simulation system', in S. P. Q. Rahtz (ed.) Computer and Quantitative Methods in Archaeology 1988, 491-498, British Archaeological Reports, Oxford.
ORTON, C. & GRACE, R. 1989. 'Hypercard as teaching tool', in S. P. Q. Rahtz & J. D. Richards (eds.) Computer Applications and Quantitative Methods in Archaeology 1989, 327-338, British Archaeological Reports, Oxford.
PELTU, M. 1994. 'Cyberactive State', Computing, 12 May 1994, 39.
PERRING, D. 1992. Model Brief for an Archaeological Evaluation (Revised 20/5/92), English Heritage, London.
PETTITT, C. 1992. 'The Fenscore Initiative', in D. A. Roberts (ed.) Sharing the Information Resources of Museums, 114-116, Museum Documentation Association, Cambridge.
PICHER, O. L., LUBRANO, C. R. & THEOPHENO, R. 1987. 'The promise of application software', Byte, 12 (7) (Special Supplement), 37-44.
POULTER, A., SARGENT, G. & FAHY, A. 1994. 'The Hypermuse Project', Managing Information, 94 (1), 45-46.
POWLESLAND, D. 1991. 'From the trench to the bookshelf: computer use at the Heslerton Parish Project', in S. Ross, J. Moffett & J. Henderson (eds.) Computing for Archaeologists, 155-169, Oxford University Committee for Archaeology, Oxford.
RAHTZ, S. P. Q. 1986. 'Possible directions in electronic publishing in archaeology', in S. Laflin (ed.) Computer Applications in Archaeology 1986, 3-13, University of Birmingham Centre for Computing and Computer Science, Birmingham.
RAHTZ, S. P. Q. 1988. 'A resource-based archaeological simulation', in S. P. Q. Rahtz (ed.) Computer and Quantitative Methods in Archaeology 1988, 473-490, British Archaeological Reports, Oxford.
REILLY, P. 1991. 'Towards a virtual archaeology', in K. Lockyear & S. P. Q. Rahtz (eds.) Computer Applications and Quantitative Methods in Archaeology 1990, 133-140, BAR Publishing, Oxford.
RICHARDS, J. 1985. 'Into the black art: achieving computer literacy in archaeology', in E. Webb (ed.) Computer Applications in Archaeology 1985, 121-125, University of London Institute of Archaeology, London.
RICHARDS, J. 1991. 'Computers as an aid to post-excavation interpretation', in S. Ross, J. Moffett & J. Henderson (eds.) Computing for Archaeologists, 171-186, Oxford University Committee for Archaeology, Oxford.
ROBERTS, D. A. (ed.) 1988. Collections Management for Museums, Museum Documentation Association, Cambridge.
ROBERTS, D. A. (ed.) 1992. Sharing the Information Resources of Museums, Museum Documentation Association, Cambridge.
ROBINSON, H. 1993. 'The archaeological implications of a computerised integrated national heritage information system', in J. Andresen, T. Madsen & I. Scollar (eds.) Computing the Past. Computer Applications and Quantitative Methods in Archaeology 1992, 139-150, Aarhus University Press, Aarhus.
ROSS, S. 1991. 'Systems engineering for archaeological computing', in S. Ross, J. Moffett & J. Henderson (eds.) Computing for Archaeologists, 41-53, Oxford University Committee for Archaeology, Oxford.
ROSS, S., MOFFETT, J. & HENDERSON, J. (eds.) 1991. Computing for Archaeologists, Oxford University Committee for Archaeology, Oxford.
ROYAL COMMISSION ON THE HISTORICAL MONUMENTS OF ENGLAND 1986. Thesaurus of Archaeological Terms, Royal Commission on the Historical Monuments of England, London.
ROYAL COMMISSION ON THE HISTORICAL MONUMENTS OF ENGLAND 1987. Draft Thesaurus of Architectural Terms, Royal Commission on the Historical Monuments of England, London.
ROYAL COMMISSION ON THE HISTORICAL MONUMENTS OF ENGLAND 1992. Thesaurus of Archaeological Site Types, Royal Commission on the Historical Monuments of England, London.
ROYAL COMMISSION ON THE HISTORICAL MONUMENTS OF ENGLAND 1993. Recording England's Past. A Data Standard for the Extended National Archaeological Record, Royal Commission on the Historical Monuments of England, London.
ROYAL COMMISSION ON THE HISTORICAL MONUMENTS OF ENGLAND AND ENGLISH HERITAGE 1989. Revised Thesaurus of Architectural Terms, Royal Commission on the Historical Monuments of England, London.
ROYAL SOCIETY 1993. Scientific Information Systems in the 1990s, Royal Society, London.
RUBINSTEIN, B. 1992. 'Designed for study: the Design Museum's Study Collection database', in D. A. Roberts (ed.) Sharing the Information Resources of Museums, 141-143, Museum Documentation Association, Cambridge.
RUGGLES, C. 1988. 'Software for the Leicester Interactive Videodisc Project', in S. P. Q. Rahtz (ed.) Computer and Quantitative Methods in Archaeology 1988, 523-530, British Archaeological Reports, Oxford.
RUGGLES, C. 1992. 'Abstract data structures for GIS applications in archaeology', in G. Lock & J. Moffett (eds.) Computer Applications and Quantitative Methods in Archaeology 1991, 107-112, BAR Publishing, Oxford.
RUGGLES, C., HUGGETT, J., HAYLES, S., PRINGLE, H. & LAUDER, I. 1991. 'LIVE update: archaeological courseware using interactive video', in K. Lockyear & S. P. Q. Rahtz (eds.) Computer Applications and Quantitative Methods in Archaeology 1990, 23-28, BAR Publishing, Oxford.
RYAN, N. S. 1988. 'Browsing through the stratigraphic record', in S. P. Q. Rahtz (ed.) Computer and Quantitative Methods in Archaeology 1988, 327-332, British Archaeological Reports, Oxford.
RYAN, N. S. 1992. 'Beyond the relational database: managing the variety and complexity of archaeological data', in G. Lock & J. Moffett (eds.) Computer Applications and Quantitative Methods in Archaeology 1991, 1-6, BAR Publishing, Oxford.
SCHOFIELD, J. & TYERS, P. 1985. 'Towards a computerised archaeological research archive', in M. A. Cooper & J. D. Richards (eds.) Current Issues in Archaeological Computing, 5-16, British Archaeological Reports, Oxford.
SEYBOLD, J. W. 1987. 'The desktop publishing phenomenon', Byte, 12 (5), 149-154.
SHELDON, K. M., LINDERHOLM, O. & MARSHALL, T. 1992. 'The future of personal computing', Byte, 17 (2), 96-102.
SMITH, N. 1992. 'An experiment in electronic exchange and publication of archaeological field data', in G. Lock & J. Moffett (eds.) Computer Applications and Quantitative Methods in Archaeology 1991, 49-58, BAR Publishing, Oxford.
SMITHSONIAN INSTITUTION 1992. Information Resource Management Strategic Plan 1992-1996, Smithsonian Institution, Washington DC.
SOCIETY OF MUSEUM ARCHAEOLOGISTS 1993. Selection, Retention and Dispersal of Archaeological Collections. Guidelines for Use in England, Wales, and Northern Ireland, Society of Museum Archaeologists.
VITALI, V. & LAGRANGE, M-S. 1989. 'An expert system for provenance determination of archaeological ceramics', in S. P. Q. Rahtz (ed.) Computer and Quantitative Methods in Archaeology 1988, 369-376, British Archaeological Reports, Oxford.
STALLARD, Y., GRAY, W. A. & FIDDIAN, N. J. 1984. 'Creating a conservation record database using a relational database management system', in S. Laflin (ed.) Computer Applications in Archaeology 1984, 78-92, University of Birmingham Centre for Computing and Computer Science, Birmingham.
WAKE, D. 1986. 'Computer conferencing and electronic journals for the scientific community', in S. Laflin (ed.) Computer Applications in Archaeology 1986, 14-20, University of Birmingham Centre for Computing and Computer Science, Birmingham.
STEAD, S. D. 1988. 'The integrated archaeological database', in C. L. N. Ruggles & S. P. Q. Rahtz (eds.) Computer and Quantitative Methods in Archaeology 1987, 279-284, British Archaeological Reports, Oxford. STEWART, J. D. 1982a. 'MDA, MOS and computerised archaeology', in I. Graham & E. Webb (eds.) Computer Applications in Archaeology 1981, 101-111, University of London Institute of Archaeology, London. STEWART, J. D. 1982b. 'Computerising archaeological records - a progress report on the work of the MDA', in S. Laflin (ed.) Computer Applications in Archaeology 1982, 4-10, University of Birmingham Centre for Computing and Computer Science, Birmingham.
STEWART, J. D. 1984. 'Museums - cabinets of curiosities or new centres of information', in R. Martlew (ed.) Information Systems in Archaeology, 77-89, Alan Sutton, Gloucester.
STUTT, A. 1989. 'Expert systems, explanations, arguments and archaeology', in S. P. Q. Rahtz (ed.) Computer and Quantitative Methods in Archaeology 1988, 353-366, British Archaeological Reports, Oxford.
UK 1988. Management of the Collections of the English National Museums and Galleries, Report by the Comptroller and Auditor General, HC 394 Session 1987-88, HMSO, London. UK 1989. Management of the Collections of the English National Museums and Galleries: First Report, Committee of Public Accounts, HC Session 1988-89, HMSO, London. UK 1993. Realising Our Potential: a Strategy for Science, Engineering and Technology, CM 2250, HMSO, London. USA 1993. House of Representatives Bill 1757, House of Representatives, Washington DC. VITALI, V. 1991. 'Formal methods for the analysis of archaeological data: data analysis vs expert systems', in K. Lockyear & S. P. Q. Rahtz (eds.) Computer Applications and Quantitative Methods in Archaeology 1990, 207-209, BAR Publishing, Oxford.
WALLACE, B. & JONES-GARMIL, K. 1994. 'Museums on the Internet', Museum News, July/August 1994, 33-62. WANSLEEBEN, M. 1988. 'Geographic Information Systems in archaeological research', in S. P. Q. Rahtz (ed.) Computer and Quantitative Methods in Archaeology 1987, 435-454, British Archaeological Reports, Oxford. WHEATLEY, D. 1991. 'SYGRAF - resource based teaching with graphics', in K. Lockyear & S. P. Q. Rahtz (eds.) Computer Applications and Quantitative Methods in Archaeology 1990, 9-14, BAR Publishing, Oxford. WHINNEY, R. 1984. 'The Midas, MP/M and the MSc', in S. Laflin (ed.) Computer Applications in Archaeology 1984, 27-32, University of Birmingham Centre for Computing and Computer Science, Birmingham. WILCOCK, J. D. 1982. 'STRATA - the microcomputer version', in I. Graham & E. Webb (eds.) Computer Applications in Archaeology 1981, 112-114, University of London Institute of Archaeology, London. WILCOCK, J. D. 1985. 'A review of expert systems: their shortcomings and possible applications in archaeology', in E. Webb (ed.) Computer Applications in Archaeology 1985, 139-144, University of London Institute of Archaeology, London.
WILCOCK, J. D. & SPICER, R. D. 1986. 'Standardisation of word processing for publication', in S. Laflin (ed.) Computer Applications in Archaeology 1986, 21-28, University of Birmingham Centre for Computing and Computer Science, Birmingham. WILCOCK, J. D. & LOCKYEAR, K. (eds.) 1995. Computer Applications and Quantitative Methods in Archaeology 1993, BAR International Series 598, BAR Publishing, Oxford. WILLIAMS, T. 1991. 'The use of computers in post-excavation and publication work at the Department of Urban Archaeology, Museum of London', in S. Ross, J. Moffett & J. Henderson (eds.) Computing for Archaeologists, 187-200, Oxford University Committee for Archaeology, Oxford. YAGER, T. & SMITH, B. 1992. 'Is Unix dead?', Byte, 17 (9), 134-136.
2
Archaeological computing, archaeological theory and moves towards contextualism Gary Lock Institute of Archaeology, Oxford University, 36 Beaumont Street, Oxford OX1 2PG, UK. Email: [email protected]
2.1 Introduction
Several papers published over the last two decades or so have attempted to place developments in archaeological computing within a wider theoretical framework (e.g. Bell et al. 1986). These range from the broad approach of Richards (1986), who sees computing as one of many technologies that have influenced the theory and direction of archaeological endeavour, to the more specific theme concerning the relationship between theory and statistical methods. The many and often complex facets of the arguments surrounding the latter are well represented by the collection of papers edited by Aldenderfer (1987), particularly Read (1987). The present paper adopts a wider view in an attempt to identify linkages between the development of digital technologies and the changing stances of archaeological theory. A brief historical overview suggests a symbiotic relationship between these two apparently disparate areas which has generated the fertile discipline of archaeological computing, whose strength is reinforced by the annual occurrence of CAA and its proceedings. Future developments and the implications for different application areas of archaeological computing are then discussed. These are positioned within a perceived trend towards increasing contextualism and data-rich environments which encompass both the technology and the archaeological theory.
2.2 Working with models

Central to the theme of this paper is the concept of working with models. Voorrips (1987) has suggested a useful classification of models used in archaeology based on the distinction between empirical and formal models and combinations of the two. The aim of any model is to simplify something to enable the process of understanding. The drawing of an artefact is an empirical model of a piece of empirical reality, but it is just as much a model as complex statistical formulae, which are a formal model of empirical reality, albeit perhaps at the higher level of social organisation and interaction. It is the use of formal models that interests us here, and particularly their ability to represent and interact with archaeological theory, whether a single defined theory or a more general theoretical approach. The use of such models blossomed throughout the 1970s (Clarke 1972; Renfrew and Cooke 1979) and these discussions have always been firmly rooted in considerations of archaeological theory.

The implications of this for archaeological computing are considerable. For archaeological computing to become an integral part of the archaeological process rather than just a set of tools which can be used at appropriate times, there has to be a fundamental link between the computing and the underlying theoretical stance. Whether the theory and/or the link with the computing are explicit and knowingly integrated into the research process is a different problem and not of direct relevance here. Figure 2.1 presents a formal model of models showing an archaeological research process which incorporates the use of computers. It attempts to represent the symbiotic relationships between a series of different models that enable us to make statements about the past. This is left intentionally vague because versions of 'the past' are variable, with my, your, our and their pasts all being potentially very different and valid in different ways. An individual's perception of the past is largely determined by unconscious attitudes as well as the theoretical and philosophical stance from which the analysis is performed, as is evident from much recent writing on archaeological theory. Suffice it to say at this point that, whichever particular version of the past is of interest, the aim of archaeology must be to make coherent and meaningful statements about it.
Figure 2.1: A formal model of an archaeological research process involving the use of computers.
The Data Model and the Theoretical Model have always been the basis of archaeological analysis, whether explicitly or implicitly. It is difficult to imagine even the most theoretical of archaeologists not connecting with some data at some point; conversely, the most empirical approach is a theoretical stance in itself. The Data Model incorporates practical and philosophical questions of data structures, coding systems, what to record and what to measure. This brings to the forefront the debate on the objectivity of data and the now generally accepted view that data are theory-laden; whether we can ever have 'things given' or accept that we work with 'things made' (Chippindale forthcoming). The Theoretical Model is the analytical engine that drives the Data Model with questions concerning the who, why, where, when and how of our particular perception of the past. The interaction of these two models, which involves theory generation and theory testing in an intuitive, iterative loop, represents the formal process of archaeological analysis and is not dependent on the use of a computer. This is similar to a part of Read's discussion (1990, 33) of mathematical constructs and archaeological theory in which he details the relationship between Models of Data and Theory.

The current argument suggests that computer-based analyses operate at a further level removed from the target past by an intervening Digital Model. The Digital Model is necessarily a representation of selected, combined elements of the Data and Theoretical Models which are relevant to the analysis or study in hand. There is a very simple but deterministic bottom line to this: if something (most likely to be an entity, a relationship between entities or a concept) cannot be represented within a digital environment then it cannot be included within a Digital Model and, by implication, in a computer-based analysis.
An example of such a limitation is shown by the ongoing debate concerning GIS applications in archaeology and the attempts to incorporate cognitive and perceptual spatial data, as opposed to environmental data which are more readily represented digitally (Lock and Stancic forthcoming). The results of working with a digital model can be seen as a virtual or surrogate past which enables the generation of statements about the target past. Performing any sort of computerised analysis illustrates the complex, symbiotic relationships that exist between the three models shown in Figure 2.1. It is an iterative web in which changes to any one model have repercussions throughout. The results of running the digital model, i.e. an analysis, may initiate the rethinking of the underlying theoretical model, inducing a different view of the data model and a rerun of the digital model to complete the loop. This is the essence of exploratory analysis and the 'what if?' approach, which depends on an iterative web rather than the more rigid structure of confirmatory hypothesis testing. The properties of the digital model are central to the theme of this paper and fundamental to the use of computers in archaeology. The capabilities of the Digital Model to incorporate more aspects of the Data and Theoretical Models create an environment from which statements about the past are generated. It follows, therefore, that the richer the Digital
[Figure: two parallel strands in the development of archaeological computing, running from data-minimal to data-rich environments. Technology driven: mainframe computers and multivariate statistics; then microprocessors, increased access, increased software, graphics, integrated software, visualisation and multimedia. Theory driven: culture-history, classification and typology; then the 'New Archaeology' (quantification, scientism, processualism, confirmatory, reductionist, data-minimising); then 'post-processual' approaches (exploratory, non-confirmatory, contextual, visualisation, data-enriching).]

Figure 2.2: A suggested development of archaeological computing.
Model in terms of included data and theory, the richer the resulting statements. This introduces another symbiotic relationship, this time between the technology, which determines the capabilities of the Digital Model, and changing archaeological theory. This relationship can be viewed within the evolutionary framework of the development of archaeological computing (Figure 2.2), incorporating developments in digital technology in parallel with those in archaeological theory. It is apparent from this that the term archaeological computing as used here refers not just to the technology but also to the underlying archaeological theory driving its use, and it would be impossible to 'do' archaeological computing in a theoretical vacuum.
2.3 The evolution of archaeological computing
Way back at the dawn of archaeological computing in the 1960s the available technology consisted of mainframe computers with a limited selection of software. This was a crucial decade in the development of both archaeological computing and archaeological theory, and in the establishment of a relationship between them. The advent of the so-called New Archaeology, now usually referred to as the processual school, encouraged an explicit use of computer-based quantitative methods. However, such methods were also developing independently of this new, and mainly North American-based, movement. In Britain, the work of Hodson and others was concerned with the application
ARCHAEOLOGICAL COMPUTING, ARCHAEOLOGICAL THEORY AND MOVES TOWARDS CONTEXTUALISM
of numerical taxonomy to archaeological data, an approach rooted in the existing European tradition of culture-historical archaeology, focussing on concerns of classification and culture change based on typologies and seriated sequences (Hodson et al. 1966 and many of the papers in Hodson et al. 1971). One of the main strengths of mainframe computers during the 1960s was their number-crunching abilities and the running of multivariate statistical software. This offered a severely limited, data-minimal digital model in which to operate, although it was adequate for the contemporary data-minimising theory. The data requirements of the software were a numerical matrix and, therefore, reductionist in the extreme. This mirrored the theoretical precepts of classification and typology, involving the identification and comparison of diagnostic traits, usually of isolated, de-contextualised groups of artefacts or, at best, artefacts within discrete archaeological contexts. This comfortable match between the available digital model and the prevailing theoretical paradigm was also applicable to much of the archaeological computing advocated by the processualists. Quantification, and by implication a computer-based approach, became inextricably linked to the reductionist scientism seen to be central to a processual approach to explaining the past. This was made explicit very early in the debate with the introduction of multivariate statistics by the leading proponents of the New methodologies (Binford and Binford 1966). Intrinsic to this paradigm was a belief in objectivity and a belief that the processes, structures and behaviour that formed the archaeological record could be reached via appropriate methodologies, of which hypothetico-deductivism, hypothesis testing and quantification formed an important part.
Of course not all contemporary archaeologists, even those at the forefront of archaeological computing, agreed with rigid processual approaches, and it is refreshing to re-read Doran and Hodson (1975, Chapter 13) for an alternative view. Other alternative views developed through the 1980s into what is now generally labelled Post-processual archaeology, which can be seen as part of the wider post-modernist movement. Post-processual archaeology is, in effect, not so much an integrated school of thought as a divergent series of theoretical approaches united mainly by their critique of processualist methods. It is way beyond the theme of this paper (and this author!) to detail developments in archaeological theory other than to identify aspects which are relevant to archaeological computing. However, it is important to note that this is not a central theme of most post-processual writing, although it is recognised that quantitative methods do have a part to play in such methodologies (Shanks and Tilley 1987). The divergence of approaches which shelter beneath the post-processualist umbrella makes it difficult to generalise on the perceived role of computing. In a recent review (Barrett 1994, Chapter 7) it gets no mention at all, whereas in a paper which attempts to forge links between processual and post-processual methodologies concerning sociocultural theory (Cowgill 1993), quantitative methods are seen to be central. Of course post-processualism is not the end of the story. The development of archaeological theory is
an endless continuum and reactions to aspects of post-processual thought have been many and varied. Both of the last two citations above can be seen as examples of moving the debate beyond post-processualism, and a recent collection of papers (Yoffee and Sherratt 1993) presents a range of alternative theoretical positions. One theme which can be identified within much writing on post-processual archaeology is that of context and contextualism. Hodder (1986; 1987), for example, has shown the importance of context in terms of both the context of data and the social and cultural context of the analyst (although see Barrett 1987 for an alternative view of what contextualism is about). It has been argued by Kohl (1993, 13) that 'knowledge is never absolute nor certain but must be contextualised'. Of course the concept of context is open to many different interpretations and applications, but with reference to the process suggested in Figure 2.1, context is integral to the Digital Model. In this sense context is not just concerned with the inclusion of more and varied data but also with the links and relationships between data which can be stored and studied. The knowledge we produce generates statements about the past and, in a computer-based analysis, these are a product of the Digital Model. It follows, therefore, that data-rich digital models and data-enriching theoretical approaches are symbiotically linked within a methodology which acknowledges the primacy of contextuality. Figure 2.2 suggests that developments in information technology over the last three decades have been steadily moving towards increasingly data-rich digital environments. The parallel with trends in archaeological theory is not trivial and represents a shift from theory-driven deductive methods which can operate in data-minimal environments to data-driven inductive approaches which are data hungry.
This is mirrored in changing sociocultural theory and views of culture from something that can be reduced to a series of structures, laws and quantifiable patterns to something that is a much more complex system of interacting values and beliefs. A worldview, in fact, that is not rigidly structured and minimising but is contextual, the study of which demands data-rich contextual models.
2.4 Towards data-rich contextualism
It is possible to illustrate this trend towards contextualism in archaeological computing with specific examples, starting with databases, which are probably the most widely used software tool in archaeology. Early flat-file database structures were rigid, data-minimal models which constrained the representation of relationships between entities. Of the subsequent data models that have emerged, the relational model has found favour in allowing a much richer and theoretically satisfying representation of entities, attributes and relationships. Of the many examples in archaeology, the relational model of excavation recording developed by Andresen and Madsen (1992) is an excellent example of this move towards data-richness. The work of Goodson (1989) is an early example of the inclusion of images within database structures and can be seen as a step towards the exciting and very rich digital environments of interactive
GARY LOCK
multimedia. The importance of these technological developments for contextuality lies not just in the vastly improved range of data-types that are brought into play within an integrated digital environment, but in the concept of non-linear access and the complexity of links between data that are fundamental to multimedia authoring (Rahtz et al. 1992 demonstrate this with the example of excavation reports). Large multimedia databases can include not only images and text but also moving video, animation and sound. Archaeology operates within a multi-dimensional world which is analogous with the concept of contextuality and the functionality of digital multimedia. It follows from this that multimedia data, which are infinitely cross-linked within multi-dimensional hyperspace, reflect changes in theoretical approaches by encouraging data-driven analyses rather than the theory-driven deductive methods enforced by data-poor digital models.

For databases with a spatial component, and especially those purporting to be analytical rather than purely representational, we need to examine the past, current and future role of Geographic Information Systems in archaeology. GIS is a multi-million dollar bandwagon that has been gathering momentum across international markets, as well as within the rather smaller world of archaeology (Allen et al. 1990). This is based partly on the appeal of the data-rich spatial environment on offer and on the powers of spatial visualisation afforded (Lock and Harris 1992). Despite the tremendous amount of hype that has been generated around GIS applications in archaeology, one of the most interesting debates at the moment concerns the poverty of the available digital model and its mismatch with current theoretical models, especially concerning landscape archaeology (discussed in detail by Gaffney and van Leusen, forthcoming).
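The contrast drawn earlier between flat-file and relational recording structures can be made concrete with a toy excavation schema. This is a hypothetical simplification in SQLite, not the actual structure of Andresen and Madsen's system; the point is that stratigraphic relationships are themselves stored as data which queries can traverse, rather than being flattened into repeated columns:

```python
import sqlite3

# In-memory database holding a hypothetical, minimal excavation schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE context (id INTEGER PRIMARY KEY, type TEXT);
CREATE TABLE find (id INTEGER PRIMARY KEY,
                   context_id INTEGER REFERENCES context(id),
                   material TEXT);
-- Stratigraphic links ('earlier than') are data, not fixed columns.
CREATE TABLE stratigraphy (earlier INTEGER REFERENCES context(id),
                           later INTEGER REFERENCES context(id));
""")
con.executemany("INSERT INTO context VALUES (?, ?)",
                [(1, "pit fill"), (2, "pit cut"), (3, "layer")])
con.executemany("INSERT INTO find VALUES (?, ?, ?)",
                [(10, 1, "pottery"), (11, 1, "bone"), (12, 3, "flint")])
con.executemany("INSERT INTO stratigraphy VALUES (?, ?)", [(2, 1), (1, 3)])

# A contextual query: all finds from contexts lying earlier than context 3.
rows = con.execute("""
SELECT f.material FROM find f
JOIN stratigraphy s ON f.context_id = s.earlier
WHERE s.later = 3
ORDER BY f.id
""").fetchall()
print(rows)  # → [('pottery',), ('bone',)]
```

A flat file would have to repeat or omit these links; here the relationships between entities are first-class data, which is precisely the data-enriching quality argued for above.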
It is no longer of great novelty in terms of research methodology to know that certain types of sites can be shown to display a locational preference for a certain altitude, soil type, slope, aspect, distance from water, or any other environmental correlate. This deterministic analytical approach can be seen in theoretical terms as a throwback to processualism and a data-minimal digital model. The debate concerning the theoretical validity of GIS has raged in geography for some time (Openshaw 1991), where the technology has been seen by some as the 'quantifiers' revenge' over the post-modernists. The deterministic origins of GIS in archaeology are not difficult to trace to the initial uses of the technology for the predictive modelling of site location within Cultural Resource Management projects. The literature on predictive modelling is extensive, but see Judge and Sebastian (1988) and Warren (1990) for the fundamentals, and Brandt et al. (1992) and Kvamme (1992) for the persistence of the methodology. Current theoretical models of social theory and cognitive aspects of space and landscape are at odds with the limited digital model offered by current GIS software. Modern human spatial cognition is extremely difficult to understand and represent (Mark 1993) and the problems are multiplied many-fold when dealing with past peoples (Zubrow 1994). It is very pertinent here to repeat the statement in section 2.2
above, that if something cannot be represented digitally then it cannot be included in the digital model and, by implication, in a computer-based analysis. This is the problem with variables relevant to much cognitive and social landscape theory: they are difficult to isolate, measure and record digitally. The theoretical model has forced the existing digital model to its limits and found it wanting. Archaeologists have been quick to attempt compromise and have recognised viewshed analysis as the most promising aspect of the current digital model in an attempt to bridge this gap (Wheatley 1993 and several papers in Lock and Stancic forthcoming). There is no doubting that the future of GIS lies in moves towards increasingly data-rich environments and increasing multidimensionality. This can be seen in terms of multimedia GIS, which are showing great potential (Buttenfield and Weber 1993), and in the more specific improvement in dimensionality inherent within truly 3-dimensional GIS (Raper 1989; Turner 1991). One of the exciting implications of the latter for archaeology is the development of temporal GIS, where time and change through time are represented on a continuous z-axis (Lock and Harris forthcoming). This would free analysis from another severe restriction of the current GIS digital model, which forces time into a categorical theoretical model through the use of coverages. The promise of continuous time together with fuzzy temporal and spatial boundaries is part of the move towards the increasing richness of future GIS and, perhaps, an improved fit between the digital and theoretical models.

Statistics were a part of the central creed of processual methodologies and their manipulation of the data-rich digital models of post-processualism is perhaps less obvious than for other areas of archaeological computing. It has been argued above that GIS hold the potential for data-rich spatial analysis and that should include spatial statistics.
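Viewshed analysis, identified above as the most promising bridge, reduces at bottom to repeated line-of-sight tests across a digital elevation model. The following is a minimal sketch with invented elevations and a crude sampling scheme, not a production GIS algorithm:

```python
def visible(dem, viewer, target, eye_height=1.7, steps=50):
    """Line-of-sight test on a grid DEM: sample elevations along the
    straight line from viewer to target and check that no intervening
    cell rises above the sight line. dem is indexed [row][col]."""
    (r0, c0), (r1, c1) = viewer, target
    z0 = dem[r0][c0] + eye_height          # observer's eye elevation
    z1 = dem[r1][c1]                       # ground elevation at target
    for i in range(1, steps):
        t = i / steps
        r, c = r0 + t * (r1 - r0), c0 + t * (c1 - c0)
        terrain = dem[round(r)][round(c)]  # nearest-cell elevation
        sight = z0 + t * (z1 - z0)         # height of the sight line here
        if terrain > sight:
            return False
    return True

def viewshed(dem, viewer):
    """Set of cells visible from the viewer."""
    return {(r, c) for r in range(len(dem)) for c in range(len(dem[0]))
            if visible(dem, viewer, (r, c))}

# Invented 1x5 transect: a 10 m ridge at column 2 blocks columns 3 and 4.
dem = [[0, 0, 10, 0, 0]]
vs = viewshed(dem, (0, 0))
print((0, 1) in vs, (0, 3) in vs)  # True False
```

Real implementations refine the interpolation and cost of this test, but the structure is the same: the viewshed is the cell-by-cell accumulation of such line-of-sight judgements, which is why it remains computable within current data-minimal GIS while gesturing towards perception and cognition.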
A major failing of traditional spatial statistical methods (Hietala 1984, for example) is that they are too reductionist and incapable of incorporating background, or contextual, information. To reduce complex archaeological spatial reality to a series of points, then test for randomness and produce a probability value, has failed to capture the interest of most archaeologists looking for spatial interpretations of their data. GIS should offer a much more productive environment for spatial statistical analysis, although most software available at the moment is surprisingly lacking in this area. Kvamme (1993) has been a pioneer in attempting to integrate spatial statistics into archaeological GIS applications. In a recent short article (Kvamme 1994) he comments on the relationship between the theory-down deductive reasoning of formal methods and the exploratory, data-up inductive approaches encouraged by visualisation. He argues for an integration of the two to create a 'healthy spatial archaeology' which must utilise the data-rich digital models described here rather than the data-minimal models upon which traditional spatial statistics operated. Various types of multivariate statistics, usually based on some kind of similarity matrix of cases and variables, are equally reductionist and were equally central within processual methodologies. It is of interest that the latest textbook of multivariate techniques includes 'Exploratory' in
its title (Baxter 1994) and the author stresses the point that such methods need not be aligned with any particular theoretical stance, thus avoiding the dangers of 'throwing methodological babies out with the theoretical bathwater' (ibid, 8). Twenty years after the first book of its kind (Doran and Hodson 1975), it is difficult at first to see how multivariate statistical methods have evolved to fit into the general trend towards contextuality and the data-rich digital models described here. Of course, some problems are suited to statistics by being purely analytical rather than requiring a wider context. Perhaps quantitative methods really are sufficiently different to the rest of archaeological computing to warrant the separatist title of this volume and its associated conference. Having said that, there is the potential of future convergence between the data-rich models of other forms of archaeological computing and multivariate statistical analysis, and the key words for this link are multidimensionality, visualisation and exploratory. A promising route towards this future is shown in an enlightening paper by Openshaw (1994), which is one of several discussing spatial analysis and GIS (Fotheringham and Rogerson 1994). Openshaw's theme is one of data-driven exploratory analysis within a data-rich multidimensional digital model. He outlines Tri-Space Analysis, which utilises variables inherent within Geographic Space, Temporal Space and Attribute Space, thus forming the totality of the GIS environment open to pattern-spotting procedures. The analytical power is invested in a STAM (Space-Time-Attribute Machine) which in essence operates a search-everywhere philosophy. A refinement of this involves the concept of A.L. (Artificial Life), a branch of A.I. (Artificial Intelligence), in which a STAC (Space-Time-Attribute Creature) in the form of a hyper-sphere roams around the database, feeding on data and adapting to its environment via a genetic algorithm.
In simple analytical terms this means testing different groups of variables within the multidimensional database for patterning, whether that is deviation from randomness or patterning according to any other definition. As the analysis proceeds, so the algorithm controlling it can change its aims according to the data it processes. The other important aspect of this approach is that it brings AI, an area of archaeological computing which has declined in acceptance after a period of intense interest, into line with the general trend towards data-rich digital models and contextuality. Of course, this is not suggesting that within the next few years every archaeological unit will be employing artificial life to perform its post-excavation work, but it does reinforce a convergence based on this trend, and the following quote from Openshaw is particularly apposite: "Suddenly, the opportunity exists to take a giant step forward not by becoming clever in an analytical sense, but purely by becoming cruder and more computationally oriented in a way that allows the computer to do most of the work. It is almost as if computer power is to be used to make up for our ignorance of the nature and behaviour of the complex systems under study." (Openshaw 1994, 91). This is not suggesting that brute computer force is a substitute for theory or thinking, but that the digital models
available today are approaching the richness of theoretical approaches.
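The flavour of such a search-everywhere procedure can be conveyed with a deliberately crude sketch. This is a hypothetical brute-force analogue, not Openshaw's STAM or STAC: it scans every small spatial window in every time period of an invented space-time-attribute table and flags those whose record count reaches a threshold:

```python
from itertools import product

# Hypothetical records: (x, y, period, artefact_class) — invented data.
records = [(1, 1, 0, "pot"), (1, 2, 0, "pot"), (2, 1, 0, "pot"),
           (2, 2, 0, "pot"), (8, 8, 3, "flint"), (5, 9, 1, "pot")]

def brute_scan(records, size=3, grid=10, periods=4, threshold=3):
    """Search everywhere: slide a size x size spatial window through
    every time period and report windows holding >= threshold records."""
    hits = []
    for x0, y0, t in product(range(grid - size + 1),
                             range(grid - size + 1), range(periods)):
        n = sum(1 for (x, y, p, _) in records
                if x0 <= x < x0 + size and y0 <= y < y0 + size and p == t)
        if n >= threshold:
            hits.append((x0, y0, t, n))
    return hits

# Four overlapping period-0 windows each capture the invented cluster.
print(brute_scan(records))
```

A genetic algorithm replaces this exhaustive loop with a population of roaming search regions, but the analytical act is the same: computationally crude, data-driven pattern-spotting across the whole of geographic, temporal and attribute space.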
2.5 Conclusions
In the collection of papers mentioned above (Yoffee and Sherratt 1993), Bradley (1993) talks of a loss of nerve in archaeology with a polarisation between two opposites. On the one hand are the scientists who use ever more specialist equipment and techniques to perform increasingly detailed analyses, usually on artefacts, and generally add little to the wider understanding of past human behaviour. On the other are the post-processualist theoreticians who appear to have become so disillusioned with the archaeological record, and insist on such introspection, that they have become the PC police of archaeology (not IBM compatible!). The majority of working archaeologists are suspended somewhere in the middle, feeling decidedly uncomfortable with a move in either direction. Bradley's plea is for a rejuvenation in archaeological creativity, to be able to bridge the gap between the alienating extremes of formal scientific methods and theoretical ideology. Such creativity, I suggest, depends on a fertile mix of data and theory within an environment that encourages unrestricted interaction between the two. It is my hope that, as technological limitations fade into the background, archaeological computing, powered by ever more data-rich and contextual digital models, can play its part in the archaeological theory of tomorrow.
Acknowledgments

I would like to thank Andrew Sherratt for commenting on an earlier version of this paper.
References

ALDENDERFER, M. S. (ed.) 1987. Quantitative Research in Archaeology: Progress and Prospects, Sage Publications, New York.
ALLEN, K. M. S., GREEN, S. W. & ZUBROW, E. B. W. (eds.) 1990. Interpreting Space: GIS and archaeology, Taylor and Francis, London.
ANDRESEN, J. & MADSEN, T. 1992. 'Data structures for excavation recording. A case of complex information management' in C. U. Larsen (ed.) Sites and Monuments: National Archaeological Records, 49-67, The National Museum of Denmark, Copenhagen.
ANDRESEN, J., MADSEN, T. & SCOLLAR, I. (eds.) 1993. Computing the Past. CAA92: Computer Applications and Quantitative Methods in Archaeology, Aarhus University Press, Aarhus.
BARRETT, J. C. 1987. 'Contextual archaeology', Antiquity, 61, 468-473.
BARRETT, J. C. 1994. Fragments from Antiquity: An archaeology of social life in Britain, 2900-1200 BC, Blackwells, London.
BAXTER, M. J. 1994. Exploratory Multivariate Analysis in Archaeology, Edinburgh University Press, Edinburgh.
BELL, J. A., LOCK, G. R. & REILLY, P. (eds.) 1986. Formal Methods, Quantitative Techniques and Computer Applications in Theoretical Archaeology, Science and Archaeology, 28 (Special Edition).
BINFORD, L. R. & BINFORD, S. R. 1966. 'A preliminary analysis of functional variability in the Mousterian of Levallois facies', American Anthropologist, 68, 238-295.
BRADLEY, R. 1993. 'Archaeology: the loss of nerve' in N. Yoffee & A. Sherratt (eds.), 131-133.
BRANDT, R., GROENEWOUDT, B. J. & KVAMME, K. L. 1992. 'An experiment in archaeological site location: modelling in the Netherlands using GIS techniques', World Archaeology, 24, 268-282.
BUTTENFIELD, B. P. & WEBER, C. R. 1993. 'Visualisation and Hypermedia in Geographical Information Systems' in Medyckyj-Scott and Hearnshaw (eds.), 136-147.
CHIPPINDALE, C. (forthcoming). 'Capta (not Data), Information, Knowledge and Understanding' in S. Ross (ed.) The Problems and Potentials of Electronic Information for Archaeology, The British Academy, London.
CLARKE, D. L. (ed.) 1972. Models in Archaeology, Methuen, London.
COWGILL, G. L. 1993. 'Distinguished Lecture in Archaeology: Beyond Criticizing New Archaeology', American Anthropologist, 95(3), 551-573.
DORAN, J. E. & HODSON, F. R. 1975. Mathematics and Computers in Archaeology, Edinburgh University Press, Edinburgh.
FOTHERINGHAM, S. & ROGERSON, P. 1994. Spatial Analysis and GIS, Taylor and Francis, London.
GAFFNEY, V. & VAN LEUSEN, M. (forthcoming). 'GIS, Environmental Determinism and Archaeology: a parallel text', in G. Lock & Z. Stancic (eds.).
GOODSON, K. J. 1989. 'Shape information in an artefact database' in S. P. Q. Rahtz & J. D. Richards (eds.) Computer Applications and Quantitative Methods in Archaeology 1989, 349-361, British Archaeological Reports International Series 548 (2 volumes), Oxford.
HIETALA, H. J. (ed.) 1984. Intrasite Spatial Analysis, New Directions in Archaeology, Cambridge University Press, Cambridge.
HODDER, I. 1986. Reading the Past: current approaches to interpretation in archaeology, Cambridge University Press, Cambridge.
HODDER, I. (ed.) 1987. The Archaeology of Contextual Meaning, Cambridge University Press, Cambridge.
HODSON, F. R., SNEATH, P. H. A. & DORAN, J. E. 1966. 'Some experiments in the numerical analysis of archaeological data', Biometrika, 53, 311-324.
HODSON, F. R., KENDALL, D. G. & TAUTU, P. (eds.) 1971. Mathematics in the Archaeological and Historical Sciences, Edinburgh University Press, Edinburgh.
JUDGE, J. W. & SEBASTIAN, L. 1988. Quantifying the Present and Preserving the Past: Theory, Method and Application of Archaeological Predictive Modelling, US Department of the Interior, Denver.
KOHL, P. L. 1993. 'Limits to a post-processual archaeology (or, The dangers of a new scholasticism)', in N. Yoffee & A. Sherratt (eds.), 13-19.
KVAMME, K. L. 1992. 'A Predictive Site Location Model on the High Plains: An Example with an Independent Test', Plains Anthropologist, 56(2), 19-40.
KVAMME, K. L. 1993. 'Spatial statistics and GIS: an integrated approach', in J. Andresen et al. (eds.), 91-103.
KVAMME, K. L. 1994. 'Computer visualization vs. statistics: how do they fit together?', Archaeological Computing Newsletter, 38, 1-2.
LOCK, G. R. & HARRIS, T. M. (forthcoming). 'Analyzing change through time within a cultural landscape: conceptual and functional limitations of a GIS approach', in P. Sinclair (ed.) Urban Origins in Eastern Africa, One World Archaeology series, Routledge, London.
LOCK, G. R. & STANCIC, Z. (eds.) (forthcoming). Archaeology and Geographic Information Systems: A European Perspective, Taylor and Francis, London.
MARK, D. M. 1993. 'Human Spatial Cognition', in Medyckyj-Scott & Hearnshaw (eds.), 51-60.
MEDYCKYJ-SCOTT, D. & HEARNSHAW, H. M. 1993. Human Factors in Geographical Information Systems, Belhaven Press, London.
OPENSHAW, S. 1991. 'A view on the GIS crisis in Geography, or using GIS to put Humpty-Dumpty back together again', Environment and Planning A, 23, 621-628.
OPENSHAW, S. 1994. 'Two exploratory space-time-attribute pattern analysers relevant to GIS', in S. Fotheringham & P. Rogerson (eds.), 83-104.
RAHTZ, S., HALL, W. & ALLEN, T. 1992. 'The development of dynamic archaeological publications', in P. Reilly and S. Rahtz (eds.), 360-383.
RAPER, J. F. (ed.) 1989. Three Dimensional Applications in Geographic Information Systems, Taylor and Francis, London.
READ, D. W. 1987. 'Archaeological Theory and Statistical Methods: Discordance, Resolution, and New Directions', in M. S. Aldenderfer (ed.), 151-184.
READ, D. W. 1990. 'The utility of mathematical constructs in building archaeological theory', in A. Voorrips (ed.), 29-60.
REILLY, P. & RAHTZ, S. (eds.) 1992. Archaeology and the Information Age: A global perspective, One World Archaeology Series 21, Routledge, London.
RENFREW, C. & COOKE, K. L. (eds.) 1979. Transformations: Mathematical Approaches to Culture Change, Academic Press, New York.
RICHARDS, J. D. 1986. 'Computers in Archaeological Theory and Practice', in Bell et al. (eds.), 51-55.
SHANKS, M. & TILLEY, C. 1987. Re-Constructing Archaeology, Cambridge University Press, Cambridge.
TURNER, A. K. (ed.) 1991. Three Dimensional Modelling with Geoscientific Information Systems, Kluwer Academic Publishers, Dordrecht.
VOORRIPS, A. (ed.) 1990. Mathematics and Information Science in Archaeology: a flexible framework, Studies in Modern Archaeology, HOLOS-Verlag, Bonn.
WARREN, R. 1990. 'Predictive modelling of archaeological site location', in K. M. S. Allen et al. (eds.), 201-215.
WHEATLEY, D. 1993. 'Going over old ground: GIS, archaeological theory and the act of perception', in J. Andresen et al. (eds.), 133-138.
YOFFEE, N. & SHERRATT, A. (eds.) 1993. Archaeological Theory: who sets the agenda?, New Directions in Archaeology, Cambridge University Press, Cambridge.
ZUBROW, E. B. W. 1994. 'Knowledge representation and archaeology: a cognitive example using GIS', in C. Renfrew and E. B. W. Zubrow (eds.) The Ancient Mind: Elements of Cognitive Archaeology, New Directions in Archaeology, Cambridge University Press, Cambridge.
3 The good, the bad, and the downright misleading: archaeological adoption of computer visualization

Paul Miller¹ and Julian Richards²
Department of Archaeology, University of York, The Kings Manor, York, YO1 2EP, UK.
Email: ¹[email protected], ²[email protected]
3.1 Introduction
Over the past decade, archaeologists have been quick to adopt the techniques of computer visualization, often with eye-catching effect (Reilly 1988, 1992). The images produced by these techniques serve to enable archaeologists and public alike to visualize past monuments, landscapes and excavations, truly bringing the past to life. Over the years, delegates to CAA conferences have been awed by the latest piece of solid modelling, or reduced to avaricious jealousy by the price tag on the computer producing it. We suggest, however, that there are dangers, both in the techniques themselves and in the way they have been utilised within archaeology; dangers that to a large degree have gone unrecorded by practitioners and the greater archaeological community alike.
3.2 Visualization
'Visualization' as used here refers to the computerised exploration of data which have been converted into displayable geometric objects. However, it is more than the application of image processing, solid modelling, or GIS techniques. Visualization is an interactive process whereby large and potentially complex data sets may be displayed on the computer screen and explored to reveal new insights. It is also a methodology, a way of looking at and approaching the problems of imparting data to an audience. As such, computer-based visualization has tremendous research potential within a discipline such as archaeology:

• archaeology is a very visual subject. Its data frequently comprise images - of artefacts or sites; aerial photographs, geophysical survey plots, satellite images, or excavation plans

• archaeological data sets can be very large, with a single survey or excavation producing thousands of records

• archaeological data can be extremely complex, with uncharted relationships between a number of variables
However, to date the catalyst for visualization in archaeology has not been the search for improved techniques for discovering new knowledge, but rather the search for improved ways of presenting existing knowledge to the public.
3.3 Visualization in Archaeology
The general perception of computer visualization in archaeology is of impressive but expensive computer reconstructions of some of the major buildings of antiquity. Indeed, in Britain, visualization projects originated in the 1980s, starting with Bowyer and Woodwark's reconstruction of the Temple Precinct from Roman Bath (Smith 1985; Woodwark 1991), and that of the Roman legionary bath house at Caerleon in Wales (Woodwark 1991). These pioneering projects inspired a succession of visualization applications elsewhere in the UK, including the Saxon Minster at Winchester (Burridge et al. 1989), the Cistercian foundation at Furness Abbey in Lancashire (Wood & Chapman 1992), the Roman palace at Fishbourne (Cornforth & Davidson 1989), and Kirkstall Abbey (Dew et al. 1990). Due to the success of the earlier temple project, Bath City Council commissioned a second model, resulting in a short video of the civic bath complex (Lavender et al. 1990). The resulting video provided a walk through the civic bath complex and consisted of hundreds of images, each of which took many minutes to compute on powerful workstations. As with other models, the final video was impressive, but it was impossible to deviate from the route laid down by the programmers, and even the single images were far from interactive, due to the time required for computation. Elsewhere in Europe there were other experiments with visualization technology, including the Stabian baths in Pompeii (Moscati 1989), the medieval church of St Giovanni in Sardinia (Soprintendenza Archeologia per le Provincie di Sassari e Nuoro 1989), the Athenian Acropolis (Eiteljorg 1988), and the pyramids at Giza in Egypt (Labrousse & Comon 1991). With all of these models, the raison d'être was to make a reconstruction of the site accessible to the general public.
They are generally of buildings which have been excavated or recorded, but where poor survival makes it difficult to visualize the appearance of the original structure without artificial aids of this type. For example, the civic bath complex in Bath is one of the most visited tourist attractions in the UK, with around one million visitors each year. The surviving remains, although impressive, are only a faint echo of the original imposing structure, and are overshadowed by later structures.
PAUL MILLER AND JULIAN RICHARDS
An important factor is that most of these computer-based projects are concerned with Classical or Romanesque architecture. In most cases the architectural principles are well understood. The building foundations had been excavated and the above-ground appearance was known, or at least calculable, from surviving buildings. Therefore in each of these cases the archaeologists had already formed a fairly clear view of what the building looked like before they considered developing the computerised model. Thus there was little extra to be learnt by constructing the model. This is not to say that the visualization failed to provide any new insights, but merely that it was not the primary aim of the project.

The models were developed on equipment that was at the time state of the art, and using software that was only available in a limited number of corporate research centres. In most cases they were the result of sponsorship by a large commercial organisation, such as IBM, Fiat, or BNFL, who had quite cynically targeted archaeology as a discipline in which they could gain public relations points in an area which was politically safe, and at relatively little cost to them. They were each aware that archaeology attracts media attention and arouses large-scale sympathetic public interest, while those working within the profession would welcome collaboration due to lack of resources. Each project was a result of collaboration between computer scientists and archaeologists, rather than being archaeologically controlled. In most cases the visualization software itself was not accessible to the archaeologists and therefore the computer scientists were interposed between them and their data. The archaeologists did not have direct control of the modelling themselves. These high-tech reconstruction models are the direct successors of the water-colour drawings of Alan Sorrell which still grace many site guide books.
The increasing use of computer-based reconstructions has not been developing in isolation, and may be seen as a result of several related factors:
•	the recent and massive growth of the heritage industry, and the explosion in museums and heritage centres, opening, it has been estimated, at the rate of one per day in the UK (Hewison 1987). Besides being under growing pressure to find new ways of attracting the public, museums and heritage bodies also have the raw materials necessary.
•	as a result, there is money available for model development within the 'Heritage Industry'.
•	although initially expensive, computer modelling is now relatively cheap, and when compared with the cost of employing a draughtsman for several months to prepare a single drawing, the development of a reusable computer model may be the cheaper option.
•	the increased expectations of a public used to a diet of high-quality multimedia presentations, from school learning software upwards.
However, these models carry a degree of authority which Sorrell's drawings never had. No clouds or wisps of smoke hide those areas where interpretation was difficult; the question marks and qualifications of the excavation report are reduced to the clinical fixed measurements of the architectural plan. Furthermore, computer models carry more authority than paper images; people expect computers to be right, and the past is therefore presented as a known - and knowable - reality. We do not know of any examples where alternative reconstructions have actually been published, and clearly there is a major danger here. Large audiences are being exposed to visualizations in circumstances where the pictures or animations are divorced from the academic discussion (both technical and theoretical) associated with their development. Most archaeologists are keen to emphasise that there are many possible views of the past, and that we rarely know anything for certain, but these computer models are constrained by the short-term attention span the heritage industry assumes museum visitors to possess, and the small time-slices devoted to any one model amongst the plethora of images and displays bombarding the visitor. In presenting a very visual and solid model of the past there is a danger that techniques of visualization will be used to present a single politically correct view of the past, and will deny the public the right to think for themselves. This is not a problem that is unique to archaeology. Under the heading of 'Lies, damned lies and slick graphics', New Scientist (Kiernan 1994) warned that computer graphics could be tricking the public - and even the scientists who use them - into believing that speculations or forecasts were in reality proven fact.
Speakers at the Conference of the American Association for the Advancement of Science recently noted that in an image-hungry world, a computer forecast of patterns of air pollution was more effective in influencing policy makers and politicians than dry tables of numbers and charts. Worryingly, there is little, if any, quality control for computer graphics, and they are not subject to the same intense peer review as scientific papers. Part of the fault also lies with the visualization tools themselves. Rarely are they capable of displaying uncertainty or fuzzy data. One needs to be able to display levels of probability that a wall stood where it is shown, and the level of confidence in a computer-generated terrain model. An application area where these, and other, considerations are especially pressing at present is the field of GIS, variously described as 'the greatest thing since the map' and 'just another bandwagon'. In archaeology, as elsewhere, the advent of powerful GIS (Allen et al. 1990) has enabled a revolution in the ways in which spatial data are visualized and explored. Allied to this visualization revolution has been the
lowering of barriers between archaeologists and their data; no longer do we need a computer scientist to write our software, a geographer to analyse our distributions, or a cartographer to draw our maps. Now we do it ourselves.
This has enabled many archaeologists to explore their data in ways that were impossible or prohibitively expensive before, but it has also allowed the possibility of far lower standards for work of this nature.
A terrain model, for example, might be displayed which is based upon thousands of accurately surveyed points. Another model could also be displayed which is based upon only a few hundred. Without recourse to ground truthing, or to the data themselves, it becomes very difficult to tell which is the more accurate. As disseminators of information to a data-naive public, we must find techniques for displaying areas of fudged data within our models, and attempt to educate people in the skills of visual data analysis: an awareness of scale, an understanding of the fact that lines on maps often represent fuzzy boundaries, and a perception of the limitations inherent in our data.
A cartographer does far more than draw maps. He has been trained for many years in the science of cartography, and has an awareness of how different elements of map composition go together to make up the whole. In any map he prepares, all of this knowledge and experience is brought to bear, adding to the overall quality of the map, and the accuracy of the information conveyed (Tufte 1990). Untrained archaeologists let loose with a GIS have the tools at hand to produce maps superficially of the same, if not greater, quality as those produced by the cartographer, but without the background expertise. Given such a plethora of tools, it becomes all too easy to generate pretty but meaningless or even misleading maps. Even allowing for the problems of map composition and design, the most able map creator faces many difficulties in representing multidimensional archaeological data within the constraints of the two-dimensional display medium. As archaeologists, we face very different visualization problems to those encountered by other social scientists, as our maps do not merely record the distribution of occurrences through space, but also attempt to represent their temporal distribution (Castleford 1992). Complications begin to arise when data or analysis move beyond the first two dimensions and into the less conceptually concrete third and fourth. Whilst we can perceive these dimensions (up as opposed to down, early rather than late), representing them within GIS in such a way that they may be visualized and manipulated is proving a great challenge. Several ingenious solutions have been found to the problem, but they tend to rely upon the use of snapshots, or slices through the dimension under study (Castleford 1992; Kvamme 1992). This approach enables basic visualization, but dissociates data from their fundamental inter-relationships.
We become resigned to viewing only part of the whole, and begin to forget that our maps are only components of a totality in which past influences present, and low elevations interact with higher ones. In order to truly explore our data, it is necessary to investigate the means by which true multi-dimensionality may be built into our visualization systems. For detailed research, interactive links between data, images and user are also vital, and more important than the aesthetics of the image itself. As mentioned earlier, a major aspect of any archaeological visualization is to create representations of archaeological data that may be seen and understood by other archaeologists and the general public. With this in mind it is important that we recognise the limitations of current techniques and find ways to represent the failings in both our data and the display methodology.
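The kind of confidence display called for here - flagging how well each part of a terrain model is supported by survey data - can be prototyped quite simply. The following sketch (assuming NumPy; the exponential falloff and its parameter are invented for illustration) scores each cell of a display grid by its distance to the nearest surveyed point, so that poorly supported areas could be greyed out or hatched rather than drawn with false certainty:

```python
import numpy as np

def confidence_grid(points, grid_x, grid_y, falloff=5.0):
    """Return a 0-1 'confidence' score for each grid cell that decays
    with distance to the nearest surveyed point. Cells far from any
    survey data score near 0 and could be hatched or greyed out when
    the terrain model is displayed."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    cells = np.stack([gx.ravel(), gy.ravel()], axis=1)
    # distance from every grid cell to every surveyed point
    d = np.linalg.norm(cells[:, None, :] - points[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return np.exp(-nearest / falloff).reshape(gx.shape)

# A densely surveyed area versus a sparse scatter of points
rng = np.random.default_rng(0)
dense = rng.uniform(0, 100, size=(1000, 2))
sparse = rng.uniform(0, 100, size=(30, 2))

grid = np.linspace(0, 100, 50)
c_dense = confidence_grid(dense, grid, grid)
c_sparse = confidence_grid(sparse, grid, grid)

# The dense survey yields uniformly higher confidence across the grid
print(round(float(c_dense.mean()), 2), round(float(c_sparse.mean()), 2))
```

The two models would look equally solid when rendered as surfaces; only an overlay of this kind makes their very different evidential bases visible to the viewer.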
3.4 Conclusion
In conclusion, we believe that computer graphics should carry a health warning. The motive for employing graphical modelling and imaging systems in the entertainment and advertising industries is the creation of pictures that elicit certain kinds of responses from the viewer, such as awe and wonder. Their role is to manipulate the audience. One can see examples of this in archaeology. Paul Reilly (forthcoming) described a German project to rebuild the great baroque Frauenkirche of Dresden, which was destroyed by the firestorms that ravaged the city in the aftermath of the allied bombing raids of 1945. The church had become a symbol of national unity and the project had a high public profile. As part of the campaign to raise funds for the restoration, an award-winning computer-generated tour of the Frauenkirche, restored to its former glory, had been made to show potential sponsors. The animation was accompanied by the music of Johann Sebastian Bach playing on the Frauenkirche's organ. To quote Paul Reilly, 'the image of archaeology that is being projected is one that is dynamic, hi-tech and, unashamedly, commercial' (Reilly forthcoming). As a way of influencing large numbers of people, computer visualization is a potentially powerful tool. On the one hand it can give large numbers of people access to the past, but on the other hand it gives tremendous power to the custodians of the heritage. There are big differences between research, education, entertainment and propaganda, but it is not always easy to draw sharp lines between them. Those who watched the frightening televised virtual world of Oliver Stone's Wild Palms (BBC 2) will have witnessed a nightmare vision at one end of this continuum. As the past becomes increasingly commercialised and the graphics become increasingly sophisticated, this is a problem which we believe is likely to grow in importance into the next century.
References
ALLEN, K. M. S., GREEN, S. W. & ZUBROW, E. B. W. (eds.) 1990. Interpreting Space: GIS and archaeology, London, Taylor and Francis.
BURRIDGE, J., COLLINS, B. M., GALTIN, B. N., HALBERT, A. R. & HEYWOOD, T. R. 1989. 'The WINSOM solid modeller and its application to data visualization', IBM Systems Journal, 28(4), 548-578.
CASTLEFORD, J. 1992. 'Archaeology, GIS, and the time dimension', in G. Lock & J. Moffett (eds.), Computer Applications and Quantitative Methods in Archaeology 1991, 95-106, British Archaeological Reports (S) 577, BAR Publishing, Oxford.
CORNFORTH, J. & DAVIDSON, C. 1989. 'Picturing the Past', Archaeological Computing Newsletter, 19, 1-7.
DEW, P., FERNANDO, L., HOLLIMAN, N., LAWLER, M., MALHI, M. & PARKIN, D. 1990. 'Illuminating chapters in history: computer aided visualization for archaeological reconstruction', Precirculated papers for Information Technology themes at World Archaeological Congress 2, Venezuela, September 1990, Volume 3: Late papers, 1-8, IBM UK Scientific Centre, Winchester.
EITELJORG, H. 1988. Computer-Assisted Drafting and Design: new technologies for old problems, Center for the Study of Architecture, Bryn Mawr.
HEWISON, R. 1987. The Heritage Industry: Britain in a climate of decline, London, Methuen.
KVAMME, K. L. 1992. 'Geographic Information Systems and archaeology', in G. Lock & J. Moffett (eds.), Computer Applications and Quantitative Methods in Archaeology 1991, 77-84, British Archaeological Reports (S) 577, BAR Publishing, Oxford.
LABROUSSE, A. & CORNON, P. 1991. Viewing a Pyramid, Électricité de France, Paris.
MOSCATI, P. 1989. 'Archeologia e informatica: l'esperienza di Neapolis', Rivista IBM, XXV(1), 24-27.
REILLY, P. 1988. Data Visualization: recent advances in the application of graphic systems to archaeology, IBM UKSC report 185, March 1988, IBM, Winchester.
REILLY, P. 1992. 'Three-dimensional modelling and primary archaeological data', in P. Reilly & S. P. Q. Rahtz (eds.), Archaeology and the Information Age: a global perspective, 147-176, Routledge, London.
REILLY, P. (forthcoming). 'Access to insights: stimulating archaeological visualization in the 1990s', in A. Suhadja & K. Biro (eds.), The Future of our Past '93, Hungarian National Museums, Budapest.
REILLY, P. & RAHTZ, S. P. Q. (eds.) 1992. Archaeology and the Information Age: a global perspective, Routledge, London.
SOPRINTENDENZA ARCHEOLOGICA PER LE PROVINCIE DI SASSARI E NUORO 1989. SIP/A - Progetto SITAG. Archeologia del Territorio. Territorio dell'Archeologia: immagini di un'esperienza di catalogazione informatica dei beni culturali della Gallura, Chiarella, Sassari - Tempio Pausania.
SMITH, I. 1985. 'Sid and Dora's Bath show pulls in the crowd', Computing, 27 June 1985, 7-8.
TUFTE, E. R. 1990. Envisioning Information, Graphics Press, Cheshire, CT.
WOOD, J. & CHAPMAN, G. 1992. 'Three-dimensional computer visualization of historic buildings - with particular reference to reconstruction modelling', in P. Reilly & S. P. Q. Rahtz (eds.), Archaeology and the Information Age: a global perspective, 123-146, Routledge, London.
WOODWARK, J. R. 1991. 'Reconstructing history with computer graphics', IEEE Computer Graphics and Applications, January 1991, 18-20.
LAVENDER, D., WALLIS, A., BOWYER, A. & DAVENPORT, P. 1990. 'Solid modelling of Roman Bath', Precirculated papers for Information Technology themes at World Archaeological Congress 2, Venezuela, September 1990, Volume 3: Late papers, 7-13, IBM UK Scientific Centre, Winchester.
4 Democracy, data and archaeological knowledge
J. Huggett
Department of Archaeology, University of Glasgow, Glasgow G12 8QQ, UK
Email: [email protected]
4.1 Introduction
In a recent article (Huggett 1993), I drew attention to what seemed to be an increasingly common phenomenon in papers discussing the application of computers in a variety of archaeological contexts: the concept of the democratisation of knowledge. This phrase has rapidly assumed the status of a buzzword over the last couple of years but it has rarely been questioned in terms of its implications for archaeology and archaeologists. In that sense then, I would suggest that the democratisation of archaeological knowledge using information technology is a classic instance of us, as archaeologists, hurtling down a road without any real idea of why we're doing it, or whether indeed it is a good idea in the first place. This paper addresses such issues as these, looking in particular at what is apparently meant by democratising knowledge and what it might mean for archaeology.
4.2 Computers and Knowledge
The impact of information technology on the production and distribution of knowledge is fundamental to any concept of democratisation. Computers can be integral to all stages: they may be the way by which we acquire knowledge, communicate that knowledge to others, and incorporate that knowledge into whatever is perceived as constituting the corpus of archaeological knowledge. Much archaeological knowledge is communicated orally, and it is impossible to quantify how many 'oral archaeologies' exist now or have existed in the past without ever having been committed to paper. A substantial element in the creation of archaeological knowledge is the number of unpublished conference papers, seminars, and animated bar discussions that take place. Despite this, the primary academically and professionally recognised means of communicating ideas, concepts and knowledge is through publication - a more substantial and systematic method than intangible oral archaeology. However, disseminating published knowledge is slow, it lacks spontaneity, and is often the result of many stages of re-writing, editing, reconsideration, and reformulation. Computers are increasingly being offered as a means of mediating between these two models of knowledge acquisition: they provide the opportunity for faster dissemination through electronic publication, without losing the freedom and spontaneity associated with oral communication. Computer-based communications have been identified as a major cultural revolution (for example, Harnad 1991) - representing not just a
technological paradigm shift like the Gutenberg printing press, but also a symbolic shift towards the dematerialisation of culture. Tangible artefacts are replaced with ephemeral digital forms, words are substituted by pictures, and the solidity of the printed page is replaced by an insubstantial phosphor image on a computer monitor. It is the age of a post-textual archaeology (Sherratt 1993) that is driven by information technology. Nothing is permanent: the knowledge contained within these representations, as well as the representations themselves, can be recomposed by the viewer in a faster, more flexible manner than by using traditional methods. This is in the near-future: archaeological interest in hypermedia is already apparent (for example, Rahtz, Hall & Allen 1992, Wolle 1994), and increasing numbers of people are gaining access to the vast expanses of the Internet with a consequent increase in the amount of networked archaeological resources. In many respects, proponents of knowledge democratisation were initially captivated by the vast amounts of data that are freely available via the Internet, and this excitement led to the creation of an image of a brave new world of free access and use of information.
4.3 Democratising Archaeological Knowledge
What is the democratisation of knowledge? The concept covers a wide variety of different aspects, but it is essentially based upon the increased ease of transmission of information through the use of computers, and hence the ability to share data and to manipulate and reprocess that data in various ways and to different ends (for example, Reilly & Rahtz 1992, 18; Fukasawa 1992, 97). At one level, this raises a host of new and exciting possibilities, including:
•	the ability to access computerised databases on-line, archaeological databases that others have created and made available to us;
•	the ability to reinterpret a site using different technologies to reveal new aspects;
•	the ability to extract digital data for reprocessing within an entirely different context to that within which it was originally created;
•	the ability to combine data from a wide variety of disparate sources.
The potential here is immense, but this vast reserve of data, information and knowledge also raises huge problems. In some respects, these problems are not
simply a result of the ability to access all this information, but are present in computerised data of any kind - as is often the way, new applications reveal the shortcomings of existing methodologies. There are a number of issues that need to be considered.
•	the nature of the information being made available. What is it that is being provided? Why? What do we actually want?
•	the way that information is accessed. The whole concept of democratising information requires access to the material to be available, presumably by definition to everyone. Clearly there are major resourcing implications here.
•	the ownership of the information provided. Is it public domain, or do people get charged for it? If it is in the public domain, is it supplied with conditions - can you freely use the resource or do you only have free access to some aspects? What about issues of copyright and intellectual property?
4.3.1 The content of the resource
What information would be made available in a democratised scenario? There is little doubt that what most people seem to desire is access to data, and in particular access to excavation data. A major problem in archaeology is the publication log-jam - the large numbers of excavations, in many cases undertaken years ago, still being worked on by their excavators, unpublished and sometimes inaccessible. However, this democratisation process has the potential to radically change our attitude to publishing archaeology. Instead of spending years assembling reports based on catalogues with in-depth multivariate analyses which by common consent turn out to be very cold, dull and boring to read, we could mount our datasets more or less immediately on a network and then concentrate on writing the kind of report we'd all like to read, concentrating on interpretation rather than simply reproducing reams of tables and diagrams - the raw data would be available for anyone who wanted to see what the interpretations were based on. Assuming the information is available, however, other difficulties arise. Foremost amongst these is the question of the nature and content of the information. The problem is simply that archaeologists do not record data in the same way. Archaeologists work with different theories and ask different questions. These issues will fundamentally affect the way we record our data and the type of data that we record. Archaeologists develop new methodologies, new twists to old techniques, in order to deal with situations that arise which make standardised methods inappropriate. Problems often require solutions that are specific to that situation. The alternative is to try and shoehorn our methodologies and observations into an inappropriate structure. Of course, the same questions were raised in the past about standardisation, but the fact that there are as yet no formal recording standards, at least in British archaeology,
should surely serve as a warning. The Archaeological Data Archive Project is designed to create a collection of site archives, assembled with the intention of making them available to other archaeologists via media such as the Internet (Eiteljorg, this volume). As those who subscribe to the Archaeological Institute of America List Digest circulated by Nick Eiteljorg on the Internet will be aware, there was a good deal of discussion about standards raised by the founding of this archive project, to the extent that a new section of the list was created. Somewhat ironically, perhaps, once the new list was set up, discussion dried up! The Archaeological Data Archive Project intends to handle such problems by ensuring that all data files are accompanied by full descriptions, detailing the contents, fields and relationships of the information so that they can be reconstructed by the user. Quite rightly, the discussion on the list concluded that without this type of documentation, the information provided could never be fully understood. A vitally important aspect of using someone else's data is the need to understand the rationale for their structuring and organisation of that data. However, documentation of file structures is not enough. As has been pointed out many times (e.g. Reilly 1985), the records and observations that we as archaeologists make, our 'data', are theory-laden - the way we approach sites and deal with them is in a large part determined by the current state of archaeological inquiry. Anyone who has attempted to use old excavation reports will be well aware of what this means. Archaeological data are not the immutable truth that they are sometimes viewed as being. Data are recorded according to the theories and methodologies of the day, and data recorded for one purpose, or under one particular regime, often cannot easily, if at all, be used for a different purpose.
Data are not objective at all - observations are coloured by environment and perceptions, together with levels of understanding and experience. In any database, different types of data will be missing, or recorded in different ways, all for quite valid reasons in most cases, but all of which limits the uses to which they can be put. There is still what seems to be a touching belief in the reusability of databases, but this is clearly problematic. Most databases are created for particular purposes; redefining that purpose after the event may require more than just a simple rearrangement of fields or restricted massaging of the content. Realistically, both data and accompanying documentation (where present) will be of variable quality. It will be difficult to compare datasets recorded by different people separated in time and space. All information will be subject to what are generally a whole series of unstated assumptions about how we collect data and what type of data we choose to collect. On the other hand, the alternative would be no historical archive at all. However, we have to be very careful when approaching this type of resource, and be fully aware of the context of the information and the problems that arise if data are removed from their original context.
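The kind of file-level documentation discussed above - contents, fields, relationships, and the caveats of the recording regime - can itself be made machine-readable, so that reuse of a dataset can be checked against its description. The following is a minimal sketch in Python; the dataset name, fields and caveat text are entirely invented for illustration:

```python
# A minimal, hypothetical file-level description of an excavation dataset:
# each field is documented with its type and meaning, so that a later user
# can reconstruct how the data were structured and under what assumptions
# they were recorded.
dataset_doc = {
    "dataset": "site_XYZ_contexts",          # hypothetical name
    "recorded": "1993",
    "recording_system": "single context recording",
    "fields": {
        "context_id": {"type": "int",   "meaning": "unique context number"},
        "type":       {"type": "str",   "meaning": "cut / fill / layer / structure"},
        "above":      {"type": "list",  "meaning": "context_ids stratigraphically above"},
        "depth_m":    {"type": "float", "meaning": "depth below datum, metres"},
    },
    "caveats": "finds from topsoil contexts were not retained",
}

def check_record(record, doc):
    """Reject any record that uses a field the documentation does not
    describe - undocumented fields are exactly what makes a dataset
    unusable once it is separated from its creators."""
    undocumented = set(record) - set(doc["fields"])
    if undocumented:
        raise ValueError(f"undocumented fields: {sorted(undocumented)}")
    return True

print(check_record({"context_id": 101, "type": "fill", "depth_m": 1.2},
                   dataset_doc))
```

A record containing an ad-hoc, undescribed field would be rejected by `check_record` - a crude stand-in for the principle that data should travel nowhere without their documentation.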
4.3.2 Access to knowledge
Of course, such issues will be of only academic interest to those who do not have the means of access to this on-line information. After all, one of the key pre-requisites of democratising knowledge is to provide access to that information. There are two major means of disseminating information in a computer-mediated environment: on disk, or on-line. The disk-based solution is the computerised equivalent of microfiche, particularly given the increasing use of CD-ROM as a delivery medium. Its read-only nature offers a degree of security, but the continuing development of new standards makes it difficult to be sure that the disks can be read at all, unless they are mastered to the lowest common denominator. It is no coincidence that developments in on-line access have taken place entirely in an academic environment, and that academics are the prime proponents of the democratisation of knowledge. Academics are in a favoured position in that they have free access to the Internet (although ultimately funded by their institution), but others are not so fortunate. The costs are not inconsiderable, although the appearance of companies offering gateways to the Internet is reducing the cost of access. But how democratic is a facility that is not freely available to all on the same terms, that depends on your affiliation to a particular type of institution or your ability to pay? Perhaps the 'democracy' element of the democratising knowledge concept is being interpreted too literally here, but if the aim is to share and exchange information freely then the means to do this must be made available. There is the prospect of an unhappy distinction developing between the haves and the have-nots - an archaeological elite who have access to the international electronic highways, and those archaeologists who don't.
The fact that this divide will to a large extent mirror the existing divide (imaginary or otherwise) between archaeologists working in higher education and those working in field units makes this even more unfortunate. Although this appears to be an extreme image, in fact it already exists to some extent in areas like email access, and this clearly militates against true democratisation.
4.3.3 Ownership of knowledge
There is a more fundamental problem, which is connected to the ownership of data that are made available through published or on-line datasets. Who actually owns this knowledge? Archaeologists have long clung to the idea that the past belongs to everyone, that the data we derive are in the public domain once an (unspecified) time of study has elapsed to allow for analysis and publication. While some information may be withheld - to protect sites from looting, for instance - in general this knowledge is freely available. Yet paradoxically, access to that knowledge is often not without cost, as a result of the publishing mechanisms that are traditionally used for dissemination. The problem is that once that same information is available via computer media, it can be freely copied without anyone knowing, modified, and
changed to suit local purposes. In such an environment, the copyright and ownership of material becomes extremely problematic. As the Teaching and Learning Technology Programme (TLTP) in UK higher education is discovering, the issue of copyright when it comes to multimedia applications is extremely complex. How can you control access, restrict copying, limit manipulation, and so on, in order to keep your ownership and copyright intact within an environment that makes the breaching of that copyright so easy? Copyright statements may prohibit, but they cannot stop illegal copying. The chaotic, even anarchic, but jealously protected organisation of the Internet does little to help the situation. At the moment, when approaches to existing copyright holders are made for permission to use pictures and other information, their response is in terms of traditional publishing methods and copyright in books and manuscripts. They want to know how many copies will be made, and look askance when told that once the information is released, no one will have any control over the number of copies. Furthermore, archaeological knowledge is not achieved without cost. Archaeological data are often perceived to be freely available, paid for out of public monies, and yet rights to information are exerted by everyone from Government bodies and the Ordnance Survey down to individual field units. Archaeological information often has commercial value, and access to such data costs money. Increasingly there are situations where service to public interest is outweighed or overridden by operational necessities. If a competitor in a difficult market place wants some archaeological information that a field unit collected, why should that unit give it away if it will help their rival compete more effectively against them? Once information is published on disk or made available to the Internet community, control over its destination and use is lost.
This raises issues of quality control which do not arise with traditional means of dissemination. As information is moved, copied, altered and becomes increasingly separated from its author in time and space, how is a viewer to know whether or not they are looking at the original, or some bastardised version? Is the database that has been downloaded the same as that originally mounted, or has it been massaged by some unseen person for unknown ends? And how can anyone tell? There is no way of marking originals that cannot be undone, or that cannot be applied misleadingly to copies. The creation of centralised repositories such as that proposed by the Archaeological Data Archive Project would address the problem to some extent, but the very nature of the medium invites propagation, and the ephemeral nature of the digital form encourages a different attitude of mind to data.
4.4 Conclusions
The concept of democratising knowledge appears to be very exciting, and the processes involved certainly would represent a major cultural revolution in archaeology. The exchange of information, the availability of data, the ability to call up remote databases, download excavation archives, re-analyse and compare datasets, can only be good for the study of archaeology, but we have to get it right. This new world of free access to and use of information will not be without its problems, and some of those problems could be so limiting and restrictive that this brave new world may remain nothing more than a few experimental islands of information. There is always a danger in new departures that the concept becomes so hyped and overblown that it becomes discredited, and it would be unfortunate if this was to happen with what could be such a radical development in a whole variety of ways for archaeology. Archaeologists need to look closely at the issues raised by the concept of democratisation - they are issues that have to be addressed rather than swept under the table, otherwise we could find ourselves with a series of valuable but essentially useless resources on our hands, and also face severe problems over the ownership and use of that information. Some of these are problems that have been with us more or less since the beginning of time, in terms of archaeological computing - others are new issues that are raised by the potential of the technological advances that are becoming increasingly available to us. We need to make sure we capitalise on these possibilities, but in order to do so, we must have a clear idea of where we're going and the means of getting there before we start out on the journey.
5 The development and implementation of a computer-based learning package in archaeology
R. D. Martlew and P. N. Cheetham
Department of Adult Continuing Education, University of Leeds, Leeds, U.K.
5.1 Introduction
This paper describes Archaeology at Work, one of the computer-based learning packages developed under the auspices of the Teaching and Learning Technology Programme (TLTP). The aims and background of the TLTP have been discussed elsewhere (Martlew 1994); the project discussed here is a new version of a package originally produced for English Heritage and the National Council for Educational Technology (Martlew 1989). The package is written in Authorware Professional, for use with Windows on IBM-compatible computers with SVGA-standard graphics, and will be supplied through the TLTP as a runtime module.
5.2 Aims of the project
Archaeology at Work introduces students to an important aspect of the current work of professional archaeologists, and in so doing it allows students who may be new to the subject to explore it in an identifiable context. The package introduces the nature, scale and significance of archaeological evidence in the British countryside, and presents a wealth of factual knowledge in addition to developing basic concepts and skills. The simulation helps students to understand the tensions between demands for the destruction, preservation and investigation of archaeological sites arising from modern construction work. The aim is to convey a realistic impression of archaeological involvement in the planning process, while highlighting the main issues for students to consider. The package introduces fundamental concepts such as chronology and stratification, and basic techniques of locating, recovering and interpreting archaeological evidence from archival records.
The target audience is First or Introductory Level archaeology students, or those taking archaeology as a subsidiary course. The main reason for this, guided by TLTP policy, is that archaeology is not commonly taught as a school subject, and undergraduates cannot be expected to begin their studies with the same level of factual or conceptual knowledge which they will already have acquired in other disciplines. The preconceptions which students have about the subject can, for example, be heavily coloured by the way in which it appears in the media, as is suggested by the increase in recruitment to archaeology courses in the United States following the release of the Indiana Jones films.
It would be possible to counter this by constructing a relatively low-level computer program with checklists of what archaeologists do and what they do not do, and indeed this might be an interesting way of quantifying attitude changes during the initial year of study. However, such a low-level approach would do little to develop students' understanding of the role of archaeologists in any detail, or to develop their attitudes to the place of the 'archaeological heritage' in contemporary society. Similarly, it would provide few opportunities for the development of the intellectual skills appropriate to an undergraduate-level course. Archaeology at Work operates at a more advanced level by simulating real-life processes of problem-solving and decision-making. Realistic problems such as this can be tackled at many levels, and the more sophisticated the audience the greater will be the need for accuracy in the simulation. At the level of the target audience for Archaeology at Work, a greater degree of smoothing of the model is justifiable in order to achieve specific educational goals. Some of the concepts are necessarily simplistic given the nature and scope of the package, and will be qualified subsequently as students pursue more advanced courses. Archaeology at Work aims to provide a foundation on which these further studies can build.
The package is designed to support the learning process by guiding students and by extending the range and complexity of tasks which they are able to carry out. While there is structure and order in the way in which students work through the program, there is no control over the decisions which they are asked to take. In effect there are no right or wrong answers to the exercise, and students are assessed on their ability to locate, evaluate and assimilate data, and on the effectiveness of the arguments with which they justify their decisions.
5.3 Outline of the simulation exercise
Archaeology at Work simulates the role which archaeologists currently take in processing planning applications to Local Authorities. The introductory section takes students step-by-step through the package, showing the resources which are made available by the computer, the goals which are to be achieved, and the skills which will be acquired. The main exercise is in two parts, each of which leads to a specific piece of student work which can be graded. In the first stage, students produce an archaeological impact assessment. In the second stage, they have to interpret excavation archives to show how work in advance of development has contributed to the understanding of the archaeology of the region.
The exercise is based on a planning application to a fictional County Council in eastern Yorkshire for a gas pipeline from the North Sea gas fields to the National Grid. Three alternative routes are proposed, and the first stage of the exercise is for the students to assess the impact of each. The computer provides maps of the routes, and a database of known archaeological sites which simulates a County Sites and Monuments Record. A separate tutorial is included on the use of Ordnance Survey grid references, since this basic skill is essential for recovering information about the sites which are affected by the proposed routes.
It is assumed that students are coming to this package with little understanding of archaeology in general, and little knowledge of the archaeology of eastern Yorkshire in particular. Support is given throughout to enable them to take decisions which are taken in real life by qualified, professional archaeologists. This includes access to an 'on-line textbook' entitled Evidence from the Past, which contains illustrations of all the artefacts which students encounter during the exercise, and which will enable them to place their excavated evidence into an appropriate cultural and chronological context. Access to the information in Evidence from the Past is made as flexible as possible. It is essentially a straight piece of narrative text, giving the current cultural and environmental interpretation of the archaeology of the region. Drop-down menus supplement a detailed index to allow access by period, theme and subject, and words or phrases are highlighted as 'hot words', which allow glossary definitions of specific terms and concepts to be called up with a mouse click. One of the limitations of the authoring software used for the project is that these apparent links cannot be developed as true hypertext: once a 'hot word' has been defined it becomes active for every occurrence of that word throughout the text. An alternative approach is to define particular occurrences of the word by screen co-ordinates, but this inevitably makes editing and updating difficult. The implementation of true hypertext features should be available in the next major upgrade of the software.
The computer automatically assists each student to compile an individual dossier, under their own user name. Information about threatened sites is passed to this dossier as students work through the proposed pipeline routes, and it forms the basis of the report which they produce at the conclusion of the first stage of the exercise. In the report, students identify threatened sites and specify the extent of archaeological investigation which they think should be carried out. Costings for three levels of work are provided: total excavation, partial excavation and watching brief, each generating different amounts of information for the different levels of expenditure. Students must also balance the overall cost of the work they propose against a target figure in order to submit a tender to carry out the excavations. If they specify the total excavation of every threatened site, for example, their tender will be unrealistically high and will be rejected; the program will not let them continue until they have submitted a realistic tender. Setting the level of acceptability for tenders is one point of artificiality in the simulation, and has more to do with influencing the outcome of the exercise than with the genuine range of options which may be available to a County Archaeologist. The specific principle, however, is wholly realistic: there is only a limited amount of money available, and this is insufficient to carry out extensive work at every threatened site. The aim of the exercise is, after all, not to train County Archaeologists or archaeological consultants, but to encourage students to consider the issues arising from the ways in which archaeologists currently work. In the second stage of the exercise, students are able to examine excavation records for sites threatened by the pipeline, at the level of detail which they proposed in their first stage impact assessment.
The computer program allows them to call up only the level of information for which they have received funding. A text file is included in the package which contains the basic skeleton of the final report, including section headings and brief guidelines as to how each section should be tackled. This provides a clear framework for the students' work, and leaves them free to concentrate on the archaeological issues. Illustrations of artefacts can be printed out or pasted into the file containing the final report, and excavation plans and sections are also provided as bitmaps.
By simulating archaeological involvement in the planning process, the computer package is able to introduce students at first-hand to one of the roles of professional archaeologists in contemporary society. Not only do they see the subject in an identifiable context, they are at the same time introduced to the nature of archaeological evidence and how it can be used. The computer provides access to resources, guidance on progress and tools to help produce work to professional standards of presentation. In order to make efficient and effective use of these facilities, the student must of course be familiar with the medium of delivery.
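The grid-reference skill taught by the package's separate tutorial is mechanical enough to illustrate directly. As a rough sketch (not part of Archaeology at Work itself), a British National Grid reference such as 'SE 123 456' can be converted to full metre coordinates: the two prefix letters select 500 km and 100 km squares on a 5 × 5 lettered grid that omits 'I', and the digits locate the point within the 100 km square.

```python
LETTERS = "ABCDEFGHJKLMNOPQRSTUVWXYZ"  # the Ordnance Survey grid omits 'I'

def os_grid_to_en(ref):
    """Convert an OS grid reference ('SE 123 456') to (easting, northing)
    in metres from the false origin of the National Grid."""
    ref = ref.replace(" ", "").upper()
    i1, i2 = LETTERS.index(ref[0]), LETTERS.index(ref[1])
    # First letter locates a 500 km square, second a 100 km square within it.
    easting = (i1 % 5 - 2) * 500_000 + (i2 % 5) * 100_000
    northing = (3 - i1 // 5) * 500_000 + (4 - i2 // 5) * 100_000
    digits = ref[2:]
    half = len(digits) // 2
    # A 6-figure reference locates a 100 m square: pad to metre precision.
    easting += int(digits[:half].ljust(5, "0"))
    northing += int(digits[half:].ljust(5, "0"))
    return easting, northing

# The 'SE' square covers much of eastern Yorkshire, the region simulated
# in the package.
print(os_grid_to_en("SE 123 456"))  # → (412300, 445600)
```

The function name and the choice of metre precision are illustrative assumptions; the package's tutorial teaches the same letter-and-digits scheme interactively rather than programmatically.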
5.4 Computer-based learning and IT skills
Familiarity with the system on which the package is running is an obvious and essential prerequisite for computer-based learning, and Archaeology at Work requires a minimum level of competence in order to be used effectively. Students must know how to log on or start up the software, how to format and use diskettes and how to print documents, particularly if they are working on a local network. Familiarity with the Windows-style environment and competence in using a mouse are essential, and at least basic familiarity with a word-processing package is advantageous.
Authorware provides many useful routines for monitoring student progress and performance at a basic level, and some of these have been employed in the grid reference tutorial which is supplied with Archaeology at Work. It is clear from preliminary trials that the students who take longest over this exercise also obtain the lowest scores. This appears to be due not to the difficulty of the content, but in part to a lack of facility with mouse operations and slowness in responding to screen prompts. Those students who are already familiar with Windows perform best, and in the small sample so far available this appears to be a more important criterion than the students' existing level of skill in using grid references.
The package has been designed so that most of the operations necessary to produce the required written work can take place on the computer. Text files are provided which contain outlines of the reports which students complete at the end of each stage of the main exercise. In order to take advantage of these files, students must be able to copy them onto their own disks or user space so that they can be edited using word-processing software. The package automatically compiles a gazetteer of the threatened sites which students have identified, and this information can be pasted into their impact assessment report at the end of stage one as an appendix. Maps and artefact illustrations can similarly be pasted into the final report. The end result is particularly gratifying to students: at an early stage in their course, they are able to produce two pieces of work to high standards of presentation, and despite the open-ended nature of the simulation exercise they have a clear idea of the areas which they are expected to cover. The extent to which students currently possess the level of computing skill to achieve this at the start of their undergraduate careers is questionable.
When a university department of computing studies has to allow for a significant number of students arriving without relevant computing experience, an arts or humanities department must expect a much greater need for basic skills to be taught. This situation should improve with time, and TLTP projects outside the Archaeology Consortium are addressing the requirement for basic computer literacy. Setting such remedial work aside, running through the computer-based material alone in Archaeology at Work will take in the region of six to ten hours for a reasonably competent student. The amount of thinking and preparation time that students spend outside the package itself is at least equal to this, and can be expanded considerably according to the level of detailed background reading and research specified by the tutor. The package could form a stand-alone exercise which complements other courses, or it could form the basis of project work running over a full term.
5.5 Assessment
The flexibility of the simulation exercise allows Archaeology at Work to be used by individuals, or by small groups of students working together. The strategy for assessing student progress combines computer-based and tutor assessment, as appropriate. The basic skills involved in using Ordnance Survey grid references, for example, are assessed using a computer-based test at the end of the tutorial. Facilities provided by Authorware allow student interaction to be monitored, including the overall time spent using the package and a log of the program modules which have been accessed. In this instance such operational information does not produce pedagogically meaningful data, but it may be of use if students experience difficulty in navigating through the exercise as a whole: preliminary trials suggest that this is unlikely to be a problem.
5.6 Discussion
Archaeology at Work is specifically tied to the archaeology of East Yorkshire, and has not been designed to allow the substitution of data from other areas. The ease with which the package may or may not be edited is also relevant in the context of keeping the material up to date. This applies to the realism of costings given for excavation work, as well as to any changes in the interpretations and approaches to understanding the archaeology of the region. These issues reflect more general concerns regarding long-term support for non-commercial CBL packages. If materials are produced by one-off, 'pump-priming' initiatives such as TLTP, it is difficult to see how they can be maintained and updated over an extended period of use. Computer-based material can in theory be more easily updated than textbooks, but the financial implications of this are, in the education sector, likely to give software the built-in obsolescence of print-based material.
The design of Archaeology at Work is closely related to the pedagogical approach which it embodies. Relevance in simulation exercises such as this is acquired through the use of realistic detail. There is an overall continuous scale from totally fictitious, abstract simulations to those which are embedded in one specific, detailed environment. The sites and the additional SMR data in Archaeology at Work are all real, with adjustments to adapt them to the purposes of the exercise. The facility to replace existing content with data from different regions is, in this case, less important than the credibility which is derived from the amount of realistic detail which the package presents. The idea behind Archaeology at Work, however, is totally transportable and even transmutable: the pipeline could equally well be a road, an irrigation canal or part of a new cable network for an information super-highway. At this level, the approach can be implemented anywhere and in many different forms. All that is required is the time and expertise to assemble and package the data, but this would of course amount to a complete re-write of the program. Authoring software in general is not yet sufficiently flexible to make the substitution of content an easy task, without imposing considerable limitations on the original design. If the purpose of authoring software is, as is often claimed, to allow lecturers with little knowledge of programming to compile computer-based
tutorials, the current generation of software tools does little to support anything more than a relatively low level of learning activity. The complexity of user interaction in a package such as Archaeology at Work is such that current software tools can do little to elevate its implementation from 'programming' to 'authoring'. Unless future generations of authoring software can rise above what is essentially a screen-paging approach, the production of CBL materials which are pedagogically challenging will remain in the hands of specialist programmers.
At an introductory level, Archaeology at Work will give students an understanding of one of the current roles of professional archaeologists in this country. It raises important issues about the nature and use of archaeological evidence, and provides a foundation of knowledge and concepts which can support further studies. Preliminary trials have shown that the package is successful in its aims, and that, given adequate competence in using a computer, it encourages students to think, talk and argue about archaeology.

References
MARTLEW, R. 1989. Digging Deeper into History, English Heritage/NCET, Coventry.
MARTLEW, R. 1994. 'Deus ex machina: studying archaeology by computer', in J. Wilcock & K. Lockyear (eds.) Computer Applications and Quantitative Methods in Archaeology 1993, International Series, British Archaeological Reports, Oxford.
6 Characterizing novice and expert knowledge: towards an intelligent tutoring system for archaeological science
Graham Tilbury¹,², Ian Bailiff¹ and Rosemary Stevenson²
¹Department of Archaeology, ²Department of Psychology, Durham University, University Science Site, South Road, Durham, DH1 3LE, U.K.
6.1 Background
Teaching archaeology is becoming increasingly difficult due to the wide range of educational backgrounds of the students and to their increasing numbers. Whilst Computer Aided Learning has been and will continue to be useful in easing this burden upon teaching resources, it is limited in its uses by the ways in which its operation differs from that of a real human tutor. If, however, we can understand how students in archaeology learn and how their knowledge changes as they increase in expertise, then we may attempt to embody this knowledge in a Computer Aided Learning system that could respond intelligently. Such an Intelligent Computer Aided Learning system, or Intelligent Tutoring System as it is more often called, is the long-term goal of this research. The aim of the present study was to take a first step towards this goal by identifying the knowledge that experts and novices have of scientific dating techniques and determining the differences between them.
6.2 Introduction
To elicit and identify the knowledge of the subjects (archaeologists) we used a technique for mapping out how a person views the relationships between pairs of concepts. The technique makes use of statistical scaling procedures to infer the cognitive organisation of a set of concepts. For example, Stevenson, Manktelow and Howard (1988) used a paired comparison task to investigate the cognitive organisation of computing concepts in expert and novice computer scientists. They took 10 concepts important to programming in PASCAL and presented these 10 concepts to the subjects in all possible pairwise combinations. The subjects were required to rate the concepts according to their similarity by assigning a number to each pair ranging from 1 (very similar) to 7 (dissimilar). These ratings gave a measure of the psychological distance between the concepts and multidimensional scaling techniques were then used on the resulting distance matrices to uncover latent structure in the data. The structure can be represented in multidimensional space that can then be used to infer the cognitive structures of the subjects. Scaling techniques, such as the one just described, are widely used to discover the knowledge of experts. They have also been used successfully to identify some of the differences between the knowledge of experts and novices in a particular topic (Stevenson et al. 1988). We therefore used this technique here to identify differences in the
content of experts' and novices' knowledge of scientific dating techniques. However, there are some features of a person's knowledge organisation that are not captured by these techniques. In particular, it is not possible using these methods to assess the coherence of a person's beliefs. Beliefs are coherent if related beliefs are consistent. For example, suppose that a person believes that the concepts Cat and Dog are highly related and also that Cat and Pig are highly related. Then, in a coherent belief system, the concepts Dog and Pig will also be highly related. If they are not, we would say that this part of the person's knowledge structure was incoherent (Smolensky 1986). The above example is a simple one and most people's belief systems are very complex. Someone's knowledge of scientific dating, for example, would involve a set of interrelationships between the concepts underlying a particular technique (such as mitochondrial DNA dating) and a set of higher order relationships between each of the different techniques. Given the complexity of such knowledge, it is not surprising that inconsistent beliefs are common. However, we would expect a difference between novices and experts in this respect. An expert's knowledge of scientific dating should be more coherent than a novice's. We therefore investigated this issue by supplementing the statistical scaling techniques with an alternative analysis that allowed us to estimate the coherence of each person's knowledge. We did this by constructing a network for each subject where the nodes in the network were the individual concepts associated with mitochondrial DNA dating, and the strengths of the links between the concepts were specified by the subject's rating of similarity between each pair. Having constructed such a network for each subject, we were then able to derive a measure of coherence of the network by using a modification of the formula developed by Smolensky (1986).
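The two analyses described above — multidimensional scaling of the pairwise similarity ratings, and a coherence score derived from activity spreading through a network — can be sketched in outline. The paper does not publish NetG's exact update rule or the modified Smolensky formula, so the classical (Torgerson) MDS variant, the decay constant, the normalisation step and the harmony-style coherence score below are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def classical_mds(dissim, dims=2):
    """Classical (Torgerson) MDS: embed items so that pairwise distances
    in `dims`-dimensional space approximate the dissimilarity ratings."""
    n = dissim.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (dissim ** 2) @ J           # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dims]      # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

def settle(weights, decay=0.1, tol=1e-6, max_cycles=200):
    """Spread an arbitrary initial activity through the network until it
    reaches a stable state; returns the activities and the cycle count."""
    act = np.full(weights.shape[0], 0.5)
    for cycle in range(1, max_cycles + 1):
        new = (1 - decay) * act + weights @ act   # decay, then reinforcement
        new /= new.max()                          # keep activities bounded
        if np.abs(new - act).max() < tol:
            return new, cycle
        act = new
    return act, max_cycles

def coherence(weights, act):
    """Harmony-style score in [0, 1]: strongly linked pairs of equally
    active nodes raise it; weak links between active nodes lower it."""
    return (weights * np.outer(act, act)).sum() / weights.sum()

# Cat/Dog/Pig example from the text: the coherent belief system (all three
# pairs strongly related) scores higher than the incoherent one, where Dog
# and Pig are unrelated despite both being strongly related to Cat.
coherent = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
incoherent = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
c1 = coherence(coherent, settle(coherent)[0])
c2 = coherence(incoherent, settle(incoherent)[0])

# MDS on ratings for four illustrative concepts forming two similar pairs
# (1 = very similar, 7 = dissimilar, as in Stevenson et al. 1988):
ratings = np.array([[0, 1, 7, 7], [1, 0, 7, 7],
                    [7, 7, 0, 1], [7, 7, 1, 0]], float)
pts = classical_mds(ratings)
```

On this toy data the coherent triangle settles to uniform activity and scores 1.0, the incoherent one scores lower, and the MDS embedding places the two similar pairs close together — the qualitative behaviour the study relies on, under the stated assumptions.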
6.3 Method

6.3.1 Subjects
Sixteen subjects were used. One was an expert (a lecturer in scientific archaeology) and 15 were novices (archaeology students at either the end of their first or the beginning of their second year of undergraduate study).

6.3.2 Materials
We selected 11 words that described individual concepts contributing to the overall idea of mitochondrial DNA (mtDNA) dating, e.g. ancestry, temporal mutation.
[Two panels, labelled NOVICES and EXPERT, plot the concepts Sexual Reproduction, Rate of Mutation, Time Scale, Cell Cytoplasm, Ancestry, 140-280k b.p., Temporal Mutation, Eve, mtDNA, Female Heredity and Africa in two dimensional space.]
Figure 6.1: The location of the 11 concepts used in the paired comparison task in two dimensional space.
These words were taken from lectures on mtDNA dating previously given to the students. Each word was paired with every other word, making a total of 55 pairs. The pairs were presented in a single booklet and were in a different random order for each subject.
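Generating the stimulus set just described is a small combinatorial exercise. As a sketch (the concept names are taken from Figure 6.1; the function name and per-subject seeding are assumptions, since the original booklets were presumably prepared by hand):

```python
import random
from itertools import combinations

concepts = ["Ancestry", "Temporal Mutation", "Rate of Mutation", "Eve",
            "mtDNA", "Female Heredity", "Africa", "Time Scale",
            "Cell Cytoplasm", "Sexual Reproduction", "140-280k b.p."]

# Every unordered pair of the 11 concepts: C(11, 2) = 55 pairs.
pairs = list(combinations(concepts, 2))

def booklet_for(subject_seed):
    """Each subject sees the same 55 pairs in a different random order."""
    order = pairs.copy()
    random.Random(subject_seed).shuffle(order)
    return order

print(len(pairs))  # → 55
```

Seeding the shuffle per subject makes each booklet reproducible while keeping the pair orders independent across subjects.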
6.3.3 Design and Procedure

Subjects were presented with all 55 word pairs and asked to rate the similarity of each pair on a scale from 0 to 10, where 0 indicated dissimilar and 10 indicated very similar. A 'don't know' category was also provided for cases where subjects were unfamiliar with the concepts in a pair. The resulting similarity matrix for each subject was analysed using multidimensional scaling. The data were also used to construct a network of concepts and links using a program called NetG which was specially written for this purpose. The program also calculated the coherence of the networks. Each link between pairs of nodes was assigned a strength that reflected the subject's rating of that pair of concepts. Each node was then assigned an arbitrary initial 'activity' which was allowed to spread through the network according to a formula that was modified and developed from Holyoak and Thagard (1989). This firstly diminished the activity of a node by a certain amount, and then increased it in proportion to the activity of the other nodes to which it is connected. Performing this calculation once for each node of the network takes a period of time called a cycle. After several cycles the network reaches a stable state, where further cycles will not change the activity of any node. Coherence is measured at this point.

6.4 Results

First, we determined whether the subjects used the full range of numbers when rating each pair of words. We found that the novices did use the full range, but the expert was much more likely to use the numbers 0 and 10.

6.4.1 Multi-Dimensional Scaling

A two dimensional scaling solution was produced for the 11 concept names in the comparison task. This yielded similarity matrices for these concepts in two dimensional space, one for the expert and one for the novices (see Figure 6.1). Observation of the novices' matrix in Figure 6.1 suggests that the novices distinguish between biological concepts (on the right hand side of the space) and what might best be called historical concepts (on the left hand side). Observation of the expert's matrix in Figure 6.1 presents a more complex picture. The concepts that the novices grouped together do not form discrete categories in the expert's matrix. We will have more to say about the expert's groupings in the Discussion (below). For the moment we will simply note that the organisation of the same concepts is qualitatively different in novices and experts.

6.4.2 Network Data

The calculation of coherence yields a value between 0 and 1, where 0 is incoherent and 1 is completely coherent. The average value of this figure for the novices was .63, while the expert's value was .89. Thus, as we predicted, the expert's knowledge structure was more coherent than the novices'. We also measured the number of cycles needed for the networks of the novices and of the expert to reach stable states. The expert's network settled after 39 cycles, while the number of cycles needed for the novices ranged from 37 to 46, with a mean of 40.

6.5 Discussion

Both of our measures revealed differences between the expert and the novices. The two dimensional scaling revealed that the novices and the expert had qualitatively different cognitive organisations. The novices appeared to group the concepts around a single dimension concerned with whether the concepts were biological or chronological. This dimension, therefore, is based on similarities between the concepts that presumably existed prior to any training in scientific dating. That is, it seems to be based on previously learned biological ideas and ideas about the origins of humans. The expert, on the other hand, revealed a much more complex organisation of concepts, one based on principles of scientific dating and not discernible to a non-expert. The expert seemed to be using three main groupings. One, at the bottom centre, consists of archaeological concepts (140-280k b.p., Time Scale, and Africa); the second, at the left hand side of the space, consists of a cluster of concepts all associated with mtDNA inheritance and its restriction to the female line; and the third, at the right hand side, consists of the two more technical mutation concepts. Sexual reproduction is isolated in the space, perhaps reflecting the fact that inheritance from both parents is not relevant to mtDNA dating. Overlaid on this basic pattern is the location of four concepts close to the centre of the space (Ancestry, Eve, mtDNA and 140-280k b.p.). These are the four key concepts needed to understand the evidence claiming that all modern humans descend from Eve, and they appear to be crucial in linking the purely archaeological concepts with those concerning mtDNA inheritance. Overall, therefore, the richly organised categorisation of the expert stands in stark contrast to the simple pretheoretical organisation of the novices.

These results confirm those observed by Chi, Feltovich and Glaser (1981) on expert and novice physicists. They asked experts and novices to categorise physics problems according to their similarity, and found that the groupings of novices were based on surface similarities between the problems (e.g. problems involving inclined planes were grouped together) while experts grouped according to deep theoretical principles (such as Newton's third law). What is more, as was the case with our data, the principles underlying the groupings of the experts were only discernible to other experts.

We also examined the coherence of the networks and found that coherence was considerably higher for the expert than for the novices. Incoherent knowledge systems are likely to be a persistent feature of learners as they move from a pre-existing organisation based on previously learned knowledge (such as biological and historical principles) to a new organisation based on the subject matter being learned (such as principles of mtDNA inheritance and scientific dating). Our data suggest, therefore, that the novices were showing the beginnings of a shift in understanding by moving from an organisation based on reproduction to one based on principles of scientific dating.

Some caution must be exercised in drawing firm conclusions from these results because only one expert has been tested. Tests on additional experts are therefore needed to consolidate our findings. However, previous research has found that experts are usually in close agreement with each other on rating tasks like the one used here (Stevenson et al. 1988). We have shown, therefore, two ways in which the cognitive organisation of relevant concepts differs between an expert and novices. These differences have important implications for teaching and learning. For example, they highlight the importance of taking pre-existing knowledge into account when assessing students, and of developing teaching techniques that point out how a new subject (like Scientific Archaeology) organises things differently from previously learned subjects outside the primary area, such as Biology and History, as well as how they are similar. In this way, new learning can build on earlier learning rather than being in conflict with it. Furthermore, if we are to exploit the full potential of computer aided learning, then the construction of Intelligent Tutoring Systems will need to incorporate these techniques. They will also need to infer a model of the learner's knowledge if the techniques are to be used successfully. Our research shows how these learner models can be inferred.

Acknowledgements: We thank Peter Rowley-Conwy for identifying the expert's organisation of concepts and Simon Lawrence for writing the basic NetG program.

Bibliography
CHI, M. T. H., FELTOVICH, P. and GLASER, R. 1981. 'Categorisation and representation of physics problems by experts and novices', Cognitive Science, 5, 121-152.
HOLYOAK, K. and THAGARD, P. 1989. 'Analogical mapping by constraint satisfaction', Cognitive Science, 13, 295-355.
LAWRENCE, S. H. 1994. Developing Tools for an Intelligent Tutoring System, Unpublished B.Sc. Dissertation, Department of Computer Science, Durham University.
SMOLENSKY, P. 1986. 'Harmony Theory', in J. L. McClelland, D. E. Rumelhart and The PDP Research Group (eds.) Parallel Distributed Processing, MIT Press, chapter 4.
STEVENSON, R. J., HOWARD, M. J. and MANKTELOW, K. I. 1988. 'Knowledge Elicitation: Dissociating Conscious Reflections from Automatic Processes', in D. M. Jones and R. Winder (eds.) People and Computers IV, Cambridge University Press, 565-580.
7 The ENVARCH project
Anja-Christina Wolle and Clive Gamble
Department of Archaeology, University of Southampton, Highfield, Southampton SO17 1BJ, U.K.
7.1 Introduction
Teaching the basic principles of archaeology to first year undergraduates takes up a substantial proportion of many university lecturers' time. Such teaching is essentially repetitive, providing the same basic lectures each year to the new intake of students. While these lectures are updated in response to technical developments and new archaeological case studies, we suspect they are probably among the slowest changing parts of the undergraduate course. With increasing student numbers we have, at Southampton, come to question the appropriateness of traditional lecture-based teaching of these essential elements. Our aim is to maintain existing teaching quality and extend it by providing students with alternative learning contexts. These have to be properly resourced, and our strategy is to use computer based systems to free staff time for more productive teaching of those same students, as well as to provide access to primary sources that would not traditionally be available from a library.

In the first instance we have concentrated on environmental archaeology. We chose this area for several reasons: first, it currently lacks a recent textbook; second, it is often presented as a set of unrelated techniques; and third, many of the principles and vocabulary are unfamiliar to first year students. All three elements suggested that a multimedia, computer based approach would be appropriate. The integration of environmental and archaeological work could be stressed, students could self-pace their learning, and the problem of a suitable textbook would be overcome. Teachers could then spend more time discussing the application of environmental techniques to archaeological problems. To emphasise the integration of environmental techniques and archaeological questions we selected the sites within the area of the Neolithic World Heritage monument at Avebury (Wiltshire) as the main case study.
In particular, two interpretations by the late Bob Smith (1984) and Alasdair Whittle (1993) form the basis for investigating this remarkable area. The ENVARCH package is a group of computer programs compiled using Authorware Professional 2.0 which allows students to work through an introduction to environmental archaeology. The project was funded by the Archaeology Consortium of the Teaching and Learning Technology Programme (TLTP). A prototype had already been established in 1992 as part of an M.Sc. dissertation (Leggatt 1992).
7.2 Aims and Objectives
The aim of this program is to introduce the first year student to some of the most important techniques of environmental archaeology using the computer as a learning base. The possibilities this raises for integrating text, graphics, video and sound are very exciting, although some technical difficulties remain. They improve on the traditional medium of the book without trying to replace it: students are encouraged to refer to the key publications mentioned in the program for reference. Through the use of multimedia, the importance and value of integration between those specialist studies of the past which deal either with the physical environment or with the world of artefact production and use is shown. It needs to be stressed that we are providing a learning system, in contrast to a teaching system.

The environmental section emphasises the importance of integration and interdisciplinary approaches by concentrating on the analysis of Neolithic Avebury. This particular region has been chosen because Avebury continues to receive a good deal of research into settlement history and landscape development, so the opportunity exists to see the results of interdisciplinary research into monuments and landscapes.

The goals of the course are to provide a summary of environmental techniques within an accessible archaeological framework. The student will learn that environmental archaeology is concerned with archaeological questions and problems. The interdisciplinary nature of archaeology, in particular the links between the results of the different environmental techniques in the context of an archaeological case study, will be reinforced. By using two summarising articles as the main anchors for the framework of headings, the individual techniques of environmental archaeology can be investigated with direct reference to the impact of their results in the context of a specific study.
This contrasts favourably with the traditional approach of presenting each technique to the students in turn, each with a separate case study. Multimedia is used to extend the range of sources available to first year students coming to terms with a new subject discipline. The student is introduced to a computer based learning environment, and the non-specific computer skills that the student acquires can be applied to other Microsoft Windows-based programs and, more specifically, to other TLTP tutorials once all interfaces have been integrated (see below). By navigating through the material at their own pace, students access the information that is relevant to them,
at the point where they will be most likely to remember it. The educational theory is that students must make navigation decisions, so they become more interactively involved with the learning material than passive readers of linear text or listeners to a lecture. Learner directed exploration of the knowledge base has a motivational impact (Duffy & Knuth 1990, 203). If there is something on which they would like more detail, or they would like to repeat what they have just completed, they are free to do so, as many times as necessary.

No interactive assessment is currently included in the program. All assessments so far were provided as pen and paper exercises, which also included the handing in of a computer printout compiled by the program at the end of each session. The assessments took the form of a series of multiple choice and short answer questions that required the students to examine specific parts of the database carefully, followed by one longer essay question which required further reading beyond that provided by the program. Interactive assessment could be included in the program but was not developed due to time constraints. It is also possible that if students know their interaction with the computer is being assessed, they will reduce their exploration of the program, as they might think they will get negative results by straying in the wrong direction.
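The end-of-session printout mentioned above can be thought of as a simple accumulator: references and notes saved during a session are collected into one file for printing or handing in. The following Python sketch is purely illustrative (the class and method names are invented; the original was built in Authorware), showing one plausible shape for such a mechanism:

```python
# Hypothetical sketch of an ENVARCH-style session printout (names invented,
# not the Authorware implementation). References saved during a session are
# collected, together with any Notepad annotations, into one text block that
# can be printed or written to floppy disk at the end of the session.

class Session:
    def __init__(self, user):
        self.user = user
        self.references = []

    def save_reference(self, ref):
        # Triggered when the student clicks an expanded in-text reference.
        if ref not in self.references:
            self.references.append(ref)

    def compile_printout(self, notes=""):
        """Assemble the end-of-session printout as one string."""
        lines = [f"ENVARCH session printout for {self.user}", ""]
        lines += ["References saved:"] + [f"  {r}" for r in self.references]
        if notes:
            lines += ["", "Notes:", notes]
        return "\n".join(lines)

# Example session:
s = Session("student01")
s.save_reference("Smith 1984, Proceedings of the Prehistoric Society 50, 99-120")
s.save_reference("Whittle 1993, Oxford Journal of Archaeology 12(1), 129-53")
print(s.compile_printout(notes="Follow up the molluscan evidence."))
```

The design point is simply that the student leaves the machine with a tangible artefact of the session, which can then feed a pen and paper assessment.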
7.3 The ENVARCH Program
7.3.1 Completion of the project
Subject specialists were asked to write introductions to the subject, or to edit existing text. As a result, illustrations were suggested and more were collected as appropriate. Specific slides and photos were also kindly supplied by a number of people. The provision of illustrations proved quite difficult, as the inclusion of full screen, full colour images drastically increases the overall size of the program file. Images were obtained by scanning slides and negatives straight into the computer and by scanning paper-based illustrations. A project like this initially requires increased input from lecturers, who will only reap the benefits at a later stage; there is no 'quick fix'. Furthermore, work is also needed to keep the contents of such a program up to date.
7.3.2 Structuring an extensive information base
The existing prototype (Leggatt 1992) was heavily text based, contained virtually no illustrations, and was linked in a fairly linear framework. We decided to re-use the existing text by reorganising it into smaller portions arranged in a fairly rigid hierarchical structure. The reasons for this can be found in any textbook on hypermedia: it takes about 30% longer to read text from a computer screen (Nielsen 1990, 191), so the text content has to be reduced in comparison with a paper based document.
It is far more difficult to assimilate text from a computer screen than from paper, so presenting users with large amounts of (possibly scrolling) text will not have a beneficial effect. New text writing skills are necessary which are very different from those for paper based documents: it is necessary to keep the text brief and concise, without literary flourishes. The structure itself will also differ from that of a paper-based article or book: making the text available on the computer does not just involve the installation of text files containing the contents of the chapters of a book.

It proved quite a challenge to adapt the existing linear text into a usable hierarchical structure. This was particularly so for the two articles mentioned above, as they were not supposed to be altered. They were split into sections according to their original format, and those in turn were divided into sets of pages that were then accessible in a linear way. There were also similar problems with the text explicitly written for the package. Its internal structure was not always obvious, and it had to be rewritten in parts to include extra references to other sections that would allow for contextual links.

New problems appear with the new medium of non-linear text: authors and readers get lost in it. Users are not yet accustomed to the extra effort needed to keep track of all paths and open documents. The traditional progression of 'Introduction' ⇒ 'Main Argument' ⇒ 'Conclusion' no longer applies, since it is possible for the reader to access these in a different order. Unrestricted linkages would result in a spaghetti-like structure. Such a web might be useful for an experienced user, introducing new ideas and associations; however, in this case, faced with first-time users, a fairly rigid structure is needed to guide the student through the information, with freedom to explore but with less chance of becoming lost (Hutchings et al. 1993, 494).
The use of hierarchies rather than webs for the information base was therefore followed here. The text was arranged in hierarchies, with increasing detail at each step down the hierarchy. This structure was chosen because it is well suited to the task at hand, providing the user with increasing detail as they progress down a line of enquiry. Students as learners need to be guided around the material while still feeling that they are in control, so we provided them with a number of tools that would allow them to follow their own paths and find out where they were and where they had been so far (see below).

Inside each major section, the information is organised in topics with a number of sub-topics. Individual topics can be split across several pages, and the user is free to move backwards and forwards within them. Blue underlined text will take the user down one level in the hierarchy, and the Previous Topic button back up, whereas the arrow buttons move the reader sideways through a sequence of pages. Together, they provide the controls to move through the kind of hierarchy that is illustrated in Figure 7.1. In the Map menu, the structure is represented from left to right due to space constraints. Since every sequence of pages belongs to one topic, the structure shown in Figure 7.1 would appear in summarised form as Figure 7.2.

Figure 7.1: A hierarchical topic structure as implemented in ENVARCH.
Figure 7.2: Simplified representation of the same hierarchy as in Figure 7.1.
Figure 7.3: A typical ENVARCH screen, demonstrating the layout.
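The three navigation moves described above (down a level via a text link, up via the Previous Topic button, sideways via the arrow buttons and tabs) can be sketched as operations over a small tree of topics. The Python below is a hedged illustration only: the class names and the example topics (mirroring Figure 7.1) are invented for this sketch, not taken from the Authorware implementation.

```python
# Illustrative sketch (invented names) of the ENVARCH-style topic hierarchy.
# Each topic holds a number of pages and an ordered list of sub-topics;
# hyperlinks move DOWN one level, "Previous Topic" moves UP, and the arrow
# buttons/tabs move SIDEWAYS through the current topic's pages.

class Topic:
    def __init__(self, name, pages=1):
        self.name = name
        self.pages = pages          # number of pages in this topic
        self.parent = None
        self.subtopics = []

    def add(self, child):
        child.parent = self
        self.subtopics.append(child)
        return child

class Navigator:
    def __init__(self, root):
        self.topic = root
        self.page = 1

    def down(self, name):
        """Follow a highlighted text link to a named sub-topic."""
        for t in self.topic.subtopics:
            if t.name == name:
                self.topic, self.page = t, 1
                return t
        raise KeyError(name)

    def up(self):
        """'Previous Topic' button: one step back up the hierarchy."""
        if self.topic.parent is not None:
            self.topic, self.page = self.topic.parent, 1
        return self.topic

    def sideways(self, step):
        """Arrow buttons/tabs: move within the current topic's pages."""
        self.page = max(1, min(self.topic.pages, self.page + step))
        return self.page

# Example mirroring Figure 7.1: an introduction, two topics, one sub-topic.
intro = Topic("Introduction", pages=3)
a = intro.add(Topic("Topic A", pages=2))
b = intro.add(Topic("Topic B", pages=1))
a.add(Topic("Sub-Topic C", pages=1))

nav = Navigator(intro)
nav.sideways(+1)           # Introduction, page 2
nav.down("Topic A")        # descend one level
nav.down("Sub-Topic C")
nav.up()                   # back to Topic A
print(nav.topic.name, nav.page)   # prints: Topic A 1
```

Restricting movement to these three operations is what keeps the structure a hierarchy rather than an unrestricted web: a reader can always answer "where am I?" by walking the parent links back to the root.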
The students are expected to explore the structure by starting from the Smith (1984) article and going to the individual techniques from there when they become relevant. However, direct access is always possible to any one section, either from the 'introduction' screen, which is always only one step away, or from the How far menu option, which allows jumps to any one section.

The individual sections are:
• The introduction
• Article by R. Smith (1984)
• Article by A. Whittle (1993)
• The Avebury Area
• Molluscan Analysis
• Pollen Analysis
• Faunal Remains
• Plant Remains
• Soil Analysis

7.3.3 Interface design

A common interface for a group of programs is desirable for political, consistency, and usability reasons. As this project is part of a larger programme producing a range of courseware for archaeology students, it is clearly an advantage if they all share the same interface and layout, so that once students have learned how to use one program, they can apply their newly acquired computer skill to any other program from the group. However, as ENVARCH was one of the first TLTP projects to start and be completed (March - October 1993), no interface layout and development guidelines had yet been agreed by the Archaeology Consortium. These were finally laid down in the autumn of 1993 (they were officially confirmed in December 1993) using a different template. Consequently ENVARCH will have to be modified later to fit in with the agreed framework. The present interface is partly based on a design by the Nottingham Biodiversity Consortium (Brailsford et al. 1993).

7.3.3.1 Main display

The main interactions are carried out by clicking on highlighted text and buttons. The text links will take the user to another topic that provides more detail on the text that was selected. Buttons are provided in some cases where an obvious text link did not exist, and are always present on the navigation bar to provide the step backwards in the topic hierarchy. The main display is illustrated in Figure 7.3 and, from top to bottom, consists of:

1. The Main Window Title, along the top of the display window, which contains the title of the current section.

2. The Menu Bar, situated below the title bar and containing a number of pull-down menus.

3. The Display Window, the main part of the screen, used for the display of information. It is usually possible to click on parts of it, such as coloured and underlined text and buttons, and on other areas which are indicated in the text.

4. The Navigation Bar (see Figure 7.4), which consists of a number of elements:

• Tabs with page numbers appear for topics which are split over several pages. Where topics have large amounts of text attached to them, these were broken up into smaller sections which can be viewed by flicking through a number of pages rather than making extensive use of scrolling text. The tabs can be used to move to the specified page.
• Arrow buttons only appear on topics with more than one page and have the same function as the tabs.

• Topic name gives the name of the current main section and topic.

• Progress shows how many topics have been seen and how many there are overall. The figure relates to topics rather than pages, except for the article files (Smith 1984 and Whittle 1993), where the figures represent pages.

• Previous topic button enables the user to go back one stage in the topic hierarchy outlined above.

Figure 7.4: The navigation bar (tabs, current page, arrow buttons, topic name, progress button).

7.3.3.2 Individual references and bibliographies

Each section has a bibliography of further reading and key texts, included under the Bibliography menu item. Any entries listed in this bibliography may be saved selectively by the user to a file that can be printed or saved to floppy disk at the end of a session. The text itself is full of references, each of which can be expanded by clicking on it to reveal the complete reference and provide the opportunity to save the full reference to a file for later use. The option for the student to click on any reference in the text, without having to go to the bibliography, was included to make the material more accessible and to encourage the student to follow up the knowledge gained in an individual session. In an ideal situation, a click on a reference would bring up a bibliography with the relevant item highlighted, or with the list scrolled forward so that the item is at the top; however, this could not be achieved with the current software, so the simpler solution indicated above was adopted.

7.3.3.3 Print-out and saving

Users are given the opportunity, using the Windows Notepad text editor, to add notes to a file into which all other saved references will be collected at the end of the session. They can also print out and/or save these results onto a floppy disk, provided the system has a floppy disk drive or printer attached. This means that the student can take tangible results away from the computer from which to continue learning. References can be examined in more detail, and the easier it is to extract them from the system, the more likely it is that the students will make the effort to look at them. The text file containing the annotations can also be imported into any other work. Actual quantitative results of the use of these facilities are included below.

7.3.3.4 Keeping track of user progress

It is important to give a student some means of assessing how much of a particular subject has been investigated, in order to assess their own progress and to keep the goal in sight. A problem often encountered with hypermedia is that of the user getting lost in the information structure. ENVARCH includes a number of facilities to address these problems:

• Direct access devices allow the user to gain direct access to any node, using an index facility, for example. In this case, the Map and How far options provide the means to access any of the local topics and any of the other sections respectively.

• Show all the information and structures, with connections, in the form of a map, and allow direct access from there. The problem here lies in displaying complex structures with many links. In ENVARCH, a simplified but complete local Map is presented, in which previously visited topics are indicated by highlights and from which direct access is possible by clicking on the topics displayed in the map.

• History devices work by showing users where they have been, or allow them to 'back track'. In ENVARCH, this could only be implemented in the form of a 'previous topic' button that takes the user up one step in the hierarchy. The highlights on the Map and the ticks on the How far display give a visual indication of what has been visited.

• Allow users to go back to the start. With ENVARCH, users can return directly to the introduction screen if they choose.

7.3.3.4.1 Progress within a section

The navigation bar indicates how much of the subject has already been seen and hence how much remains. Maps for navigation have been shown to be important if users are expected to explore the contents on their own (Nielsen 1990, 130), so this was considered to be an important facility within ENVARCH. A map of the main topics and their connections is included in every section and, in order to give the user some idea of their progress, all topics already visited are highlighted. The map is also an important aid when searching for a particular topic, as it shows the overall structure and allows direct access to any listed topic. If any one page of a multi-page topic has been opened, that topic will be highlighted on the map.

7.3.3.4.2 Progress between sections

As well as indicators for progress within a section, users can see which sections they have been to by selecting the How far option of the Tools menu. Sections which have already been visited are indicated by a tick, and from the progress display users can click on the names of any of the other sections to jump directly to them.
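The two progress indicators described above amount to a per-section set of visited topics: the navigation bar counts it, the Map highlights it, and the How far display reduces it to a tick per section. The sketch below is a hedged illustration (class names and the example section/topic names are invented; ENVARCH itself was built in Authorware):

```python
# Hedged sketch (invented names) of ENVARCH's two progress indicators.
# Within a section: visited topics are counted on the navigation bar and
# highlighted on the local Map. Between sections: the "How far" display
# shows a tick for every section that has been entered.

class Section:
    def __init__(self, name, topics):
        self.name = name
        self.topics = topics          # ordered topic names in this section
        self.visited = set()

    def visit(self, topic):
        # Opening any page of a multi-page topic marks the whole topic.
        self.visited.add(topic)

    def progress(self):
        """Figure shown on the navigation bar."""
        return f"{len(self.visited)}/{len(self.topics)} topics seen"

    def map_highlights(self):
        """Topics to highlight on the local Map, in display order."""
        return [t for t in self.topics if t in self.visited]

def how_far(sections):
    """The 'How far' display: a tick for every section already entered."""
    return {s.name: bool(s.visited) for s in sections}

# Example with two of the ENVARCH sections (topic names invented):
sections = [
    Section("Molluscan Analysis", ["Sampling", "Identification", "Habitats"]),
    Section("Pollen Analysis", ["Coring", "Counting", "Diagrams"]),
]
sections[0].visit("Sampling")
sections[0].visit("Habitats")
print(sections[0].progress())   # prints: 2/3 topics seen
print(how_far(sections))
```

Keeping the visited record at topic granularity (not page granularity) matches the behaviour described above, where opening any one page of a multi-page topic highlights the whole topic on the map.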
Table 7.1: How competence affected perceived difficulty of program: shaded areas indicate empty cells.
7.4 First Reactions from Students
The program was used in 1994 for a half semester course on Environmental Archaeology for first-year undergraduates at Southampton. Since this was the first trial, students were also given one lecture a week to present additional case studies. They also had a weekly two-hour practical during which teaching staff were on hand to deal with queries and computer problems. During the practicals, minor assessments gave a direction to the information search and initiated discussions among students on where to look for the relevant information and where to find it. At other times, students could access the program on their own. The final assessment took the form of a set of multiple choice and short answer questions that required students to examine specific parts of the database carefully, followed by one longer essay question which required further reading over and above that provided by the program.

As other studies of this technology have shown that students react favourably to new technology, the questionnaire whose results are presented here was designed to compare students' attitudes to lectures and computer practicals. Other questions were included to assess the usability and structure of the program. Of an original sample of 50 students, only 35 completed the questionnaire (as it was given out in a 9 o'clock lecture), so the results, despite giving some insights into student attitudes, should be treated with caution given the small scale of this evaluation, both in terms of numbers and of timespan.
7.4.1 Expertise
With the introduction of this program to the students, two hurdles had to be overcome. The first was that the students, despite being asked to hand in only word-processed work right from the start of the academic year, were relatively new to using computers, and half of them declared themselves to be novices. However, an examination of the questionnaire results shows that, despite the fact that no-one thought of themselves as an expert, all students found the program of average difficulty or better (see Table 7.1). So it seems that the technology and interface were not a stumbling block. Secondly, only half of the students had used Windows before; again, this seemed not to affect the perceived difficulty of using the program (see Table 7.2). These results are very encouraging for those considering introducing this new technology into their teaching methods.

Table 7.2: How Windows experience affected perceived difficulty of program: shaded areas indicate empty cells.
7.4.2 Use of menu items
Questions referring to structure problems showed a wide variety of responses. Students were asked to what degree they felt they got lost and how often they used the map. Those who hardly ever felt lost managed to get a feel for the structure by using the map frequently, while it appears that a majority of students felt that they got lost only sometimes, and used the map sometimes. One student referred to the map as the single most useful tool in the program.

The jump out to the Notepad program was confusing for some, especially novices, who preferred to stick with paper: 20 students (57%) said they preferred paper, with 6 (17%) preferring the Notepad and 9 (26%) using both. Ingenious and effective use of the Notepad was observed in some cases, especially once a better understanding of the Windows environment had been achieved. The usability of the Notepad was not improved by a bug in ENVARCH which sometimes caused the program to 'forget' the user's name, making it impossible to access the notes file despite the fact that it existed on disk. Novices in particular found this extremely unsettling, as they seemed unaware that they could access the files directly from any text editor without having to open ENVARCH, despite having been told so.
7.4.3 References and searching
Hardly any students chose to look up further references in the library, though this would probably have been equally true if they had simply been handed a reading list at a lecture. This clearly demonstrates the prevalent student attitude to work: as no major pieces of coursework requiring additional information and reading had to be prepared for assessment, there was no need to make any use of the library.
Figure 7.5: Answers to: How quickly did you find specific information?
Figure 7.6: Answers to: Do you prefer practicals or lectures?
Figure 7.7: Answers to: Will you remember more from ENVARCH or from lectures?
Figure 7.8: Preference for lectures as reflected in enjoyment.
Students were given assessments to ensure that they would explore most of the database. Most of these were multiple-choice and single word reply questions. Some students admitted to scanning quickly through the information, without taking any of it in, simply to retrieve the replies. As this is the first time this technology has been used for teaching, new ways of assessing students need to be devised. As mentioned above, some degree of interactive assessment would be the best solution, but more time is needed to develop this.

When asked how easy they found it to locate specific information, the replies were weighted towards the more negative end of the scale, with 18 (51%) reporting that they found the information eventually, and 2 (6%) saying they could not find the information and gave up (see Figure 7.5). This result is probably to be expected, as students should have to make some effort in finding information to reply to questions for assessment. If information were found too quickly, there would be no challenge and boredom would set in.

7.4.4 Overall reaction and preference of lectures

The results concerning the overall enjoyment of the program by the students were encouraging, with the distribution of replies weighted towards the positive end of the scale. Despite the fact that the program was well received, with 22 (62%) responding positively, 9 (26%) neutral and 4 (12%) negative to the question of whether they had enjoyed using the program, the overwhelming result of the questionnaire was that students prefer lectures (11/31%) or have no preference (18/52%) (see Figure 7.6), and definitely think they will remember more from a lecture (18/52%) (see Figure 7.7). This result surprised some of the volunteers who also tried the program but were not first year undergraduates; the expectation was that students would prefer this new way of interacting with the material. The cross tabulation of enjoyment against preference (Figure 7.8) shows that the bulk of replies is concentrated in the centre of the chart, with only a faint pattern of correlation between enjoyment and preference. This contradicts the expectation that those who enjoyed the program would prefer it to lectures. A similar pattern appears when examining perceived difficulty against preference for lectures: despite the fact that all the students found the package relatively easy to use, most preferred lectures.
7.5 Conclusion
One result of the first trial application of the ENVARCH program is that students appear to prefer lectures. In fact, even those who liked the program still expressed a preference for lectures. The reasons for this preference are clearly stated: they think that they will remember more, because in lectures continuous, indiscriminate note-taking is prevalent. By contrast, ENVARCH deliberately uses an enabling rather than a directive technology, requiring responsibility and decision making on the part of the student, not the lecturer. As students take ever more responsibility for their own learning and are not presented with condensed lectures of the main subject matter, they will have to change their learning methods; ENVARCH makes this possible. However, it should be remembered that the students involved were first years, who are only gradually being introduced to the importance of individual research and learning rather than the reiteration of facts as presented by the teacher. While they are still in this transition, the latter will seem the easier option. The results of a study like this are also affected by the skill of the individual lecturer, as lectures delivered by excellent lecturers can never be replaced by a computer program. It is possibly not fair to compare a learning system and a teaching system, as they address different aspects of knowledge acquisition. However, by using ENVARCH, many students can use one resource simultaneously and when it suits them best. The lecturer is then free to give lectures and can spend more quality teaching time with students, often on a personal basis, which is all too often a luxury in the current mass system of higher education. During practicals, minor assessments give a direction to the information search and initiate discussions among students on where to look for the relevant information and where to find it. Students can access the program on their own, although with ENVARCH formal practical sessions as well as a lecture were arranged every week, with a supervisor present to help with queries and problems.
Students are also provided with multiple copies of the introductory concepts without having to hunt down the few copies of textbooks: libraries can no longer provide enough copies of such introductory texts, and purchasing all the recommended books is financially not an option for most students. ENVARCH was received well, despite initial minor teething problems. We see a continuing role for such computer based learning, if for no other reason than that it forces students away from the 'easy option' of lecture-based learning and challenges them to acquire the skills of investigation and the exploration of information which form the core of an undergraduate degree.
Acknowledgements

ENVARCH has been funded by the TLTP Archaeology Consortium. We should like to thank Mel Leggatt and Stephen Shennan for commenting on this draft, and the many other specialists who have freely given their expertise, in particular Alasdair Whittle, John Evans, Rosina Mount, Rupert Housley, Dale Serjeantson, Marijke van der Veen, Arthur ApSimon, Julian Thomas and Tim Sly. Finally, thanks to the 1993 first-year archaeology students at the University of Southampton, who gave the program its first serious run-through and endured the initial minor running problems.
Bibliography

BRAILSFORD, T. J., DAVIES, P. M. C. & SCARBOROUGH, S. C. 1993. Authorware Tutorial Toolkit Prototype Version 0.8, BioInformatics Research Group, Department of Life Science, The University, Nottingham NG7 2RD.

DEEGAN, M., TIMBRELL, N. & WARREN, L. 1992. Hypermedia in the Humanities, University of Oxford and University of Hull.

DUFFY, T. M. & KNUTH, R. A. 1990. 'Hypermedia and Instruction: Where is the Match?' in D. H. Jonassen & H. Mandl (eds.) Designing Hypermedia for Learning, Proceedings of the NATO Advanced Research Workshop on Designing Hypertext/Hypermedia for Learning, held in Rottenburg/Neckar, FRG, July 3-8, 1989, NATO ASI Series F, Vol. 67, Springer-Verlag, Berlin.

HUTCHINGS, G. A., HALL, W. & COLBOURN, C. J. 1993. 'A Model of Learning with Hypermedia Systems' in G. Salvendy & M. J. Smith (eds.) Human-Computer Interaction: Hardware and Software Interfaces, Proceedings of the 5th International Conference on Human-Computer Interaction (HCI International '93), Orlando, Florida, August 8-13 1993, Volume 2.

LEGGATT, M. 1992. The Environmental Science Hypermedia Tutorial: an Experiment in the Application of Hypermedia Technology to Teaching in Archaeology, Unpublished M.Sc. dissertation, University of Southampton.

NIELSEN, J. 1990. Hypertext & Hypermedia, Academic Press, London.

SMITH, R. 1984. 'The Ecology of Neolithic Farming Systems as Exemplified by the Avebury Region of Wiltshire', Proceedings of the Prehistoric Society, 50, 99-120.

WHITTLE, A. 1993. 'The Neolithic of the Avebury Area: Sequence, Environment, Settlement and Monuments', Oxford Journal of Archaeology, 12(1), 129-53.
8 Multimedia communication in archaeology - why and how

Kai Jakobs (1) and Klaus Kleefeld (2)

(1) Technical University of Aachen, Informatik IV, Computer Science Dept, Ahornstr. 55, D-52056 Aachen, GERMANY.
(2) Büro für historische Stadt- und Landschaftsforschung, Kaufmannstr. 81, D-53115 Bonn, GERMANY.
8.1 A brief introduction
Multimedia has become one of today's major buzzwords in communications. This is primarily due to the ever increasing bandwidth available in public networks. This bandwidth, in conjunction with new communication protocols currently under development, enables the exchange not only of plain text, but also of drawings, graphics, tables, and images. Moreover, even high-volume digital video information can be transferred in real time. On the other hand, archaeological research in very many cases suffers from information being unavailable at the places where it is really needed. Furthermore, archaeology typically needs visual information in addition to plain text. Thus, we are convinced that multimedia communication systems would be a well-suited vehicle to make relevant information available world-wide in a timely manner. This holds for reference material as well as for on-line expert advice. The remainder of the paper is organised as follows: Section 8.2 discusses possible archaeological applications of multimedia communication systems in different typical scenarios. The technology-related issues are presented in Section 8.3. Finally, Section 8.4 gives some concluding remarks.
8.2 Three scenarios
This section is intended to demonstrate the use of multimedia communication systems in archaeology. There are obviously other areas of application, such as teaching. However, we want to focus on those that are archaeology-specific. One of the most important methods in archaeology is visual comparison. This is inevitably involved in dating types of pottery or analysing stratigraphic situations in excavations, for instance. Today, some important issues need to be discussed internationally, rather than on a national or even regional basis. This is simply because prehistoric artefacts must be dated with the traditional comparative historical-archaeological method. In the following, we provide some typical examples to illustrate the potential usefulness of multimedia systems for archaeology.
8.2.1 Analysis of stratigraphic situations
One of the questions currently under discussion in Germany is the deviation of dates obtained using the radiocarbon method or dendrochronology from those indicated by the historical/archaeological chronology of
Neolithic and Copper Age Central and South-Eastern Europe. To answer this important question it is necessary to analyse guide-forms of pottery in their stratigraphic situation. These guide-forms are used as contact-pottery in the European chronology. One of the most peculiar pottery types of the Neolithic period is the gynaicomorphic pottery which was found in Twann at the Kleine Hafner near Zurich, Switzerland. Obviously, the stratigraphic information relating to this pottery is of major importance. This is particularly true in this case since the dendrochronological dates differed from those obtained by the traditional method. The adjustment of the chronological system was done using pottery from Hungary and other Eastern European countries. The archaeologist Jörg Petrasch calls for a comparative analysis of European prehistory with the Balkans, the Aegean, and West Turkey (Petrasch 1984). A detailed discussion of this issue is well beyond the scope of this paper. However, we do think that it would be most important to enable fast and broad availability of related new information, such as that obtained from new excavations. Today there may typically be some delay even for short first reports to be published in journals, and it may take years until final excavation reports are published. At this early stage, a widely available and accessible multimedia information system would be extremely helpful, even if only very short notes and some photographs of the new excavation and the major artefacts were provided. Multimedia capability is crucial in this context, since photographs, drawings, and maybe video will be needed for a comparative analysis, rather than pure textual descriptions.
8.2.2 Supporting international projects
An international archaeological project is currently being planned jointly in the Netherlands (Stichting voor Bodemkartering, Wageningen) and in Germany (Westfälisches Museum für Archäologie, Münster). This project aims to investigate the medieval field-system of the Esche. Esches are fields whose yield has been improved by amelioration and artificial deposition. Currently, dating is one of the major points under discussion. Until now, according to German geographers, these fields had been dated to the Middle Ages. However, the latest archaeological investigations by the Staring-Centrum in Wageningen, Netherlands, suggest they date to the 16th century instead. The new project will analyse the archaeological situation beneath the Esche. This soil may be considered as an archive of an original medieval soil horizon. Thus, the focus is on the
stratigraphy. This research could be substantially supported by a continuous information exchange on Esch profiles in Germany and the Netherlands. Due to the very nature of the information to be exchanged, this would again call for a multimedia information system. In fact, a corresponding proposal will be submitted to the project leaders.
8.2.3 Analysis of artefacts
Every now and then archaeological museums receive artefacts without any accompanying information relating to the location of the discovery. If such artefacts need to be dated, no additional information is available: the artefact itself is the only source of information. A typical example that one of the authors has already had to deal with is that of a bronze needle. The one thing he did know was that such needles stem from the Bronze Age. For any further information, dedicated reference books are required. In Germany, for instance, there are the volumes of the Prähistorische Bronzefunde (Prehistoric Bronze Finds). He would use these to look for comparable artefacts in order to date the needle, and to get some information on likely finding places. Having such books available on-line from a (possibly distributed) database would be of great help, all the more so since a database can be updated far more frequently than reference books. We believe that these brief examples provide an impression of the potential usefulness of a multimedia communication and information system.
8.3 Some technical issues
The need for advanced multimedia communication systems has long been realised by the CEC. The Research and Development programme RACE (Research and Technology Development in Advanced Communications Technologies in Europe) focuses on the development of communication systems and pilot applications capable of effectively handling the requirements of multimedia communication. EuroBridge is one of the projects within RACE. It aims to establish a uniform communication platform providing all communication-related services to all kinds of multimedia applications. This platform will be able to run over a variety of different networks (Local Area Networks such as Ethernet as well as public data networks, e.g. JANET), under different versions of UNIX (the major operating system for workstations, also available for PCs and the Macintosh). All communication services implemented are based on international standards. However, in many cases considerable extensions are being implemented to enable true multimedia communication. The services provided directly to the user or to a multimedia application will be briefly discussed. The services FTAM (File Transfer, Access and Management) and RDA (Remote Database Access) have been implemented in a way that enables them to retrieve large amounts of data efficiently from a remote database. RDA is used to search the database for appropriate
information. In the case of large documents, FTAM is used to transfer the information from the database to the user. It may also be used to update the database. It is anticipated that FTAM will mainly be used by multimedia applications rather than directly by human users. The electronic messaging standard MHS (Message Handling System) provides for multimedia documents. However, no implementations of a multimedia MHS service are available yet. EuroBridge is implementing user agents with the capability of submitting/receiving, storing, and interpreting arbitrary multimedia messages, possibly including voice and video as well. It is anticipated that MHS will mainly be used for interpersonal communication. However, it may also be employed as a transfer service for multimedia documents, which may be arbitrarily structured. Integration of hypermedia capabilities into MHS messages is another task. The term 'hypermedia' refers to a multimedia document's capability to overcome the limiting sequential structure of a normal document. That is, the reader can more or less arbitrarily navigate through the document rather than being forced to follow the pre-defined hierarchical structure of sections and paragraphs. In doing so, the reader is guided by links that point from one part of the document (a word, a figure) to another. A Document Control Structure (DCS) is currently being defined; this holds all structure information relating to the following body parts of the message. This feature will be particularly useful if reference books are to be browsed. The Directory Service (DS) provides a functionality roughly equivalent to the white pages plus the yellow pages plus telephone enquiries. In a data network, this service is used to locate potential communication partners as well as resources. For a discussion of the further usefulness of this service for archaeological applications see Jakobs and Kleefeld 1990.
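The two-step retrieval pattern described above (search via an RDA-like service, bulk transfer of large documents via an FTAM-like service) can be sketched schematically. All class and method names below are invented for illustration; they are not real RDA or FTAM APIs.

```python
# Schematic sketch of the RDA-then-FTAM retrieval pattern. These classes are
# stand-ins invented for illustration, not real OSI protocol bindings.
class QueryService:                     # plays the role of RDA: locates records
    def __init__(self, records):
        self.records = records

    def search(self, keyword):
        return [r for r in self.records if keyword in r["title"]]


class TransferService:                  # plays the role of FTAM: bulk transfer
    def __init__(self, store):
        self.store = store

    def fetch(self, doc_id):
        return self.store[doc_id]


def retrieve(keyword, rda, ftam, inline_limit=1000):
    """Search first; pull large documents through the transfer service."""
    results = []
    for hit in rda.search(keyword):
        if hit["size"] > inline_limit:
            results.append(ftam.fetch(hit["id"]))   # large: separate transfer
        else:
            results.append(hit.get("body", ""))     # small: returned inline
    return results


rda = QueryService([{"id": 1, "title": "Esch profiles", "size": 50_000},
                    {"id": 2, "title": "site notes", "size": 200, "body": "short"}])
ftam = TransferService({1: "...50 kB excavation report..."})
print(retrieve("Esch", rda, ftam))
```

The design point is the division of labour: the query service returns only metadata and small bodies, so heavyweight transfers happen once, on demand.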
Finally, the video conferencing service should be mentioned. At present, this service works over local Ethernet and FDDI networks, and over public ISDN networks. It allows the participation of up to eight people in a conference. In addition, a joint editing tool is provided to enable on-line modification of documents. The video quality is adequate over one ISDN B-channel (64 kbps), and quite good over FDDI (100 Mbps). These services utilise both standardised and newly designed communication-oriented protocols. While the first are required for backward compatibility, the latter are essential since almost all of today's protocols date back to the mid-seventies and do not provide sufficient functionality for today's very demanding applications. For instance, dedicated mechanisms are required to support the needs of applications involving more than one sender and one recipient (i.e. applications requiring group communication functionality). Moreover, a certain communication quality (Quality of Service, QoS) has to be guaranteed to certain applications. A video conference application, for instance, may well tolerate loss or corruption of some information - a viewer won't even
realise that some pixels on the screen are red instead of green for a second. However, such applications are very demanding in terms of bandwidth. A thirty-second black and white video clip is about 5 MB. In colour, this may easily rise to 15 MB. On the other hand, a sample book of, say, 1000 pages, 65 lines per page and 65 columns, plus some graphics, comes to approximately 5 MB of data, but no loss of data can be tolerated. The communication-oriented layers will have to manage these problems. There is, of course, a simple alternative that may be helpful in many cases. CD-ROMs are becoming a popular storage medium for high-volume data. Some of the benefits achieved by using multimedia communication systems may in many cases also be accomplished by using information stored on CD-ROM. However, there are some possibly severe disadvantages:

• updating will be difficult;

• there is no way to store additional new information on CD-ROMs. Bibliographic databases on CD-ROM, for instance, are updated every six months at best, but are still an additional six months behind the on-line version;

• no interactive video conference is possible;

• CD-ROMs do not provide for an interactive discussion with remote experts. Multimedia communication systems do.
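The storage figures quoted above are easy to reproduce with back-of-envelope arithmetic; the byte counts are the paper's own approximations.

```python
# Reproducing the paper's storage estimates (approximate figures).
MB = 1_000_000

def book_text_bytes(pages=1000, lines=65, cols=65):
    """One byte per character of plain text; graphics come on top."""
    return pages * lines * cols

text = book_text_bytes()                 # 4,225,000 bytes of raw text
print(round(text / MB, 1))               # ~4.2 MB; plus graphics, roughly 5 MB

# A 30-second black-and-white clip of ~5 MB implies a sustained data rate of:
rate_kbps = 5 * MB * 8 / 30 / 1000
print(round(rate_kbps))                  # ~1333 kbit/s for the raw clip
```

The raw clip rate is far above a single 64 kbps ISDN B-channel, which is why the video conferencing service mentioned above depends on compression to be usable over ISDN.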
8.4 Some concluding remarks
The use of advanced communication and information systems has become an important factor in many different professional fields. We have tried to indicate why we think archaeology may well be one of these fields. We feel that there is a huge number of potential benefits for researchers, excavations, museums, and projects. One of the major obstacles of archaeological research - the extremely slow information flow - could be efficiently overcome if such advanced communication and information services were used to a greater extent.
References

JAKOBS, K. & KLEEFELD, K. D. 1990. 'Using Public Communication Services for Archaeological Applications', in S. P. Q. Rahtz (ed.), Computer Applications and Quantitative Methods in Archaeology 1990, British Archaeological Reports Int. Series 565, Oxford.

PETRASCH, J. 1984. 'Die absolute Datierung der Badener Kultur aus der Sicht des süddeutschen Jungneolithikums', Germania 62, 269-287.
9 An electronic guide to the buildings of ancient Rome

Philip Perkins
Department of History of Art, Birkbeck College, London, UK
Email: [email protected]
9.1 Introduction

The Department of History of Art of Birkbeck College is currently developing an electronic guide to the buildings of the ancient city of Rome. It is intended to become a free-standing resource associated with the slide library of the Department. Although it will be available for consultation by all, it is primarily aimed at students taking the undergraduate course module 'The Classical Tradition'.

There is a long tradition of guides to ancient Rome, dating from antiquity through the middle ages to present times (e.g. Notitia Regionum XIV; Burn 1895; Coarelli 1974; Luciani & Sperduti 1992; Lugli 1946; Nichols 1986 - originally mid-12th century). This richness in itself presents the student with problems. Many of the best guides are not written in English, and most undergraduates do not have the language skills to overcome this barrier. The great volume of published scholarship causes problems of dispersal, and only specialised libraries can possibly have the resources to provide a detailed coverage of the ancient city. Quite apart from locating the relevant books, a great deal of time is required to sift the information which they provide. Furthermore, the economic constraints upon publishing reproductions of images mean that standard texts such as J. B. Ward-Perkins' lavishly illustrated Roman Imperial Architecture (1970) can only contain three monochrome images and two line drawings of the Pantheon, the most complete standing building in the city.

Currently students at Birkbeck College depend upon the Departmental slide library to augment printed sources. The Department is committed to maintaining and developing this resource, but large numbers of students consulting few, fragile slides create problems of access and curation. Additional problems lie in the nature of the 35mm slide itself. Slides are designed to be projected, and the rewarding activity of browsing through a slide library is both time consuming and disruptive to the collection. Slides are also small, and only limited text and cataloguing information can be carried on the frame of a transparency, providing limited information for the student. The collation and presentation of these disparate visual sources in a computer environment provides solutions to some of these problems, although it also raises new problems of its own. In addition, recent advances in the graphic capabilities of low cost computers and the development of flexible authoring software provide new possibilities to integrate and relate various aspects of the ancient buildings.

9.2 Concept

The intention of the development is to link together three principal sources of information concerning the buildings of ancient Rome. These are: maps to indicate locations for the various buildings; images to provide a visual impression of the buildings; and a bibliography to give directions for further reading. These three sources are very different and each requires a different approach to make it accessible.

9.2.1 Maps

A map is a highly structured means of providing spatial information and usually encapsulates a series of conventions, editorial decisions and interpretations. Furthermore, any map is only a representation of the world at a given point in time. Modern atlases need constant updating to keep pace with boundary changes, and archaeological maps need to be kept current to take account of both new discoveries and re-interpretations. The abundance of structural remains in the city of Rome causes problems in the compilation of maps of the ancient city, which rapidly become out-dated. A glance at maps compiled in the early part of the twentieth century (e.g. Lugli 1940) reveals identifications of buildings which are no longer generally accepted. Another problem with maps of this sort is the decisions which have been made about what to actually represent on the map. A popular choice (e.g. Lugli 1946, Tav. 4) has been Rome at the time of Constantine, justified by the reasoning that by this time the city had reached its greatest extent and all of the major surviving 'Classical' buildings had been constructed. However, the presentation of the city 'at the time of Constantine' is something of a fiction, as few buildings have been excavated to a sufficient degree to enable an accurate description of the state of the structure in the early part of the fourth century AD. The approach usually taken to remedy this situation is to represent a building on a map in either its original form or its surviving form. Matters are complicated still further by the superimposition of buildings, so that decisions must be taken whether to represent the Golden House of Nero or the Baths of Trajan on the slopes overlooking the Colosseum. This situation presents problems in the choice of map to provide locations for the buildings of Rome. The remedy adopted in this project is to treat each map of Rome as a cultural artefact in its own right.
Thus edited information from a recent archaeological map is used to provide an indication of the current understanding of the known remains. However, a Renaissance map of Rome is
also used to provide a different, but equally valid, representation of the topography of the ancient city. An equivocal approach such as this is vital since in historical studies there is little point in knowing where twentieth century archaeologists thought the Temple of Peace was; what is important is knowing where the Temple was thought to be in the period under study, for example the sixteenth century. The map is also a two dimensional object, which requires a certain amount of skill and training to interpret. This project supplements the maps by also providing images of the three dimensional model of the city of Rome now housed in the Museo della Civiltà Romana in Rome. The model enables the visualisation of the city and also provides evidence for three-dimensional relationships between buildings and topography which do not readily emerge from plans alone.
9.2.2 Images
Each building is different and unique: any number of images could not fully represent its variety. However, for purposes of comparison and standardisation (see below) it was considered desirable to present a uniform range of views of each building. This in itself is a difficult task given the diversity and preservation of the buildings. During development, a standard range of images has been established. This includes a general view, views of details, a plan, a section, interior views, details of construction and a reconstruction of the building. Images are in colour where possible, except for line drawings. This combination of images provides a reasonable overview of the buildings, but does rely heavily upon the skills of plan and section reading to provide a coherent presentation of the building.
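The standard range of views described above could be represented as a simple per-building record; the field names and file name below are invented for illustration, not taken from the project itself.

```python
# Hypothetical record for one building, following the standard range of views.
STANDARD_VIEWS = ["general", "details", "plan", "section",
                  "interior", "construction", "reconstruction"]

pantheon = {view: None for view in STANDARD_VIEWS}   # one image slot per view
pantheon["general"] = "pantheon_facade.bmp"          # invented file name

# A uniform schema makes gaps in coverage easy to audit:
missing = [v for v, img in pantheon.items() if img is None]
print(missing)  # views still to be captured for this building
```

Keeping the same slots for every building is what makes side-by-side comparison of two buildings straightforward later on.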
9.2.3 Bibliography
The bibliographic part of the project provides listings of a range of papers, articles and books relating primarily to the buildings. More general titles are also included and these are cross-referenced to the buildings.
9.3 Technical Issues
9.3.1 Development environment
The project has been developed using IBM PC compatible machines with Intel 80486 processors running Microsoft Windows. The software used to develop the programs is the ToolBook authoring software produced by Asymetrix Corporation, which provides an object-based and event-driven environment. The package takes the metaphor of the book as its basis, and information may be ordered into pages and books. Graphics and screen controls are easily created and manipulated and may be placed on pages or on backgrounds which are used for multiple pages with common features. Each of these ToolBook objects may send or receive messages which are generated in response to events such as mouse button clicks or turning a page. These messages are trapped by objects and interpreted by scripts written in the programming language OpenScript, an integral part of the software.
This language is English-like and very flexible; however, for complex programming tasks it becomes necessary to return to the traditional programming skills of trapping errors and evaluating returned values. To some extent ToolBook is object-oriented: graphics, screen controls, text etc. are all objects with properties which may be manipulated or created. It does not implement the key concept of inheritance found in true object-oriented systems. However, objects consist of a hierarchy of books, backgrounds, pages, and then the various objects on a page. Messages are automatically passed up the object hierarchy until a script is found which can interpret the message. In this way the properties and scripts of an object further up the hierarchy may be exploited by an object lower in the hierarchy, thus producing some semblance of inheritance (for a review see Perkins 1993).

Images were obtained from 35mm slides using a Nikon LS-3500 slide scanner in conjunction with either Adobe Photoshop running on a Macintosh Quadra 800 or Aldus Photostyler on a Viglen 486 for image capture and processing. Images on paper were captured with a Microtek A4 flat-bed scanner connected to the Quadra. All images were scanned in 24-bit colour and then reduced in size to less than 640 x 480 pixels, and the colour depth was reduced to 8-bit (either 256 colours or grey scales). As a rule of thumb, image file sizes were limited to under 100k by manipulation of resolution and image size, since larger images take too long to display in ToolBook. Using such low resolution images causes problems with complex images such as maps and plans, but yields perfectly adequate results for other images to be displayed on high resolution SVGA monitors.

Bibliographical data was stored in a dBase format file since ToolBook provides a Dynamic Link Library containing functions to access .dbf files directly without the need for any other database management software. The data structure is a simple flat file with a field for each value. A more complex relational structure would have been desirable, but would have caused programming complexities and performance penalties which outweighed the benefits.
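The upward message-passing that produces the 'semblance of inheritance' described above can be sketched as follows. This is a simplified model in Python, not OpenScript, and the class and handler names are invented.

```python
# Simplified model of ToolBook-style message bubbling up the object hierarchy.
class TbObject:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.handlers = name, parent, {}

    def send(self, message):
        obj = self
        while obj is not None:          # walk up: object -> page -> book
            if message in obj.handlers:
                return obj.handlers[message](self)   # first handler found wins
            obj = obj.parent
        return None                     # message fell off the top, unhandled


book = TbObject("book")
page = TbObject("page1", parent=book)
button = TbObject("okButton", parent=page)

# A script attached to the book handles clicks for every object beneath it.
book.handlers["buttonClick"] = lambda target: f"{target.name} clicked"
print(button.send("buttonClick"))   # handled by the book's script
```

Because the handler lives on the book, every button on every page reuses it, which is the practical benefit the text attributes to the hierarchy.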
9.3.2 Structuring
The structure of the project was determined by a compromise between the structure of the data and the capabilities of ToolBook, which does not support the Multiple Document Interface standard and so cannot display child windows except when using MCI commands to display multi-media files. Fortunately, multiple instances of ToolBook can run concurrently, allowing more than one window to be used. This possibility was exploited to enable one window to be used for presenting map data, another for bibliographic data, and others for information about individual buildings. Importantly for the buildings, this structuring allows the display of more than one building at a time, essential for comparisons. The only limitation on the number of windows, and hence buildings which can be displayed at one time, is the amount of available RAM and system resources.
Figure 9.1: Clicking the Map menu produces this screen providing navigation to different maps; either the map sheet or the title may be clicked to go to that map.
Figure 9.2: The Map menu with a view of the model of Rome. Note the mouse pointer over a highlighted monument which is identified at the lower left of the screen.
Figure 9.3: Renaissance map of Rome with all monuments highlighted with the 'Show All' button.
9.3.2.1 The maps

Following the discussion of maps above and the use of low resolution images, two forms of maps were used:

• maps which could be displayed on a single ToolBook page, i.e. the entire map could be displayed in sufficient detail on a single screen;

• maps which contained too much detail and consequently required to be displayed at a larger scale on several screens.

The maps as a group may be accessed from the opening screen, or from the 'Maps' choice on the Map menu. This
Figure 9.4: The Search dialog. Selecting the monument and clicking OK activates the search. If the search is successful the correct map is displayed, the monument highlighted and the mouse pointer positioned over the monument as shown on the right of the screen.
presents a graphic of a stack of overlaid map sheets, each of which is labelled. Clicking on a map sheet or its label will move to that map page (Figure 9.1). Each map screen displays an image of the map as a bitmap, and the individual buildings represented on the maps are further defined as vector polygons overlaid upon the bitmap. As the mouse pointer is moved over the map and enters a polygon, the plan of the building is highlighted in red or green and the name of the building appears in a box at the bottom left hand corner of the screen (Figure 9.2). This mechanism is maintained for all of the various maps and provides a means of browsing the maps or the model and identifying buildings marked by
Figure 9.5: The result of clicking on the Pantheon on a plan. A window with details of the building is displayed. The menu shows the standard variety of information available.

Figure 9.6: Screen showing details of the building. Clicking on a highlighted area of the overall view displays a detailed view of that part of the building.
simply moving the mouse pointer. On the lower right of the screen are buttons which will highlight all of the buildings identified, providing a context for browsing with the mouse pointer (Figure 9.3).
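The polygon highlighting described above amounts to a point-in-polygon hit test run as the pointer moves. A minimal sketch of the standard ray-casting version, with an invented building outline:

```python
# Ray-casting point-in-polygon test: count how many polygon edges a
# horizontal ray from (x, y) crosses; an odd count means "inside".
def point_in_polygon(x, y, poly):
    """poly is a list of (x, y) vertices in order."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        # Edge straddles the ray's y, and crossing lies to the right of x.
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside


pantheon = [(10, 10), (30, 10), (30, 30), (10, 30)]   # hypothetical outline
print(point_in_polygon(20, 20, pantheon))   # True: pointer over the building
print(point_in_polygon(40, 20, pantheon))   # False: pointer off the polygon
```

In the guide each identified building would carry one such polygon over the map bitmap; a hit triggers the red or green highlight and the name box.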
Other more structured forms of geographic query are also possible. The 'Locate' option on the Map menu produces a dialog box asking for the name of a building to find and, if the search is successful, automatically moves to the map showing that building, puts the mouse pointer over the building and highlights it. The 'Index' option on the same menu is similar but presents a scrolling list of all of the identified buildings, allowing a building to be highlighted and then located with the same results as the 'Locate' command (Figure 9.4). Other options on the Map menu provide a means of moving directly from one historical map to another. On multiple page maps a further control is available at the bottom of the screen (Figure 9.4) to move from one map sheet to another. Clicking on one of the arrows moves to an adjacent map sheet, and a key to the maps on the left of the control indicates the relation of the current map sheet to others and also identifies other sheets previously viewed. It is possible to magnify the view of a map using a right mouse button click; however, the results are not graphically pleasing because the pixelation of the low resolution images becomes apparent. Thus the digitised maps are unable to provide detailed plans of individual buildings as a good quality printed map can. Detailed plans are held separately along with other details of the building.

9.3.2.2 The buildings

Details of each building are maintained in separate files. Windows containing details of buildings each have a standard interface and as far as possible contain comparable material on each building. The window opens with a general view of the building, usually the main facade. Further information is obtained using the View menu which contains choices to view other pages. The options are:
• 'General': the general view (Figure 9.5)
• 'Details': the same general view but with colour depth reduced and faded. Areas of the view are highlighted in red and clicking on one of these areas reveals a detailed view of that part of the building. A click on the picture will remove it. Several details may be viewed at any time (Figure 9.6).
• 'Plan': a plan of the building. If the building is multi-phase each phase may be viewed using buttons on the screen (Figure 9.7)
• 'Section': a section or multiple sections of the building (Figure 9.8)
• 'Reconstruction': a view of a reconstruction model or drawing
• 'Construction': views or diagrams of constructional details
• 'Interior': a view of the interior with areas highlighted as in 'Details' above.
These categories of information were refined during development; any idiosyncrasies of a building not catered for under these headings are dealt with using buttons on any of these screens. However, deviations from the standard scheme are kept to a minimum to maintain the familiarity of the standardised interface. Digital video is also implemented in a limited number of buildings. A menu choice 'Video' appears on the View menu. Selecting this choice plays a short Microsoft Video for Windows file. Scenes are currently limited to simple panning views of the buildings; more complex video sequences would require purpose-shot video highlighting salient features.
AN ELECTRONIC GUIDE TO THE BUILDINGS OF ANCIENT ROME
Figure 9.7: Screen showing a plan of the building, the result of clicking 'Plan' on the menu.

Figure 9.8: Screen showing a section of the building with the bibliography menu displayed.
9.3.2.3 The bibliography
• 'Search' presents a dialog box to enter the name of a building or author to search for, and if the search is successful displays the bibliography, otherwise a notification of failure is given;
• 'Browse' opens the bibliography window at the current record, which is either the first record or the last visited.
At the top of the bibliography viewing window is a box titled 'Current Item' giving the name of the building to which the current database record relates. Beneath this are standard bibliographical details (Figure 9.9). Towards the lower part of the screen are boxes for keywords to enable searches independent of the buildings themselves (not yet implemented). Below this is a box titled 'Arrange by...' allowing indexes for the data file to be changed. Clicking the button below labelled 'Item' (Figure 9.9) produces a dialog box asking for a search string using the item (i.e. building) index. When the author index is in use the search criterion is the author's name. To the right of this are buttons for the management of data file records; the current record may be edited directly on screen. At the bottom of the screen are buttons to navigate through the data records sequentially.
From the bibliography window the 'Building' button at the lower right opens a building window relating to the building in the 'Current Item' box, enabling direct navigation from bibliography to building, and closes the bibliography window. The quit button in the lower right corner closes the bibliography and activates the most recently visited window, either a map or a building. In this way each part is accessible from any of the others via screen controls or menu choices as well as by simply moving the mouse pointer.

9.3.3 Linking the parts
Windows of details of buildings are primarily accessed from the map sheets. On a map sheet, if a building highlight is red this indicates that no further details are available; when the highlight is green a mouse click will open the window presenting details of that building, which may then be accessed via the View menu. The window opens leaving the map window visible behind. The maps may be returned to by clicking on the map window, which leaves the building window open, or by selecting 'Quit' from the menu bar, which will close the building window and return to the maps.
Figure 10.6: Stacked Bar Chart for clustered graves, Münsingen-Rain data (different shading refers to different brooch types)
JOHN WILCOCK
Figure 10.7: Skyline Plot for clustered graves, Münsingen-Rain data

Figure 10.8: Skyline Plot for clustered graves shown as a surface, Münsingen-Rain data
10.6.7 Skyline Plot shown as a Surface

Figure 10.8 shows the same skyline plot for the clustered graves as a more 'sculptured' type of surface, with shading for the different graves.
10.6.8 3D Skyline Plot showing Phases related to Graves

However, none of the above diagrams has really shown three or more dimensions in a satisfactory manner; the representation has chiefly been the display of a selected pair of dimensions, albeit selected from the larger number of dimensions under study. Figure 10.9 is, however, proposed in this paper as a step in the right direction. The conventional skyline plot for the clustered graves has at its head ONE 'side view' of a 3D matrix (Phases versus Graves versus Brooches), in this case showing Phases vertically and Graves horizontally. The vertical lines relate each grave to its corresponding phase. Another skyline plot exists at the side, at right angles, which would give Phases vertically and Brooches horizontally. Thus the third dimension has been introduced into the essentially two-dimensional skyline plot.
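The two 'side views' of such a 3D matrix are simply its marginal sums, which can be illustrated with a small numerical sketch (illustrative only; the array sizes and counts are invented, not the Münsingen-Rain data):

```python
import numpy as np

# Hypothetical 3D incidence matrix: phases x graves x brooch types.
# counts[p, g, b] = number of brooches of type b in grave g assigned to phase p.
rng = np.random.default_rng(0)
counts = rng.integers(0, 3, size=(4, 10, 6))

# 'Side view' showing Phases vertically and Graves horizontally:
phases_by_graves = counts.sum(axis=2)      # collapse the brooch dimension
# The view at right angles, Phases vertically and Brooches horizontally:
phases_by_brooches = counts.sum(axis=1)    # collapse the grave dimension

print(phases_by_graves.shape)    # (4, 10)
print(phases_by_brooches.shape)  # (4, 6)
```

Each 2D view is the conventional material for a skyline plot or dendrogram; the 3D array is what relates the two views at right angles.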
THE INCORPORATION OF CLUSTER ANALYSIS INTO MULTIDIMENSIONAL MATRIX ANALYSIS
Figure 10.9: 3D Skyline Plot showing Phases and Graves. The block at the head of the conventional Skyline Plot is the side view of a 3D matrix Phases versus Graves versus Brooches.
10.6.9 3D Dendrogram showing Phases related to Graves

Figure 10.10 repeats the procedure, this time for the dendrogram. The conventional dendrogram for the clustered graves has at its head ONE 'side view' of a 3D matrix (Phases versus Graves versus Brooches), in this case showing Phases vertically and Graves horizontally. Another dendrogram exists at the side, at right angles, which would give Phases vertically and Brooches horizontally. Thus the third dimension has been introduced into the essentially two-dimensional dendrogram.

10.7 The Triad (Three Dimensions)

It is appropriate to discuss the number of dimensions in a theoretical manner. Figure 10.11 shows a triad of three dimensions. The 3 dimensions A, B and C may be combined in three different pairs, [AB], [AC] and [BC]. In our case we may allocate Phases to A, Graves to B and Brooches to C. Thus we could have a dendrogram for Phases v Graves, another for Phases v Brooches, and a third for Graves v Brooches in our application. The super-dimension appears at the head in each case, and the sub-dimension forms the dendrogram or skyline plot. The diagram shows a unique allocation of a sub-entity to a super-entity, but in principle sub-entities could occur across several super-entities, and be indicated by some form of bar (as in a battleship plot). A bar could also itself have more than one dimension, the 'battleship' sections then becoming more like globular Christmas-tree ornaments in shape. Such a system could show parallel time-lines for different cultures, related to the areas of the world inhabited/controlled by the respective cultures.

Figure 10.11: The Triad
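The enumeration of pairs among dimensions can be checked mechanically; a brief sketch (dimension labels as in the text):

```python
from itertools import combinations
from math import comb

dims = ['A', 'B', 'C']                 # the triad: Phases, Graves, Brooches
pairs = list(combinations(dims, 2))
print(pairs)                           # [('A', 'B'), ('A', 'C'), ('B', 'C')]

# For D dimensions the number of unique pairs is D(D-1)/2:
for d in range(2, 8):
    assert len(list(combinations(range(d), 2))) == d * (d - 1) // 2 == comb(d, 2)
```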
Figure 10.10: 3D Dendrogram showing Phases and Graves. The block at the head of the conventional Dendrogram is the side view of a 3D matrix Phases versus Graves versus Brooches.

10.8 The Quad (4 Dimensions)

Figure 10.12 extends this procedure to 4 dimensions, and shows the quad.

• Thus 1 Quad has 4 Triads, each of which has 3 Pairs.
• In general, for D dimensions there will be D sub-matrices of (D-1) dimensions, each of which has (D-1) sub-sub-matrices of (D-2) dimensions, and so on.
• For 1 Quad there are 4 unique Triads, each of which has 3 Pairs, and there are 6 unique Pairs.
Figure 10.12: The Quad. The four Triads comprise the Pairs (AB)(AC)(BC), (AB)(AD)(BD), (AC)(AD)(CD) and (BC)(BD)(CD); the six unique Pairs are (AB), (AC), (AD), (BC), (BD) and (CD).

10.9 Larger numbers of dimensions

The procedure may be extended to ever larger numbers of dimensions. Figure 10.13 shows the pentad.

• The Pentad has 5 Quads, each of which has 4 Triads, each of which has 3 Pairs. There is a total of 10 possible unique Pairs.
• The Hexad has 6 Pentads, each of which has 5 Quads, each of which has 4 Triads, each of which has 3 Pairs. There is a total of 15 possible unique Pairs.
• The Heptad has 7 Hexads, each of which has 6 Pentads, each of which has 5 Quads, each of which has 4 Triads, each of which has 3 Pairs. There is a total of 21 possible unique Pairs.
• The number of unique Pairs is in general D(D-1)/2 for D dimensions, which is the sum of an arithmetic progression. This is shown in the half-matrix less diagonal of Figure 10.13, the number of Pairs being the sum of the newly introduced Pairs of dimensions plus all the existing Pairs on the rows above.

Figure 10.13: The Pentad, and the half-matrix less diagonal which shows all Pairs for dimensions between 2 and 7

10.10 Conclusion

It has been proposed that archaeologists should consider the true dimensionality of their data, and explore ways of portraying results in more than two dimensions. It is apparent that as many as seven dimensions are possible in archaeological data, and five dimensions are commonplace in a study.

• A method of looking at up to 7 archaeological dimensions has been proposed, with the incorporation of cluster analysis.
• New diagrams have been proposed for the portrayal of the multidimensional data, in particular a multidimensional form of the dendrogram and skyline plot types of diagram.

Bibliography

ALDENDERFER, M. AND R. BLASHFIELD 1984. Cluster analysis, Quantitative Applications in the Social Sciences Series, Sage Publications Inc., Beverly Hills.
BONSALL, J. C. AND C. LEACH 1974. 'Multidimensional scaling analysis of British microlithic assemblages'. In Laflin, S. (ed.), Computer applications in archaeology 1974, Univ. of Birmingham, 16.
FARRIS, J. S. 1969. 'On the cophenetic correlation coefficient', Systematic Zool. 18, 279-285.
HARTIGAN, J. A. 1967. 'Representation of similarity matrices by trees', J. Amer. Statist. Ass. 62, 1140-1158.
HODSON, F. R. 1968. The La Tène cemetery at Münsingen-Rain, Acta Bernensia V, Stämpfli, Bern.
HODSON, F. R. 1969. 'Searching for structure within multivariate archaeological data', World Archaeology 1, 90-105.
HODSON, F. R. 1970. 'Cluster analysis and archaeology: some new developments and applications', World Archaeology 1 (3), 299-320.
HODSON, F. R. 1971. 'Numerical typology and prehistoric archaeology'. In Hodson, F. R., D. G. Kendall and P. Tautu (eds), Mathematics in the archaeological and historical sciences, Edinburgh University Press, Edinburgh, 30-45.
HODSON, F. R., P. H. A. SNEATH AND J. E. DORAN 1966. 'Some experiments in the numerical analysis of archaeological data', Biometrika 53, 311-324.
JARDINE, N. 1969. 'A logical basis for biological classification', Systematic Zool. 18, 37-52.
JARDINE, C. J., N. JARDINE AND R. SIBSON 1967. 'The structure and construction of taxonomic hierarchies', Math. Biosc. 1, 173-179.
MAYR, E., E. G. LINSLEY AND R. L. USINGER 1953. Methods and principles of systematic zoology, McGraw-Hill, New York.
MORRISON, D. F. 1967. Multivariate statistical methods, McGraw-Hill, New York.
SHENNAN, S. J. AND J. D. WILCOCK 1975. 'Shape and style variation in Central German Bell Beakers: a computer-assisted study', Science and Archaeology 15, 17-31.
SNEATH, P. H. A. AND R. R. SOKAL 1973. Numerical taxonomy, W.H. Freeman and Company, San Francisco, 59.
SOKAL, R. R. AND P. H. A. SNEATH 1963. Numerical taxonomy, W.H. Freeman and Company, San Francisco and London.
WARD, J. H. JR 1963. 'Hierarchical grouping to optimize an objective function', J. Amer. Statist. Ass. 58, 236-244.
WILCOCK, J. D. 1975. 'Presentation of computer classification results: a comparison of graphical methods', Science and Archaeology 15, 32-37.
WILCOCK, J. D. 1993. 'Analysis of multidimensional matrices for archaeological data'. In Wilcock, J. D. and Lockyear, K. (eds), Computer applications and quantitative methods in archaeology 1993, BAR International Series S, British Archaeological Reports, Oxford.
WIRTH, M., G. F. ESTABROOK AND D. J. ROGERS 1966. 'A graph theory model for systematic biology, with an example for the Oncidiinae (Orchidaceae)', Systematic Zool. 15, 59-69.
11 Graphical presentation of results from principal components analysis

M. J. Baxter and C. C. Beardah
Department of Mathematics, Statistics and O.R., The Nottingham Trent University, Clifton Campus, Nottingham, NG11 8NS, UK
11.1 Introduction
The main purpose of this paper is to discuss approaches to the informative display of output from a principal component analysis (PCA), in the context of common archaeological applications. Displays of the kind we have in mind are made possible, and more widely available, by the development of powerful and accessible software packages. We concentrate on use of the recently released Version 4 of the MATLAB package; other possibilities are noted in a later section.

No novelty is claimed for the statistical and graphical approaches used here, though application of them to PCA is uncommon in some cases, and we know of no applications to archaeological data of some of the methods. What we describe is 'work in progress', the aims of which are (a) to explore the capabilities of the MATLAB package, and (b) to develop what are hopefully useful, but uncommon, approaches to the display of PCA output that can be applied routinely.

There is a substantive problem that initially motivated this work; it is outlined briefly in the next section and used for illustrative application in the text. The reasons for using MATLAB as the analytical package are discussed in Section 11.3, followed by a variety of applications in Section 11.4. Possible alternatives to MATLAB are noted in Section 11.5, and Section 11.6 concludes the paper.
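The authors work in MATLAB; purely as a hedged illustration of the computation underlying such displays (not their code), the component scores plotted in a PCA display can be obtained from a singular value decomposition of the centred data matrix:

```python
import numpy as np

def pca_scores(X, ncomp=2):
    """Return component scores and the proportion of variance explained.

    X is an (n specimens) x (p variables) data matrix; columns are centred
    before the decomposition, as in a covariance-matrix PCA.
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :ncomp] * s[:ncomp]        # coordinates for a 2D display
    explained = (s**2) / (s**2).sum()        # variance share per component
    return scores, explained[:ncomp]

# Toy data standing in for a compositional table such as the glass analyses:
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 10))
scores, expl = pca_scores(X)
print(scores.shape)  # (30, 2)
```

Plotting the two columns of `scores` against each other gives the basic component plot that the enhanced displays discussed in the paper build upon.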
11.2 The Substantive Problem
Barrera and Velde (1989) (referred to as BV in the paper) have published chemical analyses of 486 specimens of French Medieval glass. The percentage presence of ten oxides (based on the elements Ca, Na, K, Mg, P, Si, Al, Fe, Mn, Cl) was measured. Additional information was given on the period, site of origin, type and colour of the glass. On the basis of the level of Na the specimens divide clearly into two groups. The larger of these, with 361 specimens, is termed 'calco-potassic' glass, with the percentage presence of CaO + K2O being in excess of 22%. On the basis of consideration of the levels of Na2O, CaO, K2O and MgO, BV sub-divide the calco-potassic group into three sub-groups. The largest of these groups (Type A in BV) corresponds to compositions that are typical of the Argonnes area in north-eastern France from period II (BV, 95), having relatively low values of Na2O
Figure 19.2: Correspondence analysis including historical information
19.3 A Method for Multi-Criteria Analysis of Incomplete Data
19.3.1 Correspondence Analysis for Historical Data - Problem Description

We want to analyse a sample of n individuals (monastic settlements in our study, with n = 65). For each individual i, we can make a sequence of observations o1, o2, ... om, each of which corresponds to a given question. For instance, the observation o11 in our study corresponds to the question 'Is the monastic settlement in a cave?'. We define a value oj(i) which is the probability that the answer to the question is yes, to the best of our knowledge. This value is a real number ranging from 0 (we are sure that the answer is no) to 1 (we are sure that the answer is yes), where oj(i) = 0.5 indicates that we know nothing about the question oj for the individual i. The data that we want to analyse can, therefore, be seen as an n x m matrix of real numbers. Observations are not independent, since many of them are exclusive. We define criteria C1, ... Ck as sets of observations that focus on the same aspects and are, therefore, clearly dependent. For instance, we define the criterion C1 = {o1, o2, ... o5} to be the geographical distribution of settlements. In this case, o1, o2, ... o5 are mutually exclusive, but this need not be the case. The structuring of observations into criteria is important because multi-criteria analysis is more complex and biases can be introduced if we are not careful, as we shall see in Section 19.3.2.
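This data model can be sketched directly as a matrix of probabilities (a minimal sketch; the sizes other than n = 65 and the column index are hypothetical):

```python
import numpy as np

n, m = 65, 12                  # individuals (settlements) x observations
data = np.full((n, m), 0.5)    # 0.5 everywhere: nothing is known yet

# Individual 0: we are sure the settlement is in a cave
# (observation o11, stored here at column index 10).
data[0, 10] = 1.0
# Individual 1: we are sure it is not.
data[1, 10] = 0.0

# Every entry must remain a probability in [0, 1]:
assert ((0.0 <= data) & (data <= 1.0)).all()
```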
19.3.1.1 Correspondence Analysis

Numerous statistical tools are available to analyse a data matrix similar to the one we have shown, which can be seen as a set of points in an m-dimensional space. The most common 'multi-dimensional' method is principal components analysis (PCA), which projects the set of points on the two (or more) most meaningful dimensions (to maximise the dispersion of the cloud of points, that is, to minimise the loss of information). PCA can be defined as a diagonalisation of the covariance matrix, followed by a projection. Correspondence analysis is a variation that consists of normalising the input matrix before applying the PCA (cf. Figure 19.3). CA is well suited to data categorisation, such as the result of an archaeological excavation. If the input matrix for each site contains the number of artefacts of each type, the normalisation step of CA allows us to compare meaningfully sites of different sizes (only the relative frequency of each artefact group counts).
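The normalise-then-project recipe just described can be sketched as follows (a simplified illustration of the description above, not a full correspondence analysis with chi-squared weighting):

```python
import numpy as np

def simple_ca_projection(counts, ncomp=2):
    """Normalise rows to relative frequencies, then project as in PCA."""
    counts = np.asarray(counts, dtype=float)
    profiles = counts / counts.sum(axis=1, keepdims=True)  # row profiles
    centred = profiles - profiles.mean(axis=0)
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    return U[:, :ncomp] * s[:ncomp]

# Two sites of very different size but identical artefact proportions
# project to the same point, because only relative frequencies count:
sites = [[10, 30, 60],
         [100, 300, 600],
         [50, 25, 25]]
coords = simple_ca_projection(sites)
```

The first two rows of `coords` coincide, which is exactly the size-independence property claimed for CA in the text.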
BEATRICE CASEAU AND YVES CASEAU
Similarly, the relative weight of information associated with each criterion is a moderately significant factor. With historical data, some criteria can be divided into many precise groups while others correspond only to a binary choice. If we do not normalise the amount of information that we assign to each criterion, we may arbitrarily favour criteria with many categories and disfavour criteria with few categories.
Figure 19.3: Correspondence analysis (the input matrix is reduced to relative frequencies, followed by diagonalisation and projection of the covariance matrix)
The result of the analysis is a two-dimensional projection of the original point set (the individuals) and the original vector base (the observations) such that closeness on this projection is a good indication of similarity (high covariance). This is the mathematical basis for the analysis that we made in the previous section. The question that we want to address here is how to apply CA to the analysis of our historical data, which is not simple data categorisation and, therefore, cannot be fed into the computer without caution. In the next section we will illustrate the biases and difficulties that occur with the kind of data that we obtained.
19.3.2 Multi-Criteria Analysis Problems

19.3.2.1 Multi-Criteria Analysis by Concatenation

Let us first consider the case of data obtained by concatenation of multiple categorisations. We suppose that for each individual we have obtained a vector (cf. Figure 19.4), which is the concatenation of the multiple categorisation vectors. For instance, we may want to analyse excavation data with different (non-comparable) kinds of output (such as bones, tools and plant fossils). For each type of data, we can perform a classification into many groups. For instance, we may have 4 types of bones, 10 types of tools and 4 types of plant fossils. We see each of the three categories as a different 'criterion' for the site excavation and we obtain our 'multi-criteria' site data by concatenating the categorisation count vectors.
If we run a correspondence analysis on this data without caution, the result may be disappointing. The structure that we have established is not taken into account by the analysis and multiple problems can occur. For instance, if we consider the two sites of Figure 19.5, they have exactly the same distribution patterns for the first two criteria. However, the normalisation step of the correspondence analysis will make these two sites very distinct. What the analysis will show is the correlation between the finding of bones, tools and plants. Although interesting, it is not what we want to accomplish with a multi-criteria analysis. Applying CA to this 'raw' data actually produces a classification on meta-information. It will incorporate the fact that we have more information about one criterion than about another for a given site. This can distort considerably the result of the analysis, as we shall see in the next section.
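The bias can be reproduced numerically: two hypothetical sites with identical criterion-1 material but different amounts of criterion-2 material acquire different overall row profiles after normalisation, so a naive CA would separate them (an illustrative sketch, not the authors' data):

```python
import numpy as np

# criterion 1 (two groups) followed by criterion 2 (two groups), concatenated:
site_a = np.array([10.0, 0.0, 5.0, 5.0])
site_b = np.array([10.0, 0.0, 50.0, 50.0])

profile_a = site_a / site_a.sum()
profile_b = site_b / site_b.sum()

# Criterion 1 considered on its own is identical for both sites...
assert np.allclose(site_a[:2] / site_a[:2].sum(), site_b[:2] / site_b[:2].sum())
# ...yet the concatenated profiles differ, so plain CA separates the sites:
assert not np.allclose(profile_a, profile_b)
```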
19.3.2.2 Impact of Incomplete Information

Historical evidence is often plagued with incomplete information. For some individuals and for some criteria, we either do not have any information or we have only approximations. Let us first consider what to do in the absence of information. A usual technique is to represent the incomplete information by 0s, as in Figure 19.6, where there is no information available about the second criterion. This technique is derived from single-criterion analysis, where it would be perfectly legitimate, since an entry filled with 0 would be ignored. It is not, however, a correct approach for multi-criteria analysis. Let us consider the example of Figure 19.7, where two sites present exactly the same categorisation for the first criterion, but where information about the second criterion is only available for the first site. Using 0s violates our previous condition about the independence of the criteria, and the normalisation process of correspondence analysis will make the two sites more distinct than they should be, since the similarity on the first criterion will be missed. Once again, we need to guard against producing a meta-information analysis, where sites would be classified according to the presence or absence of information.
Figure 19.4: Multi-criteria analysis
Figure 19.5: Two sites with similar structures
Figure 19.6: Site with no information about criterion #2
Figure 19.7: Sites with similar structure for criterion #1
A METHOD FOR THE ANALYSIS OF INCOMPLETE DATA
Figure 19.8: Sites representation with our proposed method
19.3.2.3 Uncertain Information

The last concern that we need to address is the uncertainty of the information, which we have explicitly represented in our model through probabilities. This is especially important with historical data such as textual evidence, which is by nature almost always incomplete or uncertain. We need a way to carry this uncertainty into the matrix representation that we give to the correspondence analysis software. Since we start with a vector of probabilities of answers for each criterion, and since we want to apply correspondence analysis to multi-criteria categorisation data, it is natural to produce a categorisation integer vector that approximates the probability distribution. Let us consider the example of the geographical criterion C1 = {o1, o2, ... o5}. If we know that the settlement was located in Rome (observation o3), we have the probability vector (0,0,1,0,0) that we translate into [0,0,10,0,0]. If we believe that it is in Rome, but there is also a possibility that it was actually in Umbria, we use [0,2,8,0,0]. The idea is to approximate a probability distribution with an integer sequence of constant sum. This allows uncertainty to be taken into account with a great deal of flexibility, ranging from exact knowledge to a total lack of information. In the previous example, the vector [2,2,2,2,2] is the precise representation of the absence of geographical data.
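Turning a probability vector into an integer vector of constant sum, as in the [0,2,8,0,0] example, can be done with largest-remainder rounding; a sketch (the rounding rule is our assumption, since the text does not specify one):

```python
def tokens_from_probabilities(probs, total=10):
    """Approximate a probability distribution by non-negative integers
    summing to `total`, using largest-remainder rounding."""
    raw = [p * total for p in probs]
    floors = [int(r) for r in raw]
    short = total - sum(floors)
    # hand the leftover tokens to the largest fractional remainders
    order = sorted(range(len(raw)), key=lambda i: raw[i] - floors[i], reverse=True)
    for i in order[:short]:
        floors[i] += 1
    return floors

print(tokens_from_probabilities([0, 0, 1, 0, 0]))            # [0, 0, 10, 0, 0]
print(tokens_from_probabilities([0.2, 0.2, 0.2, 0.2, 0.2]))  # [2, 2, 2, 2, 2]
```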
19.3.3 A Method Based on Data Pre-Processing

We can now easily describe the method that we have built, since it is a consequence of all the previous observations. The first step is to identify the criteria that we want to analyse and to define each of them as a set of observations. We then collect the values of each observation for each individual in our sample and we record the degree of confidence in the answer. For each criterion, we allocate a number of tokens, which is the weight that we want to attribute to the criterion in the analysis. A good practice is to start with an even distribution (the same for each criterion) and then modify it gradually to emphasise certain aspects. As we shall see later, this has only a limited influence on the output produced by correspondence analysis.
We then distribute the tokens for each individual (i.e. settlement) and each criterion, according to the observations and their degrees of confidence. When we know the exact answer, we put all the tokens in the corresponding observation (for instance, the first criterion of the individual s001 in Figure 19.8). Conversely, we distribute them evenly when no information is available. This allows us to break from the constraints imposed by a strict set of observations. For instance, a precarious house may be seen as somewhere between a house (observation o15) and a precarious habitat (observation o0), which we will represent by putting half the tokens in each of these two observations. This method enforces all the constraints that we have established previously (independence of the criteria's weights and fair representation of incomplete information). The result, as we will show in the next section, is that the analysis is free of many of the biases observed with simpler approaches. An interesting corollary of this method is that answers to yes/no questions should not be represented by one observation (1/0) but rather by two: [2,0] will represent yes, [0,2] will represent no, and [1,1] will represent unknown.
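The yes/no corollary can be sketched directly (a token total of 2, as in the text; the function name is ours):

```python
def encode_yes_no(prob_yes, tokens=2):
    """Encode a yes/no observation as two token counts.

    prob_yes: probability that the answer is yes; None means unknown,
    which is treated as 0.5 and so splits the tokens evenly.
    """
    if prob_yes is None:
        prob_yes = 0.5
    yes = round(prob_yes * tokens)
    return [yes, tokens - yes]

assert encode_yes_no(1.0) == [2, 0]   # yes
assert encode_yes_no(0.0) == [0, 2]   # no
assert encode_yes_no(None) == [1, 1]  # unknown
```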
19.3.4 Validation of the Method on Monastic Settlements Data

19.3.4.1 Comparison with other approaches

We compared three different approaches on the same set of data. The first method is the one that we present in this paper. The second method is a hybrid method, where we pick only one answer (observation) for each criterion. That is to say, we choose the most likely answer for each criterion and we represent it with a single 1 entry. The result is a 0-1 matrix, where the independence between criteria is preserved, but where some of the decisions are made arbitrarily (when no information is available). The last approach is the simplest one, where the input matrix is obtained by using a 1 when the answer for the observation is yes and 0 otherwise. We have found this method to be commonly used, even in cases where the biases presented in the previous sections are obviously present.

To evaluate the quality of the analysis, we focused on two indicators. The first indicator is a quantitative coverage number, which is the sum of the first two eigenvalues produced during the diagonalisation of the covariance matrix. This tells us 'how much' information is represented on the two-dimensional projection of the point cloud. A higher number indicates a better quality analysis. The second indicator is qualitative, and concerns the existence of structure in the way the observations are represented in the output of the analysis. When the different exclusive observations of the same criterion are aligned, it gives a basis for interpreting the two-dimensional representation. A good example is the population criterion, where the various observations (1, 2-3, 4-10, 10+ members for the monastic settlement) naturally lead to a 'numerous vs. individual' axis.

In the previous analysis (Section 19.2), we identified three axes, corresponding to the population criterion, the cenobitism/eremitism criterion and the type of site. The remarkable alignment that we have observed makes the interpretation of the result easier and more convincing. Figure 19.9 represents the structure obtained in our analysis, together with the coverage indicator. Figure 19.10 shows the output for the second, hybrid method. We can easily see the degradation in both the quantitative and the qualitative indicators. The results of the simplest method are shown in Figure 19.11. These results are poor, since there is no visible structure (thus interpreting them is difficult) and the coverage indicator is very low, which gives little significance to the findings that we can extract from this two-dimensional representation.

Figure 19.9: Analysis output with our proposed method

19.3.4.2 Stability

We have tried different samples to evaluate the stability of the analysis and its sensitivity to the size of the input. We used samples of sizes ranging from 15 to 77. Our experience suggests that one needs at least 40 individuals in the sample to get stable results. Samples with fewer individuals have strong biases and are not reliable. On the other hand, the results obtained with samples of sizes 50, 65 and 77 were strikingly similar. Our experimentation with these statistical methods is far from complete. More work is needed to measure stability more accurately, using larger samples. We also plan to use more subtle strategies to explore the use of CA as a tool to evaluate causal dependencies, especially between the environment and the type of monasticism.

19.3.4.3 Impact of Criteria's Weights

The distribution of the criteria's global weights is, as we have said, a global parameter of the analysis. However, it is important to note that it has little effect on the quality of the analysis, as measured previously. In Figure 19.12, we show the results obtained with an even distribution of the criteria's weights.
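The quantitative coverage indicator can be sketched as follows (an illustration; we express the sum of the first two eigenvalues as a proportion of the eigenvalue total, consistent with reporting coverage as a percentage):

```python
import numpy as np

def coverage(matrix, ncomp=2):
    """Share of total variance carried by the first `ncomp` eigenvalues
    of the covariance matrix of the pre-processed data."""
    cov = np.cov(np.asarray(matrix, dtype=float), rowvar=False)
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending eigenvalues
    return eig[:ncomp].sum() / eig.sum()

# Toy stand-in for a pre-processed token matrix (40 individuals, 6 columns):
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 6))
c = coverage(X)
assert 0.0 < c <= 1.0
```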
19.4 Conclusion
We have presented a method for using correspondence analysis with multi-criteria, incomplete and uncertain data. This method consists of pre-processing the data so as to eliminate the possible biases caused by the uncertainty and the relationships between criteria. Because of this pre-processing, we have obtained results that are stable and meaningful from a statistical point of view.
Figure 19.10: Analysis output with the second method (coverage: 29%)

Figure 19.11: Analysis output with the simpler method
Figure 19.12: Results with even distribution of weights for criteria (axis labels include 'numerous', 'rich', 'poor', 'far from the world' and 'close to the world')
At the same time, and this is not a coincidence, the results are also very interesting from a historical perspective. From such graphics, we can draw a few conclusions concerning some characteristics of ancient monasticism in Italy. For example, more women than men were living in communities of an urban type, which is not totally surprising. If we follow the chronology (G1 to G3), we can also note that the tendency is to create monastic centres outside cities, possibly in areas where it was easier to feed a growing community. None of these remarks is revolutionary. A careful analysis of the texts could provide similar ideas. The purpose of such a graphic, however, is to make such ideas visible and to confirm them. It should work as an explicit representation of the monastic settlements and therefore correspond to the results of a careful historical analysis. It can also help confirm hypotheses when the number of sites, or of objects studied, is very large. In our study of monasticism, it is particularly useful to identify tendencies and to compare well-known sites to the less successful settlements that disappeared either during this period or later in the Middle Ages. This type of method provides an opportunity to do so. We believe that this method could be applied to a large number of problems in history and archaeology that deal with multi-criteria data and uncertain information.
Djindjian in the seminar 'Informatique et Mathématiques appliquées en archéologie'. We wish to thank him for introducing us to that field of research. We also wish to thank the CIT at Princeton University for their help with SAS.
References
BENZECRI, J. P. 1973. L'analyse des données. t.2: L'analyse des correspondances, Paris.
GREENACRE, M. J. 1984. Theory and Applications of Correspondence Analysis, Academic Press, London.
SAS INSTITUTE INC. 1989. SAS/STAT User's Guide, Version 6, Fourth Edition, vol. 1, Cary, NC: SAS Institute Inc., 615-675.
Figure 23.10: Types of window with information on graves. Interactive communication between the user (researcher or museum visitor) and the database.
F. QUESADA, J. BAENA AND C. BLASCO
In any case, the results obtained by the present method demonstrate features that deserve further study, such as the grouping of some 'clusters' which perhaps reflect blood ties in the first half of the 4th century BC (the graves around graves 335, 191, etc., Figure 23.5). It also seems significant that a high proportion of 'female' graves are covered with large tumuli or square platforms (graves 133, 202, etc.). Another approach is to analyse the association between graves with tumular stone coverings and pit graves not covered with stones. An initial possibility would be to consider that the pit graves may have been 'satellites' of the tumuli (infant graves, poorer graves), but there would be numerous exceptions to such a supposition (e.g. grave 293, relatively rich, which is probably a double burial, or graves 350 and 353, Figure 23.6). Another alternative is that many pit graves, which occupy the spaces between the tumuli as best they can, were added considerably later near the tumuli for family reasons. Thirdly, it could be suggested that pits were simply excavated between stone-covered graves so that they could be in the nucleus of the cemetery, dispensing with the stone covering if necessary. Only a case-by-case study can ultimately resolve this question, since there are features (Figure 23.2) that point in opposite directions. Another line of research is related to the hierarchical structure of this society. This can be undertaken from the study of the grave goods, and of the area and even the volume of the tumuli; values for all of these are stored in the database and can be correlated with each other, or with other features (e.g. sex or chronology), all in terms of spatial distribution. By way of example, Figures 23.6-23.9 show the classification of wealth between the 4th and 3rd centuries BC, using as defining value limits those considered significant in a previous study (Quesada 1994), although any others could be chosen.
It can be seen that the 'units of wealth' and 'number of objects' criteria produce fairly similar overall results (Figures 23.6 and 23.7). A reduction both in the number of grave goods and in the size of the tumuli between the first half of the 4th century (Figure 23.6) and the 3rd century (Figure 23.8), which continues into the 2nd century BC, can also be seen.
Together with these potential areas of research, which can be extended in almost any direction, the ARCINFO database also permits the consultation of graphic fields (Figure 23.10), which are useful for the researcher when comparing hypotheses suggested by the various distribution plans, but more particularly in other contexts. These dialogue windows are primarily designed for the museum use of the application, so that visitors can have interactive access to information about the cemetery, ask simple questions and obtain information, not in the form of numeric data or the very schematic spatial delineations shown here, but in the form of graphic fields showing the elements of the grave goods in each grave, plans or brief textual descriptions.

References
BLASCO, C. & BAENA, J. (forthcoming). 'The application of Geographical Information Systems (GIS) in the Spatial Study of Bell Beaker sites in the Region of Madrid', Ravello.
BLASCO, C., BAENA, J. & ESPIAGO, J. (forthcoming). 'The role of GIS in the Management of Archaeological Data: An example of the Application to the Spanish Administration', Santa Barbara.
BLASCO, C., BAENA, J. & RECUERO, V. (in press). 'Aplicación de los SIG para la determinación de los BIC', Bilbao.
CUADRADO, E. 1950. 'Excavaciones en el Santuario ibérico de El Cigarralejo (Mula, Murcia)', Informes y Memorias 21, Madrid.
CUADRADO, E. 1963. 'Cerámica ática de Barniz Negro de la necrópolis de El Cigarralejo (Mula, Murcia)', Archivo de Prehistoria Levantina 10, 97-164.
CUADRADO, E. 1968. 'Tumbas principescas de El Cigarralejo', Madrider Mitteilungen 9, 148-186.
CUADRADO, E. 1985-86a. 'Excavaciones arqueológicas en la necrópolis de El Cigarralejo. Campaña de 1985', Memorias de Arqueología. Excavaciones y prospecciones en la región de Murcia 2, 191-197.
CUADRADO, E. 1985-86b. 'Excavaciones arqueológicas en El Cigarralejo. Campaña de 1985', Memorias de Arqueología. Excavaciones y prospecciones en la región de Murcia 2, 200-202.
CUADRADO, E. 1987. 'La necrópolis ibérica de El Cigarralejo (Mula, Murcia)', Bibliotheca Praehistorica Hispana XXIII, Madrid.
CUADRADO, E. & QUESADA, F. 1989. 'La cerámica ibérica fina de "El Cigarralejo" (Murcia). Estudio de cronología', Verdolay 1, 49-115.
QUESADA, F. 1989. Armamento, guerra y sociedad en la necrópolis ibérica del Cabecico del Tesoro (Murcia, España), BAR International Series 502, Oxford.
QUESADA, F. 1994. 'Riqueza y jerarquización social en las necrópolis ibéricas: los ajuares', Homenaje al Prof. J. M. Blázquez II, Madrid.
SANTONJA, M. 1989. 'Revisión de las técnicas en Osteología, a la luz de su estudio en la necrópolis de El Cigarralejo (Mula, Murcia)', Boletín de la Asociación Española de amigos de la Arqueología 27, 51-60.
24 A GIS approach to the study of non-systematically collected data: a case study from the Mediterranean
Federica Massagrande
Institute of Archaeology, University College of London, 31-34 Gordon Square, London, UK
24.1 Introduction
This paper presents a project aimed at studying the possibilities offered by GIS techniques in the investigation of the relationship between urban centres and country sites during the Roman period in the Mediterranean region. When work was started on this project, the first aim was to obtain as much data as possible about settlement in different areas of the Mediterranean. It soon became evident that a large amount of the type of data required existed in various archaeological units scattered literally all over the Mediterranean region. This information had been generally collected during non-systematic field surveys and had been stored in the form of card collections, one site per card, occasionally supplemented by maps on which the location of each site had been plotted. A few sites were also reported by local people and were sometimes included in the catalogues even without the local archaeologists actually checking for their existence.
24.2 Yes, but what does 'site' mean?
The basic archaeological unit used in the catalogues is the 'site'. What a site actually is has long been debated in archaeology, and occasionally even people from other disciplines have suggested ways to improve the definition of the archaeological 'site' (see, for example, Wagstaff 1987 for a geographical view of the archaeological site). The variety of opinions on the definition of the nature of the archaeological 'site' was summarised very nicely by Schofield: 'The term most widely used in describing surface distributions, therefore, appears to mean something different to the majority of people responsible for their interpretation.' (Schofield 1991, 4)
Since it is evident that no unique definition of the term 'site' exists in archaeology, it was decided to use this term to indicate any scatter of archaeological material that was entered as a single entry in a card catalogue in an archaeological unit, or in a published source.

24.3 The study areas

Four criteria were used to choose a selection of study areas from the Mediterranean region. The four criteria are:

• that the non-systematic survey had been carried out for enough seasons to cover a large enough region;
• that the data recorded in the surveys was available to the public either as published material or from the archives of local archaeological units;
• that enough information was recorded about each site (i.e. site contents, not just site location);
• that the survey areas, taken as a whole, should offer a good sample of the different geographical, geological and environmental conditions occurring around the Mediterranean basin.

Several areas which responded to these criteria were identified. Of these, four were chosen as sample study areas:

1. Veii: the area around the Etruscan and then Roman town of Veii, north of Rome (Italy).
2. Maresme: the region of the Maresme, north-east of Barcelona (Spain).
3. Tarragona: the area around modern Tarragona, to the south-west of Barcelona (Spain).
4. Seville: the region of Seville in the Guadalquivir valley (Spain).

24.3.1 Veii

The source of the information on Veii is the report of the survey published in 1968 in the Papers of the British School at Rome (Kahane et al. 1968). As this information is currently being revised and improved, it was judged better to use the data with some caution, and therefore to use the area for testing rather than for drawing conclusions. The background data was digitised from maps of the region. The region of Southern Etruria, where Veii is situated, is a volcanic area, characterised by round lakes and fertile soil.

The digitised information about the Ager Veientanus covers an area on the ground of 11km (east) by 18km (north). The co-ordinates of the south west corner of this area are 33TIG820510, and those of the north east corner are 33TIG930690 (UTM).

24.3.2 Maresme

The information concerning the Maresme region was stored in the archives of the Archaeological Service of Catalonia (Servei de Arqueologia, Generalitat de Catalunya) in the form of a catalogue of cards, each containing the information about one of the sites. The sites were grouped together according to which urban
FEDERICA MASSAGRANDE
centre was nearest. The background data was obtained from Spanish army maps of the region. The sites of the Maresme are along the Mediterranean coast of Spain. The area is not very high above sea level. The area on the ground covered by the digitised information is 20km (east) by 15km (north). The coordinates of the south west corner of this area are 31TDF370900 , and those of the north east corner are 31TDG570050 (UTM).
24.3.3 Tarragona

The site information for the Tarragona area was kept at the Archaeological Service of Catalonia in Barcelona and stored in the same way as described for the data about Maresme. The background data was digitised from maps. Tarragona is located, like the Maresme, along the Mediterranean coast of Spain. The elevation is slightly greater and the coast line more ragged than in the Maresme, but the valley of the river Francoli divides the region into two parts, which differ in geology and land form. The size of the area which was digitised for computer analysis is 34km (east) by 23km (north); the co-ordinates of the south west corner of this area are 31TCF390460, and those of the north east corner are 31TCF730690 (UTM).
24.3.4 Seville

The site data for the province of Seville was collected during surveys carried out in the Guadalquivir Valley, in south-west Spain, until 1986. The more recent data collected in the 1989 survey was not yet available. For the Seville area, maps of the different soil types are also available (these maps were included in De La Rosa & Moreira 1987). Part of the site data was kept at the Dirección General de Bienes Culturales in Seville. The data was stored in a card catalogue and covered the whole of the province of Seville. Other site data for the Guadalquivir valley was obtained from the four books published by M. Ponsich (1974, 1979, 1986, 1992), containing the information he collected during a number of survey seasons in the area. More data was obtained from the systematic surveys carried out by Amores Carredano (1982), Escacena Carrasco and Padilla Monge (1992) and Ruiz Delgado (1985). The Guadalquivir valley is (nowadays) one of the most important agricultural areas of Spain, with very fertile soils.
This is the largest of the sample survey areas, covering a region of 143km (east) by 108km (north). The co-ordinates of the south west corner of this area are 29SQA545893, and those of the north east corner are 30SUG450880 (UTM). For the scope of this paper, only the data from the Guadalquivir Valley will be discussed, mainly for reasons of brevity and because, thanks to its size, it is the one that offers the most possibility of exploring the site settlement patterns.

24.4 What is the non-systematic data like?

The data stored in the card catalogues is purely qualitative, the only information available being whether certain types of materials were present at the site or not. Other information contained in the cards includes the site co-ordinates and a rough dating of the periods in which the site was actually in use. Before the database structure could be designed in the first place, it was necessary to explore the data and decide which of the elements in the assemblage could actually be used to create site typologies and chronologies. Given the non-quantitative nature of the data, the structure of the database was designed with a Boolean field for each of the diagnostic elements likely to occur at the site. This is probably a good example of the necessity to have a good knowledge of the form of the data before any sort of artificial structure is imposed upon it. Without a careful consideration of the importance and function (or assumed function) of all the elements of the database, it is impossible to be fully aware of all the implications of imposing a structure on the database itself. This is especially true when the structure is going to determine what can or cannot be used as a diagnostic element in the analysis of the data. Table 24.1 (see Appendix) shows a small sample of one of the database files for the area of Maresme (a similar structure was used for the database of sites found in the valley of the Guadalquivir). Each row represents a site. When one of the Boolean fields is set to TRUE, the diagnostic element was present at the site; when it is set to FALSE it was not. Some Boolean fields are also used to indicate whether there is definite evidence that the site was in use at any one particular time.

24.5 Technical data

The main GIS software used for the storage and manipulation of the background data (elevation, geology, hydrology etc.) is Idrisi 4.1. The site database is managed with dBase III+, and AutoCAD 12 for Windows was used to input the map data. The Statgraphics and MY-ARCH statistical programs are used to provide statistical capabilities more advanced than those offered by Idrisi alone. A number of custom programs were produced in Turbo Pascal, AutoLISP and the dBase programming language to supplement the capabilities of the commercial software available. These include:

• AxisConvert (Turbo Pascal). This program translates the co-ordinates of spatial data from one system of reference to another. It is necessary to be able to put data from different sources into the same system of reference, so that all the available information can be used in the same GIS.
• IdrVals (Turbo Pascal). This program automatically extracts the information from a background image
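AxisConvert itself was written in Turbo Pascal and its internals are not given in the paper; the following is a hypothetical Python sketch of the kind of planar translation such a utility performs, assuming a shared anchor point known in both reference systems and a uniform scale factor (function and parameter names are the author's illustration, not the original program's):

```python
def convert_axes(x, y, src_origin, dst_origin, scale=1.0):
    """Translate a point (x, y) from one planar reference system to another.

    src_origin and dst_origin are the (x, y) positions of a common anchor
    point expressed in each system; scale converts between map units.
    """
    return (dst_origin[0] + (x - src_origin[0]) * scale,
            dst_origin[1] + (y - src_origin[1]) * scale)

# Example: shift survey-local metres onto a UTM-based grid
# (the origin values here are invented for illustration)
utm = convert_axes(1200, 800, src_origin=(0, 0), dst_origin=(382000, 4590000))
# utm == (383200.0, 4590800.0)
```

A real implementation would also handle rotation and datum differences between the source maps, which this sketch deliberately omits.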
A GIS APPROACH TO THE STUDY OF NON-SYSTEMATICALLY COLLECTED DATA
Figure 24.3: The distribution of Terra Sigillata Chiara C.
Figure 24.4: The distribution of Terra Sigillata Chiara D
This fact, however, has no effect on the value of this type of pottery for the dating of the sites, as the imitation pottery must be contemporaneous with, or later than, the type it imitates. Interestingly, the fact that local imitation of Terra Sigillata Chiara D is so widespread in the area is a positive factor in terms of dating because, generally, sites are more likely to have local fine pottery than imported fine pottery. The same factor, however, can cause a site to be classified as a high status one, even though it might not have been, if the imitation of Terra Sigillata Chiara D was easily available to low status sites as well. The chronological limits are from the mid-III century AD to the VI century AD. The distribution of the sites classified as belonging to period 3 is shown in Figure 24.7. A number of sites contained a type of pottery which the surveyors just classified as Terra Sigillata Chiara without specifying the sub-type. These sites were included in both the period 2 and period 3 groups, but were flagged to
Figure 24.5: The distribution of sites dating from the Republic (period 1).
signify that the dating is not accurate. A large number of sites (1007) were classed as dating to the Roman period by the surveyors, but no diagnostic pottery was recovered from them. These sites were again flagged to distinguish them from those which have been classed into any of the three chronological groups. Obviously, the same site may have been in use throughout the Roman period and will therefore appear in all the maps.
Even excluding the sites which only contained unidentified Terra Sigillata Chiara, the number of sites increases dramatically during the Early Empire (period 2) and then drops again, though not so sharply, in the Late Empire (period 3). The number of sites which were certainly in use during the Republic is 42, while the number increases to 618 in the Early Empire (721 with the sites where generic Terra Sigillata was recorded) and then decreases to 283 in the Late Empire (467 including those sites with generic Terra Sigillata).

24.8 Status classification

The classification of the sites was standardised according to the characteristics of the data stored in the database. As the most relevant distinction among the site types is between country sites with low status (farms) and those with a high status (villae), it was decided that a combination of certain specific elements had to be present at one location before that site could be labelled a villa. Where these diagnostic elements are missing, the site is classified as a farm. Farms are also differentiated from 'generic' sites which the surveyors did not classify as 'rural', to indicate that 'generic' sites are characterised by a higher degree of uncertainty.

For the valley of the Guadalquivir the standard classification was built up by comparing the interpretation of the function of a site done by the people who carried out the systematic surveys in the region. A site was classified as a villa if it contained:

• a floor (mosaic or opus caementicium), or
• standing structures or evidence of their presence in the past, or
• marble elements (statues, architectural parts), or
• a kiln, or
• dolia, or
• a mill or quernstones,

and

• Black Glaze Ceramic, or
• Terra Sigillata Aretina, or
• Terra Sigillata Hispanica, or
• Terra Sigillata Sud-Gallica, or
• Terra Sigillata Chiara (any subtype), or
• Thin-walled ware.
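The two-part rule (at least one structural indicator and at least one fine ware) can be sketched as a simple set test; the element identifiers below are the author's shorthand, not field names from the actual database:

```python
# Structural indicators and fine wares from the villa rule
# (identifiers are illustrative shorthand, not the database's field names)
STRUCTURAL = {"floor", "standing_structures", "marble", "kiln",
              "dolia", "millstones"}
FINE_WARES = {"black_glaze", "ts_aretina", "ts_hispanica",
              "ts_sud_gallica", "ts_chiara", "thin_walled"}

def classify(elements):
    """Label a site 'villa' only if at least one structural indicator
    AND at least one fine ware are both present; otherwise 'farm'."""
    has_structure = bool(STRUCTURAL & elements)
    has_fine_ware = bool(FINE_WARES & elements)
    return "villa" if (has_structure and has_fine_ware) else "farm"

assert classify({"kiln", "ts_hispanica"}) == "villa"
assert classify({"kiln"}) == "farm"   # a kiln with only local pottery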
On the other hand, if a kiln and only local pottery were present, but no other elements such as imported pottery or floors, that was not considered enough to classify the site as a villa. Also, if a site contained early pottery (i.e. Black Glaze or Terra Sigillata Aretina) and an opus signinum floor, the site was classified as a villa, because opus signinum is an indicator of status in the early period, though not in later periods. Any site which had been classified as a villa by the surveyors (systematic and non-systematic) was classified as a farm if it did not meet the specifications outlined above. As a result, according to the standardised classification, 436 sites were classified as villae. Of these, 247 had been originally classified as
Figure 24.6: The distribution of sites dating from the Early Empire (period 2).
Figure 24.7: The distribution of sites dating from the Late Empire (period 3).
villae by the surveyors, while 189 were otherwise classified. The total number of sites which had been classified as villae by the surveyors was 561, which reflects the tendency for this methodology to stay on the 'low status' side when classifying country sites.
24.9 Looking at the data
In the valley of the Guadalquivir, major concentrations of sites are present around the two towns of Lebrija and El Coronil, while the area to the north of Seville presents a more uniform pattern of site distribution (see Figure 24.8). This is due to the lack of data for the area south of Seville, where surveys have not yet been carried out, rather than reflecting a real archaeological pattern. The position of Roman sites in the area appears to be influenced by the position of modern features such as modern towns and roads. A good example of this is the pattern around the modern town of Carmona, where the sites follow the main roads, especially the C432 motorway. Ancient sites also seem to be related to major rivers. Cost distance buffers
Figure 24.8: The distribution of Roman sites and the position of modern roads and towns.
starting from modern towns and roads were created for the valley of the Guadalquivir. The frequency of sites was then plotted against the distance from the modern features (Figure 24.9): the largest concentration of archaeological sites lies within 3 km of any major feature, with the curve dropping sharply afterwards as the distance increases (the sites at 0 distance account for sites found in towns). However, if we look at the area around the town of Carmona in detail (Figure 24.10), we notice that both the main motorway and the sites follow the landform. Carmona is on the top of a hilly area, called Los Alcores, which has (and had in Roman times) many streams running down its south side. The modern road follows the ridge at the top of the hill and therefore has the same direction as the line of the Roman sites. The Roman sites seem to be more abundant on the south side rather than the north side of the Alcores, which might be due to the presence of the streams and also the protection offered from the northern winds in winter. When the site distribution is plotted against some of the background variables, it appears that some of these are related to the position of archaeological sites. To the north of the study area, the archaeological sites are found on riverine geology, and they follow the geology so well that it is unlikely that this pattern is only influenced by the distance from the river. The sites are present up to the limit of one geology type and not in the neighbouring one. So far, not enough work has been done to assess whether this is a real settlement pattern or whether it reflects a bias in recovery or in the survival of the sites. The next stage of the research will be to create a site suitability map of the Valley of the Guadalquivir. The concept of land suitability has been used in soil
assessment and ecology to determine the best use of land (e.g. De La Rosa & Moreira 1987, 85-130). This is based on elements such as the intrinsic characteristics of the soil, the degree of slope, the climate and the drainage. A similar approach can be used in archaeological research. The natural variables will be classified into a series of classes (from worst to best for site location) and then the values of all the background variables in any one cell location in the GIS maps will be added up, just as is done in predictive modelling. In the resulting map each cell will have a value representing the degree of suitability for the purposes of agricultural exploitation, weighed against other elements such as the distance from the nearest known Roman road and the nearest town. The site distributions for the three chronological classes will then be compared with the suitability map. Hopefully, in the end it will be possible to obtain an indication of the pattern of land use in Roman times. Some of the questions that might be answered are:

• Did people start to move the settlement into marginal land when new sites were created in the countryside in the Early Empire?
• Are the sites in marginal land those which were preferentially deserted when sites started to be abandoned in the Late Empire?

Another type of question that will be investigated is whether the relationship of rural sites (villae and farms) to urban centres changed in the three chronological periods. At first glance, it appears that the site type closest to the urban centres is the farm, while the villae appear to be in an outer layer around the farms. This idea will be tested by creating a weighted cost surface around the urban centres and statistically testing the association between certain distance bands and the status of sites.
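The additive overlay at the heart of the proposed suitability map can be sketched as follows; the layer values and the treatment of road distance as a subtracted penalty are illustrative assumptions, not the paper's actual reclassification or weighting scheme:

```python
import numpy as np

# Hypothetical reclassified background layers for a tiny 2x2 raster
# (0 = worst ... 3 = best for site location); values are invented.
soil      = np.array([[3, 2], [1, 0]])
slope     = np.array([[2, 2], [3, 1]])
drainage  = np.array([[1, 3], [2, 2]])
road_dist = np.array([[0, 1], [2, 3]])  # reclassified distance to a Roman road

# Additive overlay, as in simple predictive modelling: sum the class
# values per cell, here treating distance from roads as a penalty.
suitability = soil + slope + drainage - road_dist
# suitability == [[6, 6], [4, 0]]
```

Comparing the period 1-3 site distributions against such a surface is then a matter of sampling `suitability` at each site's cell.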
Figure 24.9: Graph of the site frequency plotted against the distance from modern features.
Figure 24.10: The area around modern Carmona. The dark patches represent concentrations of archaeological sites.
The weighted cost surface will include physical geographical information such as the landform and the soil type (swamps are not uncommon in this region), and human geographical information such as the position of known Roman roads.
Another point which will be investigated is whether the status of a site appears to have changed over time: for example, whether a farm in period 1 is promoted to villa in periods 2 or 3, or whether a villa in period 2 becomes a farm in period 3.

References
AMORES CARREDANO, F. 1982. Carta Arqueológica de Los Alcores (Sevilla), Excma. Diputación Provincial de Sevilla, Seville.
DE LA ROSA, D. & MOREIRA, R. 1987. Evaluación Ecológica de Recursos Naturales de Andalucía, Agencia de Medio Ambiente, Junta de Andalucía, Seville.
ESCACENA CARRASCO, J. L. & PADILLA MONGE, A. 1992. El Poblamiento Romano en las Márgenes del Antiguo Estuario del Guadalquivir, Editorial Gráficas Sol, Écija.
HODDER, I. R. & ORTON, C. 1976. Spatial Analysis in Archaeology, Cambridge University Press, Cambridge.
KAHANE, A., MURRAY THREIPLAND, L. & WARD-PERKINS, J. 1968. The Ager Veientanus north and east of Veii, Papers of the British School at Rome 36.
MARSH, G. 1981. 'London's Samian Supply and its Relationship to the Development of the Gallic Samian Industry', in Anderson, A. C. & Anderson, A. S. (eds.) Roman Pottery Research in Britain and North-West Europe, British Archaeological Reports International Series 123 (i), Oxford, 173-238.
PONSICH, M. 1974. Implantation Rurale Antique sur le Bas Guadalquivir, Vol. I, Publications de la Casa de Velázquez, Série Archéologie, Paris.
PONSICH, M. 1979. Implantation Rurale Antique sur le Bas Guadalquivir, Vol. II, Publications de la Casa de Velázquez, Série Archéologie, Paris.
PONSICH, M. 1986. Implantation Rurale Antique sur le Bas Guadalquivir, Vol. III, Publications de la Casa de Velázquez, Série Archéologie, Paris.
PONSICH, M. 1992. Implantation Rurale Antique sur le Bas Guadalquivir, Vol. IV, Publications de la Casa de Velázquez, Série Archéologie, Paris.
RUIZ DELGADO, M. M. 1985. Carta Arqueológica de la Campiña Sevillana: Zona Sureste I, Publicaciones de la Universidad de Sevilla, Seville.
SCHOFIELD, A. J. (ed.) 1991. Interpreting Artefact Scatters. Contributions to Ploughzone Archaeology, Oxbow Monograph 4, Oxbow Books, Oxford.
WAGSTAFF, J. M. 1987. Landscape and Culture. Geographical and Archaeological Perspectives, Basil Blackwell Ltd, Oxford.
Appendix: Sample of database for the Maresme area

Table 24.1 lists one record per site. The identification fields are reproduced below; each record also carries a Boolean (T/F) field for every period and diagnostic element listed in the key.

CD  NO  X    Y    NOME                  VA  TIPO
al   1  639  561  Castell d'Altafulla    3  castle
al   2  640  573  Village de la Coma    11  village
al   3  646  572  Vil.la de la Casera   13  villa
al   4  637  579  Vil.la de la Revella  13  villa
al   5  648  553  Pedrera 'Els Munts'    4  quarry
al   6  635  565  Pedrera Sant Antoni    4  quarry
al   7  647  554  Els Munts             13  villa
al   8  639  560  Vil.la de l'Esglesia  13  villa
al   9  649  561  Vil.la del Costat     13  villa
al  10  635  567  Sant Antoni            5  site
ca   1  589  587  Vil.la dels Cocons    13  villa
ca   2  585  624  Vil.la Mas Moragues   13  villa
ca   3  552  572  Manous                 5  site
ca   5  590  599  Mas d'En Ros          13  villa
ca   6  579  613  Mas d'En Bernat       13  villa
ca   7  595  598  Castell el Catllar     3  castle
ca   8  568  587  Quadra de Vilet       13  villa
ca  10  568  632  Mas Fortuny            2  house
ca  11  585  624  Mas Moragues           2  house
ca  12  597  585  Camp de Tir            5  site
ca  13  566  618  Sitja Carretera        5  site
ca  14  552  575  Manous                 5  site
co   1  488  567  Mas dels Frares       13  villa
co   2  501  576  Castell de Constanti   3  castle
co   3  492  578  Vil.la de Centcells   13  villa
co  10  492  578  Les Tries             13  villa
co  13  485  567  Sant Llorenç           7  church
co  14  492  574  Mas de Serapí         13  villa
co  15  499  571  Sant Pol               5  site
co  16  485  566  Sant Llorenç          13  villa
co  17  498  566  Riuderenes II          5  site
co  18  489  553  Les Gavarres          11  village
co  19  502  558  Riuderenes             5  site

Key to the database field names:
CD: Town code
NO: Number
X: X coordinate
Y: Y coordinate
NOME: Name
VA: Value
TIPO: Type
ST: Structure
AR: Architectural Remains
HI: Hiberic Period
RE: Republican Period
AU: Augustus
EE: Early Empire
TC: Third Century
LE: Late Empire
MA: Middle Ages
AM: Amphorae
CA: Campanian Pottery
CO: Common Pottery
TI: Terra Sigillata Italica
TH: Terra Sigillata Hispanica
TA: Terra Sigillata Africana
TG: Terra Sigillata Sud Gallica
GL: Glass
TL: Tiles
DO: Dolia
MB: Marble

Table 24.1: Sample of database for the Maresme area
25

Detection of beacon networks between ancient hill-forts using a digital terrain model based GIS

Kazumasa Ozawa 1, Tsunekazu Kato 1 and Hiroshi Tsude 2
1 Osaka Electro-Communication University, Neyagawa, Osaka, JAPAN. Email: [email protected]
2 Osaka University, Toyonaka, Osaka, JAPAN
25.1 Introduction
Most villages in the Late Yayoi Period (AD 100-300) were spread over the plains, since people settled near paddy fields. At the same time, a very small number of settlements were sited on hillsides at altitudes higher than 100 metres. These are archaeologically interpreted as hill-forts, on the grounds that they were situated for military purposes (Tsude 1989a). More than six hundred hill-forts have been found all over the country. Their distribution is not uniform, however: many are concentrated in a particular region surrounding the Inland Sea, reflecting the fact that the region was an important stage for many historical events in the ancient period. One of the most exciting hypotheses is that beacon networks may have existed between these hill-forts for military purposes. To establish whether this was the case, we should first examine the visibility between every pair of hill-forts; here, visibility means whether there is an unobstructed view from one site to another. Although the hypothesis can ultimately be confirmed only by archaeological excavation, examination of visibility between all sites is nevertheless a very important step. Indeed, a field experiment has already been carried out on a few sites by archaeologists (Tsude 1989b). In the experiment, several people went up to a key site and burned tyres to make smoke, while groups of people at other sites observed the smoke rising from the key site.
The experiment needed much labour and time. Almost all the hill-fort sites are located in places that cannot be reached by car, and many people had to be coordinated in the experiment at the same time across sites 10 km or more distant from each other. Another problem was the urbanisation of the surrounding land: tall buildings and air pollution caused difficulties, even though the topographical shape of the land has not changed since the ancient period.

Taking those problems into account, an appropriate Geographical Information System (GIS) appears more useful than field experiments in this case. Naturally, the GIS should be based on a digital terrain model, since topographical data play the key role in examining visibility between sites. In this paper, current progress towards building the specialised GIS is presented. A small-scale simulation to verify the basic function of our GIS has also been carried out and compared with the archaeological field experiment.

25.2 Digital Terrain Model

The digital terrain model employed for our GIS is structured as a hierarchy of grids defined over the Japanese Islands (see Figure 25.1). The minimum grid for digitising the terrain consists of unit squares of 250 by 250 metres. A fixed number of unit squares of one grid are grouped into a unit square at the next level up, and the maximum grid consists of unit squares of 20 by 20 km. The digital terrain data described here are provided by the Japanese government for limited scientific purposes only.

Figure 25.1: Digital terrain model: the maximum grid defined over the Japanese Islands.

The south-west corner of each unit square of the minimum grid, denoted by a black dot in Figure 25.2, is a representative point for sampling the elevation of the terrain. The elevation of each point is accurately recorded in metres, and the minimum interval between sampling points is 250 metres (Figure 25.2). Obviously, any topographical shape smaller than the minimum unit square cannot be represented by the terrain model. However, a significant proportion of ancient hill-forts are comparable in size to the minimum unit square, making it almost impossible to describe the precise topographical shape of a hill-fort with this grid system. As a result, there may be only one sampling point close to or within a hill-fort site, providing a representative value for the whole site.

Figure 25.2: Digital terrain model: the minimum grid.

Figure 25.4: Conceptual diagram of the GIS.

It follows that the examination of visibility based on such terrain data is likely to be affected by quantisation error due to the grid system. In our GIS, a very simple procedure, adding a constant value (10 metres, for example) to the elevation of each site, has been introduced to counter this error. This procedure might, however, produce a new kind of error, in which the system incorrectly identifies as visible two sites that are in fact topographically obstructed. Minimisation of this kind of error can only be achieved by optimising the constant value through comparative studies between the GIS and field experiments.
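The sampling convention just described (one elevation per 250 by 250 metre unit square, recorded at the square's south-west corner) amounts to a simple truncating lookup. The sketch below illustrates the idea; the grid values and origin are invented, and the real dataset's hierarchical grouping of grids is not reproduced.

```python
GRID_STEP = 250.0  # metres between sampling points

def elevation_at(easting, northing, grid, origin=(0.0, 0.0)):
    """Return the sampled elevation for the unit square containing a point.

    Each unit square is represented by the sample at its south-west corner,
    so the coordinates are simply truncated down to the grid step.
    """
    col = int((easting - origin[0]) // GRID_STEP)
    row = int((northing - origin[1]) // GRID_STEP)
    return grid[row][col]

# Invented 3x3 block of elevations in metres (rows run south to north):
grid = [[10, 12, 15],
        [11, 14, 18],
        [13, 17, 22]]
print(elevation_at(300.0, 600.0, grid))  # -> 17 (column 1, row 2)
```

Any point inside a unit square returns that square's single sample, which is exactly why a small hill-fort may be represented by one, possibly unrepresentative, elevation value.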
The GIS now contains all the terrain data of the region surrounding the Inland Sea (see Figure 25.3). About 50 megabytes are required to store this data on hard disk. Any part of the region can be displayed by simple commands to the GIS. Basic functions of the GIS are outlined in the next section.
25.3 System
Work to build a digital terrain model based GIS started in 1992, mainly to examine the visibility between ancient hill-fort sites. We now have a long-term goal of developing our GIS into a generalised system; at present, however, it is a specialised system for studying the ancient beacon networks. In the following, we present a conceptual illustration of the GIS and its basic functions. Figure 25.4 shows a conceptual diagram of our GIS, which consists of two main modules. One is engaged in query operations and in file management. The other concerns the geographical processing, which includes display of every part of the region, drawing of distribution maps and examination of visibility between sites. The database is partitioned into two sub-databases. The first contains the digital terrain data of the region, while the second stores the archaeological data relating to the ancient hill-forts and the keyhole-shaped tomb mounds. The current data span a period of 900 years, from 300 BC to AD 600. As a result of rapid technical innovation, the latest personal computer has sufficient memory and computing power for our work. Our GIS has been implemented on a notebook personal computer with a 320 megabyte hard disk and a liquid crystal colour graphic display.
Figure 25.3: Terrain data: The base region stored in the GIS.
25.3.1 Query operations
Figure 25.5 shows a display image for making an inquiry. The table is designed for so-called QBE (Query-By-Example) based inquiries. A user can make an inquiry by entering the required information into columns and rows using a mouse or trackerball and the keyboard. Once the
Figure 25.5: QBE-based inquiry table.
Figure 25.7: GIS-assisted examination of visibility: example of invisible sites.
Figure 25.6: Distribution map.
Figure 25.8: GIS-assisted examination of visibility: example of visible sites.
inquiry is complete, OR operations between rows and AND operations between the columns in each row are performed. This follows the conventional approach to a QBE system (Wiederhold 1977). The query then extracts a subset of data from the database. Every retrieved subset can be defined as a work file in the system. Combining a work file with the corresponding terrain data, the geographical processing module (Figure 25.4) can quickly draw a distribution map.
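The row/column semantics described above can be sketched in a few lines. The field names and sample records here are illustrative only, not the project's actual schema.

```python
def qbe_match(record, query_rows):
    """QBE semantics: a record matches if ANY query row matches (OR between
    rows), and a row matches if ALL of its filled-in columns match (AND
    between the columns of that row)."""
    return any(
        all(record.get(field) == value for field, value in row.items())
        for row in query_rows
    )

# Invented records standing in for the archaeological sub-database:
sites = [
    {"name": "A", "period": "Yayoi", "type": "hill-fort"},
    {"name": "B", "period": "Kofun", "type": "tomb mound"},
    {"name": "C", "period": "Yayoi", "type": "tomb mound"},
]
# Two QBE rows: "Yayoi hill-forts" OR "any tomb mound".
query = [{"period": "Yayoi", "type": "hill-fort"}, {"type": "tomb mound"}]
work_file = [s for s in sites if qbe_match(s, query)]  # the retrieved subset
print([s["name"] for s in work_file])  # -> ['A', 'B', 'C']
```

Each such retrieved subset corresponds to a work file that the geographical processing module can then plot as a distribution map.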
To examine visibility between two sites, the GIS looks through the terrain data to find whether there is any topographical obstruction between them. Figure 25.7 shows an example where an obstruction cuts visibility between two given sites. Where there is visibility between two sites, a straight line connecting them is drawn on the distribution map (Figure 25.8). For this type of processing, two parameters need to be fixed in advance: the constant value to add to the elevation of each site (from our comparative study described below, the optimum value appears to be between 10 and 20 metres), and the limit of vision, meaning the distance over which an observer can have a clear view. In our case, the limit of vision is set to 20 km.
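A minimal sketch of such a visibility test follows, assuming a callable terrain model and sampling the sight line at the grid interval. The real GIS works directly on the hierarchical terrain data, so this only illustrates the logic of the obstruction check and the two parameters.

```python
import math

SIGHT_LIMIT = 20_000.0   # metres; the limit of vision used in the paper
SITE_BOOST = 10.0        # metres added to each site to offset grid quantisation

def visible(site_a, site_b, terrain_height, step=250.0):
    """Check line-of-sight between two sites over sampled terrain.

    site_a and site_b are (x, y, elevation); terrain_height(x, y) returns
    the terrain elevation at a point (a stand-in for the DTM lookup).
    """
    ax, ay, az = site_a
    bx, by, bz = site_b
    dist = math.hypot(bx - ax, by - ay)
    if dist > SIGHT_LIMIT:
        return False
    az += SITE_BOOST
    bz += SITE_BOOST
    n = max(int(dist // step), 1)
    for i in range(1, n):
        t = i / n
        # Elevation of the straight sight line at this intermediate point:
        line_z = az + t * (bz - az)
        if terrain_height(ax + t * (bx - ax), ay + t * (by - ay)) > line_z:
            return False   # topographical obstruction
    return True

# Flat terrain with a 200 m ridge between x = 4 km and x = 6 km:
ridge = lambda x, y: 200.0 if 4000 <= x <= 6000 else 0.0
print(visible((0, 0, 100.0), (10_000, 0, 100.0), ridge))  # -> False
print(visible((0, 0, 300.0), (10_000, 0, 300.0), ridge))  # -> True
```

Raising the constant added to the sites, or lowering it, is exactly the tuning knob described in the text: too small and quantisation hides real sight lines, too large and obstructed pairs are wrongly reported visible.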
25.3.2 Geographical processing

Currently, the geographical processing module can only provide the minimum functions to examine visibility between sites. The base region handled by the GIS is limited to the area shown in Figure 25.3. Any part of this region can be displayed as a two-dimensional image, each pixel of which is coloured according to its elevation. Given a subset of sites in a work file, a distribution map can quickly be depicted by superposing the sites as dots on the displayed map (Figure 25.6). Different subsets of sites can also be superposed on the same map using different colours.
25.4 Detection of the Ancient Beacon Networks
As mentioned above, the archaeological hypothesis is that beacon networks existed in the Yayoi Period. In order to obtain the basic information to test the hypothesis, a field experiment has already been carried out by archaeologists on a small set of sites located on the hillsides between
Figure 25.9: A visibility network certified by the field experiment.
Osaka and Kyoto. For our GIS-assisted study, we have extracted seven sites from the set, since they are common to our database. Figure 25.9 shows the seven sites, A-G, with black dots denoting the locations of the sites and a solid line connecting two sites indicating that both were confirmed to be visible in the field experiment. The Kanji characters neighbouring a dot give the name of the site. The first step toward GIS-assisted detection of the beacon networks is a comparative study to verify whether the GIS can function as well as the field experiment. Using the GIS, we obtained a similar connecting relationship between the seven sites (compare Figure 25.9 with Figures 25.10 and 25.11). Figure 25.10 shows the connecting relationships associated with site D, and Figure 25.11 is a display image from the GIS, detecting the three sites visible from site D. In this example, we added 10 metres to each site as the constant value and set the limit of vision to 20 km. From our comparative study on the seven sites, it appears that the GIS performs as well as the field experiment in identifying visibility between sites. The set of sites handled in our study is, however, too small to verify our GIS completely. Unfortunately, there are no
other field results available, and more time will be required to obtain additional experimental results. When verification is complete, we will be able to treat the ancient beacon networks as a practical problem. Assuming we were now in a position to do this, a simple procedure to detect the beacon network might be as follows.
If the basic requirement for a beacon network was to transmit information to distant places, it will be important to detect a beacon network as a long-distance structure in a given region. However, our geographical processing automatically detects the visibility network of sites, which will include many sub-networks, i.e. the short-distance structures of clustered sites. It follows that an additional procedure should be introduced to extract the long-distance structure from such a machine-generated visibility network. One possible way to do this would be by clustering sites: a group of sites close to each other can be considered as a cluster of adjacent and friendly sites, among which information could be quickly spread independently of the beacon network. Furthermore, if only one site in a given cluster exchanged information with distant sites, then we need a procedure to extract a beacon network, as a long-distance
Figure 25.10: Comparative study: the three sites visible from site D in the field experiment.
Figure 25.12: Detection of the beacon network: computer generated visibility network.
Figure 25.11: Comparative study: The corresponding result for site D given by the GIS.
Figure 25.13: Detection of the beacon network: long-distance structure.
structure, from the visibility network. To do this, we first detect clusters of adjacent sites in a given region using the single-linkage method (Ozawa 1983), with the threshold distance to link two sites set at 3 km. Then, from each detected cluster, we select one site located around the centre as the beacon station of the cluster. Finally, we define the beacon network in terms of the connecting relationships between the stations. Figure 25.12 shows a visibility network in a region surrounding Osaka Bay. Figure 25.13 presents a long-distance structure extracted from Figure 25.12 using the above procedure. Needless to say, further investigation and discussion is required in order to determine whether or not such a long-distance structure actually represents a beacon network.
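The detection procedure just described (threshold-based single-linkage clustering at 3 km, then one central station per cluster) can be sketched as follows. This is a simplified stand-in for the Classic algorithm of Ozawa (1983), which works on asymmetric similarities; the coordinates are invented.

```python
import math

def single_linkage(sites, threshold=3000.0):
    """Cluster sites, given as (x, y) tuples in metres, by single linkage:
    any two sites closer than the threshold end up in the same cluster."""
    parent = list(range(len(sites)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i in range(len(sites)):
        for j in range(i + 1, len(sites)):
            if math.dist(sites[i], sites[j]) <= threshold:
                parent[find(i)] = find(j)   # union the two clusters
    clusters = {}
    for i in range(len(sites)):
        clusters.setdefault(find(i), []).append(sites[i])
    return list(clusters.values())

def beacon_station(cluster):
    """Pick the site nearest the cluster centroid as the cluster's station."""
    cx = sum(x for x, y in cluster) / len(cluster)
    cy = sum(y for x, y in cluster) / len(cluster)
    return min(cluster, key=lambda s: math.dist(s, (cx, cy)))

# Three adjacent sites near the origin and two more some 20 km away:
sites = [(0, 0), (2000, 0), (4000, 0), (20_000, 0), (21_000, 500)]
clusters = single_linkage(sites)
stations = [beacon_station(c) for c in clusters]
print(len(clusters))  # -> 2
```

The long-distance structure is then simply the visibility relationships restricted to the selected stations, one per cluster.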
25.5 Conclusion

We have briefly described our digital terrain model based GIS designed to detect the ancient beacon networks. We need to verify the ability of the GIS to recognise visibility by comparing its results with more field experiments. Once such verification is completed, there are still many issues involved in detecting the beacon networks. Among them, archaeological investigation will be most important in order to establish that the type of long-distance structure defined in this paper in fact reflects an ancient beacon network.

Another future task concerns developing the present GIS into a generalised system. A GIS associated with digital terrain data which precisely represent the three-dimensional shape of the land will play a very important role not only in the examination of visibility but also in approaching other archaeological research problems.
References

WIEDERHOLD, G. 1977. Database Design, McGraw-Hill Kogakusha, Tokyo.
OZAWA, K. 1983. 'Classic: A Hierarchical Clustering Algorithm Based on Asymmetric Similarities', Pattern Recognition, 16(2), 201-211.
TSUDE, H. 1989a. Formation Process of Agrarian Society in Japan, Iwanami-Shoten, Tokyo (in Japanese).
TSUDE, H. 1989b. 'People in the Yayoi Period and the Beacon Network', Tosha 8, 15-19, Iwanami-Shoten, Tokyo (in Japanese).
26

Remote sensing, GIS and electronic surveying: reconstructing the city plan and landscape of Roman Corinth

David Gilman Romano and Osama Tolba
Mediterranean Section, The University Museum, University of Pennsylvania, 33rd & Spruce, Philadelphia, PA 19104-6324, USA. Email: [email protected], [email protected]
26.1 Introduction
Since 1988 a research team from the Mediterranean Section of The University Museum of the University of Pennsylvania has been involved in making a computerised architectural and topographical survey of the Roman colony of Corinth. Known as the Corinth Computer Project, the field work has been carried out under the auspices of the Corinth Excavations of the American School of Classical Studies at Athens, Dr. Charles K. Williams, II, Director 1. Although the excavations at Corinth by the American School have been underway for almost a century, aspects of the study of the layout of the Roman colony have remained incomplete due to the size and complexity of the site as well as its complicated history. Our original objectives were to study the nature of the city planning process during the Roman period at Corinth; to gain a more precise idea of the order of accuracy of the Roman surveyor; and to create a highly
accurate, computer generated map of the ancient city whereby one could discriminate between and study the successive chronological phases of the city's development 2. It is important to acknowledge that during the course of the six years of the project to date, the nature of the research has evolved from a fairly straightforward consideration of the location and orientation of the excavated roadways of the Roman colony to a more complex topographical and architectural consideration of various elements of the colony, including the rural as well as the urban aspects of planning and settlement. The project now utilizes a number of methodologies simultaneously in the overall study of the ancient city. At the time of writing, one aspect of the project is a regional landscape study of a portion of the Corinthia, with the city of Roman Corinth as the focus. Another aspect of the project is the effort to include information from the city of
Figure 26.1: Map of the Peloponnesos and southern mainland Greece.
Figure 26.2: Schematic drawing of the four quadrants of the urban colony, labelled A, B, C, D, each of which is 32x15 actus, with centrally located forum and cardo maximus.
Figure 26.3: Restored 'drawing board' plan of the Roman colony.
Corinth from chronological periods other than Roman, specifically Archaic and Classical Greek, Hellenistic, Late Roman, Byzantine, Frankish, Venetian, Turkish and modern. As will be discussed briefly below, by means of low level and high altitude air photography, as well as satellite images and some balloon photographs, the limits of the project have been greatly expanded into areas that had not been considered in the original research design. This project is under the direction of the senior author of this paper. In the past six years I have been assisted by numerous students from different fields of study at the University of Pennsylvania and I am very grateful to all of them for their assistance and contributions 3. Osama Tolba has been a Research Intern in the Mediterranean Section of The University Museum since 1992, assigned to the Corinth Computer Project and with special interest in remote sensing and GIS applications.
26.2 Historical background

Founded by Julius Caesar in 44 BC, the Roman colony of Corinth, Colonia Laus Julia Corinthiensis, was laid out virtually on top of the former Greek city that had been destroyed by the Roman consul Lucius Mummius in 146 BC. The former Greek city had remained largely uninhabited for 102 years. According to literary sources, the Greek male population had been killed and the women and children had been sold into slavery 4. The location of Corinth had been important during the Greek period, situated near the Isthmus, the land bridge between the Peloponnesos and mainland Greece, and having ports on the Saronic Gulf as well as the Corinthian Gulf (Figure 26.1). In the new foundation of 44 BC the Romans utilized many of the existing Greek buildings in the design of their own city, although the organization and city plan of the Roman colony was different from that of its Greek predecessor.
Figure 26.4: The evidence for Roman centuriation of 44 BC from all sources of information, including the city grid within the Greek circuit walls, and the rural 16x12 actus system of centuriation.
At this time, at the end of the sixth year of the project, we have succeeded in defining a detailed plan of the urban Roman colony, as well as having evidence for what is likely to be several phases of Roman agricultural land division, centuriation, of the territorium surrounding the colony.
26.3 Project result summary

26.3.1 Urban Colony

Based on the combined evidence of the site survey as well as information from air photographs and 1:2000 topographical maps, which will be discussed below, we know that the Roman colonial plan was based on four quadrants (centuries), each 32x15 actus or 240 iugera (Figure 26.2). The colony was laid out per strigas with a total of 29 cardines and 29 one-actus-wide insulae in each of the four centuries. There would likely have been six decumani per century (Figure 26.3). The original plan for the urban
colony existed almost entirely within the Greek circuit walls of the city. It is likely that the forum of the colony was originally designed to occupy an area of 24 square actus or 12 iugera, comprising the topographical center of the colony. Some of this space was already occupied by significant Greek and Hellenistic structures, including the South Stoa. We also know now that, probably simultaneously with the laying out of the urban colony, the Roman agrimensores, land surveyors, measured out rural land surrounding the colony for agricultural purposes.
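For readers unfamiliar with Roman surveying units, the figures quoted above are easy to check: 1 actus = 120 Roman feet (about 35.5 m, taking the Roman foot as roughly 0.296 m, an approximate value) and 1 iugerum = 2 square actus. A quick converter:

```python
# Roman land-survey units used in the text, as a quick sanity check.
# The metric length of the Roman foot (pes) is approximate (~0.296 m).
ROMAN_FOOT_M = 0.296
ACTUS_FT = 120                     # 1 actus = 120 Roman feet, as in the text
ACTUS_M = ACTUS_FT * ROMAN_FOOT_M  # ~35.5 m

def quadrant_iugera(width_actus, height_actus):
    """Area of a rectangular block in iugera (1 iugerum = 2 square actus)."""
    return width_actus * height_actus / 2

# Each of the four urban quadrants is 32 x 15 actus:
print(quadrant_iugera(32, 15))  # -> 240.0, matching the 240 iugera in the text
print(round(32 * ACTUS_M), round(15 * ACTUS_M))  # roughly 1137 x 533 metres
```

The same arithmetic confirms the forum figure: 24 square actus is exactly 12 iugera.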
26.3.2 Territorium

As a part of our work to date, a widespread pattern of centuriation of 16x12 actus has been identified, possibly part of a larger 16x24 system, extending to the north of the colony as far as the Corinthian Gulf (Figure 26.4). In fact, several systems of centuriation have been identified in and
Figure 26.5: Schematic diagram showing the relationship between the multiple elements of the Corinth Computer Project.
around Corinth as a result of this study. For years, scholars had suggested that Roman Corinth could not have been founded on an agrarian base since no centuriation was present in the area 5. Since this is now proven not to be the case, a new examination of the evidence is presently underway.
26.4 Methodology

26.4.1 Computer hardware

The project, both in Corinth as well as in Philadelphia, employs fast IBM-type personal computers (Zeos 486 machines, 66 MHz) with hard disk capacities of over 1 gigabyte. Because each of the topographical drawings can be so large, up to 12 megabytes each, the large disk capacity is a necessity. The computers currently have 16 MB RAM. The air photographs as well as the satellite imagery are stored permanently on optical disks with 1 GB capacities. Compact disk players have also proved useful for acquiring SPOT satellite data, discussed below under Satellite Imagery. The input devices are of two types. First, digitiser tablets are used for tracing maps and plans in AutoCAD drawing format. These are relatively small tablets, no larger than 12x18 inches (Summasketch and Hitachi). Secondly, a flatbed scanner (Hewlett-Packard ScanJet IIc) with an area of 8.5x14 inches is used to digitise air photographs and smaller drawings. Plots may be made by means of an E-size eight-pen plotter
(CalComp 1025) as well as a laser printer (Hewlett-Packard LaserJet IIIp) (Figure 26.5).
26.4.2 Computer software

The principal computer program utilised in this project, and the vehicle with which the other computer programs work, is AutoCAD (AutoDESK, Inc.). All maps and drawings are digitised and managed using AutoCAD; we currently use versions 11 and 12. In addition, the field survey data is directly translated from the hand-held computer to the personal computer, from survey software (Sokkia MAP) to AutoCAD using the DXF format. Air photographs are scanned and viewed behind the drawings using CAD Overlay GSX (Image Systems Technology Inc.), which allows us to enhance image contrast and trace selected features. We are also currently using IDRISI (Clark University) for various GIS functions. For instance, IDRISI is employed to rectify the air photographs and to maintain the raster geographic database, which includes SPOT and LANDSAT satellite images as well as a variety of aerial and balloon photographs. IDRISI also includes limited capabilities to import AutoCAD's 'poly-lines', which we used to translate the elevation data in order to generate a digital terrain model (see below under Simulations). We have used 3D Studio (AutoDESK, Inc.) to generate renderings and animations of the digital terrain model of the topographical region under study.
Figure 26.6: 'Drawing board' plan of the Roman colony superimposed on the map of the modern village, including village roads (1963).
Figure 26.7: 'Drawing board' plan of the colony superimposed on the modern village, village roadways and modern field lines and property lines.
26.4.3 Field survey

During the past six summers, the physical survey of virtually all of the above-ground monuments and structures of the site of Ancient Corinth has been accomplished with a Sokkia electronic total station linked to a compatible hand-held computer for data storage and retrieval 6. The visible monuments of the city and surrounding area have been surveyed and recorded by the director of the project aided by small teams of students.
The intent of the survey has not been to create an actual-state drawing of the buildings and monuments of the city, but rather to make measurements in three dimensions of the exact location and orientation of the diagnostic elements of the monuments and structures of the city. Each day the survey data from the hand-held computer is transferred to a PC in the field house, where it is edited and the drawings are created in the Sokkia survey program MAP, and then exported in a DXF file to AutoCAD. Each of the individual survey jobs from all
Figure 26.8: Actual-state drawing of the Roman Lechaion Road as it leaves the Roman forum to the north.
Figure 26.9: Actual-state drawing of the Lechaion Road (same as Figure 26.8) as it is scaled to fit the survey drawing of the same area.
around the ancient city has created a separate drawing. Together, the season drawings create the city survey for each summer, and the summer survey drawings together create the cumulative project drawing.
of the modern field boundaries still reflect vestiges of the ancient Roman land division, with some lots retaining the original colonial orientation as well as maintaining widths of 1 Roman actus of 120 Roman feet (Figure 26.7). As will be discussed below, from the contour lines of the topographical maps it has also been possible to utilise other engineering and GIS programs to create digital terrain models of aspects of the site and general three-dimensional computer images of the landscape, as well as to run three-dimensional GIS functions (see below, Simulations).
The framework for the entire study has been the precise survey and measurement of 16 different roadways of the Roman city, excavated at different times during the course of the Corinth excavations. These 16 roadways, which provide the basis for the reconstruction of the regular colonial city plan, have laid the groundwork for the further study of the urban and rural planning and organization of Roman Corinth. From the beginning we have tied our work into the local Greek geodetic coordinate system, so that all of our site surveys, as well as the actual-state drawings, notebook drawings, photographs, satellite images and the digitized topographical maps, are geographically registered with the same system. Using as a base the four permanent Greek geodetic markers that are within the range of our survey instrument, we have over the past six years installed a series of secondary reference points which we use in our day-to-day work.
26.4.4 Topographical maps

Sixteen 1:2000 topographical maps from the Greek Geodetic Survey have been digitised to create the topographical foundation for the immediate area of Roman Corinth, roughly 35 square kilometres. The topographical maps include information such as modern roads, paths, ledges, property lines, field lines and houses, as well as topographical contours. It has been noted, for instance, that several modern village roads still have as their orientation the Roman roads of the colony (Figure 26.6). In addition, some modern house and lot lines still respect the ancient Roman insulae and, in the areas surrounding the city, it has been noted that aspects
26.4.5 Actual-state plans

As the Corinth Excavations of the American School of Classical Studies at Athens have been underway since 1896, there exist a great number of excavations in and around the Greek and Roman cities. Each excavation has produced an actual-state plan, or stone-for-stone drawing. One of our current objectives is to digitise many of these actual-state plans and to scale or rotate them, where necessary, to fit the precisely surveyed monuments. In this way it has been possible to recreate, literally block for block, the excavated remains of the successive phases of the city. Each of the actual-state drawings is retained as an independent entity in our drawing archive and can then be imported into the larger survey drawing when needed. In this way, a very precise physical site survey is combined with accurate stone-for-stone actual-state drawings of the site. There now exist over 50 carefully digitised actual-state plans, representing different structures and buildings throughout the ancient city (Figures 26.8 and 26.9). The goal will be to complete the stone-for-stone drawings of the entire excavated city. Needless to say, the availability of the actual-state drawings from a computer contributes a great deal to the ongoing study of the successive cities of Corinth.
Figure 26.10: The evidence from air photographs relating to the Roman city grid, high-lighted against the 'drawing board' plan of the colony.
Figure 26.11: The evidence from modern roadways, property lines and field lines relating to the Roman city grid, high-lighted against the 'drawing board' plan of the colony.
26.4.6 Database

We have begun to create a database (using dBase III+) of all of the identifiable structures and monuments included in the map of the successive ancient cities of Corinth. At the moment the database includes information about the name of the building or structure, its date of construction and up to three bibliographic references to the publication. We are currently exploring different ways of linking the database with the various completed AutoCAD drawings. For example, we have associated most of the forum
buildings' drawing entities (lines) with their respective dates and references through AutoCAD's SQL extension. The objective was to automate the generation of consecutive period plans of the Roman city. However, this task has been hindered by the difficulties of assigning a discrete number to the date of construction, primarily because these dates are usually specified in ranges. Additionally, most buildings in the forum were subjected to modification and addition, and it was not always clear when the buildings were abandoned.
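The date-range difficulty described here is essentially an interval-overlap problem: rather than forcing a discrete construction date, a period plan can include every building whose date range overlaps the requested period. A sketch, with invented sample records (negative years standing for BC):

```python
# Sketch of selecting structures for a period plan when construction
# dates are known only as ranges. The sample records are invented.
def in_period(building, start, end):
    """Include a building if its date range overlaps the requested period."""
    b_start, b_end = building["built"]
    return b_start <= end and b_end >= start

buildings = [
    {"name": "South Stoa",      "built": (-350, -300)},
    {"name": "Forum rebuild",   "built": (-44, 50)},
    {"name": "Late Roman wall", "built": (300, 400)},
]
# Plan of the early colony, 44 BC to AD 100:
plan = [b["name"] for b in buildings if in_period(b, -44, 100)]
print(plan)  # -> ['Forum rebuild']
```

Overlap-based selection sidesteps the discrete-date problem, though it does not by itself address modification, addition, or abandonment of buildings.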
DAVID GILMAN ROMANO AND OSAMA TOLBA
Figure 25.12: A vector-based topographical map of Corinth showing contours at 10 metre intervals, and illustrating the least slope path from the Greek agora to the Gulf of Corinth.
Figure 25.13: A raster-based map of the same area, illustrating the least slope path from the Greek agora to the Gulf of Corinth. (Lighter shades of gray indicate steeper slopes.)
25.5 Remote sensing
25.5.1 Air photographs
We have used several types of air photographs to study Greek and Roman city planning and land organisation in the Corinthia. There exist both low altitude and high altitude photographs of the area, as well as some very low level balloon photographs. Low altitude air photographs, at an approximate scale of 1:6000, taken in 1963 by the Hellenic Air Force, correspond very well with the 1:2000 topographical maps, which were made in the same year using the air survey.⁷ The air photographs have been useful for a number of reasons. Shadows and vegetation or soil markings high-lighting unexcavated underground features in the landscape, such as roads, ditches or structures, are visible. These features can be helpful when put together with other forms of information, such as the surveyed and excavated roadways (Figures 25.10 and 25.11).
Before performing any analysis of the photographs it is necessary first to rectify their geometry in calibration with the existing maps and surveyed data. Therefore, each photograph is scanned at a resolution of 400 dpi (dots per inch) using a desktop flatbed scanner (Hewlett-Packard ScanJet IIc) and rectified using the resampling program included in the IDRISI package (discussed below under GIS applications). The control points needed for this operation are taken from the topographical maps. The corners of buildings or the intersections of field boundaries have proven to be most precise. Once the photograph has been successfully rectified, it is possible to display it as a backdrop to the AutoCAD drawings using CAD Overlay GSX. In this way one is able to trace over the crop and soil marks and study them in conjunction with other surveyed or map data.
High altitude air photographs at a scale of approximately 1:37,500, taken in 1987 by the Greek Army Mapping Service, have helped us to understand the overall pattern of the roads and field boundaries in the larger terrain surrounding Ancient Corinth. Control points necessary to rectify these photographs are taken from the topographical maps or satellite images, where we do not have a detailed map of the entire area covered by the photograph.
A series of low level balloon photographs at an approximate scale of 1:1750 taken by Dr. and Mrs. J. Wilson Meyers in 1986 have greatly assisted in the identification of details in the landscape at the Roman harbor of Lechaeum. What may be an earlier (pre-44 BC) Roman agrimensorial survey of the harbor, possibly to be associated with the Lex Agraria of 111 BC or alternatively an aborted colonization attempt of the late second century BC, has been noted.
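The control-point rectification described above can be sketched in miniature. The project used IDRISI's resampling module, not code like this; the sketch below merely shows the simplest case, fitting an exact affine transform (scale, rotation, shear, translation) from three hypothetical scan-pixel/map-metre control-point pairs. Real rectification uses more control points with a least-squares fit, and often higher-order polynomials to absorb lens distortion.

```python
# Illustrative affine rectification from three control points.
# Control-point coordinates are invented for the example.

def solve3(M, v):
    """Solve a 3x3 linear system by Gaussian elimination with
    partial pivoting (degenerate point configurations not handled)."""
    A = [row[:] + [rhs] for row, rhs in zip(M, v)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 4):
                A[r][c] -= f * A[i][c]
    x = [0.0] * 3
    for i in reversed(range(3)):
        x[i] = (A[i][3] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

def affine_from_controls(pixel_pts, map_pts):
    """Fit X = a*x + b*y + c and Y = d*x + e*y + f exactly
    from three (pixel -> map) control points."""
    M = [[x, y, 1.0] for x, y in pixel_pts]
    abc = solve3(M, [X for X, _ in map_pts])
    def_ = solve3(M, [Y for _, Y in map_pts])
    return abc, def_

def rectify(pt, abc, def_):
    x, y = pt
    return (abc[0] * x + abc[1] * y + abc[2],
            def_[0] * x + def_[1] * y + def_[2])

# Three hypothetical control points: (scan pixel) -> (map metres).
pix = [(120, 80), (940, 110), (500, 870)]
mapm = [(1250.0, 2040.0), (2070.0, 2010.0), (1630.0, 1250.0)]
abc, def_ = affine_from_controls(pix, mapm)
print(rectify((120, 80), abc, def_))
```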
REMOTE SENSING ,
GIS
AND ELECTRONIC SURVEYING
25.5.2 Satellite imagery
During 1993 we added two satellite images of the Corinthia to our data set: a panchromatic scene from SPOT, the French satellite agency, and a multi-spectral scene from LANDSAT, the US satellite company. The image from SPOT is a single spectral band, gray scale scene, acquired in May 1991 at a resolution of ten metres (per pixel), and covers an area of 60x60 kilometres. The LANDSAT scene, on the other hand, is an EOSAT archive image acquired in June 1987 with a resolution of 28.5 metres. It covers a larger area of 185x170 kilometres and is a Thematic Mapper (TM) scene with seven spectral bands that can be displayed individually or in combination with others.
The different nature of the two satellite images has dictated very different uses for them. One can see roads and agricultural field boundaries clearly on the SPOT image and, therefore, we are using it to analyse the patterns in the landscape along the southern coast of the Corinthian Gulf between Corinth and Sikyon. We have been able to identify uniform grid systems conforming to the practice of Roman centuriation. This particular study has been carried out using AutoCAD and CAD Overlay GSX by measuring road and field spacing on the image and testing against various hypothetical grids of Roman land division. The grid systems are created in AutoCAD and can be superimposed on the satellite image using CAD Overlay GSX. The preliminary success of this investigation has led us to purchase an additional SPOT image, a 15 minute by 15 minute window to the east of Corinth, so that we may in the future study the land organisation towards Corinth's second port of Cenchreai on the Saronic Gulf.
The LANDSAT image is of coarser resolution (28.5 metres per pixel vs. 10 metres) and, therefore, is better suited to studying land use patterns, ground cover and geological interpretation. In the future we may consider these well known applications of the LANDSAT multispectral scene in this study.
The SPOT satellite image came in the BIL (band interleaved by line) format, which was imported into the various kinds of software that we use, e.g. IDRISI. The original SPOT image was shipped to us on a series of twenty-two 3.5 inch diskettes, which proved to be somewhat of a challenge to download and decompress. The more recent 15 minute window from SPOT was shipped on a compact disk, which greatly facilitated its use. The total size of the larger SPOT image is approximately 50 MB while the total size of the LANDSAT image is approximately 360 MB. These large files are stored on an auxiliary optical disk (Panasonic Optical Disk Drive LF-7010) and parts of the scenes have been clipped for processing and analysis as necessary. The LANDSAT scene came on mainframe 'computer compatible tapes' (CCT), which necessitated the use of university facilities to download the files onto our PCs. This was accomplished by utilizing a 250 MB tape backup system (Colorado Jumbo Trakker).

25.6 GIS applications
The principal GIS program utilised to date has been IDRISI, which was developed by the Graduate School of Geography at Clark University. IDRISI is a grid-based geographic information and image processing system whose purpose is to provide professional-level geographic research tools at a low cost. The program comes with excellent documentation, is easy to use, and one can buy, very inexpensively, a student manual. It is a raster based system and it includes over 100 program modules, of which we have used a number, including those that handle database management, geographical analysis, image processing and statistical analysis.
We have utilized a number of features of IDRISI in our study of the landscape to the north of the Roman colony. For instance, by utilizing the elevation element of the topographical maps of the study area, we have asked the IDRISI program to find the path of least effort between the proposed location of the Greek agora, or city center of Corinth, and any point on the south coast of the Corinthian Gulf. The result is a line or lines, created by the program, that represents the easiest route to follow to get to the shoreline (Figures 25.12 and 25.13). This information, together with other data regarding possible Greek land organization and orientation, has suggested the location of the Greek port of Lechaion to be in a different location than that of the Roman port, some 1-2 kilometres to the west.⁸
We have also been using ArcCAD (ESRI, Inc.) in the study of the various agricultural field systems visible in the landscape to the north of the Roman city of Corinth. Since ArcCAD provides a link to existing AutoCAD drawings, we could create ArcCAD coverages from selected CAD-generated maps. However, our maps had been digitized with great attention to detail and appearance, which led to two problems when we began the conversion to ArcCAD. First, there were too many vertices in the polygons defining the boundaries of properties and roads, more than the maximum allowed by ArcCAD. Second, while the lines looked right, they were not always continuous or closed. It is possible to correct these problems if we can afford the time overhead.
Another application of ArcCAD in the Corinth project involves using its querying capabilities to separate lines, in a map or aerial photo, into groups having similar orientation. We were able to write a small AutoLISP program that calculates the angle of a line segment and stores it in a field in the respective database.
We can then use ArcCAD or ArcView to high-light, display, or generate a new map of the group of lines whose angle falls within a certain specified range (e.g. the angle corresponding to the Roman colony).
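The angle-classification idea described above can be sketched as follows. The project's routine was a small AutoLISP program writing into ArcCAD attribute tables; this Python version is only an illustration of the same logic, and the segment coordinates, grid azimuth and tolerance are invented for the example. Angles are folded into [0, 180) so that the direction in which a line happened to be digitised does not matter.

```python
import math

def segment_angle(p1, p2):
    """Orientation of a line segment in degrees, folded into [0, 180)
    so that digitising direction is irrelevant."""
    ang = math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    return ang % 180.0

def in_range(angle, target, tol):
    """True if `angle` lies within `tol` degrees of `target`,
    measured on the half-circle (mod 180)."""
    d = abs(angle - target) % 180.0
    return min(d, 180.0 - d) <= tol

# Hypothetical field-boundary segments; the target orientation of the
# Roman grid (2 degrees) and the 5 degree tolerance are illustrative.
segments = [((0, 0), (100, 3)), ((0, 0), (5, 100)), ((0, 0), (70, 70))]
roman_grid = 2.0
matching = [s for s in segments if in_range(segment_angle(*s), roman_grid, 5.0)]
print(len(matching))
```

Only the first, nearly east-west segment survives the filter; the others lie far from the chosen grid orientation.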
25.7 Simulations
Using the Digital Terrain Model component of Softdesk, Inc., a civil engineering program, we have been successful in transferring the elevation data (contour lines) from the series of 1:2000 topographical maps to create a simulated
Figure 25.14: Digital terrain model illustrating a portion of the ancient city of Corinth and including the artificially constructed modern excavation dump, looking southwest.
Figure 25.15: Digital terrain model of the same area as Figure 25.14 above, eliminating the excavation dump.
landscape in three dimensions. We have done this for the entire area of the topographical maps, ca. 35 square kilometres, creating a large DTM based on a 20 metre square grid. For smaller DTMs we have chosen a square grid based on 5 metre segments. An example of such a smaller DTM was created to assist us in understanding the topography of the ancient Greek and Roman city. In the late nineteenth and early twentieth centuries, during the early years of the excavations at Corinth, a huge excavation dump created an artificial peninsula of land that extended from Temple Hill out to the north. The total length of the artificial mound is approximately 200 metres and its maximum height approximately 15 metres. In the modern day, the excavation dump literally obscures a clear view of the nature of the topography of the ancient landscape. We created two DTMs to study the location. First we built a DTM from the contours of the topographical map, reflecting the appearance of the area in the modern day (Figure 25.14). Then we connected the contour lines to the east and west of the artificial peninsula of land to create what may be a reasonable model of the landscape before the excavation dump was created (Figure 25.15). It is our hope to be able to study the modified landscape and DTM in order to better understand the ancient city.
We have used 3D Studio to generate renderings and animations of the digital terrain model of the 35 square kilometre area of Corinth, some including the colonial Roman grid and the Greek city walls, and the area from Akrocorinth in the south to the Gulf of Corinth in the north. These images have assisted in the recreation of the landscape and are especially useful in demonstrating gross topographical features (Figure 25.16). Another kind of simulation is created simply by showing the contour lines of the topographical map from a distant three dimensional viewpoint (Figure 25.17). These visualization techniques, however, have so far had little influence on the outcome of our research, serving mainly as presentation and communication tools.
25.8 Order of accuracy
We have been very much aware that the order of accuracy of each of these techniques varies greatly from one to another. For instance, the architectural and topographical
Figure 25.16: Rendering, created from a digital terrain model, of a large portion of the study area illustrating Akrocorinth to the left and Penteskouphi to the right, looking southwest. The shoreline of the Gulf of Corinth is in the foreground.
Figure 25.17: Three-dimensional view of Akrocorinth and Penteskouphi from the north.
survey by means of electronic total station produces a very high order of accuracy, usually within millimetres or centimetres. The topographical maps are generally accurate to 1-2 metres, as are the air photographs from which they were made. The satellite images, of course, have far less accuracy: the pixel size in the SPOT panchromatic image is 10 metres and in the LANDSAT multi-spectral image 28.5 metres. We are using each of these techniques for differing purposes. The backbone of the entire project is the electronic total station survey of the roadways and structures of the ancient city. The other kinds of evidence are used in conjunction with the survey and are calibrated with respect to it.
Acknowledgements
The Corinth Computer Project was initiated by the Corinth Excavations of the American School of Classical Studies at Athens under the direction of Dr. Charles K. Williams, II. We thank Dr. Williams and Dr. Nancy Bookidis, Assistant Director, for friendly and generous assistance in all aspects of this undertaking.
The Corinth Computer Project is supported by the Corinth Computer Project Fund of The University Museum of the University of Pennsylvania as well as by the School of Arts and Sciences, the Department of Classical Studies and the Graduate Group in Classical Archaeology of the University of Pennsylvania and the 1984 Foundation. The project has also received generous support and educational grants and considerations from AutoDESK, Inc., Environmental Systems Research Institute, Inc., the IBM Corporation, Image Systems Technology, Inc., Softdesk, Inc., the Sokkia Corporation, the SPOT Image Corporation and the Earth Observation Satellite Company.
Notes
1. During each summer season of the survey, the project has been carried out as a part of the architectural aspect of the Spring Training Seasons of the Corinth Excavations. The annual reports of the Corinth Excavations appear in Hesperia, the Journal of the American School of Classical Studies at Athens.
2. A brief summary of the methodology of the Corinth Computer Project is discussed in Romano (1989).
3. A preliminary report of an aspect of the results of the project has appeared in Romano (1993).
4. For a general discussion of the history of Corinth during this time period, see Wiseman (1979).
5. The latest interpretation of this type is Engels (1990).
6. A detailed discussion of the computerized field survey has appeared as Romano and Schoenbrun (1993).
7. Air photographs do not have a scale which is consistent all over the image due to lens distortion and terrain variation.
8. For a summary of the evidence see Romano (1994).

References
AVERY, T. E. & BERLIN, G. L. 1985. Interpretation of Aerial Photographs, 4th ed., Burgess Publishing Co., Minneapolis.
BRADFORD, J. 1957. Ancient Landscapes: Studies in Field Archaeology, G. Bell and Sons, London.
ENGELS, D. 1990. Roman Corinth: An Alternative Model for the Classical City, The University of Chicago Press, Chicago.
ROMANO, D. G. 1989. 'The Athena Polias Project / The Corinth Computer Project: Computer Mapping and City Planning in the Ancient World', Academic Computing, March, 26-53.
ROMANO, D. G. 1993. 'Post-146 B.C. land use in Corinth, and planning of the Roman colony of 44 B.C.', in T. E. Gregory (ed.), The Corinthia in the Roman Period, Journal of Roman Archaeology Supplementary Series 8, 9-30.
ROMANO, D. G. 1994. 'Greek Land Division and Planning at Corinth', American Journal of Archaeology, 98, 246.
ROMANO, D. G. & SCHOENBRUN, B. C. 1993. 'A Computerized Architectural and Topographical Survey of Ancient Corinth', Journal of Field Archaeology, 20, 177-190.
WISEMAN, J. 1979. 'Corinth and Rome I: 228 B.C. - A.D. 267', Aufstieg und Niedergang der Römischen Welt, II.7.1, Berlin, 438-548.
27
Image processing and interpretation of ground penetrating radar data
Vanessa S. Blake
Geospace Consultancy Services, Hailes House, 32 Hailes Avenue, Edinburgh EH13 0LZ, UK
27.1 Introduction
The application of ground penetrating radar (GPR) to archaeology has had mixed results. A recent editorial in The Field Archaeologist (IFA 16) suggested that GPR was discussed by two types of people: those who believe that GPR is the best thing since sliced bread, and those who believe that dragging pieces of sliced bread across a site gives equally useful results. The use of radar to investigate structures beneath the ground surface has been known for around 80 years (Daniels 1988), but only with recent advances in computer technology and signal and image processing has the technique really become widespread. GPR is used in a variety of civil engineering applications - road and bridge surveys, for example - and can be used in conditions as variable as ice, fresh water, salt deposits, desert sand and rock formations. GPR has been used in archaeological applications in Japan (Imai et al. 1987), in York (Stove & Addyman 1989), Gloucester (Milligan & Atkin 1992) and at Sutton Hoo (Daniels 1988). GPR has occasionally been discredited by overzealous interpretations, but it has produced good results in both archaeology and civil engineering. The technique is non-destructive and non-invasive and so can be used, for example, through the floor of a cellar to examine underlying layers. The time spent on site can be kept very short. For road or bridge projects, the survey must be done during the few hours when the area can be closed to
traffic. Data is then taken back to the office where it can be computer processed and interpreted while activity continues on the site. This paper describes some recent archaeological projects undertaken by Geospace and the software used to process survey data.
27.2 Principles of GPR
The basic components of a GPR system are the radar unit, a power supply, and two antennas. The antennas may be mounted on a simple sled and drawn by hand or by a suitable vehicle. They may also be placed in a rubber boat for working through fresh water. The basic principle in GPR is that a radar antenna transmits an electromagnetic pulse of radio frequency into the ground. When the pulse reaches a layer with different electrical properties, some of the energy will be reflected back while the rest is transmitted on. As transmitter and receiver are towed along the surface, images are built up showing the time elapsed between wave transmission and reflection. The production of images is explored further by Fletcher and Spicer (1993), who describe a computer program to simulate GPR. Using this program, it is possible to visualise the returns produced by individual simple targets. As a learning tool, this is of great value as actual GPR images are not immediately interpretable to the untrained eye. Further work in using synthetic radar
Figure 27.1: Point targets appear as hyperbolas on ground penetrating radar records.
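The elapsed-time principle behind these records can be sketched numerically. The fragment below is an illustration, not part of any GPR system's software: a buried point target returns echoes whose two-way travel time traces out a hyperbola as the antenna passes overhead. The relative permittivity chosen for the soil is an assumed, typical value.

```python
# Sketch of the two-way travel-time geometry behind hyperbolic returns.
# The soil permittivity is an illustrative assumption, not measured data.

C = 0.2998      # speed of light, metres per nanosecond
EPS_R = 9.0     # assumed relative permittivity of damp soil
v = C / EPS_R ** 0.5   # ~0.1 m/ns wave velocity in the ground

def two_way_time(offset_m, depth_m):
    """Two-way travel time (ns) from an antenna at horizontal
    `offset_m` to a point target at `depth_m`:
        t = 2 * sqrt(x^2 + d^2) / v
    which is a hyperbola in (x, t)."""
    return 2.0 * (offset_m ** 2 + depth_m ** 2) ** 0.5 / v

# Directly above a target 1 m deep the echo arrives at about 20 ns;
# arrivals get progressively later off to either side.
times = [two_way_time(x, 1.0) for x in (-2, -1, 0, 1, 2)]
print([round(t, 1) for t in times])
```

This is why an untrained eye struggles with raw GPR records: a compact target does not appear as a dot but as the arms of a hyperbola, widest where the wave velocity (and hence permittivity) makes the medium slowest.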
VANESSA S. BLAKE
Figure 37.4: A modelled cut feature and its associated SQL data.
deposited, are all questions that need to be addressed. The site plans can provide some of this information, but computer generated reconstructions offer the possibility of comparing and contrasting datasets more effectively and, crucially, in three dimensions.
37.3 The software environment
The requirements of the previous section can be summarised as software for: validating and sorting sets of three-dimensional coordinates; calculating soil volumes; and carrying out schematic (as opposed to visually realistic) geometric reconstructions of features, ground surfaces and artefact distributions, providing visual aids to interpretation. Many of the calculations and reconstructive geometries needed are not sufficiently general in nature to be immediately available in off-the-shelf software; a programming environment was therefore required. It was felt to be highly advantageous if the software could directly access existing data held within dBASE IV files. AutoCAD Version 12 was chosen for the following reasons:
• It was already available at the British Museum and had already proved its worth in an earlier project (Main et al. 1994).
• It has powerful two- and three-dimensional graphical modelling capabilities.
• The ADS (AutoCAD Development System) module provides a C programming environment.
• The ASE (AutoCAD SQL Extension) module, which is new with Release 12, provides embedded SQL routines and drivers to link with a number of common
database management systems, including dBASE IV. If need arose in the future, the software written for the Runnymede project could be used for data held within other DBMSs, such as Paradox, by altering only one line of code within each application.
Applications were written in C to read data from appropriate dBASE IV files, and call routines from the ADS library to generate corresponding graphical entities. This required the dBASE IV files to be converted to SQL tables, making the data accessible both to the standard dBASE IV menus and to C routines running under AutoCAD. This conversion is straightforward using a utility that is well documented in the dBASE IV manuals. Although the SQL link has been used mainly in read-only mode to import filtered and sorted data into AutoCAD, data can also be written back to the tables under software control. This facility was used to write soil volumes information to the bulk finds database, as described in Section 37.4 below.
A particularly valuable feature of the ASE software is that it is possible to incorporate within the drawing file itself information which links each graphical entity back to the row of the SQL table used to generate it. By picking an entity with the mouse, a window can be overlaid on the drawing which allows the user to view and modify the data within that row (see Figure 37.4). Data maintenance can thus be integrated with graphical modelling, entirely within the AutoCAD environment.
The applications have been designed to allow the user to specify interactively the filter conditions for retrieving the data. In this way, any subset of the data can be represented graphically, and different subsets overlaid using a variety of colours or symbols. As this process
COMPUTER-AIDED DESIGN TECHNIQUES FOR THE GRAPHICAL MODELLING OF DATA
takes place under the control of a C program, it is straightforward to cope intelligently with exception conditions such as those arising from missing data or other idiosyncrasies of the data recording method.
All the applications described in later sections have a similar preamble where the user is asked to provide the following information:
• The name of the SQL database containing the data. The data have been subdivided according to the excavation areas distinguished on-site. All data tables contain coordinates relative to the overall site grid, so that it will eventually be possible to generate reconstructions of the complete site, either by amalgamating corresponding tables from the various databases, or by modifying the software to retrieve data from multiple databases.
• The name of the SQL table containing the data. Within each database there are four types of table, having structures appropriate for grid levels, cut features, special finds and bulk finds.
• A selection condition for retrieving data from the table. This is entered by the user as the WHERE clause of an SQL SELECT sentence. For example, the clause

gridx > 30 and gridx < 40 and cuts = '16.501'

applied to a cut features table would instruct the application to retrieve only data relating to a strip between 30 and 40 metres from the site origin, and where the features cut through context '16.501'. The complete SELECT sentence is constructed by the application, by prepending the list of table columns required to generate the reconstruction, and, in some cases, by appending an ORDER BY clause. (Presorted data are necessary, for example, when checking grid level sequences as described in the following section.)
More detailed descriptions of the specific applications written to cope with the different classes of data appear in the following sections.
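The way the applications assemble a full SELECT sentence around the user's WHERE clause can be sketched as follows. This is an illustration of the described behaviour, not the project's actual C/ADS code, and the column and table names are invented for the example.

```python
def build_select(columns, table, where=None, order_by=None):
    """Assemble a SELECT sentence the way the applications described
    above do: the program supplies the column list (and any ORDER BY
    clause), while the user supplies only the WHERE clause."""
    sql = "SELECT {} FROM {}".format(", ".join(columns), table)
    if where:
        sql += " WHERE " + where
    if order_by:
        sql += " ORDER BY " + ", ".join(order_by)
    return sql

# Column and table names are illustrative, not the project's schema.
sentence = build_select(
    ["gridx", "gridy", "topz", "cuts"],
    "cut_features",
    where="gridx > 30 and gridx < 40 and cuts = '16.501'",
    order_by=["gridx", "gridy"],
)
print(sentence)
```

Keeping the user's input confined to the WHERE clause is what lets any subset of the data drive the graphics without the user ever needing to know which columns the reconstruction routine requires.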
37.4 The grid levels application
This application comprises four principal routines.
37.4.1 Data retrieval
Each row of a grid levels table corresponds to one surveyed point in 3D space. Seven columns are retrieved from each selected row, representing the X, Y and Z coordinates of the surveyed point, the numbers of the contexts lying above and below the point, and the names of the stratigraphic phases lying above and below. The user is given the choice of discarding context information and dealing only with stratigraphic phases, in which case the application amalgamates the data from those contexts that are adjacent and lie within the same stratigraphic phase.
The data are accumulated in memory as linked lists of C structures, with pointers assigned to connect each survey point with its previous and following point in the X, Y and Z directions. This structure allows the data to be accessed either as vertical 'pipes' of survey points (appropriate for the data checking routine described below), or as horizontal spreads of points which make it easy for the display routine to reconstruct the surface of any stratigraphic unit.
37.4.2 Data checking
This routine supplies the user with tabulated details of each survey point, flagged where necessary with warnings or errors. These may relate, for example, to breaks or reversals in the sequence of stratigraphic phases. It is most important that inconsistencies in grid levels data are corrected before the data are used to reconstruct surfaces, otherwise some bizarre geometric anomalies may be evident. A feedback loop has therefore been introduced at this point, whereby the software flags problems, the archaeologist corrects the raw data, and the data are checked again. This is repeated as often as required until 'clean' data result.
37.4.3 Volume calculations
The grid levels tables contain data which define the upper and lower surfaces of each stratigraphic unit at all points on a regular one metre square grid. The three dimensional extent of the stratigraphic unit can therefore be regarded as comprising a spread of volume units which have square cross-section in the horizontal plane, and flat but non-parallel upper and lower surfaces. This is in fact an approximation to the truth, since the four points on the top or bottom will not be strictly coplanar. In practice, however, they will be nearly so, since surface contours do not exhibit sharp local variations within a square metre. In geometric terms these units of volume are parallelepipeds, whose volumes can readily be calculated. The level of positional recording employed for bulk finds was, in effect, sufficient to tie each find to its enclosing parallelepiped, and aggregate weight figures for each class of bulk find were therefore known for each parallelepiped. These weights, together with the volumes calculated by this routine, allow finds densities to be calculated for use by the bulk finds dot-density display routine described in Section 37.6 below.
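The volume calculation just described can be sketched in a few lines. This is an illustration of the geometry, not the project's routine: with a square cross-section and (near-)planar top and bottom faces, the volume of one cell reduces to the cross-sectional area times the mean of the four corner-to-corner thicknesses. The corner heights below are invented example values.

```python
def cell_volume(top_z, bottom_z, cell_size=1.0):
    """Approximate volume of one grid cell of a stratigraphic unit.
    `top_z` and `bottom_z` are the four corner heights of the upper
    and lower surfaces. Treating each face as (nearly) planar, the
    volume is the square cross-section times the mean thickness."""
    thickness = sum(t - b for t, b in zip(top_z, bottom_z)) / 4.0
    return cell_size * cell_size * thickness

# One metre cell with gently sloping surfaces (heights in metres,
# illustrative values only).
top = [10.30, 10.35, 10.32, 10.28]
bottom = [10.05, 10.12, 10.10, 10.02]
print(round(cell_volume(top, bottom), 3))
```

Summing `cell_volume` over every cell of a stratigraphic unit gives the soil volume against which the aggregate bulk-find weights can be divided to yield finds densities.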
37.4.4 Surface reconstruction
This routine allows the user to select a stratigraphic unit and to reconstruct its ground surface graphically. This is done by taking each set of four adjacent survey points in turn and generating a Coons surface patch which interpolates the points. A Coons patch (Coons 1967) is a bicubic surface which interpolates any four edge-connected 3D curves, and is implemented as AutoCAD's EDGESURF command. Where only three adjacent points exist, as may happen at the edge of a stratigraphic unit, a triangular polyface mesh is generated instead with the RULESURF command.
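The Coons construction behind EDGESURF can be sketched directly. The fragment below evaluates the simplest, bilinearly blended form of a Coons patch (AutoCAD's implementation is bicubic, and this code is an illustration only, not its algorithm): the patch is the sum of two ruled surfaces through opposite pairs of boundary curves, minus the bilinear interpolation of the four corners, which both ruled surfaces have counted.

```python
def coons_point(c_bottom, c_top, c_left, c_right, u, v):
    """Evaluate a bilinearly blended Coons patch at (u, v) in [0,1]^2.
    Each boundary is a function of one parameter returning an
    (x, y, z) point; corners must agree where the curves meet."""
    def blend(*pts_w):
        # Weighted sum of (point, weight) pairs, componentwise.
        return tuple(sum(w * p[i] for p, w in pts_w) for i in range(3))
    # Two ruled surfaces: bottom/top blended in v, left/right in u.
    ruled = blend((c_bottom(u), 1 - v), (c_top(u), v),
                  (c_left(v), 1 - u), (c_right(v), u))
    # Subtract the bilinear corner interpolation counted twice above.
    corners = blend((c_bottom(0), (1 - u) * (1 - v)),
                    (c_bottom(1), u * (1 - v)),
                    (c_top(0), (1 - u) * v),
                    (c_top(1), u * v))
    return tuple(r - c for r, c in zip(ruled, corners))

# Illustrative flat boundaries: the patch should reproduce the plane z=0.
def flat(t, x0, y0, x1, y1):
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0), 0.0)

p = coons_point(lambda u: flat(u, 0, 0, 1, 0), lambda u: flat(u, 0, 1, 1, 1),
                lambda v: flat(v, 0, 0, 0, 1), lambda v: flat(v, 1, 0, 1, 1),
                0.5, 0.5)
print(p)
```

Evaluating such points over a regular (u, v) lattice yields exactly the kind of polygon mesh that AutoCAD builds between the four edge curves.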
P. L. MAIN, A. J. SPENCE AND T. HIGGINS
Figure 37.5: Schematic representation of the technique for modelling cut features.

Each complete reconstructed stratigraphic unit is assigned to a separate AutoCAD 'layer' in the drawing, so that each can be switched on or off independently, or coloured differently. Since the surfaces generated are 3D polygon meshes, they will 'hide' objects they overlap if hidden line removal is applied. This gives added clarity and realism to 3D views of the surfaces. It is also possible to render the surfaces under chosen lighting conditions using AutoCAD's rendering facilities.

37.5 The cut features and special finds applications

37.5.1 Cut features generation
This routine models cut features (typically post-holes or pits) using measurements of position and size stored within a cut features SQL table. The measurements utilised to reconstruct the features are the X, Y and Z coordinates of the centre of the top of the feature, the top and bottom widths of the feature (w1 and w2), and its depth (d). The measurements are those routinely taken on site and are sufficient to allow a simple stylised reconstruction in the form of a truncated cone. Following the user's specification of database, table and selection conditions, this routine generates one truncated cone, correctly located in 3D space, for each row retrieved from the table. This is achieved by drawing a line corresponding to the central vertical axis of the feature, a polyline whose locus is the top, edge and bottom of the feature, and calling AutoCAD's REVSURF routine to generate a solid of revolution by rotating the polyline around the central axis (see Figure 37.5). The original line and polyline are then deleted from the drawing, and a link created between the solid of revolution and the row of the SQL table which was used to generate it.
The solids of revolution generated by this application are a particular type of polyface mesh, and as with reconstructed ground surfaces, can therefore have hidden line removal or rendering applied to give more realistic 3D views. Figure 37.6 shows an AutoCAD reconstruction of exposed pits and post-holes from Area 19.

37.5.2 Volume calculation
This routine calculates the volume of all features selected by the user, using a formula based on the volume of a cone, and writes the results back to a specified column within the table.

37.5.3 Special finds generation
This routine accesses a special finds SQL table and draws markers to represent the positions of the finds in 3D space. Each selection from the table can be associated with a particular type of marker, chosen from a file of visually distinctive and customisable symbols. Markers are drawn with AutoCAD's SHAPE command, which also allows markers to be drawn in different sizes if required. Further refinements to the display can be applied by using the layer allocation routine described below to assign different colours and layers to the marker groups.
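The cone-based volume formula mentioned in the volume calculation routine can be sketched as the standard frustum formula. The text does not give the exact expression used, so this Python illustration assumes the feature is the truncated cone of the reconstruction, with the recorded widths w1 and w2 taken as top and bottom diameters; the example measurements are invented.

```python
import math

def cut_feature_volume(w1, w2, d):
    """Volume of a cut feature modelled as a truncated cone with top
    width w1, bottom width w2 and depth d. Widths are diameters, so
    with radii w/2 the standard frustum formula
        V = (pi * d / 3) * (r1^2 + r1*r2 + r2^2)
    becomes V = (pi * d / 12) * (w1^2 + w1*w2 + w2^2)."""
    return math.pi * d * (w1 * w1 + w1 * w2 + w2 * w2) / 12.0

# A hypothetical post-hole: 0.6 m wide at the top, 0.4 m at the base,
# 0.5 m deep.
print(round(cut_feature_volume(0.6, 0.4, 0.5), 4))
```

Writing such a value back into a column of the cut features table is exactly the kind of read-write use of the ASE link described in Section 37.3.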
37.5.4 Layer allocation
This routine accesses an existing AutoCAD drawing which already contains modelled cut features and/or special finds markers. It allows the user to select subsets of the entities and assign them to different AutoCAD layers, choosing an appropriate colour for each layer. The user can select the members of each subset manually (i.e. by pointing at the features and clicking the mouse), or by executing an SQL selection sentence on the table to which the entities are linked, or by a combination of both methods. Where manual selection is used, the user may optionally connect the centres of selected cut features with straight lines to delineate, for example, the edge of a putative structure. The alternative method of selection is potentially powerful since the selection condition can involve any data contained within the table, not only the measurements used to generate the reconstruction. Provided appropriate data have been recorded in the table, one could, for example, assign all post-holes that have been cut into a specified stratigraphic context to a single AutoCAD layer.
37.6 The bulk finds application
This application comprises two principal routines, the first of which draws a 3D grid of lines marking the boundaries of each volume unit with metre-square cross-section, and the second of which populates each volume unit with a random spread of AutoCAD POINT entities to represent a distribution of bulk finds.
37.6.1 Drawing the grid
The data utilised from the grid levels and bulk finds tables comprise the 3D coordinates of the vertices of each volume unit, and the density, within each unit, of each
COMPUTER-AIDED DESIGN TECHNIQUES FOR THE GRAPHICAL MODELLING OF DATA
Figure 37.6: AutoCAD reconstruction of selected cut features from Area 19 at Runnymede.
bulk finds type as derived from the volume calculations within the grid levels application. The grid is constructed by drawing lines between the corners, giving the impression of truncated cubes. Since the context surfaces involved are relatively flat, provision has been made for the user to specify an exaggeration factor to enhance non-obvious features.
37.6.2 Populating the grid
The number of points plotted within a volume unit is related to the density of the finds type. Both linear and logarithmic mappings are provided to allow experimentation with the weighting of the high and low densities. An upper limit of 300 points per volume unit is, however, imposed so that AutoCAD is not overstretched during the drawing and regeneration procedures. Figure 37.7 shows part of the grid from Area 16 populated with points. The X and Y coordinates of each point are determined using randomly generated numbers between zero and one as offsets from the bottom left-hand corner of each context. Each point thus has coordinates (x1+dx, y1+dy) where dx and dy are the offsets. Calculating the Z coordinate is less straightforward. Although the limits to the X and Y coordinates are determined by the geometry of a square grid, if we include the Z coordinate we find that the corners are not coplanar. There are no obvious single upper or lower values that can be used as the Z coordinates at interior points of the volume units. If we assume that the lowest Z value is the Z coordinate of the lowest corner, and similarly for the highest Z value, then some points will lie outside the grid when plotted. If there
is another context immediately above or below the current context then there will be space common to two contexts which will be doubly populated. This will lead to 'false features' being displayed as bands of dense points. To overcome this problem a 'minimum' surface passing through the corners is calculated. Thus, for a given dx and dy lying between zero and one, the Z coordinate is calculated as:

z = (1-dx)(1-dy)z1 + dx(1-dy)z2 + (1-dx)dy z3 + dx dy z4

where z1, z2, z3 and z4 are the Z coordinates of the four corners of the volume unit.
This value is calculated for the current context, and for that immediately overlying it, with the final z value chosen to lie between these two. Although this procedure makes a particular assumption about the behaviour of contours between grid points, some assumption has to be made, and this one does ensure that the points populating adjacent volume units cannot overlap. Figure 37.8 shows a sectional dot-density view of Area 16 with clearly visible zoning of bulk finds.
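The population procedure can be summarised in a short sketch. The corner-surface formula is the one given in the text; the proportional density-to-count mapping, the logarithmic scale factor and the choice of the midpoint between the two surfaces as the final z are assumptions for illustration only.

```python
import math
import random

MAX_POINTS = 300  # cap quoted in the text, to avoid overstretching AutoCAD

def corner_surface_z(z1, z2, z3, z4, dx, dy):
    """Bilinear 'minimum' surface through the four corner heights."""
    return ((1 - dx) * (1 - dy) * z1 + dx * (1 - dy) * z2
            + (1 - dx) * dy * z3 + dx * dy * z4)

def populate_unit(x1, y1, corners_lo, corners_hi, density, log_scale=False):
    """Scatter points in one metre-square volume unit.

    corners_lo/corners_hi are the four corner Z values of the current
    context and of the one immediately overlying it; the final z is
    taken midway between the two surfaces (an assumption -- the text
    says only that it lies between them).
    """
    n = int(round(math.log1p(density) * 50)) if log_scale else int(round(density))
    n = min(n, MAX_POINTS)
    points = []
    for _ in range(n):
        dx, dy = random.random(), random.random()
        z_lo = corner_surface_z(*corners_lo, dx, dy)
        z_hi = corner_surface_z(*corners_hi, dx, dy)
        points.append((x1 + dx, y1 + dy, (z_lo + z_hi) / 2.0))
    return points
```

Because each unit's z values are confined between its own surface and the one above, the points populating adjacent volume units cannot overlap.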
37.7 Software assessment
Certain of the AutoCAD facilities we have exploited in developing these applications have proved to have shortcomings, or have been less easy to utilise than we would have wished. When considering these, however, it should be borne in mind that AutoCAD is a powerful, broadly based package designed for the wider needs of computer-aided design and was not specifically designed to cater for the often specialised and demanding requirements of archaeological graphics. It is a testimony to its power and flexibility that we have been able to achieve all that we have with very moderately sized
P. L. MAIN, A. J. SPENCE AND T. HIGGINS
Figure 37.7: Three-dimensional dot-density representation of bulk finds from part of Area 16, overlaid with context grid.
Figure 37.8: Sectional view of Area 16 with clearly visible zoning of bulk finds.
segments of C program. Most of the lower-level graphical routines were already available within the package; much of the code we have written has been to call these up in the correct sequence and to accommodate the particular data categories and coding employed on the Runnymede excavation. The need to convert dBASE IV files to SQL tables to render them accessible to AutoCAD has been something of an irritation. Once these have been converted, the data can still be accessed and modified using dBASE's ASSIST menu system, but certain operations, such as modifying the file structure in place, become impossible. Furthermore, any dBASE 'memo' fields need to be deleted before the SQL tables are generated, otherwise any attempt to access the SQL table from AutoCAD fails with a 'corrupt table' error. The ability to create links between graphical entities and rows of an SQL table has been of great value to us, particularly within the cut features and special finds applications. AutoCAD's ASE interface, however, has clearly been designed around creating these links interactively using 'point and click' methodology. It was essential for our purposes to be able to create the links programmatically, at the same time as the graphical entities were created. It has proved possible to do this using a combination of trial and error and intelligent guesswork to call the linking routine in a way that is not fully documented. The process of creating links in this way seems relatively slow, and it can take many minutes to generate a cut features drawing of any complexity. It is to be hoped that programmatic generation of entity/SQL
table links will be fully documented and supported in future releases of AutoCAD. The use of AutoCAD 'shapes' to represent special find locations is not ideal. The result is only satisfactory in plan views, since shapes are two-dimensional and lie in the X-Y plane. When distributions of artefacts are viewed in section, they are visible only as lines and are effectively indistinguishable except by colour. Furthermore, being two-dimensional, they cannot be rendered and disappear altogether if rendering is applied. Ultimately we plan to develop a range of three-dimensional solids to use as icons for special finds, although colour will become the major discriminator since different types of three-dimensional solid are notoriously difficult to distinguish at small scales. Although standard AutoCAD facilities allow only static views to be created, we see great potential for the user being able to interactively control his viewing position and 'walk' around and into rendered images. We have been able to achieve this with additional hardware and software, namely a Matrox MGA Impression graphics card and the supplied Dynaview driver for AutoCAD. Such facilities are not merely luxuries; they are important for examining the relationship between features, artefacts and their stratigraphic context. The 'right' view, which illuminates some aspect of the data, may be very difficult to find without exploring the data in this way. Finally, it should be pointed out that we have not yet been in a position to fully exploit the software we have written. We have been working primarily with a small
core of test (though real) data relating to an LBA midden deposit in Area 16 (Needham and Spence, forthcoming), whose choice has been dictated largely by publication priorities. Although we also have cut features data for Area 19, we have not yet been able to generate composite reconstructions of all types of data on a single drawing. When this becomes possible, further modifications and improvements to the software will no doubt suggest themselves.
37.8 Archaeological assessment
The aims of graphical computerisation for the Runnymede project are to provide ways of representing bulk find, special find and cut feature information which aid visual interpretation. Initially much of the analysis can be carried out using sections and plans, but there are times when three-dimensional studies are essential. Additional benefit has been gained by the ability to study finds as densities as well as aggregated weights. Some problems remain, however. The difficulties in accurately determining soil volumes have already been outlined, and these apply particularly to the smaller soil bodies. To a certain extent errors can be reduced by amalgamating contexts into stratigraphic phases, although this will then mask detail. In analysing the Area 16 East midden deposits the individual spit information was gathered into phases to spot long term trends, but local variation was detected from studies of individual categories. It is also inevitable that cut features from more than one spit will be used when searching for alignments, as during excavation there were problems at times locating the 'real' tops of the features, resulting from the similarity of the fills to the surrounding soil matrix. In representing the data graphically, there is also an important consideration to be taken into account. The cut feature data can be accurately represented in three-dimensional space, even if only stylistically. Bulk finds data can only show distributions schematically within each metre square, while special finds fall between these two categories. For those special finds identified immediately on excavation, precise three-dimensional pinpointing is possible, whereas those recovered in finds processing can only be shown as being located 'somewhere' within their metre square spit. Given that the interpretation of much of the site detail will require very careful analysis of distribution, a distinction will need to be drawn between the precise and the approximate plots.
It is probable that the computer graphics will point the way towards significant boundaries and distributions, but the final conclusions will need to be confirmed from the site plans, which contain detail not available within the computer records.
There is a temptation to regard the excavated data as objectively recovered, whereas perceived spatial patterning may merely reflect the excavation method, or even the abilities of the individual excavators. On-site practices such as sieving 'control columns' to recover all finds from certain squares, or using different teams on neighbouring trenches, help to reduce this human factor. Confidence in the interpretation can be increased by checking results against another dataset. Given that most archaeological processes are far from straightforward, it is the ability to try many different permutations of data that makes computerisation worthwhile. As a spin-off, the archaeologist becomes more explicit in formulating hypotheses; in asking these questions the limitations of the data often become apparent. The use of the AutoCAD system will also allow the comparison of Runnymede data with results from elsewhere. So far there has been little opportunity to do this, but it is hoped to compare the Neolithic house ground plan in Area 19 with other excavated examples from Britain, and inputting these other plans will also provide a useful resource database. Perhaps the ultimate objective for the computerisation of Runnymede is to attempt a partial reconstruction of the site. While such graphical reconstructions must make many assumptions, they undoubtedly concentrate the excavator's mind on how the site may have looked and functioned. Ultimately, this must surely be the purpose of archaeological excavation.
Acknowledgement The authors would like to thank Dr Stuart Needham, director of the Runnymede excavations, for his participation and support throughout this project.
References
COONS, S. A. 1967. Surfaces for Computer Aided Design of Space Forms, MIT Project MAC, TR-41. MIT, Cambridge MA.
LONGLEY, D. 1980. Runnymede Bridge 1976: Excavations on the Site of a Late Bronze Age Settlement. Surrey Archaeological Society Research Volume No. 6. Surrey Archaeological Society, Guildford.
MAIN, P. L., HIGGINS, T. F., WALTER, A., ROBERTS, A. J. & LEESE, M. N. 1994. 'Using a three-dimensional digitiser and CAD software to record and reconstruct a Bronze Age fissure burial', in K. Lockyear & J. Wilcock (eds.), Computer Applications and Quantitative Methods in Archaeology 1993. British Archaeological Reports, BAR Publishing, Oxford.
NEEDHAM, S. P. 1991. Runnymede Bridge 1978: Excavations and Salvage on a Bronze Age Waterfront. British Museum Press, London.
NEEDHAM, S. P. & MACKLIN, M. G. (eds.). 1992. Alluvial Archaeology in Britain. Oxbow Monograph 27. Oxbow Press, Oxford.
NEEDHAM, S. P. & SPENCE, A. J. (forthcoming). Runnymede Bridge Research Excavations, Volume 2, Area 16 East: A Late Bronze Age Midden and Underlying Neolithic Remains. British Museum Press, London.
38 The Archaeological Data Archive Project Harrison Eiteljorg II Center for the Study of Architecture, Bryn Mawr, USA
38.1 Introduction
The use of computers to record archaeological field data and to assist with individual scholarship has grown exponentially in the last decade. However, careful attention to problems with the storage and preservation of data files has not accompanied this growth. Too many files lie unattended and ignored on university mainframes, on hard disks on desktops, or on floppy disks in drawers. After they have served their original purposes, the files are, for all practical purposes, forgotten. No less than the notebooks, plans, and catalogues, though, computer files from excavations are important records. Their preservation is crucial to scholarship, and access to them is no less crucial. In the case of data sets gathered by individual scholars, the importance of the files to other scholars varies widely. Nonetheless, the labour which was spent to create the data sets should not be wasted through neglect. Unfortunately, neither individual scholars nor the universities that sponsored their work are well prepared and equipped to deal with the problems of data storage. Therefore, an archive to house and care for data sets of value to archaeological research should be created. That is the goal of the Archaeological Data Archive Project.
38.2 The Archaeological Data Archive Project
The idea of the Archaeological Data Archive Project (ADAP) grew out of a discussion at a meeting of the computer committee of the Archaeological Institute of America. Although the participants in that first discussion were uniformly excited about the potential of network access to computer data, on reflection we realised that there were immediate problems with the storage and preservation of such data. In particular, we were concerned about files from excavations and other data files that form the core of our knowledge base. A long and careful process of examining goals and exploring possibilities followed. During that time it was decided that the archival concerns deserved first priority and that the Archaeological Data Archive Project should be independent of academic or professional groups. The ADAP is directed by Harrison Eiteljorg, II, and operates as a unit of the Center for the Study of Architecture, which was already building an archive of CAD models. Initially, the appeal of an archive had more to do with access to computer data than with storage and safety. The idea of providing Internet access to huge quantities of data
is so appealing to us all that it is seductive. However, it has become clear that the more crucial issue for the moment is the preservation of data files that are at risk and the building of an archive to preserve the files that are being created even as we speak. Although scholars understand that paper records are subject to many kinds of damage and decay, we have often assumed that computer files are far more stable than they actually are. In reality, data storage media are subject to decay. Unfortunately, the damage is generally recognised only when access to the data shows problems; then it is usually too late to rectify the situation. Less obvious, but equally devastating, the data in computer files have often been compiled to assist with analysis for the person who created the files and no one else. As a natural result, the utility of the files is severely limited if they are used by anyone other than their creator. Taken together, these problems with existing computer data make urgent the task of proper storage and preservation. We must begin now so that we do not lose more information, and, in fact, the ADAP has actually begun to accept data files. Early uses of computers for archaeology necessarily involved data cards, tapes, and mainframe disks. As time passed, of course, the development of the microcomputer led to the use of floppy disks and desktop hard disks. How much data resides on cards that may now be unreadable, tapes that may now be without appropriate tape drives, or floppies made for machines long since antiquated, we do not know. Nor do we know how much data may have been left on a hard disk, unused and unrefreshed, for years. That is one of the reasons the task is urgent. An appropriate archive must be prepared to deal with all those storage forms - not to mention file formats - simply to preserve the information. To be realistic, however, we know we cannot deal with all media; so we must make some difficult choices.
The first choice has to do with the physical media on which data lie. The ADAP can deal with media from PCs, Macs, and Sun workstations without difficulty. Of course, files can also be sent over the Internet. What about the other media? Here there is no simple answer. We have, for instance, been assured of the co-operation of a colleague should we have KayPro disks. He can access the information and transfer it to DOS-formatted disks; other CP/M disks may also be accessible with this system. One scholar has asked about dealing with disks from Apple IIs, and we have found what must be done to accept them. Those are relatively easy problems. Dealing with data cards, tapes, and other such media is another matter. We cannot predict what problems we will
face until someone brings us a specific request. The ADAP will certainly not own card readers or tape drives which have become obsolete, but we can expect to have help from those who have the data and from others who are experiencing similar problems with old media. After all, we are not alone in this. We have, for instance, heard from one IBM employee who spent years dealing with old files at IBM, and the US government has been obliged to give up on some important data files because they were on tapes no longer supported. It is doubtless too late for some files, but we can prevent the unnecessary loss of more by starting this process now. Similarly, we cannot predict the file format problems we will face, but we can be sure that there will be some we have not expected. Fortunately, ASCII provides a fallback choice that, generally speaking, will preserve the data adequately. (Sooner or later, Unicode or the 32-bit ISO standard will probably supersede ASCII.) But many files should be preserved in far more complex formats so that they can be used to their fullest. It would be a pity to settle for ASCII files as the lowest common denominator if we start with files in sophisticated database formats. On the other hand, ASCII files permit anyone to use the information; access to a specific database management system is not required. But how many different formats can we store? How many should we keep? Here there is a difference between the archival duty and the question of effective access. It may be argued that the archival purpose is satisfied with ASCII files, but those files can be far less effectively used than the data files which preserve the complexity of a good database management system. Indeed, some of the data complexity would surely be lost if the ASCII files were not accompanied by thorough descriptions of the data files, the relationships between and among fields, the authority lists used, and so on.
I have not answered the question posed about what formats should be accepted because, at least for the moment, we will again emphasise our archival responsibility and accept any format. That may seem ridiculous; some file formats will be all but useless. But we and our colleagues are better served if we have the files well preserved and properly maintained in an obsolete format; such files can be translated if they are important. Once left to decay, however, they are unrecoverable. Whenever possible, we will ask that files be supplied in their native format and ASCII. If the database management system will do so automatically, we will also ask that the files be supplied in .dbf format, since that seems to be a widely-used effective standard. More important than the file format, however, will be the documentation that accompanies the files. That documentation should make it possible for a user who has access to nothing more than the published material about a site to utilise the files. The descriptions of files, fields, relationships, and so on must be complete and accurate; they must make absolutely clear the ways the data can be
fitted together. Without such documentation, the data files can never be fully understood. But we cannot refuse to accept files if there is no documentation. To do so would be to deny our archival function. Therefore, the ADAP will, indeed, accept files that have no accompanying documentation if and only if the scholars who created the files are no longer able to supply such documentation. Of course, we hope that will not be necessary, because undocumented files are of so little value to others until someone has spent the time required to document them. One of the functions of the ADAP will be to make certain that, once a part of the archive, files will be maintained in the most current and useful formats possible. No matter the format received, the files will be maintained and migrated into new formats as old ones fall out of use. This, of course, will be one of the most valuable services performed by the archive. Individual scholars will not be required to transfer their own data from old to new formats, though they will obviously do so with data files still in use. Over the very long term, this will be a crucial service, as files pass from generation to generation, with each new generation providing more effective data access. That brings us back to the original interest of the participants in the first discussion of the ADAP - access to the archive. The ADAP is not and will not be a closed archive, accessible only to a select few. It will be a networked archive, open to all who have access to the Internet and its successors, as well as anyone with a computer. The files stored in the archive will be available either over the networks or on disk. No files will be accepted unless they may be made public, though we will not require that they be made public at the moment they are contributed to the archive. (Access to some data items may be restricted, however, to keep potential looters from learning the exact location of a site, for example.)
Files to be archived include, first and foremost, any file from an excavation. Other files, however, will also be archived - catalogue files, CAD models, GIS files, images, authority lists, and so on - in sum, any data file; we do not expect to archive text files other than those text files that are required to describe data files. The archive will accept files from any cultural, geographic, or chronological area. Many individual scholars and institutions will wish to discharge their archival responsibilities directly and will not, therefore, want their records archived elsewhere. The ADAP will co-operate with them, assisting in those ways possible, and will use the facilities of the network to include their files in the system without duplicating them on site, when possible, or to provide information about where and how to find them. An archive such as this raises numerous thorny but important problems about standards, particularly because easy access to information - not simply data files - is severely limited by an absence of standards. In fact, it is
tempting to begin working on the standards questions first, waiting to work on the archive when the standards have been established. We have decided, however, that the urgency of the archival needs - coupled with the likelihood that creating standards will be both a protracted process and, at best, influence only future computer files - requires us to move forward with the archive. Here again, the need for preservation supersedes the need for information. We will work on issues involving standards, and have begun to do so already. But we will not delay the important archival work, nor will the ADAP attempt to apply standards which are not fully accepted by the archaeological community.
In conclusion, I should say a few words about the progress of the ADAP since its announcement last November. I have been delighted by the response to the announcement, indeed, nearly overwhelmed. We have had offers of data, and, in fact, some files are on their way to the ADAP now. We have had numerous offers of support and questions about process - from people in the computer world as well as scholars. We have also learned about relevant projects and are actively co-ordinating our efforts with those of other scholars. But as we stand at the very beginning of the computer era, so we stand at the start of the process of archiving archaeological computer information. Indeed, the direction is so uncharted that I hope all of you will help to guide the process with your interest, your suggestions, and your expertise. We have a long way to go, but the journey should be exciting, rewarding, and full of surprises.
39 A new method of off-line text recognition Susan Laflin School of Computer Science, University of Birmingham. Email: [email protected]. uk
39.1 Introduction
The problem of recognising handwritten text as a means of data input to a computer has been with us almost as long as computers themselves. Some areas have achieved a large measure of success, others very little. The main problem is the lack of consistency in human handwriting - humans are generally better at perceiving the intended word lying behind the scrawl on the paper. Computer methods are less good at guessing what is intended, and very few computer-based methods use anything like the amount of background information on context and likely meaning that the human reader applies without consciously realising what he or she is doing. Other problems arise because of the poor quality of many of the surviving texts and, to a lesser extent, the use of 'obvious' abbreviations which can usually be guessed by the human reader but must be explicitly defined before they can be handled automatically by the computer. As usual, to tackle this large and ill-defined problem, the first stage is to subdivide it into smaller, more closely-defined problems. The first division is into on-line or dynamic methods contrasted with the off-line or static ones. With an on-line method, the data is captured as it is written. Letters and words are written on a digitising tablet or pad and successive coordinates of pen-position are recorded and the whole is deciphered by the computer (Tappert 1990). This records not only the pen path - giving information about both the shape of the letters and the order in which the strokes are made - but also the speed with which different parts of the word are drawn, and any pauses made by the writer during the word. All this information is lost once the writing is complete. With this extra information, some allowance can be made for inconsistencies in human writing and this improves the success rate of recognition.
Recently a spate of portable 'notebooks' allowing handwritten input have been released on the market, and with most of these the recognition has reached an acceptable level. They require an initial training session to accustom the system to the style and idiosyncrasies of the owner, and after that may be used by this one writer with an acceptable level of success. By contrast, the off-line methods have only an image of the completed document to work on. Typed or printed documents have a much greater consistency, and once a system has been trained on that particular typeface it will recognise most of the document. Problems still arise on faded or discoloured areas, but in general scanning and automatic recognition are quicker than re-typing the whole document into the computer. The next problem, in
order of difficulty, is the recognition of handwritten characters, either letters or digits. This is of particular interest when these characters make up post-codes and their recognition is necessary for automatic sorting of mail. Some success has been achieved and the current state of the art is described in a special issue of Pattern Recognition Letters (Tominaga 1993). The final and most difficult problem is that of cursive handwriting, and there are two main approaches to it. One of them attempts to extend the experience gained with on-line methods to apply this to the off-line problem. The aim is to identify pen-strokes within the image, and then build these up into letters and then words. Much work has been done on this approach (e.g. Helsper & Schomaker 1993; Helsper et al. 1993) and success has been obtained in a number of cases. However, many cases still remain where the human reader is much more successful than the computer system, and these have led me to devise an alternative approach, based on the methods used by the human reader.
39.2 Global method
Let us for a moment consider how the human reader deals with a difficult passage in a handwritten text. We have all met the problem - whether in our own or someone else's handwriting - of the illegible phrase. How do we tackle it? For myself, I am rarely successful in trying to spell it out letter by letter. Far more often I puzzle over it for some time and then, suddenly, solve the entire phrase, not just a single word within it. Sometimes the whole problem arises because of the misreading of an earlier word, and when this is corrected the rest falls into place. In all this process, it is a little difficult to watch myself and decide what I am really doing. I am aware of failing to recognise the phrase at the first attempt, of puzzling over it for some time and, during this time, of trying and rejecting certain possibilities, and finally a feeling of jubilation when a successful match is found and the whole text falls into place. Now I need to produce a similar process, explicitly, on a computer system. The main reason for the greater success of the human reader in deciphering these difficult cases comes from knowledge of the context of the word. The human reader does not attempt to compare all the words in the language with the unknown word, but uses all experience of the writer and subject to restrict comparison to those which are believed to be reasonable. Sometimes a change in interpretation of another word or words in the phrase may extend the scope of what is reasonable and allow a fit
where one did not previously exist. To design a similar system for the computer, it is necessary to define the method explicitly. Instead of vague ideas about what is likely or unlikely from that particular correspondent, the computer system will require a list of words defining the vocabulary to be compared with the unknown word-images within the document. This vocabulary will have to be supplied by the users, who know that the manuscript to be recognised comes from a particular context, and this context can be used to give a list of probable words and phrases. In addition, the manuscript is known to be written in a particular hand, and so the computer will need precise instructions for generating an image of the handwritten word from the ASCII string. This image can be compared with the unknown word-images in the manuscript and, when matches are found, the problem of recognition has been solved. The process can be repeated for every word in the vocabulary. This, very briefly, is the general idea behind my current research. I call it a 'global' method because it compares complete words (blocks within the image of the document) rather than attempting to analyse these into individual strokes and then build them up into letters and words. From the above discussion, it should be clear that this very large problem can be broken down into stages and each one tackled separately. There will probably be several methods which may be applied to some of the stages, and these will need to be compared and the most successful ones included in the system.
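A minimal sketch may clarify the comparison stage. The pixel-agreement score used here is an assumption, a crude stand-in for the fuzzy matching, geometric distortion or neural net methods the text anticipates; it simply picks the vocabulary word whose generated image best matches the unknown word-image.

```python
def match_score(candidate, unknown):
    """Crude global comparison of two binary word-images (nested lists of
    0/1 pixels, assumed pre-scaled to the same size): the fraction of
    pixels on which the two images agree."""
    total = agree = 0
    for row_c, row_u in zip(candidate, unknown):
        for pc, pu in zip(row_c, row_u):
            total += 1
            agree += (pc == pu)
    return agree / total

def best_word(vocabulary_images, unknown):
    """Return the vocabulary word whose generated image best matches the
    unknown word-image -- the core of the 'global' method."""
    return max(vocabulary_images,
               key=lambda w: match_score(vocabulary_images[w], unknown))
```

In practice each vocabulary image would be generated from the ASCII string by the text-generation stage described below, and the whole comparison repeated for every unknown word-image in the document.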
39.2.1 Text generation

This involves a study of the methods of writing the various hands in order to generate an image for any given ASCII string. Work has already started in this area and is described later in this paper.

39.2.2 Classification of manuscripts

This requires a study of different types of manuscript from different periods to produce the relevant lists of words and phrases. The lack of any standardised spelling may make these lists very long, or may require their amendment during use.

39.2.3 Segmentation of document images

When a page of a document has been scanned, it will be necessary to subdivide the image into lines of text and the lines into individual words. It is likely that automatic methods will not be entirely successful and will need to be checked and edited interactively.

39.2.4 Comparison of word images

The generated images of known words have to be compared with the images of unknown words in the manuscript and matches sought. The success or failure of this section is crucial to the whole approach. It is likely that the many possible methods will need to be compared and a choice of the more successful ones included in the package. Since human handwriting is frequently variable, it will be necessary to try fuzzy matching techniques, various distortions of the geometry of these images, and neural net methods to find suitable methods for these comparisons.

39.2.5 Testing of prototype system

Once at least one method exists for each section, it will be possible to produce a prototype system and invite users to test it and comment on the facilities needed.

39.3 Text Generation

The problem may be defined very simply: given any ASCII string, generate an image of the corresponding handwritten word. This requires an analysis of the way in which the handwriting was formed in order to produce an image for comparison. Modern texts on calligraphy (e.g. Hardy Wilson 1990) describe the production of a number of hands in terms of the pen-path for each of a number of strokes, with the pen-width and angle to the horizontal assumed constant throughout. This suggested that a first approximation should model the hand using these three variables. The description of the gothic (blackletter) hand required a broad pen held at an angle of about 35° to the horizontal. This was simulated by drawing a line of width w/2 at this angle on each side of the pen-position. The pen-path was produced by recording a series of (x,y) coordinates and joining them by a sequence of straight lines (polyline). This produced a first approximation, and the gothic hand was chosen because it was well suited to this method: the individual strokes making up the letters are linear and the script is not truly cursive, so that words may be generated by the juxtaposition of the individual letters without having to bother about a smooth join.

Figure 39.1 shows an example of the output from this initial experiment, with the ASCII characters down the left-hand side and the corresponding 'handwritten' output in the centre of the page.

Figure 39.1: Example output from the experimental text generator.

There are a number of modifications which must be included in the next version. In the book, the height of the script is described in terms of pen-width - in one place the gothic hand is said to have a height of five or six nib-widths and be written with an angle of 45°, while later on the blackletter gothic is said to have a height of three to five nib-widths and an angle of 30-40°. This suggests that the coordinates describing the pen-path should be normalised to fractions of a nib-width rather than millimetres on the page. Ascenders and descenders appear to be two nib-widths above or below the body of the text, and this ratio can be checked. Also there are a few thin flourishes, which are probably produced by the edge of the nib and need to be coded separately. These are minor additions, but they would improve the appearance of the text produced by this program. The major problem encountered when using this book is that it assumes modern equipment, especially metal nibs which remain constant in shape for the majority of their lifetime and which can be replaced by another identical one when worn out. This was certainly not the case when medieval manuscripts were written - the quill pens then in use had to be re-sharpened at very frequent intervals and were unlikely to be of identical shape or thickness after each operation. Also a very slight change of pressure would produce a large change in the thickness of the resulting line. All of this suggests that the 'constants' of modern calligraphy may have been far from constant in the ancient manuscripts with which the comparisons are desired.
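The first approximation just described - a broad nib of fixed width held at a fixed angle, swept along a polyline pen-path - can be sketched as follows; this is an illustrative Python reconstruction with made-up coordinates, not the original program.

```python
import math

# First-approximation broad-nib model from the text: the nib is a straight
# edge of width w held at a constant angle to the horizontal, and the
# pen-path is a polyline.  Each path segment then sweeps out a
# quadrilateral whose corners are the nib end-points at the two path points.

def nib_ends(point, width, angle_deg):
    """End points of the nib edge centred on `point`."""
    dx = 0.5 * width * math.cos(math.radians(angle_deg))
    dy = 0.5 * width * math.sin(math.radians(angle_deg))
    x, y = point
    return (x - dx, y - dy), (x + dx, y + dy)

def stroke_quads(path, width=4.0, angle_deg=35.0):
    """One quadrilateral (four corner points) per polyline segment."""
    quads = []
    for p, q in zip(path, path[1:]):
        a0, a1 = nib_ends(p, width, angle_deg)
        b0, b1 = nib_ends(q, width, angle_deg)
        quads.append((a0, a1, b1, b0))
    return quads

# A vertical down-stroke with a short diagonal foot, as in a gothic minim:
quads = stroke_quads([(10, 40), (10, 10), (14, 6)])
print(len(quads))   # 2 quadrilaterals for a 3-point pen-path
```

Because the nib angle is constant, a stroke parallel to the nib comes out thin and a stroke perpendicular to it comes out at full width, which is exactly the thick-and-thin contrast of the broad-pen hands.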
To produce a system which would fit more closely to the actual conditions of the time, a specimen sheet by the German writing master Gregory Bock (c.1510) (Friedman 1993, 70, Fig 3), showing how each letter was built up stroke by stroke, was used. This suggested that the angle of pen-nib to the horizontal was far from constant - indeed the angle between the pen-nib and the gradient of the pen-path is a more important factor, and the script has both thick, black strokes (with this angle close to 90°) and thin flourishes (with this angle close to 0°). In-between angles seem rare at a first inspection of this page. It also appeared that some of the lines of the pen-path were in fact curved, and so the polyline approximation would require many more points, whilst the number of strokes per letter should be increased from that shown in the modern text. If the pen-angle is to be defined as the angle between the nib and the gradient of the pen-path, this implies that the gradient as well as the coordinates must be recorded at the points defining the outline. This additional information immediately offers the alternative of representing the actual path as an interpolating cubic rather than a polyline. Such a cubic can also be used to provide smooth joins in a cursive script and thus to represent many other hands using the same techniques. The present software will be developed further during the summer of 1994, using the ideas discussed here, and incorporated in the full system in due course.
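The cubic alternative can be sketched briefly: if each pen-path point carries a recorded gradient (tangent vector), neighbouring points can be joined by a cubic Hermite segment instead of a straight line. The Hermite basis used here is my choice for illustration; the paper does not specify a particular cubic form.

```python
# If the gradient (tangent vector) is recorded at each pen-path point,
# neighbouring points can be joined by a cubic Hermite segment instead of
# a straight line, giving the smooth joins needed for cursive hands.

def hermite(p0, m0, p1, m1, t):
    """Point at parameter t (0..1) on the cubic with end points p0, p1
    and end tangent vectors m0, m1 (standard Hermite basis)."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return tuple(h00 * a + h10 * b + h01 * c + h11 * d
                 for a, b, c, d in zip(p0, m0, p1, m1))

# Sample a smooth curve between two pen-path points whose recorded
# gradients point right at the start and up at the end:
samples = [hermite((0, 0), (3, 0), (2, 2), (0, 3), i / 10) for i in range(11)]
print(samples[0], samples[-1])   # the recorded points are hit exactly
```

Smooth joins between letters fall out for free: if the tangent recorded at the end of one stroke equals the tangent at the start of the next, the two cubics meet with continuous direction.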
39.4 Conclusion
The aim of this research is to provide an interactive tool which will help users to obtain a text copy of the contents of a manuscript (Laflin 1993; Laflin forthcoming). It assumes that, as the cost of computer hardware continues to fall and as the demands on Public Record Offices and other archives continue to grow, the use of computer systems will become more widespread. Instead of allowing users to handle the ever-more-fragile original documents, these archives will scan them and make the images available on the computer screen. Under such a system (and I realise it may be many years into the future), a tool such as this will be of great use. I expect the user to alternate between automatic comparison of the word-segments in the document with the images generated from the appropriate word-list, and interactive intervention to identify a particular image as a particular ASCII string. When the session is finished, the user should be able to get hard-copy, either all text (if the transcription is complete), all image, or a mixture, indicating the state achieved at the end of the session.
References

FRIEDMAN, J. F. 1993. 'Computerized Script Analysis and Classification: Some Directions for Research' in Optical Character Recognition in the Historical Discipline, Halbgraue Reihe Band A18, Max-Planck-Institut.
HARDY WILSON, D. 1990. The Encyclopaedia of Calligraphy Techniques. Headline Books.
HELSPER, E. L. & SCHOMAKER, L. R. 1993. 'Off-line and On-line Handwriting Recognition' in Optical Character Recognition in the Historical Discipline, 39-51, Halbgraue Reihe Band A18, Max-Planck-Institut.
HELSPER, E. L., SCHOMAKER, L. R. & TEULINGS, H-L. 1993. 'Tools for the Recognition of Handwritten Historical Documents', History and Computing 5, 88-93.
LAFLIN, S. 1993. 'An Interactive System for Off-line Text Recognition' in Optical Character Recognition in the Historical Discipline, 53-58, Halbgraue Reihe Band A18, Max-Planck-Institut.
LAFLIN, S. (forthcoming) 1994. 'Processing Historical Information with the Aid of Computers' in F. Bocchi & P. Denley (eds.) Storia & Multimedia.
TAPPERT, C. C., SUEN, C. Y. & WAKAHARA, T. 1990. 'The State of the Art in On-Line Handwriting Recognition', IEEE Transactions on Pattern Analysis and Machine Intelligence, 12, 787-808.
TOMINAGA, H. (ed.) 1993. Special Issue on Postal Processing and Character Recognition, Pattern Recognition Letters, 14, 257-354.
40 The use of computers in the decipherment of the Hackness Cross cryptic inscriptions

Richard Sermon
Gloucester Archaeology, The Old Fire Station, Barbican Road, GLOUCESTER GL1 2JF, UK
40.1 Introduction

The Hackness Cross consists of two stone fragments of an 8th to 9th Century Anglian cross, located in the south aisle of St Peter's Church at Hackness in North Yorkshire. The mutilated stones were discovered some time before 1848 in an outhouse at Hackness Hall, and show signs of having previously been used as a gate post. The fragments would appear to come from the top and bottom of the cross shaft, and together stand to a height of 1.5 metres. However, the original height of the monument would have been approximately 4.5 metres.
(OEDI)L(BVR)GA SEMPER TENENT MEMORES COMMV(NITATE)S TVAE TE MATER AMANTISSIMA
Oedilburga your communities hold you always in memory most loving mother
TREL(..)OSA ABBATISSA OEDILBVRGA ORATE PR(O NOBIS)
Trel..osa Abbess Oedilburga pray for us
OEDILBV(RGA) BEATA A(D S)EMPER T(E REC)OLA(NT)
Blessed Oedilburga always may they remember you
Table 40.1: Hackness Latin inscriptions
The decoration on the stones consists of vine scroll, interlacing, the feet of two beasts, and what is presumably the head of Jesus. In its original form the Hackness Cross would have been equal to the famous examples from Bewcastle and Ruthwell. The cross also bears five inscriptions: two coded or cryptic, and three in Latin (Table 40.1).
During this century the monument has been examined by a number of scholars including W. G. Collingwood (1927, 59-61), G. B. Brown (1930, 52-75) and R. A. S. Macalister (1945, 478). However, it was Brown who provided the most comprehensive description, and set out the problems surrounding cryptic inscriptions upon which this present work is based.
Oedilburga probably refers to Abbess Aethelburg mentioned in the Life of St Wilfrid (Webb & Farmer 1983, 171). In this reference she is found accompanying Abbess Aelfflaed of Whitby, when visiting King Aldfrith of Northumbria on his deathbed in 705 AD. Aethelburg was presumably Abbess of the Monastery at Hackness which was founded in 680 AD by Abbess Hilda, according to Bede (Sherley-Price 1968, 249).
The first of the cryptic inscriptions is written in a form of Ogham, a Celtic alphabet developed in Ireland in the 4th Century AD. The second inscription is written in Runic, a Germanic alphabet used by the Anglo-Saxons.
40.2 The Ogham Inscription

The inscription is located on the south side of the lower fragment and consists of 27 letters forming a four line inscription (see Figure 40.1). Brown (1930) suggests that about another six lines above may have been lost. According to Macalister (1945) the inscription shows only a superficial similarity to Celtic Ogham, but may have been invented by someone familiar with the Ogham system. The script was formed by dividing a fixed alphabet into groups of five letters and then using a particular type of stroke for each group, the letter within that group being denoted by the number of strokes employed. In the case of the Hackness Ogham this would give rise to an alphabet of 30 letters, of which only 14 are used in the inscription (see Figure 40.2).
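The encoding system just described can be made concrete with a small sketch; the 30-letter alphabet and the stroke symbols below are purely illustrative stand-ins, not the actual Hackness characters.

```python
# Toy model of the system described above: a 30-letter alphabet is split
# into six groups of five, each group is assigned its own stroke type,
# and a letter is written as 1-5 strokes of its group's type.  The
# alphabet and stroke symbols here are purely illustrative.

STROKE_TYPES = ["|", "/", "\\", "(", ")", "-"]   # one symbol per group

def make_cipher(alphabet):
    """Map each of 30 letters to a (stroke_type, stroke_count) pair."""
    assert len(alphabet) == 30, "six groups of five letters"
    cipher = {}
    for i, letter in enumerate(alphabet):
        group, position = divmod(i, 5)
        cipher[letter] = (STROKE_TYPES[group], position + 1)
    return cipher

alphabet = list("abcdefghijklmnopqrstuvwxyz") + ["ea", "oi", "ia", "ui"]
cipher = make_cipher(alphabet)
print(cipher["a"], cipher["f"], cipher["ui"])   # ('|', 1) ('/', 1) ('-', 5)
```

The decipherment problem is the inverse: the stroke types and counts survive on the stone, but the assignment of letter groups to stroke types is unknown.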
Figure 40.1: Hackness Ogham Inscription (based on Collingwood 1927 and Brown 1930).
Attempts to decipher the inscription have so far proven unsuccessful due to uncertainty about the alphabet upon which the Ogham characters are based. In any event the alphabet would have to consist of at least 30 letters which would exclude Latin or Greek, leaving us with one of two options: the Anglo-Saxon runic alphabet (see Figure 40.3), or the Old Irish Ogham alphabet (see Figure 40.4).
Figure 40.2: Hackness Ogham Alphabet.

However, the choice between these two alphabets is not the only problem that confronts us. If we divide either of the above alphabets into six groups of five letters, we then need to find out which of the six different types of stroke employed in the inscription corresponds to each letter group. For the six letter groups this gives us a total of 720 different possible permutations, i.e. the factorial of six. Consequently, a program was written in Microsoft QBASIC (1992) with which it is possible to generate all 720 permutations for each of the two alphabets (a listing is available on request from the author).

Figure 40.3: Runic Alphabet (Anglo-Saxon).

Figure 40.4: Ogham Alphabet (Old Irish).

40.2.1 Results

Having generated a total of 1440 possible readings of the inscription, it was then necessary to start examining each one in detail. Most of the 720 readings based on the Runic alphabet contained completely unintelligible strings of consonants. On the other hand, many of the readings based on the Ogham alphabet contained good syllables, especially Records 003 and 028, with group orders 3 4 5 6 1 2 and 3 4 6 5 1 2 respectively [the character-by-character readings shown in the original figures are not recoverable from the scan].

Table 40.2: Hackness Ogham inscription (final version).
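The original QBASIC listing is not reproduced in the paper, but the brute-force idea can be sketched in Python; the letter groups and the short inscription below are illustrative, not the actual Hackness data or alphabet split.

```python
from itertools import permutations

# The brute-force search described in the text: with six letter groups
# and six stroke types, try all 6! = 720 assignments of groups to stroke
# types and read the inscription under each one.  The groups and the
# inscription below are illustrative, not the actual Hackness data.

def readings(inscription, groups):
    """inscription: (stroke_type 0-5, stroke_count 1-5) pairs.
    groups: six sequences of five letters.  Yields one candidate
    reading for each of the 720 possible group orders."""
    for order in permutations(range(6)):
        yield "".join(groups[order[t]][n - 1] for t, n in inscription)

groups = ["abcde", "fghij", "klmno", "pqrst", "uvwxy", "z...."]
inscription = [(0, 1), (1, 2), (2, 3)]          # three characters
candidates = list(readings(inscription, groups))
print(len(candidates))   # 720 candidate readings per alphabet
```

Run once with each of the two candidate alphabets, this yields the 1440 readings examined in the results; the hard part, as the paper makes clear, is the human sifting of those readings for plausible syllables.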
Reconstruction    Old Irish        English
Ceros gu          Cross cu         Cross to
Rhge Guso         Ríg Ísu          King Jesus
erg eng phuir     carric an fóir   rock of help
uit Engoiz        uait Óengus      from Angus

Table 40.3: Hackness Ogham inscription (interpretation).

Table 40.4: Hackness Ogham alphabet (final version).

If the interpretation in Table 40.3 is correct, then the inscription would have been written using the variant of the standard Ogham alphabet shown in Table 40.4. The third line of the inscription may also be a reference to the biblical Eben-ezer, meaning stone of help, which was set up by Samuel following an Israelite victory over the Philistines: 'Then Samuel took a stone, and set it between Mizpeh and Shen, and called the name of it Ebenezer, saying hitherto hath the Lord helped us.' (I Samuel 7:12). Those who made the cross would have used the Vulgate Bible and therefore known the reference in its Latin form Lapis adiutorii. A contemporary cross at Bewcastle in Cumbria may have served a similar purpose. The Runic inscription there, though altered by 19th Century antiquarians, would also appear to celebrate a victory: 'This victory sign set up by Hwaetred Wothgaer Olwfwolthu in memory of Alcfrith a king and son of Oswiu pray for his soul.' (Bewcastle Cross, Cumbria). The Ogham inscription might at first appear unusual in this context, but not when we consider the Celtic origins of Christianity in Northumbria. As Brown (1930) points out, the cryptic inscriptions were probably devised by monks or nuns, among whom Irish individuals would no doubt have been found. This would also account for the Irish name Angus at the end of the inscription.

40.3 The Runic Inscription

The inscription is located on the east side of the upper fragment and consists of 15 Anglo-Saxon Runes, 35 Tree Runes and three Latin letters, combining to form a six line inscription (see Figure 40.5).
The Tree Runes employed here are thought by Page (1973, 64-66) to be a form of Hahalruna, which are described in the 9th Century Isruna Tract, and are similar to Norse inscriptions from Maes Howe on Orkney. The Runic alphabet (futhorc) is split into four groups of eight letters (Figure 40.6). Each rune is then represented by a vertical stem-line, with the number of arms to the left indicating the group in which the rune occurs and those to the right indicating its position within that group. This system would give rise to an alphabet of 32 letters, of which 14 are identified in the inscription. Once again, attempts to decipher the inscription have so far proven unsuccessful due to uncertainty about the alphabet upon which the Tree Rune characters are based. In any event the alphabet would have to consist of at least 32 letters, which would exclude the Latin, Greek or Ogham alphabets, leaving us with the Anglo-Saxon Runic alphabet (see Figure 40.3). However, the solution is not simply a matter of dividing the Anglo-Saxon Runic alphabet into four groups of eight letters, since we still do not know the order in which the letter groups occurred. For the four letter groups this gives us a total of 24 different possible permutations, i.e. the factorial of four. A program was therefore written to generate all 24 possible permutations. It was also decided to run the program using the Runic alphabet in reverse order, as some Norse inscriptions are known to use this system (Derolez 1954, 140-142).
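The Tree Rune search just described can be sketched in the same style; the 32-letter "futhorc" below is a stand-in alphabet for illustration, not the real rune row.

```python
from itertools import permutations

# Sketch of the Hahalruna reading problem described above: each tree rune
# is a pair (left_arms 1-4, right_arms 1-8); the left arms select one of
# four groups of eight runes, the right arms the position within the
# group.  With the group order unknown there are 4! = 24 readings, and
# 48 once the reversed alphabet is tried as well.  The 32-letter
# 'futhorc' below is a stand-in, not the real rune row.

def decode(runes, futhorc):
    """All candidate readings of `runes` for a 32-letter alphabet."""
    out = []
    for alphabet in (futhorc, futhorc[::-1]):
        groups = [alphabet[i:i + 8] for i in range(0, 32, 8)]
        for order in permutations(range(4)):
            out.append("".join(groups[order[l - 1]][r - 1] for l, r in runes))
    return out

futhorc = "abcdefghijklmnopqrstuvwxyz012345"   # 32-letter stand-in
candidate_readings = decode([(1, 1), (2, 3)], futhorc)
print(len(candidate_readings))   # 24 orders x 2 directions = 48
```

This is the same combinatorial structure as the Ogham search, only with four groups of eight rather than six groups of five, plus the reversed-alphabet variant.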
40.3.1 Results

The program generated 48 possible readings of the inscription, none of which appeared to form any intelligible pattern. It would seem that the Tree Runes are now too fragmentary to be fully understood.
40.3.2 Conclusions

Though the results of the program were inconclusive, it was still possible to glean something from the first two lines of the inscription (Figure 40.7), which appear to be an anagram:

Reconstruction     OEDILBURG GNOEW ME
Anglo-Saxon        Æþelburg cneow me
Modern English     Aethelburg knew me
The words appear to be spoken by the cross, a personification which is not unique. The Ruthwell Cross near Dumfries is also inscribed in Runic with the Anglo-Saxon poem the Dream of the Rood, in which the cross itself describes the suffering of Jesus. Parallels also exist for Runic anagrams, two of which appear in riddles (24 and 42) from the Exeter Book of Anglo-Saxon poetry (Rodrigues 1990, 104-107). Finally, this interpretation is in keeping with the three Latin inscriptions that also commemorate Abbess Aethelburg.

Figure 40.5: Hackness Runic Inscription (based on Collingwood 1927 and Brown 1930).

Figure 40.6: Hackness Tree Rune Alphabet.
Figure 40.7: Hackness Anglo-Saxon Runes.
Acknowledgements

The author would like to thank David Bowler for the biblical reference, Donald MacKenzie for guidance on Old Irish grammar and Mike Rains for advice on the QBASIC programs.

References

BROWN, G. B. 1930. The Arts in Early England, vol. VI, John Murray, London.
COLLINGWOOD, W. G. 1927. Northumbrian Crosses of the Pre-Norman Age, Llanerch, Lampeter.
DAVIS, N. 1980. Sweet's Anglo-Saxon Primer, Clarendon Press, Oxford.
DEROLEZ, R. 1954. Runica Manuscripta, Bruges.
ELLIOTT, R. W. V. 1959. Runes, An Introduction, Manchester University Press.
LANG, J. 1988. Anglo-Saxon Sculpture, Shire Publications, Aylesbury.
LEHMANN, R. P. M. & LEHMANN, W. P. 1975. An Introduction to Old Irish, Modern Language Association, New York.
MACALISTER, R. A. S. 1945. Corpus Inscriptionum Insularum Celticarum I, Irish Text Society, Dublin.
PAGE, R. I. 1973. An Introduction to English Runes, London.
RODRIGUES, L. J. 1990. Anglo-Saxon Riddles, Llanerch, Lampeter.
SHERLEY-PRICE, L. (ed.) 1968. Bede, A History of the English Church and People, Penguin, London.
STEPHENS, G. 1884. Handbook of the Old-Northern Runic Monuments of Scandinavia and England, Llanerch, Lampeter.
SWEET, H. 1896. The Students Dictionary of Anglo-Saxon, Clarendon Press, Oxford.
THURNEYSEN, R. 1970. A Grammar of Old Irish, Irish Text Society, Dublin.
WEBB, J. F. & FARMER, D. H. (eds.) 1983. The Age of Bede, Penguin, London.
WINTERBOTHAM, J. J. 1985. Hackness in the Middle Ages, Hackness Press, London.