Computer (March 2005)



Innovative Technology for Computer Professionals

March 2005

Engineers, Programmers, and Black Boxes, p. 8 Building the Software Radio, p. 87

http://www.computer.org

The Winner’s Curse in High Tech, p. 96

Innovative Technology for Computer Professionals

March 2005, Volume 38, Number 3

COMPUTING PRACTICES 26

Integrating Biological Research through Web Services Hong Tina Gao, Jane Huffman Hayes, and Henry Cai A case study demonstrates that Web services could be key to coordinating and standardizing incompatible applications in bioinformatics, an effort that is becoming increasingly critical to meaningful biological research.

C O V E R F E AT U R E S 33

Socially Aware Computation and Communication Alex (Sandy) Pentland By building machines that understand social signaling and social context, technologists can dramatically improve collective decision making and help keep remote users in the loop.

41

Designing Smart Artifacts for Smart Environments Norbert A. Streitz, Carsten Röcker, Thorsten Prante, Daniel van Alphen, Richard Stenzel, and Carsten Magerkurth Smart artifacts promise to enhance the relationships among participants in distributed working groups, maintaining personal mobility while offering opportunities for the collaboration, informal communication, and social awareness that contribute to the synergy and cohesiveness inherent in collocated teams.

Cover design and artwork by Dirk Hagner

50

The Gator Tech Smart House: A Programmable Pervasive Space Sumi Helal, William Mann, Hicham El-Zabadani, Jeffrey King, Youssef Kaddoura, and Erwin Jansen Many first-generation pervasive computing systems lack the ability to evolve as new technologies emerge or as an application domain matures. Programmable pervasive spaces, such as the Gator Tech Smart House, offer a scalable, cost-effective way to develop and deploy extensible smart technologies.

ABOUT THIS ISSUE

Increasingly inexpensive consumer electronics, mature technologies such as RFID, and emerging wireless sensor technologies make possible a new era of smart homes, offices, and other environments. In this issue, we look at state-of-the-art technology applications including socially aware communication-support tools; programmable pervasive spaces that integrate system components; smart environments that incorporate information, communication, and sensing technologies into everyday objects; and an industry-specific initiative that uses a Web-based approach to bring processes, people, and information together to optimize efficiency.

61

Web-Log-Driven Business Activity Monitoring Savitha Srinivasan, Vikas Krishna, and Scott Holmes Using business process transformation to digitize shipments from IBM’s Mexico facility to the US resulted in an improved process that reduced transit time, cut labor costs and paperwork, and provided instant and perpetual access to electronically archived shipping records.

IEEE Computer Society: http://www.computer.org Computer: http://www.computer.org/computer [email protected] IEEE Computer Society Publications Office: +1 714 821 8380

OPINION 8

At Random Engineers, Programmers, and Black Boxes Bob Colwell

NEWS 14

Industry Trends Search Engines Tackle the Desktop Bernard Cole

18

Technology News Is It Time for Clockless Chips? David Geer

22

News Briefs Finding Ways to Read and Search Handwritten Documents ■ A Gem of an Idea for Improving Chips ■ IBM Lets Open Source Developers Use 500 Patents

MEMBERSHIP NEWS 75

Computer Society Connection

80

Call and Calendar

COLUMNS

87

Embedded Computing Building the Software Radio Wayne Wolf

93

Standards Public Opinion’s Influence on Voting System Technology Herb Deutsch

NEXT MONTH: Beyond Internet

96

IT Systems Perspectives The Winner’s Curse in High Tech G. Anandalingam and Henry C. Lucas Jr.

100

The Profession An Open-Secret Voting System Thomas K. Johnson

DEPARTMENTS
4 Article Summaries
6 Letters
12 32 & 16
70 Career Opportunities
73 Advertiser/Product Index
83 Products
86 Bookshelf
90 IEEE Computer Society Membership Application

Membership Magazine of the IEEE Computer Society

COPYRIGHT © 2005 BY THE INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS INC. ALL RIGHTS RESERVED. ABSTRACTING IS PERMITTED WITH CREDIT TO THE SOURCE. LIBRARIES ARE PERMITTED TO PHOTOCOPY BEYOND THE LIMITS OF US COPYRIGHT LAW FOR PRIVATE USE OF PATRONS: (1) THOSE POST-1977 ARTICLES THAT CARRY A CODE AT THE BOTTOM OF THE FIRST PAGE, PROVIDED THE PER-COPY FEE INDICATED IN THE CODE IS PAID THROUGH THE COPYRIGHT CLEARANCE CENTER, 222 ROSEWOOD DR., DANVERS, MA 01923; (2) PRE-1978 ARTICLES WITHOUT FEE. FOR OTHER COPYING, REPRINT, OR REPUBLICATION PERMISSION, WRITE TO COPYRIGHTS AND PERMISSIONS DEPARTMENT, IEEE PUBLICATIONS ADMINISTRATION, 445 HOES LANE, P.O. BOX 1331, PISCATAWAY, NJ 08855-1331.

Innovative Technology for Computer Professionals

Editor in Chief

Computing Practices

Special Issues

Doris L. Carver

Rohit Kapur

Bill Schilit

Louisiana State University [email protected]

[email protected]

[email protected]

Associate Editors in Chief

Perspectives

Web Editor

Bob Colwell

Ron Vetter

[email protected]

[email protected]

Bill N. Schilit Intel

Research Features

Kathleen Swigger

Kathleen Swigger

University of North Texas

[email protected]

Area Editors

Column Editors

Computer Architectures Douglas C. Burger

At Random Bob Colwell Bookshelf Michael J. Lutz

University of Texas at Austin

Databases/Software Michael R. Blaha

University of Maryland

Standards Jack Cole

OMT Associates Inc.

Graphics and Multimedia Oliver Bimber

Embedded Computing Wayne Wolf

Advisory Panel

Bauhaus University Weimar

Princeton University

University of Virginia

Information and Data Management Naren Ramakrishnan

Entertainment Computing Michael R. Macedonia

Thomas Cain

Georgia Tech Research Institute

Virginia Tech

Ralph Cavin

IT Systems Perspectives Richard G. Mathieu

Semiconductor Research Corp.

IBM Almaden Research Center

Networking Jonathan Liu University of Florida

Software H. Dieter Rombach

St. Louis University

Invisible Computing Bill N. Schilit

Carl K. Chang [email protected]

CS Publications Board Security Bill Arbaugh

Rochester Institute of Technology

Multimedia Savitha Srinivasan

2004 IEEE Computer Society President

US Army Research Laboratory

James H. Aylor

University of Pittsburgh

Michael R. Williams (chair), Michael R. Blaha, Roger U. Fujii, Sorel Reisman, Jon Rokne, Bill N. Schilit, Nigel Shadbolt, Linda Shafer, Steven L. Tanimoto, Anand Tripathi

CS Magazine Operations Committee Bill Schilit (chair), Jean Bacon, Pradip Bose, Doris L. Carver, Norman Chonacky, George Cybenko, John C. Dill, Frank E. Ferrante, Robert E. Filman, Forouzan Golshani, David Alan Grier, Rajesh Gupta, Warren Harrison, James Hendler, M. Satyanarayanan

Ron Hoelzeman University of Pittsburgh

Edward A. Parrish Worcester Polytechnic Institute

Intel

Ron Vetter

The Profession Neville Holmes

Alf Weaver

University of North Carolina at Wilmington

University of Tasmania

University of Virginia

AG Software Engineering

Dan Cooke Texas Tech University

Administrative Staff

Editorial Staff Scott Hamilton

Mary-Louise G. Piner

Senior Acquisitions Editor [email protected]

Staff Lead Editor

Judith Prow

Membership News Editor

Managing Editor [email protected]

Bryan Sallis

James Sanders Senior Editor

Lee Garber Senior News Editor

Chris Nelson Associate Editor

Bob Ward

Executive Director David W. Hennage Publisher Angela Burgess

Manuscript Assistant

[email protected]

Design Larry Bauer Dirk Hagner Production Larry Bauer

Assistant Publisher Dick Price Membership & Circulation Marketing Manager Georgann Carter

Business Development Manager Sandy Brown Senior Advertising Coordinator Marian Anderson

Circulation: Computer (ISSN 0018-9162) is published monthly by the IEEE Computer Society. IEEE Headquarters, Three Park Avenue, 17th Floor, New York, NY 10016-5997; IEEE Computer Society Publications Office, 10662 Los Vaqueros Circle, PO Box 3014, Los Alamitos, CA 90720-1314; voice +1 714 821 8380; fax +1 714 821 4010; IEEE Computer Society Headquarters, 1730 Massachusetts Ave. NW, Washington, DC 20036-1903. IEEE Computer Society membership includes $19 for a subscription to Computer magazine. Nonmember subscription rate available upon request. Single-copy prices: members $20.00; nonmembers $94.00. Postmaster: Send undelivered copies and address changes to Computer, IEEE Membership Processing Dept., 445 Hoes Lane, Piscataway, NJ 08855. Periodicals Postage Paid at New York, New York, and at additional mailing offices. Canadian GST #125634188. Canada Post Corporation (Canadian distribution) publications mail agreement number 40013885. Return undeliverable Canadian addresses to PO Box 122, Niagara Falls, ON L2E 6S8 Canada. Printed in USA.

Editorial: Unless otherwise stated, bylined articles, as well as product and service descriptions, reflect the author’s or firm’s opinion. Inclusion in Computer does not necessarily constitute endorsement by the IEEE or the Computer Society. All submissions are subject to editing for style, clarity, and space.


ARTICLE SUMMARIES

Integrating Biological Research through Web Services
pp. 26-31
Hong Tina Gao, Jane Huffman Hayes, and Henry Cai

At present, compatibility problems prevent researchers from cooperating in using bioinformatics to solve important biological problems. Web services might be a way to solve this integration problem. Web technology provides a higher layer of abstraction that hides implementation details from applications so that each organization can concentrate on its own competence and still leverage the services other research groups provide. To test the potential of a Web services solution, the authors implemented a microarray data mining system that uses Web services in drug discovery—a research process that attempts to identify new avenues for developing therapeutic drugs. Although their implementation focuses on a problem within the life sciences, they strongly believe that Web services could be a boon to any research field that requires analyzing and mining large volumes of data.
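As a rough illustration of the abstraction the authors describe, the sketch below hides a remote analysis step behind a single function call. The endpoint URL, the JSON payload shape, and the clustering method name are hypothetical stand-ins invented for this example; they are not the authors' actual bioinformatics services.

```python
# Hypothetical sketch only: the service URL and request/response format are
# invented to show how a Web service hides a tool's implementation behind a
# uniform interface; the caller never sees the code running behind the URL.
import json
import urllib.request

def cluster_microarray(expression_matrix, service_url="https://example.org/microarray/cluster"):
    """Send expression data to a remote analysis service and return its result."""
    payload = json.dumps({"matrix": expression_matrix, "method": "hierarchical"}).encode()
    request = urllib.request.Request(
        service_url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```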

Socially Aware Computation and Communication
pp. 33-40
Alex (Sandy) Pentland

Most would agree that today’s communication technology seems to be at war with human society. Pagers buzz, cell phones interrupt, and e-mail begs for attention. Technologists have responded with well-meaning solutions that ultimately fail because they ignore the core problem: Computers are socially ignorant. A research group at MIT is taking the first steps toward quantifying social context in human communication. These researchers have developed three socially aware platforms that objectively measure several aspects of social context, including analyzing the speaker’s tone of voice, facial movement, or gestures.


Designing Smart Artifacts for Smart Environments
pp. 41-49
Norbert A. Streitz, Carsten Röcker, Thorsten Prante, Daniel van Alphen, Richard Stenzel, and Carsten Magerkurth

The integration of information, communication, and sensing technologies into our everyday objects has created smart environments. Creating the smart artifacts that constitute these environments requires augmenting their standard functionality to support a new quality of interaction and behavior. A system-oriented, importunate smartness approach creates an environment that gives individual smart artifacts or the environment itself certain self-directed actions based on previously collected information. For example, a space can be smart by having and exploiting knowledge about the persons and artifacts currently situated within its borders. In contrast, a people-oriented, empowering smartness approach places the empowering function in the foreground by assuming that smart spaces make people smarter. This approach empowers users to make decisions and take actions as mature and responsible people. Although in some cases it might be more efficient if the system doesn’t ask for a user’s feedback and confirmation at every step in an action chain, the overall design rationale should aim to keep the user in the loop and in control whenever possible.

The Gator Tech Smart House: A Programmable Pervasive Space
pp. 50-60
Sumi Helal, Hicham El-Zabadani, Youssef Kaddoura, Erwin Jansen, Jeffrey King, and William Mann

Many first-generation pervasive computing systems lack the ability to evolve as new technologies emerge or as an application domain matures. Integrating numerous heterogeneous elements is mostly a manual, ad hoc process. The environments are also closed, limiting development or extension to the original implementers. To address this limitation, the University of Florida’s Mobile and Pervasive Computing Laboratory is developing programmable pervasive spaces in which a smart space exists as both a runtime environment and a software library. Service discovery and gateway protocols automatically integrate system components using generic middleware that maintains a service definition for each sensor and actuator in the space. Programmers assemble services into composite applications, which third parties can easily implement or extend.

Web-Log-Driven Business Activity Monitoring
pp. 61-68
Savitha Srinivasan, Vikas Krishna, and Scott Holmes

Business process transformation defines a new level of business optimization that manifests as a range of industry-specific initiatives that bring processes, people, and information together to optimize efficiency. For example, BPT encompasses lights-out manufacturing, targeted treatment solutions, real-time risk management, and dynamic supply chains integrated with variable pricing.

To examine how BPT can optimize an organization’s processes, the authors describe a corporate initiative that was developed within IBM’s supply chain organization to transform the import compliance process that supports the company’s global logistics.

LETTERS

CONCERNS ABOUT EDUCATION

In “Determining Computing Science’s Role” (The Profession, Dec. 2004, pp. 128, 126-127), Simone Santini speaks for many of us who are worried about the direction of computer science—and higher education in general. I’m concerned that we are fast approaching a time in this country when science will be directed by powerful industry and business objectives first and foremost, and “pure research” will become increasingly marginalized. I believe this is the end result of a capitalist system, where money rules nearly every activity. This process was given a big push by US President Ronald Reagan 20 years ago, and it’s now accelerating under the Bush administration. Unfortunately, I don’t see any way to stop this slide under the present conditions and cultural climate.

Jim Williams
Silver Springs, Md.
[email protected]

I enjoyed reading Simone Santini’s excellent article in Computer’s December issue. From working in both academia and industry for many years, I can add the following. Industry is concerned not just with commercial applicability, but with immediate commercial applicability (their thinking is very short term) in response to current requests from customers—it’s an easier sale if the customer is already demanding the product. A breakthrough that has immediate commercial applicability, but is so novel that no customer has thought of it and asked for it, is of lesser value. There is an infinite number of algebras that can be defined and an infinite number of algorithms that can be developed, but relational algebra is very helpful and so is Quicksort. All academic pursuits are not equal, and there needs to be some measure of the usefulness of one over another. I agree that short-term industrial concerns should not dictate this measure.

Steve Rice
University of Mississippi
[email protected]

Simone Santini responds: Consumer wishes often don’t convey the infallible foresight that industry would like. In 1920, consumers didn’t know they wanted radio. In 1975, they didn’t know they wanted CDs—they were perfectly happy with vinyl. At most, they merely desired better pickups so as not to ruin their records, and they wanted better hi-fi systems to play them on. The list goes on. The problem is that, in many cases, industry only takes small steps for fear of the risks, forgetting that no focus group will ever propose the next major step. All you can get from a focus group is advice on how to marginally improve an existing product. This is important, of course, but there is more to innovation, even to industrial innovation, than that—academia, as I have tried to argue, should have different priorities.

I have nothing against practical applications of computing science, of course. In fact, I think any mathematician would be happy to know that his theorem has improved the bread-to-prosciutto ratio in sandwiches worldwide. I am just saying that practical applications can’t be the force that drives the discipline. The fact is that Quicksort and relational databases do not spring up whole like Athena from the head of Zeus. They are part of a process, and the process must proceed by its own internal logic. It would be an illusion to think that you can get results that have practical applicability without the “pure” research that lies behind them. No amount of money could have convinced engineers in the Victorian era to invent television. It took Maxwell’s aesthetic dissatisfaction when faced with the asymmetry of the field equations to get things started.

Industry would like to have “ready-to-wear” research—applicable results without the cultural (and often not directly applicable) background—but this is an illusion.

J2EE FRAMEWORK DEVELOPER

The article titled “J2EE Development Frameworks” (Rod Johnson, IT Systems Perspectives, Jan. 2005, pp. 107-110) was well-written and insightful. However, it would have been useful to know that the author is also one of the creators of the Spring framework. This connection does not detract from the article, but it is clearly a relevant piece of information that should have been disclosed to the reader.

Landon Davies
Baltimore, Md.
[email protected]

Rod Johnson replies: As a former academic, I agree that it is important to remain impartial with regard to specific technologies. Therefore, I took care to mention alternatives to Spring when writing this article.

RETOOLING FOR SUCCESS IN A KNOWLEDGE-BASED ECONOMY

In “People and Software in a Knowledge-Based Economy” (The Profession, Jan. 2005, pp. 116, 114-115), Wojciech Cellary uses simple and elegant service sector taxonomies to analyze human roles in a knowledge-based economy. He rightly points out that even as the increasing use of computers to provide routine intellectual services shrinks the market for humans performing these services, humans will continue to excel in areas that involve the production of intangible goods and advanced services.

Although the author anticipates that robots and automated machines will prevail in the production of tangible goods (presumably in the industrial sector), he does not elaborate on the impact of automation in the manual services sectors. It is particularly interesting to observe the evolving roles of humans in manual skill areas that not so long ago required only moderate intellectual abilities. For example, modern automobiles come with complex electronically controlled subsystems that require using sophisticated diagnostic machines for troubleshooting when they fail. In addition to learning how to operate these machines, auto mechanics also must keep up to date with new technologies so they can recognize and fix problems, especially as additional innovations are incorporated into newer models.

The proliferation of self-serve systems has eliminated the need for many services that humans formerly performed; instead, the human role now focuses on providing supervision and offering assistance if needed. Even household appliances are becoming intelligent—vacuum cleaners that can guide themselves around a room are now well within the reach of the average consumer.

While technology improves human productivity and frees people from tedious effort, at times it also has the effect of eliminating employment opportunities. The challenge for those affected is to retool their skills in ways that emphasize the same qualities that would enable them to succeed in intellectual areas, namely creativity, manual expertise, and interpersonal skills.

Badri Lokanathan
Atlanta, Ga.
[email protected]

We welcome your letters. Send them to [email protected].

REACH HIGHER

Advancing in the IEEE Computer Society can elevate your standing in the profession.

Application to Senior-grade membership recognizes
✔ ten years or more of professional expertise

Nomination to Fellow-grade membership recognizes
✔ exemplary accomplishments in computer engineering

GIVE YOUR CAREER A BOOST ■

UPGRADE YOUR MEMBERSHIP www.computer.org/join/grades.htm

AT RANDOM

Engineers, Programmers, and Black Boxes Bob Colwell

Universities teach engineers all sorts of valuable things. We’re taught mathematics—especially calculus, probability, and statistics—all of which are needed to understand physics and circuit analysis. We take courses in system design, control theory, electronics, and fields and waves. But mostly what we’re taught, subliminally, is how to think like an engineer.

Behind most of the classes an engineer encounters as an undergraduate is one overriding paradigm: the black box. A black box takes one or more inputs, performs some function on them, and produces one output. It seems simple, but that fundamental idea has astonishing power. You can build and analyze all engineered systems—and many natural systems, specifically excluding interpersonal relationships—by applying this paradigm carefully and repetitively.

Part of the magic is that the function the black box contains can be arbitrarily complex. It can, in fact, be composed of multiple other functions. And, luckily for us, we can analyze these compound functions just as we analyze their mathematical counterparts. As part of an audio signal processing chain, a black box can be as simple as a low-pass filter. As part of a communications network, it can be a complicated set of thousands of processors, each with its own local network.
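A minimal sketch of that idea: each black box is just a function from inputs to an output, and boxes compose into larger boxes. The one-pole low-pass filter and the sample signal below are invented purely for illustration.

```python
# Toy illustration of black-box composition: each box maps inputs to one output,
# and boxes nest to form compound functions, echoing the audio-chain example above.

def low_pass(samples, alpha=0.1):
    """Black box: noisy signal in, smoothed signal out (one-pole IIR filter)."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def peak_detector(samples, threshold=0.5):
    """Black box: signal in, indices where it exceeds the threshold out."""
    return [i for i, x in enumerate(samples) if x > threshold]

def processing_chain(samples):
    """Compound black box built by composing the two boxes above."""
    return peak_detector(low_pass(samples))

if __name__ == "__main__":
    signal = [0.0, 0.2, 0.9, 1.0, 0.95, 0.1, 0.0] * 3
    print(processing_chain(signal))
```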


You can build and analyze all engineered systems by applying the black box paradigm.

MARVELS OF COMPLEXITY

Modern microprocessors are marvels of complexity. Way back when, the Intel 4004 had only 2,300 transistors, a number that is not too large for smart humans to keep in their heads. Engineers knew what each transistor did and why it had been placed where it was on the die. The bad news was that they had to know; there were no CAD tools back then to help keep track of them all.

But even then, the black box functional decomposition paradigm was essential. At one level of abstraction, a designer could ask whether the drive current from transistor number 451 was sufficient to meet signaling requirements to transistors 517 and 669. If it was, the designer would conceptually leave the transistor level and take the mental elevator that went to the next floor up: logic.

At the logic level, the black boxes had labels like NAND and XOR. The designer’s objective at this level was to make sure that the functions selected correctly expressed the design intent from the level above: Should this particular box be a NAND or an AND? There were also subfloors. It’s not only possible, it’s also a very good idea to aggregate sets of boxes to form more abstract boxes. A set of D flip-flops is routinely aggregated into registers in synchronous designs, for example.

Next floor up: the microarchitecture. At this level, the boxes had names like register file, ALU, and bus interface. The designer considered things like bandwidths, queuing depths, and throughput without regard for the gates underlying these functions or the actual flow of electrical currents that was such a concern only a few floors below.

For hardware engineers, there was one more floor: the instruction set architecture. Most computer engineers never design an ISA during their careers—such is the commercial importance of object code compatibility. For decades now, the prevailing theory has been that to incentivize a buyer to suffer the pain of mass code conversion or obsolescence, any new computational engine that cannot run old code, unchanged, must be at least N times faster than anything else available. The trouble with this theory is that it has never been proven to work. At various times in the past 30 years, N has arguably reached as high as 5 or 10 (at equivalent economics) without having been found to be compelling. The x86 architecture is still king.

But the latest contender in the ring is IBM’s Cell, introduced in February at ISSCC 05. Touted as having impressive computational horsepower, Cell is aimed initially at gaming platforms that may not be as sensitive to the compatibility burden. Stay tuned—this new battle should play out over the next three years. Maybe computer engineers will get to play out in the sunshine of the top floor after all.

SOFTWARE FOLKS DO IT TOO

The ability to abstract complex things is vital to all of engineering. As with the 4004’s transistors, without this ability, engineers would have to mentally retain entire production designs. But the designs have become so complicated that it has been about 25 years since I last saw a designer who could do that. Requiring designers to keep such complex designs in their heads would limit what is achievable, and doing so isn’t necessary as long as we wield our black-box abstractions properly.

In the early days of P6 development at Intel, I found it amusing to try to identify various engineers’ backgrounds by the way they thought and argued during meetings. My observations went through several phases. I was intrigued to observe that a group of 10 engineers sitting around a conference room table invariably had a subtle but apparent common mode: They all used the black-box abstraction implicitly and exclusively, as naturally as they used arithmetic or consumed diet Coke. Although these engineers came from different engineering schools, and their degrees ranged from a BS to an MS or a PhD, they implicitly accepted that any discussion would occur in one of two ways—either at one horizontal abstraction layer of the design or explicitly across two or more layers. It was generally quite easy to infer which of those two modes was in play, and all 10 engineers had no difficulty following mode changes as the conversation evolved.

When thinking about this (and yes, I probably should have been paying attention to the technical discussion instead of daydreaming), it occurred to me that the first two years of my undergraduate EE training had sometimes seemed like a military boot camp. In fact, it was a boot camp. With the exception of social sciences, humanities, history, and phys. ed., all of our classes were done in exactly this way. I don’t know if we became EEs because we gravitated toward the academic disciplines that seemed most natural to us, or if we just learned to think this way as a by-product of our training. Maybe we just recognized a great paradigm when we saw it and did the obvious by adopting it.


Microprocessor design teams also have engineers with computer science backgrounds, who may not have gone through an equivalent boot camp. I tried to see if I could spot any of them by watching for less adroitness in following implicit abstraction-layer changes in meetings. I thought I saw a few instances of this, but there’s a countervailing effect: CS majors live and breathe abstraction layers, presumably by dint of their heavy exposure to programming languages that demand this skill. When I began pondering the effect of black-box function-style thinking and programming language abstractions to see if that might distinguish between CS- and EE-trained engineers, I did see a difference. Good hardware engineers have a visceral sense of standing on the ground at all times. They know that in the end, their design will succeed or fail based on how well they have anticipated nature itself: electrons with the same charge they have carried since the birth of the universe, moving at the same speed they always have, obeying physical laws that govern electronic and magnetic interactions along wires, and at all times constrained by thermodynamics.

Even though EEs may spend 95 percent of their time in front of a computer putting CAD tools through their paces (and most of the other 5 percent swearing at those same tools), they have an immovable, unforgettable point of contact with ultimate reality in the back of their minds. Most of the decisions they make can be at least partially evaluated by how they square against natural constraints.

CONSTRAINTS ARE GOOD FOR YOU

You might think such fixed constraints would make design more difficult. Indeed, if you were to interview a design engineer in the middle of a tough morning of wrestling with intransigent design problems, she might well express a desire to throw a constraint or two out the window. Depending on the particular morning, she might even consider jumping out after them. In general, though, constraints and boundaries are a good thing—they focus the mind. I’ve come to believe that hardware engineers benefit tremendously from their requisite close ties to nature’s own rules.

On the other hand, the CS folks are generally big believers in specifications and writing down the rules by which various modules (black boxes) interact. They have to be—these “rules” are made up. They could be anything. Assumptions are not just subtly dangerous here, they simply won’t work—the possibility space is too large.

It’s not that every choice a hardware engineer makes is directly governed by nature and thus unambiguous. What functions go where and how they communicate at a protocol level are examples of choices made in a reasonably large space, and there a CS grad’s proclivity to document is extremely valuable. To be sure, some programmers face natural constraints just as real as any the hardware designers see. Real-time code and anything that humans can perceive—video and audio, for example—impose the same kinds of immovable constraints that a die size limit does for a hardware engineer.


I’m not looking for black and white—I’m just wondering if there are shades of gray between EE and CS. My attempt to discern differences between EE and CS grads was simply intended to see if the two camps were distinguishable “in the wild”—to see if that might lead to any useful insights. Computer science is not generally taught relative to natural laws, other than math itself, which is arguably a special case. I don’t know if it should be, or even can be, and it’s not my intention to pass a value judgment here. The CS folks, it seems to me, tend to be very comfortable in a universe bounded only by conventions that they (or programmers like them) have erected in the first place: language restrictions, OS facilities, application architectures, and programming interfaces. The closest they generally come to putting one foot down on the ground is when they consider how their software would run on the hardware they are designing—and that interface is, at least to some extent, negotiable with the EE denizens on the top floor. Absolutes, in the nonnegotiable natural-law sense of what EEs deal with, are unusual to them. The best engineers I have worked with were equally comfortable with hardware and software, regardless of their educational backgrounds. They had somehow achieved a deep enough understanding of both fields that they could sense and adjust to whatever world view was currently in play at a meeting, without giving up the best attributes of the alternative view. There is a certain intellectual thrill when you finally break through to a new understanding of something, be it physics or engineering or math—or poetry analysis, for that matter. I always felt that same thrill when I saw someone blithely displaying this kind of intellectual virtuosity.

BOTTOMS UP AND TOPS DOWN

The engineers I know who routinely do this intellectual magic somehow arrived at their profound level of understanding via the random walk of their experiences and education, combined with extraordinary innate intelligence. Can we teach it? Yale Patt and Sanjay Patel think so. It’s a basic tenet of their book, Introduction to Computing Systems (McGraw Hill, 2004). On the inside cover, no less a luminary than Donald Knuth says, “People who are more than casually interested in computers should have at least some idea of what the underlying hardware is like. Otherwise, the programs they write will be pretty weird.”

Conversely, people who design computers without a good idea of how programs are written, what makes them easy or hard, and what makes them fail, will in all likelihood conjure up a useless design. I once heard a compiler expert opine that there’s a special place in the netherworld for computer designers who create a machine before they know if a compiler can be written for it.

One other data point I’m sure of: Me. I had taken an OS course and several programming language courses and did well at them, but I didn’t understand what computer architecture really meant until I had to write assembly code for a PDP-11. My program had to read the front panel switches, do a computation on them, and display the results on the front panel lights. My first program didn’t work reliably, and I spent hours staring at the code, line by line, trying to identify the conceptual bug. I couldn’t find it. I finally went back to the lab and stared instead at the machine. Eureka! It suddenly occurred to me that the
assignment hadn’t actually stated that the switches were debounced, and the PDP-11 documentation didn’t say that either. I had simply assumed it. Mechanical switches are constructed so that flipping the switch causes an internal metal plate to quickly move from one position to a new one where it now physically touches a stationary metal plate. Upon hitting the stationary plate, the moving metal repeatedly bounces up and down until it eventually settles and touches permanently. Even at the glacial clock rates of the 1970s, the CPU had plenty of time to sample a switch’s electrical state during the bounces. Debouncing them in software was just a matter of inserting a delay loop between switch transition detection and logical state identification. Without an understanding of both the hardware and the software, I’d still be sitting in front of that PDP-11, metaphorically speaking.

There are always tradeoffs. The horizontally stratified way we teach computer systems today makes it difficult for students to see how ideas at one level map onto problems at another.
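A minimal sketch of the delay-loop debounce Colwell describes above, written in Python rather than PDP-11 assembly; read_switches() is a hypothetical stand-in for reading the front-panel switch register, not a real hardware interface.

```python
# Sketch of software debouncing: after a transition is detected, wait for the
# contacts to stop bouncing, then re-read to get the settled logical state.
import time

SETTLE_TIME = 0.02   # assumed 20 ms settling allowance for mechanical contacts

def read_switches():
    """Hypothetical raw read of the switch register (returns an int bitmask)."""
    raise NotImplementedError("replace with real hardware access")

def read_switches_debounced(previous):
    """Return the new switch state once it differs from `previous` and has settled."""
    while True:
        raw = read_switches()
        if raw != previous:              # transition detected
            time.sleep(SETTLE_TIME)      # delay loop: let the contacts settle
            return read_switches()       # sample the settled logical state
```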

EVEN GOOD ABSTRACTIONS CAN HURT

If you really want to snow a student under, try teaching computer system design from application to OS to logic to circuits to silicon physics as a series of vertical slices. In some ways, I think this problem was fundamental to Intel’s failed 432 chips from the early 1980s—they were “capability-based” object-oriented systems in which one global, overriding paradigm was present. The system was designed from one point of view, and to understand it you had to adopt that point of view.

To wit: Everything—and I do mean everything—was an object in those systems. In some ways, it was the ultimate attempt to systematically apply the black-box paradigm to an entire computer system. An object in a 432 system was an abstract entity with intrinsic capabilities and extrinsic features. Every object was protected by default against unauthorized access. If one object (your program, say) wanted access to another (a database, perhaps), your program object had to first prove its bona fides, which hardware would check at runtime. At a software level, this kind of system had been experimented with before, and it does have many appealing features, especially in today’s world of runaway viruses, Trojans, worms, and spam. But the 432 went a step further and made even the hardware an object. This meant that the OS could directly look up the CPU’s features as just another object, and it could manipulate that object in exactly the same way as a software object.

This was a powerful way of viewing a computing system, but it ran directly contrary to how computer systems are taught. It made the 432 system incomprehensible to most people at first glance. There would be no second glance: Various design errors and a poor match between its Ada compiler and the microarchitecture made the system almost unusably slow. The 432 passed into history rather quickly.

If the design errors had been avoided, would the 432 have taken hold in the design community? All things considered, I don’t think so: It had the wrong target in the first place. The 432 was intended to address a perceived looming software production gap. The common prediction of the late 1970s was that software was too hard to produce, it would essentially stop the industry in its tracks, and whatever hardware changes were needed to address that gap were therefore justified. With a few decades of hindsight, we can now see that the industry simply careened onward and somehow never quite fell into this feared abyss. Perhaps we all just lowered our expectations of quality to “fix” the software gap. Or maybe Bell Labs’ gambit of seeding universities in the 1970s with C and Unix paid off with enough programmers in the 1980s. Whatever the reason, the pool of people ready to dive into Ada and the 432’s new mindset was too small.
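A toy sketch of the object-protection idea described above: every resource is an object, and a caller must present a credential that is checked at access time. This illustrates the concept only; it is not the 432’s actual hardware mechanism, and the class and token names are invented.

```python
# Toy illustration of capability-checked access: the "runtime check" happens
# on every access, and callers without a granted credential are refused.
class ProtectedObject:
    def __init__(self, payload):
        self._payload = payload
        self._capabilities = set()        # credentials allowed to access this object

    def grant(self, token):
        self._capabilities.add(token)

    def access(self, token):
        if token not in self._capabilities:   # the runtime check
            raise PermissionError("caller has not proved its bona fides")
        return self._payload

database = ProtectedObject({"records": 42})
program_token = object()                   # the calling program's credential
database.grant(program_token)
print(database.access(program_token))      # succeeds only with a granted token
```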

New paradigms are important. Our world views make it possible for us to be effective in an industry or academic environment, but they also place blinders on us. In the end, I concluded that it wasn’t a matter of identifying which world view is best—EE or CS. The best thing to do is to realize that both have important observations and intuitions to offer and to make sure the differences are valued and not derided. Society at large should go and do likewise. ■

Bob Colwell was Intel’s chief IA32 architect through the Pentium II, III, and 4 microprocessors. He is now an independent consultant. Contact him at [email protected].

The 30th IEEE Conference on Local Computer Networks (LCN) Sydney, Australia – November 15-17, 2005 Call for Papers

http://www.ieeelcn.org

The IEEE LCN conference is one of the premier conferences on the leading edge of practical computer networking. LCN is a highly interactive conference that enables an effective interchange of results and ideas among researchers, users, and product developers. We are targeting embedded networks, wireless networks, ubiquitous computing, heterogeneous networks, and security, as well as management aspects surrounding them. We encourage you to submit original papers describing research results or practical solutions. Paper topics include, but are not limited to:

• Embedded networks
• Wearable networks
• Wireless networks
• Mobility management
• Networks to the home
• High-speed networks
• Optical networks
• Ubiquitous computing
• Quality-of-Service
• Network security/reliability
• Adaptive applications
• Overlay networks

Authors are invited to submit full or short papers for presentation at the conference. Full papers (maximum of 8 camera-ready pages) should present novel perspectives within the general scope of the conference. Short papers are an opportunity to present preliminary or interim results and are limited to 2 camera-ready pages in length. All papers must include title, complete contact information for all authors, abstract, and a maximum of 5 keywords on the cover page. Papers must be submitted electronically. Manuscript submission instructions are available at the LCN web page at http://www.ieeelcn.org. Paper submission deadline is May 10, 2005, and notification of acceptance is July 28, 2005.

General Chair: Burkhard Stiller, University of Zürich and ETH Zurich, Switzerland, [email protected]

Program Chair: Hossam Hassanein Queen’s University Canada [email protected]

Program Co-Chair: Marcel Waldvogel University of Konstanz Germany [email protected]

32 & 16 YEARS AGO



MARCH 1973 GENE AMDAHL (p. 39). “‘The large computer market is the market that is being addressed most poorly by any of the competition today. It is also the most difficult market to address, and requires the most skill, technological know-how, and financial backing. Because this is so, if we can meet these challenges properly, we would reasonably expect to have considerably less “transient” competition.’” “So the Amdahl Corporation seems to have a comfortable backlog, adequate financing, and the considerable talents and reputation of their president. What they don’t have is a detailed product description and that all-important track record of successful delivery, installation, operation, and support. And a great deal hinges on Gene Amdahl’s judgment that IBM’s flank really is exposed.” THE FLEXIBLE DISKETTE (p. 43). “A versatile system for entering information into a computer—with a dramatically different look in data storage media—has been announced by International Business Machines Corporation. “The IBM 3740 data entry system incorporates a flexible disk cartridge for capturing data called the IBM Diskette. Weighing just over an ounce, the flexible diskette resembles a small phonograph record, yet can store as many as 242,000 characters—equivalent to a box and a half of 80-column cards.” “The IBM 3540 diskette input/output unit, also announced, can be attached to an IBM System/370, permitting data recorded on diskettes to be entered directly into the computer. This high speed unit can hold up to 20 diskettes at a time and read more than 3,000 data records per minute into a System/370. The 3540 also has the capability to receive information from the computer and record it on a diskette at more than 2,000 records per minute.” CALCULATOR (p. 44). “A powerful electronic calculator, small enough to fit into a shirt pocket yet capable of performing the most complex business and financial calculations, was announced recently by Hewlett-Packard Company. “The new HP-80 differs from the HP-35 (Hewlett-Packard’s original pocket-sized scientific calculator) in its built-in programming. The HP-35 solves functions with a single keystroke; the HP-80 solves equations with a single keystroke. Typical of the functions solved by the HP-35 with one keystroke are: log, ln, sin, cos, tan and x^y. Some of these functions are hard-wired into the HP-80 as subroutines within the single keystroke programs. In other words, the HP-35 has one level of programming, while the HP-80 has two levels.” INTEL 8008 SIMULATOR (p. 45). “Intel Corporation has introduced a Fortran IV program for simulating the operation of Intel’s 8008 computer-on-a-chip, a complete 8-bit CPU packaged in an 18-pin DIP.



“The program, designated INTERP/8, is available from Intel on magnetic tape. It is also available under time-share arrangements with General Electric Timeshare, Tymshare Corporation and Applied Logic Corporation.” “The addition of this simulator program completes a comprehensive set of hardware and software support to assist development of Intel’s MCS-8 micro computer systems. Support now includes prototyping system, PROM programmer, hardware assembler, Fortran IV assembler, Fortran IV simulator, several control programs and a system interface and control module.” SIMULATION COMPUTER (p. 45). “A new British simulation computer which is programmed and used in a similar way to an analog computer offers digital accuracy, reliability and repeatability. “Designed to replace conventional analog and hybrid equipment with an all-digital system, the Membrain MBD24 consists of a number of separate digital computing modules which are interconnected by means of a patchboard. Each unit is addressable from a keyboard to enable the setting of problem parameters such as gains, initial conditions, time-scale, non-linear functions and timers. Data is transmitted and received simultaneously by all units, the output of each unit being a 24-bit serial number which is updated once every 100 micro-seconds.” “Compared with an analog computer, programming and patching a problem is claimed to be easier and to take less time. Typically, less than half the number of operational elements and patch cords are needed.” MULTICS SYSTEM (p. 46). “Honeywell Inc. has introduced to commercial markets what it calls the most advanced, sophisticated computer system available in the world. “The system, known as Multics (Multiplexed Information and Computing Service) derives from a system that evolved through more than seven years of joint effort with the Massachusetts Institute of Technology. It is designed to operate as a general-purpose system serving a large community of users and their diverse needs.” “According to a Honeywell spokesman, Multics is the most powerful virtual memory system yet available. The Multics hardware and software, ring protection features, and paging and segmentation techniques provide ‘close to ideal’ on-line system characteristics for interactive problem solving.” TALKING COMPUTER (p. 47). “Over 5,000 blind people in the Boston area have a new friend in a talking computer system that allows them to type letter-perfect correspondence, proofread manuscripts, calculate bookkeeping problems, and write computer programs. “The first of these systems, known as an Audio-Response-Time-Shared (ARTS) Service Bureau, is operating at the Protestant Guild for the Blind in Watertown, Mass. It is
built around a Data General Corporation Nova 800 minicomputer. “A blind person telephones the Bureau from his office, home or school and transmits information to the computer via the telephone line by using a console resembling a standard typewriter. The talking computer responds to the typist in words and sentences telling him precisely what he has typed or giving him the results of indicated commands or computations.” CLASSROOM FEEDBACK (p. 47). “An $80,000 electronic student response system, designed to increase the efficiency of student-teacher communication, is now in operation at the University of Southern California School of Medicine. “The system, recently installed in the Louis B. Mayer Medical Teaching Center, allows individual student participation and response which would otherwise be impossible in the large classroom environment of the 500-seat auditorium. “As questions are presented by the instructor, a push-button device on the arm of 265 seats allows the students to pick one of five possible answers. The device immediately indicates to the student whether he is right or wrong, and indicates to the instructor the percentage of the class responding, and the percentage correct or incorrect for each possible answer.”

MARCH 1989 GEOMETRIC COMPUTATION (p. 31). “Despite great advances in geometric and solid modelling, practical implementation of the various geometric operations remains error-prone, and the goal of implementing correct and robust systems for carrying out geometric computation remains elusive.” “… the problem is variously characterized as a matter of achieving sufficient numerical precision, as a fundamental difficulty in dealing with interacting numeric and symbolic data, or as a problem of avoiding degenerate positions.” “In fact, these issues are interrelated and are rooted in the problem that objects conceptually belonging to a continuous domain are analyzed by algorithms doing discrete computation, treating a very large discrete domain—for instance, the set of all representable floating-point numbers—as if it were a continuous domain.” ROBOTIC EXCEPTION HANDLING (p. 43). “A robot program can be logically correct yet fail under abnormal conditions. A major goal of robotics research is to construct robust and reliable robot systems able to handle errors arising from abnormal operating conditions. Consequently, error handling and recovery is becoming increasingly important as researchers strive to construct reliable, autonomous robot systems for factory, space, underwater, and hazardous environments.”

SECURE DATABASES (p. 63). “A multilevel secure database management system is a system that is secure when shared by users from more than one clearance level and contains data of more than one sensitivity level. MLS/DBMSs evolved from multilevel secure computing systems. Present-day DBMSs are not built with adequate controls and mechanisms to enforce a multilevel security policy. Thus, an MLS/DBMS is different from a conventional DBMS in at least the following ways: “(1) Every data item controlled by an MLS/DBMS is classified in one of several sensitivity levels that may need to change with time. “(2) Access to data must be controlled on the basis of each user’s authorization to data at each sensitivity level.” 32-BIT EISA CONNECTOR (p. 72). “All key aspects of the Extended Industry Standard Architecture specification—electrical, mechanical, and system configuration details—have been incorporated and distributed to participating developer companies ….” “The specification now includes the finalization of mechanical details for the EISA 32-bit connector. The new connector will reputedly allow high-performance 32-bit expansion boards to be installed in PCs utilizing EISA when they become available later this year.” MICROCODE COPYRIGHT (p. 78). “Microcode is a computer program and therefore protected under copyright laws, US District Court Judge William F. Gray ruled February 7. The ruling came at the conclusion of a 4 1/2-year court battle in which Intel claimed that NEC’s V-series microprocessors violated the copyright on Intel’s 8086/88 microcode. “Although he decided that microcode is protected, Gray ruled in NEC’s favor in the main dispute, finding that Intel forfeited its copyright by allowing copies of the 8086/88 chip to be distributed without copyright notice.” SUPERMINICOMPUTER (p. 91). “Wang Laboratories claims that it has optimized its new superminicomputers, the VS 10000 Series, for high-volume computing by incorporating a new disk subsystem and system management software. The new models … are reportedly based on emitter-coupled logic technology with custom gate arrays, VLSI microprocessors, and the mainframe VS instruction set. “The VS 10000 systems use a 90-MHz clock rate and an I/O bus capacity of 30.3 Mbytes per second.” “Other features include 32 Kbytes of write-back cache memory in the CPU, up to 64 Mbytes of addressable memory with physical accommodations for up to 256 Mbytes, a 128-bit-wide memory bus that supports 128-bit read and 64-bit write operations, an independent 80286-based support control unit, and up to 15 intelligent I/O controllers.”

Editor: Neville Holmes; [email protected].


INDUSTRY TRENDS

Search Engines Tackle the Desktop Bernard Cole

As PC hard drives get bigger and new information sources become available, users will have much more data of different types, including multimedia, on their computers. This makes it increasingly difficult to find documents, e-mail messages, spreadsheets, audio clips, and other files. Current desktop-based search capabilities, such as those in Windows, are inadequate to meet this challenge. In response, major Web search providers and other companies are offering engines for searching PC hard drives. This requires new search approaches because desktop-based documents are generally structured differently than those on the Web.

A number of smaller vendors such as Accona Industrier, Autonomy, Blinkx, Copernic Technologies, dTSearch, and X1 Technologies are upgrading or providing free basic versions of their existing desktop search engines. Google has introduced a free beta version of an integrated desktop and Web search engine. Search providers Ask Jeeves, HotBot, Lycos, Microsoft, and Yahoo, as well as major Internet service providers such as AOL and Earthlink, are developing similar technologies.

One important factor in the competition is the desire by some Web search providers to use desktop search as a way to convince people to always or at least regularly use their portals. This would create a large user base that could encourage businesses to either advertise on the portals or buy other services.


In addition, some desktop search providers may want to generate revenue by charging businesses for sending their advertisements, targeted to user queries, along with responses. Such advertising has generated considerable revenue for Web search providers. A user could work with several desktop search engines, said Larry Grothaus, lead product manager for Microsoft’s MSN Desktop search products. “But practically speaking, the average consumer will stick with the most attractive, easy-to-use, and familiar alternative.” Some desktop search approaches present security and privacy problems. Nonetheless, search providers are pushing ahead and adding usability features to attract users.

DESKTOP SEARCH CHALLENGES

Desktop search features built into current operating systems, e-mail programs, and other applications have far fewer capabilities than Web search engines. They generally offer only simple keyword searches of a set of files, usually of a single file type.

On the Web, search engines can exploit information organized into a common HTML format with standardized ways of identifying various document elements. The engines can use this information, along with links to other documents, to make statistical guesses that increase the likelihood of returning relevant results.

The desktop is more complicated to search because Microsoft Word and other applications format different types of documents in various ways. In addition, desktop files can be either structured or unstructured. The function and meaning of structured files—such as information in a relational database or a text document with embedded tags—are clearly reflected in their structure. The easily identified structure makes searching such files easier. This is not the case with unstructured information, which includes natural-language documents, unformatted text files, speech, audio, images, and video. Therefore, desktop search engines must add capabilities in different ways than Web search applications.

The Boolean AND, OR, and NOT mechanisms and keyword-indexing algorithms by which searches are conducted on the desktop are similar to those used for years on the Web, said Daniel Burns, CEO of X1. However, desktop search engines face the additional challenge of recognizing which of the many file types they are dealing with. The engines also must derive whatever metadata authors have chosen to include in e-mail notes, database files, and other document types.

While conducting searches, desktop engines must be efficient and avoid imposing a substantial processing or memory load on the computer. “A Web search service can set aside entire server farms to do only searches, while the desktop search engine has to be as efficient as possible within the constraints of the user’s computing resources,” explained Susan Feldman, search-engine market analyst at IDC, a market research firm.

To gain these desktop search capabilities, some Web search vendors have either acquired or licensed desktop-based technology, noted Nelson Mattos, distinguished engineer and director of information integration at IBM. For example, Momma.com bought part of Copernic, Ask Jeeves purchased Tokaroo, and AOL and Yahoo have licensed X1’s technology.

DESKTOP SEARCH METHODOLOGIES

Desktop search engines employ one or more file crawler programs—similar to those used by Web search engines—that, upon installation, move through disk drives. As Figure 1 shows, the crawlers use an indexer to create an index of files; their location on a hard drive's hierarchical tree file structure; file names, types, and extensions (such as .doc or .jpg); and keywords. Once existing files are indexed, the crawler indexes new documents in real time. During searches, the engine matches queries to indexed items to find relevant files faster. The crawlers also collect metadata, which lets the engine access files more intelligently by providing additional search parameters, according to X1's Burns.

Several desktop search engines are integrated with the providers' Web engines and simultaneously run both types of searches on queries. These providers are putting considerable effort into desktop feature sets and interfaces that will be as familiar and easy to use as their Web-based counterparts, said IDC's Feldman.

SEARCH WARS

Because they want to reach the broadest number of users, all Web search providers entering the desktop arena work only with the market-leading Windows and Internet Explorer platforms, explained Ray Wagner, search-engine analyst at Gartner, a market research firm. Some providers that offer only desktop search engines have versions for other operating systems and browsers.

Much of the industry's attention is focused on three major companies: Google, Microsoft, and Yahoo.

[Figure 1 diagram components: a search form sends queries to the search engine, which looks up matches in the index file and returns formatted results to a search results display; an indexer crawls stored files (documents, HTML, images, audio), extracts information, and stores it in the index file.]

Figure 1. A typical desktop search engine includes an indexer application that crawls existing and new stored files and extracts information on keywords, metadata, size, and location in memory. This information is kept in an index file. Some systems use multiple indexes and indexers, to keep index files from getting too large to work with efficiently. When a user fills out a search form and sends a query, the engine searches the index, identifies the appropriate files, finds their locations on the hard drive, and displays the results.
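The crawl/index/query flow that Figure 1 describes can be illustrated with a small, hypothetical sketch. The directory walk, tokenizer, and Boolean AND matching below are simplifying assumptions for illustration, not any vendor's actual engine:

# Minimal sketch of the crawl/index/query flow in Figure 1 (illustration only).
import os
import re
from collections import defaultdict

def crawl_and_index(root):
    """Walk the directory tree and build an inverted index of keywords."""
    index = defaultdict(set)   # keyword -> set of file paths
    metadata = {}              # file path -> (extension, size in bytes)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            ext = os.path.splitext(name)[1].lower()
            try:
                size = os.path.getsize(path)
            except OSError:
                continue
            metadata[path] = (ext, size)
            # Index the file name itself; a real engine would also parse
            # file contents with a format-specific reader per file type.
            for token in re.findall(r"[a-z0-9]+", name.lower()):
                index[token].add(path)
    return index, metadata

def search(index, metadata, query):
    """Return files whose indexed keywords contain every query term (Boolean AND)."""
    terms = [t.lower() for t in query.split()]
    if not terms:
        return []
    hits = set.intersection(*(index.get(t, set()) for t in terms))
    return [(path, metadata[path]) for path in sorted(hits)]

if __name__ == "__main__":
    idx, meta = crawl_and_index(".")
    for path, (ext, size) in search(idx, meta, "report 2005"):
        print(f"{path}  ({ext}, {size} bytes)")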

Google Desktop Search

Google was the first major Web search company to release a desktop search beta application (http://desktop.google.com), a free, simple, lightweight (400-Kbyte) plug-in. The Google Desktop Search beta is configured as a local proxy server that stands in for the Web search engine. It performs desktop searches only via Internet Explorer. By default, GDS returns desktop and Web search results together, but users can configure it to return them separately. The GDS beta does not let users search a specific field within a file, such as e-mail messages' "To" and "From" fields.

Google expects to release a commercial GDS version this year. The company's search-related business model relies on revenue generated from real-time advertisements selected to match query terms and search results. With the Web and desktop search engines operating in tandem, the latter maintains a link with the former, which connects to a server responsible for providing advertising that relates to search terms.

GDS tracks and fully indexes Outlook and Outlook Express e-mail messages; AOL instant messages; the Internet Explorer history log; and Microsoft Word, Excel, and PowerPoint documents. Currently it does not index PDF files. And for nondocument files such as those with images, video, and audio, GDS indexes only file names. Reflecting GDS’s use of a Web server as the main mechanism for coordinating desktop and Web searches, the search engine indexes URLs for Web pages saved to the Internet Explorer favorites or history list, noted Nikhil Bhatla, product manager for desktop search at Google. GDS uses a single crawler that indexes all file types.

MSN Desktop Search

MSN's 400-Kbyte desktop search application, part of the MSN Toolbar Suite (http://beta.toolbar.msn.com), is closely integrated with Windows. When the search utility is available commercially, slated for later this year, users will see it as part of the MSN Deskbar, noted Grothaus.


The Deskbar, which appears on the Taskbar when Windows boots, contains buttons for direct access to MSN services. The engine also appears as MSN search bars within Outlook, Windows Explorer, and Internet Explorer.

Unlike Google's tool, MSN's application doesn't search local files and the Web at the same time. However, the MSN tool can index and search files on network-based drives, which Google's and Yahoo's engines don't. Grothaus said Microsoft doesn't plan to display advertisements along with the results of desktop searches.

The Deskbar tool enables searches for any supported file type—Outlook and Outlook Express e-mail; Microsoft Office's Word, Excel, PowerPoint, Calendar, Task, and Notes files; plain-text and PDF documents; MSN Messenger conversation logs; HTML file names; and many types of media files. By default, the Outlook-based toolbar searches only Outlook and Outlook Express e-mail files, and the Internet Explorer-based toolbar enables searches only of HTML and e-mail files. The Windows Explorer toolbar allows keyword searches of all drives and maintains a history of previous searches.

The MSN desktop search engine uses separate file crawlers, each coded to search only for video or documents or any other supported file type, according to Grothaus. On the desktop, he explained, it's important not to use more computing resources than necessary. MSN has tailored each desktop crawler to perform only the work necessary to do its job.
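A minimal sketch of that per-file-type crawler idea might look like the following; the crawler classes and extension lists are hypothetical illustrations, not MSN's actual design:

# Hypothetical sketch of dispatching files to type-specific crawlers so each
# crawler does only the work its file type needs (illustration only).
import os

class DocumentCrawler:
    extensions = {".doc", ".xls", ".ppt", ".txt", ".pdf"}
    def index(self, path):
        return {"type": "document", "path": path}  # a real crawler would parse the text

class MediaCrawler:
    extensions = {".mp3", ".wav", ".avi", ".jpg"}
    def index(self, path):
        return {"type": "media", "path": path}     # index the file name only, not contents

CRAWLERS = [DocumentCrawler(), MediaCrawler()]

def crawl(root):
    entries = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            ext = os.path.splitext(name)[1].lower()
            for crawler in CRAWLERS:
                if ext in crawler.extensions:
                    entries.append(crawler.index(os.path.join(dirpath, name)))
                    break
    return entries

if __name__ == "__main__":
    print(len(crawl(".")), "files indexed")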

Yahoo Desktop Search

The Yahoo Desktop Search beta (http://desktop.yahoo.com) is a standalone application that runs on Windows. Designed to look and feel like the Yahoo Web search engine, the YDS beta is built on X1's commercial tool. For the upcoming commercial version, Yahoo says, it intends to create additional customized features and layer them on top of the X1 technology it licensed.


Unlike some other desktop engines, YDS also searches compressed ZIP files and Adobe PDF, Illustrator, and Photoshop files. Users can find and play audio and video files without launching a separate media player. YDS can search only for Outlook and Outlook Express e-mails, unlike X1's engine, which also handles Eudora and Mozilla/Netscape mail.

A YDS convenience that neither GDS nor the MSN Desktop Search offers is the ability to preview files before opening them. Yahoo's tool searches HTML pages that users download from the Web and those they create locally. However, Yahoo says, YDS doesn't index Internet Explorer history or favorites files or the browser's hard-drive-based cache memory, to keep others from accessing Web files that previous users have viewed. Users can control and change settings to index only specific files or file types or files smaller than a given size.

In the future, Yahoo says, it hopes to make the desktop search tool particularly useful by tying it to the company's portal offerings, including its e-mail, calendar, photo, music, and chat services.

SECURITY AND PRIVACY ISSUES

Integrating desktop and Web search capabilities into the same application presents security and privacy challenges.

Security

Integrated search engines use a local proxy-server program on the desktop to coordinate the delivery of real-time targeted advertising from Web servers for placement along with search results. This could open a security hole in the connection between the PC and the Web, according to Daniel Wallach, Rice University assistant professor of computer science. "The more tightly the two are coupled," he said, "the more likely there are to be holes that hackers can breach."

For example, the local proxy server can let hackers use Java- or JavaScript-based man-in-the-middle attacks that redirect desktop results intended only for the user to an unauthorized location over the Internet, according to Wallach. Also, hackers in some cases could insert an applet to open a control channel within the proxy server, letting them issue queries to obtain private information. Providers are taking steps to block these attacks.

Some desktop search engines' use of the browser cache to look for previously viewed Web pages could lead to other security breaches. "Access to the browser cache through the integrated search interface is an extraordinary lure to potential hackers," said Richard Smith, Internet security analyst at ComputerBytesMan.com. Blinkx's desktop search engine prevents this by encrypting the cache, as well as communications between server and client.

Privacy

Some integrated search tools make stored personal files, including e-mail and AOL chat logs, viewable on the Web browser, which could prove embarrassing if someone else has access to the computer. And some tools also allow searches of recently viewed Web sites, a feature that has raised privacy concerns, particularly for users of shared PCs.

Microsoft's desktop tool doesn't index or allow searches of recently viewed Web sites, although it hasn't eliminated the possibility of doing so in the future, Grothaus said. YDS doesn't index the browser cache or the browser history or favorites files.

Also, Microsoft's tool searches for information based on each user who logs in. If one person uses a computer for personal banking, the next person logging into that machine can't access the sensitive data, Grothaus said.

According to Gartner's Wagner, the deciding factors in the marketplace competition between desktop search engines "will be the unique usability features they bring to the game and how well they deal with a number of perceived, rather than actual, security and privacy issues that have emerged."

However, said IBM's Mattos, search engine technology on the Web and the desktop needs radical changes to become truly useful. "On the Web, when a user puts in a sequence of keywords, even with advanced keyword search capabilities, he is liable to get a page telling him there are a million files that match the requirements," he said. "Searches on the desktop are not much better. They yield several hundred or several thousand. What is needed is something more fine-grained and able to pinpoint more exactly what you are looking for."


The goal of a desktop search is different from that of a Web search. On the Web, you are looking for information, not necessarily a specific document, explained X1's Burns. "On the desktop," he said, "you know that what you are searching for is there. You don't want to wade through pages and pages of possibilities to find it. You want it now—not several possibilities, but the right file."

Many industry observers are thus waiting to see the new XML-based WinFS file system (http://msdn.microsoft.com/data/winfs) that Microsoft plans to incorporate in future Windows versions. The company originally anticipated including WinFS in its upcoming Longhorn version of Windows but apparently won't be able to do so.

According to Blinkx cofounder Suranga Chandratillake, moving to an XML-based structure is difficult and won't occur for years.

The Web and local storage are growing rapidly, and most of the growing number of data types they contain work with traditional file structures, he explained. Imposing a new file structure on all this data is impractical, he said. He concluded, "The alternative that I favor and that offers the only hope of keeping up with the growth and the increasing diversity of information on both the desktop and the Web, is wrestling with data, finding clever ways to add metadata, and discovering better search mechanisms that work within the file structures with which we are already familiar." ■

Bernard Cole is a freelance technology writer based in Flagstaff, Arizona. Contact him at [email protected].

Editor: Lee Garber, Computer, [email protected]


TECHNOLOGY NEWS

Is It Time for Clockless Chips?

David Geer


Vendors are revisiting an old concept—the clockless chip—as they look for new processor approaches to work with the growing number of cellular phones, PDAs, and other high-performance, battery-powered devices.

Clockless processors, also called asynchronous or self-timed, don't use the oscillating crystal that serves as the regularly "ticking" clock that paces the work done by traditional synchronous processors. Rather than waiting for a clock tick, clockless-chip elements hand off the results of their work as soon as they are finished.

Recent breakthroughs have boosted clockless chips' performance, removing an important obstacle to their wider use. In addition to their efficient power use, a major advantage of clockless chips is the low electromagnetic interference (EMI) they generate. Both of these factors have increased the chips' reliability and robustness and have made them popular research subjects for applications such as pagers, smart cards, mobile devices, and cell phones.

Clockless chips have long been a subject of research at facilities such as the California Institute of Technology's Asynchronous VLSI Group (www.async.caltech.edu/) and the University of Manchester's Amulet project (www.cs.man.ac.uk/apt/projects/processors/amulet/). Now, after a few small efforts and false starts in the 1990s, companies such as Fulcrum Microsystems, Handshake Solutions, Sun Microsystems, and Theseus Logic are again looking to release commercial asynchronous chips, as the "A Wave of Clockless Chips" sidebar describes.


However, clockless chips still generate concerns—such as a lack of development tools and expertise as well as difficulties interfacing with synchronous chip technology—that proponents must address before their commercial use can be widespread.

PROBLEMS WITH CLOCKS

Clocked processors have dominated the computer industry since the 1960s because chip developers saw them as more reliable, capable of higher performance, and easier to design, test, and run than their clockless counterparts. The clock establishes a timing constraint within which all chip elements must work, and constraints can make design easier by reducing the number of potential decisions.

Clocked chips

The chip's clock is an oscillating crystal that vibrates at a regular frequency, depending on the voltage applied. This frequency is measured in gigahertz or megahertz. All the chip's work is synchronized via the clock, which sends its signals out along all circuits and controls the registers, the data flow, and the order in which the processor performs the necessary tasks.

An advantage of synchronous chips is that the order in which signals arrive doesn't matter. Signals can arrive at different times, but the register waits until the next clock tick before capturing them. As long as they all arrive before the next tick, the system can process them in the proper order. Designers thus don't have to worry about related issues, such as wire lengths, when working on chips.

And it is easier to determine the maximum performance of a clocked system. With these systems, calculating performance simply involves counting the number of clock cycles needed to complete an operation. Calculating performance is less defined with asynchronous designs. This is an important marketing consideration.

The downside

Clocks lead to several types of inefficiencies, including those shown in Figure 1, particularly as chips get larger and faster. Each tick must be long enough for signals to traverse even a chip's longest wires in one cycle. However, the tasks performed on parts of a chip that are close together finish well before a cycle but can't move on until the next tick.

As chips get bigger and more complex, it becomes more difficult for ticks to reach all elements, particularly as clocks get faster. To cope, designers are using increasingly complicated and expensive approaches, such as hierarchies of buses and circuits that adjust clock readings at various components. This approach could, for example, delay the start of a clock tick so that it occurs when circuits are ready to pass and receive data. Also, individual chip components can have their own clocks and communicate via buses, according to Ryan Jorgenson, Theseus's vice president of engineering. Clock ticks thus only have to cross individual components.

The clocks themselves consume power and produce heat.

In addition, in synchronous designs, registers use energy to switch so that they are ready to receive new data whenever the clock ticks, whether they have inputs to process or not. In asynchronous designs, gates switch only when they have inputs.

HOW CLOCKLESS CHIPS WORK

There are no purely asynchronous chips yet. Instead, today's clockless processors are actually clocked processors with asynchronous elements. Clockless elements use perfect clock gating, in which circuits operate only when they have work to do, not whenever a clock ticks. Instead of clock-based synchronization, local handshaking controls the passing of data between logic modules.

The asynchronous processor places the location of the stored data it wants to read onto the address bus and issues a request for the information. The memory reads the address off the bus, finds the information, and places it on the data bus. The memory then acknowledges that it has read the data. Finally, the processor grabs the information from the data bus.

Pipeline controls and FIFO sequencers move data and instructions around and keep them in the right order. According to Jorgenson, "Data arrives at any rate and leaves at any rate. When the arrival rate exceeds the departure rate, the circuit stalls the input until the output catches up."

The many handshakes themselves require more power than a clock's operations. However, clockless systems more than offset this because, unlike synchronous chips, each circuit uses power only when it performs work.
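The request/acknowledge exchange described above can be mimicked in software. The following is a rough sketch of the handshake between a "processor" and a "memory"; the signal names and stored data are invented, and real asynchronous circuits implement this in logic gates rather than threads:

# Rough software analogy of a request/acknowledge handshake (illustration only).
import threading

class Bus:
    def __init__(self):
        self.request = threading.Event()       # processor asserts: address is valid
        self.acknowledge = threading.Event()   # memory asserts: data is valid
        self.address = None
        self.data = None

MEMORY_CONTENTS = {0x10: "stored instruction word"}   # toy backing store

def memory(bus):
    bus.request.wait()                  # wait until the processor raises its request
    bus.data = MEMORY_CONTENTS.get(bus.address)
    bus.acknowledge.set()               # signal that the data bus now holds the result

def processor(bus, address, result):
    bus.address = address
    bus.request.set()                   # place the address on the bus, raise request
    bus.acknowledge.wait()              # wait for the memory's acknowledge
    result.append(bus.data)             # grab the information from the data bus

if __name__ == "__main__":
    bus, result = Bus(), []
    threads = [threading.Thread(target=memory, args=(bus,)),
               threading.Thread(target=processor, args=(bus, 0x10, result))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(result)                       # ['stored instruction word']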

CLOCKLESS ADVANTAGES

In synchronous designs, the data moves on every clock edge, causing voltage spikes. In clockless chips, data doesn't all move at the same time, which spreads out current flow, thereby minimizing the strength and frequency of spikes and emitting less EMI.

A Wave of Clockless Chips

In the near future, Handshake Solutions and ARM, a chip-design firm, plan to release a commercial asynchronous ARM core for use in devices such as smart cards, consumer electronics, and automotive applications, according to Handshake chief technical officer Ad Peeters.

Sun Microsystems is building a supercomputer with at least 100,000 processors, some using asynchronous circuits, noted Sun Fellow Jim Mitchell. Sun's UltraSPARC IIIi processor for servers and workstations also features asynchronous circuits, said Sun Distinguished Engineer Jo Ebergen.

Fulcrum Microsystems offers an asynchronous PivotPoint high-performance switch chip for multigigabit networking and storage devices, according to Mike Zeile, the company's vice president of marketing. The company has also developed clockless cores for use with embedded systems, he noted.

"Theseus Logic developed a clockless version of Motorola's 8-bit microcontroller with lower power consumption and reduced noise," said vice president of engineering Ryan Jorgenson. Theseus designed the device for use in battery-powered or signal-processing applications. "Also, Theseus and [medical-equipment provider] Medtronic have worked on a [clockless] chip for defibrillators and pacemakers," Jorgenson said.

[Figure 1 diagram labels: cycle time of clocked logic; cycle time of clockless logic; logic time; worst case–average case (logic execution time); clock jitter, skew margin; manufacturing margin. Source: Fulcrum Microsystems]

Figure 1. Clockless chips offer an advantage over their synchronous counterparts because they efficiently use cycle times. Synchronous processors must make sure they can complete each part of a computation in one clock tick. Thus, in addition to running their logic, the chips must add cycle time to compensate for how much longer it takes to run some operations than to run average operations (worst case – average case), variations in clock operations (jitter and skew), and manufacturing and environmental irregularities.
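As a rough back-of-the-envelope illustration of the caption's point, the numbers below are invented, not Fulcrum's data: a clocked design must budget its period for the worst case plus margins, while a self-timed element can advance as soon as its actual work finishes.

# Invented numbers, in nanoseconds, illustrating the cycle-time comparison
# described in the figure caption; not measured data.
average_logic_delay = 1.0      # time a typical operation actually needs
worst_case_logic_delay = 1.4   # slowest operation the clock must accommodate
jitter_skew_margin = 0.2
manufacturing_margin = 0.2

clocked_period = worst_case_logic_delay + jitter_skew_margin + manufacturing_margin
clockless_average = average_logic_delay   # the handshake completes when the work does

print(f"clocked period: {clocked_period:.1f} ns per operation")
print(f"clockless average: {clockless_average:.1f} ns per operation")
print(f"average-case speedup: {clocked_period / clockless_average:.2f}x")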

Less EMI reduces both noise-related errors within circuits and interference with nearby devices.

Power efficiency, responsiveness, robustness

Because asynchronous chips have no clock and each circuit powers up only when used, asynchronous processors use less energy than synchronous chips by providing only the voltage necessary for a particular operation.

According to Jorgenson, clockless chips are particularly energy-efficient for running video, audio, and other streaming applications—data-intensive programs that frequently cause synchronous processors to use considerable power. Streaming data applications have frequent periods of dead time—such as when there is no sound or when video frames change very little from their immediate predecessors—and little need for running error-correction logic. During this inactive time, asynchronous processors don't use much power.


Clockless processors activate only the circuits needed to handle data, thus they leave unused circuits ready to respond quickly to other demands. Asynchronous chips run cooler and have fewer and lower voltage spikes. Therefore, they are less likely to experience temperature-related problems and are more robust.

Because they use handshaking, clockless chips give data time to arrive and stabilize before circuits pass it on. This contributes to reliability because it avoids the rushed data handling that central clocks sometimes necessitate, according to University of Manchester Professor Steve Furber, who runs the Amulet project.

Simple, efficient design

Companies can develop logic modules without regard to compatibility with a central clock frequency, which makes the design process easier, according to Furber. Also, because asynchronous processors don't need specially designed modules that all work at the same clock frequency, they can use standard components. This enables simpler, faster design and assembly.

RECENT ADVANCES BOOST PERFORMANCE

Traditionally, asynchronous designs have had lackluster performance, even though their circuitry can handle data without waiting for clock ticks. According to Fulcrum cofounder Andrew Lines, most clockless chips have used combinational logic, an early, uncomplicated form of logic based on simple state recognition. However, combinational logic uses the larger and slower p-type transistors. This has typically led to large feature sizes and slow performance, particularly for complex clockless chips.

However, the recent use of both domino logic and the delay-insensitive mode in asynchronous processors has created a fast approach known as integrated pipelines mode.


Domino logic improves performance because a system can evaluate several lines of data at a time in one cycle, as opposed to the typical approach of handling one line in each cycle. Domino logic is also efficient because it acts only on data that has changed during processing, rather than acting on all data throughout the process.

The delay-insensitive mode allows an arbitrary time delay for logic blocks.

"Registers communicate at their fastest common speed. If one block is slow, the blocks that it communicates with slow down," said Jorgenson. This gives a system time to handle and validate data before passing it along, thereby reducing errors.

CLOCKLESS CHALLENGES

Asynchronous chips face a couple of important challenges.

Integrating clockless and clocked solutions

In today's clockless chips, asynchronous and synchronous circuitry must interface. Unlike synchronous processors, asynchronous chips don't complete instructions at times set by a clock. This variability can cause problems interfacing with synchronous systems, particularly with their memory and bus systems. Clocked components require that data bits be valid and arrive by each clock tick, whereas asynchronous components allow validation and arrival to occur at their own pace. This requires special circuits to align the asynchronous information with the synchronous system's clock, explained Mike Zeile, Fulcrum's vice president of marketing.

In some cases, asynchronous systems can try to mesh with synchronous systems by working with a clock. However, because the two systems are so different, this approach can fail.


Lack of tools and expertise

Because most chips use synchronous technology, there is a shortage of expertise, as well as coding and design tools, for clockless processors. According to Jorgenson, this forces clockless designers to either invent their own tools or adapt existing clocked tools, a potentially expensive and time-consuming process. Although manufacturers can use typical silicon-based fabrication to build asynchronous chips, the lack of design tools makes producing clockless processors more expensive, explained Intel Fellow Shekhar Borkar.

However, companies involved in asynchronous-processor design are beginning to release more tools. For example, to build clockless chips, Handshake uses its proprietary Haste programming language, as well as the Tangram compiler developed at Philips Research Laboratories. The University of Manchester has produced the Balsa Asynchronous Synthesis System, and Silistix Ltd. is commercializing clockless-design tools. "We have developed a complete suite of tools," said Professor Alain Martin, who heads Caltech's Asynchronous VLSI Group. "We are considering commercializing the tools through a startup (Situs Logic)."

There is also a shortage of asynchronous design expertise. Not only is there little opportunity for developers to gain experience with clockless chips, but colleges have fewer asynchronous design courses.

A HYBRID FUTURE

No company is likely to release a completely asynchronous chip in the near future. Thus, chip systems could feature clockless islands tied together by a main clock design that ticks only for data that passes between the sections. This adds the benefits of asynchronous design to synchronous chips.

On the other hand, University of Utah Professor Chris Myers contended, the industry will move gradually toward chip designs that are "globally asynchronous, locally synchronous." Synchronous islands would operate at different clock speeds using handshaking to communicate through an asynchronous buffer or fabric. According to Myers, distributing a clock signal across an entire processor is becoming difficult, so clocking would be used only to distribute the signal across smaller chip sections that communicate asynchronously.

Experts say synchronous chips' performance will continue to improve. Therefore, said Fulcrum's Lines, there may not be much demand for asynchronous chips to enhance performance. Furber, on the other hand, contended there will be demand for clockless chips because of their many advantages.

"Most of the research problems are resolved," Myers said. "We're left with development work. [We require] more design examples that prove the need for asynchronous design."

Said Intel's Borkar, "I'm not shy about using asynchronous chips. I'm here to serve the engineering community. But someone please prove their benefit to me."

Added Will Strauss, principal analyst at Forward Concepts, a market research firm, "I've yet to see a commercially successful clockless logic chip shipping in volume. It requires thinking outside the box to find volume applications that benefit from the clockless approach at a reasonable cost." ■


David Geer is a freelance technology writer based in Ashtabula, Ohio. Contact him at [email protected].


Editor: Lee Garber, Computer, [email protected]


NEWS BRIEFS

Finding Ways to Read and Search Handwritten Documents


Technologies for reading and searching digitized documents have helped academic researchers. However, no one has developed a truly effective engine that can work with handwritten documents, a potentially valuable source of information for many purposes. R. Manmatha, a research assistant professor with the Center for Intelligent Information Retrieval at the University of Massachusetts, Amherst, hopes to change this.

Handwritten documents are generally scanned as images of entire pages, not as individual characters that optical-character-recognition technology can recognize when searching for responses to queries.

Current handwriting recognition systems generally work well only with documents that contain specific types of information written in a consistent format, such as addresses. Thus, to read and search most handwritten documents, someone must type them up and create digitized versions, a costly and time-consuming process.

Manmatha's system scans handwritten documents as images. His research team first tried to match each written letter with a digital image of a letter.

Source: University of Massachusetts, Amherst

A University of Massachusetts researcher has developed a technique for reading and searching handwritten documents. The system works with a statistical model that learns to associate images of words with actual words in a probabilistic manner. The system first segments a document to obtain images of individual words. It compares the images with images it has encountered in the past to find a probable match. The system then identifies and tags the new image as the word associated with the likely match. It keeps these new identifications in an index for future reference.


However, handwriting variations—such as letter height and slant—made consistent accuracy difficult. Instead, Manmatha developed a system that examines entire words, which provides more context than individual letters for identifying written material. Using a statistical model, he explained, the system learns to associate images of words with actual words in a probabilistic manner and then stores this information in a database. The system compares an image of a word with images it has encountered in the past to find a likely match. It then identifies the new image as the word associated with the likely match.

To develop their system, Manmatha and his students obtained about 1,000 pages of US President George Washington's correspondence that had been scanned from microfilm by the Library of Congress. Even after training, the system's accuracy in identifying words ranges from 54 to 84 percent. Manmatha said refinements such as better image processing could make his technology more accurate. And making the system faster, perhaps by developing more efficient algorithms, will be particularly important so that it can work with large collections of documents, he noted.

Chris Sherman, associate editor at SearchEngineWatch.com, noted that research on searching handwritten documents has been taking place since the 1980s. There seems to be a limited demand for this technology, Sherman said. "I could see this being used for scholarly archives going back to eras when there weren't computers, but I don't see it as being a huge application." ■
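A toy sketch of the word-level matching idea described above might look like the following; the feature vectors, labels, and similarity measure are invented stand-ins for the statistical model the researchers actually use:

# Toy word-image matcher: each word image is reduced to a feature vector
# (e.g., width, height, slant estimate), and a new image is labeled with the
# word of its closest previously seen example. Features and data are invented.
import math

# (feature vector, transcribed word) pairs the system has "seen" before
LABELED_EXAMPLES = [
    ((42.0, 11.0, 0.15), "regiment"),
    ((18.0, 10.0, 0.10), "the"),
    ((35.0, 12.0, 0.30), "orders"),
]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(word_image_features):
    """Return the most probable word and a crude confidence score."""
    scored = sorted(LABELED_EXAMPLES,
                    key=lambda ex: distance(word_image_features, ex[0]))
    best_features, best_word = scored[0]
    confidence = 1.0 / (1.0 + distance(word_image_features, best_features))
    return best_word, confidence

if __name__ == "__main__":
    print(identify((40.5, 11.2, 0.18)))   # most likely 'regiment'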

A Gem of an Idea for Improving Chips

A US researcher is developing ways to use diamonds in chips to overcome some of silicon's limitations. Damon Jackson, a research physicist at the Lawrence Livermore National Laboratory, has used diamonds to house electronic circuitry. Jackson's research team lays a 10- to 50-micron layer of tungsten film on a one-third-carat diamond, adds circuitry, then grows a single crystal layer of synthetic diamond on top of the tungsten so that the wires are completely embedded.

The research team uses diamonds because they offer advantages over silicon in strength and their ability to resist high temperatures, improve cooling by conducting heat away from the circuitry, and withstand high pressure and radiation. This protection makes the system ideal for circuitry used in challenging environments such as space, Jackson said. Satellites, for example, experience considerable heat buildup, atmospheric pressure, and radiation.

However, there are significant obstacles to using diamonds in chips. First, diamonds are expensive, although widespread use in chips would eventually reduce the per-unit cost to some degree. Fabrication-related activities and research would also be costly, Jackson noted.

He is working with Yogesh Vohra, a physics professor at the University of Alabama at Birmingham who developed the chemical-vapor-deposition technique for growing industrial-quality diamonds by cooking methane, hydrogen, and oxygen gases in a very hot microwave oven. The advantages of this method, Vohra explained, are that the raw materials are inexpensive, the process scales well, it's easy to embed wiring, and the diamond's electrical properties can be changed via doping. And as more businesses use diamonds in manufacturing, their price will drop.


Researchers have found a way to house electronic circuitry on a diamond. Diamonds have advantages over silicon in strength and in the ability to resist heat and withstand high pressure and radiation. This makes “diamond chips” ideal for use in challenging environments such as space.

At some point, Vohra said, researchers may even develop diamond-based circuitry.

Pushkar Apte, vice president of technology programs at the Semiconductor Industry Association, a US trade association, expressed doubt about using diamonds in chips, saying that silicon is already a well-established technology. However, he added, "It may be used in some niche applications that demand thermal conductivity." ■

IBM Lets Open Source Developers Use 500 Patents

IBM has made 500 US software patents available royalty-free to open source developers. The patents cover 14 categories of technology, including e-commerce, storage, image processing, data handling, networking, and Internet communications.

"We wanted to give access to a broad range of patents," explained Marc Ehrlich, IBM's legal counsel for patent portfolio management. He said the patents represent "areas reflective of activity in the open source community," such as databases and processor cores.


IBM will continue to own the 500 patents but will allow fee-free use of their technologies in any software that meets the requirements of the Open Source Definition, managed and promoted by the nonprofit Open Source Initiative (www.opensource.org).

IBM, which has vigorously supported the open source operating system Linux, has expressed hope its action will establish a "patent commons" on which open source software developers can base their code without worrying about legal problems.


Traditionally, IBM and other companies amass patents, charge anyone who wants to work with them, and take legal action against anyone who uses them without paying royalties. IBM has 40,000 patents worldwide, including 25,000 in the US. It has obtained more US patents than any other company during each of the past 12 years.

However, Ehrlich said, IBM has realized that letting open source developers use some of its patents without charge will allow them to expand on the technologies in ways that the company might never do on its own. This could benefit IBM and others, he explained.

IBM could create new products or services for a fee on top of open source applications that use its patented technologies, said Navi Radjou, vice president of enterprise applications at Forrester Research, a market analysis firm. And, he said, selling versions of software that open source developers have based on its patents eliminates the need for IBM to pay its own developers for the new work.

In the process, Radjou noted, IBM's patent release lends credibility to the open source movement and gives IT departments more confidence in using open source products.

Biometrics Could Make Guns Safer

An innovative biometric system could keep children, thieves, and others from firing guns that don't belong to them. New Jersey Institute of Technology (NJIT) professor Timothy N. Chang and associate professor Michael L. Recce are developing the new approach, which is based on authorized users' grip patterns. In the past, researchers have worked with fingerprint scanners to recognize authorized shooters, and with systems in which users wear tokens that wirelessly transmit an unlocking code to a weapon.

In the NJIT system, 16 tiny sensors in a gun's grip measure the amount and pattern of finger and palm pressure as the user tries to squeeze the trigger. "The system doesn't care how you pull the gun out of the holster or how you handle it when you are not actually shooting it," explained Donald H. Sebastian, NJIT's senior vice president for research and development.

Unlike the static biometrics found in fingerprint scanning, NJIT's system looks at a pattern of movement over time. Shooters create a unique pattern of pressure when squeezing a weapon while firing it. During the first tenth of a second of trigger pull, the system can determine whether the shooter is authorized to use a gun, according to Sebastian. If not, the system will not let the gun fire.

Sensors measure the voltage patterns the system's circuitry generates when a user tries to pull the trigger. The system then converts the analog signals to digital patterns for analysis by specially designed software. All authorized users of a gun initially train the system to recognize the patterns they create when using the weapon. This information is stored for comparison any time someone tries to use the gun. Currently, a computer cord tethers the gun to a laptop that houses the biometric system's software. However, Chang said, the team plans to move the circuits from the laptop into the gun's grip.

The system presently has a 90 percent recognition rate. Sebastian said this is not precise enough for a commercial system but "90 percent accuracy out of 16 sensors is amazing." The research team plans to use up to 150 sensors to improve precision and may add biometric palm recognition as a backup. According to Sebastian, the technology may be ready for commercial release within three to five years. ■
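The grip-recognition idea in the sidebar above can be caricatured in a few lines of code; the sensor sampling, threshold, and matching rule here are invented for illustration and are not NJIT's algorithm:

# Invented illustration of grip-pattern matching: compare a new pressure
# time series (samples x sensors) against an enrolled template and authorize
# firing only if the patterns are close enough. Not the NJIT system's method.
NUM_SENSORS = 16
SAMPLES = 10          # readings captured at the start of the trigger pull
THRESHOLD = 0.05      # maximum mean squared difference allowed (arbitrary)

def mean_squared_difference(attempt, template):
    total, count = 0.0, 0
    for frame_a, frame_t in zip(attempt, template):
        for a, t in zip(frame_a, frame_t):
            total += (a - t) ** 2
            count += 1
    return total / count

def authorize(attempt, enrolled_templates):
    """Allow firing only if the grip pattern matches some enrolled user."""
    return any(mean_squared_difference(attempt, tmpl) < THRESHOLD
               for tmpl in enrolled_templates)

if __name__ == "__main__":
    template = [[0.5] * NUM_SENSORS for _ in range(SAMPLES)]   # enrolled user
    close = [[0.52] * NUM_SENSORS for _ in range(SAMPLES)]     # same user again
    far = [[0.9] * NUM_SENSORS for _ in range(SAMPLES)]        # different grip
    print(authorize(close, [template]))   # True
    print(authorize(far, [template]))     # False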


Because groups of independent developers create open source software, proponents sometimes have trouble knowing whether products include patented technologies. This could expose open source proponents and their products to legal action by patent holders. And finding patented software in open source products could force programmers to write new products and customers to switch to the new versions. However, Ehrlich noted, sophisticated open source projects are starting to adopt practices to ensure that the code being used is free from patent-related problems.

He said IBM may give open source developers royalty-free access to more patents in the future and hopes other companies will do the same. Businesses such as Novell and Linux vendor Red Hat have already either offered their technologies to open source developers or taken steps to protect users of their open source software.

"Unsurprisingly, the open source community thinks this is a good thing," said Eric S. Raymond, the Open Source Initiative's founder and president emeritus. "But the deeper message here is that IBM is saying by its actions that the patent system is broken. The top patent holder in the US, the biggest beneficiary of the system, has concluded that the best way it can encourage innovation is to voluntarily relinquish its rights."

According to Raymond, this should "give pause to those who believe strong intellectual-property laws and IP enforcement are vital to the health of the software industry." ■

News Briefs written by Linda Dailey Paulson, a freelance technology writer based in Ventura, California. Contact her at [email protected].

Editor: Lee Garber, Computer; [email protected]

December 18 - 21, 2005 • Goa, India

CALL FOR PARTICIPATION CALL FOR PAPERS

The 12th Annual International Conference on High Performance Computing (HiPC 2005) will be held in Goa, the Pearl of the Orient, during December 18-21, 2005. It will serve as a forum to present the current work by researchers from around the world, act as a venue to provide stimulating discussions, and highlight high performance computing activities in Asia. HiPC has a history of attracting participation from reputed researchers from all over the world. HiPC 2004 was held in Bangalore, India, and included 48 contributed papers selected from over 250 submissions from 13 countries.

HiPC 2005 will emphasize the design and analysis of high performance computing and networking systems and their scientific, engineering, and commercial applications. In addition to technical sessions consisting of contributed papers, the conference will include invited presentations, a poster session, tutorials, and vendor presentations.

IMPORTANT DATES

May 2, 2005        Conference Paper Due
May 9, 2005        Workshop Proposal Due
May 16, 2005       Tutorial Proposal Due
July 11, 2005      Notification of Acceptance/Rejection
August 15, 2005    Camera-Ready Paper Due
October 3, 2005    Poster/Presentation Summary Due

For further information visit the HiPC website at

www.hipc.org

Authors are invited to submit original unpublished manuscripts that demonstrate current research in all areas of high performance computing including design and analysis of parallel and distributed systems, embedded systems, and their applications. All submissions will be peer reviewed. The deadline for submitting the papers is May 2, 2005. Best paper awards will be given to outstanding contributed papers in two areas: a) Algorithms and Applications, and b) Systems.

GENERAL CO-CHAIRS
Manish Parashar, Rutgers University
V. Sridhar, Satyam Computer Services Ltd.

VICE GENERAL CHAIR
Rajendra V. Boppana, University of Texas at San Antonio

PROGRAM CHAIR
David A. Bader, University of New Mexico

STEERING CHAIR
Viktor K. Prasanna, University of Southern California

PROGRAM VICE CHAIRS
Algorithms: Michael A. Bender, SUNY Stony Brook, USA
Applications: Zhiwei Xu, Chinese Academy of Sciences, China
Architecture: Jose Duato, Technical University of Valencia, Spain
Communication Networks: Cristina M. Pinotti, University of Perugia, Italy
Systems Software: Satoshi Matsuoka, Tokyo Institute of Technology, Japan

CO-SPONSORED BY: IEEE Computer Society Technical Committee on Parallel Processing, ACM SIGARCH, Goa University, European Association for Theoretical Computer Science, IFIP Working Group on Concurrent Systems, National Association of Software and Services Companies (NASSCOM), Manufacturers Association for Information Technology (MAIT)

COMPUTING PRACTICES

Integrating Biological Research through Web Services

A case study demonstrates that Web services could be key to coordinating and standardizing incompatible applications in bioinformatics, an effort that is becoming increasingly critical to meaningful biological research.

Hong Tina Gao
Jane Huffman Hayes
University of Kentucky

Henry Cai
Big Lots



No longer only a field of experimental science, biology now uses computer science and information technology extensively across its many research areas. This increased reliance on technology has motivated the creation of bioinformatics, a discipline that researches, develops, or applies computational tools and approaches for expanding the use of biological, medical, behavioral, or health data.1 Because tools and approaches cover how to acquire, store, organize, archive, analyze, and visualize data,1 bioinformatics is a promising way to help researchers handle diverse data and applications more efficiently.

Unfortunately, at present, bioinformatics applications are largely incompatible, which means that researchers cannot cooperate in using them to solve important biological problems. The "The Integration Challenge" sidebar explains this problem in detail.

Web services might be a way to solve the integration problem because Web services technology provides a higher layer of abstraction that hides implementation details from applications. Using this technology, applications invoke other applications' functions through well-defined, easy-to-use interfaces. Each organization is free to concentrate on its own competence and still leverage the services that other research groups provide.

To test the potential of a Web services solution, we implemented a microarray data-mining system that uses Web services in drug discovery—a research process that attempts to identify new avenues for developing therapeutic drugs. Although our implementation focuses on a problem within the life sciences, we strongly believe that Web services could be a boon to any research field that requires analyzing volumes of data and conducting complex data mining.

WHY WEB SERVICES?

A Web service is a group of network-accessible operations that other systems can invoke through XML messages using the Simple Object Access Protocol (SOAP). The service can be a requester, provider, or registry. A service provider publishes its available services on a registry. A service requester looks through the registry to find the service it needs and consumes the service by binding with the corresponding service provider.

The services are independent of environment and implementation language. In biology research, these traits are advantageous because, as long as the interfaces remain unchanged, researchers need not modify the application or database or unify diverse schemas. Moreover, invoking a Web service can be as easy as checking an information directory and calling the right number. Given that data analysis is the most time-consuming step in many bioinformatics applications, this simplicity makes it tolerable to incur even the overhead of transmitting XML tags for explaining the data structures.


Web services also transform biology's current ad hoc software-development architecture into a component-based structure. Unlike technologies such as the common object request broker architecture (Corba), using Web services makes it easier to glue components together by exploiting existing standards and implementing underlying communication protocols instead of using a specifically defined transportation protocol for each technology. Corba assumes that its users will be competent programming professionals. Web services are oriented toward the less technical IT communities.

For biological researchers in highly specific subfields, the less technical solution is far better. A group annotating a human genome segment, for example, must precisely locate genes on genomes and assign genes to their protein products. To invoke services that implement the needed algorithms, the researchers simply acquire the services' descriptions from the registry and generate SOAP requests to those services. They don't have to know how to implement the algorithms.

More important, because integration occurs at the client instead of on the server side, service providers and requesters have more flexibility and autonomy. Each service provider can incrementally add value to the overall community by building Web services that integrate existing services.
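To make the request/response flow concrete, the sketch below hand-builds a SOAP envelope and posts it over HTTP with Python's standard library. The endpoint URL, XML namespace, operation name, and parameter are hypothetical placeholders, not a real bioinformatics service:

# Minimal SOAP request sketch using only the standard library. The service
# endpoint, namespace, and operation below are invented placeholders.
import urllib.request

ENDPOINT = "http://example.org/services/SequenceAlignment"   # hypothetical
ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <alignSequences xmlns="urn:example:alignment">
      <sequenceIds>SEQ001 SEQ002</sequenceIds>
    </alignSequences>
  </soap:Body>
</soap:Envelope>"""

def call_service():
    request = urllib.request.Request(
        ENDPOINT,
        data=ENVELOPE.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "urn:example:alignment#alignSequences"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8")   # XML result to be parsed by the client

if __name__ == "__main__":
    print(call_service())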

WEB SERVICES IN DRUG DISCOVERY

Our microarray data-mining system uses Web services to identify potential drug targets—molecules with problematic biological effects that cause diseases in animal models. The drug targets then serve as a basis for developing therapeutic human drugs.

With a better understanding of human genes, scientists can identify more drug targets and design more effective drugs, but traditional techniques—those based on one gene in one experiment—discover gene functions too slowly. Many high-throughput genomics technologies, such as microarrays and gene chips, could speed up gene-function analysis. Arranging gene products in a microarray lets researchers monitor the entire genome's expression on a single glass slide2 and gain insight into the interactions among thousands of genes simultaneously.

Drug discovery scenario

Drug discovery using a microarray involves a chain of large-scale data-processing modules and databases.

The Integration Challenge

Because of the Human Genome Project's great success, the current research trend in the life sciences is to understand the systemic functions of cells and organisms. Not only has the project increased data on gene and protein sequences, it has further diversified biology itself. Many study processes now involve multistep research, with each step answering a specific question. Researchers in vastly different organizations design and develop computing algorithms, software applications, and data stores—often with no thought as to how other researchers are doing the same tasks. Consequently, one interdisciplinary research question might require interactions with many incompatible databases and applications.

The study of E. coli enzymes is a good example. Researchers must visit EcoCyc, Swiss-Prot, Eco2DBase, and PDB to obtain information about the enzymes' catalytic activities, amino acid sequences, expression levels, and three-dimensional structures.1 This labor-intensive process can be even more tedious if the research requires studying thousands of genes. An integrated process that follows a certain research pathway is thus critical, and its successful evolution depends heavily on the compatibility of the applications involved.

The current incompatibility level of bioinformatics applications makes integration of data sources and programs a daunting hurdle. From cutting-edge genomic sequencing programs to high-throughput experimental data management and analysis platforms, computing is pervasive, yet individual groups do little to coordinate with one another. Instead, they develop programs and databases to meet their own needs, often with different languages and platforms and with tailored data formats that do not comply with other specifications. Moreover, because biology research lacks a well-established resource registry, no one can share information efficiently. Users from diverse backgrounds repeatedly generate scripts for merging the boundaries between upstream and downstream applications, wasting considerable time and effort.

The integration challenge is not just for those in the life sciences. Any discipline that deals with massive amounts of data and computing loads and geographically distributed people and resources faces the same problem: economics, Earth sciences, astronomy, mechanical engineering, and aerospace, for example. Solving the integration problem in the life sciences will provide vital benefits to these fields as well.

References

1. P.D. Karp, "Database Links Are a Foundation for Interoperability," Tibtech, vol. 14, 1996, pp. 273-279.

In our implementation, we wrapped each module in the data-analysis chain into a Web service and integrated them. We then built a portal to make this aggregated functionality available to users.


[Figure 1 diagram components: microarray experimental data; user application; biology registry; (a) gene-expression pattern-analysis service provider; (b) gene sequence search and retrieval service provider; (c) sequence alignment service provider; further experiments to find binding proteins as potential drug targets; numbered arrows 1-10 mark the analysis steps.]

Figure 1. Microarray data-analysis scenario for identifying targets in drug discovery. Components a, b, and c are three service providers that provide Web services for the data analysis related to drug discovery. The numbered lines are the steps in the analysis path. A researcher passes the data collected from a microarray experiment to a user application (1), which queries a biology service registry for the locations of service providers (2 and 3). The user application invokes the Web services provided by the three service providers (4 to 9). The user application transmits the result of an upstream service as the input of the next downstream service. Finally, the researcher passes the result of the last queried Web service to other drug discovery experiments (10).

Figure 1 shows the three Web services in the scenario and the data-analysis path to discover drug targets using these services. The path begins with the user finding the URLs of the necessary Web services from a biology service registry. She then queries those remote services to find similar fragments from the gene sequences that have similar expression patterns in the microarray experiments. Finally, she uses the fragments in additional experiments to identify drug targets.
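A client-side sketch of that analysis path might chain the three services as shown below; the registry entries and call_* functions are hypothetical stand-ins for the SOAP requests the portal actually issues:

# Hypothetical client-side orchestration of the Figure 1 analysis path.
# Each call_* function stands in for a SOAP request to a remote provider.

REGISTRY = {  # a biology service registry mapping service names to endpoints
    "pattern_analysis": "http://example.org/genes-at-work",      # provider (a)
    "sequence_retrieval": "http://example.org/entrez-gateway",   # provider (b)
    "sequence_alignment": "http://example.org/search-launcher",  # provider (c)
}

def call_pattern_analysis(endpoint, expression_data):
    return ["pattern-1", "pattern-2"]            # expression patterns of interest (placeholder)

def call_sequence_retrieval(endpoint, patterns):
    return {"pattern-1": "ACGT...", "pattern-2": "GGCA..."}   # gene sequences (placeholder)

def call_sequence_alignment(endpoint, sequences):
    return "aligned fragments for follow-up experiments"      # placeholder result

def run_drug_target_pipeline(expression_data):
    patterns = call_pattern_analysis(REGISTRY["pattern_analysis"], expression_data)
    sequences = call_sequence_retrieval(REGISTRY["sequence_retrieval"], patterns)
    return call_sequence_alignment(REGISTRY["sequence_alignment"], sequences)

if __name__ == "__main__":
    print(run_drug_target_pipeline("microarray_experiment.csv"))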

Scenario implementation

We decided to implement scenario steps in three applications that use different, largely incompatible algorithms or databases to accomplish their tasks. We reviewed only applications we felt we could easily translate into Web services. The candidate applications had to have

• good encapsulation of implementation details,
• clear interface definitions, and
• simple input and output data structures.

Table 1 lists the three applications we selected: IBM's Genes@Work,3 the National Center for Biotechnology Information's Entrez Databases,4 and the Baylor College of Medicine's Search Launcher.5


We then built a Web service for each scenario step using a mix of green-field and bottom-up strategies.6 The green-field strategy is a from-scratch implementation of both the Web service's description and its functionality. The bottom-up approach is similar except that the functionality it exposes as a Web service already exists.

Next we rewrote the interfaces for each application. For Genes@Work, the interface takes as its input the gene-expression data set file and the corresponding phenotype file for each microarray experiment and returns an expression pattern file. We adopted SOAP with attachment technology to transfer the files.

Finally, we wrote the service interface and implementation descriptions. Many tools are available to help generate these definitions, but we used the Java2 Web Services Description Language (WSDL) tool in IBM's Web services toolkit.6 We published the service interface and implementation in a local registry suitable for testing and for restricting user access to services. In some cases, the service provider might want to make the services available to the entire community. If so, a public registry, such as universal description discovery and integration (UDDI) or a special biological registry, would be more appropriate.
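The kind of wrapper interface described for Genes@Work could be sketched roughly as follows; the operation name, parameters, and file handling are assumptions for illustration rather than the interface the authors actually published:

# Hypothetical sketch of a bottom-up wrapper that exposes an existing
# pattern-analysis routine as a service operation taking file attachments.

def analyze_patterns(expression_file_bytes: bytes, phenotype_file_bytes: bytes) -> bytes:
    """Placeholder for the existing analysis code being wrapped."""
    return b"pattern-1\npattern-2\n"   # expression pattern file contents

def find_gene_expression_patterns(attachments: dict) -> dict:
    """Service operation: consume two input attachments, return one result attachment."""
    result = analyze_patterns(attachments["expression_dataset"],
                              attachments["phenotype"])
    return {"expression_patterns": result}

if __name__ == "__main__":
    reply = find_gene_expression_patterns({
        "expression_dataset": b"...microarray data...",
        "phenotype": b"...phenotype labels...",
    })
    print(reply["expression_patterns"].decode())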

Table 1. Selected applications for the drug discovery scenario.

and integration (UDDI) or a special biological registry would be more appropriate. We also built a client platform to consume the services. Users can invoke the three services independently as network-accessible stand-alone applications or as a group, to perform the tasks in our scenario. This system provides more flexibility for researchers to use the functionality in the three applications we chose, while data integration through Web services streamlines the entire analysis process.
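As an illustration of the bottom-up strategy, the sketch below shows what a Java service endpoint interface for the Genes@Work wrapper might look like before being fed to a Java2WSDL-style tool. The interface and method names are hypothetical, not taken from the actual implementation; DataHandler parameters are how JAX-RPC-era toolkits such as Apache Axis typically map files onto SOAP attachments.

import java.rmi.Remote;
import java.rmi.RemoteException;
import javax.activation.DataHandler;

/*
 * Hypothetical JAX-RPC service endpoint interface for the Genes@Work
 * wrapper. A Java2WSDL-style tool can generate the WSDL service
 * description from an interface like this; each DataHandler parameter
 * or return value travels as a SOAP attachment rather than inline XML.
 */
public interface PatternAnalysisService extends Remote {

    /*
     * Analyzes one microarray experiment: takes the gene-expression
     * data set file and the corresponding phenotype file, and returns
     * the discovered expression-pattern file.
     */
    DataHandler analyzePatterns(DataHandler expressionDataSet,
                                DataHandler phenotypeFile)
            throws RemoteException;
}

Because the files travel as MIME parts rather than inline XML, large expression data sets avoid the cost of base64 encoding inside the SOAP envelope.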

Results

With the Web services, service registry, and Web portal we built, we could smoothly pass the experimental data from microarray experiments to individual service providers and perform the analysis in Figure 1. Although we were the only users who performed pilot tests with this system, we believe anyone doing drug discovery research could use it with very little computer training. Users must understand only generic operations, such as loading data files, entering the index of the gene-expression patterns of interest, and selecting the genes whose sequences should be retrieved. They need not write their own ad hoc scripts to transform data among incompatible programs and versions; our system handles the many time-consuming and tedious transformations between data formats. The remaining time is simply what each service provider's analysis program takes to execute, plus network delays.

Using the traditional approach, it could take a user an hour to set up the Genes@Work stand-alone application and a few more hours to manually transform its results into input that the gene-identification step can read. This does not include the mechanics of cutting and pasting, which can take 5 to 10 minutes per operation, depending on how many patterns a user must query. With our system, it typically takes approximately 10 minutes from uploading the microarray data to clustering the gene sequences of the expression patterns of interest. Considering that a microarray experiment usually includes a few hundred to several thousand genes, our system saves significant time, most of which would otherwise be spent on tedious tasks.

Table 1. Selected applications for the drug discovery scenario.

Application        Scenario component   Description
Genes@Work         a                    A package that automatically analyzes gene-expression patterns from the data that microarray technologies obtain
Entrez Databases   b                    A search and retrieval system that stores nucleotide sequences, protein sequences, and other sequences
Search Launcher    c                    A project that aids in clustering gene and protein sequences

Lessons learned

In conducting this project, we discovered two keystones to the successful and widespread use of Web services in biological research.

Well-defined interfaces. To support a service-oriented architecture, each software component must have a well-defined function and interface. If the functions of different components are orthogonal, software coupling will be minimal, which makes it easier to transform these components into Web services that many kinds of researchers find acceptable. One way to achieve clean functions and a decoupled software architecture is to use object-oriented design and systematic analysis in conjunction with design patterns. The bioinformatics software we worked with, for example, required much refactoring to separate the calculation logic from its Java Swing interface. Had its implementers followed the model-view-controller design pattern instead of coupling the presentation logic and the business logic, we might have been able to extract a clear interface much more easily; the remaining work would then have been simply to wrap that interface as a Web service.

Standardization. Only by using a standard and widely agreed-on vocabulary can a given service requester and provider understand each other. If the biological research community is to realize the full benefit of Web services, it will have to make more progress in standardizing data formats and ontologies. Many researchers have already taken steps in this direction, such as developing the Minimum Information about a Microarray Experiment (MIAME) standard for microarray data7 and building ontologies that apply across the life sciences and accommodate growth and change in knowledge about gene and protein roles in cells.8 Standardization can also aid in creating the corresponding data serializers and deserializers more systematically. Although this could take time, researchers need not wait until the community has defined every standard in detail: With Web services, they can transmit highly complicated data as attachments to SOAP messages, which saves the bandwidth otherwise consumed by XML tags.

In addition to working on vocabularies and data formats, standardization must formalize service descriptions so that registries can assign all Web services that address the same problem to the same category. A service requester can then easily identify all the available services that can solve a problem, and it can invoke different services that provide the same interface without modifying the client-side programs.

Registry standardization is also critical. The data objects and service descriptions in a registry give software developers clues about how others have defined services. The problem for biology researchers is that, although registries such as UDDI store many services, most are unrelated to biology research. To avoid wasting time sifting through irrelevant services, biologists need registries built specifically for biology and its subfields. These registries should have a hierarchical structure, with the top-level registry mirroring the registries of other scientific fields.

Finally, help from widely coordinated organizations can be invaluable. The Web Services Interoperability Organization (www.ws-i.org), for example, provides guidance, best practices, and resources for developing Web services solutions across standards organizations. Its first release of the WS-I Basic Profile, a set of nonproprietary Web services specifications, represents a milestone for Web services interoperability.
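As a minimal illustration of the refactoring point under "Well-defined interfaces," the sketch below separates a calculation class from a thin Swing view so that the model alone could later be wrapped as a Web service. The class and method names are invented for this example and do not come from Genes@Work.

import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JTextArea;

/* Pure calculation logic: no Swing dependencies, so it can be exposed
   as a Web service without dragging the GUI along. */
class ExpressionPatternModel {
    String findPatterns(String expressionData) {
        // A real implementation would run the pattern-discovery algorithm here.
        return "patterns found in: " + expressionData;
    }
}

/* Thin Swing view/controller that only forwards user events to the model. */
public class ExpressionPatternView {
    public static void main(String[] args) {
        final ExpressionPatternModel model = new ExpressionPatternModel();
        final JTextArea output = new JTextArea(10, 40);
        JButton run = new JButton("Analyze");

        run.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                output.setText(model.findPatterns("demo data set"));
            }
        });

        JFrame frame = new JFrame("Pattern analysis");
        frame.getContentPane().add(run, BorderLayout.NORTH);
        frame.getContentPane().add(output, BorderLayout.CENTER);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.pack();
        frame.setVisible(true);
    }
}

With the calculation logic isolated in this way, wrapping it as a Web service reduces to exposing the model class through an interface like the one sketched earlier.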

WORK IN PROGRESS

Evolving standardization will not be trivial, but adopting Web services technology is a solid first step because such a move can have a snowball effect: The more people are willing to provide their resources in Web services format, the more attractive the strategy becomes for others, and the more favorably users and providers will view standardization in general.

Some health agencies have already taken this step. The National Cancer Institute, for example, provides a group of legacy Web services for direct access to information and has a list of applications already wrapped into Web services (http://cabio.nci.nih.gov/soap/services/index.html). In 2003, Hideaki Sugawara and colleagues defined DDBJ-XML and developed a SOAP server for the DNA Data Bank of Japan (DDBJ); their work is Japan's earliest published effort using Web services in the life sciences.9

Some organizations that have already invested in an integration technology can use Web services as an enhancement. The EMBL Nucleotide Sequence Database provides an extended version of EMBL (http://www.ebi.ac.uk/xembl/) that can run as a Web service using SOAP and WSDL; XEMBL users can keep their original CORBA framework.

As our case study shows, Web services have great potential for solving the data- and application-integration problems in biology, particularly for time-consuming data analysis. The wider application of this technology depends greatly on the biological research community's willingness to pursue standardization, including building ontologies, developing biology-specific registries, and defining service interfaces for well-known functions. The community will also need to develop more frequently used services and address security and quality-of-service concerns. Fortunately, a huge volume of existing applications and modules exists on which to base these efforts, and each successful Web services implementation will further them. Clearly much work lies ahead, but the efficiency payoff should be well worth the effort. In the interim, researchers who spend even a short time becoming familiar with the service descriptions will benefit. This familiarity will expedite the spread of the technology, increase the number of services provided, and eventually raise the quality and quantity of available Web services. ■


References

1. M. Huerta et al., "NIH Working Definition of Bioinformatics and Computational Biology," The Biomedical Information Science and Technology Initiative Consortium (BISTIC) Definition Committee of the National Institutes of Health (NIH), 17 July 2000; www.bisti.nih.gov/CompuBioDef.pdf.
2. M. Schena et al., "Quantitative Monitoring of Gene Expression Patterns with a Complementary DNA Microarray," Science, vol. 270, no. 5232, 1995, pp. 467-470.
3. A. Califano, G. Stolovitzky, and Y. Tu, "Analysis of Gene Expression Microarrays for Phenotype Classification," Proc. Int'l Conf. Intelligent Systems for Molecular Biology, vol. 8, AAAI Press, 2000, pp. 75-85.
4. D.L. Wheeler et al., "Database Resources of the National Center for Biotechnology," Nucleic Acids Research, vol. 31, no. 1, 2003, pp. 28-33.
5. R.F. Smith et al., "BCM Search Launcher—An Integrated Interface to Molecular Biology Database Search and Analysis Services Available on the World Wide Web," Genome Research, May 1996, pp. 454-462.
6. J. Snell, "Implementing Web Services with the WSTK v3.3: Part 1," IBM DeveloperWorks, Dec. 2002, pp. 5-6.
7. A. Brazma et al., "Minimum Information about a Microarray Experiment (MIAME)—Toward Standards for Microarray Data," Nature Genetics, vol. 29, 2001, pp. 365-371.
8. M. Ashburner et al., "Gene Ontology: Tool for the Unification of Biology," Nature Genetics, vol. 25, 2000, pp. 25-29.
9. H. Sugawara and S. Miyazaki, "Biological SOAP Servers and Web Services Provided by the Public Sequence Data Bank," Nucleic Acids Research, vol. 31, 2003, pp. 3836-3839.

Hong Tina Gao is a software engineer at Lexmark. Her research interests include bioinformatics, software maintenance, testing, software architecture, and Web engineering. She received an MS in computer science from the University of Kentucky and an MS in molecular biology from Shanghai Jiao Tong University in China. Contact her at tgao@lexmark.com.

Jane Huffman Hayes is an assistant professor of computer science at the University of Kentucky. Her research interests include software verification and validation, requirements engineering, and software maintenance. Huffman Hayes received a PhD in information technology from George Mason University. Contact her at [email protected].

Henry Cai is a senior application analyst at Big Lots. His research interests include software engineering, supply chain management, and e-commerce. Cai received an MS in computer science from the University of Kentucky. Contact him at [email protected].
