Flexible, Reliable, Distributed Software [2 ed.] 1565694342

Flexible, Reliable, Distributed Software guides readers through the process of developing high quality distributed softw

223 32 2MB

English Pages 120 Year 2018

Report DMCA / Copyright

DOWNLOAD PDF FILE

Table of contents :
Table of Contents
Flexible, Reliable, Distributed Software
TeleMed Case
Learning Objectives
TeleMed Stories
A Role Based Design
A Server side Role based Design
Related Work
Basic Concepts
The Issues Involved
Elements of a Solution
Tying Things Together
Summary of Key Concepts
Review Questions
Broker Part One
Learning Objectives
The Problem
The Broker Pattern
Analysis
Summary of Key Concepts
Review Questions
Implementing Broker
Learning Objectives
Architectural Concerns
Domain Layer
Client Side
Server side
Test-driven development of Distributed Systems
Using the Broker library
Summary of Key Concepts
Broker Part Two
Learning Objectives
Limitations in the TeleMed Case
Game Lobby Stories
Walkthrough of a Solution
Summary of Key Concepts
Review Questions
HTTP
Learning Objectives
A HTTP Walk-through
A HTTP Case Study: PasteBin
Broker using HTTP
Summary of Key Concepts
Review Questions
REST
Learning Objectives
The Demise of Broker Architectures
Representational State Transfer (REST)
Richardson's model for Levels in REST
The Architectural Style
Level 1 REST: TeleMed
Documenting REST API
Continued REST Design for TeleMed
Implementing REST based TeleMed
Level 2 REST: GameLobby
Testability and TDD of REST designs
Summary of Key Concepts
Review Questions
Bibliography
Recommend Papers

Flexible, Reliable, Distributed Software [2 ed.]
 1565694342

  • 0 0 0
  • Like this paper and download? You can publish your own PDF file online for free in a few minutes! Sign Up
File loading please wait...
Citation preview

Flexible, Reliable, Distributed Software Still Using Patterns and Agile Development Henrik Bærbak Christensen This book is for sale at http://leanpub.com/frds This version was published on 2020-08-12

This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and many iterations to get reader feedback, pivot until you have the right book and build traction once you do. © 2018 - 2020 Henrik Bærbak Christensen

CONTENTS

Contents Flexible, Reliable, Distributed Software . . . . . . . . . . . . . . . . . . . .

ii

1. TeleMed Case . . . . . . . . . . . . . . . . 1.1 Learning Objectives . . . . . . . . 1.2 TeleMed Stories . . . . . . . . . . . 1.3 A Role Based Design . . . . . . . . 1.4 A Server side Role based Design . 1.5 Related Work . . . . . . . . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

1 1 1 2 4 8

2. Basic Concepts . . . . . . . . . . . 2.1 The Issues Involved . . . . 2.2 Elements of a Solution . . 2.3 Tying Things Together . . . 2.4 Summary of Key Concepts 2.5 Review Questions . . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. 9 . 10 . 12 . 19 . 20 . 21

3. Broker Part One . . . . . . . . . . 3.1 Learning Objectives . . . . 3.2 The Problem . . . . . . . . . 3.3 The Broker Pattern . . . . . 3.4 Analysis . . . . . . . . . . . . 3.5 Summary of Key Concepts 3.6 Review Questions . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. 23 . 23 . 23 . 24 . 28 . 33 . 33

4. Implementing Broker . . . . . . . . . . . . . . . . . . . . . 4.1 Learning Objectives . . . . . . . . . . . . . . . . . . . 4.2 Architectural Concerns . . . . . . . . . . . . . . . . . 4.3 Domain Layer . . . . . . . . . . . . . . . . . . . . . . . 4.4 Client Side . . . . . . . . . . . . . . . . . . . . . . . . . 4.5 Server side . . . . . . . . . . . . . . . . . . . . . . . . . 4.6 Test-driven development of Distributed Systems 4.7 Using the Broker library . . . . . . . . . . . . . . . . 4.8 Summary of Key Concepts . . . . . . . . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. 36 . 36 . 36 . 36 . 38 . 43 . 47 . 52 . 53

5. Broker Part Two . . . . . . . . . . . . . . 5.1 Learning Objectives . . . . . . . . 5.2 Limitations in the TeleMed Case 5.3 Game Lobby Stories . . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

2nd Edition

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

54 54 54 54

CONTENTS

5.4 5.5 5.6

Walkthrough of a Solution . . . . . . . . . . . . . . . . . . . . . . . 59 Summary of Key Concepts . . . . . . . . . . . . . . . . . . . . . . . 70 Review Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

6. HTTP . . . . . . . . . . . . . . . . . . . 6.1 Learning Objectives . . . . . . 6.2 A HTTP Walk-through . . . . . 6.3 A HTTP Case Study: PasteBin 6.4 Broker using HTTP . . . . . . . 6.5 Summary of Key Concepts . . 6.6 Review Questions . . . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

75 75 75 80 84 87 87

7. REST . . . . . . . . . . . . . . . . . . . . . . . . . . 7.1 Learning Objectives . . . . . . . . . . . . 7.2 The Demise of Broker Architectures . . 7.3 Representational State Transfer (REST) 7.4 Richardson’s model for Levels in REST 7.5 The Architectural Style . . . . . . . . . . 7.6 Level 1 REST: TeleMed . . . . . . . . . . . 7.7 Documenting REST API . . . . . . . . . . 7.8 Continued REST Design for TeleMed . . 7.9 Implementing REST based TeleMed . . 7.10 Level 2 REST: GameLobby . . . . . . . . 7.11 Testability and TDD of REST designs . . 7.12 Summary of Key Concepts . . . . . . . . 7.13 Review Questions . . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

88 88 88 90 90 91 92 95 96 98 100 111 113 113

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

2nd Edition

i

CONTENTS

Second Edition This second edition represents a major rewrite. While the core contents and structure remain the same, all source code within the book has been updated to reflect release 2.0 of the FRDS.Broker library. Release 2.0 has cleaned up the Broker code to have a clear separation between the marshalling and interprocess communication layers of the architecture. All code fragments in the edition have been updated to reflect that.

Front Picture The front picture shows the first pyramid ever built, Pharaoh Djoser’s step pyramid and the pyramid complex surrounding it. A real image of the pyramid was featured on my first book Flexible, Reliable Software but this time I acknowledge UbiSoft’s game Assassins Creed Origins which beside being an intriguing game also has invested many resources in providing a historical correct depiction of ancient Egypt, and allowing players to take pictures of it. Djoser’s step pyramid was built around 2600 BC by Djoser’s chancellor, Imhotep. Imhotep was the first engineer and architect in history known by name, and was deified almost 2000 years after his death. Imhotep’s ingenious idea was to reuse the existing tomb design of a flat-roofed rectangular structure, the mastaba, and create the royal tomb by building six such mastabas of decreasing size atop one another. You can still admire the pyramid at Saqqara, Egypt, today, more than 4600 years after it was completed. It is a design that has stood the test of time from an architect who was deified - a worthy role model for all who create designs and realize them.

Acknowledgments Quite a few people have contributed to the present version of the book. I would like to thank: Kira Kutscher, Emil Vanting, Emil Christensen, Ole Høiriis Iversen, Chris Christiansen, Sune Chung Jepsen, … - Henrik Bærbak Christensen

2nd Edition

Flexible, Reliable, Distributed Software This book focuses on developing software for distributed computing using the principles, methods, and practices introduced in my previous book Flexible, Reliable Software - Using Patterns and Agile Development, published by CRC Press, 2010. Chapter TeleMed Case. The PayStation was the central case study for introducing and discussing test-driven development and design patterns in my original book. In this chapter I will introduce TeleMed, a small tele medical systems for monitoring patients in their homes, that in a similar manner serves as case study for distributed computing. Chapter Basic Concepts. This chapter introduces basic concepts and outlines the many challenges of distributed computing. Chapter Broker Part One. The Broker pattern is a central architectural pattern for implementing method calls on remote objects. In this chapter, I will go through the basics of the pattern. Chapter Implementing Broker. In this chapter, I will highlight central aspects of the Broker implementation, using the TeleMed case study as a foundation. I also show how to use test doubles to allow easy testing of broker based systems. Chapter Broker Part Two. Adhering to the TDD philosophy of take small steps, this chapter completes the Broker pattern. I will deal with the issues of server created objects as well as handling more than one type of remote object. Chapter HTTP. In this chapter I will detail some of the basic concepts and techniques in the hypter text transfer protocol that is the engine behind WWW, the world wide web. It turns out that HTTP is an excellent technology to build Broker upon. Chapter REST. Representational State Transfer, REST, is an architectural pattern for large-scale distributed system based upon HTTP. This chapter presents the pattern and compares it to the Broker.

1. TeleMed Case 1.1 Learning Objectives To ground the discussion on distributed computing, I will present a simple case study—similar to how the Pay Station evolved in the beginning of my previous book Flexible, Reliable Software. Our case study is monitoring patients in their homes through a system called TeleMed.

1.2 TeleMed Stories The context for our case study is tele medicine or patient monitoring in the home. Consider a person who has suffered a serious heart attack or have some other serious condition that requires the blood pressure to be monitored at regular intervals. One possible solution is to let the patient go to an outpatient clinic, let a nurse measure the blood pressure, and enter the measured values for systolic and diastolic blood pressure into an electronic patient record. However, this is obviously rather disruptive for the patient if daily measurements are required, and costly in terms of hospital staffing. Thus, an alternative solution, which I will unfold here, is to provide the patient with a blood pressure meter, let the patient make the measurement daily, and let the meter upload the measurement automatically to a centralized electronic patient record. As an alternative, the patient may enter the readings from the meter into an app on a smart phone or tablet for upload. To rephrase, I come up with the following two stories: Story 1: Upload a blood pressure measurement. A patient starts her daily routine of monitoring her blood pressure by starting the blood pressure meter. Once it is ready, she puts the inflatable cuff, as seen in the figure below, on her arm and presses the start button on the meter. After a short while, the meter displays the measured values: 126 mmHg systolic pressure, and 70 mmHg diastolic pressure. The patient then enters these values in the HomeClient application, which starts the upload of the measured values to the TeleMed server. A short while later the application reports that the upload have been successful.

2

TeleMed Case

A blood pressure measurement device.

Story 2: Review blood pressure. The patient starts her HomeClient application, and requests to review her measured blood pressures. She choose to review measurements for the last week, and is presented with a graph showing how systolic and diastolic blood pressure have evolved during the last week. This entails a small but obviously distributed system: A tele medicine server on one hand that processes, stores, and supports queries on blood pressure measurements for a large set of patients; and a (large) set of clients, each allowing a single patient to measure and upload his or her blood pressure.

1.3 A Role Based Design I initially identify three main roles in our TeleMed system: We need an object to represent the measured blood pressure, we need an object that is responsible for sending (and querying about) the patient’s blood pressure, and finally, we need a (remote) object that is responsible for processing, storing, and analysis of all patients’ measurements. I can express these three roles in the UML diagram in the figure below.

2nd Edition

3

TeleMed Case

TeleMed class diagram.

The TeleObservation represents an observation of the patient — in our case study it is a blood pressure which again is represented by two quantities: systolic and diastolic blood pressure. A quantity1 is a measurement of some value associated with a unit. The measuring unit is essential: 10 does not tell us much, until we know if it is 10 kilometers or 10 grams. In our case, the value is 126 and the unit mmHg for the systolic quantity. Creating a tele observation is exemplified by the following instantiation: 1 2

TeleObservation teleObs = new TeleObservation("251248-1234", 126.0, 70.0);

This code creates an observation for a specific patient identified by “2512481234” (Danish social security number format, which is six digits indicating birth date (ddmmyy) followed by a serial number), and the quantities for systolic and diastolic blood pressure (126 mmHg, 70 mmHg). The HomeClient role is responsible for creating tele observation objects and sending them to the the TeleMed server; as well as retrieving sets of tele observations from the TeleMed server and present them for the patient in one way or another. Finally, the central server side role is TeleMed that is responsible for the safe storage of measurements for a large set of patients, and respond to uploads and queries by the HomeClients. We can express the responsibilities of the role TeleMed by the following interface: 1 2 3 4 5 6

/** * The central role in the TeleMed medical system, the application * server that supports storing for all the tele observations made * by all patients as well as queries */ public interface TeleMed {

7 8 9

/** * Process a tele observation and store it in an electronic 1 Martin Fowler, “Analysis Patterns”, Addison-Wesley, 1997.

2nd Edition

4

TeleMed Case

* patient record database. * * @param teleObs * the tele observation to process and store * @return the id of the stored observation */ String processAndStore(TeleObservation teleObs);

10 11 12 13 14 15 16 17

/** * Retrieve all observations for the given time interval for * the given patient. * * @param patientId the ID of the patient to retrieve * observations for * @param interval define the time interval that * measurements are wanted for * @return list of all observations */ List getObservationsFor(String patientId, TimeInterval interval);

18 19 20 21 22 23 24 25 26 27 28 29 30

}

The main challenge is that the TeleObservation object is obviously created by the HomeClient but the TeleMed object with its processAndStore method is a remote object on some distant server. That is, the method call 1

remoteTeleMed.processAndStore(teleObs);

will entail the method processAndStore being called on some object that represent an instance of interface TeleMed on a remote server.

1.4 A Server side Role based Design The TeleMed application server is responsible for communication with the home clients, and for storing the measurements in some reliable database system. The former responsibility is the main focus of the next chapters on distribution. The latter responsibility, storage, is not a central issue in the context of this book, however as the associated code base implements it, I will shortly outline the design below. Persisting data in a database requires decisions to be made in a number of areas. As our domain is clinical systems, and I have worked for some years 2nd Edition

5

TeleMed Case

in this domain, I have chosen to mimic2 two clinical standards in widespread use in hospital systems. The first is IHE Cross Enterprise Document Sharing Profile or XDS for short. XDS defines a database system for sharing large amounts of clinical data in a decentralized way. The central role is the Document Registry which acts like a searchable index of all documents stored in XDS. Think it like a Google search engine — it does not store the documents itself, it is just a database that indexes all documents and knows which actual database they are stored in. The other role is Document Repository which is the actual database in which documents are stored. Again, think of it as the actual servers where web pages are stored. A registry typically index a lot of repositories: this way each hospital or clinic can have its own repository of documents (for instance, an X-Ray clinic can store images for a lot of patients in its repository), while there is only one central registry for, say, a small country or a state. This way a clinician can search for all x-ray images for a given patient at any hospital, and retrieve them from the respective repository. Our simplified XDS interface, XDSBackend, in the TeleMed system looks like this (some methods skipped): 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19

public interface XDSBackend { /** * Store observation in XMLformat (HL7) in a XDS repository and * ensure that the metadata for it is stored in the registry. * * @param metaData * the meta data to store in registry to allow queries * to be made * * @param observationAsHL7 * the clinical document in HL7 format that * is to be stored in the repository * * @return uniqueId a unique id generated for the document, allows * operations to be applied to that particular document * in the repository, see correctDocument */ public String provideAndRegisterDocument(MetaData metaData, Document observationAsHL7);

20 21 22 23

/** * Query the XDS for all documents whose metadata fulfill criteria: * A) the id of the person equals personID B) the time interval 2 By “mimic” I mean that my design and implementation bears some resemblance to the actual standard, but much detail has been abstracted away: Both HL7 and XDS are large and complex standards.

2nd Edition

6

TeleMed Case

* [start;end] * * @param personID * id of the person searched for * @param start * begin of time interval * @param end * end of time interval * @return list of all documents that fulfil criteria */ public List retriveDocumentSet(String personID, LocalDateTime start, LocalDateTime end);

24 25 26 27 28 29 30 31 32 33 34 35 36

}

When storing a document, you first have to generate meta data for it. In our case, this is the identity of the patient and the time for the measurement. These meta data are stored in the registry along with a reference to the actual document, so a clinician can formulate queries for specific patients and time intervals without the registry having to contact any repositories. XDS does not care about the contents of the documents it stores—it does not specify any particular document format. Therefore we need to agree on some format to use for storing, say, blood pressure measurements. The Health Level 7 (HL7) standard3 is such a standard. It is basically an XML format though a rather complex one. It consists of a header section, which describes the patient and time, and then a data section, where actual data is stored— in our case blood pressure measurements. HL7 is a very verbose XML format, so I have taken the liberty to remove most of the required sections of proper HL7 for the sake of simplicity. Below is an example of our truncated HL7. 1 2 3 4 5 6 7 8 9 10 11 12 13









3 Keith W. Boone, “The CDA Book”, Springer, 2011.

2nd Edition

7

TeleMed Case 14 15 16 17



You can see that the first sections are the time stamp of the document and the identity of the patient. The “component” section then contains the observed quantities (126 mmHg, 70 mmHg). Now, which is which? Well any clinical quantity needs to tell what clinical measurement it is, and code systems are such classifications. In Denmark, there is a code system that has assigned the specific code “MSC88019” to a measurement of systolic blood pressure. Thus a computer will know exactly what clinical aspect the quantity represents.

Method processAndStore The TeleMed server’s central responsibility is to store incoming measurements in the database. Using the two standards mentioned above, XDS and HL7, this entails first constructing the HL7 XML document and the metadata for it to store in the XDS registry, and next tell XDS to store the document. The actual code in the server, that does this, is shown below: 1 2 3 4 5 6

public String processAndStore(TeleObservation teleObs) { // Generate the XML document representing the // observation in HL7 (HealthLevel7) format. HL7Builder builder = new HL7Builder(); Director.construct(teleObs, builder); Document hl7Document = builder.getResult();

7

// Generate the metadata for the observation MetadataBuilder metaDataBuilder = new MetadataBuilder(); Director.construct(teleObs, metaDataBuilder); MetaData metadata = metaDataBuilder.getResult();

8 9 10 11 12

// Finally store the document in the XDS storage system String uniqueId = null; uniqueId = xds.provideAndRegisterDocument(metadata, hl7Document);

13 14 15 16

return uniqueId;

17 18

}

Note that the Builder4 pattern is a good choice for generating meta data and XML document, and is thus used here. 4 Henrik B. Christensen, “Flexible, Reliable Software - Using Patterns and Agile Development”, CRC Press, 2010.

2nd Edition

8

TeleMed Case

Review the use of Builder in the code and argue why it is proper choice.

1.5 Related Work The TeleMed case is inspired by work in the Net4Care project, see (Christensen, 2012)5 . 5 Henrik B. Christensen and Klaus M. Hansen, “Net4care: Towards a mission-critical software ecosystem”, In Proceedings of WICSA, 2012.

2nd Edition

2. Basic Concepts Distributed computing is the field of computer science that studies distributed systems. Distributed System A distributed system is one in which components located at networked computers communicate and coordinate their actions only by passing messages1 . Distributed systems have always been important in computing, but the World Wide Web, pioneered by Tim Berners-Lee in the early 1990’s2 , made such system widely available and accessible to a large audience. Traditionally, distributed systems have been designed with two purposes in mind: • To increase the computational power, that is, to allow complex calculations to be performed faster by utilizing more than a single computer. • To share large amounts of data, that is, by storing data on a single central computer, a large number of other computers and thus users can access and modify common information. It is fair to say today that the latter aspect is by far the most important, which the world wide web is a striking example of. The ability to share pictures and thoughts with friends on social media, the ability to handle orders and inventories globally for a company, the ability to book flights, hotel rooms, car rental, etc., all over the world are all examples of how shared data allows fast communication and coordination across people and continents. Distributed systems can be organized in many ways but I will restrict myself to discuss client-server architectures. Client-server architecture Two components need to communicate, and they are independent of each other, even running in different processes or being distributed in different machines. The two components are not equal peers communicating with each other, but one of them is initiating the communication, asking for a service that the other provides. Furthermore, multiple components might request the same service provided by a single component. 
Thus, the 1 George Coulouris, Jean Dollimore, Tim Kindberg and Gordon Blair, “Distributed Systems – Concepts and Design, Fifth Edition”, Pearson Education Limited, 2012. 2 Tim Berners-Lee, “WWW: Past, present, and future.”, Computer 29, Oct 1996.

10

Basic Concepts

component providing a service must be able to cope with numerous requests at any time, i.e. the component must scale well. On the other hand, the requesting components using one and the same service might deal differently with the results. This asymmetry between the components should be reflected in the architecture for the optimization of quality attributes such as performance, shared use of resources, and memory consumption. The CLIENT-SERVER pattern distinguishes two kinds of components: clients and servers. The client requests information or services from a server. To do so it needs to know how to access the server, that is, it requires an ID or an address of the server and of course the server’s interface. The server responds to the requests of the client, and processes each client request on its own. It does not know about the ID or address of the client before the interaction takes place. Clients are optimized for their application task, whereas servers are optimized for serving multiple clients3 . In this part of the book, we will discuss the issues involved in developing distributed systems. Distributed computing is a large and complex subject as is evident from the discussion in the next section. I therefore have to limit myself to discussing core aspects that relate to those of the main themes of the book: the responsibility-centric perspective on designing objectoriented software, patterns for distributed computing, and how to develop flexible and reliable systems.

2.1 The Issues Involved If you compare the definition of distributed systems with that of objectoriented programming by Budd (2002), they are actually strikingly similar: Object-orientation (Responsibility): An object-oriented program is structured as a community of interacting agents called objects. Each object has a role to play. Each object provides a service or performs an action that is used by other members of the community4 . If you replace the word “object” in Budd’s definition with “components located at networked computers” they are nearly identical. Indeed, a web server is just an object that provides a service, namely to return web pages when your browser requests so. Essentially, your web browser is invoking 3 Paris Avgeriou and Uwe Zdun, “Architectural patterns revisited - a pattern language”, In 10th European

Conference on Pattern Languages of Programs (EuroPlop), Irsee, 2005. 4 Timothy Budd, “An Introduction to Object-Oriented Programming”, Addison-Wesley, 2002.

2nd Edition

11

Basic Concepts

the method of the web server object located at the computer given by the host name part of the URL. However, there are some fundamental differences between invoking methods on objects in a standalone program, and invoking methods on remote objects. The first problem in the programming model is that networked communication is implemented at a low level of abstraction, basically the only thing you can, is to send an array of bytes to a remote computer and receive an array of bytes from the network: 1 2

void send(Object serverAddress, byte[] message); byte[] receive();

Moreover, send() is an asynchronous function, which just puts the byte array into a buffer for the network to transmit and then returns immediately without waiting for a reply. This is of course far from a high level object-oriented method call. Much of the following discussions, and the chapter on the Broker pattern in particular, deal with solutions to this problem. Even when we do manage to bridge the gap between invoking methods on remote objects on one hand, and the low-level send/receive support on the other, there are still numerous issues to deal with. • Remote objects do not share address space. Thus, references to local objects on my machine passed to a remote object on a remote machine are meaningless. • Networks only transport bits and bytes with no understanding of classes, interfaces, or types which are fundamental structuring mechanisms in modern programming languages. In essence, a network is similar to a file system: just as a program’s internal state needs to be converted to allow writing and reading from a disk—the internal state must be converted to allow sending it to a remote object. • Remote objects may execute on different computing architectures— different CPUs and instruction sets. The problem is that different processors layout bits differently for for instance integers. • Networks are slow. Invoking a method on a remote object is between 10 to 250 times slower than if the object is within the same Java virtual machine. You can find some numbers at the end of the chapter. • Networks fail. This fundamental property means methods called on remote objects may never return. • Remote computers may fail, thus the remote object we need to communicate with may simply not be executing. • Networks are unsafe. Data transferred over networks can be picked up by computers and people that we are not aware of, that we can not trust. 2nd Edition

12

Basic Concepts

• Remote objects execute concurrently. Thus, we face all the troubles of concurrent programming at the very same moment we program for distributed systems. • If too many clients invoke methods on a server, it will become overloaded and respond slowly or even crash - typically because memory is exhausted. These issues are complex, and several of them are architectural in nature, dealing with quality attributes like performance, availability, and security5 . Treating each aspect in great detail is well beyond the scope of this book. In this book, I will present architectural patterns and programming models that handles the fundamental aspects of invoking methods on remote objects. The emphasis is on “normal operations”, that is, the context in which networks are stable, and the computers that host the remote objects are online and working correctly. This is of course the fundamental level to master before venturing into the more complex architectural issues of high performance and high availability distributed computing. At the end of the chapter I will point towards central books that I have found helpful in handling the “failed operations” scenario, in which there are issues with slow network, broken servers, etc.

2.2 Elements of a Solution

Returning to our case study, TeleMed, we can state the challenges faced more concretely. On the one hand, we would like the source code that handles TeleMed story 1 to look something like this:

```java
public void makeMeasurement() {
  TeleObservation teleObs;
  teleObs = bloodPressureMeterHardware.measure();
  TeleMed server = new RemoteTeleMedOnServer(...);
  String teleObsId = server.processAndStore(teleObs);
}
```

On the other hand, we only have operating system functions to send and receive byte arrays over a network. Bridging this gap requires us to consider (at least) four aspects.

1. How to make two distributed objects, the one on the client side and the one on the server side, simulate a synchronous method call when they only have send/receive at their disposal?

5 Len Bass, Paul Clements, and Rick Kazman, "Software Architecture in Practice, 3rd ed.", Addison-Wesley, 2012.


2. How to convert structured objects, like the TeleObservation, into byte arrays suitable for transport on a network and back again?
3. How to keep, as best possible, our object-oriented programming model using method calls on objects?
4. How to locate the remote object to call?

These four challenges are answered by four programming techniques that I will sketch below: the request-reply protocol, the marshalling technique, the Proxy pattern, and name services, which together form the backbone of the Broker pattern.

Request-Reply

A normal method call at run-time in an object-oriented program is a so-called synchronous method call: the caller waits (blocks) until the object has computed an answer and returns it using the return statement. In contrast, a call to send() returns immediately once the operating system has sent the message on the network, which is not the same as the server having received the message, processed it, and computed an answer. To simulate synchronous method calls over a network between a client object and a remote server object, a communication pattern called the request-reply protocol has evolved.

Request-Reply Protocol. The request-reply protocol simulates a synchronous call between client and server objects by a pairwise exchange of messages, one forming the request message from client to server, and the second forming the reply message from the server back to the client. The client sends the request message and waits/blocks until the reply message has been received.

In this protocol, the client makes the request by calling send(server, request) and then immediately calls reply = receive(), which blocks until it receives a reply back from the server. The server does the same but in the opposite order: it invokes receive() and waits until some client sends it a message. Once the message is received, it computes an answer and then calls send(client, reply) to send the reply back. The UML sequence diagram in the figure below illustrates this protocol.


Request-reply protocol.

The request-reply protocol is well known from browsing web pages. You enter a URL in the address field (that is, a server address and a path) and hit the 'Go' button; the browser then sends the request and waits until it receives a reply (usually showing some kind of "waiting for…" message or icon), which is the web page contents, after which it renders that page in the browser.
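The protocol can be sketched in a few lines of Java using plain sockets. This is a hypothetical illustration, not TeleMed code: the class and method names (RequestReplyDemo, serveOneRequest, roundTrip) and the echo "computation" are invented for the example.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class RequestReplyDemo {

  // Server side: receive() a request, compute an answer, send() the reply.
  static void serveOneRequest(ServerSocket serverSocket) throws IOException {
    try (Socket s = serverSocket.accept();
         BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
         PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
      String request = in.readLine();   // blocks until a request arrives
      out.println("echo:" + request);   // compute and send the reply
    }
  }

  // Client side: send() the request, then block until the reply arrives.
  static String request(String host, int port, String requestMsg) throws IOException {
    try (Socket s = new Socket(host, port);
         PrintWriter out = new PrintWriter(s.getOutputStream(), true);
         BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
      out.println(requestMsg);          // send the request message
      return in.readLine();             // wait/block for the reply message
    }
  }

  // Run one full request-reply exchange on the loopback interface.
  static String roundTrip(String requestMsg) throws Exception {
    ServerSocket serverSocket = new ServerSocket(0); // OS-assigned free port
    Thread server = new Thread(() -> {
      try { serveOneRequest(serverSocket); } catch (IOException e) { /* demo only */ }
    });
    server.start();
    String reply = request("localhost", serverSocket.getLocalPort(), requestMsg);
    server.join();
    serverSocket.close();
    return reply;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(roundTrip("processAndStore")); // prints echo:processAndStore
  }
}
```

Note how the client's readLine() is what turns the asynchronous send into a synchronous call: the client thread is blocked until the server's reply message arrives.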

Marshalling

Methods take parameters that range from simple data types like an integer to complex object types like a HashMap or a class structure. Moreover, a remote object must be identified, and it must be told the exact identity of the method to call. All this data has to be converted into a format suitable for transmission over a network, which basically can only transport sequences of bytes. This process is called serialization or marshalling.

Marshalling is the process of taking a collection of structured data items and assembling them into a byte array suitable for transmission in a network message. Unmarshalling is the process of disassembling a byte array received in a network message to produce the equivalent collection of structured data items.

Obviously, the marshalling and unmarshalling processes have to agree on the format used, the marshalling format. In the days before the world wide web changed everything, binary formats were often used, as they produce small packages on the network and are thus faster to transmit. In domains where speed and latency are very important, like online gaming, binary formats are still preferred. However, binary has its disadvantages. First, CPUs may vary in their encoding of bits into e.g. integers, which requires low-level conversions.

Exercise: Find resources on little-endian and big-endian byte ordering.
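To see the byte-ordering problem concretely, Java's standard ByteBuffer can lay out the same integer in both orders. The class and method names below are invented for this illustration.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

public class EndianDemo {
  // Encode an int as four bytes in the given byte order.
  static byte[] encode(int value, ByteOrder order) {
    return ByteBuffer.allocate(4).order(order).putInt(value).array();
  }

  public static void main(String[] args) {
    // The same integer value 1, laid out differently by the two conventions:
    System.out.println(Arrays.toString(encode(1, ByteOrder.BIG_ENDIAN)));    // [0, 0, 0, 1]
    System.out.println(Arrays.toString(encode(1, ByteOrder.LITTLE_ENDIAN))); // [1, 0, 0, 0]
  }
}
```

A binary marshalling format must therefore fix one byte order (a "network byte order") so that a big-endian sender and a little-endian receiver agree on what the four bytes mean.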

Second, binary formats have the downside of not being easy for humans to read, and they are less easy to adapt.

Extensible Markup Language (XML) became a widely used markup language to define open standard formats during the late 1990s. It is defined by the World Wide Web Consortium (W3C), which is a main international standards organization. XML is both machine-readable and (relatively) human-readable. XML allows data values to be expressed in XML documents as elements, using tags that are markup constructs that begin with the character < and end with >. Elements are surrounded by start-tags and end-tags. As an example, consider a TeleObservation object with the values

```
teleObservation:
  patientId: "251248-1234"
  systolic: 128.0
  diastolic: 76.0
```

Such an object could be represented in XML as

```xml
<teleObservation>
  <patientId>251248-1234</patientId>
  <systolic>128.0</systolic>
  <diastolic>76.0</diastolic>
</teleObservation>
```

XML has a lot of advantages, mainly due to the fact that it is a well-known format, standardized, and supported by a large number of tools and software libraries. One of its main disadvantages is that it is verbose, which makes it harder to read and puts a lot of requirements on the hardware in terms of bandwidth to carry the large amounts of text.

JavaScript Object Notation (JSON) is an alternative whose main advantage is that it is much more compact. It is a human-readable open standard format for representing objects consisting of key-value pairs. It includes definitions of data types, like strings, numbers, booleans, and arrays, and is thus closer to a description of objects than XML. As an example, the above TeleObservation object may look like

```
{
  patientId: "251248-1234",
  systolic: 128.0,
  diastolic: 76.0
}
```

The examples in XML and JSON above represent a tele observation object, but I also need to encode the method to call; for instance, the TeleMed interface has two methods, processAndStore and getObservationsFor. Thus, one plausible marshalling in JSON of an invocation of processAndStore would be:

```
{
  methodName : "processAndStore_method",
  parameters : [
    {
      patientId: "251248-1234",
      systolic: 128.0,
      diastolic: 76.0
    }
  ]
}
```

Note that marshalling formats are fine for encoding simple data types like strings and numbers, as well as record types (that is, classes that only have simple data types as instance members), but they cannot easily represent references to other objects. I will return to this issue in the chapter Broker Part Two. Marshalling is a universal issue in distributed computing and is well supported by numerous open source libraries; see the sidebar.

JSON Libraries

Marshalling and demarshalling JSON is supported by numerous libraries. Here I will demonstrate how Google Gson easily marshalls an object into JSON and back again. The code looks like

```java
public static void main(String[] args) {
  Gson gson = new Gson();
  TeleObs a = new TeleObs("251248-1234", 128, 76);
  String aAsJson = gson.toJson(a);

  System.out.println("Original: " + a);
  System.out.println("As JSON: " + aAsJson);

  TeleObs b = gson.fromJson(aAsJson, TeleObs.class);
  System.out.println("Demarshalled: " + b);
}
```

This code produces

```
Original: [id = 251248-1234: (128.0, 76.0)]
As JSON: {"patientId":"251248-1234","systolic":128.0,"diastolic":76.0}
Demarshalled: [id = 251248-1234: (128.0, 76.0)]
```

Note how the demarshalling method requires a type argument; thereby Gson can itself create an object of the indicated type.

Proxy

With the request-reply protocol, we can convert a synchronous method call into a pair of calls: send data, and wait until the reply is received. However, this is obviously tedious to do at every place in the client code where we want to invoke methods on the remote object. Also, we create a hard coupling from our domain code to the network implementation, which lowers maintainability and testability. Luckily, we already have a design pattern that solves this problem, namely the Proxy pattern (Chapter 25 in Flexible, Reliable Software). Remember that a Proxy object implements the same interface as the RealSubject but is a placeholder that controls access to it. In our networked environment we would thus create a proxy on the client side, in which all methods are coded using the request-reply protocol to make the computation occur on the real object on the server. In pseudo code for our TeleMed system, the TeleMedProxy's processAndStore method may look like this:

```java
public class TeleMedProxy implements TeleMed, ClientProxy {

  public String processAndStore(TeleObservation teleObs) {
    byte[] requestMessage = marshall(teleObs);
    send(server, requestMessage);
    byte[] replyMessage = receive(); // Blocks until answer received
    String id = demarshall(replyMessage);
    return id;
  }
}
```

We achieve two important properties by using the Proxy. First, our domain code only interacts with TeleMed using its interface, so there is no hard coupling to the network and distribution layer. Second, it supports dependency injection; we can for instance wrap the proxy in a Decorator which caches values locally, etc. Note that every method in every proxy will follow a similar template: marshall, send, receive, demarshall. It can therefore be abstracted into a new role, the Requestor, which I will introduce in the Broker pattern in the next chapter.
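To preview the idea, the per-method boilerplate can be moved into a single generic request() method that every proxy delegates to. The sketch below is hypothetical, not the book's Broker library: the '|' wire format, the RequestHandler stand-in for the IPC layer, and all names are invented for illustration.

```java
import java.nio.charset.StandardCharsets;

public class RequestorSketch {

  // Stands in for the IPC layer: send a request message, await the reply.
  interface RequestHandler {
    byte[] sendAndReceive(byte[] requestMessage);
  }

  // The proxy boilerplate (marshall, send, receive, demarshall) in one place.
  static class Requestor {
    private final RequestHandler handler;
    Requestor(RequestHandler handler) { this.handler = handler; }

    String request(String objectId, String operationName, String... arguments) {
      // Naive marshalling: join the fields with '|' and ','.
      String payload = objectId + "|" + operationName + "|" + String.join(",", arguments);
      byte[] reply = handler.sendAndReceive(payload.getBytes(StandardCharsets.UTF_8));
      return new String(reply, StandardCharsets.UTF_8);
    }
  }

  // Every proxy method now collapses to a single Requestor call.
  static class TeleMedProxy {
    private final Requestor requestor;
    TeleMedProxy(Requestor requestor) { this.requestor = requestor; }

    String processAndStore(String patientId, double systolic, double diastolic) {
      return requestor.request(patientId, "processAndStore_method",
          String.valueOf(systolic), String.valueOf(diastolic));
    }
  }

  public static void main(String[] args) {
    // A fake request handler that just tags and echoes the payload back.
    RequestHandler fake = msg ->
        ("stored:" + new String(msg, StandardCharsets.UTF_8)).getBytes(StandardCharsets.UTF_8);
    TeleMedProxy proxy = new TeleMedProxy(new Requestor(fake));
    System.out.println(proxy.processAndStore("251248-1234", 128.0, 76.0));
  }
}
```

Because the proxy only depends on the Requestor, and the Requestor only on the RequestHandler interface, each layer can be swapped independently, which is exactly the loose coupling the Broker pattern formalizes.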

Name Services

Finally, we need to refer uniquely to the remote object that we want to call a method on. In ordinary Java code, you refer to an object through its object reference:

```java
TeleObservation teleObs =
  new TeleObservation("251248-1234", 126.0, 70.0);
String s = teleObs.toString();
```

Here teleObs refers to the object and is actually a pointer to the part of the computer memory that holds the object's data: it is a memory address. Having this object reference allows us to invoke methods on the object, like the teleObs.toString() method invocation above. However, if the object is located on some remote computer, not in our own computer's memory, things are more complicated. First, we have to know which computer hosts the object, and next, we need some way to identify exactly which object on the remote computer we need to invoke a method on. So basically a remote object reference needs to encode two pieces of information: the identity of the remote computer, and the identity of the object on the remote computer.

The solution requires two pieces: first, we have to define a naming scheme, and second, we need name services6. The naming scheme is a standardized way to identify/name remote references, and the name service is a dictionary/yellow pages/directory that allows us to translate from a name/identity of the remote object to the actual computer that hosts the object, as well as the actual reference to it on the remote computer.

6 George Coulouris, Jean Dollimore, Tim Kindberg and Gordon Blair, "Distributed Systems – Concepts and Design, Fifth Edition", Pearson Education Limited, 2012.

The naming scheme must ensure that each name/identity of a remote object is unique. One scheme is a string with a hierarchical template, like file paths: "www.baerbak.com/telemed/251248-1234/2019-26-04-08-57-22", which follows a template like "computername/system/person/timestamp". The benefit is that a human can actually read the string and get an impression of what object we are talking about. You will note that URLs follow this naming scheme. Another approach is simply to machine-generate unique identities, like a universally unique identifier (UUID).

The role of a name service is then to store and look up names/identities of remote objects and translate them into their real (computer, reference) counterparts. This can be implemented in many ways. At its simplest level, a name service is simply a Java Map data structure: once a server receives a remote object name, it can fetch the relevant object reference:

```java
// Declaration of the name service
Map<String, TeleObs> nameService = ...

// Server has received a remote object name from client
TeleObs teleObs = nameService.get(remoteObjectName);
String s = teleObs.toString();
```

Of course, this simple example assumes that the client already knows which remote computer the object is located on. This is actually a quite feasible case, as I will discuss below. In the more general case, name services are standalone services/processes. This is similar to DNS servers on the internet, which allow symbolic names, like www.baerbak.com7, to be looked up to get their actual physical IP address, like 87.238.248.136. The great advantage of a registry is that if the location changes (say, my web server moves to a new IP address), you just change the entry in the registry and everybody can still find it. Java RMI and similar remote method invocation frameworks provide a separately running service, the registry, where you can bind a name/identity to a remote object reference. Frameworks based on HTTP can be said to have solved the lookup problem by reusing the DNS lookup servers: they simply know the name of the server hosting the object from the server's URL. Thus, instead of such libraries having their own built-in registry, they use hard-coded server names and let the DNS system handle the problem of finding the correct server to call.
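A standalone name service can be sketched as a registry that binds a unique name to the two pieces of information a remote reference must encode: the host computer and the object identity on that host. The sketch below is hypothetical; the class and method names are invented for illustration and are not an API from the book.

```java
import java.util.HashMap;
import java.util.Map;

public class NameServiceSketch {

  // The (computer, reference) pair a remote object name resolves to.
  static class Location {
    final String hostname;
    final String objectId;
    Location(String hostname, String objectId) {
      this.hostname = hostname;
      this.objectId = objectId;
    }
  }

  private final Map<String, Location> registry = new HashMap<>();

  // Bind a unique remote object name to its current location.
  void bind(String name, String hostname, String objectId) {
    registry.put(name, new Location(hostname, objectId));
  }

  // Resolve a name into the location of the real object.
  Location resolve(String name) {
    return registry.get(name);
  }

  public static void main(String[] args) {
    NameServiceSketch ns = new NameServiceSketch();
    ns.bind("telemed/251248-1234", "www.baerbak.com", "obj-42");
    // If the hosting server moves, only the registry entry changes;
    // clients keep resolving the same hierarchical name.
    ns.bind("telemed/251248-1234", "backup.example.com", "obj-42");
    System.out.println(ns.resolve("telemed/251248-1234").hostname);
  }
}
```

The rebinding in main illustrates the registry advantage described above: the name stays stable while the physical location changes.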

2.3 Tying Things Together

The Broker pattern discussed in the next chapter can be said to combine the elements above into a cohesive whole that provides a strong programming model for building client-server architectures.

7 www.baerbak.com


It uses the Proxy pattern on the client side to 'mimic' the normal interface-oriented programming model, so client objects communicate with the remote object in an almost normal way. The methods of the client side proxy all follow a template that uses marshalling to encode parameters, object id, and method name into a byte array; uses the low-level send()/receive() methods to implement a request-reply protocol call to a remote server; and then demarshalls the returned byte array into a valid return value. Similarly, the server side uses demarshalling and marshalling to decode the received byte array into information on which object's method to call with which parameters; makes the call (often called the "up call") to the real implementation of the object; and finally marshalls the return value into a byte array to be returned to the client.

2.4 Summary of Key Concepts

Distributed systems are computer systems that provide services to their users by having multiple computers communicate and coordinate by exchanging messages over a network. One of the main advantages of distributed systems is the ability to access large amounts of shared data. A common way to structure distributed systems in many domains is the client-server architecture in which a large number of clients send requests to a central server, which processes each request and returns an answer to the client. Typically, the server maintains a large data set that clients create, read, and update.

Such distributed systems resemble object-oriented systems in that client objects invoke methods on server objects, and thus supporting remote method calls is a natural way to express message passing in distributed systems. However, remote method calls are fundamentally different from ordinary method calls in a single program. Key differences are

• Networks can only send and receive byte arrays
• Client and server objects are executing concurrently
• Network send() is asynchronous, thus the client will not await the answer being returned from the server
• Object references are meaningless across computer boundaries
• Network communication is slower, may fail, and transmissions can be intercepted by unwanted processes

To mitigate these differences, we can apply methods and practices

• The Request-Reply protocol defines a template algorithm in which a client sends a request to the server and then waits/blocks until an answer has been returned, thereby mimicking normal OO method calls.
• Marshalling is a technique to translate normal programming types, like strings, integers, and booleans, into byte arrays and back. Thereby we can convert back and forth between the OO programming level and the network level.
• Proxy is a pattern that provides a surrogate for another object. We can use this pattern to implement a surrogate on the client side for the server object, having the same interface, but whose implementation uses request-reply and marshalling to communicate in an OO method call manner with the real object on the server.
• Name service is the role of a storage that can bind object names or identities to the real objects. Thereby we can transmit object identities instead of object references between clients and servers, and use the name service to resolve which actual object to communicate with.

The aspects of failure handling, security, and performance are outside the scope of this book. Distributed Systems – Concepts and Design by Coulouris et al.8 is a comprehensive treatment of distributed systems.

2.5 Review Questions

Explain what a distributed system is. Explain what a client-server architecture is, and what clients and servers can and cannot do. What are the challenges of implementing distributed systems? How do the techniques of request-reply, marshalling, proxy, and name service address these challenges? Explain each of the four techniques in detail.

Performance of Network Calls

Network calls are slower than ordinary method calls, but the slowdown is highly affected by how geographically separated the client and server are. Below I give some numbers that I have measured. I used the TeleMed system and, for each experiment, did five runs of 10,000 uploads of a single blood pressure measurement. The time measured below is in milliseconds for 10,000 uploads. The table outlines a configuration, the average run time for the five runs of 10,000 uploads, the maximum time for any of the five runs, and finally the average slowdown factor, using the local call as baseline. Thus, remote calls to another machine on the same network switch were a factor 11.0 slower than doing in-memory method calls.

8 George Coulouris, Jean Dollimore, Tim Kindberg and Gordon Blair, “Distributed Systems – Concepts and Design, Fifth Edition”, Pearson Education Limited, 2012.


```
Configuration   Average time (ms)   Max time (ms)   Factor
Local call                  1,796           3,366      1.0
Localhost                   9,731          12,806      5.4
Docker                     17,091          35,873      9.5
On switch                  19,690          22,025     11.0
Frankfurt                 494,966         513,411    275.6
```

I ran the server in a virtual machine (4 GB memory, 2 processors, Lubuntu Linux) on a small business server (4-core 3.1 GHz Xeon E3) running VMWare ESXi 6.5. The Frankfurt case ran the server in a Docker container. The configurations were:

• Local call: The TeleMed client made local method calls to "the server object" within the same Java program.
• Localhost: The TeleMed client made remote socket calls to a TeleMed server hosted on the same machine.
• Docker: The TeleMed server ran in a Docker container on the same machine as the TeleMed client.
• On switch: The TeleMed client ran on a separate machine, connected to the same network switch as the TeleMed server.
• Frankfurt: The TeleMed server ran on a DigitalOcean virtual machine located in Frankfurt (Germany), while the TeleMed client was located in Aarhus (Denmark).

As is evident, network calls are slower, but as long as your calls are between machines in close proximity, the slowdown is not overwhelming. However, once you get to production environments the slowdown is much higher: a factor of 275 compared to local calls, where a server on the same network switch was only a factor of 11.

Disclaimer: These values are from a small experiment, and there are different characteristics of the hardware of the machines that ran clients and server, so take the values with a grain of salt…


3. Broker Part One

3.1 Learning Objectives

The learning objective of this chapter is the Broker pattern, an architectural pattern that allows methods to be called on remote objects while hiding much of the complexity involved in the fact that these objects are located on different computers connected by a network.

3.2 The Problem

Distributed systems consist of objects that are located on different computers. In the client-server architectural style, distribution is organized with shared objects stored in a single, centralized server that a large number of clients query and update. We want to program such client-server architectures in a programming model that is object-oriented, that is, one that supports clients calling methods on remote objects. Remote method calls shall behave as closely as possible to normal method calls: they are synchronous, they take parameters, and so forth, without the need to use low-level distributed programming calls like the send() and receive() methods.


3.3 The Broker Pattern

Broker role structure.

The Broker pattern solves the mismatch between the abstraction we want (method calls) and the abstraction we are given (raw byte array message sending and receiving) by defining a chain of six roles, three on the client side and three on the server side, as shown in the figure above. At execution time these roles are connected by a protocol in which a method call is made on a ClientProxy on the client side, which acts as a placeholder object for the real object, the Servant, on the server side. The protocol dictates that the method call is marshalled first and next sent to the server, as outlined in the figure below.


Broker client side dynamics.

The Requestor's responsibility is to do the marshalling and execute the send/receive calls, essentially implementing the client side of the request-reply protocol. If you review the Proxy section in the previous chapter, you will note that in this presentation the send/receive calls are in the Requestor, not in the ClientProxy as outlined there. This is because the algorithm can be expressed in general terms, and we thus avoid quite a lot of code duplication.

Its central method is request(location, objectId, operationName, arguments), which tells the Requestor which computer to send messages to (location), which servant object to call (objectId), which method to call (operationName), and finally the list of all parameters (arguments). Upon reception at the server side, the matching transformation occurs, as shown in the figure below.


Broker server side dynamics.

The incoming message is received by the ServerRequestHandler and then forwarded to the Invoker (method handleRequest), which handles the demarshalling to produce the identity of the Servant object, the method to call, and the original parameters, and then invokes the method. Finally, a similar process is employed in order to return the computed answer from the Servant, the return value of the method, back to the client side and the calling object.

Essentially, the six roles are divided into three layers. The upper layer, consisting of the Role, ClientProxy and Servant, forms the domain layer in which the abstraction level is that of the domain: the Role is an interface representing some kind of role that embodies a domain concept which resides on the server but is accessed by the clients, just like TeleMed in our case study. The next layer, the marshalling layer, consisting of the Requestor and Invoker, deals with marshalling, unmarshalling, and dispatching. Finally, the bottom layer is the inter process communication (IPC) layer, in which the ClientRequestHandler and ServerRequestHandler are bound to the operating system and the chosen inter process communication technology, responsible for sending and receiving messages on the network.

The six roles are also "mirrored" in the sense that the client side has the three roles ClientProxy, Requestor, and ClientRequestHandler that are mirrored in the server side's corresponding Servant, Invoker, and ServerRequestHandler.

The responsibilities of the roles involved in the Broker are:

ClientProxy
• Proxy for the remote servant object; implements the same interface as the servant.
• Translates every method invocation into invocations of the associated Requestor's request() method.

Requestor
• Performs marshalling of object identity, method name and arguments into a byte array.
• Invokes the ClientRequestHandler's send() method.
• Demarshalls the returned byte array into return value(s).
• Creates client side exceptions in case of failures detected at the server side or during network transmission.

ClientRequestHandler
• Performs all inter process communication send/receive of data on behalf of the client side, interacting with the server side's ServerRequestHandler.

ServerRequestHandler
• Performs all inter process communication on behalf of the server side, interacting with the client side's ClientRequestHandler.
• Contains the event loop thread that awaits incoming requests from the network.
• Upon receiving a message, calls the Invoker's handleRequest method with the received byte array, and sends the reply back to the client side.

Invoker
• Performs demarshalling of the incoming byte array.
• Determines servant object, method, and arguments, and calls the given method in the identified Servant object.
• Performs marshalling of the return value from the Servant object into a reply byte array, or, in case of server side exceptions or other failure conditions, returns error replies allowing the Requestor to throw appropriate exceptions.

Servant
• Domain object with the domain implementation on the server side.


3.4 Analysis

Invoking methods on remote objects is a technique that is well supported by many libraries and frameworks; indeed, it is directly supported by Java Remote Method Invocation (RMI). So why spend much time on describing the pattern here? The answer is twofold.

First, the pattern is the inner workings of any such framework, and a deep knowledge of this pattern will increase your ability to use such frameworks correctly and efficiently. For instance, REST frameworks, which we will discuss in more detail later, provide an implementation of the IPC layer (using HTTP) and a partial implementation of the marshalling layer (method and object names and parameters are encoded as URIs), and it is thus relatively easy to relate them to the Broker.

Second, the Broker pattern is all about loose coupling between the three levels of responsibility: let someone else do the dirty job of inter process communication, and let someone else do the job of marshalling. This separation of concerns allows us as software architects to choose the right delegates for the task at hand. If I need high performance and horizontal scalability (that is, the server side is not just one server but perhaps 100 servers, to cope with very high demands), I will probably pick a message queue system or a web framework with a load balancer in front as the basis for my implementation of the ClientRequestHandler and ServerRequestHandler pair. In a gaming domain, I would pick a binary format for requests and replies and develop the Requestor and Invoker roles accordingly. For doing TDD or testing, I can use Fake Object IPC implementations and do comprehensive testing without spawning a server; actually, I will do just that in the next chapter. And all the while, my ClientProxy and Servant role implementations would stay unaffected by the underlying implementations.
This is obviously also beneficial in the context of agile development of a growing system that may start out simple and easy, but will allow more complex and higher-performing variants to be substituted in at later stages in the product's life cycle. These benefits are not available with a standard implementation like Java RMI.
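To make the Fake Object IPC idea concrete, here is a hypothetical, stripped-down wiring of the six roles in which the ClientRequestHandler simply calls the server side Invoker directly, so the whole chain can be exercised without any network. The domain interface (GreetingService), the '|' wire format, and all names are invented for this example; they are not the book's Broker library.

```java
import java.nio.charset.StandardCharsets;

public class BrokerFakeIpcDemo {

  interface GreetingService { String greet(String who); }     // the Role

  static class Servant implements GreetingService {           // server-side domain object
    public String greet(String who) { return "Hello " + who; }
  }

  static class Invoker {                                      // demarshal, up-call, marshal
    private final GreetingService servant;
    Invoker(GreetingService servant) { this.servant = servant; }
    byte[] handleRequest(byte[] request) {
      String[] parts = new String(request, StandardCharsets.UTF_8).split("\\|");
      // parts[0] = operationName, parts[1] = argument
      String result = servant.greet(parts[1]);                // the "up call"
      return result.getBytes(StandardCharsets.UTF_8);
    }
  }

  static class FakeClientRequestHandler {                     // IPC layer replaced by a direct call
    private final Invoker invoker;
    FakeClientRequestHandler(Invoker invoker) { this.invoker = invoker; }
    byte[] sendAndReceive(byte[] request) { return invoker.handleRequest(request); }
  }

  static class Requestor {                                    // marshal + request-reply
    private final FakeClientRequestHandler crh;
    Requestor(FakeClientRequestHandler crh) { this.crh = crh; }
    String request(String operationName, String argument) {
      byte[] msg = (operationName + "|" + argument).getBytes(StandardCharsets.UTF_8);
      return new String(crh.sendAndReceive(msg), StandardCharsets.UTF_8);
    }
  }

  static class ClientProxy implements GreetingService {       // same interface as the Servant
    private final Requestor requestor;
    ClientProxy(Requestor requestor) { this.requestor = requestor; }
    public String greet(String who) { return requestor.request("greet", who); }
  }

  // Wire the full chain: proxy -> requestor -> fake IPC -> invoker -> servant.
  public static GreetingService wire() {
    return new ClientProxy(new Requestor(new FakeClientRequestHandler(new Invoker(new Servant()))));
  }

  public static void main(String[] args) {
    System.out.println(wire().greet("World")); // prints Hello World
  }
}
```

Replacing FakeClientRequestHandler with a socket-based pair (and a matching ServerRequestHandler on the other side) would leave the ClientProxy, Requestor, Invoker, and Servant untouched, which is the substitutability argued for above.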

Limitations

The Broker pattern outlined above can be used to design client-server systems similar to modern web-based designs. However, it is not a full broker system as implemented in, for instance, Java RMI. This is due to a couple of limitations.

• The structure is pure client-server, that is, the server can never invoke a method on an object on a client. It is always the client that makes a call.


• The location and identity of the server object must be known to the client, as my description above did not include any name service.
• The arguments to any method call are pass-by-value (see the sidebar Pass-by-Reference and Pass-by-Value below), as they must be marshalled and demarshalled to produce identical results on both client and server. Pass-by-reference cannot be handled, as there is no name service to resolve remote references.

The first issue has the consequence that some designs/protocols cannot be implemented between client and server. A good example of a design our broker cannot implement is the Observer pattern in which the Subject is on the server side while Observers are on the client side(s). This is because the Observer pattern implements call-backs: the client makes a state change to the subject, which then responds by invoking the update() method on all registered observers. This would require the servant object to call methods that are on the client side, which our Broker cannot handle. Essentially, this Observer behavior is not a true client-server architecture but a peer-to-peer architecture, because suddenly the server acts as a client that makes a call to an object hosted on the client, essentially making the client act as a server. The problem can be solved by mirroring the broker pattern roles so both client and server sides have all six roles implemented, but the code and design of course increase in complexity.

Pass-by-Reference and Pass-by-Value

Many programming languages distinguish strongly between pass by reference and pass by value when it comes to providing arguments to function and method calls. The difference is whether the argument is the value itself (like the integer value 42) or a reference to a variable holding the value (like the reference to a variable holding the value 42). The programming languages C and C++ allow both to be expressed:

```c
void fooByValue(int value) { ... }
void fooByRef(int* value) { ... }
```

A call fooByValue(v), given a variable int v = 42;, will pass the value 42 to the C function, and thus 'value' is a new variable whose value is a copy. Thus, if the formal argument 'value' is modified within the function, it will have no effect on the value of 'v' in the calling code; it is just the copy that changes value. In contrast, a call fooByRef(&v) will pass the reference (memory address) of the 'v' variable, and thus if 'value' is changed within the function, it will also change the value of 'v'.

Java provides no such freedom but passes all arguments as values: pass-by-value; but with the complication that for class types, it is the object reference that is passed by value, not the object itself: you get a copy of the object reference. Thus, if you call methods on this object reference, the original object will potentially change state (the copied reference points to the same object); however, if you change the copied reference itself, it will have no effect on the original object. Yes, a bit tricky indeed…

Exercise: Argue whether the following call will change the 'hello' string:

String hello = "World";
addHello(hello);

given this implementation:

private static void addHello(String s) {
  s = "Hello " + s;
}

The next two issues are more or less two sides of the same coin. By avoiding a name service in my broker design it becomes a simpler design. However, now the client has to know the location/identity of the server object and the server machine. In the TeleMed case this could be knowing the name of the server, like ’www.telemed.org’, as well as having a unique identity of the patient object, like “251248-1234”. These two pieces of information are enough to make the call: the client request handler contacts the server at ’www.telemed.org’, and the patient id can be used by the invoker to get the right object to review blood pressure for.

The missing name service also means that we cannot pass object references to the server side; we can only pass basic data types (int, double, arrays, etc.) and string objects (“hello world” is passed as a value to the server, not as an object reference). Remember that an object reference is a reference to a specific address in the computer’s memory in which the object is stored. This address of course does not make sense on the server side.

If I were to insist on being able to handle pass-by-reference, the Requestor on the client side could substitute the object reference by an object name that can be registered in the name service. Note that this requires the client side object to be remotely accessible for method calls in case the server decides to call a method on it. On the server side, the invoker should then use the name service to look up that particular object name, get hold of a ClientProxy for it as it needs to make remote calls, and then pass a reference to this proxy object up to the servant role. Thus, pass-by-reference both requires a name service and requires calls to be made from server to client, so this limitation again ties to the previous requirement of only supporting true client-server communication. It is important to note that this limitation is only an issue with client side object references.
As I will discuss in Chapter Broker II later, the server can create objects and return “references” back to the client, which it can use to make remote method calls on the server-created objects. So, my Broker is limited compared to Java RMI, and asymmetric in a sense with more constraints on the clients than on the server. But the limitations


I have made are the same as those made in general by large scale web architectures, notably REST, which have achieved world wide success and today are much more common than Java RMI systems, CORBA, .NET remoting, and other full Broker systems. Large scale web systems based upon REST are also only able to pass-by-value, and (for the most part) are purely client-server. And there are good architectural reasons for doing so. First, a pure client-server architecture scales horizontally: if you need more computing power you can add more servers and use a load balancer. And the clear distinction between the two roles (the client always calls, and the server reacts) makes it much easier to implement servers that are secure, highly available, and performant. The location problem is partially mitigated by using URIs, which rely on DNS, so servers can move to new locations even if their DNS names are hard coded.

Failure Modes

A remote method call is architecturally significantly different from a local method call. It is always much slower. And unless extreme measures have been taken, some day it will simply fail — no computer operates reliably indefinitely. To create highly available systems, such failure modes must be anticipated and their effects mitigated. This means all method calls to a ClientProxy must be scrutinized and coded with attention — they are what Nygard1 terms integration points, that is, points in the code where two systems integrate and thus will cause failures at some time. The same goes, of course, for the server side, which may not be able to transmit the reply because the client computer crashed after sending the initial request.

1 Michael T. Nygard, “Release it - Design and Deploy Production-Ready Software, 2nd ed.”, Pragmatic Bookshelf, 2018.

One simple complication is that our standard failure handling mechanism, i.e. throwing exceptions, of course does not propagate across networks. For example, if the TeleMed server tries to store a document in XDS but the connection is lost, an exception will be thrown in the server, but it will propagate only to the server request handler implementation. As a thrown exception is also a return value, it must be caught in the Invoker, marshalled, and sent to the client side. In turn, the Requestor must identify the reply as an exception, and then throw a suitable client side exception to inform the domain code there that some unusual situation has occurred.

Failure handling is a large, architectural topic and beyond the scope of this book; I refer to the books outlined in the end of the chapter. The most basic recipe for handling failures is to analyze each remote call site (the integration points) thoroughly in order to catch and handle all exceptions


gracefully, to ensure all server side exceptions are handled and acknowledged to the client side, and importantly ensure that the server will survive.
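As an illustration of guarding such an integration point, the self-contained sketch below mimics a failing remote call with a local stand-in. The IPCException class and the graceful-degradation policy here are simplified stand-ins invented for illustration, not the library's prescribed handling.

```java
import java.util.Collections;
import java.util.List;

public class IntegrationPointDemo {
  // Stand-in for the broker's IPC exception (hypothetical simplification).
  static class IPCException extends RuntimeException {
    IPCException(String msg) { super(msg); }
  }

  // Stand-in for a remote call through a ClientProxy that fails,
  // as any real integration point may do some day.
  static List<String> fetchObservations() {
    throw new IPCException("connection refused");
  }

  public static void main(String[] args) {
    List<String> observations;
    try {
      observations = fetchObservations();
    } catch (IPCException e) {
      // Degrade gracefully instead of propagating the failure to the user
      System.out.println("WARN: server unreachable: " + e.getMessage());
      observations = Collections.emptyList();
    }
    System.out.println("count=" + observations.size());
  }
}
```

The essential point is that the catch clause is a deliberate design decision per call site: here an empty result is an acceptable fallback, but another call site may need to retry or fail fast.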

Marshalling Robustness

The Requestor and Invoker must of course agree on the format of the transmitted message: the marshalling format. As mentioned earlier, many libraries exist for converting ordinary Java objects into, say, JSON and back again. This is fast and convenient, and I also use it in the TeleMed implementation in the next section. However, for production quality software it is important to adhere to Jon Postel’s robustness principle: Be conservative in what you do, be liberal in what you accept from others.2

In other words, marshalling data to be read by other machines (or by other programs on the same machine) should conform completely to the specifications, but code that receives input should accept non-conformant input as long as the meaning is clear. This principle requires a bit more design and programming at the onset, but it pays back as time goes and services are updated as new features are added.

To see this, consider a situation in which developers continue work on an existing system, using version 1 of a marshalling format, that is in production. They have to change the marshalling format to a new version, version 2, as their development of new features and bugfixes progresses. They test their development intensively, and it works fine in their staging environment, as they of course run both server and clients using the updated code base. But once they put it into production, they discover that servers crash. This is because the server now expects version 2 of incoming messages whereas all already deployed clients send messages using marshalling format version 1. One important lesson is the following:

Key Point: Include Format Version Identity. Always include version identity in the marshalling format.

Then the demarshalling code can do a first lenient demarshalling to look for the version identity in the received message; and then next, based upon that, choose the appropriate demarshalling algorithm that fits the particular format.
If no such algorithm exists, then at least report the error gracefully to the users or system administrators, so appropriate actions can be taken. One obstacle is that marshalling libraries, for convenience, often marshall to and from objects using their type as template. For instance, Gson allows you to write:

2 Postel, Jon, “Transmission Control Protocol”, IETF RFC 761, Jan 1980.


TeleObs a = new TeleObs("251248-1234", 128, 76);
String aAsJson = gson.toJson(a);
TeleObs b = gson.fromJson(aAsJson, TeleObs.class);

While this is effective and convenient for the programmer, it unfortunately also implies that once the marshalling format changes (because developers have changed the TeleObs class in a new release), the fromJson() method call will throw an exception if the supplied JSON string uses the old format. You will have to dig into the concrete library in order to explore its options for a more lenient handling of marshalling.
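To sketch the Key Point above, the runnable example below does a first, deliberately lenient pass that only sniffs the version identity before choosing a demarshalling algorithm. To stay self-contained it uses crude string scanning instead of a JSON library, and the version-1/version-2 formats are invented for illustration only.

```java
public class VersionDispatchDemo {
  // Lenient first pass: look only for the "version" field, before
  // committing to a full (strict) demarshalling routine.
  static int sniffVersion(String json) {
    int i = json.indexOf("\"version\"");
    if (i < 0) return 1; // messages without a version field predate versioning
    int j = json.indexOf(':', i) + 1;
    while (j < json.length() && !Character.isDigit(json.charAt(j))) j++;
    int k = j;
    while (k < json.length() && Character.isDigit(json.charAt(k))) k++;
    return Integer.parseInt(json.substring(j, k));
  }

  // Dispatch to the algorithm matching the detected format version
  static String demarshall(String json) {
    switch (sniffVersion(json)) {
      case 1: return "parsed with v1 algorithm";
      case 2: return "parsed with v2 algorithm";
      default: return "error: unknown format version";
    }
  }

  public static void main(String[] args) {
    System.out.println(demarshall("{\"systolic\":128,\"diastolic\":76}"));
    System.out.println(demarshall("{\"version\":2,\"bp\":\"128/76\"}"));
    System.out.println(demarshall("{\"version\":99}"));
  }
}
```

In a real system the two branches would of course invoke different Gson mappings rather than return strings, but the two-phase structure — lenient version sniffing first, strict demarshalling second — is the point.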

3.5 Summary of Key Concepts

The Broker architectural pattern defines the architecture for allowing client objects to invoke methods on server objects which are located on different machines or in different processes. The benefit is that we can program using the object-oriented paradigm instead of having to program using low level network send() and receive() calls. Still, a remote method call is likely to fail, and thus attention must be paid to handling failure situations.

The Broker pattern dictates six roles organized in three layers of abstraction: the domain layer (roles ClientProxy and Servant), which is defined by the objects that implement the domain interface Role; the marshalling layer (roles Requestor and Invoker), which handles converting objects, methods, and parameters to and from a marshalling format that is suitable for transmission on a network; and the inter process layer (roles ClientRequestHandler and ServerRequestHandler), which handles network traffic and threading issues.

The outlined Broker is the core of Java RMI and .NET remoting, but is simpler in the sense that it is purely client-server: no method call from the server side to an object on the client side is possible. My presentation of the pattern is highly influenced by the patterns described in “Pattern-Oriented Software Architecture, Volume 4”3. Patterns to deal with failure modes are presented by Nygard4.

3 Frank Buschmann, Kevin Henney, and Douglas C. Schmidt, “Pattern-Oriented Software Architecture Volume 4: A Pattern Language for Distributed Computing”, Wiley, 2007.
4 Michael T. Nygard, “Release it - Design and Deploy Production-Ready Software, 2nd ed.”, Pragmatic Bookshelf, 2018.

3.6 Review Questions

Draw the UML diagram of the Broker pattern. Explain what each role’s responsibilities are. Draw a sequence diagram of how a single method call on


the client side is executed using the Broker pattern. Explain the limitations of the present discussion of the pattern.

Architecture Pattern: Broker

Intent

Define a loosely coupled architecture that allows methods to be called on remote objects while having flexibility in choice of operating system, communication protocol, and marshalling format.

Problem

We want to program using the object-oriented paradigm but some objects are distributed on a remote machine.

Solution

The remote object must implement an interface. The domain implementation of it on the server is the Servant, while the client communicates with it using a ClientProxy implementation. The ClientProxy and Servant implementations are coupled with layers that handle marshalling and inter process communication.

Structure

Broker role structure.

Cost-Benefit The benefits are: loose coupling between all roles that provides flexibility in choosing the marshalling format, the operating system, the communication


protocol, as well as programming language on client and server side. The liabilities are: complex structure of the pattern.


4. Implementing Broker

4.1 Learning Objectives

In this chapter, I will detail the central challenges of implementing the Broker pattern using the TeleMed system as a case. I will show the central pieces of the code that handle the challenges. You can find the full code base in the FRDS.Broker Library at Bitbucket1.

4.2 Architectural Concerns

The TeleMed system is architecturally simple, because there is only one remote object: the TeleMedServant object. This is a simplification that suits the pedagogy of the book: we take small steps and present the simple Broker implementation first, and only later in Chapter 5 go into systems with many servant objects. However, it is not uncommon - most distributed systems have a kind of single point of entry. Think of services on the world wide web - they are hosted on a single URL, like www.amazon.com or www.facebook.com, and all the internal objects and services are accessed via that single entry point.

Still, the remote servant object must be located on a specific machine on the network. The Broker implementation in the FRDS.Broker library relies on the internet protocol (IP) and the Domain Name System (DNS) to identify servers on the internet. I assume you have a working knowledge of this, so the location of the telemed server could be “telemed.baerbak.com:37111” - on port 37111 on the server with DNS name “telemed.baerbak.com”. In the code base, the server defaults to “localhost”.
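As a tiny illustration of this location convention, a ‘hostname:port’ string like the one above can be split into the two pieces the client request handler needs; the parsing code is illustrative only, not part of the FRDS.Broker library.

```java
public class ServerLocationDemo {
  public static void main(String[] args) {
    // Split a 'hostname:port' location string, as used for the TeleMed server
    String location = "telemed.baerbak.com:37111";
    int colon = location.lastIndexOf(':');
    String hostname = location.substring(0, colon);
    int port = Integer.parseInt(location.substring(colon + 1));
    System.out.println(hostname + " " + port);
  }
}
```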

4.3 Domain Layer

At the domain layer we need to implement the Servant and the ClientProxy, both implementing the TeleMed interface. The TeleMed interface represents a Facade to the tele medicine system.

1 https://bitbucket.org/henrikbaerbak/broker


public interface TeleMed {

  /**
   * Process a tele observation into the HL7 format and store it
   * in the XDS database tier.
   *
   * @param teleObs
   *          the tele observation to process and store
   * @return the id of the stored observation
   * @throws IPCException in case of any IPC problems
   */
  String processAndStore(TeleObservation teleObs);

  /**
   * Retrieve all observations for the given time interval for the
   * given patient.
   *
   * @param patientId
   *          the ID of the patient to retrieve observations for
   * @param interval
   *          define the time interval that measurements are
   *          wanted for
   * @return list of all observations
   * @throws IPCException in case of any IPC problems
   */
  List<TeleObservation> getObservationsFor(String patientId,
      TimeInterval interval);
}

If you study the interface in the code base, you will find a few more methods, which I will return to in the REST Chapter. Developing the Servant implementation represents a problem already dealt with in the chapter on Test-Driven Development (TDD) in my original book Flexible, Reliable Software2. You can find the implementation of TeleMedServant in the telemed.server package in the telemed project. Next, I will detail the implementation for the client side and then the server side.

2 Henrik B. Christensen, “Flexible, Reliable Software - Using Patterns and Agile Development”, CRC Press, 2010.


4.4 Client Side

ClientProxy

The ClientProxy implementation (package telemed.client in project telemed in the code base) is more interesting, as it is at this level we start the process of converting the method calls to remote calls. Actually, this is quite straightforward, as it simply needs to make a call to its associated Requestor, which is dependency injected through the constructor.

public class TeleMedProxy implements TeleMed, ClientProxy {

  public static final String TELEMED_OBJECTID = "singleton";

  private final Requestor requestor;

  public TeleMedProxy(Requestor requestor) {
    this.requestor = requestor;
  }

  @Override
  public String processAndStore(TeleObservation teleObs) {
    String uid =
      requestor.sendRequestAndAwaitReply(TELEMED_OBJECTID,
          OperationNames.PROCESS_AND_STORE_OPERATION,
          String.class, teleObs);
    return uid;
  }

  @Override
  public List<TeleObservation> getObservationsFor(String patientId,
      TimeInterval interval) {
    Type collectionType =
      new TypeToken<List<TeleObservation>>(){}.getType();

    List<TeleObservation> returnedList;
    returnedList =
      requestor.sendRequestAndAwaitReply(TELEMED_OBJECTID,
          OperationNames.GET_OBSERVATIONS_FOR_OPERATION,
          collectionType, patientId, interval);

    return returnedList;
  }
}

The requestor’s sendRequestAndAwaitReply() is the library’s equivalent to the request() method of the Broker pattern from the previous chapter, but note


that the location parameter is missing. As argued above, there is only one physical server computer involved, so this parameter is better provided once and for all instead of in every method call (in the code base, you will find that hostname and portnumber are provided to the ClientRequestHandler as constructor parameters). So, the remaining parameters are ’objectId’, ’operationName’, a type parameter, and finally ’arguments’. And - there is only one servant object, so the ’objectId’ also becomes a dummy parameter: all incoming messages to the server will go to the server’s single TeleMedServant object (also known as a Singleton). The actual method to be called is simply a named constant string defined in an OperationNames class. One example is:

public static final String PROCESS_AND_STORE_OPERATION =
    "telemed-process-and-store";

That is, the way clients and servers agree on which method/operation to call is simply by a unique string. As I have done here, it is a good idea to convey both the object type “telemed” as well as the method name “process-and-store” to create a unique, easily identifiable, operation name. The Invoker on the server side will of course use this constant string to identify which method to call.

The next argument is the return type, like String.class. The requestor’s implementation uses Java generic types to know the type of the return value. I will return to that in the next section. Finally, there are the arguments, i.e. the parameter list. Here I pass each method parameter in the same order as on the parameter list. The ordering is important, as the ClientProxy and Invoker have to agree on in which order parameters appear in the marshalled message. So: do the same thing, the same way, and respect the ordering.

Looking at the getObservationsFor method, you will note that the type for a list of tele observations is a bit of magic, and actually makes a binding to the marshalling library, Google Gson, used in the requestor code. This is an example that boundaries between the roles of the Broker are not easily upheld strictly. The code shown here is also truncated a bit compared to that in the real code base, as it has to handle exceptions when an unknown patient is encountered. Note how “mechanical” this code is; it repeats the same template with slight variations in parameters.

Requestor

The requestor’s responsibility in turn is to marshall the method call, delegate to the client request handler for send/receive, and demarshall and return the


answer from the remote object.

public interface Requestor {
  <T> T sendRequestAndAwaitReply(String objectId, String operationName,
      Type typeOfReturnValue, Object... arguments);
}

I have chosen JSON as the marshalling format as it is human readable, which is important during the development process and in debugging situations; because it is relatively terse compared to XML; and because I can easily find high-quality JSON libraries to reuse. I have chosen to use the Gson library by Google. (Fragment in broker project: StandardJSONRequestor.java)

 1  @Override
 2  public <T> T sendRequestAndAwaitReply(String objectId,
 3      String operationName,
 4      Type typeOfReturnValue,
 5      Object... arguments) {
 6    // Perform marshalling
 7    String marshalledArgumentList = gson.toJson(arguments);
 8    RequestObject request =
 9        new RequestObject(objectId, operationName, marshalledArgumentList);
10    String marshalledRequest = gson.toJson(request);
11
12    // Ask CRH to do the network call
13    String marshalledReply =
14        clientRequestHandler.sendToServerAndAwaitReply(marshalledRequest);
15
16    // Demarshall reply
17    ReplyObject reply = gson.fromJson(marshalledReply, ReplyObject.class);
18
19    // First, verify that the request succeeded
20    if (!reply.isSuccess()) {
21      throw new IPCException(reply.getStatusCode(),
22          "Failure during client requesting operation '"
23          + operationName + "'. ErrorMessage is: "
24          + reply.errorDescription());
25    }
26    // No errors - so get the payload of the reply
27    String payload = reply.getPayload();
28
29    // and demarshall the returned value
30    T returnValue = null;
31    if (typeOfReturnValue != null)
32      returnValue = gson.fromJson(payload, typeOfReturnValue);
33    return returnValue;
34  }

Note that no TeleMed domain specific roles or objects are used. Thus this is a general purpose implementation that may serve other domains, and I have therefore put it into the frds.broker package. The requestor needs an instance of a ClientRequestHandler to make the actual network send() and blocking receive() calls, and this instance is again constructor injected, as you can see in the constructor declaration.

Inside the sendRequestAndAwaitReply method I do the marshalling (lines 7-10). This is done in two steps: first the ’arguments’ are marshalled making a JSON array, and next I populate a record type class, RequestObject, that just stores the ’objectId’, ’operationName’, and the JSON array. Finally, the request object is marshalled to produce a JSON string embodying all provided arguments. Next, I ask the client request handler to send to the server (lines 13-14) and await its reply. Upon reception, it is demarshalled into a ReplyObject (line 17), again a simple record type class that only contains a status code and the reply JSON message. The status code is used to report exceptions and error conditions from the Invoker on the server side. So, the status code is first tested to see whether any failures happened on the server, which will cause an exception to be thrown (lines 20-26). Next, the actual return value is demarshalled and returned to the client proxy (lines 28 onwards).

RequestObject and ReplyObject

Marshalling and demarshalling is about converting domain objects to byte arrays and back again. However, the Broker needs to attach some information to the method calls, like the object id and method/operation name, and the reply needs to convey more than just the return value of the method call, for instance exceptions thrown or error codes. This is the reason for the two additional roles introduced in the code above: RequestObject and ReplyObject. They are convenience classes of the record/struct/PODO (plain old data object) type: they only have accessor methods and just store information. They make using marshalling libraries like Gson easier.
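A sketch of what such a record type class may look like is shown below. It follows the accessor names used by the invoker code later in the chapter (getObjectId(), getOperationName(), getPayload()), but the real class in the code base may differ in details; the small main method is only there to demonstrate usage.

```java
public class RequestObject {
  private final String objectId;
  private final String operationName;
  private final String payload; // the marshalled argument list (JSON array)

  public RequestObject(String objectId, String operationName,
      String payload) {
    this.objectId = objectId;
    this.operationName = operationName;
    this.payload = payload;
  }

  // Pure accessors - no behavior, which keeps marshalling trivial
  public String getObjectId() { return objectId; }
  public String getOperationName() { return operationName; }
  public String getPayload() { return payload; }

  public static void main(String[] args) {
    RequestObject r = new RequestObject("singleton",
        "telemed-process-and-store", "[{\"systolic\":128}]");
    System.out.println(r.getObjectId() + " " + r.getOperationName());
  }
}
```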


ClientRequestHandler

Finally, the ClientRequestHandler needs to bind to the operating system and encode a protocol for network communication with the server. In the package frs.broker.ipc.socket you will find class SocketClientRequestHandler that uses Java socket programming for this. Note that this code is also general and not tied to the TeleMed system itself. Sockets are basically IO streams, so I can write the request object to an output stream (sending the request byte array to the server), and next read the reply from the input stream (receiving the reply byte array from the server). The core of this is shown here: (Fragment in broker package: SocketClientRequestHandler.java)

// Send it to the server (= write it to the socket stream)
out.println(request);


// Block until a reply is received
String reply;
try {
  reply = in.readLine();


} catch (IOException e) {

where ’out’ is an output IO stream and ’in’ is the input stream. If you review the full code, you will note that I use String instead of byte[] in the code base. Strings are conceptually similar to byte arrays, and easier to handle in the code base and when testing and debugging. So, I have the three roles implemented and only need to inject the dependencies. In the package telemed.main in the telemed subproject of the Broker code, you can find client programs that simulate uploading blood pressure and reading values for the last week for a given patient. The core configuration code that injects the dependencies can be found in HomeClientSocket and HomeClientHTTP and follows the template:

ClientRequestHandler clientRequestHandler =
    new SocketClientRequestHandler(hostname, port);
Requestor requestor =
    new StandardJSONRequestor(clientRequestHandler);


TeleMed ts = new TeleMedProxy(requestor);

Note that the location of the server, ’hostname’ and ’port’, is simply given as parameters to the client request handler.


4.5 Server side

ServerRequestHandler

At the server side, the ServerRequestHandler obviously must also bind to the operating system and understand the protocol used by the client request handler. A socket based implementation that matches the SocketClientRequestHandler can be found in package frs.broker.ipc.socket. The implementation is rather long, with setting up the socket and error handling taking quite a lot of code, but the central message receiving and processing is shown below. (Fragment in broker project: SocketServerRequestHandler.java)

 1  private void readMessageAndDispatch(Socket clientSocket)
 2      throws IOException {
 3    PrintWriter out = new PrintWriter(clientSocket.getOutputStream(), true);
 4    BufferedReader in = new BufferedReader(new InputStreamReader(
 5        clientSocket.getInputStream()));
 6
 7    String inputLine;
 8    String marshalledReply = null;
 9
10
11    inputLine = in.readLine();
12    System.out.println("--> Received " + inputLine);
13    if (inputLine == null) {
14      System.err.println(
15          "Server read a null string from the socket???");
16    } else {
17      marshalledReply = invoker.handleRequest(inputLine);
18
19      System.out.println("--< replied: " + marshalledReply);
20    }
21    out.println(marshalledReply);
22
23    System.out.println("Closing socket...");
24    in.close();
25    out.close();
26  }

The basic algorithm is simple: receive the message from the network (line 11), pass it on to the invoker (line 17) and finally send the reply back to the client (line 21). This socket server is rather verbose for the sake of demonstration.
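The same receive, dispatch, reply algorithm can be exercised without sockets. The runnable sketch below substitutes in-memory streams for the socket streams and a trivial invoker (both stand-ins invented for illustration), so the flow can be followed and tested in isolation.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.PrintWriter;
import java.io.StringReader;
import java.io.StringWriter;

public class DispatchLoopDemo {
  interface Invoker { String handleRequest(String request); }

  // Same receive -> invoke -> reply algorithm as the socket server,
  // but over any Reader/Writer pair
  static void readMessageAndDispatch(BufferedReader in, PrintWriter out,
      Invoker invoker) throws IOException {
    String inputLine = in.readLine();                // receive
    String reply = invoker.handleRequest(inputLine); // dispatch
    out.println(reply);                              // send reply
  }

  public static void main(String[] args) throws IOException {
    Invoker invoker = request -> "echo:" + request; // trivial stand-in
    BufferedReader in = new BufferedReader(new StringReader("hello\n"));
    StringWriter sink = new StringWriter();
    PrintWriter out = new PrintWriter(sink, true);
    readMessageAndDispatch(in, out, invoker);
    System.out.print(sink.toString());
  }
}
```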


Invoker

The Invoker in turn is responsible for demarshalling the message, determining the right object and method to invoke, calling it, and marshalling the return value into a reply. As such, the Invoker code is always specific to the domain, and must closely match the marshalling format defined in the Requestor. (Fragment in telemed package: TeleMedJSONInvoker.java):

 1  public String handleRequest(String request) {
 2    // Do the demarshalling
 3    RequestObject requestObject =
 4        gson.fromJson(request, RequestObject.class);
 5    JsonArray array =
 6        JsonParser.parseString(requestObject.getPayload())
 7            .getAsJsonArray();
 8
 9    ReplyObject reply;
10
11    /* As there is only one TeleMed instance (a singleton)
12       the objectId is not used for anything in our case. */
13    try {
14      // Dispatching on all known operations
15      // Each dispatch follows the same algorithm
16      // a) retrieve parameters from json array (if any)
17      // b) invoke servant method
18      // c) populate a reply object with return values
19
20
21      if (requestObject.getOperationName().equals(OperationNames.
22          PROCESS_AND_STORE_OPERATION)) {
23        // Parameter convention: [0] = TeleObservation
24        TeleObservation ts = gson.fromJson(array.get(0),
25            TeleObservation.class);
26
27        String uid = teleMed.processAndStore(ts);
28        reply = new ReplyObject(HttpServletResponse.SC_CREATED,
29            gson.toJson(uid));
30
31      } else if (requestObject.getOperationName().equals(OperationNames.
32          GET_OBSERVATIONS_FOR_OPERATION)) {
33        // Parameter convention: [0] = patientId
34        String patientId = gson.fromJson(array.get(0), String.class);
35        // Parameter convention: [1] = time interval
36        TimeInterval interval = gson.fromJson(array.get(1),
37            TimeInterval.class);
38
39        List<TeleObservation> tol = teleMed.getObservationsFor(patientId,
40            interval);
41        int statusCode = (tol == null || tol.size() == 0) ?
42            HttpServletResponse.SC_NOT_FOUND :
43            HttpServletResponse.SC_OK;
44        reply = new ReplyObject(statusCode, gson.toJson(tol));
45
46      } else if (requestObject.getOperationName().equals(OperationNames.
47          CORRECT_OPERATION)) {
48        [... handling of other methods removed]

The algorithm to determine which object and method to invoke is often termed the dispatcher and is sometimes expressed as a role in its own right (Buschmann et al. 2007)3. Here I have taken the simplest possible approach, namely to make a state machine using a long-winding if-statement. This does not scale well but suffices for our small TeleMed system. In a following chapter, I will discuss and implement the dispatching for multi-object systems.

Actually the handleRequest method needs to demarshall in two phases. The first is to get the objectId, the operationName and the arguments as a JSON array (lines 3-7). Upon this first demarshalling, the algorithm has the information to determine which method to call on which object. Note that the ’objectId’ is not used anywhere: there is one and only one servant object, ’teleMed’, which is created and dependency injected into the Invoker in the server’s main() method. Thus, the TELEMED_OBJECTID on the client side could have been any string, and it is only the operationName that is used to determine which method the client side invoked. For each operation, a further demarshalling of the JSON array must be executed to get the original parameters, for instance the TeleObservation as the first parameter to the processAndStore method (lines 24-25). Again, the algorithm for handling each method is “mechanical”: demarshall the provided arguments, do the “upcall” to the servant object, and marshall the return value and error code into a reply (shown below).

Replies from the server need to communicate exceptions happening on the server, and can only do so using marshalled error codes; exceptions cannot by themselves cross machine boundaries. Instead of inventing my own error code system, it is much better to reuse an established one, and I have adopted the HTTP error codes.
3 Frank Buschmann, Kevin Henney, and Douglas C. Schmidt, “Pattern-Oriented Software Architecture Volume 4: A Pattern Language for Distributed Computing”, Wiley, 2007.

These are standardized and well described (I will detail them later in Chapter HTTP), and I can reuse constants defined in


common Java HTTP libraries. The code fragment below (which is the end of the handleRequest method) shows how exceptions happening on the server side are caught by the invoker, and converted into HTTP error codes in the reply object.

    } else {
      // Unknown operation
      reply = new ReplyObject(HttpServletResponse.SC_NOT_IMPLEMENTED,
          "Server received unknown operation name: '"
          + requestObject.getOperationName() + "'.");
    }


  } catch (XDSException e) {
    reply = new ReplyObject(
        HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
        e.getMessage());
  }


  // And marshall the reply
  return gson.toJson(reply);


}

The fragment above handles two situations: the first is if the ’operationName’ was not known (which seems to indicate some kind of marshalling error), and the second is in case the XDS threw an exception.

So the final piece of the puzzle is the dependency injection and coupling of delegates in the server: create the Servant, inject it into the Invoker, and finally inject that into the ServerRequestHandler. You will find main programs for the TeleMed server also in package telemed.main; below is the configuration fragment from the class ServerMainSocket (the binding to the XDS layer has been left out for clarity).

[XDS creation code removed]
TeleMed tsServant = new TeleMedServant(xds);
Invoker invoker = new TeleMedJSONInvoker(tsServant);


// Configure a socket based server request handler
SocketServerRequestHandler ssrh =
    new SocketServerRequestHandler();
ssrh.setPortAndInvoker(port, invoker);


ssrh.start();


4.6 Test-driven development of Distributed Systems

I actually used the TeleMed case to develop both the TeleMed case code as well as the general implementations of the broker roles, like e.g. the StandardJSONInvoker, and it was mostly done using test-driven development and automated testing. How? By using test stubs and test doubles (see Sidebar 12.4 in “Flexible, Reliable Software”) to keep focus on one particular role and associated delegate implementation at a time, so I could take small steps—and most importantly: avoid using real network communication. The overall process was:

• Always get the domain code in place first. Thus I started my TDD process by test-driving the TeleMedServant implementation first. After all, if your domain code does not work correctly at the onset, you may easily waste time tracking defects in the large set of code making up the broker delegates only to find a defect in the domain code.

• Next I turn my attention to the client side proxy, and to TDD that I use a test spy as delegate to play the Requestor role. These tests I normally do not make comprehensive, as the proxy code will be tested intensively later in the process, but they serve the purpose of driving parts of the client proxy code into existence.

• A similar approach can be taken with the Invoker role.

• While developing the broker delegates, I simply avoid the IPC layer completely; it is the last thing I turn my attention to. I can do that by introducing a Fake Object ClientRequestHandler which calls the Invoker directly.

• Finally, it of course is important to do integration testing in which the full IPC layer is used.

I will cover aspects of this process below. It should be noted that developing Broker based distributed systems is not just developing a fully mature domain code base first, and then just adding distribution.
More than once in my process of developing the Broker role implementations, I ran into insights that forced me to make minor modifications to the domain code. As an example, a constant source of modification is marshalling of domain objects: marshalling libraries require the existence of a no-argument constructor (the JavaBean standard). Such constructors are not necessary when developing the domain code on its own, but once the domain objects are used as value objects to be passed between client and server, I had to add them.
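To see why marshalling libraries insist on a no-argument constructor, consider how a demarshaller instantiates a value object reflectively before filling in its fields. The class and field names below are hypothetical, not TeleMed code:

```java
import java.lang.reflect.Constructor;

public class NoArgConstructorDemo {
  // Hypothetical value object; marshalling libraries typically instantiate
  // such classes reflectively via a no-argument constructor and then
  // populate the fields from the wire format.
  static class VitalSignDTO {
    public String patientId;
    public double systolic;

    public VitalSignDTO() { } // required by reflection-based demarshalling
  }

  public static void main(String[] args) throws Exception {
    // This is essentially what a demarshaller does internally:
    Constructor<VitalSignDTO> ctor =
        VitalSignDTO.class.getDeclaredConstructor();
    VitalSignDTO dto = ctor.newInstance();
    dto.patientId = "251248-1234";
    dto.systolic = 120.0;
    System.out.println(dto.patientId); // prints 251248-1234
  }
}
```

Without the no-argument constructor, the getDeclaredConstructor() call throws NoSuchMethodException, which is the kind of failure that forced the domain code modifications mentioned above.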


Requestor Spy

To help develop the client proxy I use a test spy for the Requestor role. Remember that a test spy will record the method calls it receives (and nothing else), so they can later be asserted. In this case I inject a SpyRequestor into the TeleMedProxy, call the proxy’s methods, and assert that the requestor gets called with the proper parameters.

public class TestTeleMedProxy {

  private SpyRequestor requestor;
  private TeleMed telemed;

  @Before
  public void setup() {
    requestor = new SpyRequestor();
    telemed = new TeleMedProxy(requestor);
  }

  @Test
  public void shouldValidateRequestObjectCreated() {
    // Given a tele observation
    TeleObservation teleObs1 = HelperMethods.createObservation120over70forNancy();
    // When the client stores it
    telemed.processAndStore(teleObs1);

    // Then validate that the proper operation and object id was
    // provided to the requestor
    assertThat(requestor.lastOperationName,
        is(OperationNames.PROCESS_AND_STORE_OPERATION));
    assertThat(requestor.lastObjectId, is(TeleMedProxy.TELEMED_OBJECTID));
    // Testing the arguments and the type is tricky, but they will be
    // covered intensively by other tests later
  [...]

The SpyRequestor has “spy methods” that allow me to inspect what parameters were passed to its methods. Or, rather, as the code above shows, I have simply made the instance variables, like lastOperationName, public, so I can read them directly. A spy is just for testing purposes, not part of the production code, so I have been a bit lazy about adhering strictly to good programming doctrine…
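Stripped of the TeleMed specifics, the spy technique itself is tiny. Here is a self-contained sketch; the interface is simplified relative to the real FRDS.Broker Requestor signature:

```java
public class SpyDemo {
  // Simplified Requestor role (the real FRDS.Broker signature differs).
  interface Requestor {
    String sendRequestAndAwaitReply(String objectId, String operationName);
  }

  // The spy records the parameters of the last call for later inspection.
  static class SpyRequestor implements Requestor {
    public String lastObjectId;
    public String lastOperationName;

    @Override
    public String sendRequestAndAwaitReply(String objectId,
                                           String operationName) {
      lastObjectId = objectId;
      lastOperationName = operationName;
      return "{}"; // dummy reply, never inspected by the test
    }
  }

  public static void main(String[] args) {
    SpyRequestor spy = new SpyRequestor();
    // The proxy under test would make this call internally:
    spy.sendRequestAndAwaitReply("telemed", "telemed-process-and-store");
    // The test can now assert on the recorded interaction:
    System.out.println(spy.lastOperationName);
    // prints telemed-process-and-store
  }
}
```

The spy verifies only the *interaction* (which operation, which object id), deliberately ignoring the reply, which matches how it is used in TestTeleMedProxy above.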


Fake Object Request Handlers

A particularly tricky aspect of test-driven development of Broker is necessarily the IPC part. First, starting and stopping a server for each individual test case is much slower than normal unit tests. Second, it is also often problematic due to concurrency race conditions in the OS. One such issue is that Linux spends several seconds releasing a port connection: if one passed test case has shut down my TeleMed server on port 4567, then the next test case that starts a server on the same port simply fails with a java.net.BindException: Address already in use, as the OS still thinks there is a live connection. Third, the IPC between a client and a server is arguably more in the realm of integration testing than in unit testing.

So the question is if there is a way that you can avoid the IPC layer while you are just getting the other aspects in place: marshalling and demarshalling, object fetch and method dispatch in the invoker, etc. Actually, this is easy if your architectural toolbox includes test doubles! The client and server request handlers are just delegates to get messages from the client to the server and back again, and we can thus replace them with fake object test double implementations: instead of the ClientRequestHandler sending messages to a ServerRequestHandler which in turn invokes the Invoker’s handleRequest() method, I can simply make a fake object ClientRequestHandler that calls my server side Invoker directly.

From the test case that verifies TeleMed’s user stories (TestStory1.java in the TeleMed project), this is the @Before method which creates the production delegates for the servant, invoker, requestor, and client proxy roles, while using a fake object implementation for the client request handler (the LocalMethodCallClientRequestHandler below).

@Before
public void setup() {
  // Given a tele observation
  teleObs1 = HelperMethods.createObservation120over70forNancy();
  // Given a TeleMed servant
  xds = new FakeObjectXDSDatabase();
  TeleMed teleMedServant = new TeleMedServant(xds);
  // Given a server side invoker associated with the servant object
  Invoker invoker = new TeleMedJSONInvoker(teleMedServant);

  // And given the client side broker implementations, using the local
  // method client request handler to avoid any real IPC layer.
  ClientRequestHandler clientRequestHandler =
      new LocalMethodCallClientRequestHandler(invoker);
  Requestor requestor = new StandardJSONRequestor(clientRequestHandler);

  // Then it is Given that we can create a client proxy
  // that avoids any real IPC communication
  teleMed = new TeleMedProxy(requestor);
}

As both client and server “live” in the test case, all Broker roles are created and configured here. Note that there is no need for a server request handler; instead, the client request handler is doubled by an implementation that simply uses local method calls to send the marshalled message right into the server side invoker.

public class LocalMethodCallClientRequestHandler
    implements ClientRequestHandler {

  private final Invoker invoker;
  private String lastRequest;
  private String lastReply;

  public LocalMethodCallClientRequestHandler(Invoker invoker) {
    this.invoker = invoker;
  }

  @Override
  public String sendToServerAndAwaitReply(String request) {
    lastRequest = request;
    String reply = invoker.handleRequest(request);
    lastReply = reply;
    return reply;
  }
}

This allows fast and simple unit testing and development, as there are no threads nor any servers that need to be started and shut down. The flip side of the coin is, of course, that no testing of the real IPC implementations is made. So this is the next issue.

Integration Testing IPC

From a reliability perspective, it is of course also recommendable to have the IPC layer under automated test control. However, as the ClientRequestHandler and ServerRequestHandler are necessarily in separate threads, and as both rely on the underlying OS, these tests are subject to less control from JUnit and are thus more brittle. Indeed, I have experienced that some of the test cases described below have passed flawlessly using Gradle on the command line, but the very same tests fail when run from within IntelliJ - for no obvious reason. And these tests are integration tests, which are out of the scope of the present book. Still, the Broker code base contains some integration tests, so a short presentation is provided here.

The IPC tests are grouped in the package telemed.ipc. A simple test of the socket based CRH and SRH is shown below.

@Test
public void shouldVerifySocketIPC() throws InterruptedException {
  // Given a socket based server request handler
  final int portToUse = 37111;
  Invoker invoker = this; // A self-shunt spy
  ServerRequestHandler srh = new SocketServerRequestHandler();
  srh.setPortAndInvoker(portToUse, invoker);
  srh.start();
  // Wait for OS to open the port
  Thread.sleep(500);

  // Given a client request handler
  ClientRequestHandler crh = new SocketClientRequestHandler();
  crh.setServer("localhost", portToUse);

  // When we use the CRH to send a request object to
  // the external socket handler
  RequestObject req = new RequestObject(OBJECT_ID, CLASS_FOO_METHOD,
      MARSHALLED_PAYLOAD);
  ReplyObject reply = crh.sendToServer(req);

  // Then our test spy has indeed recorded the request
  assertThat(lastObjectId, is(OBJECT_ID));
  assertThat(lastOperationName, is(CLASS_FOO_METHOD));
  assertThat(lastPayLoad, is(MARSHALLED_PAYLOAD));

  // Then the reply returned is correct
  assertThat(reply.getStatusCode(), is(HttpServletResponse.SC_ACCEPTED));
  assertThat(reply.isSuccess(), is(true));
  assertThat(reply.getPayload(), is(MARSHALLED_REPLY_OBJECT));

  crh.close();
  srh.stop();
}

The overall template of the test case is to first create a SRH and next a CRH on the same port on localhost. As I only want to verify the communication across the CRH and SRH implementations, the Invoker role is not really interesting, and I replace it with a simple Test Spy whose handleRequest method just records the parameters and returns a dummy reply:

@Override
public String handleRequest(String request) {
  RequestObject requestObj = gson.fromJson(request, RequestObject.class);
  this.lastObjectId = requestObj.getObjectId();
  this.lastOperationName = requestObj.getOperationName();
  this.lastPayLoad = requestObj.getPayload();
  ReplyObject reply = new ReplyObject(HttpServletResponse.SC_ACCEPTED,
      MARSHALLED_REPLY_OBJECT);
  return gson.toJson(reply);
}

So the test just sends a dummy request containing some constant values, and validates that A) the Invoker test spy received the proper request object, and B) the reply contained the proper values. The test spy is a self-shunt⁴, that is, the test case class itself plays the role of the test spy. This is the reason for the line

Invoker invoker = this; // A self-shunt spy

in the test case above.
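The self-shunt idea in miniature, detached from sockets and Broker, looks like this; all names here are mine, not from the book’s code base:

```java
public class SelfShuntDemo {
  // Simplified stand-in for the Invoker role (not the FRDS.Broker one).
  interface Invoker {
    String handleRequest(String request);
  }

  // The "test class" itself plays the Invoker role - a self-shunt - and
  // records what it receives for later assertions.
  static class IpcTest implements Invoker {
    String lastRequest;

    @Override
    public String handleRequest(String request) {
      lastRequest = request; // record the call
      return "dummy-reply";
    }

    String run() {
      Invoker invoker = this; // the self-shunt: the test passes itself
      // A real test would hand 'invoker' to a ServerRequestHandler; here
      // we just simulate the unit under test calling back into us.
      return invoker.handleRequest("request-42");
    }
  }

  public static void main(String[] args) {
    IpcTest test = new IpcTest();
    String reply = test.run();
    System.out.println(test.lastRequest); // prints request-42
    System.out.println(reply);            // prints dummy-reply
  }
}
```

The benefit is that spy state (lastRequest and friends) lives directly in the test class, so assertions read naturally without a separate spy class.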

4.7 Using the Broker library

The FRDS.Broker is also available as a Maven repository library, BinTray FRDS.Broker Library⁵. This library can be included in your projects, and provides all the core interfaces and general implementations from the frds.broker package. To include it in your Gradle project, just add the dependency

dependencies {
  implementation 'com.baerbak.maven:broker:2.0'
}

to your ‘build.gradle’ file. For your given use case, you need to develop the ClientProxy classes as well as the Invoker implementation(s). The Requestor, ClientRequestHandler, and ServerRequestHandler implementations of the library can just be reused - or rewritten at your convenience. For instance, I have written CRH and SRH implementations that use RabbitMQ⁶ as the IPC layer, which provides benefits such as load balancing and increased architectural availability.

⁴ Meszaros, Gerard, “xUnit Test Patterns - Refactoring Test Code”, Addison Wesley, 2007.
⁵ https://bintray.com/henrikbaerbak/maven/broker
⁶ https://www.rabbitmq.com/


4.8 Summary of Key Concepts

It is possible to use test-driven development (TDD) to develop most of a Broker pattern based implementation of the TeleMed system. The key insight is to use a fake object implementation of the network layer, doable by making a ClientRequestHandler that simply calls the server side Invoker directly using a local method call. Then only the network implementations need to be under manual test, or tested by integration tests.

The implementation effort of TeleMed showed that of the six roles in the Broker pattern, actually only two of them, ClientProxy and Invoker, contain TeleMed specific code. The remaining four roles are general purpose implementations for the specific choice of marshalling format and network transmission implementation. In the FRDS.Broker project, the frds.broker package contains an implementation of the Requestor using JSON as marshalling format, and implementations of the ClientRequestHandler and ServerRequestHandler using sockets and HTTP libraries. The HTTP implementation will be discussed in the later HTTP chapter.


5. Broker Part Two

5.1 Learning Objectives

In this chapter, I will dig deeper into the Broker pattern and show how to handle distributed systems that have more than a single type that needs to make remote method calls. Supporting many different types, each with many methods, will force the server side invoker to handle a lot of different method up-calls. Thus a secondary learning objective is to show a simple way to keep the invoker code from becoming incohesive.

5.2 Limitations in the TeleMed Case

While the TeleMed case is a realistic one, it is also quite simple with regards to design and implementation. It does not showcase techniques for handling some of the more complex challenges when using the Broker architecture. It is simple because:

• Only one type, and indeed only one object, is handled in our TeleMed case: the single TeleMed instance. While it captures systems like web based shopping baskets, we also have to look at systems that deal with many different types (classes) and many instances of each type.

• The central object identity used in TeleMed is the person’s social security number and is thus given by the medical domain. However, in many computing systems this is not the case - the server creates many objects of a given type, and the client must then be able to identify exactly which object it needs to call a method on.

So, I will present a new case below, which allows me to discuss designs and implementations that deal with multi-type, multi-object systems. Beware that I have on purpose over-engineered the architecture a bit in order to showcase more types and objects for the sake of demonstration.

5.3 Game Lobby Stories

The context is GameLobby - a system to match two players, in different locations, who want to play a computer game together over the internet.


To do that, one player creates a game on a server computer, and invites the second player to join it. Once the second player has joined, the remote game becomes available for them to play. Rephrasing this as user stories:

Story 1: Creating a remote game. Player Pedersen has talked with his friend Findus about playing a computer game together; they both sit in their respective homes, so it must be a remote game, played over the internet. They agree that Pedersen should create the game, and Findus should then join it. Pedersen opens a web browser and opens the game’s game lobby page. On this lobby page, he hits the create game button. The web page then states that the game has been created, and displays the game’s join token, which is simply a unique string, like “game-17453”. It also displays a play game button, but it is inactive to indicate that no other player has joined the game yet. Pedersen then tells Findus the game’s join token. Next, he awaits that Findus joins the game.

Story 2: Joining an existing game. Meanwhile Findus has entered the same game lobby page. Once he is told the join token, “game-17453”, by Pedersen, he hits the join game button, and enters the join token string. The web page displays that the game has been created, and he hits the play game button, which brings him to the actual game.

Story 3: Playing the game. Pedersen has waited for Findus to join the game. Now that he has, the play game button becomes active, and he can hit it to start playing the game with Findus.

This is a more complex scenario, as there are several roles involved, like the game lobby, the game to be played, and also the intermediate state of a ‘game that has been created but is not playable yet.’

A Role Based Design

I design the game lobby domain using three basic roles. I need an object to represent the GameLobby, which is responsible for allowing players to create and join games. One player must create a game, while another may join it. When a user creates a game, he/she will be given a FutureGame object, a role that represents “the game to be” in the near future. The FutureGame has a method isAvailable() that is false until the second player has joined the game. Once he/she has done that, the FutureGame can be queried using getGame() to get an object of the final role, Game. Game of course represents the actual game to be played, with game specific methods for movement, game state accessors, or whatever is relevant.


A Future is a well-known software engineering concept which represents the answer to a request that may take a long time to compute - instead of waiting for the answer, the client receives a Future immediately. The Future can then be polled to see if the answer has become available, and once it is, the answer can be retrieved. The purpose of the Future is to avoid having the client block for a long time while waiting for the answer to be computed.
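Java ships with this concept in its standard library, and the poll-then-retrieve protocol can be seen with java.util.concurrent.CompletableFuture; a toy illustration, not GameLobby code:

```java
import java.util.concurrent.CompletableFuture;

public class FutureDemo {
  public static void main(String[] args) throws Exception {
    // The "long-running computation" has not delivered its answer yet.
    CompletableFuture<String> answer = new CompletableFuture<>();

    // Polling: the answer is not available, but the client is not blocked.
    System.out.println(answer.isDone());   // prints false

    // Later, some other party (in GameLobby terms: the second player
    // joining) provides the answer...
    answer.complete("the game");

    // ...and polling now shows it can be retrieved.
    System.out.println(answer.isDone());   // prints true
    System.out.println(answer.get());      // prints the game
  }
}
```

FutureGame follows the same protocol, just with domain-specific names: isAvailable() is the poll, getGame() the retrieval.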

GameLobby

• Singleton object, representing the entry point for creating and joining games.

FutureGame

• A Future, allowing the state of the game (available or not) to be queried, and, once both players have joined, returning the game object itself.
• Provides an accessor method getJoinToken() to retrieve the join token that the second user must provide.

Game

• The actual game domain role.

So a typical execution of the stories of the scenario with Pedersen and Findus can be visualized using this UML sequence diagram:


Game lobby dynamics.

I have encoded this sequence diagram as a JUnit test case below. Note that the test case acts both as Pedersen and Findus (player1 and player2). (Fragment from the gamelobby package, in TestScenario.java):

// Lobby object is made in the setup/before method
FutureGame player1Future = lobbyProxy.createGame("Pedersen", 0);
assertThat(player1Future, is(not(nullValue())));

// Get the token for my fellow players to enter when wanting
// to join my game. The token may appear on a web site
// next to player 1's name so player 2 can see it; or
// some other mechanism must be employed by the two players
// for player 2 to get hold of the token.
String joinToken = player1Future.getJoinToken();
assertThat(joinToken, is(not(nullValue())));

// As a second player has not yet joined, the game
// is not yet created
assertThat(player1Future.isAvailable(), is(false));

// Second player - wants to join the game using the token
FutureGame player2Future = lobby.joinGame("Findus", joinToken);
assertThat(player2Future, is(not(nullValue())));

// Now, as it is a two player game, both players see
// that the game has become available.
assertThat(player1Future.isAvailable(), is(true));
assertThat(player2Future.isAvailable(), is(true));

// And they can make state changes and read game state to the game
Game gameForPlayer1 = player1Future.getGame();
assertThat(gameForPlayer1.getPlayerName(0), is("Pedersen"));
assertThat(gameForPlayer1.getPlayerName(1), is("Findus"));
assertThat(gameForPlayer1.getPlayerInTurn(), is("Pedersen"));

// Our second player sees the same game state
Game gameForPlayer2 = player2Future.getGame();
assertThat(gameForPlayer2.getPlayerName(0), is("Pedersen"));
assertThat(gameForPlayer2.getPlayerName(1), is("Findus"));
assertThat(gameForPlayer2.getPlayerInTurn(), is("Pedersen"));

// Make a state change, player one makes a move
gameForPlayer1.move();

// And verify turn is now the opposite player
assertThat(gameForPlayer1.getPlayerInTurn(), is("Findus"));
assertThat(gameForPlayer2.getPlayerInTurn(), is("Findus"));

Challenges

What are the challenges in the above design that are not already covered by the previous discussion of Broker? Well, let us consider what our Broker does provide solutions for already.

• All three roles (lobby, future, game) must be defined as Java interfaces, and we must implement Servant and Proxy code.

• The Invoker on the server side must switch on quite a few operation names. As outlined earlier, a good operation name includes both the type and the method name, so, say, gamelobby_create_game may represent the createGame() method of the GameLobby interface. Of course, our invoker code easily becomes pretty long and tedious to write, but it is doable.

• Marshalling and demarshalling does not pose any problems we have not already dealt with.

• The same goes for IPC; nothing new under the sun here.

But the main challenge is that the server side creates new objects all the time, like the outcome of createGame() which returns a FutureGame instance.


Our Broker cannot “pass-by-reference”, and the FutureGame is of course an object reference, so our current Broker pattern is seemingly stuck. This is issue 1. Another, and unrelated, issue is the cluttering of the Invoker: consider having 20 different types with 20 methods each; this would require a switch on operation names having 400 different cases. This is hardly manageable. This is issue 2.
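To preview the shape of a remedy for issue 2: the single 400-case switch can be split into one sub-invoker per type, with a root invoker dispatching on the type prefix of the operation name. The sketch below uses my own simplified Invoker interface, not the FRDS.Broker one:

```java
import java.util.HashMap;
import java.util.Map;

public class RootInvokerDemo {
  // Simplified invoker role for the sketch.
  interface Invoker {
    String handleRequest(String operationName, String payload);
  }

  // Root invoker: routes on the type prefix of the operation name,
  // e.g. "gamelobby_create_game" goes to the "gamelobby" sub-invoker.
  static class RootInvoker implements Invoker {
    private final Map<String, Invoker> subInvokers = new HashMap<>();

    void register(String typePrefix, Invoker subInvoker) {
      subInvokers.put(typePrefix, subInvoker);
    }

    @Override
    public String handleRequest(String operationName, String payload) {
      String typePrefix =
          operationName.substring(0, operationName.indexOf('_'));
      Invoker sub = subInvokers.get(typePrefix);
      return sub.handleRequest(operationName, payload);
    }
  }

  public static void main(String[] args) {
    RootInvoker root = new RootInvoker();
    // Each sub-invoker only switches on its own type's methods,
    // keeping every switch small and cohesive.
    root.register("gamelobby", (op, pay) -> "lobby handled " + op);
    root.register("futuregame", (op, pay) -> "future handled " + op);

    System.out.println(root.handleRequest("gamelobby_create_game", "{}"));
    // prints lobby handled gamelobby_create_game
  }
}
```

With 20 types, the root invoker stays a 20-entry map, and each sub-invoker switches on only its own 20 methods.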

5.4 Walkthrough of a Solution

Instead of just providing solutions to the two issues, I will outline aspects of the development process to make this work, and then provide an abstract process description in the conclusion. You can find a detailed development diary in the diary.md file in the gamelobby package in the source code.

Issue 1: Server Created Objects

Let us focus first on issue 1: server created objects, and let us start by looking at what happens in the client code. I will take my starting point at the following code, which exposes the issue:

// Client side code
FutureGame player1Future = lobbyProxy.createGame("Pedersen", 0);

Thus, the createGame() method in the ClientProxy of GameLobby must invoke the servant’s identical method, and return a FutureGame instance to the caller. As the returned FutureGame instance is on the server side, the actual return type on the client must of course be a proxy, FutureGameProxy, which is associated with that particular instance of FutureGameServant. Take a moment to consider how the two objects are associated before you read on.

How are proxies associated correctly with their servant objects? Through the ‘objectId’. So the key insight is that while the server cannot pass-by-reference a new FutureGame instance to the client, it can pass the objectId, as objectIds are simple values, typically strings. Once the client proxy receives the objectId, it can then instantiate a FutureGameProxy and tell it which objectId to use. So the GameLobbyProxy’s implementation becomes

// Client side GameLobbyProxy code.
@Override
public FutureGame createGame(String playerName, int playerLevel) {
  String id = requestor.sendRequestAndAwaitReply(GAMELOBBY_OBJECTID,
      MarshallingConstant.GAMELOBBY_CREATE_GAME_METHOD,
      String.class, playerName, playerLevel);
  FutureGame proxy = new FutureGameProxy(id, requestor);
  return proxy;
}

That is, the call chain will end up in the server’s Invoker implementation, which then also must do something a little different than normal, because the up-call to the GameLobbyServant’s createGame() method will return a FutureGame instance, and not an objectId string. So this is the next issue I have to tackle. The first part of the Invoker’s code to handle the createGame() method looks normal: demarshall the parameters and do the up-call to the servant (‘lobby’ in the code below):

// Server side Invoker code.
if (operationName.equals(MarshallingConstant.GAMELOBBY_CREATE_GAME_METHOD)) {
  String playerName = gson.fromJson(array.get(0), String.class);
  int level = gson.fromJson(array.get(1), Integer.class);
  FutureGame futureGame = lobby.createGame(playerName, level);

Now I have the FutureGame instance, but what I need to return is the objectId, and I have none. So the question is how to create that? Actually, I can do this in a number of ways. I will discuss alternatives later in the discussion section, but for now, I will solve it by adding a responsibility to the FutureGameServant, namely that it creates and maintains a unique ID, and by adding a responsibility to the FutureGame interface, namely to have an accessor to this objectId: a getId() method:

public interface FutureGame {
  [...]
  /** Get the unique id of this game.
   *
   * @return id of this game instance.
   */
  String getId();
}

Then I make the servant constructor assign an objectId:


// Server side Servant code.
public class FutureGameServant implements FutureGame, Servant {
  private String id;

  public FutureGameServant(String playerName, int playerLevel) {
    // Create the object ID to bind server and client side
    // Servant-ClientProxy objects together
    id = UUID.randomUUID().toString();
    [...]
  }

  @Override
  public String getId() {
    return id;
  }
  [...]
}
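The identity minting in the constructor above relies solely on java.util.UUID; a standalone snippet (a toy demo, not book code) shows its behavior:

```java
import java.util.UUID;

public class UuidDemo {
  public static void main(String[] args) {
    // Each call mints a fresh, universally unique identifier...
    String id1 = UUID.randomUUID().toString();
    String id2 = UUID.randomUUID().toString();

    // ...so two servant objects will never clash on their objectId.
    System.out.println(id1.equals(id2));  // prints false
    System.out.println(id1.length());     // prints 36
  }
}
```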

I here use the Java library UUID which can create universally unique IDs. The getId() method of course just returns the ‘id’ generated once and for all in the constructor. As always, the [...] represents the “rest of the code.” Why add getId() to the interface, thus forcing both proxy and servant to implement it? Well, the proxy also needs to know the objectId of its associated servant object, so it makes sense to expose it through an accessor method in both the server and client side code. So, now any newly created FutureGameServant object will generate a unique id, and I can complete the Invoker code by marshalling this id and returning it back to the client proxy, as shown in the reply statement below:

// Server side Invoker code.
if (operationName.equals(MarshallingConstant.GAMELOBBY_CREATE_GAME_METHOD)) {
  String playerName = gson.fromJson(array.get(0), String.class);
  int level = gson.fromJson(array.get(1), Integer.class);
  FutureGame futureGame = lobby.createGame(playerName, level);
  String id = futureGame.getId();

  reply = new ReplyObject(HttpServletResponse.SC_CREATED, gson.toJson(id));
}
[...]

One very important thing is missing though, namely how the returned objectId is used by the proxy. So, consider the next line of our test case, in which the join token is fetched:


// Testing code.
FutureGame player1Future = lobbyProxy.createGame("Pedersen", 0);
String joinToken = player1Future.getJoinToken();

The FutureGame proxy code is the normal template implementation:

// Client side ClientProxy code.
@Override
public String getJoinToken() {
  String token = requestor.sendRequestAndAwaitReply(getId(),
      MarshallingConstant.FUTUREGAME_GET_JOIN_TOKEN_METHOD,
      String.class);
  return token;
}

However, the Invoker will receive the operationName and the objectId, but:

// Server side Invoker code.
if (operationName.equals(MarshallingConstant.FUTUREGAME_GET_JOIN_TOKEN_METHOD)) {
  FutureGame futureGame = ???
  String token = futureGame.getJoinToken();
  reply = new ReplyObject(HttpServletResponse.SC_OK, gson.toJson(token));
}

The culprit is the ??? line: how does the Invoker know which FutureGame object to do the up-call on? The answer is that we need a name service. Remember that name services are dictionaries that map remote object identities to the real remote objects. Here, I go for a simple name service implementation, and add one more statement, the putFutureGame() call, to the GameLobby’s invoker code for the createGame() method:

// Server side Invoker code.
if (operationName.equals(MarshallingConstant.GAMELOBBY_CREATE_GAME_METHOD)) {
  String playerName = gson.fromJson(array.get(0), String.class);
  int level = gson.fromJson(array.get(1), Integer.class);
  FutureGame futureGame = lobby.createGame(playerName, level);
  String id = futureGame.getId();
  nameService.putFutureGame(id, futureGame);

  reply = new ReplyObject(HttpServletResponse.SC_CREATED, gson.toJson(id));
}
[...]

‘nameService’ is a new abstraction that I introduce, which is essentially a Map data structure with put() and get() methods that map a unique id (objectId) to a servant object.


public interface NameService {
  void putFutureGame(String objectId, FutureGame futureGame);
  FutureGame getFutureGame(String objectId);

  [...]
}
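An in-memory implementation of such an interface is little more than a HashMap. A self-contained sketch (class and stand-in interface are mine, not the GameLobby source):

```java
import java.util.HashMap;
import java.util.Map;

public class NameServiceDemo {
  // Stand-in for the domain role; the real FutureGame has more methods.
  interface FutureGame {
    String getId();
  }

  // Minimal Map-backed name service, mirroring the interface in the text.
  static class InMemoryNameService {
    private final Map<String, FutureGame> futureGames = new HashMap<>();

    public void putFutureGame(String objectId, FutureGame futureGame) {
      futureGames.put(objectId, futureGame);
    }

    public FutureGame getFutureGame(String objectId) {
      return futureGames.get(objectId);
    }
  }

  public static void main(String[] args) {
    InMemoryNameService nameService = new InMemoryNameService();
    FutureGame game = () -> "game-17453";

    // The invoker registers the servant when it is created...
    nameService.putFutureGame(game.getId(), game);

    // ...and can later resolve the objectId back to the servant.
    System.out.println(nameService.getFutureGame("game-17453").getId());
    // prints game-17453
  }
}
```

Being an in-memory map, such an implementation of course loses all bindings on a server restart, a limitation discussed later in the chapter.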

Every time the server side creates an object, it must remember to add the newly created object reference to the name service under its unique object id. Now, my FutureGame invoker code for the getJoinToken() method can simply look up the proper instance, based upon the object id, and next do the up-call:

// Server side Invoker code.
if (operationName.equals(MarshallingConstant.FUTUREGAME_GET_JOIN_TOKEN_METHOD)) {
  FutureGame futureGame = nameService.getFutureGame(objectId);
  String token = futureGame.getJoinToken();
  reply = new ReplyObject(HttpServletResponse.SC_OK, gson.toJson(token));
} else if (operationName.equals( [...]

The same process is of course repeated when creating a Game object: add an accessor method getId() to the Game interface, let the GameServant constructor generate a unique ID and assign it to an instance variable, and return the objectId back to the ClientProxy, which creates a GameProxy with the given objectId.

Object reference accessor methods

The discussion above outlined the process of passing object references for objects created by the server. The same process also applies to accessor methods that return object references. Our GameLobby case study already has such a method in FutureGame:

public interface FutureGame {
  [...]

  Game getGame();
}

The magic lives in the Invoker code. The insight is that the returned Game instance already has an objectId, so it is just that which must be returned to the client side proxy:


// Server side Invoker code.
[...]
} else if (operationName.equals(
    MarshallingConstant.FUTUREGAME_GET_GAME_METHOD)) {
  FutureGame futureGame = nameService.getFutureGame(objectId);
  Game game = futureGame.getGame();
  String id = game.getId();
  reply = new ReplyObject(HttpServletResponse.SC_OK, gson.toJson(id));
}

Here, the Invoker fetches the future game servant, makes the up-call, and then just marshalls the returned Game instance’s objectId back to the client side.

Discussion

The presented solution now allows our Broker to handle both pass-by-value and pass-by-reference, the latter however only when the references are to objects residing on the server. As outlined earlier, a local object residing on the client side can never be passed by reference to the server side.

Argue why we cannot pass a client object reference to the server.

If you have a method in which a parameter is a server side object, like this one:

Game game = futureGame.getGame();
lobbyProxy.tellIWantToLeave(game);

Then your proxy code of course shall just send the objectId to the server. This allows the server side invoker to look up the proper server object, and pass that to the equivalent tellIWantToLeave() method of the servant object.

Note also that in a remote system you have to decide for each class whether it is pass-by-value or pass-by-reference. Going back to the TeleMed case, a TeleObservation instance was actually part of the method signature of the processAndStore() method. However, as it is a record type class, only storing dumb values, it made perfect sense to treat it as a pass-by-value object. The same goes for the String class.

I have arguably added a responsibility to my domain roles, Game and FutureGame, by adding the getId() method to their interfaces, and thus added a responsibility to create and handle (remote) object identities. One may argue that remote aspects thereby sneak into the domain abstractions, an aspect that does not belong there. However, the premise for this argument is basically that domain objects should not be aware that they are working in a distributed environment, and from a software architectural viewpoint, this is simply an incorrect premise. Remote access to an object has profound implications for the architecture of a system, as quality attributes like performance, security, and availability have to be analyzed and handled, as argued in Chapter Basic Concepts. The getId() method is thus just one small hint of that.

The responsibility to assign the unique ID must be given to some role in the system. I gave it to the domain role itself in the discussion above. However, there are other options.

• Often the domain itself has a notion of objectId, typically through some ‘catalog’, ‘invoicing’, ‘inventory’, or ‘orderService’ role. For instance, invoices often have a numerical sequence number assigned when created, which can serve as the objectId. Or objects are stored and handled in a database, and as the database will maintain a unique id for any tuple or document (often a primary key), it is obvious to just reuse that, or derive an id from it.

• As the Invoker role is the one that calls any create() method, one may let the Invoker handle unique objectId creation and maintenance itself, and thus remove this responsibility completely from the servants.

Rewrite the GameLobby system in such a way that the assignment and maintenance of unique object identities for created servant objects is the responsibility of the Invoker. That is, avoid introducing the getId() methods in the FutureGame and Game interfaces.

The Name Service implementation in my GameLobby system is simple and has some limitations. I use an in-memory Map based data structure, and thus it will not survive a server restart. This is of course not feasible for a large business system. Another implication is that our system relies on a single server, a single point of failure: "If that single server fails, the system fails." This is of course also not feasible for a large system. Tried and tested solutions exist for both issues, in the form of cache servers and load balancing, so these are not intrinsic liabilities for the Broker pattern. However, they are outside the scope of this book.

A final point worth discussing is what to return to the client to represent the servant object created on the server side. I used simple string typed object ids, but if the role contains many fixed valued attributes, you should consider using a Data Transfer Object (DTO) as an alternative. For instance, our Game has accessors to get the names of the two players through the getPlayerName(int index) method, but as the names of the players are highly unlikely to change, it makes sense to transfer them as part of the getGame() payload. So one suggestion for a DTO is

public class GameDTO {
  public String objectId;
  public String player0Name;
  public String player1Name;
}

Thus, the Invoker's code, that currently only transmits the object id:

} else if (operationName.equals(MarshallingConstant.FUTUREGAME_GET_GAME_METHOD)) {
  FutureGame futureGame = nameService.getFutureGame(objectId);
  Game game = futureGame.getGame();
  String id = game.getId();
  reply = new ReplyObject(HttpServletResponse.SC_OK, gson.toJson(id));
}

is then rewritten into returning a DTO instead:

} else if (operationName.equals(MarshallingConstant.FUTUREGAME_GET_GAME_METHOD)) {
  FutureGame futureGame = nameService.getFutureGame(objectId);
  Game game = futureGame.getGame();
  GameDTO dto = new GameDTO();
  dto.objectId = game.getId();
  dto.player0Name = game.getPlayerName(0);
  dto.player1Name = game.getPlayerName(1);
  reply = new ReplyObject(HttpServletResponse.SC_OK, gson.toJson(dto));
}

This way, the receiving Requestor can create a more complete GameProxy that already stores the player names, and the proxy's getPlayerName() method can then avoid the expensive server call and simply return the local values. This caching-in-the-proxy trick is essential to improve performance in a distributed system, and another benefit of partially coding the Broker pattern yourself instead of relying on e.g. Java RMI. A DTO is actually similar to a resource in REST terminology, which I will discuss in the REST Chapter.
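The caching-in-the-proxy idea can be sketched in a few lines. This is only an illustrative skeleton, not the library's code: the GameDTO mirrors the one above, while CachingGameProxy is a hypothetical name for the proxy that a Requestor might build from the DTO.

```java
// Sketch: a client proxy that caches the fixed-valued attributes it
// received in a GameDTO, so getPlayerName() never hits the network.
class GameDTO {
  public String objectId;
  public String player0Name;
  public String player1Name;
}

class CachingGameProxy {
  private final String objectId;
  private final String[] playerNames;

  // The Requestor-side code creates the proxy directly from the DTO.
  public CachingGameProxy(GameDTO dto) {
    this.objectId = dto.objectId;
    this.playerNames = new String[] { dto.player0Name, dto.player1Name };
  }

  public String getId() { return objectId; }

  // Served from the local cache - no expensive remote call needed.
  public String getPlayerName(int index) {
    return playerNames[index];
  }
}
```

Only mutable or frequently changing attributes would still require a round trip to the server through the Requestor.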

Issue 2: Invoker Cluttering

The three types in our game lobby system have eight methods in total that need a remote implementation using our Broker. While having a switch in the invoker with eight branches is not much, it still highlights the issue: as more and more types must be supported with more and more methods, the invoker's handleRequest() method will just become longer and longer, lowering analyzability and maintainability. Basically, cohesion is low because the up calls for all types are merged together into a single invoker implementation. As an example, the HotCiv project from Flexible, Reliable Software has several types: Game, Unit, City, and Tile, that all have quite a long list of methods, and thus you quickly end up adding comments as a kind of section marker in code that goes on page after page, like:

// === GAME
if (operationName.equals(MarshallingConstants.GAME_GET_PLAYER_IN_TURN)) {
  ...
} else if (operationName.equals(MarshallingConstants.GAME_END_OF_TURN)) {

  [lots of more if clauses removed]

// === UNIT
} else if (operationName.equals(MarshallingConstants.UNIT_GET_OWNER)) {

  [lots of more if clauses removed]

// === CITY
} else if (operationName.equals(MarshallingConstants.CITY_GET_OWNER)) {
  ...

In general, comments have their purpose, but this kind of section marker highlights low analyzability of the code - so let us refactor our code into smaller, more cohesive units. As the invoker code became cluttered because it handles all methods for all types, the solution is straightforward: split it into a set of smaller and more cohesive roles - one for each type. Thus I split it into a GameInvoker, a FutureGameInvoker, and a GameLobbyInvoker, each of which only handles the switch on operationName and the associated servant up call for that particular type. I still need a single entry point, though, the one handleRequest() that is called from the ServerRequestHandler, which then must determine which of the three types the method call is for. This is a kind of root in the system, so I have called this class GameLobbyRootInvoker. Thus the root handleRequest() will look something like

@Override
public String handleRequest(String request) {
  RequestObject requestObject = gson.fromJson(request, RequestObject.class);
  String operationName = requestObject.getOperationName();

  String reply;

  // Identify the invoker to use
  Invoker subInvoker = [find invoker for Game or FutureGame or Lobby]

  // And do the upcall on the subInvoker
  try {
    reply = subInvoker.handleRequest(request);
  } catch (UnknownServantException e) {
    reply = gson.toJson(
      new ReplyObject(
        HttpServletResponse.SC_NOT_FOUND,
        e.getMessage()));
  }

  return reply;
}

This is a simple compositional design: the root invoker decides which of the three sub invokers to delegate the call to, each of which themselves implements the Invoker interface. It is actually a State pattern: based upon the incoming call, the invoker changes state to "handle Game calls", "handle FutureGame calls", etc., and the request is delegated to the relevant state object. The GameLobbyInvoker sub invoker handles all methods for GameLobby, etc., making each of these classes smaller and cohesive. The final piece is the algorithm to select which of the invokers to use. A simple approach, and the one I have selected, is to make the operation name string constants follow a fixed template, as shown in the MarshallingConstant class:

public class MarshallingConstant {

  public static final char SEPARATOR = '_';

  // Type prefixes
  public static final String GAME_LOBBY_PREFIX = "gamelobby";
  public static final String FUTUREGAME_PREFIX = "futuregame";
  public static final String GAME_PREFIX = "game";

  // Method ids for marshalling
  public static final String GAMELOBBY_CREATE_GAME_METHOD =
    GAME_LOBBY_PREFIX + SEPARATOR + "create-game-method";
  public static final String GAMELOBBY_JOIN_GAME_METHOD =
    GAME_LOBBY_PREFIX + SEPARATOR + "join-game-method";

  public static final String FUTUREGAME_GET_JOIN_TOKEN_METHOD =
    FUTUREGAME_PREFIX + SEPARATOR + "get-join-token-method";
  public static final String FUTUREGAME_IS_AVAILABLE_METHOD =
    FUTUREGAME_PREFIX + SEPARATOR + "is-available-method";
  [...]

Each method name string begins with the type name, followed by the method name. Then the sub invoker can be found by a simple lookup based upon the prefix string:

@Override
public String handleRequest(String request) {
  RequestObject requestObject = gson.fromJson(request, RequestObject.class);
  String operationName = requestObject.getOperationName();

  String reply;

  // Identify the invoker to use
  String type = operationName.substring(0,
    operationName.indexOf(MarshallingConstant.SEPARATOR));
  Invoker subInvoker = invokerMap.get(type);
  [...]

And the three invokers are then stored in a Map data structure that maps the prefix string to the actual invoker:

public GameLobbyRootInvoker(GameLobby lobby) {
  this.lobby = lobby;
  gson = new Gson();

  nameService = new InMemoryNameService();
  invokerMap = new HashMap<>();

  // Create an invoker for each handled type/class
  // and put them in a map, binding them to the
  // operationName prefixes
  Invoker gameLobbyInvoker =
    new GameLobbyInvoker(lobby, nameService, gson);
  invokerMap.put(MarshallingConstant.GAME_LOBBY_PREFIX, gameLobbyInvoker);

  Invoker futureGameInvoker = new FutureGameInvoker(nameService, gson);
  invokerMap.put(MarshallingConstant.FUTUREGAME_PREFIX, futureGameInvoker);

  Invoker gameInvoker = new GameInvoker(nameService, gson);
  invokerMap.put(MarshallingConstant.GAME_PREFIX, gameInvoker);
}

5.5 Summary of Key Concepts

Server Created Objects

The discussion above allows me to express the process of handling server created objects more abstractly.

Transferring Server Created Objects

Consider a remote method ClassB create() in ClassA, that is, a method that creates new instances of ClassB. To transfer a reference to an object created on the server side, you must follow this template:

• Make the ClassB Servant object generate a unique ID upon creation (typically in the constructor using id = UUID.randomUUID().toString();, or by the domain/database providing one), and provide an accessor method for it, like getId(). Often, it does make sense to include the getId() method in the interface, as the ClientProxy object also needs the ID when calling the Requestor.
• Once a servant object is created, it must be stored in a name service using the unique id as key.
• In the Invoker implementation of ClassA.create(), use a String as return type marshalling format, and just transfer the unique object id back to the client.
• On the client side, in the ClassAProxy, create an instance of the ClassBClientProxy, store the transferred unique id in the proxy object, and return that to the caller.
• Client code can now communicate with the ClassB servant object using the returned client proxy object.
• When the server's Invoker receives a method call on some created object, it must use the provided objectId to fetch the servant object from the name service, and call the appropriate method on it.
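The first step of the template can be sketched as a tiny servant skeleton. This is only an illustration of the id handling; the class name follows the chapter, but all game behavior is omitted.

```java
import java.util.UUID;

// Sketch: a servant that assigns itself a unique object id on creation,
// as the template's first step suggests, and exposes it via getId().
class FutureGameServant {
  private final String id = UUID.randomUUID().toString();

  public String getId() { return id; }
}
```

Each created servant thus carries a distinct id that the Invoker can use as key in the name service.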


The UML sequence diagrams below show the server side and the client side of this algorithm, respectively.

Transferring Server Created Objects - Server side

Transferring Server Created Objects - Client side

In the case of just returning object references, a simplified process applies:

Transferring Server Objects


Consider a remote method ClassB getB() in ClassA, that is, a method that returns references to instances of ClassB. To transfer a reference to an object created on the server side, you must follow this template:

• In the Invoker implementation of ClassA.getB(), retrieve the objectId of the ClassB instance, use a String as return type marshalling format, and just transfer the unique object id back to the client.

The processes also introduce yet another responsibility of the invoker, and I therefore have to enhance the role descriptions of Invoker and Requestor.

Invoker

• Performs demarshalling of incoming byte array.
• Determines servant object, methods, and arguments (some of which may be object IDs, in which case the servant reference must be fetched from the Name Service), and calls the given method in the identified Servant object.
• Performs marshalling of the return value from the Servant object into a reply byte array, or, in case of server side exceptions or other failure conditions, returns error replies allowing the Requestor to throw appropriate exceptions.
• When servants create new objects, stores their IDs in the Name Service and returns their ID instead.

Requestor

• Performs marshalling of object identity, method name and arguments into a byte array.
• Invokes the ClientRequestHandler's send() method.
• Demarshalls returned byte array into return value(s). If the returned value is an object ID of a server created object, creates a ClientProxy with the given object ID and returns that.
• Creates client side exceptions in case of failures detected at the server side or during network transmission.

And finally we add the Name Service

• A server side storage that maps objectIds to objects.
• Allows adding, fetching, and deleting entries in the storage.
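The Name Service role above can be sketched with a plain map. This is only an illustrative stand-in for the chapter's InMemoryNameService; the class and method names here are my own.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the Name Service role: a server side storage mapping
// objectIds to servant objects, supporting add, fetch, and delete.
class SimpleNameService {
  private final Map<String, Object> storage = new HashMap<>();

  public void add(String objectId, Object servant) {
    storage.put(objectId, servant);
  }

  public Object fetch(String objectId) {
    return storage.get(objectId);
  }

  public void delete(String objectId) {
    storage.remove(objectId);
  }
}
```

As discussed earlier, such an in-memory implementation will not survive a server restart; a production system would back it by a database or cache server.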


Thus, the Broker UML diagram in the Broker Part One Chapter has to include a Name Service role and an association between the Invoker and the Name Service role.

Update the Broker UML diagram to include the Name Service role.

Multi Type Dispatching

The discussion above allows me to express the process of handling multi type invokers more abstractly.

Multi Type Dispatching

Consider an Invoker that must handle method dispatching for a large set of roles. To avoid a blob or god class Invoker implementation, you can follow this template:

• Ensure your operationId follows a mangling scheme that allows extracting the role name. A typical way is to construct a String type operationId that concatenates the type name and the method name, with a unique separator in between. Example: "FutureGame_getToken".
• Construct SubInvokers for each servant role. A SubInvoker is role specific and only handles dispatching of methods for that particular role. The SubInvoker implements the Invoker interface.
• Develop a RootInvoker which constructs a (key, value) map that maps from role names (key) to sub invoker references (value). Example: if you look up key "FutureGame" you will get the sub invoker specific to the FutureGameServant's methods.
• Associate the RootInvoker with the ServerRequestHandler. In its handleRequest() calls, it demangles the incoming operationId to get the role name, uses it to look up the associated SubInvoker, and finally delegates to its handleRequest() method.

5.6 Review Questions

Outline the issues involved when there are server side methods that create objects to be returned to the client. Why is it that you just can't marshall the newly created object itself?


Outline the process involved in establishing a relation between a ClientProxy on the client side with the correct Servant object on the server side, the Transferring Server Created Objects process. Outline the responsibilities of the Name Service in this process.

Outline the issues that arise in the Invoker code when dealing with many servant types, each with potentially many methods. Explain the solution, and how it improves maintainability of the code.


6. HTTP

6.1 Learning Objectives

In the previous chapter, I unfolded the Broker pattern as well as an implementation of it for the TeleMed and GameLobby systems. In the IPC layer, I used Java sockets, which is Java's fundamental network implementation. In this chapter, I will focus on another IPC approach that has achieved enormous success, namely the hypertext transfer protocol (HTTP). As the main engine of the world wide web, it is popular and is supported by numerous high quality frameworks that I can reuse in implementing a strong IPC layer. I will start by outlining the basic concepts in HTTP, and once this foundation has been established, move on to discuss how Broker's IPC layer can easily be implemented using it.

6.2 A HTTP Walk-through

HTTP is the application protocol that powers the World Wide Web (WWW). With the huge success of WWW for human information search and retrieval, a lot of high quality software frameworks for building WWW client and server software became available. In addition, the HTTP protocol has a number of interesting properties that allow scaling, that is, handling large workloads and traffic. Combined, this makes HTTP an ideal platform for building distributed computing systems.

But, let us first take a quick look at the HTTP protocol. The presentation here is by no means comprehensive, but should provide enough information for understanding it, our implementations of the Broker pattern on top of it, and the next chapter on REST.

HTTP is a standard for the request-reply protocol between web browsers (clients) and web servers in the standard client-server architecture. The client sends requests to the web server for a web page and the server sends a response back, typically an HTML formatted page.

Message Format

All HTTP messages (requests and responses) are simple plain-text messages. A request has the following format:


• A request line, that specifies the HTTP verb, the wanted resource, and the version of the HTTP protocol.
• Request header fields, which are key-value pairs defined by the HTTP protocol.
• An empty line.
• An optional message body.

As an example, if you request your browser to fetch the URL

http://www.baerbak.com/contact.html

the following text message will be sent by the browser to my book's web server, whose address on the internet is "www.baerbak.com":

GET /contact.html HTTP/1.1
Host: www.baerbak.com
Accept: text/html

Try it out, for instance using curl, see Sidebar: Curl. If you do, then you have requested my book’s web server to return the contents of the “contact.html” page.

Curl

As HTTP is a plain-text format, you can actually quite easily communicate with web servers right from the shell. The tool curl is handy. Try the following command, which will output both the request message as well as the response message (-v):

curl -v www.baerbak.com/contact.html

GET is one of the HTTP verbs, discussed in the next section, and this one simply requests the resource mentioned: the page "contact.html". The headers "Host" and "Accept" define the host name of the web server (www.baerbak.com) and what return types (text/html) are accepted, respectively.

In the reply, the web server will return the requested resource in a response message, that is similar to the request format:

• A single status line which includes the status code and status message.
• Response header fields, again a set of well-defined key-value pairs, on multiple lines.
• An empty line.
• The (optional) message body, that is the contents of the requested resource.

The response to the above request is shown here:

HTTP/1.1 200 OK
Date: Mon, 19 Jun 2017 09:58:25 GMT
Server: Apache/2.2.17 (FreeBSD) mod_ssl/2.2.17 OpenSSL/1.0.0c ...
Last-Modified: Mon, 13 Apr 2015 12:34:07 GMT
ETag: "b46bce-676-5139a547e2dc0"
Accept-Ranges: bytes
Content-Length: 1654
Vary: Accept-Encoding,User-Agent
Content-Type: text/html

Flexible, Reliable Software
...

The status line returned the 200 HTTP status code which is "OK" — everything works as expected. HTTP defines a rich set of status codes that tell clients specifically what went wrong in case of errors, and also indicate how to remedy the situation. They are discussed in more detail in Section Status codes. Next follows a series of header fields; one notable example is "Content-Type" that states the format of the returned resource: "text/html", which tells the receiver that it is text formatted in the HyperText Markup Language (HTML). Finally, the response contents follow after the empty line in the returned message. As stated by the "Content-Type", the returned contents is formatted using HTML, which allows a web browser to display the contents in a visually and typographically correct way.

Uniform Resource Identifier / URI

Clients and servers communicate about resources, like our web page in the example above. A resource is information that can be named, which calls for some uniform way of defining a name. Uniform Resource Identifier (URI) is a string based schema for how to name resources. The full schema is


scheme:[//[user[:password]@]host[:port]][/path][?query][#fragment]

However, I will only use a simple subset of this schema, namely

scheme:[//host[:port]][/path]

Uniform Resource Locators (URL) follow the URI schema and identify both the resource's location as well as the means for retrieving it. One such example resource is the 'contact' homepage retrieved above at http://www.baerbak.com/contact.html. It states that the resource "contact.html" can be found on the web server "www.baerbak.com" and the means to retrieve it is the HTTP protocol.
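Java's standard library can take a URL apart according to this schema; a small sketch using the standard java.net.URI class:

```java
import java.net.URI;

// Decompose a URL into the scheme, host, and path parts of the
// URI schema shown above.
class UriParts {
  public static String describe(String url) {
    URI uri = URI.create(url);
    return uri.getScheme() + " | " + uri.getHost() + " | " + uri.getPath();
  }
}
```

For the contact page above, the scheme is "http", the host "www.baerbak.com", and the path "/contact.html".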

HTTP Verbs

The HTTP version 1.1 protocol includes a number of verbs. The most widely used when browsing is the GET verb, which simply means "get that resource for me", but there are others. Those relevant for our discussion are:

Verb     Action                                                        Idempotent?
GET      request representation of a resource (URI)                    Yes
POST     accept enclosed entity as new subordinate of resource (URI)   No
PUT      request enclosed entity to be stored under URI                Yes
DELETE   request deletion of resource (URI)                            Yes

In contrast to the GET verb which simply retrieves existing information from a web server, the three other verbs allow clients to create (POST), modify (PUT), and delete (DELETE) information on a web server. Of course, such modification of contents on the web server is only possible if the given web server accepts these verbs and the request is well formed.

All verbs, except POST, are idempotent, which means that executing them several times has the same effect as executing them only once. If you send one or ten identical PUT requests to the server, the outcome is the same, namely that the named resource is updated. POST is different, as sending two POST requests will create two resources, etc.

In the database community, you often speak of the CRUD operations: Create, Read, Update, and Delete; the four basic operations that a database supports. If you compare the HTTP verbs with CRUD, they are identical except for the naming:

• Create: is the POST verb
• Read: is the GET verb
• Update: is the PUT verb
• Delete: is the DELETE verb

Thus, a web server supporting all four HTTP verbs is fundamentally a database of resources that can be created, read, updated, and deleted.
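The idempotency difference can be illustrated with a toy in-memory resource store (all names here are my own, purely for illustration): repeating a PUT to the same id leaves a single resource, while each repeated POST mints a fresh id and thus a fresh resource.

```java
import java.util.HashMap;
import java.util.Map;

// Toy resource store: POST is server-named (not idempotent),
// PUT is client-named (idempotent).
class ToyStore {
  private final Map<String, String> resources = new HashMap<>();
  private int nextId = 100;

  // POST: the server assigns the id; returns the new resource's id.
  public String post(String contents) {
    String id = "" + nextId++;
    resources.put(id, contents);
    return id;
  }

  // PUT: the client names the resource; repeated calls overwrite
  // the same entry.
  public void put(String id, String contents) {
    resources.put(id, contents);
  }

  public int size() { return resources.size(); }
}
```

Sending the same POST twice grows the store by two entries, while sending the same PUT twice grows it by at most one.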

HTTP Status Codes

The HTTP protocol also defines a well-defined vocabulary of status codes that describe outcomes of any operation, whether they succeed or fail. The status codes are numerical values, like 200, and they are grouped into response classes, so all responses in range 200-299 are 'Success' codes, all in range 500-599 are server errors, etc. Some codes are much more used than others, so I will focus on these:

• 200 OK: Standard response for successful HTTP requests. Typically GET requests that were served correctly return 200 OK.
• 201 Created: The request has been fulfilled, resulting in the creation of a new resource. This is the status code for a successful POST request.
• 404 Not Found: The requested resource could not be found but may be available in the future. For instance, if you GET a resource that is not present on the server, you get a 404.
• 501 Not Implemented: The server either does not recognize the request method, or it lacks the ability to fulfill the request.

You can find the full list of status codes on Wikipedia.
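Since the response classes are simply the 'hundreds' of the code, a client can classify any status code with integer division; a small sketch:

```java
// Map a numeric HTTP status code to its response class by
// looking at the hundreds digit.
class StatusClass {
  public static String of(int code) {
    switch (code / 100) {
      case 1: return "Informational";
      case 2: return "Success";
      case 3: return "Redirection";
      case 4: return "Client Error";
      case 5: return "Server Error";
      default: return "Unknown";
    }
  }
}
```

This coarse classification is often all a client needs to decide whether to retry, report an error, or proceed.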

Media type

The client and server need to agree on the data format of the contents exchanged between them. Sending JSON to a server that only understands XML of course leads to trouble. Therefore a central internet authority, the Internet Assigned Numbers Authority (IANA), maintains a standard of possible media types, that is, standard formats. The media type is essential for a web browser in order to render read data correctly. For instance, the media type text/html states that the returned contents is HTML, which the browser can then render correctly; if, on the other hand, the media type is text/plain, it is just plain text that needs no further processing. Similarly, there are media types for images like image/jpeg or image/gif. Generally the format is in two parts that state type/subtype.


As our discussion is about programmatic exchange of data between server and client, the type application is most interesting, and the media types for XML: application/xml and JSON: application/json are the ones you will encounter. Media types were formerly known as MIME types or content types.

For a client and server to express the media type of the contents of a GET, PUT or POST message, you use the HTTP headers. To request that contents is formatted according to a specific media type, you set the 'Accept' header in the request message, like

GET /contact.html HTTP/1.1
Host: www.baerbak.com
Accept: text/html

This states: "Send me the contents in HTML". In the response message, the server will provide information about the media type of the returned contents, using the 'Content-Type' header:

HTTP/1.1 200 OK
Date: Tue, 15 May 2018 07:33:46 GMT
...
Content-Type: text/html

In the ideal world, a server will be able to return contents in a variety of formats, like application/xml and application/json, depending upon what the client requests. However, this of course requires extra programming effort, and is all too often avoided. If a server cannot return contents in the requested format, the HTTP status code 406 Not Acceptable should be returned. Again, this is the ideal situation; all too often a server just returns data in the format it knows, so check the 'Content-Type' header in the client to be sure to parse the contents correctly.
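With the standard java.net.http API from Java 11, setting the 'Accept' header programmatically looks like this; building the request is a purely local operation, and no traffic is sent until an HttpClient is actually used.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Build a GET request that asks for JSON contents via the
// 'Accept' header, using Java 11's standard HTTP client API.
class AcceptDemo {
  public static HttpRequest jsonRequest(String url) {
    return HttpRequest.newBuilder(URI.create(url))
        .header("Accept", "application/json")
        .GET()
        .build();
  }
}
```

The client would then pass this request to an HttpClient and inspect the response's 'Content-Type' header before parsing.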

6.3 A HTTP Case Study: PasteBin

It is easy to try out GET on a normal web server, but to showcase the other verbs, I will show how to write a rudimentary web server that accepts POST and GET messages. As client, I will simply use 'curl'. Our case, PasteBin, is a simple online "database"/clipboard that you can copy a small chunk of text to, using POST, for later retrieval, using GET.


POST to create a resource

Our pastebin web server will accept POST messages encoded in JSON. The JSON format must have a key, contents, whose value is a string representing the 'clip' that we want to store on the pastebin server. Or in HTTP terms, we want to create a resource that names our information. Furthermore, our server accepts only clips on the resource path /bin, which represents bins of JSON contents. Let us hop into it; you can find the source code in the FRDS.Broker Library (https://bitbucket.org/henrikbaerbak/broker) in the pastebin folder. Start the web server using Gradle:

gradle pastebin

This will start the pastebin server on localhost:4567/bin. Next I want to store the text 'Horse' on the pastebin server, so I send it a POST message using 'curl':

curl -i -X POST -d '{"contents":"Horse"}' localhost:4567/bin
HTTP/1.1 201 Created
Date: Mon, 07 May 2018 09:13:58 GMT
Location: localhost:4567/bin/100
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(9.4.6.v20170531)

{"contents":"Horse"}

OK, let us take a look at each aspect in turn. The first line is the curl call which states: Send a POST message (-X POST) to the server at URI localhost:4567/bin, with a message body (-d) that contains '{"contents":"Horse"}', and let me see the full HTTP reply message (-i). So, 'curl' will print the message response which follows the reply format. The status code is 201 Created, telling me that the resource was stored/created. Next follow the header fields. The most important one when you POST messages is the Location key/value pair:

Location: localhost:4567/bin/100

As you create contents when POSTing, the server must tell you where the information is stored ("named"), and this is the Location header field, which here states that the URL for all future references to this resource is localhost:4567/bin/100. Finally, a web server may actually change the provided information, so it will transmit the named resource back to you. Here, however, no changes have been made, so it just returned the original JSON object as the message body.

GETing the resource

Now, to retrieve our clip from the server, I use the GET verb on the resource URI that was provided in the Location field:

curl -i localhost:4567/bin/100
HTTP/1.1 200 OK
Date: Mon, 07 May 2018 09:15:47 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(9.4.6.v20170531)

{"contents":"Horse"}

The status code is 200 OK, and the JSON object is returned in the message body. If I try to access a non-existing resource, here bin 999:

curl -i localhost:4567/bin/999
HTTP/1.1 404 Not Found
Date: Mon, 07 May 2018 09:16:27 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(9.4.6.v20170531)

null

I get a 404 Not Found, and the message body contains ‘null’. Try to store different text clips in a number of bins and retrieve them again.

Implementing the Server

Implementing a quality web server is not trivial. However, one of the big benefits of the success of the world wide web is that there are numerous open source web server frameworks that we can use.


I have chosen Spark (http://sparkjava.com/) as it has a shallow learning curve, and you need to write only little code to handle the HTTP verbs. It uses static imports and the lambda functions introduced in Java 8. The basic building block of Spark is a set of routes which represent named resources. Static methods define the HTTP verb, and the parameters define the URI and the lambda function to compute a response, like

get("/somepath", (request, response) -> {
  // Return something
});

post("/somepath", (request, response) -> {
  // Create something
});

Returning to our PasteBin, POST messages are handled by the following piece of code:

post("/bin", (req, res) -> {
  // Convert from JSON into object format
  Bin q = gson.fromJson(req.body(), Bin.class);

  // Create a new resource ID
  String idAsString = "" + id++;

  // Store bin in the database
  db.put(idAsString, q);

  // 201 Created
  res.status(HttpServletResponse.SC_CREATED);
  // Set return type
  res.type("application/json");
  // Location = URL of created resource
  res.header("Location", req.host() + "/bin/" + idAsString);

  // Return the constructed bin
  return gson.toJson(q);
});

The declaration states that a handler for POST messages is defined on the URI path "/bin", followed by the lambda function defining the handler. The first line of that method converts the request's body (req.body()) from JSON into the internal class Bin using GSON. (Bin is just a class that contains a single String field named 'contents', and thus matches the {"contents":"Horse"} message I used earlier.) The next two lines just create a resource id and store the object in a "database" called 'db'. The result object, res, is modified so the status code is 201, and the Location field is assigned. Finally, the constructed 'bin' is returned, which becomes the message body.

GET messages are handled by the following piece of code:

get("/bin/:id", (req, res) -> {
  // Extract the bin id from the request
  String id = req.params(":id");

  // Set return type
  res.type("application/json");

  // Lookup, and return if found
  Bin bin = db.get(id);
  if (bin != null) {
    return gson.toJson(bin);
  }

  // Otherwise, return error
  res.status(HttpServletResponse.SC_NOT_FOUND);
  return gson.toJson(null);
});

The algorithm is similar to the POST one. One important feature of Spark is that path elements starting with a colon, like :id above, are available through the params() method. So, if I send a GET request on /bin/100, then req.params(":id") will return the string "100". Note also that the 200 OK status code is not assigned in the code fragment above, as this is the default value assigned by Spark.

Extend the PasteBin system so it also supports PUT and DELETE requests.
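What Spark's ":id" route parameters do can be sketched by hand; this is purely illustrative (the class and method names are my own), as Spark's real route matching is far more general.

```java
// Sketch: match a concrete path against a route template like
// "/bin/:id" and pick out the segment bound to the named parameter.
class RouteMatcher {
  // Return the value bound to the named parameter if the path
  // matches the route template, or null if it does not match.
  public static String param(String route, String path, String name) {
    String[] r = route.split("/");
    String[] p = path.split("/");
    if (r.length != p.length) return null;
    String value = null;
    for (int i = 0; i < r.length; i++) {
      if (r[i].equals(name)) {
        value = p[i];            // named parameter: capture the segment
      } else if (!r[i].equals(p[i])) {
        return null;             // literal segment mismatch: no match
      }
    }
    return value;
  }
}
```

For a GET on /bin/100 against the route /bin/:id, the ":id" parameter binds to "100", just as req.params(":id") does in Spark.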

6.4 Broker using HTTP

So - we have HTTP: a strong request-reply protocol that enables us to transfer any payload between a client and a server, backed by high quality open source web server implementations. This is an ideal situation to base our Broker pattern on.

Remember from our previous chapter that we can use any network transport layer as IPC by implementing proper pairs of ClientRequestHandler and ServerRequestHandler. Their responsibility is to transport payloads of bytes, the marshalled requests and replies, between the client side and the server side. Thus, a basic usage of the WWW infrastructure is simply to program a ClientRequestHandler and a ServerRequestHandler that use HTTP as the transport layer. This usage is termed URI Tunneling, as we use a single URI as a tunnel for transporting our messages. You can find the source code in package frds.broker.ipc.http in the broker project in the FRDS.Broker library[3].

Server Side

A method call may create objects or change the state of objects on the server side, and thus the POST verb is the most appropriate to use. Indeed, I will be lazy and use POST for all methods irrespective of whether they are accessors or mutators - which is quite normal in URI tunneling, though certainly not consistent with the envisioned use of the HTTP verbs. Finally, the URI path is not that relevant, as we just use HTTP as a tunnel for exchanging JSON request and reply objects - it does not name any particular resource, just a tunnel. In TeleMed, this tunnel is on the path "/bp" (short for blood pressure).

The server request handler is responsible for accepting the POST requests, handing them over to the invoker, and returning the reply. The implementation below again uses the Spark framework. The path is assigned elsewhere to the 'tunnelRoute' instance variable as "/bp/". (Fragment from broker package: UriTunnelServerRequestHandler.java):

```java
// POST is for all incoming requests
post(tunnelRoute, (req, res) -> {
  String marshalledRequest = req.body();

  String reply = invoker.handleRequest(marshalledRequest);

  // The reply is opaque - so we have no real chance of setting a proper
  // status code.
  res.status(HttpServletResponse.SC_OK);
  // Hmm, we also do not know the actual marshalling format but
  // just know it is textual
  res.type(MimeMediaType.TEXT_PLAIN);

  return reply;
});
```

[3] https://bitbucket.com/henrikbaerbak/broker

Note that I do not need to set the Location field when just using a tunnel: our payload contains all object ids, so there is no need.
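To see the kind of work that invoker.handleRequest() hides, here is a stdlib-only sketch of an Invoker-style dispatch. The "operationName|argument" request format and the registered operations are simplified stand-ins invented for this example; they are not the FRDS.Broker marshalling format.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/** Toy Invoker: routes a marshalled request string to a servant call.
    The "op|arg" wire format is invented for illustration only. */
public class ToyInvoker {
  private final Map<String, Function<String, String>> dispatch = new HashMap<>();

  public ToyInvoker() {
    // Register the operations this toy server understands
    dispatch.put("echo", arg -> arg);
    dispatch.put("shout", arg -> arg.toUpperCase());
  }

  public String handleRequest(String marshalledRequest) {
    String[] parts = marshalledRequest.split("\\|", 2);
    if (parts.length < 2) {
      return "ERROR|malformed request";
    }
    Function<String, String> op = dispatch.get(parts[0]);
    if (op == null) {
      return "ERROR|unknown operation: " + parts[0];
    }
    return "OK|" + op.apply(parts[1]);
  }
}
```

The point mirrors the tunnel design above: the request handler only moves opaque strings, while all interpretation happens in the invoker.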

Client Side

In PasteBin, I just used 'curl' on the command line to interact with the web server. However, in TeleMed I of course need a programmatic way to send requests to the server. Here, I am using the UniRest[4] library, which allows HTTP requests to be constructed using a fluent API. So the sendToServerAndAwaitReply() method will look like this: (Fragment from broker package: UriTunnelClientRequestHandler.java)

```java
 1  @Override
 2  public String sendToServerAndAwaitReply(String request) {
 3    HttpResponse<String> reply;
 4
 5    // All calls are URI tunneled through a POST message
 6    try {
 7      reply = Unirest.post(baseURL + path)
 8        .header("Accept", MimeMediaType.TEXT_PLAIN)
 9        .header("Content-Type", MimeMediaType.TEXT_PLAIN)
10        .body(request).asString();
11    } catch (UnirestException e) {
12      throw new IPCException("UniRest POST request failed on request="
13          + request, e);
14    }
15    return reply.getBody();
16  }
```

The core here is lines 7-10, which form a single fluent statement. It tells UniRest to issue a POST request, UniRest.post(), on the given path. The baseURL and path variables are set elsewhere, and in the concrete case the concatenation of these variables results in the string "http://localhost:4567/bp", so the request is sent to a locally deployed server on the blood pressure path "/bp", as we defined in the previous section. Lines 8 and 9 define header key/value pairs, and the final method call body(request).asString() in line 10 first defines the body of the request message (body()), and then states that the returned reply from the server must be interpreted as a String (asString()), as we do not know the actual marshalling format used - requests and replies are just opaque byte arrays/strings. The last part of the method extracts the actual marshalled string from the reply and returns it to the calling Requestor.

[4] http://unirest.io/
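If you would rather avoid the UniRest dependency, the JDK's own client (java.net.http, Java 11+) can play the same ClientRequestHandler role. A sketch under the same assumptions as the fragment above (a tunnel at baseURL + path, text/plain media type); the actual send step is shown only as a comment since it needs a running server:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class JdkTunnelClient {
  /** Build the tunneled POST request; sending it is a separate step. */
  public static HttpRequest buildRequest(String baseURL, String path,
                                         String marshalledRequest) {
    return HttpRequest.newBuilder(URI.create(baseURL + path))
        .header("Accept", "text/plain")
        .header("Content-Type", "text/plain")
        .POST(HttpRequest.BodyPublishers.ofString(marshalledRequest))
        .build();
  }

  // With a server running, the reply body would be fetched like this:
  //   HttpClient client = HttpClient.newHttpClient();
  //   HttpResponse<String> reply =
  //       client.send(request, HttpResponse.BodyHandlers.ofString());
  //   return reply.body();
}
```

The shape is the same as the UniRest version: verb, headers, opaque string body, opaque string reply.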

6.5 Summary of Key Concepts

The Hyper Text Transfer Protocol (HTTP) is an application level protocol for transferring data between web browsers and web servers. It is a text based format using a fixed, line-oriented structure for both request and response messages. A request message states the HTTP verb (GET, POST, PUT, DELETE, and others), header fields, and optionally a message body containing data. POST and PUT upload data in the message body to the web server (they are Create and Update messages, respectively), while GET and DELETE do not (they are Read and Delete messages). The response contains a status code from a rich vocabulary of codes that details the outcome of the server operation.

HTTP and associated quality web frameworks (like Spark, UniRest, and many others) form an excellent foundation for implementing the Broker pattern. A widely used technique is URI Tunneling, which simply handles all remote method calls on a single tunnel path, all handled by POST messages that contain the full method call payload in the message body.
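The idempotency question among the review questions below can be explored with a tiny in-memory model: PUT writes a full representation to a client-chosen id, so repeating it changes nothing, while POST allocates a fresh id on every call. The class is purely illustrative, not code from the book.

```java
import java.util.HashMap;
import java.util.Map;

/** Tiny model of why PUT is idempotent and POST is not. Illustrative only. */
public class IdempotencyDemo {
  private final Map<String, String> resources = new HashMap<>();
  private int nextId = 0;

  /** POST: each repeat creates yet another resource. */
  public String post(String representation) {
    String id = "/bin/" + (nextId++);
    resources.put(id, representation);
    return id;
  }

  /** PUT: repeating the same request leaves the same state. */
  public void put(String id, String representation) {
    resources.put(id, representation);
  }

  public int size() {
    return resources.size();
  }
}
```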

6.6 Review Questions

• Enumerate and explain the four HTTP verbs. Which ones are idempotent and which are not? What does idempotent mean?
• Outline and explain the format of HTTP request and reply messages.
• Explain the URI scheme for naming resources. Give some examples from ordinary web browsing.
• What is a media type and why is it important to state it in requests and replies? Name some media types. Which header keys are used to set/define the media type?
• Explain how the HTTP protocol can be used as the Broker IPC implementation.
• Explain why both the creation and reading of TeleMed blood pressure measurements use POST messages in my implementation, and explain why this does not align with the intention of the HTTP verbs.


7. REST

7.1 Learning Objectives

The learning objective of this chapter is the fundamental concepts and principles of the Representational State Transfer (REST) architectural style. This style of programming distributed systems has a number of attractive properties when it comes to designing and implementing systems that must be scalable, that is, support many clients making requests at the same time.

7.2 The Demise of Broker Architectures

If you look broadly at how people architect distributed systems today, you will notice that REST based architectures are used and discussed a lot, while Broker based architectures are not. Then, you may wonder: why learn the Broker pattern at all?

My personal view (and I stress that it is an opinion, not something I can claim based upon scientific studies) is that Broker is a strong pattern to base distributed systems upon, if you do it correctly. And, in the heydays of Broker in the 1990s, most developers got it wrong; the resulting systems suffered from a lot of issues with performance, maintainability, and availability, and the natural conclusion was that the pattern was broken.

I attribute this misunderstanding to the fact that early proponents of the Broker pattern stated that transparency was an important feature, in the sense that you were now able to code a method call obj.doSomething() and it was transparent whether obj was a local or a remote object. Being able to program distributed systems using the object oriented paradigm was a major improvement compared to programming in terms of low-level send() and receive() calls, as argued in Chapter Basic Concepts - however - the ideal of transparency does not hold! As argued several times already, a remote object is architecturally highly different from a local object. Thus, early broker based systems became slow, insecure, and volatile, because architects and programmers were not sufficiently focused on keeping a strong borderline between remote and local objects: server objects became highly coupled to client side objects, suddenly the server invokes methods on thousands of client side objects, distributed calls were not sufficiently guarded against network or server failures, etc.

Another issue leading to abandoning Broker, again in my opinion, was that it was supported by frameworks such as Corba, Java RMI, and .NET remoting. While lowering the implementation burden, which is good, this also tied developers into fixed decisions regarding marshalling formats and IPC protocols, which leaves less room for architects to choose the right technique for the given challenges.

So, the issue was not really the Broker pattern per se; it was the idea of transparency and fixed architectural decisions that led to ill designed systems. And while these systems showed weaknesses, the HTTP protocol based world wide web showed another and seemingly much more robust approach to architecting distributed systems. No wonder software developers quickly moved there instead. Representational State Transfer (REST), described in this chapter, is based upon the ideas of the world wide web and defines an architectural pattern that utilizes the key ideas of HTTP and WWW in a programming context. One key aspect of REST is the clear separation of responsibilities between the client side and the server side, that is, the exact opposite of transparency. And, as I have done with my Broker pattern, REST is a clear client-server system: servers do not call clients.

A second issue was that an object by nature stores state: instance variables hold values that can be set and retrieved, defining the state of that object. However, it also follows that once a server creates an object, it only exists on that particular server. This has two problematic implications in a distributed setting: Firstly, if the server crashes or has to be stopped (for instance, to install a security update in the operating system), all state vanishes, which of course is not acceptable. Secondly, if the number of clients being served by the server grows, the server becomes a bottleneck, until the point where it can no longer keep up and becomes overloaded. An overloaded server is at best unbearably slow and at worst simply crashes. To avoid this, you have to scale out and introduce more server machines and load balancing, but then object state cannot be tied to a particular server - any server in the set has to be able to recreate any object's state. Your servers have to be stateless. My TeleMed system's server was stateless, as any uploaded measurement was immediately stored in the database, and retrievals were not based upon objects held in the server's memory. So, an important aspect of creating server programs is to make them stateless, a key idea that was not considered in the original formulations of the Broker pattern.

Thus, my final note on Broker is that it is a strong architectural pattern for architecting distributed systems if and only if you as an architect keep a clear separation of which objects are local and which are remote, and ensure that your server is stateless. If you do that, then Broker has a programming model that is in all respects much nicer than the REST model, as you will experience in this chapter.
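The stateless-server point can be made concrete with a toy sketch: server instances that recompute every answer from a shared store are interchangeable behind a load balancer, because no answer depends on which instance served an earlier request. All names here are invented for illustration; the shared store stands in for the database.

```java
import java.util.ArrayList;
import java.util.List;

/** Toy illustration of a stateless server: every answer is recomputed
    from shared storage, so any instance can serve any request. */
public class StatelessDemo {
  /** Stands in for the database shared by all server instances. */
  public static class SharedStore {
    public final List<Double> systolicValues = new ArrayList<>();
  }

  public static class StatelessServer {
    private final SharedStore store;
    public StatelessServer(SharedStore store) { this.store = store; }

    public void upload(double systolic) {
      store.systolicValues.add(systolic);
    }

    /** No instance fields consulted: the full answer comes from the store. */
    public int countMeasurements() {
      return store.systolicValues.size();
    }
  }
}
```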


7.3 Representational State Transfer (REST)

Fielding and Taylor presented an architectural pattern, Representational State Transfer (REST), in the paper Principled Design of the Modern Web Architecture[1], for the purpose of handling internet-scale systems. Their basic goal was to "keep the scalable hypermedia properties of the World Wide Web". This leads to five central principles of REST:

• The central concept is the resource, which is defined as any information that can be named.
• A resource is identified by a resource identifier.
• You transfer a representation of data in a format matching one of the standard media types.
• All interactions are stateless, i.e. every request must contain all the information necessary for processing it.
• Interactions are between clients and servers; the essential difference between the two is that a client initiates communication by making a request, whereas a server listens for connections and responds to requests in order to supply access to its services.

Most of these aspects have already been treated in the previous chapter on HTTP, and in the introduction I argued for the advantages of a clear separation between client and server objects, and of stateless interactions, as vital for large-scale systems. The REST principles by themselves are pretty abstract, so I will make them more concrete in the following sections.

7.4 Richardson's model for Levels in REST

In the book REST in Practice[2], the authors discuss distributed systems that use the web and properties of the HTTP protocol, and present a model developed by Leonard Richardson to explain their maturity[3]. Here "maturity" is in terms of how many features and principles from the REST style they use. Richardson identifies three levels and numbers them accordingly:

• Level 0: URI Tunneling

[1] Roy Fielding, and Richard N. Taylor: "Principled design of the modern Web architecture", Proceedings of the 22nd international conference on Software engineering, Ireland, 2000.
[2] Jim Webber, Savas Parastatidis, and Ian Robinson: "REST in Practice: Hypermedia and Systems Architecture", O'Reilly Media, 2010.
[3] If you search for the Richardson model on WWW, you may find descriptions in which four levels are mentioned. However, I follow the presentation given in the "REST in Practice" book.


• Level 1: HTTP Verbs
• Level 2: Hypermedia

In Level 0: URI Tunneling systems, HTTP is simply used as a well supported IPC layer, just as I implemented the IPC layer of the Broker pattern using HTTP, with the Spark and UniRest libraries for the implementation. None of the properties of HTTP are used at this level; it is only the IPC. Recall that our TeleMed system only used the POST verb, and all method calls were made to a resource named "/bp" - which thus was not really a resource per se, just a way to identify the path the server and client had to communicate by. Another and more prominent example of URI Tunneling is SOAP, originally Simple Object Access Protocol, which gained a lot of attention around year 2000. SOAP is basically the Broker pattern on top of HTTP, plus a lot of tools to generate ClientProxy and Invoker code automatically.

At Level 1: HTTP Verbs, systems start obeying the HTTP requirements: You design your system around resources with a resource identifier, and you use the HTTP verbs to create, read, update, and delete these resources. My PasteBin example in the previous chapter was a simple Level 1 system: Each POST created a new resource and told the client its resource identity (like "/bin/102"), which allowed the client to read it. Adding update and delete features to the PasteBin client is a simple exercise. Level 1 can handle a lot of simple systems that match the CRUD template. In Section Level 1 REST: TeleMed below, I will showcase how to (more or less) implement TeleMed using Level 1 REST.

The final level is Level 2: Hypermedia, which uses hypermedia links to model application state changes; this departs radically from the traditional object-oriented way of handling state changes. I will outline the concepts in Section Level 2 REST: GameLobby, and demonstrate it on our GameLobby system.

7.5 The Architectural Style

REST is an architectural style/architectural pattern, that is, a certain way of organizing your software and design. As such it leads to other structures and design decisions than those in an object-oriented style. In object-orientation, you have objects that send messages to each other: one object invokes a method on another. Methods often follow the Command Query Separation (CQS) principle, that is, some methods are accessors/queries that retrieve state information, while others are mutators/commands that change state in the object. The classic example is the Account object with a getBalance() accessor and a withdraw(amount) mutator. This style allows state/data to be encapsulated with all sorts of different methods for manipulation.
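For reference, here is the classic Account example in code form; a minimal sketch, with the class written for this illustration rather than taken from the book:

```java
/** Classic CQS example: queries read state, commands change it. */
public class Account {
  private double balance;

  public Account(double openingBalance) {
    this.balance = openingBalance;
  }

  /** Accessor/query: retrieves state, changes nothing. */
  public double getBalance() {
    return balance;
  }

  /** Mutator/command: changes state, returns nothing. */
  public void withdraw(double amount) {
    balance -= amount;
  }
}
```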


REST is in contrast a data-centric style. In essence, it is based upon the named resource, the data, and just supports the four basic operations of Create, Read, Update, and Delete on the resource, as outlined in the previous chapter. Compared to objects, REST exposes just the object's fields/instance variables, and there are only these four fixed methods available. No other methods can be defined.

Going back to our PasteBin application, you saw the REST style in practice. There was just a single root resource, the named information Bin, that represents a clipboard entry. A resource is identified by the path /bin/. By sending this resource a POST message with a JSON object, I stated that I wanted a new sub-object bin created with the given data, which in turn created such a named resource with a name, /bin/100. This new resource I can then update (PUT), read (GET), or delete (DELETE).

This is all well for designing systems that fit a data-centric, database-flavored style. Indeed, information systems (organized systems for the collection, organization, storage, and communication of information), which abound on the WWW, fit the style well. As examples, consider internet shops (Create a shopping basket, Update its contents with items to buy, Read its contents so I can see it, Delete it once I have paid for the items), or social media (Create a profile, Read my own profile and that of my friends, Update my profile with images and text). However, many domains do not fit well with this data-centric design. Generally, systems that have complex state changes, and in which many objects are affected by a single operation, do not fit well. We will return to how to design such systems, but first, let us return to our TeleMed system and see how a REST design may look.

7.6 Level 1 REST: TeleMed

We have already implemented the TeleMed system using the Broker pattern, and in the last chapter I used HTTP to implement the IPC layer. It was not REST, though; I only used POST messages on a single resource named /bp to handle all TeleMed methods. TeleMed actually fits the Level 1 REST style well: we create TeleObservations of blood pressure, and the patient as well as physicians read them. How can our TeleMed system be designed using the REST style?

The central REST concept is the resource, which is named information. A specific measured blood pressure measurement for a specific patient fits this requirement well. So we need to identify it using a URI (remember the URI schema). My suggestion is to use a path that encodes a given patient and a given instance of a measurement, something akin to


/bloodpressure/251248-1234/id73564823

Breaking down this URI path, it encodes two pieces of information: the patient id (251248-1234), and some computer generated id (id73564823) for that specific measurement. Another potential path may be even more readable if we use the time as the last part of the path, like

/bloodpressure/251248-1234/2020-05-14-14-48-53

to identify the measurement made May 14th, 2020, at 14:48:53 in the 24-hour clock. The above schema also follows the rule that a URI should name things, not actions. Paths consist of nouns, never verbs, because paths identify resources, which are "things".

Now that I have a resource, I need to access it. In REST you do not manipulate data directly; instead you manipulate a representation of data in a well known media type, such as XML or JSON. A direct manipulation example could be issuing SQL queries to a SQL server, or invoking XDS methods on an XDS infrastructure. REST instead enforces a uniform interface, requiring you to manipulate data in your representation of choice from the limited set of standard media types. So, to request a Read (GET) of this particular resource on the server assigned for such measurements, you would send an HTTP request stating that you need a JSON encoded representation of the measurement

```
GET /bloodpressure/251248-1234/id73564823 HTTP/1.1
Accept: application/json
```

which should return something like

```
{
  patientId: "251248-1234",
  systolic: 128.0,
  diastolic: 76.0,
  time: "20180514T144853Z"
}
```

This GET on a specific URI just returns a single measurement. However, Story 2: Review blood pressure requires that all measurements for the last week can be displayed. A de-facto way to support this is just to GET on the patient specific URI without the measurement specific part, that is, on

/bloodpressure/251248-1234

This path may be coded to always return exactly the last week's measurements as a JSON array; or I may code the server to look for URI query parameters like

/bloodpressure/251248-1234?interval="week"

which can then be used to filter the returned data.

The other important user story is Story 1: Upload a blood pressure measurement. As this story is about uploading data to the server, which must store it, it is essentially a Create / POST. So the REST way is to let the client perform a POST on the URI for the particular patient, with a JSON payload carrying the measured values

```
POST /bloodpressure/251248-1234 HTTP/1.1
Content-Type: application/json
```

```
{
  patientId: "251248-1234",
  systolic: 144.0,
  diastolic: 87.0,
  time: "20180515T094804Z"
}
```

and the server must then store the measurement, define a new URI that represents it, and return that URI in the Location field for the newly created resource, similar to

```
HTTP/1.1 201 Created
Date: Mon, 07 May 2018 12:16:51 GMT
Content-Type: application/json
Location: http://telemed.baerbak.com/bloodpressure/251248-1234/id73564827

{
  patientId: "251248-1234",
  systolic: 144.0,
  diastolic: 87.0,
  time: "20180515T094804Z"
}
```
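A sketch of the server-side bookkeeping behind this exchange: allocate a measurement id, store the representation under a patient-specific path, and hand back the value for the Location header. The MeasurementStore class and its method names are invented for illustration; they are not the book's TeleMed code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative store that mints measurement ids and Location values. */
public class MeasurementStore {
  private final Map<String, String> byResourcePath = new HashMap<>();
  private final AtomicLong counter = new AtomicLong(73564823);

  /** Handles the Create: returns the path to put in the Location header. */
  public String create(String patientId, String jsonRepresentation) {
    String path = "/bloodpressure/" + patientId
        + "/id" + counter.getAndIncrement();
    byResourcePath.put(path, jsonRepresentation);
    return path;
  }

  /** Handles the Read: null maps to 404 Not Found in the handler. */
  public String read(String resourcePath) {
    return byResourcePath.get(resourcePath);
  }
}
```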


7.7 Documenting REST API

In object-oriented designs we use interfaces to document how to interact with our roles. I need a similar thing for my REST designs. One important initiative for documenting REST interactions is the OpenAPI Initiative[4], which is a JSON format supported by tools such as editors and code generators. The downside of OpenAPI is the detail required to specify an interface. Therefore, I instead propose a simple text format that shows/exemplifies the HTTP verbs, payloads, and responses. It is rudimentary and less strict, but much less verbose. The text format is simply divided into sections, one for each REST operation, and just provides examples of data payloads, header key-value pairs, paths, etc. Let us jump right in, using a TeleMed REST API as example.

```
TeleMed
=======

Create new tele observation
---------------------------

POST /bloodpressure/{patient-id}

{ systolic: 128.0, diastolic: 76.0, time: "20180514T144853Z" }

Response
Status: 201 Created
Location: /bloodpressure/{patient-id}/{measurement-id}

Get single measurement
----------------------

GET /bloodpressure/{patient-id}/{measurement-id}
(none)

Response
Status: 200 OK
{ systolic: 128.0, diastolic: 76.0, time: "20180514T144853Z" }

Status: 404 Not Found
(none)

Get last week's measurements
----------------------------

GET /bloodpressure/{patient-id}
(none)

Response
Status: 200 OK
[ { systolic: 124.1, diastolic: 77.0, time: "20180514T073353Z" },
  { systolic: 128.8, diastolic: 76.2, time: "20180514T144853Z" },
  { systolic: 132.5, diastolic: 74.2, time: "20180514T194853Z" },
  ... ]

Status: 404 Not Found
(none)
```

[4] https://swagger.io/resources/open-api/

In this format each "method" starts with a headline, like "Get single measurement", and is then followed by the request and reply formats. For the request I write the HTTP verb and the URI path, and the potential message body/payload. I use curly braces for parameter values, so for instance {patient-id} must be replaced by some real id of a patient. The response is outlined after the request section and may have multiple sections, each showing a particular HTTP status code and an example of the format of the returned payload; empty lines separate the list of possible return status codes, like "200 OK" or "404 Not Found" in the examples above.

7.8 Continued REST Design for TeleMed

For the sake of demonstrating the remaining HTTP verbs, I have enhanced the TeleMed interface a bit so a patient can update and delete specific measurements. (It should be noted that these are not proper actions in a medical domain - you do not delete information in medical journals; rather, you add new entries that mark previous ones as incorrect and add the correct values.) The TeleMed interface is then augmented with three additional methods

```java
public interface TeleMed {

  // methods 'processAndStore' and 'getObservationsFor' not shown

  /**
   * Return the tele observation with the assigned ID.
   *
   * @param uniqueId the unique id of the tele observation
   * @return the tele observation, or null in case it is not present
   * @throws IPCException in case of any IPC problems
   */
  TeleObservation getObservation(String uniqueId);

  /**
   * Correct an existing observation; note that time stamp
   * changes are ignored.
   *
   * @param uniqueId id of the tele observation
   * @param to the new values to overwrite with
   * @return true in case the correction was successful
   * @throws IPCException in case of any IPC problems
   */
  boolean correct(String uniqueId, TeleObservation to);

  /**
   * Delete an observation.
   *
   * @param uniqueId the id of the tele observation to delete
   * @return true if the observation was found and deleted
   * @throws IPCException in case of any IPC problems
   */
  boolean delete(String uniqueId);
}
```

Note that these three additional methods match the GET on a single resource, UPDATE, and DELETE operations respectively; and that the parameters are rather REST inspired, operating through a 'uniqueId' string that is thus similar to (a part of) a resource identifier.
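The correspondence between the interface methods and HTTP requests can be captured in code. A sketch, where the RouteMapping helper and the method-name keys are invented for illustration; the verb choices follow the CRUD-to-verb mapping of the previous chapter (Update maps to PUT), and the path templates follow the schema above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative mapping from TeleMed methods to HTTP verb + path template. */
public class RouteMapping {
  private final Map<String, String> routes = new LinkedHashMap<>();

  public RouteMapping() {
    routes.put("getObservation",
        "GET /bloodpressure/{patient-id}/{measurement-id}");
    routes.put("correct",
        "PUT /bloodpressure/{patient-id}/{measurement-id}");
    routes.put("delete",
        "DELETE /bloodpressure/{patient-id}/{measurement-id}");
  }

  public String routeFor(String methodName) {
    return routes.get(methodName);
  }
}
```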

7.9 Implementing REST based TeleMed

While I have argued that Broker-based designs and REST are two different architectural styles, there are of course quite some similarities, as they both provide solutions to the basic challenges of remote communication:

• The Request-Reply protocol is central in both styles. In Broker, we may have to code it directly in the RequestHandlers if we use sockets, while HTTP already implements it.
• The need for Marshalling of data contents. In Broker, we again coded it in the Requestor/Invoker pair, while HTTP relies on media types.
• The need for Name services. In Broker, we used DNS systems to get the IP address of the server, while we used an internal implementation for getting a servant object associated with a given object id. REST, on the other hand, encodes everything into the URI: both the server identity and the resource identity.
• The Proxy is only an issue for object-oriented designs.

Looking at HTTP, and thus REST, I will argue that it basically merges the layers and does a lot of hard coupling. If we take the TeleMed POST message above

```
POST /bloodpressure/251248-1234 HTTP/1.1
Content-Type: application/json
```

it defines the IPC layer (HTTP over TCP/IP), it binds the marshalling layer (JSON and the HTTP text based message format), and it actually also binds the domain layer, in the respect that the URI's parameter list defines a resource named "bloodpressure" with a social security number as the unique identifier. Thus, the client side TeleMed object serves all three client-side roles of the Broker pattern: it is the ClientProxy as it implements the TeleMed interface, it is the Requestor as it does the marshalling, and it is also the ClientRequestHandler as it performs the IPC calls. Thus, the processAndStore() method (in a class I could name TeleMedClientProxyRequestorClientRequestHandler, but no, I will not) will become something like (Fragment in telemed-rest project: TeleMedRESTProxy.java)


```java
@Override
public String processAndStore(TeleObservation teleObs) {
  // Marshalling
  String payload = gson.toJson(teleObs);
  HttpResponse jsonResponse = null;

  // IPC
  String path = "/bloodpressure/" + teleObs.getPatientId() + "/";
  try {
    jsonResponse = Unirest.post(serverLocation + path).
      header("Accept", MimeMediaType.APPLICATION_JSON).
      header("Content-type", MimeMediaType.APPLICATION_JSON).
      body(payload).asJson();
  } catch (UnirestException e) {
    throw new IPCException("UniRest POST failed for 'processAndStore'", e);
  }

  // Extract the id of the measurement from the Location header
  String location = jsonResponse.getHeaders().getFirst("Location");

  [extract teleObsID from the location header]

  return teleObsID;
}
```

Note that this method on the client side is the ClientProxy, does the marshalling/demarshalling, and performs all IPC. On the server side, the layers are also merged in a single implementation: it is the ServerRequestHandler as it receives network messages, and it is the Invoker as it does the demarshalling and marshalling, and the invocation of the Servant method. (Fragment in telemed-rest project: RESTServerRequestHandlerInvoker.java)

```java
// IPC
String storeRoute = "/bloodpressure/:patientId/";
post(storeRoute, (req, res) -> {
  String patientId = req.params(":patientId");
  String body = req.body();

  // Demarshall parameters into a TeleObservation
  TeleObservation teleObs = gson.fromJson(body, TeleObservation.class);

  // Invoker
  String id = teleMed.processAndStore(teleObs);

  // Normally: 201 Created
  res.status(HttpServletResponse.SC_CREATED);
  res.type(MimeMediaType.APPLICATION_JSON);

  // Location = URL of created resource
  res.header("Location", req.host() + "/"
      + Constants.BLOODPRESSURE_PATH + id);

  // Marshalling return value
  return gson.toJson(teleObs);
});
```

The post method binds the URI /bloodpressure/{patientId}, so all incoming requests are routed to the enclosed lambda function. And this lambda function in turn does the demarshalling, the up call to the Servant, and the reply to the client. All layers are thus merged. Compared to the Broker, REST thus has fewer hot spots and injection points, and thus no separation of concerns. This has the negative consequence that testability is lowered, as I have no way of injecting a test double in the place of the IPC layer. This is a major liability, and I will return later in this chapter to techniques to mitigate it.

You can find the detailed code in folder telemed-rest in the FRDS.Broker Library[5]. Note that this code differs somewhat from the design provided here in the book. As the patient ID is also present in the exchanged JSON documents, the patient id part of the URI is not implemented in the source code. Note also that the code base does not contain any automated JUnit tests; it is pure manual testing.

7.10 Level 2 REST: GameLobby

As I have described, REST fits a CRUD schema well, but not everything in computing is just creating, reading, updating, and deleting a single piece of information. Many applications must perform complex state changes involving a number of objects, something the OO style is well suited for. To model that, REST relies on the hypermedia concept, which is actually well known from ordinary web browsing: You read a web page and it contains multiple hypermedia links that you can follow by clicking. In this sense, each web page contains a set of links that define natural state changes in the information the user receives. In the same manner, Level 2 REST defines state changes by returning links as part of the returned resources, each link in itself a resource that can be acted upon using POST, GET, etc.

[5] www.bitbucket.com/henrikbaerbak/broker


To illustrate, consider a catalog web page for a small web shop. The page presents a set of items that can be purchased, and next to each item is a link named 'add this item to shopping basket'. Clicking such a link must naturally add the item to the basket, so the next time the user visits the shopping basket web page, the item appears on the list. Thus, besides a "/catalog" resource, the system also maintains a "/shoppingbasket/654321" resource (the shopping basket for the user with id 654321); the 'add item to basket' link will refer to this resource, and clicking it will make the client issue an UPDATE request to it. This way of modeling state changes is also known by the acronym HATEOAS, Hypermedia As Engine Of Application State.
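The web shop idea can be sketched in a few lines: the server embeds the link in the representation it returns, and the client follows whatever link it was given rather than hardcoding the basket URI. All names below are my own, for illustration only:

```java
import java.util.Map;

// Hypothetical sketch (names are my own, not from the book's code base):
// a Level 2 REST resource carries hypermedia links alongside its data,
// so the client discovers the next legal state change from the reply itself.
public class CatalogSketch {

  // A catalog item representation with an embedded 'addToBasket' link.
  public static Map<String, String> itemRepresentation(String itemId, String userId) {
    return Map.of(
        "id", itemId,
        "name", "Example item",
        // The link tells the client WHERE to POST to trigger the state change.
        "addToBasket", "/shoppingbasket/" + userId);
  }

  // The client does not hardcode the basket URI; it follows the link given.
  public static String followAddToBasketLink(Map<String, String> item) {
    return item.get("addToBasket");
  }
}
```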

GameLobby

One example that I will return to is the GameLobby introduced previously. Story 1: Creating a remote game matches a create operation, thus a POST on a root path/resource that I name /lobby. When posting, I must provide two parameters, namely my player name and the level of the game I want to play. So the equivalent of the call:

```java
FutureGame player1Future = lobbyProxy.createGame("Pedersen", 0);
```

in the Broker based version of the code (see Walk-through of a Solution) becomes something along the lines of the following REST call:

```
GameLobby
=========

Create Remote Game
------------------

POST /lobby/

{
  playerOne: "Pedersen",
  level: 0
}

Response
Status: 201 Created
Location: /lobby/{future-game-id}

{
  playerOne: "Pedersen",
  playerTwo: null,
  level: 0,
  available: false,
  next: null
}
```

In the Broker variant, a FutureGame instance is returned which can be queried for a joinToken. In my REST create operation, a new resource is created and its location returned: /lobby/{future-game-id}, which serves nicely as our joinToken. Thus, Pedersen will simply tell Findus to join his game using that resource path. The returned resource is a JSON object embodying the future game resource, with the names of the players, the availability state, and an empty links section ("next: null") which I will describe below.

Note that our initial POST's body only included values for keys that were meaningful for the client to define: the first player's name and the game's level. The other key-value pairs were not defined. POST bodies are allowed to provide only partial resource definitions.

It would be obvious to add a read operation on the resource, to allow Pedersen to test if Findus has indeed joined the game. Reading translates to a GET operation:

```
Read future game status
-----------------------

GET /lobby/{future-game-id}

Response
Status: 200 OK

{
  playerOne: "Pedersen",
  playerTwo: null,
  level: 0,
  available: false,
  next: null
}

Status: 404 Not Found
(none)
```
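Server side, handling a partial POST body can be sketched as a simple merge: only the keys present in the body overwrite the stored representation, everything else is untouched. This is my own illustration, not code from the gamelobby project:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch (my own helper, not from the book's code): a POST body may carry
// only a partial representation; the server merges the provided keys into
// the stored resource and leaves every other key as it was.
public class PartialUpdateSketch {

  public static Map<String, Object> merge(Map<String, Object> stored,
                                          Map<String, Object> partialBody) {
    Map<String, Object> updated = new HashMap<>(stored);
    updated.putAll(partialBody); // only keys present in the body overwrite
    return updated;
  }
}
```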

Story 2: Joining an existing game translates to updating the future game's resource with the missing information (in our case, just the second player's name). Updating is normally a PUT operation, but PUT's body is defined to be the full representation of the resource. As Findus only wants to make a partial update, supplying his own name as player two, we cannot use the PUT verb, but have to use the POST verb.


There seem to be differing opinions on how to update resources with partial information. Just using PUT seems obvious, but most authors agree that the strict semantics of full representation in PUT's body removes this possibility, and a general recommendation is therefore to use POST6. The HTTP verb PATCH has been added to handle partial updates, but it is complex and not widely used. The complexity stems from the fact that the RFC 5789 specification states that

With PATCH, however, the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version.

So, you do not simply send a partial JSON object, but must use a special patch format that both client and server agree upon. For the sake of simplicity, I will stick to POST.
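To make the "set of instructions" point concrete, RFC 6902 ("JSON Patch") is one such agreed-upon format: each instruction names an operation, a path, and a value. The mini-interpreter below is my own illustration, not a compliant implementation; it handles only the "replace" operation:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustration only (not a compliant RFC 6902 implementation): a patch is a
// list of instructions; here we interpret just the "replace" operation to
// show why PATCH needs a format both sides agree upon.
public class PatchSketch {

  public static Map<String, Object> apply(Map<String, Object> resource,
                                          List<Map<String, Object>> patch) {
    Map<String, Object> result = new HashMap<>(resource);
    for (Map<String, Object> op : patch) {
      if ("replace".equals(op.get("op"))) {
        String key = ((String) op.get("path")).substring(1); // strip leading '/'
        result.put(key, op.get("value"));
      }
    }
    return result;
  }
}
```

A patch like [{"op": "replace", "path": "/playerTwo", "value": "Findus"}] would then update just that one key.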

```
Join A Game
-----------

POST /lobby/{future-game-id}

{
  playerTwo: "Findus"
}

Response
Status: 200 OK

{
  playerOne: "Pedersen",
  playerTwo: "Findus",
  level: 0,
  available: true,
  next: "/lobby/game/{game-id}"
}

Status: 404 Not Found
(none)
```

There are several things to note about this proposed POST operation:

• The outcome of the update operation is the creation of an actual game resource. However, this new resource is a side effect of updating an existing resource, not the outcome of a request specifically to create the game resource. Thus, you do not use the Location header field to communicate the resource URI as you do in normal create operations.

6 Jim Webber, Savas Parastatidis, and Ian Robinson: "REST in Practice: Hypermedia and Systems Architecture", O'Reilly Media, 2010.


Instead, I use the next: field to return a hyperlink to the new resource, /lobby/game/{game-id}. This is an example of Hypermedia As The Engine of Application State (HATEOAS), which I will discuss below.

• As no new resource was created as a result of the POST, the status code is 200 OK and the Location field is not used.

HATEOAS resembles human web browsing, where web pages contain many hyperlinks and the user is free to click any of them to move to another page. In REST, hyperlinks in the same way allow the client to make state changes on the server side resources by doing CRUD operations on the resources that the server provides. To paraphrase our GameLobby: updating the future game makes a state change by creating the resource /lobby/game/{game-id}, and provides the client with a resource identifier for it. The next: key in my JSON above contains only a single resource; in more complex domains there may be multiple resources to operate on, so a list of resources may be provided instead. There is no agreed standard for how to name the section, but "next" or "links" are often used.

Once both players have joined, it is visible that the game is ready, by reading the future game resource:

```
Read future game status
-----------------------

GET /lobby/{future-game-id}

Response
Status: 200 OK

{
  playerOne: "Pedersen",
  playerTwo: "Findus",
  level: 0,
  available: true,
  next: "/lobby/game/{game-id}"
}

Status: 404 Not Found
(none)
```
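On the server side, the next: value can be computed directly from the resource's state: null while no legal state change exists yet, the game's URI once the second player has joined. A minimal sketch with hypothetical names:

```java
// Sketch (hypothetical names): the server derives the "next" hypermedia
// link from the resource's state, so the link doubles as a state indicator:
// null while the game is not ready, the game's URI once available.
public class FutureGameLinkSketch {

  public static String nextLink(String playerTwo, int gameId) {
    if (playerTwo == null) {
      return null; // no legal state change yet: wait for the second player
    }
    return "/lobby/game/" + gameId;
  }
}
```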

Thus, both players can now access a resource that represents the started game. At an abstract level, every game unfolds as each player makes state changes to the game state, typically by moving pieces. However, whereas our OO game has a move() method, our REST game resource has not! It only has the CRUD methods, so to speak. This is the key challenge in making Level 2 REST designs: how do we model complex state changes which are not easily represented by simple CRUD operations? And, I might add, it is one of those things that takes a while to get used to if you come from an OO background.

The key insight is to model the state change itself as a resource. So, the above POST operation by Findus created a game resource on path "/lobby/game/{game-id}"; however, as I also need to make moves on this game, it should also create a move resource for that particular game: a resource defining one particular move to be made in the game. To rephrase this important insight: creating a game means creating two resources, one representing the state of the game, and another representing the next move. I define the URI of move resources as "/lobby/game/{game-id}/move/{move-id}", and I define this resource's representation to be the relevant data for making an actual move. Let us take chess as an example; a chess game move can be represented by the following JSON (chess notation uses the letters a-h for columns, and 1-8 for rows):

```
{
  player: "Pedersen",
  from: "e2",
  to: "e4"
}
```

That is, the above resource represents Pedersen moving the pawn on e2 to e4 on the chess board. The white player can PUT the above JSON to "/lobby/game/{game-id}/move/{move-id}" to make a state change on the move object and thereby change the game resource's state. So the procedure would be to:

• PUT a valid game move representation on the move resource
• Verify that the PUT operation succeeded (if the game move is invalid, the HTTP status code can tell so)
• GET the game state to see the updated game state

Thus, to handle Story 3: Playing the game, both players can read the state of the game on the provided game resource:


```
Read game status
----------------

GET /lobby/game/{game-id}

Response
Status: 200 OK

{
  playerOne: "Pedersen",
  playerTwo: "Findus",
  level: 0,
  board: [ ... ],
  playerInTurn: "Pedersen",
  noOfMovesMade: 0,
  next: "/lobby/game/{game-id}/move/{move-id}"
}

Status: 404 Not Found
(none)
```

This game resource represents the current state of the chess game: the next player to make a move, the number of moves made so far, the state of the board, etc. Again, the noteworthy aspect is the next attribute, whose value is the URI of the next move resource to update in order to make a state change to the game. Let us continue with chess as the example; for Pedersen to make the opening pawn move, a PUT operation may look like:

```
Make a Game Move
----------------

PUT /lobby/game/{game-id}/move/{move-id}

{
  player: "Pedersen",
  from: "e2",
  to: "e4"
}

Response
Status: 200 OK

{
  player: "Pedersen",
  from: "e2",
  to: "e4"
}

Status: 403 Forbidden

Status: 404 Not Found
(none)
```

Here, 403 Forbidden can be used to signal that the move was not valid. After a valid move, a new GET must be issued to fetch the updated state of the game: which player is next to move, what the board looks like now, and what the next link is which allows making the second move, etc.

Note that I have designed the resource URI .../move/{move-id} as a list of moves: typically, the first move is made by a PUT on resource .../move/0, the second on resource .../move/1, and so forth. This way the move resource serves both as a HATEOAS way of changing the game's state and as a historical account of all moves made, to be inspected by GET queries.
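The move-list idea can be sketched as a plain list whose index is the move-id: a PUT on the next free id grows the history, while a GET on a lower id reads it back. This is my own sketch, not the gamelobby project's code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch (my own data structure): keeping moves in a list indexed by
// move-id lets the resource .../move/{move-id} serve two purposes --
// PUT on the highest id makes the next state change, GET on a lower id
// reads the historical account of moves.
public class MoveHistorySketch {

  private final List<String> moves = new ArrayList<>();

  public int nextMoveId() {           // the id the next PUT must address
    return moves.size();
  }

  public void putMove(String move) {  // accept a move: history grows by one
    moves.add(move);
  }

  public String getMove(int moveId) { // GET on .../move/{moveId}
    return moves.get(moveId);
  }
}
```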

Discussion

The HATEOAS style departs fundamentally from a traditional OO style of design, and coming from an OO background, it takes quite a while to get used to. The hard part is to stop designing in terms of methods that do complex state changes and to start designing resources that represent the state changes instead. My presentation here has only scratched the surface of this architectural style, and if you are serious about REST, I highly recommend the "REST in Practice" book.7

Both Broker and REST support remote communication between clients and a server. From a compositional design perspective, REST mixes a number of distinct responsibilities into a single role. A REST server handles the Request/Reply protocol over the internet (that is, the IPC layer of Broker), it dictates the encoding and object identity (that is, the marshalling layer of Broker), and it has no well-defined concept of the protocol with the servant objects (that is, the domain layer of Broker). Instead it relies on abstract design principles with few programming or library counterparts. Thus, adopting REST is hard coupling. Compare this to Broker, where you may switch from a HTTP-based IPC to a MessageQueue-based IPC just by injecting new CRH and SRH implementations.

7 Jim Webber, Savas Parastatidis, and Ian Robinson: "REST in Practice: Hypermedia and Systems Architecture", O'Reilly Media, 2010.


Another consequence of the role mixing is that TDD and testing of REST architectures is less obvious: you do not have clearly defined IPC roles that can be replaced by test doubles, as was the case with the Broker. What to do then is the focus of the next section.

Another aspect that has some liabilities is the fact that HATEOAS dictates that returned resources have a notion of the state machine they are part of, which from a role perspective is an unfortunate mixing of domain knowledge and implementation dependent technicalities. An example is the Game resource, which models true game domain state, like the involved players, the state of the board, and the next player to take a turn, but also contains the next attribute, which is a hypermedia link, a highly implementation dependent technical detail. There is no clear separation of concerns.

Outline of REST based GameLobby

The presentation above is implemented in the gamelobby-rest project in the FRDS.Broker Library8. Note, however, that the implementation there only demonstrates the HATEOAS principles; most domain code is just hard coded fake-object implementations. I will not discuss the full implementation, but highlight the HATEOAS parts.

The first aspect is the POST on the FutureGame resource, which creates both a game resource as well as the first move resource (error handling and other aspects have been omitted below):

```java
post("/lobby/:futureGameId", (request, response) -> {
  String idAsString = request.params(":futureGameId");
  Integer id = Integer.parseInt(idAsString);

  FutureGameResource fgame = database.get(id);

  // Demarshall body
  String payload = request.body();
  JsonNode asNode = new JsonNode(payload);

  String playerTwo = asNode.getObject().getString("playerTwo");

  // Update resource
  fgame.setPlayerTwo(playerTwo);
  fgame.setAvailable(true);

  // Create game instance
  int gameId = createGameResourceAndInsertIntoDatabase(fgame);

  fgame.setNext("/lobby/game/" + gameId);
  updateFutureGameInDatabase(id, fgame);

  return gson.toJson(fgame);
});
```

8 https://bitbucket.org/henrikbaerbak/broker

The code follows the standard template: demarshall the incoming message, fetch the relevant resource from storage, update its state, and marshall and return the resource. The important thing, however, is that the creation of the game resource also creates the (first) move resource. As the hypermedia link is embedded in the resource (in the class), this is hidden in the domain code:

```java
private int createGameResourceAndInsertIntoDatabase(FutureGameResource fgame) {
  int theGameId = generateIDForGame();

  // Create the move resource storage
  createMoveResourceListForGame(theGameId);

  // Create the game resource
  theOneGameOurServerHandles =
      new GameResource(fgame.getPlayerOne(), fgame.getPlayerTwo(),
                       fgame.getLevel(), theGameId);

  return theGameId;
}
```

Though this code is only an initial sketch of a design, there are still two lessons to learn here. The first is that though a given game resource and its associated list of moves are of course tightly coupled in the domain, they must be kept as disjoint data structures in our REST scenario. To see why, consider a classic OO design that would embed the list of move resources within the game resource, as a field variable. Doing so, however, leads to the situation in which the JSON marshalling framework, like Gson, will embed the full list into the returned resource, a la:


```
{"playerOne": "Pedersen", "playerTwo": "Findus",
 "level": 0, "id": 77, "playerInTurn": "Pedersen", "noOfMovesMade": 2,
 "next": "/lobby/game/77/move/2",
 "moveResourceList": [
   {"player": "Pedersen", "from": "e2", "to": "e4"},
   {"player": "Findus", "from": "e7", "to": "e5"},
   {"player": "null", "from": "null", "to": "null"}],
 "board": "[...]"}
```

which is certainly not what I want. One can often tweak the marshalling to avoid some fields in the resulting JSON, but that is a slippery slope, and it costs quite a lot of processing.

Second, as the two are kept separate from each other, the relation becomes weaker and the logic to keep them synchronized must reside outside the resources/objects themselves. This also means that making a move involves making changes to both of these disjoint data structures:

```java
// Update the move resource, i.e. making a transition in the game's state
put("/lobby/game/:gameId/move/:moveId", (request, response) -> {
  String gameIdAsString = request.params(":gameId");
  Integer gameId = Integer.parseInt(gameIdAsString);

  // Demarshall body
  String payload = request.body();
  MoveResource move = gson.fromJson(payload, MoveResource.class);

  // Update game resource with the new move
  GameResource game = getGameFromDatabase(gameId);
  makeTheMove(game, move);

  return gson.toJson(move);
});
```

The actual move is made in the method makeTheMove(), which necessarily has to fetch the game resource and update it, as well as fetch the move list and update that. In essence, the data model to implement in REST is more like a relational database with disjoint tables linked by keys than the encapsulated object model known from OO.
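A minimal sketch of that relational flavor (all names are mine, not the project's): two "tables" keyed by game id, where making a move, as makeTheMove() must, updates both of them:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch (my own names): the game state and its move list live in two
// separate "tables" linked by game id, so marshalling a game never drags
// the full move list into the returned JSON. Making a move must update both.
public class DisjointStorageSketch {

  private final Map<Integer, Integer> movesMadeTable = new HashMap<>();
  private final Map<Integer, List<String>> moveListTable = new HashMap<>();

  public void createGame(int gameId) {
    movesMadeTable.put(gameId, 0);
    moveListTable.put(gameId, new ArrayList<>());
  }

  public void makeMove(int gameId, String move) {
    moveListTable.get(gameId).add(move);                        // move table
    movesMadeTable.put(gameId, movesMadeTable.get(gameId) + 1); // game state
  }

  public int noOfMovesMade(int gameId) {
    return movesMadeTable.get(gameId);
  }
}
```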


7.11 Testability and TDD of REST designs

How can I implement the design above? Well, one way is to use manual testing and incremental development, similar to the PasteBin example. That is, implement a (Spark-Java) web server that accepts a One Step Test operation. The obvious choice here would be POST on path /lobby, as it is the starting point for the GameLobby system. Then use Postman or curl to verify that the web server accepts the request and returns properly JSON formatted resources. The problem is that this does not follow the TDD principle of automated tests, and regression testing suffers: if I want to experiment or refactor, I have a bunch of curl commands I have to execute to get the server into a certain state. And none of it is under JUnit control. Two options exist:

• Use integration testing techniques: Spawn a REST server in the JUnit fixture, send HTTP requests as part of the test cases, and verify the returned HTTP responses. Downside: Brittle tests that may be slow.
• Use the Facade pattern to encapsulate the REST paradigm using 3-1-2. Downside: There is more code to produce.

In the gamelobby-rest project in the FRDS.Broker library, the first option is chosen, as the simplicity of the gamelobby server does not warrant the larger setup required by using a Facade. But let us look at the second option. The observation is that our web server acts as an intermediary between the requests arriving from the network and our domain objects. For instance, a POST on path /lobby is a request, and the reply is an object that encapsulates a statusCode, a Location field, and a JSON body. Thus we can make a facade that is as close as possible to the interface of the web server, for instance with a method:

```java
public interface GameLobbyFacade {
  public RESTReply postOnPathLobby(String restBody);
  ...
}
```

and the initial TDD test may look like (in pseudo code):


```java
@Test
public void shouldCreateFutureGameWhenPosting() {
  ...
  String body = "{ player: Pedersen }";
  RESTReply r = facade.postOnPathLobby(body);
  assertThat(r.getLocation(), is("/lobby/future-game-id-1"));
  assertThat(r.getStatusCode(), is(200));
  ...
}
```

The central requirement of the facade's methods is:

• The methods should closely match those that the web server experiences in all respects: parameters passed in, and return values. This is vital in order to keep the code in the actual web server minimal and simple. The web server should only contain minimal code to extract parameters from the request, call the proper facade method, and return a response based on simple retrieval of values from the RESTReply object.

The central aspect is simplicity in coding the web server, because then the probability of getting it wrong is minimal. We still need manual testing of this code (or some integration tests that actually spawn the server), but the less code to manually test the better, and if the code is simple, the chance of errors is low. The benefits of this approach are then:

• The facade allows TDD using the normal xUnit testing framework support.
• The facade allows automated testing of the core server code, i.e. the complex domain code that is vital to keep reliable during refactorings and feature additions.

Of course, there are also liabilities involved:

• The facade is a Java interface that does not match the REST/HTTP style perfectly, so I need helper objects, like the RESTReply interface/class, to represent a valid HTTP reply. These require a bit of extra coding. Or you may use the classes from, say, the javax.servlet.http package, but still it requires some extra effort.
• If you TDD the facade before you have a good idea of the web server code, you may introduce a mismatch between what the web server actually has access to and what you have expected in the facade, requiring rework. So it is often a good idea to do some manual prototyping of the web server to minimize that risk.
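For completeness, the RESTReply helper can be a trivial value object. The getStatusCode() and getLocation() accessors appear in the test case earlier in this section; the constructor shape and the body accessor below are my own assumptions, as the book leaves the class undefined:

```java
// Sketch of the RESTReply helper object the facade returns: it merely
// aggregates the parts of a HTTP reply the web server needs to emit.
// (Constructor and getBody() are my assumptions; the two other accessors
// match the test case in the text.)
public class RESTReply {
  private final int statusCode;
  private final String location;
  private final String body;

  public RESTReply(int statusCode, String location, String body) {
    this.statusCode = statusCode;
    this.location = location;
    this.body = body;
  }

  public int getStatusCode() { return statusCode; }
  public String getLocation() { return location; }
  public String getBody() { return body; }
}
```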


7.12 Summary of Key Concepts

Representational State Transfer (REST) is an architectural style/pattern that models systems and information in terms of resources that can be named through resource identifiers using URIs. Representations of data are exchanged between servers and clients using standard formats, the media types. All interactions are stateless, so every request must contain all relevant information to process it. At a more concrete level, resources are manipulated using the four CRUD verbs of HTTP: POST (create a resource), GET (read a resource), PUT (update a resource), and DELETE (delete a resource).

For simple domains in which resources are independent and there is no need for transactions (updating several resources as one atomic operation) or complex state changes, this Level 1 REST is sufficient. The TeleMed case is an example, as it just deals with creating individual blood pressure measurements and reading them again. For a more complex domain, you need Level 2 REST, in which resources also contain references/hypermedia links to other resources that represent state changes. Thus, a resource returned to a client will have a link section with a set of resource identifiers that each can be manipulated using HTTP verbs. In our GameLobby example, a game resource also refers to a "move" resource which can be UPDATEd to make a move, thereby indirectly affecting the state of the game.

My treatment of REST is highly inspired by Webber et al.'s book, REST in Practice9. Pedersen and Findus appear in the children's books by Sven Nordqvist.

7.13 Review Questions

Explain the three levels of REST usage in Richardson's model.

Outline the central concepts and techniques of Level 1 REST. Outline the central concepts and techniques of Level 2 REST. Explain how to model complex state changes using HATEOAS.

Discuss the benefits and liabilities of REST compared to Broker. Consider the programming model, and the separation of concerns with respect to the marshalling, IPC, and domain layers.

Discuss how TDD and automated testing can be done on a REST based architecture.

9 Jim Webber, Savas Parastatidis, and Ian Robinson: "REST in Practice: Hypermedia and Systems Architecture", O'Reilly Media, 2010.


Bibliography