Enterprise Integration with Mulesoft: Learn how to leverage MuleSoft to power Enterprise Integration (English Edition) 9355518501, 9789355518507


English | 228 pages | 2023


Enterprise Integration with Mulesoft Learn how to leverage MuleSoft to power Enterprise Integration

Gaurav Aroraa Radhika Atmakuri Tanuja Mohgaonkar

www.bpbonline.com




Copyright © 2023 BPB Online All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews. Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor BPB Online or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book. BPB Online has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, BPB Online cannot guarantee the accuracy of this information.

First published: 2023 Published by BPB Online WeWork 119 Marylebone Road London NW1 5PU UK | UAE | INDIA | SINGAPORE ISBN 978-93-55518-507

www.bpbonline.com



Dedicated to Joseph Bulger, a friend, a mentor, and a colleague, who always inspires me with his positive attitude toward work and life. I have learned a lot from him, including how to achieve your goals in life even when obstacles stand in your way.

— Gaurav Aroraa




About the Authors • Gaurav Aroraa is a Lead Integration Architect with IBM, with more than 26 years of experience across technologies in the industry. He has multiple feathers in his cap: he is a MuleSoft Mentor, a Microsoft MVP award recipient, a Mentor of Change with AIM NITI Aayog, Govt. of India, and a Business Coach with Business Blaster, Govt. of NCT of Delhi. He is a lifetime member of the Computer Society of India (CSI), an advisory member and senior mentor at IndiaMentor, certified as a Scrum trainer and coach, ITIL-F certified, PRINCE-F and PRINCE-P certified, and a certified Microsoft Azure Architect. Gaurav is an open-source developer and a contributor to the Microsoft TechNet community. He has authored books across technologies. His recent publications are:

o Microservices by Examples Using .NET Core (BPB)

o Data Analytics: Principles, Tools, and Practices… (BPB)

Recently, Gaurav has been recognized as a world record holder for writing books across diverse technologies, and he has more than 12 patents to his name.

• Radhika Atmakuri is a MuleSoft Architect and Technical Lead with over 17 years of experience in software development and integration. She has worked for companies such as IBM, AST LLC, Avaya, and Applicate IT Solutions, in roles such as Senior Module Lead, Technical Architect, and Sr. Principal Consultant. Radhika is a MuleSoft Certified Integration Architect – Level 1, a MuleSoft Certified Platform Architect, and a MuleSoft Certified Developer – Integration and API Associate. She also holds a Master’s degree in Computer Science from Osmania University.

• Tanuja Mohgaonkar is a Service Area Leader – Enterprise Integration with IBM (I) Pvt. Ltd. An architect by profession, Tanuja has over 24 years of experience and has always been passionate about finding optimum solutions to business problems. She has expertise in leading integration teams across domains. Tanuja is also an active advocate of diversity and inclusion at work. She has been a coach and mentor to several aspiring MuleSoft practitioners within IBM and outside it.




About the Reviewer Joyce Jackson Joseph, also known by her maiden name, Joyce Thoppil, is a certified MuleSoft Delivery Champion and a certified MuleSoft Platform and Integration Architect. She has close to two decades of industry experience in integration development, design, and architecture, and has delivered integration solutions for clients across the insurance, banking, retail, and healthcare domains around the globe. She is passionate about exploring and learning new technologies, and is also a TOGAF-certified Enterprise Architect, a Google Certified Professional Cloud Architect, and an AWS Certified Solutions Architect – Associate.




Acknowledgements In life, it’s hard to get through things when you don’t have support. My family is one such support system, and I am the luckiest to have them. I would like to thank my wife, Shuuby Aroraa, and my little angel, Aarchi Arora, who gave me permission to write and invest time in this book. A special thanks to the BPB team. Also, a big thanks to Radhika Atmakuri and Tanuja Mohgaonkar (my co-authors), Deepak Gupta, and Akshata Sawant. It was a long journey of revising this book, with the valuable participation and collaboration of reviewers, technical experts, and editors. I would also like to acknowledge the valuable contributions of my colleagues and co-workers over many years of working in the tech industry, who have taught me so much and provided valuable feedback on my work. Finally, I would like to thank all the readers who have taken an interest in my book and supported me in making it a reality. Your encouragement has been invaluable. — Gaurav Aroraa




Foreword • As a MuleSoft Developer Advocate, I’m responsible for evangelizing MuleSoft. Even before I became a Developer Advocate, I was a MuleSoft Ambassadress and a Meetup leader. I’ve always been passionate about learning and exploring MuleSoft. I’ve written several blogs, co-authored a book on MuleSoft, hosted several MuleSoft Meetups, and spoken at conferences around the globe to drive MuleSoft and Salesforce adoption.

So, to begin with, what is MuleSoft? You have probably heard that it is a middleware integration tool, but it has much more to offer when it comes to the integration ecosystem. MuleSoft is recognized as a Leader in the Gartner Magic Quadrant™ for Integration Platform as a Service, Worldwide (iPaaS), and the credit goes to MuleSoft’s Anypoint Platform.



You can manage your entire API lifecycle with MuleSoft: build, manage, deploy, and secure APIs. You can build composable and reusable APIs, bring value to data in silos by connecting different end systems, and much more. There are hundreds of built-in connectors that help you unlock data and connect different end systems seamlessly. You can also connect to your legacy systems.



MuleSoft’s Anypoint Studio is an Eclipse-based IDE that helps you implement APIs and build Mule applications. Its low-code integration and transformation capabilities make integration an easier task.



MuleSoft’s capabilities aren’t limited to the integration space; with MuleSoft’s automation suite, comprising MuleSoft Robotic Process Automation (RPA) and MuleSoft Composer, you can take integration to the next level.



With MuleSoft’s RPA, you can automate your day-to-day manual tasks. MuleSoft’s Composer enables you to seamlessly integrate with external end systems in Salesforce’s ecosystem.



This book covers all the fundamental topics that will help you design a REST API using the Anypoint Platform’s Design Center. It explains how to implement the API and how to build, test, and debug a Mule application using Anypoint Studio. It also explains how to deploy the Mule application on CloudHub and manage the APIs. Every chapter covers a core capability of MuleSoft. You will also learn about non-functional requirements (NFRs) and how to implement them using MuleSoft.






As a reader, if you’re completely new to MuleSoft, APIs, and integration tools, this book will help you start your MuleSoft journey. After finishing it, you’ll have a better understanding of MuleSoft, the core concepts of APIs and integration, and MuleSoft’s different capabilities. The chapters are sequenced to take a newcomer and make them integration-ready. The book also covers the prerequisites to get started with MuleSoft, such as concepts related to HTTP/HTTPS, REST APIs, and so on.



On completing the book thoroughly, you should be able to clear the fundamental certification, MCD Level 1 (MuleSoft Certified Developer).



— Akshata Sawant, Senior Developer Advocate, Salesforce



LinkedIn: /akshatasawant02/



Twitter: @sawantakshata02

• Today’s digital age is seeing an acceleration in the transformation of applications across industries. I have been working with multiple clients on their transformation journeys over the last few years, and I have noticed that strong technology choices, extreme automation, and agile, product-centric delivery accelerate the desired outcomes. Irrespective of the application disposition (move, migrate, build, transform, or retire), the role of integration in connecting different applications, systems, and data sources is extremely critical. As a thought leader in hybrid cloud transformation with over 26 years of experience in the IT industry, I have led several such complex modernizations and transformations across industries, discussed the future of the technical landscape with tech leaders, built competencies across integration software, and assessed the comparative applicability of this software in the context of specific clients’ needs.

As a leading provider of integration software, MuleSoft enables organizations to connect different applications, systems, and data sources seamlessly. With MuleSoft’s Anypoint Platform, companies can design, build, and manage APIs (Application Programming Interfaces) that enable communication between different applications. MuleSoft has multiple connection offerings, supported by out-of-the-box templates, which allow faster deployment with minimal




re-work or errors. Some of these unique features have made MuleSoft one of the most popular choices in the industry.

This book introduces the reader to the concept of integration and how it fits into the world of applications. It then introduces MuleSoft with all its features, giving readers a comprehensive guide and a solid understanding of the platform and how to use it to solve integration challenges. The authors have done an excellent job of breaking down complex concepts and providing clear, practical examples that make learning simple and fun. They have also included case studies and real-world scenarios that demonstrate the benefits of using MuleSoft in different industries and use cases. The book covers the entire lifecycle of transformation: designing API contracts and connectivity, writing Mule applications using DataWeave, using the Anypoint Studio IDE to develop and test (including test-driven development), handling errors and debugging issues, and the recently launched CloudHub 2.0, an orchestrated, containerized integration platform as a service (iPaaS). In summary, whether you are an architect, developer, tester, or business leader, this book has something for you.



As someone who has seen the impact of MuleSoft on multiple organizations’ application landscapes, I was thrilled to see a book like this. It starts from the basics, covering the fundamentals of integration and of MuleSoft in particular, and then dives deep into the various features of the product. It also outlines the architectural perspectives of MuleSoft as it traverses a learner’s journey from novice to experienced to expert proficiency.



I hope you appreciate the book as much as I did. I hope you enjoy working with MuleSoft and are armed with the knowledge required to solve your integration challenges.

Cheers!

Deepak Gupta



New Delhi, India




Preface

Integration of enterprise applications is a complex task that requires a comprehensive understanding of the latest technologies and programming languages. MuleSoft and its supporting tools have become increasingly popular in the field of enterprise application integration.

This book is designed so that both novice and advanced-level readers can refer to it. It will be helpful for all novice readers who wish to start their career in the field of integration using MuleSoft. There are no prerequisites for starting with the contents of the book: the reader may or may not be technically sound and can come from any technology or programming-language background.

The book provides a comprehensive guide to integrating enterprise applications with MuleSoft. It covers a wide range of topics, including the basics of RESTful services, DataWeave, the Anypoint Platform, the Design Center, and Mule RPA, illustrated through use cases. Throughout the book, you will learn about the key features of MuleSoft and how to use them to build integrations that are efficient, reliable, and easy to maintain. You will also learn best practices for writing and integrating APIs, the usage of DataWeave, an overview of CloudHub 2.0, non-functional requirements, and analysis and code coverage by writing MUnit test cases.

This book is intended for everyone who wants to start with MuleSoft and learn how to integrate enterprise applications. It is also helpful for experienced developers who want to expand their knowledge of these technologies and improve their skills in building robust and reliable applications; for experienced professionals, it serves as a ready reckoner. With this book, you will gain the knowledge and skills to become a proficient developer in the field of enterprise integration using MuleSoft. Our intention is to get you ready for the next-generation integration platform using MuleSoft.
Chapter 1: Introduction to the Integration World – touches almost all landscapes of the integration world and is a good starting point for diving in. It explains integration technologies, why integration tools like MuleSoft are required, and a comparative analysis of MuleSoft against related integration technologies. This chapter sets a good background for all who want to see themselves




placed in the world of integration. The chapter also presents a detailed overview of the history of MuleSoft and explains API-led connectivity concepts.

Chapter 2: RESTful World – An Introduction – covers RESTful concepts and the different aspects of the HTTP protocol: statuses, methods, and so on. This chapter provides an understanding of the RESTful world in today’s modern development era. By the end, readers will be able to understand HTTP verbs, methods, and the various statuses. The chapter contains code examples to explain the concepts, but this does not mean you need a development background; the code examples can be skipped. Its main objective is to build a basic understanding of the RESTful world.

Chapter 3: Anypoint Platform – An Introduction – explains all the tools provided by the MuleSoft Anypoint Platform and helps you get ready to start actual development. Advanced-level readers can skip this chapter, skim it, or use it for reference.

Chapter 4: Designing API – describes the concepts of designing API contracts and API-led connectivity. The goal is to establish a standard, structured approach to building APIs that are reusable, scalable, and easy to maintain. You will learn how API contracts define the interface of an API, including input and output data types, expected behavior, and other details, which helps ensure that all parties involved in developing and using the API understand its functionality and can build their applications accordingly. The chapter then moves on to API-led connectivity, where you will see how an organization’s IT systems can be broken down into individual building blocks that can be reused across multiple applications and environments. By the end of this chapter, you will be acquainted with its three layers: Experience, Process, and System.
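As a small taste of the HTTP concepts Chapter 2 introduces, the snippet below (plain Python, not taken from the book) prints a few of the status codes and standard reason phrases you will meet there:

```python
from http import HTTPStatus

# A handful of the HTTP statuses Chapter 2 explains, with their
# standard reason phrases (Python standard library only).
for status in (HTTPStatus.OK, HTTPStatus.CREATED,
               HTTPStatus.NOT_FOUND, HTTPStatus.INTERNAL_SERVER_ERROR):
    print(status.value, status.phrase)
# 200 OK
# 201 Created
# 404 Not Found
# 500 Internal Server Error
```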
Chapter 5: Anypoint Studio – An Inside Story – introduces Anypoint Studio, an Integrated Development Environment (IDE). By the end of this chapter, you will understand the various components of this IDE: the editor, compiler, debugger, and so on.

Chapter 6: An Introduction to Data Weave – shows the basic concepts of DataWeave, covering all aspects of writing and understanding it.
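To give a feel for the kind of payload mapping Chapter 6 builds with DataWeave, here is the same idea sketched in plain Python (illustrative only: the payload and field names are invented, and the book uses DataWeave syntax, not Python):

```python
# A hypothetical order payload, as a Mule app might receive it as JSON.
order = {
    "orderId": 101,
    "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}],
}

# Select and reshape fields, much as DataWeave's single-value and
# multi-value selectors would inside a transform script.
summary = {
    "id": order["orderId"],
    "skus": [item["sku"] for item in order["items"]],
    "totalQty": sum(item["qty"] for item in order["items"]),
}
print(summary)  # {'id': 101, 'skus': ['A1', 'B2'], 'totalQty': 3}
```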




Chapter 7: Developing a Project – Connectors at a Glance – covers the phases involved in an API lifecycle and which components of the Anypoint Platform are involved in each phase. You will learn about connectors and their advantages, and how to create custom connectors and publish them to Exchange.

Chapter 8: Error Handling and Debugging – An Insight Story – covers the various ways of handling errors in Mule, following best practices. You will also learn how to debug a Mule application to troubleshoot application-related issues easily.

Chapter 9: Test-Driven Development Using Munit – explains what test-driven development (TDD) is and the various advantages of following it. You will walk through the steps involved in the TDD approach, the different ways to create MUnit test cases for flows, and the various scopes involved in MUnit test cases. We go through the event processors and matchers that are useful when writing MUnit tests. The chapter also covers how to write MUnit tests using the MUnit Test Recorder and the limitations of the Test Recorder.

Chapter 10: An Overview of NFRs and Mule RPA – covers the implementation of non-functional requirements using API Manager, by applying policies and so on. The chapter also touches on the basics of Mule RPA and explains the strengths of its automation concepts.

Chapter 11: CloudHub 2.0 – An Introduction – covers several aspects of CloudHub 2.0, including the iPaaS services it provides, a comparison between CloudHub 1.0 and CloudHub 2.0, the creation and configuration of private spaces, and an examination of the limitations of CloudHub 2.0.

Chapter 12: Universal API Management – An Introduction – covers Universal API Management, a set of MuleSoft product capabilities that together provide full lifecycle management for APIs deployed anywhere, in any architecture or environment.
You will learn about Flex Gateway, its features, and its various configuration modes. To help enforce standards on APIs and reduce security and compliance risks, the chapter also covers API Governance.
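The test-first rhythm Chapter 9 applies with MUnit can be sketched language-neutrally; the toy function and test below are hypothetical Python stand-ins, not MUnit code:

```python
import unittest

def to_upper_greeting(name: str) -> str:
    # The unit under test, written only after the test below existed --
    # the "green" step of the red-green-refactor cycle.
    return f"HELLO, {name.upper()}"

class GreetingTest(unittest.TestCase):
    # In TDD this test comes first and fails ("red") until the
    # function above is implemented.
    def test_greets_in_upper_case(self):
        self.assertEqual(to_upper_greeting("mule"), "HELLO, MULE")

# Run the test programmatically (equivalent to `python -m unittest`).
suite = unittest.TestLoader().loadTestsFromTestCase(GreetingTest)
result = unittest.TextTestRunner().run(suite)
```

MUnit plays the same role for Mule flows: each test sets up mocks for event processors, runs the flow, and asserts on the resulting payload.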




Code Bundle and Coloured Images Please follow the link to download the Code Bundle and the Coloured Images of the book:

https://rebrand.ly/e0ucbvw The code bundle for the book is also hosted on GitHub at https://github.com/bpbpublications/Enterprise-Integration-with-Mulesoft. In case there’s an update to the code, it will be updated on the existing GitHub repository. We have code bundles from our rich catalogue of books and videos available at https://github.com/bpbpublications. Check them out!

Errata We take immense pride in our work at BPB Publications and follow best practices to ensure the accuracy of our content and provide an engaging reading experience for our subscribers. Our readers are our mirrors, and we use their inputs to reflect on and improve upon human errors, if any, that may have occurred during the publishing processes involved. To help us maintain quality and reach out to any readers who might be having difficulties due to any unforeseen errors, please write to us at [email protected]. Your support, suggestions, and feedback are highly appreciated by the BPB Publications family. Did you know that BPB offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.bpbonline.com, and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details. At www.bpbonline.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on BPB books and eBooks.




Piracy If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author If there is a topic that you have expertise in, and you are interested in either writing or contributing to a book, please visit www.bpbonline.com. We have worked with thousands of developers and tech professionals, just like you, to help them share their insights with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.

Reviews Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions. We at BPB can understand what you think about our products, and our authors can see your feedback on their book. Thank you! For more information about BPB, please visit www.bpbonline.com.

Join our book's Discord space

Join the book's Discord workspace for the latest updates, offers, tech happenings around the world, new releases, and sessions with the authors: https://discord.bpbonline.com




Table of Contents 1. Introduction to the Integration World........................................................... 1 Introduction............................................................................................................... 1 Structure..................................................................................................................... 1 Objectives................................................................................................................... 2 Definition................................................................................................................... 2 Types of Middleware................................................................................................. 2 MuleSoft: history and the idea................................................................................. 5 Why MuleSoft for Integration................................................................................. 6 Introduction to other integration tools.................................................................. 7 Conclusion................................................................................................................. 8 Questions.................................................................................................................... 8 2. RESTful World – An Introduction................................................................. 9 Introduction............................................................................................................... 9 Structure..................................................................................................................... 9 Objectives................................................................................................................. 10 Introduction to RESTful......................................................................................... 
10 Overview of HTTP protocol............................................................................13 Create a secured connection using HTTP.......................................................14 An overview of the URI and its types..............................................................15 HTTP verbs, methods, statuses, and more.......................................................... 19 Approaches to developing RESTful services........................................................ 27 Conclusion............................................................................................................... 30 Questions.................................................................................................................. 30 3. Anypoint Platform – An Introduction..........................................................31 Introduction............................................................................................................. 31 Structure................................................................................................................... 32 Objectives................................................................................................................. 32 An overview of Anypoint platform....................................................................... 33 Anypoint Design Center........................................................................................ 33 Flow designer....................................................................................................33 API designer.....................................................................................................33

xvi

 Open Design Center.........................................................................................33 Anypoint Exchange................................................................................................. 36 Open Anypoint Exchange................................................................................36 Anypoint Management Center.............................................................................. 43 Access Management.........................................................................................43 How do license and subscription works............................................................... 52 Conclusion............................................................................................................... 53 Questions.................................................................................................................. 53

4. Designing API................................................................................................55 Introduction............................................................................................................. 55 Structure................................................................................................................... 55 Objectives................................................................................................................. 56 Designing API contract.......................................................................................... 56 What is a RAML..............................................................................................56 API design in use..............................................................................................57 API-Led contract..................................................................................................... 60 Experience Layer..............................................................................................63 Process Layer....................................................................................................64 System Layer.....................................................................................................64 Correlation between Experience, Process, and System Layer.........................66 Conclusion............................................................................................................... 67 Questions.................................................................................................................. 67 5. Anypoint Studio – An Inside Story...............................................................69 Introduction............................................................................................................. 69 Structure................................................................................................................... 
69 Objectives................................................................................................................. 70 What is an IDE, and why do we need it............................................................... 70 Advantages of IDE............................................................................................70 Anypoint Studio- an introduction........................................................................ 71 Studio Editor....................................................................................................74 Message Flow Tab.............................................................................................74 Global Elements Tab........................................................................................74 Configuration XML tab...................................................................................75 Package Explorer..............................................................................................76



xvii

Mule Palette......................................................................................................76 Properties..........................................................................................................78 Console.............................................................................................................78 Problems...........................................................................................................78 Creating Mule applications from Studio.............................................................. 79 Running Mule applications from Studio.............................................................. 81 Debugging Mule application............................................................................83 Deploying Mule application from studio to CloudHub..................................85 Export documentation from Studio................................................................85 Perspectives..................................................................................................86 Conclusion............................................................................................................... 87 Questions.................................................................................................................. 87 6. An Introduction to Data Weave.....................................................................89 Introduction............................................................................................................. 89 Structure................................................................................................................... 90 Objectives................................................................................................................. 90 Basic concept of Data Weave................................................................................. 
90 Data types.........................................................................................................91 Data selectors...................................................................................................91 Single-value selector.........................................................................................91 Multi-value selector..........................................................................................91 Range selector...................................................................................................91 Index selector....................................................................................................92 Variables...........................................................................................................92 Operators..........................................................................................................93 Functions..........................................................................................................95 Rules to define functions:............................................................................95 Type constraints functions..........................................................................96 Optional parameters functions...................................................................97 Function overloading..................................................................................97 Creating custom modules.................................................................................98 Precedence in DataWeave...................................................................................... 99 Order of chained function calls......................................................................101 Debugging DataWeave...................................................................................102 Debugging DataWeave Online:.....................................................................103


 Data Weave library........................................................................................104 Prerequisites to create Data Weave library..............................................104 Publishing Data Weave library.................................................................106 Conclusion.............................................................................................................109 Questions................................................................................................................109

7. Developing a Project – Connectors at a Glance..........................................111 Introduction...........................................................................................................111 Structure.................................................................................................................111 Objectives...............................................................................................................112 API lifecycle...........................................................................................................112 Phase 1: Design..............................................................................................113 Phase 2: Prototype..........................................................................................114 Phase 3: Validate............................................................................................115 Phase 4: Develop............................................................................................117 Phase 5: Test...................................................................................................118 Phase 6: Deploy..............................................................................................118 Phase 7: Operate.............................................................................................120 Phase 8: Publish.............................................................................................122 Phase 9: Feedback...........................................................................................123 Phase 10: Start Over......................................................................................123 MuleSoft Connectors at a glance.........................................................................124 Advantage of Connectors...............................................................................124 Process to create custom Connectors.............................................................125
Steps to publish Custom Connectors on Anypoint Platform exchange...129 Conclusion.............................................................................................................131 Questions................................................................................................................131 8. Error Handling and Debugging – An Insight Story....................................133 Introduction...........................................................................................................133 Structure.................................................................................................................133 Objectives...............................................................................................................134 Error handling.......................................................................................................134 Classification of errors..........................................................................................135 Handling of errors.................................................................................................135 Try scope.........................................................................................................136




Raise error......................................................................................136 Validation module..........................................................................................137 On-error continue..........................................................................................139 On-error propagate........................................................................................140 Global error handler......................................................................................141 Best practices to define Error Handler...............................................................141 Debugging a Mule application............................................................................142 Debugging a remote Mule application..........................................................145 Conclusion.............................................................................................................147 Questions................................................................................................................147
9. Test-Driven Development Using Munit......................................................149 Introduction...........................................................................................................149 Structure.................................................................................................................149 What and why TDD..............................................................................................150 Advantages of using TDD..............................................................................150 Munit- an introduction........................................................................................150 Munit test suit........................................................................................................152 How to create Munit Test for a Mule flow.....................................................152 Munit test recorder.........................................................................................158 Creating Munits using test recorder...............................................................159 Limitations of using test recorder..................................................................162 Conclusion.............................................................................................................163 Questions................................................................................................................163
10. An Overview of NFRs and Mule RPA..........................................................165 Introduction...........................................................................................................165 Structure.................................................................................................................165 Objectives...............................................................................................................166 Overview to NFRs.................................................................................................166 Importance of NFRs.......................................................................................166 Implement NFRs using Anypoint manager.......................................................167 Use case: Mobile APIs....................................................................................168 An overview of MuleSoft RPA.............................................................................169 Importance of automation...................................................................................170 Conclusion.............................................................................................................171

Questions................................................................................................................171 11. CloudHub 2.0 – An Introduction................................................................173 Introduction...........................................................................................................173 Structure.................................................................................................................173 Objectives...............................................................................................................174 About CloudHub 2.0.............................................................................................174 Shared spaces in CloudHub 2.0................................................................174 Private spaces in CloudHub 2.0................................................................175 Creating Private Spaces.............................................................................175 Terminology changes......................................................................................180 Replicas......................................................................................................181 Clustering..................................................................................................181 Availability and scalability.......................................................................182 Disaster recovery of replica.......................................................................182 Redundancy...............................................................................................182 Zero-downtime updates............................................................................182 Limitations of CloudHub 2.0....................................................................182 Conclusion.............................................................................................................183 
Questions................................................................................................................183 12. Universal API Management – An Introduction..........................................185 Introduction...........................................................................................................185 Structure.................................................................................................................185 Objectives...............................................................................................................186 Why UAPIM..........................................................................................................186 Discover APIs............................................................................................186 Securing the APIs using Flex Gateway..........................................................187 Managing the APIs using API Manager........................................................193 Enforce standards using API governance......................................................194 Creating API marketplace experiences:.........................................................199 Conclusion.............................................................................................................200 Questions................................................................................................................201 Index...................................................................................................................203

Chapter 1

Introduction to the Integration World

Introduction

The world has changed drastically in the last 10 years as we entered the digital era. And what a change it has been! Mankind rapidly moved from the real to the virtual, from books to e-books and Kindles, from working at the office to working from anywhere! All due to connected systems and digital platforms. The most important component of a digital landscape is its "Middleware" or "Integration" layer. Be it streaming platforms like Amazon Prime and Netflix or e-commerce websites like Shopify and Zoho, integrating data from various partners, vendors, and payment gateways requires a well-developed integration engine.

Structure

In this chapter, we will discuss the following topics:

• Definition

• Types of middleware

• MuleSoft: history and the idea

• Why MuleSoft for integration

• Introduction to other available integration tools




Enterprise Integration with Mulesoft

Objectives

This chapter aims to highlight current business and development trends in the industry. It covers the history of MuleSoft, the idea behind the technology, and its core concepts.

Definition

Enterprise Integration is the task of uniting the databases and workflows associated with disparate systems and business applications to ensure a consistent flow of information and provide better insight into organizational data. By merging and optimizing data and workflows across multiple software applications, organizations can integrate disparate systems to support agile business operations.

Types of Middleware

Middleware connects applications, databases, and devices in order to provide unified services to end users. Middleware can also connect applications and systems that were not designed to connect with each other, by providing a gateway. Refer to the following figure:

Figure 1.1: Types of Middleware


Enterprise Service Bus (ESB): ESB is a pattern where a central component, or an integration application, talks to the surrounding applications/systems. The various functions performed by an ESB are data transformation, message routing, conversion of communication protocols, and the management of multiple requests. ESB is based on Service-Oriented Architecture (SOA). Refer to the following figure:

Figure 1.2: Enterprise Service Bus

Application interfaces connect to the ESB, which then executes protocol transformation, message routing, and data transformation. It is a largely automated engine that lets developers spend less time on integration. Most integration interfaces are reusable, enabling cost and effort savings. While ESB is still the preferred middleware in many organizations, it is seen as a bottleneck in a few: making changes in flows is cumbersome and can destabilize surrounding systems, and significant effort is spent on testing after updates to the integration workflows. ESB has proved to be a costlier middleware to maintain in the long run where the volume of integrations is high.

API-based integration: This is a messenger-based style of integration that processes incoming requests while ensuring the seamless functioning of enterprise systems. API-based integration has replaced legacy integration systems due to the growing need for connectivity between disparate systems with uncommon interfaces. Refer to the following figure:


Figure 1.3: API-based Integration

The different types of API based Integrations are:

• Database API: It is used to connect directly with a DBMS so that an application can retrieve data from a database. An example is third-party login on websites.

• Operating system API: These are used to create an emulator of another operating system. An example is the Microsoft API for Macintosh, which enables connection to Microsoft cloud services.

• Remote API: It is used to talk to different machines for information exchange, interacting through communication network protocols.

• REST API: The most popular of them all, it is a Web API developed to extend the functionality of web applications and systems. Most cloud APIs are REST APIs.

The above list includes the most commonly used API types. There are many more APIs that are not as widely used as these.

iPaaS or Integration Platform as a Service: Until a few years ago, the Enterprise Service Bus (ESB) was the most popular middleware. ESB served within a Service-Oriented Architecture, also called SOA. Today, in the cloud-native environment, demand for middleware has shifted to Integration Platform as a Service (iPaaS). Refer to the following figure:


Figure 1.4: Typical iPaaS architecture

An iPaaS platform is a set of pre-built tools and connectors that can integrate disparate systems from varied environments. Along with its tools, connectors, business rules, maps, and built-in code, iPaaS also provides custom development kits for the modernization of applications. iPaaS makes it easy for an organization to connect data, applications, and processes across hybrid cloud environments, and it saves the effort of purchasing, installing, managing, and maintaining middleware. MuleSoft Anypoint Platform is one of the most popular iPaaS platforms in the market. In the current dynamic world of Agile delivery, distributed architecture, and multi-cloud environments, there is a constant need for mature low-code or no-code integration platforms that can consolidate data to give a connected user experience. MuleSoft offers a unified solution for automation, integration, and APIs that quickly adapts to business requirements and complexity.
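The two middleware functions that recur throughout this chapter, message routing and data transformation, can be sketched in a few lines of ordinary code. The sketch below is purely illustrative: the system names, routing table, and record fields are invented for this example and do not correspond to any particular ESB or iPaaS product.

```python
# Illustrative only: the systems, routing table, and record fields below
# are hypothetical, invented to show the idea of routing + transformation.

def transform(order: dict) -> dict:
    """Map a source system's fields onto a canonical form for the target."""
    return {
        "orderId": order["id"],
        "amount": round(order["total"], 2),
        "currency": order.get("currency", "USD"),
    }

def route(message: dict, routes: dict) -> str:
    """Choose a destination system based on the message type."""
    return routes.get(message["type"], "dead-letter-queue")

routes = {"order": "erp-system", "invoice": "billing-system"}

incoming = {"type": "order", "id": 42, "total": 99.999}
destination = route(incoming, routes)   # where the message should go
payload = transform(incoming)           # what the target should receive

print(destination)  # erp-system
print(payload)      # {'orderId': 42, 'amount': 100.0, 'currency': 'USD'}
```

An ESB or iPaaS product wraps these same two ideas in connectors, visual flows, and error handling; the underlying work of canonicalizing payloads and picking destinations is unchanged.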

MuleSoft: history and the idea

MuleSoft is one of the newest players in the world of integration and has grown leaps and bounds in just 15 years. It was founded in 2006 by Ross Mason and Dave Rosenberg. The original name, "Mule," literally comes from the phrase "donkey work," as the platform was founded to remove most of the drudgery from the work of integration developers. In 2009, the name was changed to MuleSoft. MuleSoft was acquired by Salesforce in 2018, and since then, there has been no looking back for the company. MuleSoft is one of the most popular integration platforms, and there is high demand for MuleSoft practitioners. MuleSoft is based on a design-first and API-led approach, and the aim of this book is to make readers heroes in the MuleSoft world irrespective of their prior knowledge of the subject.


Here are a few pieces of evidence to prove the growing demand for MuleSoft in the Industry:

• GitHub activity: There are numerous developers actively participating on GitHub, including MuleSoft’s official page. Reference: https://github.com/mulesoft

• Stack Overflow activity: In the last year, roughly 1M discussions.

Reference: https://stackoverflow.com/questions/tagged/mule

• Relevant YouTube channels: Approx. 1M subscribers across different YouTube channels, including MuleSoft's official channel.

Reference: https://www.youtube.com/channel/UCdKHAtrkG11idCHbuDEfJ5A

• Google Trends: In the past 12 months, a 180% rise in searches related to MuleSoft in India alone.

Reference: https://trends.google.com/trends/explore?q=%2Fg%2F11c3wvpr9q&geo=IN

• Job requirements: During the last quarter (past 3 months), 10,130 job postings have been published. Reference: https://www.naukri.com/mule-soft-jobs

Why MuleSoft for Integration

MuleSoft provides innovative integration to deliver faster, easier, and value-for-money solutions for a wide range of use cases and business scenarios. MuleSoft follows an API-led approach, which means the solution can adapt to new applications, providing reusable APIs whenever the organization is ready to upgrade or scale.

• With API-based applications, it is easier to couple or decouple systems. This helps in keeping flexible bandwidth on the network.

• With reusable APIs, developers can optimize their development time by discovering new ways to access legacy systems, data, and cloud applications.

MuleSoft also provides the following advantages for developers:

• Decreases development time:

It can reduce the time for resolution by efficiently managing all resources from one window.


• Agility:

It can improve agility with an evolving architecture supported by the specific requirements of a business.

• Better innovation:

It can increase innovation and creativity value across the enterprise through tools that enable faster development and implementation of APIs.

• Customer satisfaction and experience:

It helps in increasing customer satisfaction and gaining competitive advantages by delivering a better user experience.

• Increased productivity:

It can reduce development time and increase productivity through open technologies that are capable of promoting modularity and collaboration.

MuleSoft has different connection offerings, many of which are out-of-the-box templates that put your deployment at 80% complete the moment you click INSTALL. All you have to do is select the source and the target, and the data starts flowing. With the efficiency of the API-led solution, a 4GB laptop is sufficient to write code, eliminating the need for costly devices.

Introduction to other integration tools

MuleSoft is, of course, not a single integration tool in the industry; there are more integration tools in the industry that compete with MuleSoft. Following are the top 11 most trending integration tools as of the date (at the time of writing of this book): • IBM API Connect (https://www.ibm.com/cloud/api-connect)

• IBM Cloud Pak for Integration (https://www.ibm.com/cloud/cloud-pak-for-integration)

• Azure API Management (https://azure.microsoft.com/en-in/services/api-management/)

• Dell Boomi (https://boomi.com/)

• Jitterbit (https://www.jitterbit.com/)

• Apigee (https://cloud.google.com/apigee)

• Zapier (https://zapier.com/)

• Cleo Integration Cloud (https://www.cleo.com/cleo-integration-cloud)


• Celigo (https://www.celigo.com/)

• Software AG webMethods (https://www.softwareag.com/en_corporate/platform/integration-apis.html)

• Workato (https://www.workato.com/)

How to choose the right integration tool: Choosing the right tool can be a tough task. Every tool has good features and limitations. The best tool should provide ease of development and save productive hours. A comparative PoV should be run considering business requirements, business size, use cases, security, and budget. A piece of expert advice always helps.

Conclusion

In this chapter, we have learned the definition of middleware and its types, the history of MuleSoft, and the benefits of MuleSoft. In the next chapter, we will understand more about REST APIs and RESTful architecture.

Questions

1. What is Enterprise Application Integration (EAI)?

2. What are the types of Middleware?

3. What is Enterprise Service Bus (ESB)?

4. Why pick MuleSoft for integration?

5. Name and explain three integration tools other than MuleSoft.

Join our book's Discord space

Join the book's Discord Workspace for Latest updates, Offers, Tech happenings around the world, New Release and Sessions with the Authors: https://discord.bpbonline.com

Chapter 2

RESTful World – An Introduction

Introduction

Modern development requires a great deal of interaction between servers and clients, often carrying huge amounts of information. This interaction should be hassle-free, which calls for a framework that fits this basic need: a consistent way of transferring data between servers and clients, that is, between web interfaces (UI) and backend frameworks. The mechanism should also work irrespective of the programming language used. The most popular and widely known formats are XML and JSON, which are supported by almost every framework. This, simply put, is the RESTful world.

Structure

In this chapter, we will discuss the following topics:

• Introduction to RESTful

• HTTP verbs, methods, statuses, and more

• Different approaches to developing RESTful services


Objectives

This chapter covers RESTful concepts along with the HTTP protocol, its statuses, methods, and more, providing an understanding of the RESTful world in today's development era. By the end, readers will be able to understand HTTP verbs, methods, and the various statuses. The chapter contains code examples to explain the concepts, but this does not mean you need a development background; the code examples can be skipped. The main objective of this chapter is to gain a basic understanding of the RESTful world.

Introduction to RESTful

RESTful services are built on the REST architectural style, laid over a RESTful framework. The RESTful framework is the most popular and well-defined approach for developing services/APIs. It is worth noting here that REST stands for Representational State Transfer. So what, then, are RESTful services? RESTful services, or simply REST services, are web services that use the REST architectural style to provide a standard way to access and manipulate web resources.

Tip: REST services use HTTP methods (such as GET, POST, PUT, and DELETE) to perform operations on resources identified by URIs.

Refer to the following figure:

Figure 2.1: Pictorial overview of RESTful services


The preceding Figure 2.1 is a pictorial overview of a RESTful service, which simply shows that a service has GET, POST, PUT, and DELETE methods. These methods refer to the intended operations inside the service. The service can be called/consumed from any language, e.g., Java, Python, ASP.NET, or React. The data exchanges, or requests and responses, between the service and the caller (aka the consumer) use JSON or XML. Now, let us understand the REST architectural style. The REST architectural style is a set of constraints that provide guidelines for designing scalable, reliable, and maintainable web services. Refer to the following figure:

Figure 2.2: Layered diagram of REST architectural style

The preceding Figure 2.2 is a simple layered diagram of the REST architectural style. A typical representation consists of layers, with the client on the top layer, the server on the bottom layer, and intermediate layers for load balancing, caching, and security. The layers interact through the standardized interface defined by the REST constraints, allowing for scalable and maintainable web services. The REST architecture defines the following six constraints:

• Client-server architecture: The client and server are separated into different components that interact through a standardized interface.

• Statelessness: The server does not store any client context between requests. Each request from a client must contain all the information necessary to understand the request, allowing the server to be stateless.


• Cacheability: Clients can cache responses, reducing network traffic and improving scalability.

• Layered system: The system is divided into layers, with each layer responsible for a specific aspect of the system. This allows the system to be scalable, reliable, and maintainable.

• Code on demand (optional): Servers can send executable code to clients, allowing them to dynamically extend their capabilities.

• Uniform interface: The REST architectural style defines a uniform interface between components, allowing components to interact with each other through a standardized set of operations.

Tip: Roy Fielding defined REST in his Ph.D. dissertation, which is available at: https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm
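The statelessness constraint is the easiest one to see in code. In the hypothetical sketch below (the token table stands in for real authentication), the handler is a pure function of the incoming request: everything it needs, including who the caller is, arrives inside the request itself, and the server keeps no per-client session between calls.

```python
# A stateless request handler, illustrative only: the token table below
# stands in for real authentication.

USERS_BY_TOKEN = {"token-abc": "alice", "token-xyz": "bob"}

def handle_request(request: dict) -> dict:
    """Serve one request using only the data it carries; no session state."""
    user = USERS_BY_TOKEN.get(request.get("auth_token"))
    if user is None:
        return {"status": 401, "body": "Unauthorized"}
    return {"status": 200, "body": f"Hello, {user}"}

# Two independent requests; the server remembers nothing in between.
print(handle_request({"auth_token": "token-abc"}))  # {'status': 200, 'body': 'Hello, alice'}
print(handle_request({}))                           # {'status': 401, 'body': 'Unauthorized'}
```

Because the function depends only on its argument, any copy of the server can handle any request, which is exactly why statelessness makes scaling out easy.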

Further to the above, a RESTful framework is a set of guidelines and standards for creating scalable, reliable, and maintainable REST services. It provides a structure for designing and implementing RESTful APIs that adhere to the constraints of the REST architectural style, such as statelessness, client-server architecture, and a uniform interface. For example, some of the popular RESTful frameworks include:

• Flask (Python)

• Spring (Java)

• Express (JavaScript/Node.js)

• Laravel (PHP)

These frameworks often provide features such as routing, request handling, and response generation, as well as tools for data validation, security, and testing. Using a RESTful framework can help streamline the development of RESTful APIs and ensure that they are built with best practices in mind.

Note: You will be learning MuleSoft in the coming chapters, so it is worth clarifying that MuleSoft is not a RESTful framework; it is a platform for building, integrating, and managing APIs and microservices. MuleSoft provides a range of tools and technologies for creating, deploying, and managing APIs and microservices, including support for RESTful services. It offers a powerful, flexible, and scalable platform for building and deploying RESTful services, making it a popular choice for organizations that need to integrate multiple applications and services. However, MuleSoft is not a dedicated RESTful framework like Django REST Framework or Flask-RESTful, and it offers more than just RESTful services.
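The routing and request-handling features these frameworks provide can be boiled down to a table that maps an HTTP method and a path to a handler function. The toy dispatcher below is illustrative only; the handler names and route table are invented and do not mirror any real framework's API.

```python
# A toy routing table, illustrative only: real frameworks add URL
# parameters, middleware, content negotiation, and much more.

def list_books(_request):
    return 200, ["Clean Code", "SICP"]

def create_book(request):
    return 201, f"created {request['body']}"

ROUTES = {
    ("GET", "/books"): list_books,
    ("POST", "/books"): create_book,
}

def dispatch(method: str, path: str, body=None):
    """Look up the handler for (method, path) and invoke it."""
    handler = ROUTES.get((method, path))
    if handler is None:
        return 404, "Not Found"
    return handler({"body": body})

print(dispatch("GET", "/books"))           # (200, ['Clean Code', 'SICP'])
print(dispatch("POST", "/books", "Dune"))  # (201, 'created Dune')
print(dispatch("DELETE", "/books"))        # (404, 'Not Found')
```

Flask's `@app.route(...)` decorator, Spring's `@GetMapping`, and Express's `app.get(...)` are all richer versions of this same lookup.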


REST is stateless. It works on a serve-and-forget principle: whenever a request comes to the server, the server serves the request and forgets it. When the next request arrives, whether or not it comes from the same source, it does not depend on the state of any previous request, so requests are handled independently by the server. A big question, then, is how each request finds the required resources on the web server. The answer is the Uniform Resource Identifier (URI). An HTTP request carries a URI, and this identifier helps locate the specified resource on the web server. Put simply, every resource on the web is assigned a unique identifier (URI). To make these resources reachable, the most common form used on the web is the Uniform Resource Locator (URL).

Note: When we say that HTTP (Hypertext Transfer Protocol) is a stateless protocol, it means that the server does not maintain any state about the client's previous requests. Each request from the client to the server is treated as an independent and separate transaction, without any knowledge of previous requests. In other words, when a client sends a request to the server, the server processes the request and sends back a response, but it does not keep track of any information about the client's previous requests or interactions with the server. This allows the server to handle requests from multiple clients in a more efficient and scalable manner. To maintain state across multiple HTTP requests, web applications typically use techniques such as cookies, session tokens, or URL rewriting to pass state information back and forth between the client and the server.

Overview of HTTP protocol

Before we discuss the various types of URI, let us first understand how an HTTP connection works. HTTP is a request-response protocol, which means that the client sends a request to the server, and the server responds with a response.

Tip: HTTP stands for HyperText Transfer Protocol and is the foundation of communication on the World Wide Web.

Here is a brief overview of how the HTTP/HTTPS protocol works:

• The client initiates a request: A client, typically a web browser, initiates an HTTP request by establishing a TCP connection to a server, specifying the URL of the resource it wants to access and the type of request it wants to make (e.g., GET, POST).


• The server responds: The server receives the request, processes it, and sends back a response message, which includes a status code and the requested resource (e.g., an HTML page or an image).

• Connection is closed: After the server sends the response, the connection between the client and server is closed unless the client requests a persistent connection.

• The browser renders the page: The client receives the response and renders the page in the browser.
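The steps above can be exercised end to end with nothing but the Python standard library: a throwaway local server stands in for a real web server, and an `http.client` connection plays the browser's role (minus the rendering step). This is a minimal sketch, not production server code.

```python
# A complete request-response round trip using only the standard library.
# The throwaway local server below stands in for a real web server.
import http.client
import http.server
import threading

class HelloHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Hello</body></html>"
        self.send_response(200)                        # status line: 200 OK
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                         # the requested resource

    def log_message(self, *args):                      # keep the demo quiet
        pass

# Start the server on any free port (port 0), in a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# 1. The client opens a TCP connection and sends a GET request.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/index.html")

# 2. The server responds with a status code and the resource.
response = conn.getresponse()
status = response.status
content = response.read().decode()
print(status)   # 200
print(content)  # <html><body>Hello</body></html>

# 3. The connection is closed (a browser would now render the page).
conn.close()
server.shutdown()
server.server_close()
```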

Create a secured connection using HTTP

HTTP is an unsecured protocol, which means that data is transmitted in plaintext and can be intercepted by anyone with access to the network. HTTPS (HTTP Secure) is a secure version of HTTP that uses encryption to protect the communication between the client and the server.

• Use HTTPS: HTTPS (HTTP Secure) is an extension of HTTP that encrypts the communication between the client and server using SSL/TLS. It ensures that the data being exchanged is secure and cannot be intercepted by a third party.

• Use SSL/TLS: Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic protocols that are used to secure HTTP communication. They provide encryption, data integrity, and authentication, making it difficult for attackers to eavesdrop on the communication.

• Use authentication: Authentication is the process of verifying the identity of the client or server. It ensures that only authorized parties can access the resources. Authentication can be done using various methods such as username and password, certificates, tokens, and biometrics.

• Use encryption: Encryption is the process of converting plain text into cipher text that can be read only by the intended recipient. It ensures that the data being exchanged is secure and cannot be intercepted by a third party. Encryption can be done using various algorithms such as AES, RSA, and DES.

• Use firewalls: Firewalls are network security systems that monitor and control incoming and outgoing network traffic. They can be configured to allow or block specific types of traffic based on predefined rules. Firewalls can help prevent unauthorized access and protect against network-based attacks.

RESTful World – An Introduction




An overview of the URI and its types

URI stands for Uniform Resource Identifier, which is a string that identifies an Internet name or resource. URIs provide a way to locate and access resources on the Internet.

Note: For simplicity, we used Postman to test APIs. You can download Postman from https://www.postman.com/downloads/. However, you can use any tool, editor, or IDE to test the sample APIs used in the examples. We have used free or open APIs to explain the topics below:

• OpenWeatherMap: This API provides current and forecast weather data for any location in the world. Refer to https://openweathermap.org/api.

• REST Countries: This API provides information about countries, including their flags, currency, and language. Refer to https://restcountries.com/#apiendpoints-v3-all.

• Random User Generator: This API generates random user data, including name, address, and profile picture. Refer to https://randomuser.me/

The URI is of the following types:

• URL stands for Uniform Resource Locator and refers to a web resource by specifying its location and a retrieval mechanism. It is a way to identify and locate resources on the Internet, such as web pages, pictures, and files. A URL is like a path to a website, for example www.gauravarora.com, and it has the following parts/components:

o Scheme: The protocol or scheme used to access the resource, such as HTTP, HTTPS, FTP, etc.

o Authority: The authority component contains information about the server hosting the resource. It typically includes the domain name or IP address of the server.

o Path: The path component describes the location of the resource on the server's file system.

o Query: The query component is used to specify parameters to be passed to the server.

o Fragment: The fragment component specifies a specific section of the resource to be displayed.




To understand the URL components in more detail, refer to the following code example:

https://restcountries.com/v3.1/name/India?fullText=true#name

where:

https – is the scheme
restcountries.com – is the authority
/v3.1/name/India – is the path
fullText=true – is the query parameter
name – is the fragment
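The breakdown above can be reproduced programmatically. The following is a minimal sketch using Python's standard urllib.parse module, applied to the same sample URL:

```python
from urllib.parse import urlparse, parse_qs

# Split the sample URL from above into its components.
url = "https://restcountries.com/v3.1/name/India?fullText=true#name"
parts = urlparse(url)

print(parts.scheme)           # the scheme: https
print(parts.netloc)           # the authority: restcountries.com
print(parts.path)             # the path: /v3.1/name/India
print(parse_qs(parts.query))  # the query parameters: {'fullText': ['true']}
print(parts.fragment)         # the fragment: name
```

Note that urlparse calls the authority component `netloc`; the names differ slightly from the generic URI terminology, but the pieces map one-to-one.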

• URN stands for Uniform Resource Name, which is a type of URI. Unlike URLs, which are used to locate a resource on the web, URNs are used to identify a resource without any information about its location. In other words, a URN provides a unique name for a resource that can be used to reference it regardless of where it is located. This makes URNs particularly useful for the persistent identification of resources such as books, academic papers, or other types of content that might be moved or renamed over time. URNs have a specific syntax, which includes a namespace identifier followed by a specific identifier for the resource.

For example, the following is a URN:

urn:isbn:0451450523



In this example, urn indicates that this is a URN, isbn is the namespace identifier, and 0451450523 is the identifier for the specific resource. URNs can be used in a variety of contexts, such as for academic papers, digital libraries, or digital object identifiers. However, they are not as commonly used as URLs, which are used to locate resources on the web.
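The namespace identifier and resource identifier can be pulled apart with a simple split on the colon. The following sketch assumes a well-formed URN; a production parser would follow the full URN syntax rules:

```python
# A URN has the form urn:<namespace-identifier>:<namespace-specific-string>.
def split_urn(urn: str):
    scheme, nid, nss = urn.split(":", 2)
    if scheme.lower() != "urn":
        raise ValueError("not a URN")
    return nid, nss

namespace, identifier = split_urn("urn:isbn:0451450523")
print(namespace, identifier)  # isbn 0451450523
```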

• Data Uniform Resource Identifier (URI) is a type of URI scheme that allows developers to include data as a URI. This means that data can be embedded directly into HTML, CSS, JavaScript, or any other document format that supports URIs. Data URIs are typically used to embed images, videos, and other binary data into web pages, reducing the number of HTTP requests needed to load a page.


A Data URI consists of four parts:

o Scheme: The scheme of a Data URI is always data.

o MIME type: The MIME type of the data being embedded. This tells the browser what kind of data is being embedded, such as image/jpeg for a JPEG image, text/plain for plain text, etc.

o Optional parameters: Data URIs can include optional parameters such as character encoding, which is specified using the charset parameter. For example, charset=UTF-8 specifies that the data is encoded in UTF-8.

o Data: The actual data being embedded, encoded as base64 or raw binary data.
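The four parts can be assembled programmatically. The following is a minimal sketch in Python; the payload here is a stand-in byte string, not a real JPEG:

```python
import base64

def to_data_uri(data: bytes, mime: str = "image/jpeg") -> str:
    # base64-encode the raw bytes and prepend scheme and MIME type.
    encoded = base64.b64encode(data).decode("ascii")
    return f"data:{mime};base64,{encoded}"

uri = to_data_uri(b"hello")
print(uri)  # data:image/jpeg;base64,aGVsbG8=
```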

Consider the following code block (the base64 payload is shown truncated; the full string runs to several kilobytes):

data:image/jpeg;base64,UklGRtYGAABXRUJQVlA4WAoAAAA4AAAANgAAJQAASUNDUKgBAAA…

Where;  The scheme is data.

 The MIME type is image/jpeg.



 There are no optional parameters







To use Data URI in the HTML document, simply include the source of an image tag or as the content of CSS property; refer following code blocks:

The data is encoded using base64 and begins with UklGRtYGAABXRUJQVl…

//html

//css url('data:image/jpeg;base64,UklGRtY…')

• URC stands for Uniform Resource Citation and is a type of URI that is used to identify scholarly works such as articles, books, and other publications. URCs typically include information about the author, title, publisher, and other metadata about the work.


HTTP verbs, methods, statuses, and more

HTTP specifies how messages should be formatted, transmitted, and interpreted between client and server applications. It uses a combination of HTTP verbs, methods, and statuses to communicate between the client and server.

HTTP verbs

HTTP verbs, also known as HTTP request methods, are used by the client to request an action from the server. The following are the commonly used HTTP verbs:

• GET is an HTTP verb that is used to retrieve data from a server. The GET request is a safe and idempotent method, meaning that it can be called any number of times without changing the server's state. The GET method is used to retrieve data from a specific resource, and the response is typically in JSON, XML, or HTML format.

The following are the steps to use Postman:



1. Open Postman and create a new request by clicking the New button in the top-left corner.



2. In the request URL field, enter the URL of the resource you want to retrieve. For example, if you want to retrieve a list of all users from the server, the URL might be https://randomuser.me/api/.



3. Select the HTTP method GET from the dropdown menu next to the URL field.



4. Click the Send button to send the request to the server.



5. If the request is successful, the server will respond with the requested data in the response body. You can view the response body in the Body tab of the Postman response pane.




Refer to the following figure:

Figure 2.3: Making a GET call to API using Postman
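The same GET request can also be expressed in code. The following sketch uses Python's standard urllib to build the request object; it is constructed but not sent, so no network access happens here. To actually send it, you would pass the object to urllib.request.urlopen():

```python
from urllib.request import Request

# Build a GET request for the same endpoint used in the Postman example.
req = Request("https://randomuser.me/api/", method="GET")

print(req.get_method())  # GET
print(req.full_url)      # https://randomuser.me/api/
```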

• POST method is used to send data to a server to create or update a resource. The data sent to the server is included in the body of the request. The HTTP POST method is often used to submit HTML form data to a server.

The following are the steps to send a POST request using Postman:



1. Open Postman and create a new request by clicking on the New button.



2. Select the HTTP method as POST.



3. Enter the API endpoint URL in the address bar.



4. In the Body tab, select the raw option and choose the data format you want to send in the request, such as JSON or XML.

5. Enter the data you want to send in the request body.



6. Click the Send button to send the POST request.



In JavaScript, POST requests are commonly made with the Fetch API, a newer and more powerful way to make HTTP requests. The fetch function returns a Promise that resolves to the Response object representing the HTTP response.


The following is an example of using fetch in React to make a POST request with JSON data:

fetch('https://restcountries.com/v3.1/name/India?fullText=true#name', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    name: 'India'
  })
})
.then(response => response.json())
.then(data => console.log(data))
.catch(error => console.error(error));



In this example, fetch is called with the URL to POST data to, along with an object containing options for the request. The method is set to POST, and the Content-Type header is set to application/json. The body of the request is a JSON stringified object with key-value pairs. The response is converted to JSON and logged to the console, and errors are also logged to the console.

• PUT is an HTTP verb used to update an existing resource on a web server. The resource can be a file, an image, a video, or any other type of data stored on the server. When using PUT, the entire resource is updated with the new data. This means that if a property is not included in the request, it will be set to null or empty on the server.
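The replacement semantics described above can be sketched with a plain dictionary standing in for the stored resource (a simplified illustration, not a real server): PUT replaces the whole representation, so fields missing from the request are lost, whereas a PATCH-style merge keeps them.

```python
stored = {"name": "India", "capital": "New Delhi", "population": 1400000000}

def apply_put(resource: dict, payload: dict) -> dict:
    # PUT: the entire resource is replaced with the payload.
    return dict(payload)

def apply_patch(resource: dict, payload: dict) -> dict:
    # PATCH-style merge: only the supplied fields change.
    return {**resource, **payload}

after_put = apply_put(stored, {"name": "India"})
after_patch = apply_patch(stored, {"name": "Bharat"})
print(after_put)    # capital and population are gone
print(after_patch)  # capital and population survive
```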

Here is an example of a PUT request using Postman:

1. Open Postman and create a new request.

2. Set the HTTP method to PUT.

3. Enter the URL of the resource you want to update.

4. In the Headers section, add any necessary headers, such as Content-Type, to specify the type of data being sent.

5. In the Body section, select the raw option and choose the data format you will be sending (e.g., JSON, XML).

6. Enter the new data for the resource in the body.

7. Send the request.

Refer to the following figure:

Figure 2.4: A sample request for PUT using PostMan



The preceding figure shows that we are sending JSON data to update a country with nativename values in eng and hin.

• DELETE is an HTTP method used to request the deletion of a resource identified by a given URI. This method is typically used to delete data stored on a server or to remove a resource from a website. For example, if we need to delete the record of a user named Korkeavuorenkatu, use the following steps to test the API using Postman:

1. Open Postman and create a new request.

2. Set the request method to DELETE.

3. Enter the URL you want to send the DELETE request to in the address bar (e.g., https://randomuser.me/api/user/Korkeavuorenkatu).

4. Hit the Send button to send the request.


Refer to the following figure:

Figure 2.5: Sample for DELETE API call using PostMan



Once the request is sent, the server will delete the resource (in this case, the user with the name Korkeavuorenkatu) and send back a response to confirm the deletion.



HTTP status codes that may be returned for a DELETE request include:

o 200 OK: If the resource was successfully deleted.

o 202 Accepted: If the request was accepted for processing, but the deletion has not been completed.

o 204 No Content: If the resource was successfully deleted and there is no response body.

o 404 Not Found: If the resource to be deleted was not found.

o 405 Method Not Allowed: If the HTTP method is not allowed for the requested resource.

HTTP methods

HTTP methods are similar to HTTP verbs, but they are more specific and describe the type of action that is being performed. The commonly used HTTP methods are:

• CONNECT: is used to establish a network connection to a web server. It is used in conjunction with the HTTP Upgrade header to create a persistent TCP/IP tunnel between the client and the server. This method is primarily used for proxy servers to establish a secure SSL/TLS connection to a remote server on behalf of the client.




The following Node.js example demonstrates the CONNECT method:

const https = require('https');

const options = {
  hostname: 'www.gaurav-arora.com',
  port: 443,
  method: 'CONNECT',
};

const req = https.request(options);

req.on('connect', (res, socket) => {
  socket.write('GET / HTTP/1.1\r\n' +
    'Host: www.gaurav-arora.com:443\r\n' +
    'Connection: close\r\n' +
    '\r\n');

  socket.on('data', (chunk) => {
    console.log(chunk.toString());
  });

  socket.on('end', () => {
    console.log('Request complete');
  });
});

req.end();

In the preceding code example, we are using the https module to establish a connection to the www.gaurav-arora.com server on port 443 using the CONNECT method. Once the connection is established, we send a regular HTTP request over the tunnel to get the root page of the website. The response from the server is then logged to the console.

• TRACE: It is used to retrieve a diagnostic trace of the actions performed by an HTTP request/response. When a TRACE request is received by a server, the server echoes back the received request to the client, allowing the client to see what is being received by the server.




Here is an example of using the TRACE method in a request sent through Postman:



1. Open Postman and create a new request.



2. Set the request URL to the endpoint you want to trace, for example: https://randomuser.me/api/.



3. Change the HTTP method to TRACE.



4. Send the request.



Refer to the following figure:

Figure 2.6: Sample TRACE API call using PostMan

• HEAD: is similar to GET, but it only returns the response headers without the response body. It can be used to retrieve meta-information about a resource without transferring the entire resource representation. This can be useful for reducing network bandwidth and improving performance.

The following code example demonstrates the HEAD method with fetch:

fetch('https://randomuser.me/api/', {
  method: 'HEAD',
})
.then(response => {
  console.log('Headers:', response.headers);
})
.catch(error => {
  console.error('Error:', error);
});


In this example, we are making a HEAD request to https://randomuser.me/api/. The response object contains only the headers of the resource, which we are logging to the console. If there is an error in the request, we catch it and log the error to the console.

• OPTIONS: It is used to determine the HTTP methods and other options that are supported by a web server for a particular URL. It can be used by clients to determine which specific methods are allowed on a resource before sending a request. The OPTIONS method sends a request to the server for a particular resource and retrieves the HTTP headers that describe the allowed methods and other capabilities of the server. The response typically includes a list of the supported methods, such as GET, POST, PUT, DELETE, etc.

Note: Not all servers may support the OPTIONS method, and some may only return a subset of the available methods. It’s also important to ensure that the server is properly configured to handle OPTIONS requests and return the appropriate headers.

Refer to the following figure:

Figure 2.7: Sample OPTION API call using PostMan
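An OPTIONS response advertises the permitted methods via the Allow response header. The following is a small sketch that parses such a header value into a list; the header string here is a hypothetical example, not taken from a real response:

```python
def parse_allow(header_value: str) -> list[str]:
    # Split the comma-separated Allow header and normalize each method name.
    return [m.strip().upper() for m in header_value.split(",") if m.strip()]

allowed = parse_allow("GET, POST, OPTIONS")
print(allowed)  # ['GET', 'POST', 'OPTIONS']
```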

HTTP statuses

HTTP status codes are used by the server to indicate the result of a client request. The commonly used HTTP status classes are:

• 1xx (Informational): The request has been received and is continuing to process.

• 2xx (Successful): The request was successful, and the server has returned the requested data.

• 3xx (Redirection): The request was redirected to another location.

• 4xx (Client Error): The request was unsuccessful due to a client error, such as a bad request or unauthorized access.


• 5xx (Server Error): The request was unsuccessful due to a server error, such as a timeout or an internal server error.
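The class of a status code is determined by its first digit, which the groupings above reflect. A minimal sketch:

```python
def status_class(code: int) -> str:
    # Integer division by 100 yields the leading digit of the status code.
    classes = {
        1: "Informational",
        2: "Successful",
        3: "Redirection",
        4: "Client Error",
        5: "Server Error",
    }
    return classes.get(code // 100, "Unknown")

print(status_class(200))  # Successful
print(status_class(404))  # Client Error
print(status_class(503))  # Server Error
```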

Approaches to developing RESTful services

This section touches on the core of developing RESTful services. Our aim is not to go through the entire development life cycle, but we do need to understand the various approaches to developing RESTful services.

• Manual approach: This involves writing the code from scratch using a programming language like Java, Python, or Node.js. This approach requires a good understanding of the REST architectural style and the programming language being used.

• Framework-based approach: This involves using a framework that provides support for building RESTful services. Popular frameworks include Spring Boot for Java, Django for Python, and Express.js for Node.js. This approach can significantly reduce the amount of code that needs to be written, as the framework handles many of the details of implementing RESTful services.

• Low-code approach: This involves using a low-code platform that provides a visual interface for building RESTful services. Examples of such platforms include Salesforce Lightning Platform, Microsoft Power Apps, and Google App Maker. This approach requires little to no coding, making it accessible to non-developers.

• API-first approach: This involves designing the API for the RESTful service before any code is written. Tools like Swagger or OpenAPI can be used to create a detailed specification for the API, which can then be used to generate the code required to implement the API. This approach can help ensure that the API is well-designed and meets the needs of the consumers of the service.

The following code examples demonstrate the code in various frameworks/languages:

• React: In React, you can use the Axios library to make RESTful API calls. Here's an example code snippet for making a GET request:

import axios from 'axios';

axios.get('https://randomuser.me/api/')
  .then(response => {
    console.log(response.data);
  })
  .catch(error => {
    console.log(error);
  });

• Java/Spring MVC: Spring MVC is a popular framework for building RESTful services in Java. Here is an example code snippet for defining a REST endpoint that returns a list of users:

@RestController
@RequestMapping("/")
public class UserController {

    @GetMapping
    public List<User> getUsers() {
        // Code to fetch list of users from database or other data source
        return userList;
    }
}

• Python/Flask: Flask is a lightweight web framework that can be used to build RESTful services in Python. Here’s an example code snippet for defining a REST endpoint that returns a list of users:

from flask import Flask, jsonify



app = Flask(__name__)



@app.route('/', methods=['GET'])



def get_users():



# Code to fetch list of users from database or other data source return jsonify(user_list)


• ASP.NET Web API: ASP.NET Web API is a framework for building RESTful services in .NET. Here is an example code snippet for defining a REST endpoint that returns a list of users:

[Route("users")]
public class UserController : ApiController
{
    [HttpGet]
    public IHttpActionResult GetUsers()
    {
        // Code to fetch list of users from database or other data source
        return Ok(userList);
    }
}

• Node.js/Express.js: Express.js is a popular framework for building RESTful services in Node.js. Here's an example code snippet for defining a REST endpoint that returns a list of users:

const express = require('express');
const app = express();

app.get('/users', (req, res) => {
  // Code to fetch list of users from database or other data source
  res.send(userList);
});

app.listen(3000, () => {
  console.log('Server listening on port 3000');
});

The preceding code examples are provided for reference, to give an understanding of each approach.


Conclusion

In this chapter, we have gone through the basic building blocks of the RESTful world. The chapter explained the HTTP protocol, verbs, methods, and statuses. We also covered the stateless concept, followed by the various ways of securing HTTP requests. The chapter then explained the various components and types of URI. In the next chapter, we will discuss how APIs work on the Anypoint platform, starting from the introduction through to implementation.

Questions

1. Explain HTTP protocol.
2. What do you mean by Stateless?
3. What is the difference between URL and Data URI?
4. What is the difference between HEAD and GET?
5. What are HTTP Statuses?

Join our book's Discord space

Join the book's Discord workspace for the latest updates, offers, tech happenings around the world, new releases, and sessions with the authors: https://discord.bpbonline.com

Chapter 3

Anypoint Platform – An Introduction

Introduction

In Chapter 2, RESTful world – an introduction, we built our foundation by going through RESTful services. Just as the adoption of services strengthens RESTful APIs, the adoption of API-led connectivity strengthens MuleSoft-based applications. To proceed further, we should have a basic understanding of the HTTP protocol, including all the concepts that we discussed.

Note: If you are coming from an altogether non-programming background, or are a newbie in the technology world, make sure you have gone through and understood all the basic concepts discussed in Chapter 2, RESTful world – an introduction.

The World Wide Web was an innovation of the 20th century that spread like wildfire, changing the world of technology forever. APIs and their capabilities are believed to be the next iteration of the web. Just as the web can be produced and consumed by anyone, APIs can be produced and consumed by any organization or individual. What needs to be understood is how integration APIs can be developed to accelerate your organization and business needs. Businesses today are constantly looking for new ways to transform their organizations by using technology and data to drive innovation and business results. Organizations typically have a scattered system landscape consisting of legacy and multi-cloud systems and processes. The goal of a developer is to integrate them to form a seamless integrated system.

MuleSoft Anypoint Platform is a complete solution for API-led connectivity. It is an integration platform which comes with enriched tools that, in simple words, provide a facility to design and develop APIs.

MuleSoft Anypoint Platform is the engine which helps you deliver connected experiences. In this chapter, we will discuss the different parts of the Anypoint platform and learn more about it.

Structure

In this chapter, we will discuss the following topics:

• An overview of the Anypoint platform
• Design Center
• Exchange
• Management Center
  o API Manager
  o Data Gateway
  o Runtime Manager
  o MQ
  o Visualizer
  o Monitoring
  o Secret Manager
• How do licenses and subscriptions work

Objectives

After studying this chapter, you should be able to understand all the tools provided by the MuleSoft Anypoint Platform and make yourself ready to start the actual development process. Advanced readers can skim this chapter or use it for reference purposes.


An overview of Anypoint platform

The Anypoint platform, powered by MuleSoft, is a unified iPaaS solution for managing the end-to-end API life cycle. The platform is used to design, build, deploy, run, monitor, as well as govern APIs in Mule. In the coming sections, we will discuss and learn the various components and their functions offered by the Anypoint Platform.

Anypoint Design Center

Anypoint Design Center is one of the essential components offered by the Anypoint Platform to design APIs. Anypoint Design Center mainly consists of the following:

Flow designer

Flow designer is a web-based low-code tool to develop Mule applications that connect to systems in the form of workflows.

API designer

It is a web-based tool where one can define API specifications and fragments. Once they are ready, they can be shared at the organization level. To share at an organization level, specifications and fragments need to be published to Anypoint Exchange. Once published in Exchange, one can test the specification with mocked data and show the end user how the API is going to work before starting the implementation.

Open Design Center

We discussed in previous sections that Design Center is a part of the Anypoint Platform. One can reach the Design Center by following these steps:

1. Log in to Anypoint using existing credentials from the link: https://anypoint.mulesoft.com/login/. The following Figure 3.1 depicts the Anypoint login screen:


Figure 3.1: Sign in to Anypoint Platform

Note: You may notice there is a hyperlink Use custom domain in Figure 3.2 – it belongs to an Enterprise License; we will discuss this in the coming chapters.

2. In case you are accessing the Anypoint platform for the first time, please click on Sign up from the screen or use this link: https://anypoint.mulesoft.com/login/signup. See Figure 3.2:

Figure 3.2: Sign up to Anypoint Platform


3. After a successful login, you will reach the Anypoint Platform Home screen. Figure 3.3 depicts the home screen; however, you may view different options depending upon the licensed or non-licensed version.

Figure 3.3: Anypoint Platform Home

4. Click on the Start designing button from the Anypoint Platform Home screen; if you have permission, you will see the project listing screen.

To access the Design Center, one should have been assigned the Design Center permission.

Figure 3.4: Design Center Projects




In the preceding image, you can find a listing of all the projects as per your permissions. These projects are either owned by you or collaborated on with other owners. The screen has the following options available:

5. Filter or search projects from the projects table.

6. The projects list table contains the following columns:

• Name: It is the name of your API specification project. We will discuss this in detail in Chapter 4, Designing API.

• Project Type: It is the type of your project. We will discuss all the different available types in the coming chapter during our discussion of designing an API.

• Last Update: It is the date when the contributors of this project last modified the project.

• Created By: It is the name of the user who created the project.

Anypoint Exchange

Anypoint Exchange is a place where one can find the assets that are published at the organization level and the assets that are built by MuleSoft. Assets can be templates, connectors, custom policies, fragments, APIs, and DataWeave libraries, which can be reused in any number of APIs. This is the place where one can publish the API specifications and fragments created using Anypoint Design Center, so that your assets are available for others in your business organization.

Open Anypoint Exchange

To open the Anypoint Exchange dashboard/landing page, users should have the permission to access assets. To reach Anypoint Exchange, follow the steps below:

1. Log in to Anypoint using the existing credentials from the link: https://anypoint.mulesoft.com/login/ (refer to Figure 3.1).

2. In case you are accessing the Anypoint platform for the first time, please click on Sign up from the screen or use this link: https://anypoint.mulesoft.com/login/signup (refer to Figure 3.2).


3. After a successful login, you will reach the Anypoint Platform Home screen. Figure 3.3 depicts the home screen; however, you may view different options depending upon the licensed or non-licensed version.

4. Click on the Discover & Share button from the Anypoint Platform Home screen; if you have the permission, you will see the Anypoint Exchange screen, as depicted in Figure 3.5:

Figure 3.5: Anypoint Exchange

The preceding image depicts the Anypoint Exchange screen; you may get a different view as per your access/permission levels. To access Anypoint Exchange, one should have been assigned to Exchange with the valid/required permissions. As shown in Figure 3.5, Anypoint Exchange can be visualized in three parts, as detailed below:

• Title or header bar: The topmost area (a black bar in Figure 3.6) contains the name of the organization Business Group where you have logged in and are viewing all the assets (e.g., as per the image, it is my Org). You can switch between different organization Business Groups if you have access to more than one Business Group. Next to the Business Group, you can find the Exchange Help icon, as depicted in Figure 3.6, which is self-explanatory:




Enterprise Integration with Mulesoft

Figure 3.6: Exchange Help



We are not going to discuss this in detail. After that, you will see your username (in the image, it is GA); clicking on it gives you access to the Profile page and the Privacy Policy.

Figure 3.7: User Menu


The preceding Figure 3.7 depicts user menu options like:

• Profile: when you click on it, you are redirected to the Access Management screen, which is essentially the user profile screen (Figure 3.8).

The Profile Settings tab provides the facility to configure multi-factor authentication (MFA), reset your password, and change the default Environment for the currently selected Business Group of the Organization.

Figure 3.8: Profile Settings

The Contact Info tab allows you to change your basic contact information, such as first name, last name, phone number, and email. Refer to Figure 3.9:

Figure 3.9: Contact Info


The Membership tab provides the facility to search the details of your teams and your roles in those teams. Refer to Figure 3.10:

Figure 3.10: Membership

The Connected Apps tab provides the details of all the applications connected to your profile; for example, you may have set up multi-factor authentication by adding a Microsoft connector. Refer to Figure 3.11 (currently, there is no app connected):

Figure 3.11: Connected Apps

• Privacy Policy will redirect you to a new page: https://www.salesforce.com/company/privacy/. MuleSoft is now part of the Salesforce family and follows a common company-wide Privacy Policy.

• Sign Out is self-explanatory from its label: clicking it signs you out and redirects you to the login page at https://anypoint.mulesoft.com/login/signin.

The left bar contains all the assets available to you; these assets are either built by you or your collaborators, or provided by MuleSoft, as visualized in Figure 1.4.

• Organization’s assets: lists all the assets from the current organization (my Org in our case).


• Provided by MuleSoft: all assets available to you that are developed by MuleSoft or its associates/partners.

• Shared with me: provides a list of all assets that others have shared with you.

My applications: as the name suggests, this lists all the applications belonging to you. Refer to Figure 3.12:

Figure 3.12: My applications

Each client application can have a client ID, client secret, description, URL, redirect URIs, grant types, and a usage dashboard. The usage view lets you set the reporting duration; if not specified, the range is the previous eight days. It lists the total number of requests to the API, the average latency, the error rate, and graphs for total requests, requests by HTTP status code, and error percentage. The applications in My Applications are registered using API Manager in Anypoint Platform. To register a client application to an existing API or API Group, the client application must first request access. When the request is approved by the API or API Group owner, a contract is created between the client application and the API or API Group, and the client application is registered.

Public portal: clicking on it redirects you to a public URL (no login required); public assets can be published at the organizational level. The public portal’s URL in our case looks like: https://anypoint.mulesoft.com/exchange/portals/myorg.

• Search: this part of the Exchange screen provides the facility to search for and navigate to the available assets. Refer to Figure 3.13:


Figure 3.13: Search Exchange assets

You can perform searches for all types of assets, as detailed below:

• Connectors
• Templates
• Examples
• Policies
• API Groups
• REST APIs
• SOAP APIs
• DataWeave Libraries
• Async APIs
• HTTP APIs
• API Spec Fragments
• Custom

Assets published at the organization level are private and cannot be accessed by the outside world. Exchange also helps you create API portals and view and test APIs using the mocking service.


Anypoint Management Center

Management Center is the place where the organization can manage APIs, users, and partners, analyze API performance, and monitor APIs using various subcomponents.

Figure 3.14: Anypoint Management Center

Anypoint Management Center contains the below components:

Access Management

Anypoint Access Management can be used to perform the following functions:

• Create business groups and configure access and permissions for the associated users
• Invite people to create an Anypoint Platform account
• Define roles
• Define Runtime Manager environments
• Verify the subscription
• Configure external identity systems


Anypoint Access Management also provides REST APIs to connect programmatically and access information related to users, organizations, business groups, client management, roles, and permissions. Refer to the following figure:

Figure 3.15: Anypoint Access Management

API Manager

API Manager is used to manage and secure APIs. Using API Manager:

• We can apply policies to an API.
• We can define the SLA for an API.
• We can add alerts to an API.
• We can collect and track analytics data.

API Manager is tightly coupled with Anypoint Exchange, Runtime Manager, Anypoint Designer, and Anypoint Studio. Once you design an API using Anypoint Designer and publish it in Exchange, it becomes visible in API Manager.


During the implementation of an API in Anypoint Studio, we can enable the auto-discovery feature so that API Manager can track API changes whenever they are modified, published, or deployed. To access API Manager, log in to Anypoint Platform and click API Manager in the main menu. Refer to the following figure:

Figure 3.16: API Manager

Switch environments to see the details of the active APIs. In the Navigation Menu:

• API groups: allows you to create an API Group by combining more than one API instance into a single unit.
• Automated policies: apply policies to every API that is deployed in an environment. Once automated policies are configured, there is no need to add those policies explicitly to each API instance.
• Client applications: external services that consume APIs.
• Custom policies: can be developed by anyone and applied to APIs.
• DataGraph: unifies the data of all the APIs in an application network. This requires a separate license.
• Analytics: provides details of how the APIs are performing, how many requests an API receives, and so on.


Runtime Manager

Using Runtime Manager, APIs can be deployed, managed, and monitored in different environments such as Sandbox, QA, and Production. To access Runtime Manager, log in to Anypoint Platform and click Runtime Manager in the main menu. Refer to the following figure:

Figure 3.17: Runtime Manager

You can see which APIs are deployed and on which runtime version. Runtime Manager can be used in the below scenarios:

CloudHub Deployment

• As part of this deployment, Mule APIs are deployed on the MuleSoft-hosted cloud environment; that is, both the control plane and the runtime plane are managed by MuleSoft for your application. This is an iPaaS service that meets all server requirements without any configuration or customization on your side.
• API logs can be accessed from the Runtime Manager console log for troubleshooting.
• For APIs deployed on CloudHub, we can use Insight for analytics and monitoring.
• To deploy an API to MuleSoft CloudHub, the user needs to enter the Application Name and select CloudHub as the Deployment Target. The user can select the number of vCores and workers required and the Mule runtime version as per the API deployment requirements. The number of vCores available for deployment depends on the MuleSoft license. Refer to the following figure:

Figure 3.18: Overview of Deploy Application

Hybrid Deployment/PCE (Private Cloud Edition)

• On-premises servers host the app deployment; use the Runtime Manager cloud console to manage them. For analytics, the data should be sent to third-party analytics tools/applications.
• On-premises servers need to be added to Runtime Manager as shown below. From the navigation, select the Servers option and use the Add Server button to add an on-premises server.
• We can create a Server Group or a Cluster as well.
• The server should be in the Started state to deploy APIs. Refer to the following figure:

Figure 3.19: Pictorial overview of Hybrid Deployment


Anypoint Runtime Fabric

• Runtime Fabric instances host the app deployment; use the Runtime Manager cloud console to manage those APIs.
• A Runtime Fabric instance needs to be added to Runtime Manager as shown in the below screen.
• Once you click Create Runtime Fabric, you will get the option of various instance types using which you can create one. Refer to the following figure:

Figure 3.20: Overview of Runtime Fabric deployment

Data Gateway

• Data Gateway supports integration with legacy systems such as Oracle, SAP, MySQL, and MS SQL Server. Using Salesforce Lightning Connect and Data Gateway, Salesforce users can extract data from legacy systems from within Salesforce.
• Data Gateway can be accessed through Anypoint Platform or installed from Salesforce AppExchange using the Salesforce package. Refer to the following figure:

Figure 3.21: Anypoint Data Gateway


Anypoint MQ

• Anypoint MQ is the messaging system offered by MuleSoft, which enables asynchronous messaging between applications.
• Anypoint MQ is integrated with Anypoint Platform, so you can access MQ from the platform; it also offers role-based access and connectors.
• Anypoint MQ cannot be monitored from Anypoint Monitoring. We need to use the Anypoint MQ usage graphs, accessible from the Access Management screen, to check how MQ is performing.
• Anypoint MQ can be accessed from the Anypoint Platform main menu as shown below. You can create a queue by clicking the ‘+’ symbol shown in the below screen. Refer to the following figure:

Figure 3.22: Anypoint MQ
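To give a feel for how a Mule application talks to Anypoint MQ, here is a minimal sketch using the Anypoint MQ connector. The queue name, property placeholders, and regional URL are illustrative assumptions, not values taken from this book's examples:

```xml
<!-- Connection to Anypoint MQ; the client ID/secret would come from the
     queue's Client Apps registration (placeholder properties shown) -->
<anypoint-mq:config name="Anypoint_MQ_Config">
  <anypoint-mq:connection url="https://mq-us-east-1.anypoint.mulesoft.com/api/v1"
                          clientId="${mq.client.id}"
                          clientSecret="${mq.client.secret}"/>
</anypoint-mq:config>

<!-- Publish the current payload asynchronously to a queue -->
<flow name="publish-order">
  <anypoint-mq:publish config-ref="Anypoint_MQ_Config" destination="orders-queue"/>
</flow>
```

A matching consumer flow would use the connector's subscriber source on the same queue, letting the two applications exchange messages without being available at the same time.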

Anypoint Visualizer

• Anypoint Visualizer shows different aspects of the application network graph. It provides a real-time graphical representation of the APIs and how they interact with each other. It also shows the third-party systems that are invoked by Mule APIs.
• Using this graph, we can review the architecture, identify flaws, and see how the architecture can be improved for performance. This data is not visible to everyone; only users with the additional permissions can see these graphs.
• Anypoint Visualizer can be accessed from the Anypoint Platform main menu.


Refer to the following figure:

Figure 3.23: Anypoint Visualizer

• The connection between nodes represents the traffic between the APIs; these calls include both inbound and outbound traffic. If an application is deployed on multiple workers, it shows up as two different nodes.
• We can identify redundant APIs through these graphs, as well as which APIs carry more load and where delays occur.
• Anypoint Visualizer collects data from deployed Mule APIs and proxies on Mule runtime instances to identify the incoming and outgoing calls of each API.

Anypoint Monitoring

• Anypoint Monitoring is the standard tool for monitoring Mule applications. It provides feedback from Mule flows and components, as well as details of how an API is performing.
• Developers use the monitoring tools to troubleshoot and diagnose issues that occur in API calls.
• Anypoint Monitoring provides ways to create dashboards and alerts by configuring thresholds on various metrics, such as response time and the number of requests received by an API.


• Custom dashboards can be created to expose metrics for the application network on a single screen for monitoring. Alerts can be triggered based on events and thresholds. These dashboards can be viewed as charts, graphs, and so on. Refer to the following figure:

Figure 3.24: Anypoint Monitoring

Secrets Manager

• Anypoint Secrets Manager is used as a secure vault to store secret keys, passwords, and certificates.
• Secrets Manager can be accessed from the Anypoint Platform main menu; however, you need permission to access it.
• The secrets can be accessed only by authorized platform services.
• Secrets Manager stores secrets in secret groups. These groups are associated with an environment and a business group.
• You can configure which secrets are accessed by Anypoint Platform services and control which supported platform services are authorized to access these secrets.




Secrets Manager can store the below secret types:

o TLS certificates
o Keystore
o Truststore
o Certificates

The below shared secrets can also be stored in Secrets Manager; these are not shared with Anypoint Platform services. Shared secrets are used for encryption and decryption and are known by both the message sender and the receiver.

o Passwords
o Symmetric keys
o S3 credentials
o Blob

How license and subscription work

MuleSoft licensing is subscription-based, on an annual basis; the license must be renewed every year. You can take a basic subscription and, based on the organization’s needs, add extra components or upgrade to the next subscription level. There are three plans available from MuleSoft:

• Gold: includes core features such as Mule Runtime, Exchange, Design Center, Select and Community category connectors, Anypoint Management Center, and unlimited API portals.
• Platinum: along with the core features available in the Gold subscription, includes advanced enterprise-level deployment features such as High Availability, Business Groups, deployment support (Hybrid, Cloud, RTF), Anypoint MQ, and security.
• Titanium: along with the Gold and Platinum features, includes mission-critical features such as customizable log retention, distributed log management, transaction tracing, and data analysis. Priority support is available in this subscription.


Conclusion

Anypoint Platform is an end-to-end framework where one can design, deploy, manage, and monitor APIs. It is one of the best iPaaS solutions for integrating scattered systems.

• Anypoint Platform Design Center gives developers tools to design, build, and test APIs.
• Anypoint Exchange is the marketplace for connectors, templates, and prebuilt APIs.
• Management Center is the place where the organization can manage APIs, users, and partners, analyze API performance, and monitor APIs using various subcomponents.

In the next chapter, we will get familiar with RAML concepts and design our first API.

Questions

1. Why are integration tools important in today’s organizational landscape?
2. What is the MuleSoft Anypoint Platform?
3. What are the different offerings/parts of the MuleSoft Anypoint Platform?

Join our book's Discord space

Join the book's Discord workspace for the latest updates, offers, tech happenings around the world, new releases, and sessions with the authors: https://discord.bpbonline.com

Chapter 4

Designing API

Introduction

Designing a contract lays the foundation of API development and API-led connectivity. The aim of this chapter is to get you started with API design.

Structure

In this chapter, we will discuss the following topics:

• Designing API contract
• API-Led contract
  o Introduction to experience layer
  o System layer
  o Process layer


Objectives

The objective of this chapter is to describe the concepts of designing API contracts and API-led connectivity. The goal is to establish a standard, structured approach to building APIs that are reusable, scalable, and easy to maintain. You will learn how API contracts define the interface of an API, including input and output data types, expected behavior, and other details, which helps ensure that all parties involved in developing and using the API understand its functionality and can build their applications accordingly. This is followed by API-led connectivity, where you will see how an organization’s IT systems can be broken down into individual building blocks that can be reused across multiple applications and environments. By the end of this chapter, you will be acquainted with the three layers: Experience, Process, and System.

Designing API contract

Before we start discussing how to design the API contract, let us first discuss what an API contract is. An API contract is an agreement between the provider of an API (sometimes known as the host or server) and its users (called consumers/end-users). It defines the rules and requirements for communication between the API provider and its consumers, including the data formats, message structures, and operations supported by the API. In Mule, an API contract is generally designed using the OpenAPI Specification (OAS) or RAML, widely adopted standards for describing APIs. The contract is a machine-readable description of the API, which can be used to automatically generate documentation, code libraries, and other tools for API consumers. An API contract in Mule can be designed using Anypoint API Designer (see Chapter 3, Anypoint Platform – An Introduction, for Design Center), which provides a graphical interface for defining the endpoints, methods, parameters, and responses of an API. The resulting OAS or RAML file can then be imported into Anypoint Studio for implementation using Mule flows and connectors.

What is RAML

RESTful API Modeling Language, or RAML, is a YAML-based language used for defining APIs. It is a simple and concise language that allows developers to define the resources, methods, parameters, and responses of an API in a structured manner. RAML is designed to be both human-readable and machine-readable, making it easy for developers and tools to understand and work with.


RAML provides a powerful set of features, such as data types, resource types, traits, and includes, that allow developers to define reusable components for their APIs. RAML also supports annotations, which can be used to add metadata to an API definition, such as documentation, security information, and other details. Using RAML, developers can create a complete API definition, including all of the necessary documentation and examples, which can then be used by tools to generate code, documentation, and tests. RAML is widely used in the API development community and is supported by a variety of tools and platforms.
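As an illustration of these reuse features, the fragment below declares a data type and a trait and applies them to a resource. The API name, the Book type, and the pageable trait are invented for this sketch, not part of this book's sample API:

```raml
#%RAML 1.0
title: Library API
version: v1

types:
  Book:
    type: object
    properties:
      id: integer
      title: string

traits:
  pageable:
    queryParameters:
      offset?: integer
      limit?: integer

/books:
  get:
    # Reuses the pageable trait and the Book data type
    is: [pageable]
    responses:
      200:
        body:
          application/json:
            type: Book[]
```

Because `Book` and `pageable` are declared once, every resource that needs pagination or that returns books can reference them instead of repeating the definitions.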

API design in use

Designing an API contract is done using RESTful API Modeling Language (RAML). RAML is a language for designing RESTful APIs, which provides a simple and concise way to describe the structure of a RESTful web API.

Note: Design Center is a web-based graphical user interface that allows you to design APIs and integrations in MuleSoft’s Anypoint Platform.

To design an API contract using Design Center, follow these steps:

• Log in to your Anypoint Platform account and open Design Center.

• Click on the Create new project button and give your project a name. • Select the API specification option and click on the Create button.

• In the Design tab, define your API’s resources, methods, and responses.

• In the Implement tab, you can implement the API’s logic in MuleSoft’s Anypoint Studio.

• In the Test tab, you can test your API by sending requests and viewing responses.

Here is a sample RAML API contract designed in Design Center:

#%RAML 1.0
title: Sample API
version: v1
baseUri: http://api.sample.com/{version}

/types:
  get:
    description: Retrieve all types
    responses:
      200:
        body:
          application/json:
            example: |
              [
                { "id": 1, "name": "Type 1" },
                { "id": 2, "name": "Type 2" }
              ]
  post:
    description: Create a new type
    body:
      application/json:
        example: |
          { "name": "New Type" }
    responses:
      201:
        body:
          application/json:
            example: |
              { "id": 3, "name": "New Type" }

/types/{id}:
  get:
    description: Retrieve a type by ID
    responses:
      200:
        body:
          application/json:
            example: |
              { "id": 1, "name": "Type 1" }
  put:
    description: Update a type by ID
    body:
      application/json:
        example: |
          { "name": "Updated Type" }
    responses:
      200:
        body:
          application/json:
            example: |
              { "id": 1, "name": "Updated Type" }
  delete:
    description: Delete a type by ID
    responses:
      204:
        description: Type deleted successfully

In this example, we define a simple API for managing types. We have two resources: /types and /types/{id}. The /types resource supports GET and POST methods for retrieving all types and creating a new type, respectively. The /types/{id} resource supports GET, PUT, and DELETE methods for retrieving, updating, and deleting a type by ID, respectively. We also provide example payloads for the request and response bodies.
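To see the contract from a consumer's point of view, this is roughly what a create-type exchange against the /types resource would look like on the wire (host and payloads taken from the contract's examples; the exact headers are illustrative):

```http
POST /v1/types HTTP/1.1
Host: api.sample.com
Content-Type: application/json

{ "name": "New Type" }

HTTP/1.1 201 Created
Content-Type: application/json

{ "id": 3, "name": "New Type" }
```

Because both sides agree on this shape in advance, the consumer can be built and tested against a mocked endpoint before the provider's implementation exists.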

API-Led contract

An API-led contract is an approach to designing APIs in Mule 4.0 that emphasizes the creation of well-defined and reusable APIs. It involves breaking down an organization’s IT architecture into a set of APIs that can be consumed independently by different systems or applications. The API-led contract involves creating three layers of APIs: System APIs, Process APIs, and Experience APIs. System APIs are the most granular level of APIs, representing the individual functions and services of the organization’s underlying systems. Process APIs combine several System APIs to support a specific business process, while Experience APIs are the front-end APIs that define the user experience.


Can you create an API contract without using Anypoint Studio?

Yes, you can create an API contract in Mule 4.0 without using Anypoint Studio by manually creating the contract file(s) in the desired format, such as RAML or OpenAPI, and then using Mule tools or third-party tools to generate the API implementation code based on the contract. However, Anypoint Studio provides a user-friendly graphical interface for creating and managing API contracts, which can make the process faster and easier.

An API-led contract ensures that APIs are designed in a way that is flexible, reusable, and scalable. It helps promote consistency across the organization’s APIs and reduces duplication of effort in API development. The objective of API-led connectivity in Mule 4.0 is to provide a standardized approach to connecting applications, data, and devices in a way that is reusable, scalable, and easy to manage. The API-led approach divides connectivity into three distinct layers: experience, process, and system. The API-led contract is the agreement between these layers that defines how data is passed between them. It ensures that each layer has a clear understanding of the information being exchanged and how it should be processed.

The main goal of the API-led contract is to create a standardized, reusable interface between applications and systems, which makes it easier to maintain and manage the overall architecture. By defining clear contracts between each layer, teams can work independently and deliver faster, as they do not need to worry about the implementation details of other layers. The API-led contract ensures that each layer can operate independently while still being part of a larger system.

For example, consider an e-commerce website that needs to connect with a payment gateway to process transactions. The experience layer is responsible for handling the user interface, which includes displaying the payment options to the user. The process layer is responsible for validating the payment and generating the necessary authorization requests. The system layer is responsible for communicating with the payment gateway and processing the authorization response.

In this scenario, the API-led contract defines the format of the data that needs to be exchanged between the three layers. The experience layer sends a request to the process layer, which then forwards it to the system layer. The system layer processes the request and sends a response back to the process layer, which then forwards it to the experience layer.


By defining a clear API-led contract between these layers, it becomes easier to maintain and manage the overall architecture, as each layer can operate independently and evolve at its own pace without affecting the other layers.

In Mule 4.0, the API-led connectivity approach is used to design and implement APIs. This approach involves breaking down an API into three layers: the Experience, Process, and System layers. The correlation between these layers can be explained using the example of a banking application.

The Experience Layer in the banking application provides the user interface through which the user interacts with the application; for example, a user could log in, check their account balance, or transfer funds. This layer is responsible for managing the user experience and providing a user-friendly interface. The Experience Layer uses an API contract to define the data that can be accessed and the operations that can be performed.

The Process Layer handles the business logic of the banking application; for example, it validates user input, performs data transformations, and enforces security policies. This layer uses the API contract defined by the Experience Layer to expose the required data and functionality to the System Layer.

The System Layer is responsible for integrating with the back-end systems of the banking application; for example, it interacts with the core banking system to retrieve account information, process transactions, and manage user authentication. This layer uses the API contract defined by the Process Layer to access and manipulate the required data.

The correlation between these layers can be seen in the flow of data and functionality through them. The Experience Layer defines the API contract, which is then used by the Process Layer to provide the required functionality. The Process Layer, in turn, uses this API contract to expose the required data and functionality to the System Layer, which interacts with the back-end systems to retrieve or manipulate data.

For example, let us say a user wants to transfer funds from one account to another using the banking application. The Experience Layer provides the user interface through which the user enters the details of the transfer, such as the amount and the account numbers. This information is validated and transformed by the Process Layer before being sent to the System Layer. The System Layer then interacts with the core banking system to process the transfer and provide a response to the user through the Process Layer and the Experience Layer.


In summary, the Experience, Process, and System Layers in Mule 4.0 are closely correlated, with each layer building on the API contract defined by the layer above it. The separation of concerns provided by this approach allows for greater flexibility and scalability in API design and implementation.

Experience Layer

In Mule 4.0, the Experience Layer refers to the layer responsible for delivering a consistent and user-friendly experience to the end-users of the API. This layer typically consists of API portals, documentation, and client SDKs.

A use case for the Experience Layer could be the development of an API portal for a banking API. The API portal would provide documentation on the various endpoints of the API, as well as examples of how to use the API with various programming languages. It could also include a user-friendly interface for end-users to interact with the API, such as a dashboard for viewing account information or making transactions.

Here is how to implement an API portal in Mule 4.0:

1. Start by creating a new Mule project in Anypoint Studio and add a new RAML file. Define the API endpoints in the RAML file and add annotations to describe the expected request and response formats.
2. Use the API Console component in Mule to generate an interactive API portal based on the RAML file. The API Console component will automatically generate documentation for the API endpoints, including sample requests and responses.
3. Customize the API portal by adding custom styles and branding. Use the API Portal component in Mule to create custom pages and add additional functionality, such as user authentication and role-based access control.
4. Publish the API portal to a public URL or a private network for end-users to access. You can also generate client SDKs for various programming languages to make it easier for developers to interact with the API.

Overall, the Experience Layer in Mule 4.0 provides a powerful set of tools for building user-friendly and accessible APIs. By leveraging these tools, developers can create a seamless and intuitive experience for end-users and accelerate the adoption of the API.
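The API Console mentioned in these steps is typically wired in through APIkit. A minimal sketch of the relevant configuration, where the config names and the api.raml file name are assumptions for illustration, could look like:

```xml
<!-- Routes incoming requests against the RAML contract -->
<apikit:config name="api-config" api="api.raml"/>

<flow name="api-main">
  <http:listener config-ref="HTTP_Listener_config" path="/api/*"/>
  <apikit:router config-ref="api-config"/>
</flow>

<!-- Serves an interactive console generated from the same RAML -->
<flow name="api-console">
  <http:listener config-ref="HTTP_Listener_config" path="/console/*"/>
  <apikit:console config-ref="api-config"/>
</flow>
```

Here the router and the console share one APIkit configuration, so the console documentation always reflects the contract the router actually enforces.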


Process Layer

In Mule 4.0, the Process Layer is responsible for implementing the actual business logic of the API. This layer contains the core Mule application logic that processes data, applies transformations, and executes actions based on the API request and response.

A use case for the Process Layer could be a payment processing system where the API receives payment requests from various sources and performs the necessary actions to process the payment. In this case, the Process Layer would contain the logic to verify the payment details, check for fraud, apply any discounts or taxes, charge the customer’s account, and generate receipts.

Code example: here is an example of a Mule 4.0 flow that implements the Process Layer of a simple API that receives a request payload containing two numbers and returns their sum:
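A flow along those lines, sketched here with an assumed HTTP listener configuration name, might look like:

```xml
<flow name="add-numbers-flow">
  <!-- Expose POST /add; HTTP_Listener_config is assumed to be defined elsewhere -->
  <http:listener config-ref="HTTP_Listener_config" path="/add" allowedMethods="POST"/>
  <!-- Business logic: add number1 and number2 from the JSON payload -->
  <set-payload value="#[output application/json --- { sum: payload.number1 + payload.number2 }]"/>
</flow>
```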





In this example, the API is designed to receive a POST request at the /add endpoint, which contains a JSON payload with two numbers (number1 and number2). The set-payload component applies the transformation logic to add the two numbers and sets the resulting value as the new payload, which is returned as the API response. This logic represents the core of the API’s business logic and is a simple example of the Process Layer in action.

System Layer

In Mule 4.0, the System Layer represents the underlying systems, databases, and applications that need to be integrated. It is responsible for interacting with external systems and services and transforming data into a format that can be understood by the Process and Experience Layers.

A common use case for the System Layer in Mule 4.0 is when a company has multiple legacy systems that need to be integrated into a new application. The System Layer can handle the integration with these legacy systems by providing APIs that expose the functionality of the legacy systems without requiring the entire system to be replaced or re-engineered.

Code example: here is an example of how the System Layer can be used in Mule 4.0 to integrate with a legacy system:
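A sketch of such an integration, assuming a Database connector configuration named Database_Config and an illustrative customers table (both names invented for this example), might look like:

```xml
<flow name="get-customers-flow">
  <!-- Experience-facing endpoint -->
  <http:listener config-ref="HTTP_Listener_config" path="/customers" allowedMethods="GET"/>
  <!-- System Layer: read from the legacy database -->
  <db:select config-ref="Database_Config">
    <db:sql>SELECT id, name FROM customers</db:sql>
  </db:select>
  <!-- Shape the rows into JSON for the caller -->
  <set-payload value="#[output application/json --- payload]"/>
</flow>

<flow name="create-customer-flow">
  <http:listener config-ref="HTTP_Listener_config" path="/customers" allowedMethods="POST"/>
  <!-- System Layer: write into the legacy database -->
  <db:insert config-ref="Database_Config">
    <db:sql>INSERT INTO customers (name) VALUES (:name)</db:sql>
    <db:input-parameters>#[{ name: payload.name }]</db:input-parameters>
  </db:insert>
  <set-payload value='#[output application/json --- { status: "created" }]'/>
</flow>
```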











In this example, the System Layer is represented by the db:select and db:insert components, which interact with a legacy database. The http:listener component in each flow represents the Experience Layer, which exposes the API to the outside world. The set-payload component is used to transform the data into a format that can be understood by the Experience Layer.


The System Layer can also be used to interact with other external systems, such as web services or APIs. By providing APIs that expose the functionality of these external systems, Mule 4.0 can help to integrate these systems into a cohesive whole, allowing data to flow seamlessly between different systems and applications.

Correlation between Experience, Process, and System Layer

In Mule 4.0, the API-led connectivity approach is used to design and implement APIs. This approach involves breaking down an API into three layers: Experience, Process, and System. The correlation between these layers can be explained using the example of a banking application.

The Experience Layer in the banking application would provide the user interface through which the user interacts with the application; for example, a user could log in, check their account balance, transfer funds, and so on. This layer is responsible for managing the user experience and providing a user-friendly interface. The Experience Layer would use an API contract to define the data that can be accessed and the operations that can be performed.

The Process Layer would handle the business logic of the banking application. For example, it would validate user input, perform data transformations, and enforce security policies. This layer would use the API contract defined by the Experience Layer to expose the required data and functionality to the System Layer.

The System Layer would be responsible for integrating with the back-end systems of the banking application. For example, it would interact with the core banking system to retrieve account information, process transactions, and manage user authentication. This layer would use the API contract defined by the Process Layer to access and manipulate the required data.

The correlation between these layers can be seen in the flow of data and functionality through them. The Experience Layer defines the API contract, which is then used by the Process Layer to provide the required functionality. The Process Layer, in turn, uses this API contract to expose the required data and functionality to the System Layer, which interacts with the back-end systems to retrieve or manipulate data.
For example, let’s say a user wants to transfer funds from one account to another using the banking application. The Experience Layer would provide the user interface through which the user can enter the details of the transfer, such as the amount and the account numbers. This information would be validated and transformed by the
Process Layer before being sent to the System Layer. The System Layer would then interact with the core banking system to process the transfer and provide a response to the user through the Process Layer and Experience Layer.

In summary, the Experience, Process, and System Layers in Mule 4.0 are closely correlated, with each layer building on the API contract defined by the layer above it. The separation of concerns provided by this approach allows for greater flexibility and scalability in API design and implementation.
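The transfer described above can be sketched as a Process Layer flow that validates and reshapes the request before delegating to a System API (all configuration names, flow names, and paths below are illustrative, not from the book):

```xml
<flow name="transfer-funds-papi-flow">
    <!-- Request arrives from the Experience Layer -->
    <http:listener config-ref="papiHttpListenerConfig" path="/transfers" allowedMethods="POST"/>
    <!-- Process Layer: normalize and validate the transfer request -->
    <ee:transform>
        <ee:message>
            <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
{
    fromAccount: payload.fromAccount,
    toAccount: payload.toAccount,
    amount: payload.amount as Number
}]]></ee:set-payload>
        </ee:message>
    </ee:transform>
    <!-- System Layer call: the System API wraps the core banking system -->
    <http:request method="POST" config-ref="coreBankingSapiConfig" path="/core/transfers"/>
</flow>
```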

Conclusion

In this chapter, we have gone through the basic concepts of the API contract and API layers. We understood why the contract is required and how to design it, and we got an overview of the Experience API (eAPI), Process API (pAPI), and System API (sAPI) layers, followed by the correlation between all three. In the next chapter, we will learn about DataWeave, an essential skill for every MuleSoft developer. We will learn the importance of DataWeave and see how we can implement DataWeave scripts in a Mule project.

Questions

1. What is RAML?

2. What do you mean by API Contract?

3. Can we design an API contract without Anypoint Studio?

4. Why do we need the Process Layer?

5. What is the importance of the System Layer?

Join our book's Discord space

Join the book's Discord workspace for the latest updates, offers, tech happenings around the world, new releases, and sessions with the authors: https://discord.bpbonline.com

Chapter 5

Anypoint Studio – An Inside Story

Introduction

Anypoint Studio is an Integrated Development Environment (IDE) for developing and testing Mule applications. An IDE is a software application that programmers and developers use to write code for other software applications. Along with development, developers can compile, build, package, and deploy the code on a server. In earlier days, developers used to write code in a text editor and run it in a separate compiler; if any errors occurred during compilation, the developer had to go back and make changes in the text editor. This process consumed a lot of development time.

Structure

In this chapter, we will discuss the following topics:

• What is an IDE, and why do we need it

• Anypoint Studio – an introduction

• Creating Mule applications from Studio

• Running Mule applications from Studio


Objectives

In this chapter, you will understand Anypoint Studio, which is an Integrated Development Environment (IDE). By the end of this chapter, you will gain an understanding of the various components of this IDE, viz. the editor, compiler, debugger, and so on.

What is an IDE, and why do we need it

An Integrated Development Environment, or IDE, provides a playground for developers to write, debug, and test code. For instance, for Mule development, we use Anypoint Studio as the IDE. Generally, an IDE provides:

• Editor

• Compiler

• Debugger

In other words, an IDE is a software application that allows us to write, compile, and debug code in different programming languages. Advanced IDEs also provide features like syntax highlighting and code auto-completion.

Advantages of IDE

The following are some of the advantages offered by an IDE:

• It provides a user-friendly interface that helps developers write code and identify syntax errors easily, which eventually reduces debugging time.

• We can deploy applications locally and run them.

• We can debug the application to troubleshoot failures while it is running.

• It reduces the complexity related to configuration.

• It generates the documentation of the application.


Anypoint Studio – an introduction

Anypoint Studio is an Eclipse-based IDE to build, compile, test, and deploy Mule applications. Its features enhance developer productivity. Below are the various features of Anypoint Studio:

• Studio comes with an embedded Mule runtime, which is useful for deploying and testing applications locally.

• Mule Palette is useful for dragging and dropping the connectors and components in the flow.

• Studio is integrated with Anypoint Exchange to download templates, modules, and other resources easily.

• From Studio, we can import API specifications from the Anypoint Platform Design Center.

• Studio comes embedded with a unit testing framework.

• Built-in support to deploy the applications on CloudHub.

• We can run the application in debug mode to troubleshoot it.

• We can export the documentation of flows.

• We can export the flows as diagrams.

• It creates Maven-based projects.

Anypoint Studio 7.x supports only Mule 4.x projects, as the project structure changed between Mule 3 and Mule 4; Anypoint Studio 7.x does not support Mule 3 projects, for which Anypoint Studio 6 is used. You can download and install the latest version of Anypoint Studio from the following link: Download Anypoint Studio & Mule | MuleSoft. You can also get older versions of Studio from the same link.

You should have Java installed on your system before downloading and installing Anypoint Studio, and the JAVA_HOME path should be set in your system variables. Studio is available for the Windows, Linux, and Mac operating systems. Once you click on the Download button, the zip file gets downloaded to your system.


Figure 5.1: Info screen to download Anypoint Studio

Once the zip file is downloaded, you can extract the folder and open it. You will see the file structure shown below. Click on Anypoint Studio.

Figure 5.2: Folder structure of Anypoint Studio download zip

Once you click on Anypoint Studio, it will ask for a workspace. By default, it takes the path where you have extracted your Anypoint Studio zip.


Figure 5.3: Workspace selection

By using the Browse button, you can change the workspace location. After selecting the workspace, click on the Launch button. Anypoint Studio opens with a welcome message when you load it for the first time.

Note: In case Java is not installed, Studio asks for the Java path when opening.

You are not limited to a single workspace. You can have multiple workspaces and switch from one workspace to another from within Studio, as shown below.

Figure 5.4: Switching workspace from Anypoint Studio


Studio Editor

The Studio editor helps you develop, design, and edit applications, properties, and configuration files. The Mule configuration, available in src/main/mule/, is an XML file. The editor contains three tabs.

Message Flow Tab

When you open the configuration file, it opens under the Message Flow tab with a visual representation of the application. We can drag and drop modules from the Mule Palette.

Figure 5.5: Visualizing Message Flow

Global Elements Tab

This tab contains the global configurations for the modules, which can be created, edited, and deleted here; for example, the APIkit Router configuration, HTTP Listener configuration, and so on. The properties files used by these global configurations are also configured in this tab. Any secured properties file used by the global configurations, as well as any environment-related keys, also needs to be defined under this tab.


Figure 5.6: Global Configuration

Configuration XML tab

This tab shows the XML of the Mule application. Changes applied here are automatically reflected in the Message Flow tab.

Figure 5.7: XML Configuration

Anypoint Studio displays the following views.


Package Explorer

Package Explorer displays all Mule projects with their corresponding folders and the files available in each respective Mule project.

Figure 5.8: Overview of Package Explorer

Mule Palette

The Mule Palette contains all the connectors and components that help in developing a Mule API. You can add more connectors or components from your Anypoint Exchange account by clicking on the Search in Exchange button.

Figure 5.9: Overview of Mule Palette


Once you click on Search in Exchange, you will get a popup to add your Anypoint account details. An Anypoint account can be added in two ways: by providing the credentials, or by using a custom domain account. The credentials are stored by Studio for 7 days; after that, you need to log in again if you want to download any modules from Anypoint Exchange.

Figure 5.10: User Login Screen

Once you have added the account, you can search for the modules you want to add and click on the Finish button. The corresponding jars of those modules are then downloaded to Studio and become visible in the Mule Palette, and you can use them in your project.

Figure 5.11: Module selection


Properties

The Properties window shows the properties of the module/component or connector selected on the canvas, with default values. We can edit the properties as per our requirements.

Figure 5.12: Properties window

Console

The Console view shows the console of the embedded Mule server within Anypoint Studio, which displays the logs and events occurring on the Mule server. While deploying the application, it shows the deployment logs; while running the application, it shows the application logs, and so on.

Figure 5.13: Console output

Problems

The Problems window shows the errors and problems related to the project, for example, if the project is missing any dependencies, has syntax errors, and so on.


Figure 5.14: Problems tab

Creating Mule applications from Studio

There are multiple ways to create a Mule application:

1. Create a new Mule application directly in Studio by clicking File > New Mule Project.

Figure 5.15: Create New Mule Project



Here, we have multiple options to create the project.



• Provide a RAML from the local machine.



• Provide a name and create an empty project.

• Download the RAML directly from Anypoint Design Center.

Refer to the following figure:


Figure 5.16: Providing a valid Mule Project Name

2. We can import an existing project by using the import option, either from the extracted zip folder or using the jar file. To import the application, click on File > Import. Once you select the option, a popup window is displayed with the options shown below; choose the option according to your requirement.

Refer to the following figure:


Figure 5.17: Importing Mule Project

Running Mule applications from Studio

We can run a Mule application by the following methods:

1. Right-click on the project name in Package Explorer. You will see a set of options that can be performed on that project. Select Run As, and you will see three options; select Mule Application. This starts the deployment of the Mule application to the embedded Mule runtime within Studio.

Refer to the following figure:

Figure 5.18: Run Mule Application


2. From the Studio editor’s Message Flow tab, right-click on the canvas, and you will see the Run project option along with the project name. This also deploys the Mule application to the embedded Mule runtime of Studio.

Refer to the following figure:

Figure 5.19: Run Mule Project from Message Flow Window

3. If we want to pass any runtime arguments to the Mule application, we can open the run configuration window, provide the respective arguments, and run the application. From the menu, we can click on the Run menu item to see the Run Configurations option.

Refer to the following figure:

Figure 5.20: Run Mule Application with Custom Configuration

4. Once you select the Run Configurations option, a popup window appears. Click on the New button to create a configuration to run the application.


Under the General tab, the Mule runtime version can be changed to the version you want to use to run the Mule application.

5. Using the Arguments tab, we can pass runtime arguments required for the Mule application.

6. We can create multiple configurations for different Mule applications based on changes in arguments or Mule runtime version.

7. We can also run multiple Mule applications using the same configuration.

Refer to the following figure:

Figure 5.21: Managing Configuration

Debugging Mule application

Issues can occur while running the application, such as a request not processing successfully, a null response, problems in the payload, or a wrongly generated response. We can debug such applications by adding breakpoints in the flow. Right-click on the processor/component where you want to add the breakpoint and select the Add Breakpoint option. If a breakpoint is no longer required, remove it by right-clicking on the processor/component and selecting the Remove Breakpoint option.


Refer to the following figure:

Figure 5.22: Debug by adding a breakpoint to a specific flow

Once breakpoints are added wherever required in the flow, run the Mule application using the Debug project option. The project will get deployed. Refer to the following figure:

Figure 5.23: Run and Debug Mule Application

If we need to pass any configuration while debugging the Mule application, we can select the Debug Configurations option, similar to the Run Configurations option discussed in the topic above.


Deploying Mule application from Studio to CloudHub

Once the Mule application is developed and tested by deploying it locally to the embedded Mule runtime in Anypoint Studio, we can deploy it to CloudHub. To do this, right-click on the project folder and select Anypoint Platform > Deploy to CloudHub. Refer to the following figure:

Figure 5.24: Deploy the application to CloudHub from Anypoint Studio

Export documentation from Studio

We can generate documentation from Anypoint Studio. Once we select the export option, Studio asks for the path where the documentation should be generated; once the path is provided, documentation-related HTML files get created for all the Mule configuration files. We can also export the diagrams from Anypoint Studio, which generates a PNG file with all the flows.


Refer to the following figure:

Figure 5.25: Export documentation

Perspectives

A perspective is simply a specified arrangement of the various views and editors of Studio. The default perspective is the Mule Design perspective. You can switch to other perspectives, such as the Debug perspective. If you choose the Other option in the picture below, you will get a popup with various additional arrangements for the editors. Refer to the following figure:

Figure 5.26: Change Perspective from Studio


Conclusion

In this chapter, we have learned how to use the Anypoint Studio IDE to develop Mule applications, along with its different features for developing, debugging, running, and deploying them. We have also gone through the various views available in Studio and how they make it simpler to develop Mule applications, and we have understood how to connect Studio with the Anypoint Platform and perform various operations directly on the platform.

Questions

1. What is an Integrated Development Environment?

2. What are the three features of an IDE?

3. How do you write a simple application using Anypoint Studio? Explain step-by-step.

4. What is a workspace?

5. Can you have multiple workspaces? If yes, how to switch between the workspaces?

Join our book's Discord space

Join the book's Discord workspace for the latest updates, offers, tech happenings around the world, new releases, and sessions with the authors: https://discord.bpbonline.com

Chapter 6

An Introduction to Data Weave

Introduction

DataWeave is the programming language used within Mule applications to transform the data that travels through them. DataWeave scripts act on data that is present in the Mule Event. For example, once the Mule application receives data from the source or target systems, DataWeave is applied to transform that data. DataWeave also enables you to read and parse data in one format and transform it into a different format, for example, reading a CSV file and converting the CSV data into XML or JSON format. The latest version of the DataWeave language is 2.0, which is supported by the Mule 4 runtime and above; Anypoint Studio 7 supports DataWeave 2.0, while earlier versions of Anypoint Studio support DataWeave 1.0.


Structure

In this chapter, we will discuss the following topics:

• Basic concept of Data Weave

• Precedence in Data Weave

• Debugging Data Weave

• Data Weave Library

Objectives

The aim of this chapter is to introduce the DataWeave scripting language. DataWeave is one of the most important ingredients in the recipe for your integration dish; if you want to build your career in the field of integration with MuleSoft, you should know DataWeave. This chapter not only provides the basics of DataWeave but also helps you understand its various important aspects, followed by where and when to use this scripting language.

Basic concept of Data Weave

DataWeave follows the same concepts that most languages follow: adding a header with the version, importing pre-built packages or modules, defining variables and constants, and so on. It also supports various data structures. As shown in the example below, the first line declares the version of the DataWeave language, and the second line declares the output format generated by the script. Various pre-built modules are available in the DataWeave language; we can import them into the script and use their built-in functions as required. The header and body of the script are separated by three dashes.

%dw 2.0
output application/json
import dw::core::Strings
---
payload


Data types

Data types specify what kind of data can be stored in a variable. The simple data types supported by DataWeave are String, Boolean, Number, and Date. It also supports composite and complex data types like Array, Object, Any, etc.
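As an illustrative sketch (the variable names are ours, not from the original text), a script can declare typed values using these data types:

```dataweave
%dw 2.0
output application/json
var name: String = "Mule"              // simple type: String
var active: Boolean = true             // simple type: Boolean
var scores: Array<Number> = [1, 2, 3]  // composite type: Array of Number
---
{ name: name, active: active, scores: scores }
```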

Data selectors

Data selectors navigate data structures such as objects and arrays to find data matching given criteria. Below are some of the main selectors that DataWeave supports.

Single-value selector

The single-value selector allows you to filter an object using a key. In the example below, you are looking in the incoming payload for a key whose name is userName.

%dw 2.0
output application/json
---
payload.userName

Multi-value selector

If we want to filter an array with matching key values, we need to use the multi-value selector.

%dw 2.0
output application/json
---
payload.*userName

Range selector

If we want to filter values based on a range from an array, we can go with the range selector. In the example below, the output will display the values from the array at indexes ranging from 0 to 1.


%dw 2.0
output application/json
---
payload[0 to 1]

Index selector

Using a single-value selector, we navigate by passing the key name; using an index selector, we navigate the array by passing the index value.

%dw 2.0
output application/json
---
payload[1]
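To see these selectors side by side, here is a small illustrative script (the sample data is ours, not from the book), with the expected values noted in comments:

```dataweave
%dw 2.0
output application/json
// Sample data standing in for the incoming payload
var order = { item: "pen", item: "book", total: 30 }
var users = [ { userName: "asha" }, { userName: "ravi" } ]
---
{
  single: order.item,        // "pen": single-value selector returns the first match
  multi: order.*item,        // ["pen", "book"]: multi-value selector returns all matches
  range: users[0 to 1],      // both elements: range selector on the array
  index: users[1].userName   // "ravi": index selector followed by a single-value selector
}
```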

Variables

Like other languages, DataWeave also supports variables. Variables are used to store data, including the results of calculations, so that it can be reused later in the script. We can define variables using the var keyword in the script. Variables can also be defined using the Set Variable component, which you can find in the Mule Palette. Refer to the following figure:

Figure 6.1: Search in Mule Palette


The syntax to define a variable in the script is:

var name = expression

or:

var name = "We can learn"
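A short illustrative script combining both ideas (the names are ours, not from the book):

```dataweave
%dw 2.0
output application/json
var greeting = "We can learn"  // a literal value
var total = 5 + 6              // the result of a calculation, reused below
---
{ message: greeting, sum: total }
```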

Operators

Like other languages, DataWeave supports various operators, including mathematical, logical, equality, and relational operators, along with append, update, and prepend operations on arrays. The following mathematical operators are supported by DataWeave:

Figure 6.2: Mathematical operators

The following logical operators are supported by DataWeave:

Figure 6.3: Logical Operator


The following relational operators are supported by DataWeave:

Figure 6.4: Relational Operator

The following operations can be performed on arrays:

Figure 6.5: Overview of Various Operators


Functions

A function is a set of instructions defined to perform a specific task. The main advantages of functions are reusability, modular code, and programs that become easier to manage and understand. Functions are defined in DataWeave using the fun keyword.

Syntax:

fun <function name>(parameters separated by commas) = <function body>

Example:

%dw 2.0
output application/json
fun toAdd(num1, num2) = num1 + num2
---
toAdd(5, 6)

toAdd is the name of the function, which takes num1 and num2 as parameters.

The ‘=’ sign marks the beginning of the code block that executes when the function is called.

Rules to define functions:

• A function name should be a valid identifier.

• A function name must begin with a letter (a-z), either lower or upper case.

• After the first letter, the name can contain any combination of letters, numbers, and underscores (_).

• The function name cannot match any reserved keyword of DataWeave.

• The function name should not contain any special characters.


Type constraints functions

Type-constrained functions are functions where you declare what type of parameters the function accepts and what type of response it returns.

Syntax:

fun <function name>(param1: Type, param2: Type): ReturnType = <function body>

Example:

%dw 2.0
output application/json
fun toAdd(num1: Number, num2: Number): Number = num1 + num2
---
toAdd(5, 6)

If we pass a value other than a number while calling the toAdd function, it will throw an error like the one below:

%dw 2.0
output application/json
fun toAdd(num1: Number, num2: Number): Number = num1 + num2
---
toAdd(5, "a")

Refer to the following figure:

Figure 6.6: Exception in Data Weave scripts


Optional parameters functions

We can assign a default value to a parameter of a function; if no value is passed for that parameter when the function is called, the default value is used. In the example below, only num1 is passed while calling the toAdd function, so the default value of num2 is used during the calculation.

%dw 2.0
output application/json
fun toAdd(num1: Number, num2 = 8): Number = num1 + num2
---
toAdd(5)

Output: 13

In the example below, the default value is overridden by the value passed during the call to toAdd.

%dw 2.0
output application/json
fun toAdd(num1: Number, num2 = 8): Number = num1 + num2
---
toAdd(5, 10)

Output: 15

Function overloading

DataWeave also supports function overloading. Function overloading means you can define any number of functions with the same name, taking different types or different numbers of arguments. In the example below, there are two functions defined with the name toAdd; one accepts two parameters, and the other accepts three.


Example:

%dw 2.0
output application/json
fun toAdd(num1: Number, num2 = 8): Number = num1 + num2
fun toAdd(num1, num2, num3) = num1 + num2 + num3
---
toAdd(6, 7, 8)

Output: 21

Creating custom modules

DataWeave has many built-in modules, like dw::Core, dw::Crypto, dw::core::Strings, etc. We can also create our own modules and include them in DataWeave scripts. Create a module by writing it in DataWeave and saving the file with the extension .dwl, then place these files in your application's src/main/resources/modules folder. Refer to the following figure:

Figure 6.7: Pictorial view of modules in Explorer


Once you have created the .dwl file, you can use it in your application's transformations by using the import keyword. Refer to the following figure:

Figure 6.8: Using the custom module in the Data Weave script
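As a sketch, a hypothetical module file and a script that imports it could look like this (the module name CommonUtils and its function are illustrative, not from the book):

```dataweave
// Hypothetical file: src/main/resources/modules/CommonUtils.dwl
%dw 2.0
fun toAdd(a: Number, b: Number): Number = a + b
```

A Transform Message script elsewhere in the application could then import and call it:

```dataweave
%dw 2.0
output application/json
import modules::CommonUtils
---
{ sum: CommonUtils::toAdd(2, 3) }
```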

Precedence in DataWeave

DataWeave expressions are compiled in a specific order; the result of compilation at one level can serve as input at a higher level, not at a lower level. The following table orders operations and functions from first compiled (1) to last compiled (10):

Figure 6.9: Pictorial Overview of Levels


The following figure, Figure 6.10, shows the Level of Operators or Functions:

Figure 6.10: Level of Operators or Functions

The following figure, Figure 6.11, shows the Conditional Functions:


Figure 6.11: Conditional Functions

Order of chained function calls

When functions are chained together in a DataWeave script, the order of processing is from first to last.

fun getDictionary(value: String, dictionary) =
    (readUrl("classpath://dict/" ++ dictionary, "application/json")
        filterObject ($ == value)
        pluck $$)[0]

In the above example, the readUrl function is executed first. The output of readUrl is the input to the filterObject function, and the output of filterObject is the input to the pluck function. You can force the evaluation order by inserting parentheses, ( and ), into the DataWeave expression. The order of the function calls is very important, as we need to make sure that the output of the first function is accepted by the second function.


Debugging DataWeave

DataWeave is the most used language in MuleSoft integrations. When a DataWeave transformation does not produce the response you expect, you can debug it. Provide a sample input payload, click on the Preview button, and check whether the output is as you expect; accordingly, you can update the DataWeave script until the response is as expected. Refer to the following figure:

Figure 6.12: Debugging Data Weave

Another way of debugging is to add a breakpoint on the transformation and run the Mule application in debug mode. Hit the API, and processing stops at the transformer. To understand how the payload is arriving, and which fields are populated and which are not, click on the payload in the Mule Debugger window. You can also click on the watch/expression button and try out DataWeave expressions to understand the response.


Refer to the following figure:

Figure 6.13: Debugging

Debugging DataWeave online

We don't always need Anypoint Studio to debug DataWeave. MuleSoft provides an online playground tool for trying out DataWeave scripts. You can use the link below to open the playground:

https://developer.mulesoft.com/learn/dataweave/playground

By default, the screen looks like the below:

Figure 6.14: Debugging Online


It has three main sections:

• Input Explorer is used to add the input payload against which you want to test the DataWeave script.

• Script is where we write the DataWeave script we want to test.

• Output is where the response generated by running the DataWeave script on the input payload is displayed.

Data Weave library

DataWeave scripts are very important in our Mule applications. We can identify common scripts that are useful across various Mule applications and across the organization, implement them, and create a DataWeave library. This library can be published to Anypoint Exchange so that it is available at the organization level; once published, users who belong to the organization can download it and use it in various Mule applications.

Prerequisites to create Data Weave library

To create a DataWeave library, we need:

• Visual Studio Code

• Java

• Maven

• Set environment variables for Java and Maven

Once Visual Studio Code is installed, install the DataWeave 2.0 extension (open the Extensions view with the shortcut Ctrl+Shift+X).


Refer to the following figure:

Figure 6.15: Visual Code Editor

Once the DataWeave extension is installed:

• Click on Create new Library project and fill in the details for the library you are creating in the specific boxes.

• Once we create the project, it adds a src folder and a pom.xml. Inside src, two subfolders get created: main and test.

• Add desired refusable functions in the main folder. We can add as many functions as required.
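For example, a reusable module placed under the main folder might look like the following (the file name MyUtils.dwl and both functions are hypothetical illustrations, not from the book):

```dataweave
%dw 2.0
import capitalize from dw::core::Strings

// Hypothetical reusable helper: builds a display name from two fields
fun fullName(first: String, last: String): String =
    capitalize(first) ++ " " ++ capitalize(last)

// Hypothetical helper: true when a value is neither null nor an empty string
fun isPresent(value: Any): Boolean =
    value != null and value != ""
```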


Refer to the following figure:

Figure 6.16: Working with Visual Code Editor

Publishing the Data Weave library

To publish the library, we need to make the following changes in pom.xml:

• Uncomment the Exchange repositories.

• Set the project groupId to your Anypoint organization ID.

• Run mvn deploy from the console.

Refer to the following figure:

Figure 6.17: Publishing code


Once it is published, you can check it in Anypoint Exchange. Refer to the following figure:

Figure 6.18: Viewing published code in Anypoint Exchange

To use the DataWeave library in Mule applications, copy the dependency snippet from Anypoint Exchange and add the dependency in your respective Mule application pom.xml. Refer to the following figure:

Figure 6.19: Data Weave Library


The following figure shows the dependency snippet:

Figure 6.20: Visualizing Dependency Snippets

Once the dependency is added, the library can be seen in Package Explorer, as shown below:

Figure 6.21: Viewing Dependency in Package Explorer

Use the library functions by importing them into the DataWeave component within your Mule application.
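As a sketch (the module path and function name are hypothetical and depend on how the library was published):

```dataweave
%dw 2.0
output application/json
// Hypothetical import of a function from the published library
import fullName from org::example::MyUtils
---
{
    employeeName: fullName(payload.firstName, payload.lastName)
}
```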


Figure 6.22: Using Custom Library in Mule Application

Conclusion

In this chapter, we talked about the DataWeave language and its importance. We went through the various modules available in DataWeave and discussed how to create custom modules for an application. We identified various ways of debugging DataWeave scripts and their advantages, including the online tool provided by MuleSoft. We also learned how a DataWeave library can be created and published to Exchange, and the advantages of such a library.

Questions

1. What is Data Weave?

2. How to create a custom Data Weave component?

3. What is a visual code editor?

4. How to work on Data Weave using Visual Code Editor?

5. Deploy and test Custom Component.

Join our book's Discord space

Join the book's Discord Workspace for Latest updates, Offers, Tech happenings around the world, New Release and Sessions with the Authors: https://discord.bpbonline.com

Chapter 7

Developing a Project – Connectors at a Glance

Introduction

An Application Programming Interface (API) is an intermediate software that allows two or more applications to talk to each other. It is how multiple applications, data, and devices in an organization can connect while sharing data and information. Mulesoft API-led connectivity provides a framework to enhance the API experience while ensuring security and ease of development. Developing an API project has its own path, which is nothing but API Lifecycle. The previous chapter was an introduction to Data Weave – where we discussed various aspects of writing a script using Data Weave to achieve the end goal.

Structure

In this chapter, we will discuss the following topics:

• API Lifecycle Steps

• Sample API

• Connectors


Objectives

In this chapter, we will see how a MuleSoft project/API can be implemented, what the API lifecycle is, and how Anypoint Platform components are used in this API lifecycle process.

API lifecycle

As mentioned in the introduction of this chapter, every API development project follows its own path, which is the API lifecycle. In general, every path has its own parts/phases, and the MuleSoft API lifecycle is no different. All these phases, when combined, complete the lifecycle. The MuleSoft API lifecycle has 10 phases, which we will discuss in detail in the coming sections.

Tip: Anypoint Platform has all the required tools to help us effectively develop MuleSoft APIs throughout all the phases. Refer to Chapter 2, RESTful World – an introduction, just in case you skipped that chapter.

MuleSoft API lifecycle has the following phases:

• Phase 1: Design

• Phase 2: Prototype

• Phase 3: Validate

• Phase 4: Develop

• Phase 5: Test

• Phase 6: Deploy

• Phase 7: Operate

• Phase 8: Publish

• Phase 9: Feedback

• Phase 10: Start Over

Tip: MuleSoft lifecycle management is broadly covered in Design -> Implementation -> Management.

In the coming sections, we will discuss each phase in detail. We will also discuss the relevant tool which can be used during the design/development of the API. Refer to the following figure:


Figure 7.1: Phases of MuleSoft API Lifecycle

Phase 1: Design

Anypoint Platform's API Designer is a web-based tool that can be used to define API specifications using RAML or OAS. As part of this specification, we can define the main resources or endpoints of the API, the request and response types, datatypes, error codes, and so on.

Note: Refer to Chapter 3, Anypoint Platform – an introduction, to learn more about the tool.

Apart from the API specification, one can define fragments and traits using API Designer; these are reusable assets. Below is a sample RAML from Design Center. Refer to the following figure:
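A minimal RAML specification of that shape could look like the following (the resource names and example fields are illustrative, not the book's exact spec):

```raml
#%RAML 1.0
title: Employee API
version: v1

/employee:
  /add:
    post:
      body:
        application/json:
          example: { "name": "Jane Doe", "email": "jane@example.com" }
      responses:
        201:
          body:
            application/json:
              example: { "status": "created" }
```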


Figure 7.2: Design Center

Phase 2: Prototype

Anypoint Flow Designer is a web-based tool for implementing APIs without using Studio or any IDE; however, it is recommended for small POCs only. Anypoint Studio is the recommended IDE for implementing the API by importing the API specification from the Anypoint Platform Design Center, which was created through API Designer. Anypoint Studio can be downloaded from the Anypoint Platform. It comes with a built-in Mule runtime, so one can deploy applications locally and test them through Postman. Once the API specification is imported into Studio, one can see the generated basic structure of the API, including the endpoints (resources) and the corresponding method flows with a common error handling mechanism, along with HTTP listeners and the APIkit router. The APIkit router routes the request based on the endpoint and the method selected while sending the request. It also validates the request based on the datatypes and security constraints defined as part of the RAML specification. Below is the sample API:


Figure 7.3: Implementing the API in Studio

Phase 3: Validate

Anypoint Design Center has a mocking service. This service allows users to exercise the API specification without implementing the API. The mocking service uses the example data provided in the API specification as the request and response. To access the mocking service, the API needs to be published in Anypoint Exchange. Once done, we can see the API, as shown below, with the details of the endpoints (resources) and corresponding methods like POST, GET, and so on, along with the request and response structure. In the below screenshot, /employee is an endpoint, and add is the sub-resource with the POST method:


Figure 7.4: Pictorial view of Prototyping (Mock services)

Anypoint Studio has the APIkit console. When developers import the RAML specification from Design Center to Studio, the APIkit console gets generated. The console interacts with the actual API implementation running in the local instance of the Mule runtime. The APIkit console flow looks like the one below in Studio. Once the API is deployed, the console can be accessed:

Figure 7.5: Pictorial view of APIKit Console


Phase 4: Develop

Anypoint Studio is used to develop and test the API implementation. Studio comes with a built-in Mule runtime instance, which can be used to deploy and test the APIs. Once the API specification is imported into Studio, flows are generated based on the resources/endpoints and methods written in the specification, as shown in the image below. The actual transformation logic and orchestration of calls can be implemented in the flows through Studio.

Figure 7.6: Pictorial view of Flows


Phase 5: Test

Anypoint Studio has a debugging mode, which focuses on debugging and testing Mule applications. Studio also comes with the built-in MUnit tool, which can be used to test the flows, much like JUnit for Java, NUnit for .NET, or pytest for Python. This approach is called Test-Driven Development. We will explain this concept in more depth in Chapter 10, Enforcing NFRs using Anypoint API Manager, an approach to test-driven development.

Phase 6: Deploy

Anypoint Runtime Manager is the place where Mule runtime servers are managed. These runtimes are hosted either in MuleSoft CloudHub or on the organization's on-premises servers. MuleSoft CloudHub is a fully managed cloud service for Mule runtimes. Mule applications can be deployed directly on CloudHub from Anypoint Studio. On the Runtime Manager console, we can see the application logs if the logs are not redirected to a third-party tool like Splunk. In the Runtime Manager settings, we can add alerts related to application deployment, CPU utilization, thresholds, and so on. Once the API is deployed successfully on Anypoint Runtime Manager, the status will be shown as Started. Refer to the following figure:

Figure 7.7: Checking App status in Runtime Manager


You can access the logs by clicking the API and selecting the logs as shown below:

Figure 7.8: Accessing App Logs

The Settings option can be used to stop and start the application, modify the Mule runtime version, increase the number of workers and the worker size, and so on. The Properties tab can be used to define properties such as mule.env and the key used to secure the properties. Refer to the following figure:

Figure 7.9: Settings – reset App status


Phase 7: Operate

Anypoint Platform API Manager is used to manage the APIs, secure them, and apply policies to restrict access to them. To do this, one should configure API Autodiscovery. API Autodiscovery allows an application deployed on a Mule runtime to connect with API Manager to download and manage the policies applied to that application. With this feature, a Mule application can also act as its own API proxy. To enable the autodiscovery configuration, we require the client ID and client secret assigned to the business organization in which the Mule application is going to be deployed. Once you enable the autodiscovery configuration, the API becomes active in API Manager, as shown below:

Figure 7.10: Pictorial view to locate API from API Manager
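In a Mule 4 application, autodiscovery is enabled with a small global element in the configuration XML; the sketch below is an assumption (the API ID and flow name are placeholders):

```xml
<!-- Hypothetical sketch: API Autodiscovery global element.
     apiId comes from API Manager; flowRef names the flow the API fronts. -->
<api-gateway:autodiscovery apiId="1234567"
                           flowRef="poc_employee-main"
                           doc:name="API Autodiscovery"/>
```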

Once the auto-discovery is enabled and API is active in API Manager, we can apply policies provided by MuleSoft by clicking the Automated Policies option. Anypoint Analytics is a feature available in API Manager, and it provides useful information on how APIs are accessed. Below are some of the details which analytics provides:

• Requests by Date: A line chart that shows the number of requests for all APIs in your organization.

• Requests by Location: A map chart that shows the number of requests for each country of origin.

• Requests by Application: A bar chart that shows the number of requests from each of the top five registered applications.


• Requests by Platform: A ring chart that shows the number of requests broken down by platform.

Figure 7.11: API Manager – Analytics

The following image is a pictorial overview of the analytics for the poc_employee API; the graph is plotted on the basis of Requests by Date, as shown in the image itself.

Figure 7.12: Graphical overview

The following figure shows a geographical overview of the requests for the poc_employee API.


Figure 7.13: Geographical overview

More visualizations can be presented in various forms; the following image shows the visualization of API requests by Application and by Platform.

Figure 7.14: Analysis of the App

Phase 8: Publish

Every API needs to be published on Anypoint Exchange at the organizational level so that it can be discovered and reused by various teams in the organization. Teams can go to Anypoint Exchange and search for APIs, as shown below. The APIs are shown under the organization in which they are published. One should select the organization and, from the drop-down, select REST APIs to check which APIs are available at the organization level, as shown below:


Figure 7.15: Anypoint Exchange

The preceding image shows the poc_employee app in Anypoint Exchange.

Phase 9: Feedback

Feedback is a continuous process within the lifecycle. Once the API specification is validated using the mock service, we get feedback from stakeholders; accordingly, we either move on to the develop and test phases or update the specification as per the feedback provided.

Phase 10: Start Over

The API lifecycle does not stop here; it goes into the next iteration based on feedback. For example, if the existing product management APIs are not capable of handling the customization of products, then based on this new requirement, a new set of API versions is created, which follows its own lifecycle.


MuleSoft Connectors at a glance

MuleSoft connectors enable Mule applications to connect to external systems. These connectors are used either to receive messages from or to send messages to external systems. MuleSoft connectors can connect to SaaS applications such as Salesforce and SAP, integrate with cloud infrastructure services like AWS and Azure, and use ODBC, JDBC, and other open protocols to connect to various databases. Connectors can be classified as either endpoint connectors or operation-based connectors. Endpoint connectors are based on one-way or two-way communication; operation-based connectors exchange data between two systems. Examples of endpoint connectors are HTTP, VM, FTP, SFTP, and JMS. Examples of operation-based connectors are SFDC, DB, and Web Service Consumer. MuleSoft connectors are classified into three categories:

• Select: Available to everyone; however, one needs at least a base Anypoint Platform subscription to access these connectors. For example: DB, SFDC, HTTP, JMS, etc.

• Premium: These are available to licensed users only.

• Custom connectors: These connectors are developed by the MuleSoft partner and developer community and are reviewed and approved by MuleSoft. These connectors can be available freely or for a fee; it is up to the team that developed the connector. Access and support for these connectors come from MuleSoft partners and not from MuleSoft.

Advantages of connectors

Let us look at some advantages of MuleSoft connectors:

• Reduces code complexity, because the Mule app can connect to the target system without writing any program.

• Makes code maintenance easier, because not all changes in the target system require changes in the Mule application; the connector configuration can be updated without updating other parts of the application.

• Proactively interprets the metadata of the target system, which makes data transformation easier.

• Simplifies authentication against the target system.


Process to create custom connectors

To develop custom connectors, the below prerequisites are required:

• Java JDK version 8.x

• Anypoint Studio 7.x

• Apache Maven 3.3.9 or higher

• Maven Home and Java Home system variables defined

We will discuss this in the following steps:

1. Go to the directory or Anypoint Studio workspace where you want to implement the connector code and execute the below command: mvn org.mule.extensions:mule-extensions-archetype-maven-plugin:generate



Once we execute the command, it will download and install the necessary packages as shown below:

Figure 7.16: Execution of maven command:

2. The command will prompt a couple of questions, like the ones below. By default, all the values are empty. Once all the values are provided, the project gets created, and all the dependencies are downloaded.

i. Enter the name of the extension (the name of the connector; a connector is called an extension in Mule 4)

ii. Enter the extension's groupId

iii. Enter the extension's artifactId

iv. Enter the extension's version

v. Enter the extension's main package

Refer to the following figure:

Figure 7.17: Providing connector detail

3. Once all the details are provided, it generates the connector-specific classes and packages with the provided connector information and completes the build.

Refer to the following figure:

Figure 7.18: Packages, classes generation



Once the above step is completed, open Anypoint Studio, open the project from the file system, and select the project directory created in Step 2. Refer to the following figure, which provides a pictorial overview of the connector project:


Figure 7.19: Overview of connector flow

4. Once the project is opened, you can see the required classes along with the Mule SDK annotations. Below are the classes that get generated:

i. A Java class created with the extension name given in Step 2. This class has an annotation named @Extension with the name of the connector. Another annotation, @Configurations, is used to point to the configuration class, which describes the connector configuration-related information.

The following figure provides an overview of the extension class:

Figure 7.20: Extension class



ii. The Configuration class defines the parameters that are required to be displayed in the connector configuration window. This class uses the @ConnectionProviders and @Operations annotations.

Refer to the following figure, which provides an overview of the configuration class:






Figure 7.21: Configuring extension class

iii. The next class, the connection provider, is used to manage the connection to the target system. This class must implement one of the three connection provider interfaces available in Mule, based on the requirement. The parameters required for the connection are defined in this class. You also need to override the connect, disconnect, and validate methods to provide the connection-specific behavior. The three connection providers of Mule are:

• CachedConnectionProvider

• PoolingConnectionProvider

• ConnectionProvider

By default, it uses PoolingConnectionProvider. In the connect method, the logic to connect to the system needs to be implemented; similarly, in the disconnect method, the logic to disconnect from the system needs to be implemented. Refer to the following figure, which visualizes a connection provider class:

Figure 7.22: Connection provider class


iv. The last class is the Operations class, where the operations related to a connector are defined. We can define any number of operations. All the methods defined as public are considered operations of the connector; non-public methods are not exposed as operations. We can inject configurations and connections into an operation using the @Config and @Connection annotations on method arguments.



Refer to the following figure visualize of Operations class:

Figure 7.23: Operations class



v. All the operation methods need to be implemented as per the connector requirement.

vi. Using the command mvn clean install, we can install the connector in the local Maven repository.

vii. To test the connector, create a Mule application and add the connector dependency to its pom.xml.

viii. Each time you run mvn clean install, change the version of the connector in pom.xml to reflect the new changes.

ix. Similarly, to pick up the new version in the Mule application, update the connector version there as well.
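As a sketch, the dependency added to the test application's pom.xml would look like the following (the groupId, artifactId, and version are whatever you entered when generating the project; the mule-plugin classifier is an assumption):

```xml
<!-- Hypothetical dependency on the locally installed custom connector -->
<dependency>
    <groupId>com.example</groupId>
    <artifactId>mule-test-connector</artifactId>
    <version>1.0.0</version>
    <classifier>mule-plugin</classifier>
</dependency>
```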

Steps to publish custom connectors on Anypoint Exchange

Once the custom connector development and testing are completed, the connector should be published at the Anypoint Platform organization level so that it can be reused by other teams.

1. Log in to Anypoint Platform.

2. Go to Access Management.


3. Click the organization name or business group.

4. Copy the organization ID.

5. Replace the groupId of the Maven project with the organization ID in the pom.xml of the Java project, for example (the groupId placeholder stands for your organization ID):

<modelVersion>4.0.0</modelVersion>
<groupId>YOUR_ORGANIZATION_ID</groupId>
<artifactId>mule-test-connector</artifactId>
<version>1.0.0</version>
<packaging>mule-extension</packaging>
<name>Test Extension</name>

6. Add the Maven facade as a repository in the distributionManagement section of the project's pom.xml, as shown in the below image. The organization ID is the one copied in Step 4.

The following image provides an overview of the Maven facade in the distribution management section.

Figure 7.24: Overview of maven façade – distribution management
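A sketch of that distributionManagement section, with the facade URL form and IDs as assumptions (substitute the organization ID copied in Step 4):

```xml
<!-- Hypothetical sketch of the Exchange Maven facade repository -->
<distributionManagement>
    <repository>
        <id>exchange-repository</id>
        <name>Exchange Repository</name>
        <url>https://maven.anypoint.mulesoft.com/api/v3/organizations/YOUR_ORG_ID/maven</url>
        <layout>default</layout>
    </repository>
</distributionManagement>
```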

7. Add Anypoint Platform credentials in the settings.xml file located in the Maven .m2 directory.

Figure 7.25: Anypoint platform Credentials
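A sketch of the corresponding settings.xml entry (the server id must match the repository id used in pom.xml; the credentials are placeholders):

```xml
<!-- Hypothetical sketch: Anypoint Platform credentials in ~/.m2/settings.xml -->
<settings>
    <servers>
        <server>
            <id>exchange-repository</id>
            <username>anypoint-username</username>
            <password>anypoint-password</password>
        </server>
    </servers>
</settings>
```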

8. Publish the connector by running the mvn deploy command at the command prompt. The published custom connector can then be used at the organization level.

If we want to publish the connector on Exchange under MuleSoft Assets, the process is different: the MuleSoft team gets involved and does a code review, and we need to give the MuleSoft team access to the repository.


The MuleSoft team has a specific checklist for best practices, security vulnerabilities, etc. Once all of them are passed, MuleSoft will publish the connector on the Anypoint Platform exchange. These connectors will be displayed as MuleSoft partner assets.

Conclusion

In this chapter, we learned the phases involved in an API lifecycle and which components of the Anypoint Platform are involved in each phase. We also looked briefly at connectors, their advantages, and how we can create custom connectors and publish them on Exchange. The next chapter covers error handling and debugging of a Mule project.

Questions

1. What is Mulesoft Anypoint Platform?

2. What is the life cycle of an API in Mulesoft?

3. Which are the different phases of Mulesoft API development?

4. What is the function of Mulesoft connectors?

5. How to build a custom connector?

Join our book's Discord space

Join the book's Discord Workspace for Latest updates, Offers, Tech happenings around the world, New Release and Sessions with the Authors: https://discord.bpbonline.com

Chapter 8

Error Handling and Debugging – An Insight Story

Introduction

In this chapter, we will see how errors can be handled in a MuleSoft project/API and how we can debug a Mule project using Anypoint Studio. Error handling and debugging are required when troubleshooting issues in an application. Errors need to be handled properly for every scenario so that the end user can understand the issue, know why the request could not be processed, and correct the request and send it for processing again.

Structure

In this chapter, we will discuss the following topics:

• Error handling

• Classification of errors

• Handling of errors

• Best practices to handle errors

• Debugging of Mule application


Objectives

In this chapter, you will understand the various ways of handling errors in Mule by following best practices. You will also learn how to debug a Mule application to troubleshoot application-related issues easily.

Error handling

Mule 4 introduces a formal error concept that is easier to use. Each component/connector in Mule declares the types of errors it can throw, so that one can identify potential errors during design. Mule errors have the following components:

• Description of the error

• Error type

• Cause of the error thrown by Java

• Error message

The error type consists of a namespace and an identifier separated by ":". Examples: HTTP:NOT_FOUND, FILE:NOT_FOUND. Mule connectors define their own namespace, and core runtime errors have an implicit one, so MULE:EXPRESSION and EXPRESSION are considered the same. Another important aspect of error types is that they might have a parent type. For example, HTTP:UNAUTHORIZED has MULE:CLIENT_SECURITY as its parent, which in turn has MULE:SECURITY as its parent. This establishes error types as specializations of more general ones: a handler for MULE:SECURITY will catch HTTP as well as OAuth security errors. We can consider that there are two types of errors:

• System-level errors: System-level errors occur at the system level, when no Mule message or business logic is involved, for example, connectivity issues to a system. We cannot handle these errors in a flow.

• Message-level errors: Message-level errors occur when a message is involved in the Mule flow.


Classification of errors

If we classify the errors as follows before starting the API implementation, they will be easier to handle during implementation:

• Business errors

• Technical errors

• APIkit router errors

• Unhandled errors

Raising and handling errors can be done in two ways:

• Automatic

• Manual

Automatic errors are those raised by Mule itself, for example, in the case of system connectivity failures. Manual errors are raised explicitly, typically using the Validation module; business errors are usually handled this way. The Raise Error component provided by Mule can also be used to raise these errors, and it allows us to customize the error type and description.

Handling of errors

The errors raised should be handled, and a proper error response should be sent to the consumer. The following are the different ways to handle errors:

• Try scope

• Raise error

• Validation module

• On-error propagate

• On-error continue

• Global error handler


Try scope

When you want to handle an error raised while executing a message processor, you can keep the message processor within a Try scope; for example, when we want to process a collection of objects without stopping in case of the failure of one object.
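A minimal sketch of a Try scope in Mule XML (the operation, configuration names, and error type are illustrative assumptions):

```xml
<!-- Hypothetical sketch: wrap a risky processor in a Try scope so a single
     failure does not stop the rest of the flow -->
<try doc:name="Try">
    <http:request method="POST" config-ref="HTTP_Request_config"
                  path="/employee" doc:name="Call downstream system"/>
    <error-handler>
        <on-error-continue type="HTTP:CONNECTIVITY">
            <logger level="WARN" message="Downstream call failed; continuing"/>
        </on-error-continue>
    </error-handler>
</try>
```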

Raise error

This component generates a Mule error on demand and allows you to customize the error description and type. Use this component only to raise core runtime errors, like MULE:CONNECTIVITY, or to raise custom error types. In the Raise Error component, the configuration defines the type of the error and the custom message you want to throw based on the functionality. In the following example, while adding an employee to the Salesforce system, we check whether the employee already exists; if the employee already exists, we raise an error with the description Employee exists and the custom-defined type EMP:Exists.

Figure 8.1: Raise Error Component


The following figure shows the Raise error component configuration:

Figure 8.2: Raise Error Component Configuration
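In XML form, the configuration in the example amounts to something like the following (a sketch; the attributes mirror the component's configuration window):

```xml
<!-- Hypothetical sketch of the Raise Error component from the example -->
<raise-error type="EMP:Exists" description="Employee exists"
             doc:name="Raise error"/>
```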

Validation module

The Validation component in Mule 4 can be used to verify that the contents of a message/payload in a Mule flow match specific criteria. If the message/payload does not meet the criteria, the module throws a validation error. Based on this error, you can provide a customized validation message that can be understood easily. In the following example, the Is true validator verifies whether the email and salary fields are null; only if they are not null will the message be processed further, otherwise a validation error is thrown.

Figure 8.3: Validator Component


The validator configuration is shown in the following figure:

Figure 8.4: Validator Configuration

There are a lot of validation operations available, such as verifying whether a string is empty, whether an email address is valid, or whether a collection is empty. We can find them in the Mule Palette in Anypoint Studio, as shown in the following figure:

Figure 8.5: Various Validations


On-error continue

On Error Continue can be used when an error occurs in a parent flow or sub-flow and you still want to continue the process without stopping. For example, while processing a collection in a For Each scope, an error might occur partway through while processing one record; we can use On Error Continue to handle the error for that record, for instance by sending an email or putting the record on a queue, and then continue processing the next record. In the following example, if the Salesforce query in the For Each loop finds that the employee already exists, the On Error Continue handler removes the employee and continues the process.

Figure 8.6: On Error Continue


The On Error Continue configuration is shown in the following figure:

Figure 8.7: On Error Continue Configuration

You can define the error type so that the error handler can be executed only when the error type matches with the one defined in the On Error Continue properties as shown in the preceding diagram.

On-error propagate

On Error Propagate can be used when you want to propagate the error to the calling parent flow or sub-flow. By default, errors are propagated to the parent flow even if we do not use On Error Propagate; however, On Error Propagate lets you customize the error before passing it through to the parent flow. In the following example, if any error occurs while retrieving the employees from Salesforce, On Error Propagate propagates the error to the parent flow that is calling the getAllEmployees sub-flow.

Figure 8.8: On Error Propagate


The following figure shows On Error Propagate configuration:

Figure 8.9: On Error Propagate Configuration
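The two handlers can be sketched side by side in a flow's error handler section (the error type, message, and payload below are illustrative):

```xml
<!-- Hypothetical sketch: a flow-level error handler combining both strategies -->
<error-handler>
    <!-- Swallow duplicate-record errors and keep processing -->
    <on-error-continue type="EMP:Exists">
        <logger level="INFO" message="Employee already exists; skipping"/>
    </on-error-continue>
    <!-- Customize and re-throw anything else to the calling flow -->
    <on-error-propagate type="ANY">
        <set-payload value="Unable to process request"/>
    </on-error-propagate>
</error-handler>
```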

Global error handler

In any Mule application, you can define a global error handler, inside which any number of internal error handlers can be defined. If no error handling mechanism is defined in a parent flow or sub-flow, the error is redirected to the global error handler.
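A sketch of how such a handler can be declared once and registered as the application default (the names are illustrative, and defaultErrorHandler-ref is assumed to be the registration mechanism):

```xml
<!-- Hypothetical sketch: a reusable global error handler set as the default -->
<error-handler name="global-error-handler">
    <on-error-propagate type="ANY">
        <logger level="ERROR" message="#[error.description]"/>
    </on-error-propagate>
</error-handler>

<configuration defaultErrorHandler-ref="global-error-handler"/>
```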

Best practices to define an error handler

Some of the best practices to define an error handler are as follows:

• Use custom error types, defining a type based on the business validation. It should have a proper description so that the end user has clarity on the error and can take appropriate action when resending the request.

• Handle all errors in one place, except for processing defined in an Async scope. An Async scope runs as a separate process on a separate thread; because of this, propagating errors is not possible for activities running in an Async scope.

• Use the Mule correlation ID to correlate the request and response; this also makes it easy to track the errors of a particular request.

• Always create a generic error handler that the APIkit router can use to route the error in case no matching error handler is available.


Debugging a Mule application

A Mule application can be run in debug mode by adding breakpoints and defining a debug configuration in Anypoint Studio. The application pauses at the breakpoint, and we can inspect the payload elements and variables to see how the values are processed during the flow. To debug an application in Anypoint Studio, we add breakpoints and run the application in debug mode.

1. Adding a breakpoint: Right-click on the component on which you want to add the breakpoint; you will see an option to add a breakpoint.

Figure 8.10: Adding breakpoint

2. We can add the debug configurations by opening the debug configuration window.


Figure 8.11: Open Debug Configuration

3. Once the Debug Configuration window opens, we can specify the properties, port, and so on.

Figure 8.12: Debug Configuration Window


4. After adding the required properties, click on Debug. The application runs in debug mode, and Studio prompts to open the Debug perspective, as shown below:

Figure 8.13: Prompt for Debug perspective

5. Click on Yes to open the perspective and run the application. The application stops where the breakpoint was added. We can evaluate the values and parameters passed to the request, as shown below.

Figure 8.14: Debugger parameters


We can also inspect the error object and its elements in case of any error during the debugging of the application.

Debugging a remote Mule application

Sometimes it is difficult to reproduce errors locally. Mule applications that are running remotely on a standalone server can be debugged from Studio. To do this, we need to import the same code that is running on the remote server into Studio and run it in debug mode with the appropriate debug configuration. Before that, we need to update the wrapper.conf file on the server by adding the following properties:

• wrapper.java.additional.80=-Dmule.env=dev

• wrapper.java.additional.81=-Dvault.key=Gary1234

• wrapper.java.additional.82=-Dmule.debug.enable=true

• wrapper.java.additional.83=-Dmule.debug.port=6767

• wrapper.java.additional.84=-Dmule.debug.suspend=false

Once the preceding properties are added in wrapper.conf, configure the debug configuration for the remote Mule application as follows:

Figure 8.15: Remote Debugging configurations


Click on the New button and the configuration window appears as follows. Enter the server and port details and click on Debug. The debugger starts, and you can verify whether the connection between Studio and the server is established on that port by running a command appropriate to the server type. Once the connection is established, you can call the API endpoint and debug the application.

Figure 8.16: Remote Debugging configurations

In addition to configuring the debugger for troubleshooting, we can update the logging level in the log4j properties files. If you are running the Mule application on a standalone server, the log4j.properties file can be found at MULE_HOME/conf/log4j.properties. You can enable logging for the specific package which you want to troubleshoot. In case of a CloudHub deployment, the logging levels can be configured from Runtime Manager, as shown in the following figure:


Figure 8.17: Logging Levels

Conclusion

In this chapter, we have seen how Mule application errors can be handled using various scopes and components. We have also understood how Mule validators are used to validate data without writing any code, simply by configuring the components with specific conditions. This chapter also covered how to debug a Mule application in different ways for different kinds of deployment modes. In the next chapter, we will discuss what Test-Driven Development is, how it works, and its advantages. We will also talk about the process of writing Munits for Mule flows.

Questions

1. What is the use of Try scope?

2. What are the different ways of handling errors in Mule?

3. What is the use of the validation module?

4. What is the difference between Error Continue and Error Propagate?

5. What is the advantage of defining a Global Error Handler?

Join our book's Discord space

Join the book's Discord Workspace for Latest updates, Offers, Tech happenings around the world, New Release and Sessions with the Authors: https://discord.bpbonline.com

Chapter 9

Test-Driven Development Using Munit

Introduction

Test-driven development is a software development process in which test cases are developed to validate the code and what it should do. Test cases for each function are written and tested, including the success and failure scenarios. If a test case fails, we need to update the code or write new code to make it bug-free. Test-driven development (TDD) starts by writing test cases for each functionality in an application. The TDD framework suggests that developers write new code only when an automated test fails, which helps avoid duplication.

Structure

In this chapter, we will discuss the following topics:

• What and why TDD

• MUnit – an introduction

• Writing tests and code coverage


What and why TDD

The idea behind the TDD framework is to write and correct the failing tests before writing new code. TDD does not mean doing lots of testing, or writing the test cases first and then building a system that passes the tests. The TDD cycle has the following steps:

1. Write a test

2. Make it run

3. Change the code to make it right, known as Refactor

4. Repeat the process

Advantages of using TDD

Several advantages of using TDD are listed below. Let us have a look at them:

• Provides constant feedback about the functions.

• Ensures that your application actually meets the requirements.

• Acts as a safety net against bugs.

• Quality of the code increases.

• Easy to identify breakages in the code when new code is added to existing functions.

• Suitable for Continuous Integration and Continuous Development.

Munit – an introduction

Munit is a framework from MuleSoft which helps in writing test cases for APIs developed in Mule. It is suitable for continuous integration and development processes, as it identifies any breakages in the code when various teams are working on the same API. Munits are fully integrated with Anypoint Studio, which allows designing, developing, and running them like any Mule application. Using Munit, we can:

• Create a test using Mule code

• Mock the processors

• Verify processor calls


• Spy any processor

• Generate code/flow coverage reports

Munit is divided into two submodules:

• Munit

• Munit Tools

Each module has its own dependencies. You need to add the below dependency for Munit:

    <dependency>
        <groupId>com.mulesoft.munit</groupId>
        <artifactId>munit-runner</artifactId>
        <version>2.3.11</version>
        <classifier>mule-plugin</classifier>
        <scope>test</scope>
    </dependency>

You need to add the below dependency for Munit Tools:

    <dependency>
        <groupId>com.mulesoft.munit</groupId>
        <artifactId>munit-tools</artifactId>
        <version>2.3.11</version>
        <classifier>mule-plugin</classifier>
        <scope>test</scope>
    </dependency>




Munit test suite

The base file that gets generated for Munit is the test suite file, located in the /src/test/munit directory of a Mule application project. Each test suite file is a collection of test cases and runs independently from other Munit test suites. In a Mule application, we can have any number of Munit test suite files. Munit Test is the basic processor of the Munit test suite. The Munit test is divided into three scopes:

• Behavior: In this scope, we define all the preconditions before executing the test logic.

• Execution: This scope defines the testing logic, which will wait for all the processes to finish before executing the next scope.

• Validation: This scope contains the validation of the results of the execution scope.

All these scopes are optional.
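The three scopes map directly onto the Munit XML. A minimal sketch of a test suite entry, where the flow name and the assertion are illustrative assumptions:

```xml
<munit:test name="getProductFlowTest" description="Verify getProductFlow returns OK">
    <munit:behavior>
        <!-- Preconditions and mocks go here -->
    </munit:behavior>
    <munit:execution>
        <!-- Invoke the flow under test -->
        <flow-ref name="getProductFlow"/>
    </munit:execution>
    <munit:validation>
        <munit-tools:assert-that expression="#[payload.status]"
                                 is="#[MunitTools::equalTo('OK')]"/>
    </munit:validation>
</munit:test>
```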

How to create a Munit test for a Mule flow

To create a Munit test for a flow, right-click on the flow for which the Munit test needs to be created, select Munit, and select the option to create a blank test for the flow.

Figure 9.1: Create a MUnit blank test


An empty Munit test flow gets created under the src/test/munit folder of the project, as shown below:

Figure 9.2: Empty MUnit files in Package explorer

Search for the Munit components in the Mule palette as shown below. We can see all the Munit components that are available.

Figure 9.3: Search in Mule Palette

Create input (expected request) and output (expected response) files under the src/test/resources folder. In the created folder, add mock request and response files, which can be passed to the actual flow when we run the Munit test case.


Figure 9.4: Adding code

In the behavior scope, you can specify the preconditions, such as which process needs to be executed and what kind of mock request or response is required for that process to execute. The Mock When event processor enables you to mock an event processor when it matches the defined name and attributes. You can click Pick a target processor to select the processor for which you are providing the preconditions, as shown below. Select the respective attributes of the processor and click on the OK button.

Figure 9.5: Target processor

Once you select the processor, add the mock payload by using the MunitTools methods. Add attributes if required by the processor.
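A sketch of a Mock When configuration that matches a processor by its doc:name attribute and returns a canned payload; the processor name, attribute value, and file path are illustrative:

```xml
<munit-tools:mock-when processor="http:request">
    <munit-tools:with-attributes>
        <munit-tools:with-attribute attributeName="doc:name" whereValue="Get Product"/>
    </munit-tools:with-attributes>
    <munit-tools:then-return>
        <!-- getResourceAsString reads a mock file from src/test/resources -->
        <munit-tools:payload
            value="#[MunitTools::getResourceAsString('sample_data/mock-response.json')]"
            mediaType="application/json"/>
    </munit-tools:then-return>
</munit-tools:mock-when>
```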


Figure 9.6: Adding attributes

This processor also enables you to define mock errors in your operations. We need to provide the TypeId and Cause parameters under the Error tab to trigger the errors. To dynamically mock a variable, use Then-Call, as shown below. Select the Then-Call check box and, from the drop-down, select the flow which you want to call after the configured processor's execution completes.

Figure 9.7: Decision on the call with Then Call


Under the Execution part, we point to the flow for which we are writing the Munit, using the Flow Reference component. Using the Set Event component, we set the required payload or Mule event for that flow. It has two properties, as mentioned below:

• Start with an empty event.

• Clone the incoming event and override values.

If the second option is set to true, it takes the event generated by the code. By default, the value is set to false. The payload in the Set Event processor has three attributes:

• Value: Defines the payload's message.

• Encoding: This is an optional field. Defines the encoding of the message.

• Media Type: This is an optional field. Defines the MIME type of the message.
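A sketch of a Set Event configuration supplying a JSON payload; the payload content is illustrative:

```xml
<munit:set-event cloneOriginalEvent="false">
    <!-- value is required; encoding and mediaType are optional -->
    <munit:payload value="#[{productId: '123'}]" mediaType="application/json"/>
</munit:set-event>
```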

Figure 9.8: Adding Payload

The Verify event processor is used to verify whether a processor has been called with specific attributes or not. If the verification is not successful, the Munit test fails. Verification can be done on any processor, even if we have not created mocks for it.


The attributes for Verify are:

• The Event processor attribute is configured with the processor whose calls you want to verify.

• The times attribute defines the verification as successful if the event processor was called exactly that number of times.

• The atMost attribute defines the verification as successful if the event processor was called at most that number of times.

• The atLeast attribute defines the verification as successful if the event processor was called at least that number of times.

Figure 9.9: Adding attribute values
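A sketch of a Verify call asserting that an HTTP request processor was invoked exactly once; the attribute value is illustrative:

```xml
<munit-tools:verify-call processor="http:request" times="1">
    <munit-tools:with-attributes>
        <munit-tools:with-attribute attributeName="doc:name" whereValue="Get Product"/>
    </munit-tools:with-attributes>
</munit-tools:verify-call>
```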

The Assert That event processor allows you to run assertions to validate the state of the Mule event's content. It uses DataWeave functions called Munit matchers. For example, to assert that the payload is equal to a certain value, use the equalTo matcher; to check that the payload is a null value, use the nullValue matcher. Matchers are classified into the below categories:

• String Matchers

• Comparable Matchers


• Core Matchers

• Resource Matchers

• Iterable and Map Matchers

Figure 9.10: Adding Matchers
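Two Assert That examples using the equalTo and nullValue matchers; the expressions are illustrative:

```xml
<munit-tools:assert-that expression="#[payload.status]" is="#[MunitTools::equalTo('OK')]"/>
<munit-tools:assert-that expression="#[payload.error]" is="#[MunitTools::nullValue()]"/>
```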

Among the various other processors provided by Munit Tools are the Storage event processors, which allow you to manage temporary storage while running the test cases. We have the below operations as part of this processor:

• Store

• Remove

• Retrieve

• Clear Stored Data

The Sleep processor allows you to pause the run of the test case for a certain period of time. Another important event processor is the Spy event processor. This allows you to spy on an event processor to see what happens before and after the processor is called. The Spy processor should be used together with Verify or Assert.
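A sketch of a Spy configuration asserting state before and after an HTTP request processor runs; the expressions are illustrative:

```xml
<munit-tools:spy processor="http:request">
    <munit-tools:before-call>
        <munit-tools:assert-that expression="#[payload.productId]"
                                 is="#[MunitTools::notNullValue()]"/>
    </munit-tools:before-call>
    <munit-tools:after-call>
        <munit-tools:assert-that expression="#[attributes.statusCode]"
                                 is="#[MunitTools::equalTo(200)]"/>
    </munit-tools:after-call>
</munit-tools:spy>
```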

Munit test recorder

Munit plays an important role in testing Mule applications, but writing Munits takes considerable time. Also, if there are any changes to flows and subflows, we need to update the Munits accordingly. MuleSoft has recently introduced an automatic way of writing Munits, called the Munit Test Recorder.


Below are the prerequisites for using the Munit Test Recorder:

• Anypoint Studio: 7.5 and later

• Mule Runtime: 4.3 and later

• Munit version: 2.2.5 and later

• Munit Studio plugin: 2.5.0 and later

Creating Munits using the test recorder

To create Munits using the Test Recorder, go to the flow for which you want to create the Munit and right-click.

Figure 9.11: MUnit using Test Recorder

Once the recording starts, the Mule project starts in the background to check connectivity with the systems configured in the Mule flow. If the connectivity fails, the recorder will not start. If the connection is successful, the recorder starts, and you get a pop-up with the Configure Test option disabled.

Figure 9.12: Ready to Configure Test


Start running the API through Postman to start the recording. Send the request two to three times. Once the recording captures the request and response, the Configure Test option gets enabled automatically.

Figure 9.13: Configuring Input

Click on the Configure Test option, and a screen appears where we can provide the name of the test suite and the test name.

Figure 9.14: Naming New Record Test

Provide the test name and file name details and click on Next. A new window opens where you configure the test-related input and output parameters, as shown in the below picture.


Figure 9.15: Provisioning Parameters

Once selected, click on the Next button, and you can see the test summary.

Figure 9.16: Finalizing




Once you click on the Finish button, the Munit test case for the flow gets generated.

Figure 9.17: Complete the Test

Limitations of using the test recorder

Below are some of the limitations where Munit test cases cannot be generated using the Munit Test Recorder:

• Munit tests cannot be generated for flows in which errors are raised inside the flow or handled with On Error Continue components.

• Munit tests cannot be generated if there is a processor before or inside a For Each processor.

• Munit tests cannot be generated if the data is modified as part of the iteration of a loop.


Conclusion

In this chapter, we have learned what Test-Driven Development is and the various advantages of following TDD. We have gone through the steps involved in the TDD approach, the different ways to create Munit test cases for flows, and the various scopes involved in Munit test cases. We have also gone through the various event processors and matchers that are useful in writing Munits, learned how to write Munits using the Munit Test Recorder, and seen the limitations of using the Test Recorder. In the next chapter, we will be talking about NFRs and Mule RPA. We will see what NFRs are, why they are important, and how to implement them. We will also discuss an overview of Mule RPA and see how to do automation using Mule RPA.

Questions

1. What is TDD?

2. Why should we use TDD?

3. How to create a MUnit test?

4. What is Record and Play concept?

5. How to create multiple Record and Play tests?

Join our book's Discord space

Join the book's Discord Workspace for Latest updates, Offers, Tech happenings around the world, New Release and Sessions with the Authors: https://discord.bpbonline.com

Chapter 10

An Overview of NFRs and Mule RPA Introduction

Non-Functional Requirements or NFRs are a set of requirements that specify the characteristics of a system that are not directly related to its functionality but are critical to its success. These requirements can include performance, scalability, availability, security, and reliability. MuleSoft RPA, or Robotic Process Automation, is a solution offered by MuleSoft that allows organizations to automate repetitive and rule-based tasks. It enables the integration of robotic automation capabilities into MuleSoft’s Anypoint Platform, providing a unified platform for managing both API-led connectivity and RPA.

Structure

In this chapter, we will discuss the following topics: • Overview of NFRs and importance • An overview of Mulesoft RPA • Importance of automation


Objectives

The aim of this chapter is to understand NFRs and how to enforce them using API Manager, the concept of automation, and MuleSoft RPA and its importance.

Overview of NFRs

NFRs stand for Non-Functional Requirements, which are the requirements that define the performance, security, scalability, reliability, and other aspects of the system’s behavior rather than its functionality. In the context of API Manager, NFRs are important as they define the quality of the API Manager service that the customers will experience. In the context of Anypoint API Manager, NFRs play a crucial role in ensuring that APIs meet the desired level of quality, especially in terms of performance and security. By defining NFRs upfront and measuring the API’s performance against these requirements, organizations can ensure that their APIs are always performing optimally and meeting the needs of their users. Note: To know more and to get started with Anypoint API Manager, refer to Chapter-3, Anypoint platform – an introduction.

Anypoint API Manager provides a set of tools and capabilities to help organizations manage NFRs for their APIs. This includes the ability to define NFRs as part of the API specification using RAML or OpenAPI, set performance and security policies based on these requirements, and monitor the API’s performance against these policies in real-time. For example, an organization may define an NFR for an API that specifies a response time of less than 200ms for 90% of all requests. Using Anypoint API Manager, the organization can create a performance policy based on this requirement and configure the API gateway to enforce this policy. The API manager will then monitor the API’s performance in real-time and alert the organization if it falls below the defined threshold.

Importance of NFRs

Below are some important NFRs with respect to API Manager:

• Performance: It defines the speed and responsiveness of the API Manager service. The API Manager must be capable of handling many API requests in real-time and providing a smooth experience to customers.


• Security: It defines the measures taken to protect the API Manager service and the APIs from security threats, such as unauthorized access, data breaches, and cyber-attacks. The API Manager must comply with industry-standard security protocols and implement advanced security features to ensure the security of the APIs.

• Scalability: It defines the ability of the API Manager service to handle a growing number of APIs and customers over time. The API Manager must be designed in such a way that it can be easily scaled up or down depending on demand.

• Reliability: It defines the ability of the API Manager service to operate consistently and predictably over time. The API Manager must be designed to avoid service downtime, data loss, or any other disruptions that may negatively affect customers.

Implementing NFRs using Anypoint API Manager

The implementation of non-functional requirements (NFRs) using API Manager can be done in the following step-by-step process:

• Identify the NFRs that need to be implemented: Determine the NFRs that are required for your API. Common NFRs include performance, security, reliability, scalability, and availability.

• Define the NFRs: Define the NFRs that you identified in the previous step. The definition should include specific metrics and targets for each NFR.

• Choose the appropriate policies: API Manager offers a range of policies that can be applied to APIs to implement NFRs. For example, the rate-limiting policy can be used to enforce performance NFRs by limiting the number of API calls that can be made within a given time frame.

• Apply the policies: Once you have identified the appropriate policies, apply them to your APIs through the API Manager console. You can apply policies at the API level, operation level, or endpoint level, depending on your requirements.

• Test the NFRs: Test the NFRs that you have implemented to ensure that they are working as expected. For example, if you have implemented a security policy, test the API to ensure that unauthorized access is prevented.

• Monitor the NFRs: Monitor the NFRs to ensure that they are being met over time. API Manager provides a range of monitoring tools that can be used to track metrics such as API usage, response times, and error rates.


The following Figure 10.1 depicts the implementation of non-functional requirements using policies:

Figure 10.1: API Manager: Policy screen

In the preceding Figure 10.1, we have two policies, Spike Control and Cross-Origin Resource Sharing. The first fulfills the rate-limiting requirement; in simple words, it controls the number of messages to be processed by an API. The second enables the API to be called via JavaScript XMLHttpRequest.

Use case: Mobile APIs

A company provides a REST API for a mobile application that allows users to check the availability of products and make purchases. The API is expected to handle a large number of requests, be highly available, and be secure.

• Define NFRs: In this step, we define the NFRs that need to be implemented for the API. Based on the use case, the following NFRs can be defined:

o Performance: The API should handle at least 1000 requests per second with a response time of less than 1 second.

o Availability: The API should have a minimum uptime of 99.99%.

o Security: The API should be protected from common web attacks such as SQL injection and cross-site scripting.

• Configure API Manager: In this step, we configure the API Manager to implement the NFRs.

o For performance, we can configure API Manager to monitor the API’s performance using tools such as Apache JMeter or Gatling. We can set up alerts to notify us when the API’s response time exceeds the threshold.


o For availability, we can use API Manager's built-in health checks to monitor the API's uptime. We can configure the health checks to send notifications when the API is down or not responding.

o For security, we can configure API Manager to enforce security policies such as OAuth 2.0 or JSON Web Token (JWT) authentication. We can also use API Manager's security features to scan the API for vulnerabilities.

• Test and monitor: In this step, we test and monitor the API to ensure that the NFRs are being implemented correctly. We can use tools such as Postman or SoapUI to test the API’s functionality and performance. We can also use API Manager’s built-in monitoring and analytics features to track the API’s usage and performance.

An overview of MuleSoft RPA

MuleSoft is an integration platform that enables enterprises to connect different systems and applications to automate business processes. MuleSoft Robotic Process Automation (RPA) is an extension of MuleSoft's capabilities that allows organizations to automate repetitive and manual tasks using software robots. MuleSoft RPA provides a visual interface for creating automation workflows, which can be triggered manually or scheduled to run at specific times. These workflows can interact with various systems and applications, such as web pages, desktop applications, and databases, to perform tasks such as data entry, form filling, and report generation.

Tip: MuleSoft RPA is a cloud-based solution that combines the power of MuleSoft's API-led connectivity with RPA capabilities. It allows organizations to automate end-to-end business processes by integrating APIs, data, and applications.

MuleSoft RPA also provides monitoring and management capabilities that allow organizations to track the performance of their robots and manage their resources effectively. With MuleSoft RPA, organizations can improve operational efficiency, reduce errors, and free up employees to focus on higher-value tasks. MuleSoft RPA can be put into action using the following steps:

• Sign in to MuleSoft RPA: To use MuleSoft RPA, you need to sign up for an Anypoint Platform account and request access to RPA. Once you have access, you can start creating RPA bots.


• Create an RPA bot: To create an RPA bot, you need to define the task you want to automate, record the steps of the task, and configure the bot. MuleSoft RPA uses a visual, drag-and-drop interface to make it easy to create bots without any coding.

• Test and deploy the bot: Once you have created the bot, you need to test it to ensure it works as expected. Once you are satisfied with the bot's performance, you can deploy it to production.

• Monitor and manage the bot: MuleSoft RPA provides a dashboard that allows you to monitor and manage your bots. You can view bot performance metrics, analyze bot logs, and make adjustments to the bot’s configuration as needed.

Importance of automation

MuleSoft RPA is an automation solution that can help organizations streamline their business processes, reduce errors, and increase productivity. Here are some examples of how MuleSoft RPA can be used to improve automation:

• Data Entry: Data entry is a time-consuming and error-prone task that is often outsourced to third-party vendors. With MuleSoft RPA, organizations can automate data entry tasks, reducing the time required to complete them and eliminating errors.

• Customer Service: Many customer service tasks can be automated using MuleSoft RPA. For example, organizations can use RPA to automatically respond to frequently asked questions or to route customer inquiries to the appropriate department.

• Financial Services: MuleSoft RPA can help automate many financial services tasks, such as account reconciliation, invoicing, and payment processing. By automating these tasks, organizations can reduce errors and improve processing times.

• HR Services: MuleSoft RPA can also be used to automate HR services tasks, such as onboarding new employees, processing employee benefits, and managing employee records.

• Supply Chain Management: MuleSoft RPA can be used to automate many tasks in supply chain management, such as inventory management, order processing, and shipment tracking. By automating these tasks, organizations can improve efficiency and reduce errors.


Conclusion

In this chapter, we have gone through the basic concepts of NFRs using API Manager. We discussed the importance of NFRs, including implementations of NFRs, and why we need them. Further, we discussed Mule RPA, why RPA, and how it helps in process automation. This is a newly introduced member of the Mule family. In the next chapter, we will see another member of the Anypoint Platform, CloudHub 2.0. It is not new, but we will discuss the updated version of CloudHub and its new features and updates. We will also visualize some of its areas.

Questions

1. What are NFRs?

2. How to implement NFRs using Anypoint API Manager?

3. What is MuleSoft RPA?

4. How to get started with MuleSoft RPA?

Join our book's Discord space

Join the book's Discord Workspace for Latest updates, Offers, Tech happenings around the world, New Release and Sessions with the Authors: https://discord.bpbonline.com

Chapter 11

CloudHub 2.0 – An Introduction Introduction

MuleSoft recently launched a major update of their fully managed Mule Runtime engine on Cloud Service. CloudHub 2.0 is an orchestrated, containerized integration platform as a service (iPaaS) where one can deploy APIs as lightweight containers in the cloud.

Structure

In this chapter, we will discuss the following topics:

• CloudHub 2.0 – at a glance

• Shared and Private Spaces in CloudHub 2.0

• Let us create Private Spaces in CloudHub 2.0

• What has changed in CloudHub 2.0 since CloudHub 1.0


Objectives

The aim of this chapter is to provide an overview of the new and updated CloudHub 2.0. There are a few changes in naming, and new features have been added in CloudHub 2.0. This chapter provides an understanding of the new features at a glance.

About CloudHub 2.0

CloudHub 2.0 is a powerful cloud-based integration platform that offers a range of features. Some of them have been listed below:

• Provides for deployments across 12 regions globally.

• Scalability of the infrastructure and built-in services up or down based on the volume of requests received by the API.

• Highly available and scalable through redundancy, intelligent healing, and zero-downtime updates.

• Reduced dependency on the infrastructure team.

• Provides isolation by running each Mule instance in a separate container.

• Sensitive information or data can be encrypted at rest and in transit within the Anypoint Platform.

• Built-in security policies, firewall controls, and restricted shell access.

Shared spaces in CloudHub 2.0

A shared space is an elastic cloud of resources where Mule instances run in a multi-tenant environment. In multi-tenant environments, multiple customers share the same application, running on the same operating system, on the same hardware, and with the same data storage mechanism. The distinction between the customers is achieved during application design; thus, customers do not share or see each other's data. CloudHub 2.0 provides one shared space in each supported region to which one can deploy applications. We can deploy apps in a shared space when:

• No isolation is required from the public cloud.

• APIs have no requirement to connect to an on-premises data center.


• Configuration of custom certificates is not required.

• The default cloudhub.io domain is going to be used by the APIs.

Private spaces in CloudHub 2.0

A CloudHub 2.0 private space is a virtual, private, and isolated logical space in which Mule applications run. This is like the VPC in CloudHub 1.0. We can deploy apps in a private space when we have the below requirements:

• Single tenancy for Mule apps.

• A VPN or gateway setup is required, where apps need to connect to an on-premises data center.

• Configuration of custom certificates.

• Private endpoints are required.

• Support for vanity domain names.

In each private space, we need to define:

• A private network, where we define the range of IP addresses (CIDR block) where apps can be deployed.

• The connection type from the private network to the external network, using Anypoint VPN or a transit gateway.

• TLS contexts, which define the domains that are available while deploying apps to the private space, and optionally enable mutual TLS.

• Firewall rules to allow and block inbound and outbound traffic.

• The environments and business groups that are allowed to deploy to the private space.

Creating private spaces

Let us have a look at how we can create a private space:

1. Login to Anypoint Platform.

2. Go to Runtime Manager.

3. Select Private Spaces.

4. Click on Create private space.


Refer to the following figure:

Figure 11.1: Pictorial overview to start with Private spaces from Runtime Manager

5. The below dialog box appears. Enter the name of the private space in it.

6. Once the name is entered, click on the Create button.

Refer to the following figure:

Figure 11.2: Name a Private space

7. After you click the Create button, the following screen appears.

8. You can create a private network or connections, as shown in the following screen:


Figure 11.3: Private spaces – network settings screen

9. With the help of your network team, create the private network by providing the CIDR block.
10. Select the region based on the corporate network that needs to be connected. The apps will run in that region.

Refer to the following figure:

Figure 11.4: Private spaces – region setting


11. Connections can also be created under the Network tab by clicking Create Connection. The connection types supported by private spaces are VPN and transit gateway.

Figure 11.5: Private spaces – Connection settings

12. Under the Domain & TLS tab, a TLS context can be created by clicking Create TLS Context, as shown in the following screen:

Figure 11.6: Domain & TLS tab

13. Upload the keystore file and truststore file.

14. Accept the default ciphers that are added, or remove selected ciphers.


Refer to the following figure:

Figure 11.7: TLS context – providing SSL configurations
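Step 14 is essentially list filtering: start from the default cipher suites and drop the ones your security policy disallows. The cipher names below are common TLS suite identifiers used purely as sample data, not the platform's actual default list:

```python
# Hypothetical sketch of step 14: start from the default cipher list and
# remove any ciphers the security team disallows.

default_ciphers = [
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_RSA_WITH_AES_128_CBC_SHA",
]

disallowed = {"TLS_RSA_WITH_AES_128_CBC_SHA"}  # e.g., policy forbids CBC suites

effective_ciphers = [c for c in default_ciphers if c not in disallowed]
print(effective_ciphers)
```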

15. Under the Firewall rules tab, define inbound traffic and outbound traffic rules.

Refer to the following figure:

Figure 11.8: Firewall rules
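Conceptually, each firewall rule in step 15 combines a CIDR block, a protocol, and a port range, and traffic is allowed if any rule matches. The sketch below illustrates this idea with Python's standard ipaddress module; it is not how CloudHub 2.0 implements rules internally, and the sample rules are invented:

```python
# Minimal sketch of inbound firewall rule evaluation: each rule allows a
# CIDR block, a protocol, and a port range.
import ipaddress

RULES = [
    {"cidr": "10.0.0.0/16", "protocol": "tcp", "ports": range(8081, 8083)},
    {"cidr": "0.0.0.0/0",   "protocol": "tcp", "ports": range(443, 444)},
]

def is_allowed(source_ip, protocol, port):
    src = ipaddress.ip_address(source_ip)
    return any(src in ipaddress.ip_network(r["cidr"])
               and protocol == r["protocol"]
               and port in r["ports"]
               for r in RULES)

print(is_allowed("10.0.12.7", "tcp", 8081))   # internal traffic to the app port
print(is_allowed("203.0.113.9", "tcp", 9000)) # blocked: port not opened
```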

16. Under the Environments tab, associate the environments and business groups. This is required to deploy the applications in private spaces. Refer to the following figure:


Figure 11.9: Environments tab

17. Using the Advanced tab, we can configure how the ingress load balancer handles HTTP requests, as well as log levels, response timeouts, and AWS service roles. Refer to the following figure:

Figure 11.10: Advanced tab

Terminology changes

CloudHub 2.0 introduced new features and concepts, such as shared and private spaces and the ability to run Mule apps in multi-tenant shared spaces. These features, along with the terminology changes, are summarized in Table 11.1:

CloudHub 1.0 | CloudHub 2.0
Virtual private cloud (VPC) | Private space
Dedicated load balancer | Ingress controller
Worker – EC2 server instance to run an API | Replica – container instance to run an API

Table 11.1: Terminology changes in CloudHub 2.0

Replicas

Replicas are dedicated instances running the Mule runtime, on which Mule APIs are deployed. Characteristics of replicas:

• Capacity: Each replica has a specific capacity to process data. Select the replica size at deployment time based on the Mule application's requirements.
• Isolation: Each replica runs in a separate container.

• Manageability: Each Replica is monitored independently.

• Locality: Each replica runs in a specific global region, such as the EU, US, or APAC.

Replicas come with different capacities for storage and memory. The smallest replica has 8 GB of storage and 0.1 vCore. You can scale replicas vertically by selecting one of the available vCore sizes. Replicas with less than 1 vCore provide limited CPU and I/O, which suits apps with smaller workloads, and can burst to higher CPU speeds for short periods. Replicas with 1 vCore or more provide consistent performance.
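The vertical-scaling choice described above can be sketched as picking the smallest size that covers the app's estimated need. The vCore size list below is an assumption based on commonly documented options; verify the sizes actually available in your subscription:

```python
# Sketch: pick the smallest replica size that satisfies an app's estimated
# vCore need. Sizes below 1 vCore share CPU and may burst; 1 vCore and
# above give consistent performance.

REPLICA_SIZES = [0.1, 0.2, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]  # vCores

def pick_replica_size(required_vcores):
    for size in REPLICA_SIZES:
        if size >= required_vcores:
            return size
    raise ValueError("workload exceeds the largest single replica; scale out")

print(pick_replica_size(0.15))  # small workload, bursting acceptable
print(pick_replica_size(1.2))   # needs consistent performance
```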

Clustering

Clustering provides load distribution, scalability, and reliability to applications deployed in CloudHub 2.0. It enables communication between multiple replicas and the sharing of data among them, as they share a common memory. The HTTP load balancing service distributes the load between these replicas.
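The load distribution described above can be pictured as the balancer handing each incoming request to the next replica in turn. A minimal round-robin sketch (illustrative only; the real balancing service is managed by the platform):

```python
# Conceptual round-robin distribution of requests across cluster replicas.
import itertools

replicas = ["replica-0", "replica-1", "replica-2"]
rr = itertools.cycle(replicas)

# Five incoming requests are spread evenly across the three replicas.
handled = [next(rr) for _ in range(5)]
print(handled)
```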


Availability and scalability

CloudHub 2.0 is designed to be highly available and scalable through intelligent healing, redundancy, and zero-downtime updates.

Disaster recovery of replicas

CloudHub 2.0 monitors replicas continuously for problems and provides a self-healing mechanism to recover from them:

• For a hardware failure, the platform migrates the applications to new replicas automatically.
• For an application crash, the platform recognizes the crash and redeploys the replica automatically.
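The self-healing behavior above can be summarized as a mapping from detected failure type to recovery action, as in this illustrative sketch (the action strings are our own wording, not platform output):

```python
# Sketch of the self-healing mechanism: map a detected failure type to the
# platform's recovery action.

RECOVERY_ACTIONS = {
    "hardware_failure": "migrate app to a new replica",
    "application_crash": "redeploy the replica",
}

def recover(failure_type):
    return RECOVERY_ACTIONS.get(failure_type, "no action: replica healthy")

print(recover("hardware_failure"))
print(recover("application_crash"))
```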

Redundancy

All CloudHub 2.0 platform services, from load balancing to the API layer, are always available in at least two data centers.

Zero-downtime updates

CloudHub 2.0 supports updating Mule applications at runtime so that end users of the API experience zero downtime.

Limitations of CloudHub 2.0

The following are some of the limitations of using CloudHub 2.0:

• CloudHub 1.0 VPC peering and direct connect have been deprecated in CloudHub 2.0; only VPN and transit gateway are supported.
• Only Mule 4.3.0 through 4.4.x are supported.
• Persistent queues are not supported.

• Overwriting JVM parameters and overriding the default JVM truststore with a custom truststore are not allowed.
• TLS 1.3 is supported.

• Application bursting depends on the resource usage of other applications deployed in the private space and is not guaranteed.


Conclusion

This chapter covered several aspects of CloudHub 2.0, including the iPaaS services it provides, a comparison between CloudHub 1.0 and CloudHub 2.0, the creation and configuration of private spaces, and the limitations of CloudHub 2.0. In the next chapter, we will learn about Universal API Management and Flex Gateway, and examine in detail how UAPIM, a set of MuleSoft product capabilities, provides full lifecycle management for APIs deployed anywhere, in any architecture or environment.

Questions

1. What is a replica?

2. How do you create a private space?

3. What is the major difference between a shared space and a private space?
4. Which versions of the Mule runtime does CloudHub 2.0 support?
5. Which TLS version is supported by CloudHub 2.0?

Join our book's Discord space

Join the book's Discord Workspace for Latest updates, Offers, Tech happenings around the world, New Release and Sessions with the Authors: https://discord.bpbonline.com

Chapter 12

Universal API Management – An Introduction

Introduction

APIs are instrumental in integration, enhancing connectivity and improving business agility. The objective of Universal API Management is to manage, govern, and discover APIs built and deployed anywhere. APIs developed in any language and deployed anywhere can be managed from the MuleSoft Anypoint Platform. To take advantage of Universal API Management, there is no need to buy the Mule runtime; an API Management subscription is sufficient to use its features.

Structure

In this chapter, we will discuss the following topics:

• Universal API Management – why we need it



o Discovery of API

o Flex Gateway – secure APIs


• Managing API

• API governance

• Marketplace experience

Objectives

The aim of this chapter is to provide insight into Universal API Management. By the end of this chapter, readers will be familiar with universal API management.

Why UAPIM

APIs are becoming an integral part of every organization's digital economy, and their adoption is growing rapidly as enterprises expand their business. However, it is becoming difficult for organizations to manage, govern, and operate these APIs efficiently while growing the business and facing competitive challenges. Universal API Management in Anypoint Platform is a collection of new and existing products that provides MuleSoft customers with flexible management and governance of APIs. It opens up full lifecycle management of APIs that are built and deployed anywhere. There are five aspects of UAPIM:

• Discover the APIs (with CLI)

• Secure the APIs using Flex Gateway

• Manage the APIs centrally using API Manager

• Enforce standards on APIs using API Governance
• Create API marketplace experiences

Discover APIs

UAPIM provides a Catalog CLI to discover and catalog any type of API definition, documentation, and associated metadata as part of an automated process. The Catalog CLI can be embedded into automation tools, such as CI/CD processes, to automatically trigger and publish API assets to Anypoint Exchange.


Using Catalog CLI, we can:

• Identify all APIs in the project structure and create a descriptor file. The descriptor file is used to publish the APIs to Anypoint Exchange.
• Identify new and updated APIs on a regular basis and update the descriptor file.
• Define conditional triggers in descriptor files to determine which APIs can and cannot be published, based on branches and tags.
• Set the asset version based on branches and tags.

• Publish the APIs by an automated process or using custom scripts.
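A rough sketch of the discovery step above: walk the project tree, find API definition files, and build a descriptor listing the assets to publish. The file patterns and descriptor fields below are illustrative assumptions, not the Catalog CLI's actual schema:

```python
# Illustrative sketch of API discovery: collect API definition files from a
# project and build a descriptor-like structure for publishing.
import pathlib
import tempfile

API_SPEC_PATTERNS = ("*.raml", "*.yaml", "*.json")  # assumed file conventions

def build_descriptor(project_root):
    root = pathlib.Path(project_root)
    specs = sorted(str(p.relative_to(root))
                   for pattern in API_SPEC_PATTERNS
                   for p in root.rglob(pattern))
    return {"projects": [{"main": spec, "publish": True} for spec in specs]}

# Demo on a throwaway project directory:
with tempfile.TemporaryDirectory() as tmp:
    pathlib.Path(tmp, "orders-api.raml").write_text("#%RAML 1.0")
    print(build_descriptor(tmp))
```

The real Catalog CLI works from a YAML descriptor and can be wired into a CI/CD pipeline so that discovery and publishing run on every commit.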

Securing the APIs using Flex Gateway

Security is an important aspect of APIs. UAPIM provides Flex Gateway to secure both Mule and non-Mule APIs. While the Mule API gateway runs on the Mule runtime, Flex Gateway is a customer-hosted, distributed entity installed on any cloud or data center. It receives commands from the Anypoint Platform control plane and secures the back-end APIs. Flex Gateway currently supports the following OS environments:

Figure 12.1: Supported OS by Flex


Flex Gateway can run in two modes:

• Connection mode: In this mode, the gateway is fully connected to the MuleSoft control plane, which allows centralized management and security. Anypoint API Manager enables full lifecycle management of APIs, and Anypoint Runtime Manager helps deploy the gateway.
• Local mode: The gateway operates independently, without connecting to the Anypoint control plane. All configuration and policy application is managed through configuration files, and CI/CD pipelines can be built for deployments.

To run Flex Gateway, the Manage Servers and Read Servers permissions are required in Runtime Manager. These permissions can be granted by an admin through Access Management. Go to Access Management, select Connected Apps, and enter the name of your Flex Gateway. Refer to the following figure:

Figure 12.2: Access Management


Select Add Scopes and choose Read Servers and Manage Servers from the list of scopes:

Figure 12.3: Select/Add Scope

Click on Next and select the organization and environment. Refer to the following figure:

Figure 12.4: Select Context


Once it is created, we can see the app under connected apps. Refer to the following figure:

Figure 12.5: Visualizing List of Connected Apps

In this chapter, we use a Docker environment as the example for adding a Flex Gateway. Perform the following steps on your Docker instance:

1. Create a directory:

mkdir flex

2. Create a conf folder under the flex directory.

cd flex



mkdir conf



cd conf

In Runtime Manager, to add a Flex Gateway, select Docker. It will show the steps you need to perform on your Docker instance to set up the Flex Gateway, as shown below:


Figure 12.6: Runtime Manager

As part of step 1, pull the Docker image by running the command shown in Runtime Manager:

Figure 12.7: Console status while pulling the Docker image


Once the Docker image pull is complete, register the gateway using the command shown below. Replace the gateway name with the one you gave while creating the app through the Connected Apps option in Access Management. Refer to the following figure:

Figure 12.8: Registering Gateway

For example:

docker run --entrypoint flexctl -w /registration \
  -v $(pwd):/registration mulesoft/flex-gateway:1.2.0 \
  register \
  --client-id=08f9d32846784c59ababbfa45aaf47d5 \
  --client-secret=596CF4da6c1747CA90ce79701C5C5b43 \
  --environment=3f9fcc52-71a6-4774-879d-22b9f750a831 \
  --organization=d5f09599-aa76-4a23-b3b7-3de9a318b79b \
  --connected=true \
  flex-gw

If the registration is successful, you will see the below output:

Figure 12.9: Successful message for registration


Check that registration.yaml has been created under the conf folder. Refer to the following figure:

Figure 12.10: Console -listing files in the conf folder

Start the gateway:

Figure 12.11: Starting Gateway

Once the gateway has started successfully, it will be visible under Runtime Manager -> Flex Gateways.

Managing the APIs using API Manager

With Anypoint Platform API Manager, everything can be managed in a single place. It leverages the runtime capabilities of Anypoint Flex Gateway, Anypoint Mule Gateway, and Anypoint Service Mesh, all of which enforce policies and collect and track analytics data. When you add an API, it offers the runtime options:

Figure 12.12: API Manager


You can apply policies by going through the Policies tab. Refer to the following figure:

Figure 12.13: Applying Policy

Enforce standards using API Governance

API Governance provides frictionless, always-on compliance checks with non-blocking enforcement at design time. It provides a central console to identify compliance and security risks, and it integrates checks with DevOps automation and CI/CD pipelines. A governance profile applies chosen governance rulesets to a selected group of APIs. It consists of the following:

• Name and description.

• Applicable governance rulesets.

• Filter criteria to identify the APIs to govern.
• Conformance notification rules.


Refer to the following figure:

Figure 12.14: API Governance Profile Creation

Provide the name and purpose of the profile and click the Next button. You will see a set of ruleset options. Refer to the following figure:

Figure 12.15: API Governance Profile screen


Rulesets are collections of rules and guidelines that are applied to selected APIs published in Exchange. Below are some of the rulesets provided by MuleSoft in Exchange by default:

Figure 12.16: Exchange rulesets screen

You can see all the rulesets published in Exchange on the API Governance Rulesets page. From the drop-down, you can choose rulesets provided by MuleSoft or rulesets defined by your organization. Choose as many rulesets as your requirements demand and click the Next button. Refer to the following figure:


Figure 12.17: Ruleset selection

Next, you will choose the filter criteria, such as the API type and lifecycle stage to which you want to apply the governance profile. Refer to the following figure:

Figure 12.18: Selecting Filter criteria

It also displays the APIs that satisfy the filter criteria, based on the values you have chosen:


Figure 12.19: APIs matching the filter criteria
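The filter-matching behavior can be sketched as follows; the attribute names and catalog entries below are illustrative, not the platform's actual data model:

```python
# Sketch of how a governance profile's filter criteria select APIs: a
# ruleset is applied only to exchange APIs whose attributes match every
# filter in the profile.

catalog = [
    {"name": "orders-api",  "type": "rest-api", "lifecycle": "stable"},
    {"name": "legacy-soap", "type": "soap-api", "lifecycle": "deprecated"},
    {"name": "quotes-api",  "type": "rest-api", "lifecycle": "development"},
]

def governed_apis(apis, criteria):
    return [a["name"] for a in apis
            if all(a.get(k) in allowed for k, allowed in criteria.items())]

profile_filter = {"type": {"rest-api"}, "lifecycle": {"stable", "development"}}
print(governed_apis(catalog, profile_filter))
```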

On the next screen, you can set the notification details, specifying whom to notify in case of any non-conformance. Refer to the following figure:

Figure 12.20: Setting up notifications

Once you click the Next button, you will get a screen to review your selections, with the option to create the governance profile. The API Governance dashboard looks like the figure below; from it, you can generate reports, see which profiles are at risk, and drill down to API-level reports as well. Refer to the following figure:


Figure 12.21: API Governance Dashboard

Dashboard report with APIs:

Figure 12.22: Listing APIs

Creating API marketplace experiences

Through Experience Hub, we can consume any API, Mule or non-Mule, and create a digital space. We can create marketplaces to showcase our APIs as products, let users consume them, and have full information about who is using what, and when.


Anypoint Exchange is a shared place where all assets are published for self-service and reuse by different parts of the organization. It can contain both public and private assets; private assets are available only within the organization. Refer to the following figure:

Figure 12.23: Anypoint Exchange to check the assets

To build a successful API, you should define it iteratively, getting feedback from developers on usability and functionality. That, in turn, requires giving developers ways to discover and exercise the APIs. Anypoint Exchange promotes the searchability and discoverability of APIs, and Anypoint Platform makes this easy with Anypoint Exchange portals. Portals allow developers to learn about an API and try it out, and developers can request access to any API in the Exchange portal.

Conclusion

In this chapter, we learned about Universal API Management, a set of MuleSoft product capabilities that together provide full lifecycle management for APIs deployed anywhere, in any architecture or environment. We also learned about Flex Gateway, its features, and its configuration modes. Finally, to enforce standards on APIs and reduce security and compliance risks, we learned about API Governance.


Questions

1. Define Universal API.

2. Why do we need Universal API?

3. How do you create an API under UAPIM?
4. What is API Governance?

5. How do you set up notifications for API Governance?
6. What is Experience Hub?





Index A Anypoint Access Management Anypoint Monitoring 50, 51 Anypoint MQ 49 Anypoint Visualizer 49, 50 API Manager 44, 45 Cloud Hub Deployment 46, 47 Data Gateway 48 Hybrid Deployment/PCE (Private Cloud Deployment) 47 Navigation Menu 45 Runtime Fabric 48 Runtime Manager 46 Secretes Manager 51, 52 Anypoint Design Center 33 API designer 33 Flow designer 33 opening 33-36 Anypoint Exchange 36 accessing 37-42 opening 36, 37 Anypoint Flow Designer 114 Anypoint Management Center 43 Access Management 43, 44 Anypoint platform 33 Anypoint Studio 69, 71 Configuration XML tab 75

console view 78 documentation, exporting from 85, 86 downloading 71 editor 74 Global Elements tab 74 installing 71-73 Message Flow tab 74 Mule application, creating from 79, 80 Mule application, deploying to CloudHub 85 Mule application, running from 81-83 Mule Palette 76, 77 Package Explorer 76 Perspective, changing from 86 problem window 78 Properties window 78 API based integration database API 4 operating system API 4 remote API 4 REST API 4 API-based integration 3 API Contract designing 56 using 57, 60 API governance 194 using 194-198 API Kit router 114


API-led contract 60-62 correlation, between layers 66 Experience Layer 62, 63 Process Layer 62, 64 System Layer 62, 64, 65 API lifecycle 112 deploy phase 118, 119 design phase 113 develop phase 117 feedback phase 123 operate phase 120-122 prototype phase 114 publish phase 122, 123 start over 123 test phase 118 validate phase 115, 116 API marketplace experiences creating 199, 200 API project developing 111 Application Programming Interface (API) 111 ASP.NET Web API 29 authentication 14 automatic errors 135

C CloudHub 2.0 173, 174 availability 182 clustering 181 disaster recovery of replica 182 features 174 limitations 182 Private Space 175 redundancy 182 Replicas 181 scalability 182 Shared Space 174 terminology changes 180 zero-downtime updates 182 code examples

in ASP.NET Web API 29 in Java/Spring MVC 28 in Node.js/Express.js 29 in Python/Flask 28 in React 27 custom modules, DataWeave creating 98, 99

D data selectors, DataWeave index selector 92 multi-value selector 91 range selector 91 single-value selector 91 Data Uniform Resource Identifier (URI) 16 components 17, 18 DataWeave 89, 90 custom modules, creating 98 data selectors 91 data types 91 debugging 102, 103 debugging online 103, 104 functions 95 operators 93, 94 precedence 99 variables 92 DataWeave library 104 prerequisites 104-106 publishing 106-108

E encryption 14 Enterprise Integration 2 Enterprise Service Bus (ESB) 3 Error Handler best practices, for defining 141 error handling 134, 135 on error continue 139, 140 on error propagate 140, 141 raise error 136 Try scope 136

validation module 137, 138 errors automatic errors 135 classification 135 manual errors 135 message level errors 134 system level errors 134 Express.js 29

F firewalls 14 Flask 28 Flex Gateway connection mode 188 for securing APIs 187-193 local mode 188 functions, DataWeave 95 optional parameters functions 97 overloading 97 rules, for definition 95 type constraint functions 96

G Global Error Handler defining 141

H HTTP methods 23 CONNECT 23, 24 HEAD 25, 26 OPTIONS 26 TRACE 24, 25 HTTP protocol overview 13 secured connection, creating with 14 working 13, 14 HTTPS 14 HTTP statuses 26 1xx 26 2xx 26




3xx 26 4xx 26 5xx 27 HTTP verbs 19 DELETE 22, 23 GET 19, 20 POST 20, 21 PUT 21, 22

I Integrated Development Environment (IDE) 69, 70 advantages 70 integration tools 7 selecting 8 iPaaS (Integration Platform as a Service) 4 architecture 5

J JSON Web Tokens (JWT) authentication 169

L licensing subscription, MuleSoft Gold 52 Platinum 52 Titanium 52

M manual errors 135 message level errors 134 Middleware 2 API-based integration 3, 4 Enterprise Service Bus (ESB) 3 iPaaS 4 types 2 Mule application creating, from Studio 79, 80 debugging 83, 84, 142-144 deploying to CloudHub, from Studio 85 documentation, exporting from Studio 85, 86


Perspective 86 remote Mule application, debugging 145, 146 running, from Studio 81-83 MuleSoft 5 advantages 6, 7 for integration 6 history 5 industry demand 6 licensing subscription 52 MuleSoft Connectors 124 advantages 124 categories 124 custom connectors, creating 125-129 custom connectors, publishing on Anypoint Platform exchange 129, 130 Mulesoft Robotic Process Automation (RPA) automation 170 examples 170 overview 169, 170 Munit 150 using 150, 151 Munit Test 152 creating, for Mule flow 152-158 Munit test recorder 158 limitations 162 Munits, creating with 159-162 prerequisites 159

N Non-Functional Requirements (NFRs) 165 implementing, using Anypoint Manager 167, 168 importance 166, 167 mobile API use case 168, 169 overview 166

O on error continue 139 on error propagate 140 OpenAPI Specification (OAS) 56 optional parameters functions 97

P precedence, in DataWeave 99, 100 chained function calls order 101 Private Space, CloudHub 2.0 175 creating 175-180

R Raise Error component 136 RAML (RESTful API Modeling Language) 56, 57 remote Mule application debugging 145, 146 REST architectural style 11 cacheability 12 client-server architecture 11 Code on Demand (Optional) 12 layered system 12 statelessness 11 uniform interface 12 RESTful 10 frameworks 12, 13 RESTful services API-first approach 27 framework-based approach 27 low-code approach 27 manual approach 27

S Secure Sockets Layer (SSL) 14 Service Oriented Architecture (SOA) 3 Shared Space, CloudHub 2.0 174 Spring MVC 28 system level errors 134

T test-driven development (TDD) 149, 150 advantages 150 Transport Layer Security (TLS) 14 type constraint functions 96


U Uniform Resource Citation (URC) 18 Uniform Resource Identifier (URI) 13 overview 15 Uniform Resource Locator (URL) 13, 15 components 15 Uniform Resource Name (URN) 16 Universal API Management 185, 186 API governance, using 194-199




API marketplace experiences, creating 199, 200 APIs, discovering with CLI 186, 187 APIs, managing with API Manager 193, 194 APIs, securing with Flex Gateway 187-193

V Validator Component 137